diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/KyMw0p9rWL/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/KyMw0p9rWL/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..809af009b820d26dbc5bf24032821dbdf4c3ea1b
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/KyMw0p9rWL/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,421 @@
+# Gaggle: Visual Analytics for Model Space Navigation
+
+Subhajit Das
+
+Georgia Institute of Technology
+
+Dylan Cashman
+
+Tufts University
+
+Remco Chang
+
+Tufts University
+
+Alex Endert
+
+Georgia Institute of Technology
+
+
+
+Figure 1: The Gaggle UI allows people to interactively navigate a model space to support interactive classification and ranking of data items. Users can create labels, drag and drop data items into various class labels to specify their subjective preferences to construct classification and ranking models.
+
+## Abstract
+
+Recent visual analytics systems make use of multiple machine learning models to better fit the data, as opposed to traditional systems that use a single, pre-defined model. However, while multi-model visual analytic systems can be effective, their added complexity raises usability concerns, as users are required to interact with the parameters of multiple models. Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from, making it difficult to navigate this space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space. By translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users find the best model for classification and ranking tasks. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users.
+
+Index Terms: Human-centered computing; Visualization; Classification and ranking model visualization; Mixed initiative systems
+
+## 1 INTRODUCTION
+
+Visual analytic (VA) techniques continue to leverage machine learning (ML) to provide people with effective systems for gaining insights into data [28]. Systems such as InterAxis [39] help domain experts combine their knowledge and reasoning skills about a dataset or domain with the computational prowess of machine learning. These systems are traditionally designed with a pre-defined single ML model that has a carefully chosen learning algorithm and hyperparameter setting. Various combinations of learning algorithms and hyperparameters give rise to a vast number of different model types (see Table 1). These different models constitute an exhaustive model space from which unique models can be sampled using a distinct combination of a learning algorithm and associated hyperparameters. For example, support vector machine (SVM) models have many options for kernel functions (i.e., linear, polynomial, or radial) and hyperparameters (e.g., $C$ (regularization parameter), $\gamma$ (kernel coefficient), etc.).
+
+When a model is correctly chosen for the phenomena, task, data distribution, or question users try to answer, existing VA techniques can effectively support users in exploration and analysis. However, in cases where the right model (or optimal model, as desired by the user) to use for a problem is not known a priori, one needs to navigate this model space to find a fitting model for the task or the problem. To combat this, recent VA systems use multiple ML models to support a diverse set of user tasks (e.g., regression, clustering, etc. $\left\lbrack {{15},{17},{22},{69}}\right\rbrack$ ). For example, the VA system Clustervision [42] allows users to inspect multiple clustering models and select one based on quality and preference. Similarly, Snowcat [16] allows inspecting multiple ML models across a diverse set of tasks, such as classification, regression, time-series forecasting, etc. However, these multi-model systems are often more complex to use compared to single-model alternatives (e.g., in Clustervision users need to be well-versed with cluster model metrics and the shown models). We refer to this complex combination of parameter and hyperparameter settings as the model space, as there are a large number of models that can be instantiated in this hyperdimensional space. Further, we refer to the interactive exploration of different parameter and hyperparameter combinations as model space navigation. Our definition of model space is related to the work by Brown et al. [14], who presented a tool called ModelSpace to analyze how model parameters change over time during data exploration.
+
+In this paper we present Gaggle, a visual analytic tool that provides the user experience of a single-model system, yet leverages multiple models to support data exploration. Gaggle constructs multiple classification and ranking models, and then, using a Bayesian optimization-based hyperparameter selection technique, automatically finds a classification and ranking model for users to inspect, thus simplifying the search for the model the user prefers. Furthermore, our technique utilizes simple user interactions for model space navigation to find the right model for the task. For example, users can drag data items into specific classes to record their preferences for the classification task. Similarly, users can demonstrate that specific data items should be higher or lower in rank within a class by dragging them on top of each other.
+
+Gaggle uses ML to help users in data exploration or data structuring tasks, e.g., grouping data into self-defined categories and ranking the members of each group by how representative they are of the category. For example, a professor may want to use a tool to help categorize new student applications into different sets, and then rank the students in each set. Similarly, a salesperson may want to cluster and rank potential clients in various groups. These problems fall under classification tasks in ML; however, unlike a conventional classification problem, our use case specifically supports interactive data exploration or data structuring, and the models constructed are not meant to predict labels for unseen data items in the future. Using this workflow, we expect our technique to guard against possible model overfitting incurred by adjusting the models to conform to specified user preferences. Furthermore, Gaggle addresses a common problem of datasets that have inadequate or missing ground truth labels $\left\lbrack {{54},{67},{80}}\right\rbrack$ . To resolve this problem, Gaggle allows users to iteratively define classes and add labels. On each iteration, users add labels to data items and then build a classifier.
+
+We conducted a qualitative user study of Gaggle to collect user feedback on the system design and usability. The results of our study indicate that users found the workflow in Gaggle intuitive, and they were able to perform classification and ranking tasks effectively. Further, users confirmed that Gaggle incorporated their feedback into the interactive model space navigation technique to find the right model for the task. Overall, the contributions of this paper include:
+
+- A model space navigation technique facilitated by Bayesian optimization-based hyperparameter tuning and an automated model selection approach.
+
+- Gaggle, a VA tool that allows interactive model space navigation supporting classification and ranking tasks using simple demonstration-based user interactions.
+
+- The results of a user study testing Gaggle's effectiveness to interactively build classifiers and ranking models.
+
+## 2 RELATED WORK
+
+### 2.1 Interactions in Visual Analytics
+
+Interactive model construction is a flourishing avenue of research. In general, the design of such systems makes use of both explicit user interactions, such as specifying parameters via graphical widgets (e.g., sliders), and implicit feedback, including demonstration-based interactions or eye movements, to provide guidance on model selection and steering. These types of systems build many kinds of models, including classification $\left\lbrack {7,{32}}\right\rbrack$ , interactive labeling $\left\lbrack {18}\right\rbrack$ , metric learning [15], decision trees [72], and dimension reduction [27, 39, 43]. For example, Jeong et al. presented iPCA to show how directly manipulating the weights of attributes via control panels helps people adjust principal component analysis [36]. Similarly, Amershi et al. presented an overview of interactive model building [4]. Our work differs from these works in two primary ways. First, our technique searches through multiple types of models (i.e., Random Forest models with various hyperparameter settings for classification and ranking tasks). Second, our tool interprets user interaction as feedback on the full hyperparameter space using Bayesian optimization, with hyperparameter tuning directly changing model behavior. Stumpf et al. conducted experiments to understand the interaction between users and machine learning-based systems [65]. They found that user feedback included suggestions for re-weighting features, proposing new features, relational features, and changes to the learning algorithm. They showed that user feedback has the potential to improve ML systems, but that learning algorithms need to be extended to assimilate this feedback [64].
+
+Interactive model steering can also be done via demonstration-based interaction. The core principle in these approaches is that users do not adjust the values of model parameters directly, but instead visually demonstrate partial results from which the models learn the parameters $\left\lbrack {{13},{15},{25} - {27},{31},{44}}\right\rbrack$ . For instance, Brown et al. showed how repositioning points in a scatterplot could be used to demonstrate an appropriate distance function [15]. This saves users the hassle of manipulating model hyperparameters directly to reach their goal. Similarly, Kim et al. presented InterAxis [39], which showed how users could drag data objects to the high and low locations on both axes of a scatterplot to help them interpret, define, and change axes with respect to a linear dimension reduction technique. Using this simple interaction, users can define constraints that inform the underlying model about how they are clustering the data. Wenskovitch and North used the concept of observation-level interaction in their work by having the user define clusters in the visualized dataset [76]. By visually interacting with data points, users are able to construct a projection and a clustering algorithm that incorporate their preferences. Prior work has shown benefits from directly manipulating visual glyphs to interact with visualizations, as opposed to control panels [11, 26, 38, 46, 56, 59].
+
+Active Learning (AL) appears similar to the techniques used in Gaggle, yet has a few distinct differences. AL is often used in supervised learning problems (e.g., classification) where adequate annotations are not available in the training data, so the algorithm selectively seeks labels for a set of informative training examples $\left\lbrack {{35},{48},{53},{81}}\right\rbrack$ . Standard AL processes assume that an oracle (usually a user) can provide accurate labels or annotations for any queried data sample $\left\lbrack {{19},{61},{70}}\right\rbrack$ . From a UI perspective, the work presented in this paper aligns closely with both AL and demonstration-based techniques. Gaggle's interaction design lets users manipulate the visual results of the models and interactively add labels to the training set to incrementally navigate the model space. However, the data items users label are not selected by the system, but by users during exploration.
+
+### 2.2 Multi-Model Visual Analytic Systems
+
+Current visual analytics systems focus on allowing the user to steer and interact with a single model type. However, recent work has explored the capability for a user to concurrently interact with multiple models. These systems implement a multi-model steering technique which facilitates the adjustment of model hyperparameters to incrementally construct models that are better suited to user goals. For instance, Das et al. showed interactive multi-model inspection and steering of multiple regression models [22]. Hypertuner [69] looked at tuning multiple machine learning models' hyperparameters. Xu et al. enabled user interactions with many models; rather than interacting with each model individually, users interacted with ensemble models through multiple coordinated contextual views [77]. Dingen et al. built RegressionExplorer, which allowed users to select subgroups and attributes (rows and columns) to build regression models. However, their technique does not weight the rows and columns; they are simply included or excluded [23]. Mühlbacher et al. showed a technique to rank variables and pairs of variables to support multiple regression models' trade-off analysis, model validation, and comparison [47]. HyperMoVal [32] addressed model validation of multiple regression models by visualizing model outputs through multiple 2D and 3D sub-projections of the n-dimensional function space [52].
+
+Table 1: User tasks, learning algorithms, hyperparameters, and parameters in Gaggle.
+
+
+| Tasks | Learning Algorithm | Hyperparameters | Parameters |
+| --- | --- | --- | --- |
+| Classification | Random Forest | Criteria, Max Depth, Min Samples | Attribute, Entropy, Information Gain |
+| Ranking | Ranking Random Forest | Criteria, Max Depth, Min Samples | Attribute, Entropy, Information Gain |
+
+Kwon et al. [42] demonstrated a technique to visually identify and select an appropriate cluster model from multiple clustering algorithms and parameter combinations. Clusterophile 2 [17] enabled users to explore different choices of clustering parameters and reason about clustering instances in relation to data dimensions. Similarly, StarSpire from Bradel et al. [13] showed how semantic interactions [26] can steer multiple text analytic models. While effective, their system is scoped to text analytics and handling text corpora at multiple levels of scale. Further, many of these systems target data scientists, while Gaggle is designed for users who are non-experts in ML. In addition, our work focuses on tabular data; it supports interactive navigation of a model space within two classes of models (classification and ranking) by tuning the hyperparameters of each of these model types.
+
+### 2.3 Human-Centered Machine Learning
+
+Human-Centered Machine Learning focuses on how to include people in ML processes $\left\lbrack {4 - 6,{58}}\right\rbrack$ . A related area of study is the modification of algorithms to account for human intent. Sacha et al. showed how visual analytic processes can allow interaction between automated algorithms and visualizations for effective data analysis [58]. They examined criteria for model evaluation in an interactive supervised learning system, and found that users evaluate models by conventional metrics, such as accuracy and cost, as well as novel criteria such as unexpectedness. Sun et al. developed Label-and-Learn, which allows users to interactively label data; their goal was to let users gauge a classifier's likely success and analyze the performance benefits of adding expert labels [66]. Many researchers have emphasized the knowledge generation process of users performing labeling tasks $\left\lbrack {9,{10},{24}}\right\rbrack$ . Ren et al. explained debugging multiple classifiers using an interactive tool called Squares [55].
+
+Holzinger et al. discussed how automatic machine learning methods are useful in numerous domains [33]. They note that these systems generally benefit from large static training sets, which ignores frequent use cases where extensive data generation would be prohibitively expensive or infeasible. In the case of smaller datasets or rare events, automatic machine learning suffers from insufficient training samples, a problem they claim can be successfully solved by interactive machine learning leveraging user input $\left\lbrack {{33},{34}}\right\rbrack$ . Crouser et al. further formalize this concept of computational models fostering human and machine collaboration [20].
+
+### 2.4 Model Space Navigation
+
+We looked at notable works from the literature that support model space navigation or visualization to better understand the current state of the art. Sedlmair et al. [60] defined a method for varying model parameters and generating a diverse range of model outputs for each combination of parameters. This technique, called visual parameter analysis, investigates the relationship between the input and the output within the described parameter space. Similarly, Pajer et al. [49] showed a visualization technique for the visual exploration of a weight space that ranks plausible solutions in the domain of multi-criteria decision making. However, this technique does not explicitly allow navigating models by adjusting hyperparameters, but instead varies the weightings of user-defined criteria. Boukhelifa et al. explored model simulations by reducing the model space, then presenting it in a SPLOM and linked views [12]. While Gaggle demonstrates implicit parameter space exploration, their approach implements explicit parameter space exploration.
+
+### 2.5 Automated Model Selection
+
+Model building requires selecting a model type, finding a suitable library, and then searching through the hyperparameter space for an optimal setting to fit the data. For non-experts, this task can amount to many iterations of trial and error. To combat this guessing game, non-experts can use automated model selection tools such as AutoWeka [41, 68], SigOpt [50], HyperOpt [8, 40], Google Cloud AutoML [45], and AUTO-SKLEARN [30]. These tools execute intelligent searches over the model and hyperparameter spaces, providing an optimal model for the given problem type and dataset. However, these tools are all based on optimizing an objective function that takes into account only quantifiable features or attributes, often ignoring user feedback. Instead, our work explores how to incorporate domain expertise into an automated model selection process supported by interactive navigation of the model space.
+
+## 3 USAGE SCENARIO
+
+Gaggle allows users to assign data points to classes and then partially order data items within the classes to demonstrate classification and ranking. Next, the system responds by constructing a model space and sampling multiple variants of classification and ranking models from it. Gaggle searches various sub-regions of the model space to automatically find an optimal classification and ranking model based on model performance metrics (explained later in the paper). Users can iterate by triggering Gaggle to construct new models. In this process, users provide feedback to the system through various forms of interaction (e.g., dragging rows, assigning new examples to the class labels, correcting previous labels, etc.). This process continues until the user is satisfied with the model, meaning that the automatically selected model has correctly learned the user's subjective knowledge and interpretation of the data (Figure 2). We present a usage scenario to demonstrate the type of problem being solved and the general workflow of the tool.
+
+Problem Space: Imagine Jonathan runs a sports camp for baseball players. He has years of experience in assessing the potential of players. He not only understands which data features are important but also has prior subjective knowledge about the players. Jonathan wants to categorize, and rank the players into various categories ("Best Players", "In-form Players", and "Struggling Players") based on their merit.
+
+User-Provided Labeling: Jonathan starts by importing the dataset of baseball players into Gaggle; the data is publicly available from OpenML [73]. The data contains 400 players (represented as rows) and 17 attributes of both categorical and quantitative types. The dataset does not have any ground truth labels. He sees the list of all the players in the Data Viewer (Figure 3-B). He creates the three classes mentioned above and drags players into these bins, or classes, to add labels. Knowing Carl Yastrzemski to be a very highly rated player, he places him in the "Best Players" class. Gaggle shows him recommendations of similar players for labeling (Figure 3).
+
+Automated Model Generation: Jonathan clicks the build model button in the Side Bar (see Figure 1-F). Based on Jonathan's interactions so far, Gaggle constructs the model space comprising multiple classification and ranking models. Gaggle runs its optimizer to navigate the model space based on Jonathan's interactions, automatically finding the best performing model out of an exhaustive search of over 200 random forest models. When the system responds, he finds that player Ernie Banks is misclassified: he belongs in the "In-form Players" class rather than "Struggling Players". He moves Ernie Banks and other similarly misclassified players to the correct classes and asks Gaggle to find an optimal model that takes his feedback into account.
+
+Gaggle updates its model space based on the feedback provided by Jonathan, and samples a new classification and ranking model. Gaggle updates the Data Viewer with the optimal model's output. Jonathan reviews the results to find that many of the previously misclassified players are now correctly labeled, and pins them to ensure they do not change labels in future iterations. Next, he looks at the Attribute Viewer (Figure 1-B) in search of players with high "batting average" and "home runs". He moves players that match his criteria into the respective labels (e.g., placing Sam West and Bill Madock in the "In-form Players" class). After Gaggle responds with a new optimal model, he verifies the results returned by the model in the interacted row visualization (Figure 1-C). He accepts the classification model and moves on to ranking the players within each class.
+
+Jonathan specifies examples for the ranking model by dragging players up or down within each class. After showing a set of relative ranking orderings between data instances (green highlights show interacted items), he iterates to check the full data set as ranked by Gaggle. He moves players Norm Cash and Walker Cooper to the top of the "Struggling Players" class, and moves player Hal Chase up within the "Best Players" class. Observing that some data items are not ranked as expected, he specifies further ranking examples and triggers Gaggle to construct a new ranking model. Finally, Jonathan finds that Gaggle ranked most of the players correctly. In this scenario, we showed how Gaggle helps a domain expert navigate the model space to classify and rank data items solely based on his prior subjective domain knowledge, following the iterative process shown in Figure 2.
+
+While this use case presented how Gaggle could be used by domain experts with a specific dataset, other datasets can also be used with Gaggle for classification and ranking. For example, the drug consumption dataset [29] contains personality measurements (e.g., neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), level of education, age, gender, ethnicity, etc. of 1885 respondents. Using this dataset, a narcotics expert or analyst can classify and rank the respondents into one of seven class categories (in relation to drug use), namely: "Never Used", "Used over a Decade Ago", "Used in Last Decade", "Used in Last Year", "Used in Last Month", "Used in Last Week", and "Used in Last Day". They can also create new classes and rank the respondents based on the attribute values and their prior knowledge. Similarly, Gaggle can support a loan officer using the credit card default payment dataset [79] to decide on the approval of loan applications by classifying and ranking the applicants.
+
+## 4 GAGGLE: SYSTEM DESCRIPTION
+
+The overarching goal driving the design of Gaggle is to let people interactively navigate a model space of classification and ranking models in a simple and usable way. More specifically, the design goals of Gaggle are:
+
+Enable interactive navigation of model space: Gaggle should allow the exploration and navigation of the hyper-dimensional model space for classifiers and ranking models.
+
+
+
+Figure 2: The gray box on the top shows the model space from which candidate models are sampled and ranked based on metrics derived from user interactions, ultimately selecting and showing a single model.
+
+Support direct manipulation of model outputs: Model outputs should be shown visually (lists for ranking models, and bins for classifiers). User feedback should directly adjust data item ranking or class membership, rather than adjusting model hyperparameters directly.
+
+Generalize user feedback across model types: User feedback used to navigate the model space should not be isolated to any specific type of model. For instance, visual feedback on the classification of data points might also adjust the ranking of data items.
+
+Leverage user interaction as training data: User feedback on data points should serve as training data for model creation. Data items interacted with serve as the training set, and performance is validated against the remaining data for both classification and ranking.
+
+### 4.1 User Interface
+
+Data Viewer: The main view of Gaggle is the Data Viewer, which shows the data items within each class (Figure 1-A). Users can add, remove, or rename classes at any point during data exploration and drag data instances to bins to assign labels. Users can re-order instances by dragging them higher or lower within a bin to specify the relative ranking order of items. Gaggle marks these instances with a green highlight (see Figure 1-G). When Gaggle samples models from the model space and finds an optimal model, the Data Viewer updates the class membership and ranking of items to reflect the model's output. Our design decision to show a single model at each iteration simplifies the user interface by removing a model comparison and selection step.
+
+Attribute Viewer: Users can hover over data items to see attribute details (Figure 1-B) on the right. Every quantitative attribute is shown as a glyph on a horizontal line. The position of the glyph on the horizontal line shows the value of the attribute in comparison to all the other data instances. The color encodes the instance's attribute quality in comparison to all other instances (i.e., green, yellow, and red encode high, mid, and low values, respectively).
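
The glyph encoding above can be sketched as a small helper. This is a minimal illustration, not Gaggle's implementation: the function name `attribute_glyph` is hypothetical, and the tercile cut-offs for green/yellow/red are an assumption, since the paper does not specify the exact thresholds.

```python
def attribute_glyph(value, all_values):
    # Glyph position: the attribute value normalized against the min/max
    # over all data instances, so 0.0 is the lowest and 1.0 the highest.
    lo, hi = min(all_values), max(all_values)
    pos = 0.0 if hi == lo else (value - lo) / (hi - lo)
    # Color: assumed tercile encoding (high = green, mid = yellow, low = red).
    if pos >= 2.0 / 3.0:
        color = "green"
    elif pos >= 1.0 / 3.0:
        color = "yellow"
    else:
        color = "red"
    return pos, color
```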
+
+Data Recommendations: When users drag data instances to different bins, Gaggle recommends similar data instances (found using a cosine distance metric), which can also be added (Figures 3 and 1-H). This expedites class assignment during the data exploration process. The similarity is computed from the total distance ${D}_{a}$ , summed over each attribute ${d}_{i}$ , between the moved data instance and the other instances in the data. Users can accept or ignore these recommendations.
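
A minimal sketch of this recommendation step over plain feature vectors; the helper names `cosine_distance` and `recommend_similar` are illustrative, not Gaggle's actual API.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def recommend_similar(moved_item, candidates, k=3):
    # Rank unlabeled candidates by cosine distance to the dragged item
    # and return the k closest ones as labeling recommendations.
    return sorted(candidates,
                  key=lambda c: cosine_distance(moved_item, c))[:k]
```

The user is free to accept or ignore the returned items, mirroring the accept/ignore choice in the dialog of Figure 3.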
+
+Interacted Row Visualization: This view (Figure 1-C) shows the list of all interacted data items. Color encoding marks correct label matches (shown in blue) and incorrect label matches (shown in pink). The same encoding applies to ranking (blue for a correctly predicted rank order, pink otherwise). The view also shows how many constraints were correctly predicted.
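
The per-constraint checks behind this view can be sketched as below. This is a hypothetical reconstruction (function and variable names are ours): each user-assigned label is compared with the model's prediction, and each user-demonstrated pair "a above b" is checked against the model's rank order.

```python
def constraint_satisfaction(user_labels, model_labels, user_pairs, model_rank):
    # Label constraints: item -> class the user assigned vs. model prediction.
    label_ok = {i: model_labels[i] == c for i, c in user_labels.items()}
    # Ranking constraints: (a, b) means the user placed a above b; satisfied
    # when the model ranks a before b (lower rank index).
    rank_ok = {(a, b): model_rank[a] < model_rank[b] for a, b in user_pairs}
    total = len(label_ok) + len(rank_ok)
    correct = sum(label_ok.values()) + sum(rank_ok.values())
    return label_ok, rank_ok, correct / total
```

Items with a `True` entry would be drawn in blue, `False` entries in pink, and the final fraction corresponds to the "how many constraints were correctly predicted" readout.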
+
+
+
+Figure 3: Gaggle's recommendation dialog box.
+
+User Interactions: Gaggle lets users give feedback that guides which models are sampled in the next iteration, adjusts model parameters and hyperparameters accordingly, and allows users to explore data and gain insight.
+
+- Assign class labels: Users can reassign classes by dragging data items from one class to another. They can also add or remove classes. These interactions provide more constraints to steer the hyperparameters of the classifier.
+
+- Reorder items within classes: Users can reorder data items within classes (see Figure 1-G) to change their ranking. This interaction helps users exemplify their subjective order of data instances within classes. This feedback is incorporated as training data for the ranking model.
+
+- Pin data items: When sure of the class assignment of a data item, the user can pin it to the respective class bin (see Figure 1-I). This ensures that the data item will be assigned to that class in every subsequent iteration.
+
+- Constrain classifier: When satisfied with the classifier, users can constrain the last best classifier. This allows users to move on to showing ranking examples, letting Gaggle focus on improving the ranking model (Figure 1-C).
+
+## 5 TECHNIQUE
+
+Models: We define a model as a function $f : \mathcal{X} \mapsto \mathcal{Y}$ , mapping from the input space $\mathcal{X}$ to the prediction space $\mathcal{Y}$ . We are concerned primarily with semi-supervised learning models, in which we are provided with a partially labeled or unlabeled training set ${D}_{\text{train }} =$ ${D}_{U} \cup {D}_{L}$ , where ${D}_{L}$ is labeled data and ${D}_{U}$ is unlabeled data such that if ${d}_{i} \in {D}_{L}$ , then ${d}_{i} = \left( {{\mathbf{x}}_{\mathbf{i}},{y}_{i}}\right)$ , and if ${d}_{i} \in {D}_{U}$ , then ${d}_{i} = \left( {\mathbf{x}}_{\mathbf{i}}\right)$ , where ${\mathbf{x}}_{\mathbf{i}}$ are features and ${y}_{i}$ is a label. A learning algorithm $A$ maps a training set ${D}_{\text{train }}$ to a model $f$ by searching through a parameter space. A model is described by its parameters $\theta$ , while a learning algorithm is described by its hyperparameters $\lambda$ . A model parameter is internal to a model, where its value can be estimated from the data, while model hyperparameters are external to the model.
+
+Model Space: Varying the learning algorithm and the hyperparameters creates a diverse set of new models. This space of every possible combination of learning algorithms and hyperparameters forms a high-dimensional model space. Finding an optimal model in this high-dimensional, infinitely large space without any computational guidance or statistical methods is akin to finding a needle in a haystack. Conventionally, ML practitioners and developers navigate the model space using data science principles to test various candidate models. They search for regions (sub-spaces of the model space) containing optimal models. For instance, one can navigate the model space by randomly sampling new models and testing their performance in terms of accuracy (or other defined metrics) to find a model that best suits the task.
+
+Gaggle constructs a model space by sampling multiple random forest models, each taking a predefined list of hyperparameters (criteria, max depth, and min samples to set a node as a leaf) within a set domain range (see Table 1). While Gaggle uses a random forest model for the system evaluation, the general optimization method is designed to work with other learning algorithms and hyperparameter combinations as well. For instance, Gaggle's optimizer can sample multiple SVM models using a set of chosen hyperparameters such as $C$ (regularization parameter) and $\gamma$ (kernel coefficient).
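
The initial random sampling of candidate models can be sketched as drawing hyperparameter combinations from the domain ranges of Table 1. The concrete ranges below are assumptions for illustration only; each sampled dictionary identifies one candidate random forest model in the model space.

```python
import random

# Hypothetical domain ranges mirroring the hyperparameters in Table 1.
DOMAIN = {
    "criterion": ["gini", "entropy"],
    "max_depth": list(range(2, 51)),
    "min_samples_leaf": list(range(1, 21)),
}

def sample_model_space(n, seed=0):
    # Draw n random hyperparameter combinations; each one identifies a
    # candidate model M_i that the optimizer can later train and score.
    rng = random.Random(seed)
    return [{name: rng.choice(values) for name, values in DOMAIN.items()}
            for _ in range(n)]
```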
+
+
+
Figure 4: The model ranking method uses a Bayesian optimization solver to rank candidate models from the model space.
+
+### 5.1 Interactive model space navigation
+
To facilitate interactive user feedback and navigation of the model space, Gaggle uses a Bayesian optimization technique [51, 62]. Navigation is initiated by randomly sampling models from the model space, as shown in Figure 5. Gaggle seeds the optimization technique by providing: a learning algorithm $A$, a domain range $D_r$ for each hyperparameter, and the total number of models to sample $n$, for both classification and ranking models. Gaggle's Bayesian optimization module randomly picks a hyperparameter combination $hp_1$, $hp_2$, and $hp_3$. For example, a model $M_1$ can be sampled by providing "learning algorithm" = random forest, "criteria type" = gini, "max-depth" = 30, and "min-samples-leaf" = 12. Likewise, the Bayesian optimization module samples models $M_1, M_2, M_3, \ldots, M_n$. For each model, it also computes a score $S_i$ based on custom-defined model performance metrics inferred from user interactions.
+
The Bayesian optimization module uses a Gaussian process to find a point of expected improvement in the search space (of hyperparameter values) over current observations. For example, a current observation could be a machine learning model, and the metric used to evaluate expected improvement could be its precision score or cross-validation score. Using this technique, the optimization process ensures that consistently better models are sampled, by finding regions of the model space where better-performing models are more likely to be found (see Figure 2). Next, the Bayesian optimization module finds the model with the best score $S_i$ (see Figure 5). Gaggle performs this process for both classification and ranking models, driven by user-defined performance metrics.
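The Gaussian-process loop with an expected-improvement acquisition can be sketched as follows. This is a simplified one-dimensional illustration, with a toy objective standing in for a model-performance metric as a function of, say, max-depth; it is not Gaggle's actual implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy objective standing in for a performance metric (peaks at depth 7).
def objective(x):
    return 1.0 - (x - 7.0) ** 2 / 50.0

def expected_improvement(gp, cand, best):
    """EI acquisition: prefer points with high predicted mean or high uncertainty."""
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(2, 50, size=(3, 1))            # initial random samples
y = objective(X.ravel())
for _ in range(10):                            # Bayesian optimization loop
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.linspace(2, 50, 200).reshape(-1, 1)
    nxt = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
    X = np.vstack([X, [nxt]])
    y = np.append(y, objective(nxt[0]))
best_x = float(X[np.argmax(y), 0])             # hyperparameter of the best model
```

Each iteration fits a Gaussian process to the observed (hyperparameter, score) pairs and samples where improvement over the current best is most expected, which is how consistently better models surface over time.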
+
Classification Model Technique: Gaggle follows an unconventional classifier training pipeline. As Gaggle is designed to help users explore data using ML, not to make predictions on unseen data items, the conventional train/test split does not apply. Gaggle begins with an unlabeled dataset. As the user interacts with an input dataset $D$ of $n$ items, labels are added; e.g., if the user interacts with $e$ data items, those items become the training set for the classification model. The remaining $n - e$ instances are used as a test set, to be assigned labels by the trained model. If $e$ is lower than a threshold value $t$, then Gaggle automatically finds $s$ data instances similar to the interacted items and places them in the training set along with the interacted items (each such instance gets the label of the most similar labeled data item in $e$). Similarity is measured by cosine distance over the features of the interacted samples. This ensures that there are enough training samples to train the classifier effectively.
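A minimal sketch of this label-propagation step, assuming scikit-learn; the helper name, the threshold $t$, and $s$ are illustrative assumptions, not Gaggle's exact code:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def augment_training_set(X, labels, t=20, s=2):
    """If fewer than t items are labeled, copy each labeled item's label to
    its s nearest unlabeled neighbors (cosine distance). -1 marks unlabeled."""
    labels = labels.copy()
    labeled = np.flatnonzero(labels != -1)
    unlabeled = np.flatnonzero(labels == -1)
    if len(labeled) >= t or len(unlabeled) == 0:
        return labels
    d = cosine_distances(X[unlabeled], X[labeled])
    for col, i in enumerate(labeled):
        nearest = unlabeled[np.argsort(d[:, col])[:s]]
        labels[nearest] = labels[i]   # label from the most similar labeled item
    return labels

# Two tiny clusters; one user-labeled item per cluster.
X = np.array([[1.0, 0.0], [0.9, 0.1], [1.0, 0.1],
              [0.0, 1.0], [0.1, 0.9], [0.0, 1.1]])
labels = np.array([0, -1, -1, 1, -1, -1])
augmented = augment_training_set(X, labels)
```

With only two interacted items, the sketch fills the training set by giving each cluster's unlabeled points the label of their nearest interacted neighbor.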
+
As users interact with more data instances, the training set grows and the test set shrinks, helping them build a more robust classifier. For each classifier, Gaggle determines the class probabilities $P_{ij}$, representing the probability of data item $i$ being classified into class $j$. The class probabilities are used to augment the ranking computation (explained below), as they represent the model's confidence that a data item belongs to a given class. Gaggle's interactive labeling approach closely resembles active learning (AL) [78], where systems actively suggest data items users should label. Instead, Gaggle allows users to freely label any data item in $D$ to construct a classifier. Furthermore, our technique incorporates user feedback to both classify and rank data items.
+
Ranking Model Technique: Gaggle's approach to interactive navigation of the model space for the ranking task is inspired by [37, 74], which allow users to subjectively rank multi-attribute data instances. However, unlike those systems, Gaggle constructs the model space using random forest models (an approach similar to [82]) to classify between pairs of data instances $R_i$ and $R_j$. While we tested both of these approaches, we adhered to random forest models owing to their better performance across various datasets. Using this technique, a model predicts whether $R_i$ should be placed above or below $R_j$, and follows the same strategy between all the interacted data samples and the rest of the dataset. Further, Gaggle augments this ranking technique with a feature selection method based on the interacted rows. For example, assume a user moves $R_i$ from rank $B_i$ to $B_j$, where $B_i > B_j$ (the row is given a higher rank). Our feature selection technique checks all the quantitative attributes of $R_i$ and retrieves $m = 3$ attributes $Q = Q_1, Q_2, Q_3$ (the value of $m$ is learned by heuristics and can be adjusted) that best represent why $R_i$ should be ranked higher than $R_j$. The attribute set $Q$ consists of the attributes on which $R_i$ is better than $R_j$. If $B_i < B_j$, Gaggle retrieves features that support the demotion and follows the same protocol.
+
This technique performs the same operations for all the interacted rows, and finally retrieves a set of features $F_s$ (the features common to each individually interacted row) that defines the user's intended rank order. If a feature satisfies one interaction but fails on another, it is left out; only features common across all interacted items are selected. If the user specifies incoherent data instances that lead to an empty or very small $F_s$, Gaggle uses scikit-learn's SelectKBest feature selection technique to fill $F_s$. However, this may produce models that do not adhere to the demonstrated user interactions.
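A sketch of this pairwise feature selection with the SelectKBest fallback; `supporting_features` and `common_features` are illustrative names under our reading of the method, not Gaggle's API:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def supporting_features(X, i, j, m=3):
    """Indices of up to m attributes on which row i exceeds row j the most."""
    diff = X[i] - X[j]
    top = np.argsort(diff)[::-1][:m]
    return set(top[diff[top] > 0])      # keep only attributes where i beats j

def common_features(X, pairs, y=None, m=3, k=3):
    """Intersect supporting features over all interacted pairs (i above j);
    fall back to SelectKBest when the intersection comes up empty."""
    sets = [supporting_features(X, i, j, m) for i, j in pairs]
    Fs = set.intersection(*sets) if sets else set()
    if not Fs and y is not None:
        Fs = set(SelectKBest(f_classif, k=k).fit(X, y).get_support(indices=True))
    return sorted(int(f) for f in Fs)

X = np.array([[5.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [4.0, 0.0, 2.0]])
Fs = common_features(X, pairs=[(0, 1), (2, 1)])   # rows 0 and 2 outrank row 1
```

Here only the first attribute supports both demonstrated rankings, so it alone survives the intersection.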
+
The set of selected features $F_s$ is then used to build the random forest model for the ranking task, which computes a ranking score $E_{ij}$ ($i$th instance, $j$th class) for each data item in $D$. Next, using the class probabilities $P_{ij}$ and the ranking scores $E_{ij}$, Gaggle ranks the data instances within each class. A final ranking score $G_{ij} = E_{ij} * W_r + P_{ij} * (1 - W_r)$ is computed by combining the ranking score $E_{ij}$ of each data item in $D$ and its class probability $P_{ij}$ retrieved from the classifier, where $W_r$ is the weight of the rank score and $1 - W_r$ is the weight of the classification probability (see Figure 4). The weights are set based on model accuracy on various datasets. Finally, the dataset is sorted by $G_{ij}$. While the described technique uses random forest models, in practice we have tested it with other ML models such as SVM. Furthermore, the weights described here are a set of hyperparameters that need to be tuned based on the chosen model and the dataset.
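In code, the final score reduces to a weighted sum; $W_r = 0.6$ below is an assumed value, since the paper tunes the weight per dataset:

```python
import numpy as np

def final_ranking_score(E, P, W_r=0.6):
    """G_ij = E_ij * W_r + P_ij * (1 - W_r)."""
    return E * W_r + P * (1.0 - W_r)

E = np.array([0.9, 0.2, 0.5])    # ranking scores from the random forest
P = np.array([0.8, 0.7, 0.1])    # class probabilities from the classifier
G = final_ranking_score(E, P)
order = np.argsort(-G)           # sort the dataset by G, best first
```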
+
+### 5.2 Model Selection
+
Gaggle selects an optimal model from the model space based on the following metrics, which describe each model's performance (see Figure 4) and are fed to the Bayesian optimization module to sample better models:
+
+
+
+Figure 5: Model space navigation approach using Bayesian optimization to find the best performing model based on user-defined metrics.
+
Classification Metrics: The metrics used to evaluate classifiers are the percentage of wrongly labeled interacted data instances $C_u$ and the cross-validation score from 10-fold evaluation $C_v$ (both range between 0 and 1). Other metrics, such as precision or F1-score, can be specified based on the dataset and the user's request. The final metric is the weighted sum of these components, computed as $C_u * W_u + C_v * W_v$, where $W_u$ and $W_v$ are the respective weights of the two classification metric components. Different weight values were tested during implementation, and we chose the set of weights that led to the best gain in model accuracy.
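A sketch of this composite classification metric with scikit-learn; the equal weights and the helper name are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def classification_metric(model, X, y, interacted, W_u=0.5, W_v=0.5):
    """C_u * W_u + C_v * W_v: fraction of interacted items the fitted model
    mislabels, combined with the mean 10-fold cross-validation score."""
    C_u = np.mean(model.predict(X[interacted]) != y[interacted])
    C_v = cross_val_score(model, X, y, cv=10).mean()
    return C_u * W_u + C_v * W_v

X, y = make_classification(n_samples=200, random_state=0)  # placeholder data
model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
score = classification_metric(model, X, y, interacted=np.arange(10))
```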
+
Ranking Metrics: To evaluate models for the ranking task, Gaggle computes three ranking metrics based on the absolute distance between a data instance's position before and after a given model $M_i$ is applied to the data. Assume a row $r$ was ranked $q$ when the user interacted with the data. After applying model $M_i$, row $r$ is at position $p$; the absolute distance is then $d_r = abs(p - q)$. The first ranking metric computes the absolute distances only over the interacted rows. It is defined as $Z_u = \left(\sum_{r \in I} d_r\right)/l$, where row $r$ is in the set $I$ of all $l$ interacted rows. The second metric, $Z_v$, computes the absolute distance over the immediate $h$ rows above and below each interacted row. It is defined as $Z_v = \left(\sum_{r \in I}\left(\sum_{t \in H} d_t\right)/h\right)/l$, where row $r$ is in the set $I$ of all $l$ interacted rows and $H$ is the set of $h$ rows above and below each interacted row. This metric captures whether a ranked data item is placed in the same neighborhood of data items as intended by the user. In Gaggle, $h$ defaults to 3 (but can be adjusted). The third metric, $Z_w$, computes the absolute distance over all instances of the data before and after a model is applied. It is defined as $Z_w = \left(\sum_{r \in D_n} d_r\right)/n$, where row $r$ is in the set $D_n$ of all $n$ rows. A lower distance represents a better model fit.
+
The final ranking metric is the weighted sum of these metrics, defined as $Z_{\text{total}} = Z_u * W_u + Z_v * W_v + Z_w * W_w$, where $W_u, W_v, W_w$ are the weights of the three ranking metrics. Weights were tested during implementation, and we chose the set of weights that gave the best model accuracy. While $Z_u$ captures user-defined ranking interactions in the current iteration, $Z_v$ and $Z_w$ ensure that the user's progress (over multiple iterations) is preserved in the ranking of the entire dataset. Furthermore, we used these metrics instead of other ranking metrics, such as normalized discounted cumulative gain (NDCG) [75], because the latter relies on document relevance, which in this context seemed less useful for capturing user preferences. Also, NDCG is not derived from a ranking function but instead relies on document ranks [71]. Another metric, Bayesian personalized ranking (BPR) by Rendle et al. [57], ranks a recommended list of items based on users' implicit behavior. However, unlike the use case supported by BPR, our work specifically allows users to rank the data subjectively. Furthermore, unlike BPR, our metric also takes negative examples into account (i.e., when a data item is ranked lower than the rest).
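The three absolute-distance metrics and their weighted combination can be sketched as follows; the weights are illustrative placeholders, as the paper tunes them empirically:

```python
import numpy as np

def ranking_metrics(before, after, interacted, h=3):
    """Z_u, Z_v, Z_w from Sec. 5.2: mean absolute rank displacement of the
    interacted rows, of their h-row neighborhoods, and of all rows."""
    d = {r: abs(after[r] - before[r]) for r in before}
    items = sorted(before, key=before.get)              # pre-model rank order
    Z_u = np.mean([d[r] for r in interacted])
    neigh_means = []
    for r in interacted:
        pos = items.index(r)
        H = items[max(0, pos - h):pos] + items[pos + 1:pos + 1 + h]
        neigh_means.append(np.mean([d[t] for t in H]))
    Z_v = np.mean(neigh_means)
    Z_w = np.mean(list(d.values()))
    return Z_u, Z_v, Z_w

def z_total(Z_u, Z_v, Z_w, W=(0.5, 0.3, 0.2)):          # assumed weights
    return Z_u * W[0] + Z_v * W[1] + Z_w * W[2]

before = {r: r for r in range(6)}                       # item -> rank position
after = {0: 1, 1: 0, 2: 2, 3: 3, 4: 4, 5: 5}            # model swapped items 0 and 1
Z_u, Z_v, Z_w = ranking_metrics(before, after, interacted=[0])
```

In the toy example, the interacted item moved one position, so $Z_u = 1$, while its neighborhood and the full dataset were barely disturbed.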
+
+## 6 User Study
+
We conducted a user study to evaluate Gaggle's automatic model space navigation technique in support of the classification and ranking tasks. Our goal was to get user feedback on Gaggle's system features, design, and workflow. Further, by collecting observational data, we wanted to know whether our technique helps users find an optimal model satisfying their goal. We designed a qualitative controlled lab study in which participants used Gaggle to perform a set of predefined tasks and, at the end, gave feedback on the system design, usability, and workflow.
+
+### 6.1 Participants
+
We recruited 22 graduate and undergraduate students (14 male). The inclusion criteria were that participants should be non-experts in ML and have adequate knowledge of movies and cities (the datasets used for the study). None of the participants had used Gaggle prior to the study. We compensated the participants with a \$10 gift card. The study was conducted in a lab environment using a laptop with a 17-inch display and a mouse. The full experiment lasted 60-70 minutes.
+
+### 6.2 Study Design
+
Participants were asked to complete 4 tasks: multi-class classification of items (3 classes), ranking the classified data items, binary classification of items, and ranking the classified data items. Participants performed these 4 tasks on 2 datasets, Movies [3] and Cities [2]. To reduce learning and ordering effects, the order of the datasets and the tasks was randomized. In total, each participant performed 8 tasks, 4 per dataset. We began with a practice session to teach users about Gaggle. During this session, participants performed 4 tasks, which took 15 minutes and included multi-class classification and ranking, and binary classification and ranking, on the Cars dataset [1]. We encouraged participants to ask as many questions as they wanted to clarify system usability or interaction issues. We proceeded to the experimental sessions only when participants were confident using Gaggle.
+
Participants were asked to build a multi-class classifier first, followed by a binary classification and ranking task on the same dataset. They then repeated the same set of tasks on the other dataset. The Movies dataset had 5000 items with 11 attributes, while the Cities dataset had 140 items with 45 attributes. We asked participants to create specific classes for each dataset. For the multi-class task, the labels were sci-fi, horror/thriller, and misc for the Movies dataset, and fun-cities, work-cities, and misc for the Cities dataset. For the binary classification task, the given labels were popular and unpopular (Movies dataset), and western and non-western (Cities dataset).
+
+### 6.3 Data Collection and Analysis
+
We collected subjective feedback and observational data throughout the study. We encouraged participants to think aloud while they interacted with Gaggle. During the experiment sessions, we observed the participants silently and unobtrusively so as not to interrupt their flow, mitigating Hawthorne and Rosenthal effects. We audio recorded the sessions and video recorded every participant's screen. We collected qualitative feedback through a semi-structured interview comprising open-ended questions at the end of the study. We asked questions such as What were you thinking while using Gaggle to classify data items? and What was your experience working with Gaggle? Furthermore, after each trial per dataset, we asked participants to complete a questionnaire containing Likert-scale questions. For example, we asked: (1) On a scale of 1 to 5, how successfully was the system learning based on the interactions provided? (1 is randomly, 5 is very consistently), (2) On a scale of 1 to 5, how satisfied are you with the classification model output? (1 is not satisfied, 5 is very satisfied), (3) On a scale of 1 to 5, how satisfied are you with the ranking model output? (1 is not satisfied, 5 is very satisfied). Here, satisfaction means how well the underlying model adhered to the user's demonstrated interactions. Please refer to the supplemental material${}^{1}$ for the study questionnaire.



Figure 6: User preferences (averaged over datasets) for the four tasks.
+
+### 6.4 User Preferences
+
We collected user preference ratings for all four tasks (see Figure 6). The scores ranged from 1 to 5 (1 meaning least preferred, 5 meaning highly preferred). The average rating of Gaggle for the multi-class classification with ranking task was 3.97; for the binary classification with ranking task it was 4.17. Though users approved of Gaggle's simplicity in letting them classify and rank data samples, they preferred Gaggle for the binary classification and ranking task, owing to its higher accuracy and more consistent matching of users' interpretation of the data.
+
+### 6.5 Model Switching Behavior
+
For all participants, we collected log data to track how models were selected as users interactively navigated the model space. We sought to understand how model hyperparameters switched during usage. For participants using the Movies dataset (multi-class classification task), the max-depth hyperparameter changed values, ranging from 3 to 18. Similarly, for the Cities dataset (multi-class classification task), the criteria hyperparameter switched between entropy and gini. The min-samples hyperparameter varied within the range of 5 to 36 for both datasets. For the binary classification task, max-depth ranged from 4 to 9 for both datasets; we also noticed the criteria hyperparameter switching from gini to entropy for both datasets. On average, the hyperparameters switched $M = 9.34\ [7.49, 11.19]$ times for the multi-class classification and ranking task, while the average was $M = 5.41\ [4.89, 5.93]$ for the binary classification and ranking task. These results indicate that the interactive model space navigation technique found new models as participants interacted with Gaggle.
+
+### 6.6 Qualitative Feedback
+
Drag and drop interaction: All the participants liked the drag and drop interaction for demonstrating examples to the system. "I like the drag items feature, it feels very natural to move data items around showing the system quickly what I want." (P8). However, with a long list of items in one class, it can become difficult to move single items. P18 suggested, "I would prefer to drag-drop a bunch of data items in a group." In the future, we will consider adding this functionality.
+
+Ease of system use: Most participants found the system easy to use. P12 said "The process is very fluid and interactive. It is simple and easy to learn quickly." P12 added "While the topic of classification and ranking models is new to me, I find the workflow and the interaction technique very easy to follow. I can relate to the use case and see how it [Gaggle] can help me explore data in various scenarios."
+
+Recommended items: Recommending data while dragging items into various labels helped users find correct data items to label. P12 said "I liked the recommendation feature, which most of the time was accurate to my expectation. However, I would expect something like that for ranking also." P2 added "I found many examples from the recommendation panel. I felt it was intelligent to adapt to my already shown examples."
+
+---
+
+${}^{1}$ Data: https://gtvalab.github.io/projects/gaggle.html
+
+---
+
+User-defined Constraints: The interacted row visualization helped users understand the constraints they placed on the classification and ranking models. P14 said "This view shows me clearly what constraints are met and what did not. I can keep track of the number of blue encodings to know how many are correctly predicted". Even though the green highlights in the Data Viewer also mark the interacted data items, the Interacted Row View shows a list of all correct/incorrect matches in terms of classification and ranking.
+
Labeling Strategy: A few participants changed their strategy for labeling items as they interacted with Gaggle, expecting it might confuse the system. However, to their surprise, Gaggle adapted to the interactions and still satisfied most of the user-defined class definitions. P17 said "In the movies data set, I was classifying sci-fi, and thriller movies differently at first, but later I changed based on recent movies that I saw. I was surprised to see Gaggle still got almost all the expected labels right for non-interacted movies."
+
+## 7 DISCUSSION AND LIMITATIONS
+
Large Model Search Space: Searching for models by combining different learning algorithms and hyperparameters leads to an extremely large search space. As a result, a small set of constraints on the search process may not sufficiently reduce the space, leading to a large number of under-constrained, ill-defined solutions. This raises the question: how many interactions are optimal for a given model space? In this work, we approached this challenge by using Bayesian optimization to rank models. However, larger search spaces may pose scalability issues, while too many user constraints may "over-constrain" models, leading to poor results.
+
Scalability: The current interaction design is intended to support small to moderate dataset sizes. In the user study, we limited the dataset size to understand how users interact with the system and provide feedback to classification and ranking models. However, the current design is not meant to handle large datasets of, say, twenty thousand data items. In the future, we would like to address this concern by using AutoML-based cloud services coupled with progressive visual analytics [63].
+
Abrupt Model and Result Changes: As users interact to navigate the model space, each iteration of the process may find substantially different models. For example, users might find a random forest model with "criteria = gini, depth of tree = 50" in one iteration and "criteria = entropy, depth of tree = 2" in the next. This may entail significant changes in the results of these models. While these abrupt changes may not impact users greatly if they are unaware of the model parameterizations, showing users what changed in the output may ease these transitions.
+
Data Exploration using ML: Supporting data exploration while creating models interactively is challenging. Users may change their task definition slightly or learn new information about their data. In these cases, user feedback may be better modeled by a different model hyperparameterization than earlier in their task. Updating the class definition or showing better examples impacts the underlying decision boundary, which the classifier needs to map correctly. For example, in earlier iterations, a linear decision boundary might characterize the data; however, when new examples for the classes are provided, the decision boundary might be better approximated by a polynomial or radial surface (see Figure 7). In situations like this, Gaggle helps users by finding an optimal model with new hyperparameter settings, without requiring them to change the settings manually.



Figure 7: As users gain more knowledge through exploration, they may change their task, which might require different model hyperparameters. (Blue and orange points represent positive and negative classes; white points represent data items not interacted with.)

ML practices and model overfitting: Conventionally, classifiers are trained on a training set and then validated on a test set. Our technique uses the full dataset as input data, the interacted data items as the training set, and the rest as the test set. As users iteratively construct classifiers, the training set grows in size and the test set shrinks. We used this approach to account for user-specified preferences expressed through iterative interactions. Nevertheless, our process follows the conventional ML principle that classifier training is done independently of the test data: the model only makes predictions on the test data after training, and Gaggle enables users to inspect the results. However, a challenge systems like Gaggle face is model overfitting [21]. An overly aggressive search through the model space might lead to a model that best serves the user's added constraints but underperforms on an unseen dataset. We believe that in use cases where ML is used to organize or explore the data, overfitting is less problematic, considering the constructed models are not meant to be applied to unseen data.
+
Active Learning and Gaggle: Gaggle's approach to interactive labeling is closely related to active learning (AL) strategies in ML, in which systems request users to specify labels for data instances on which the model is less confident. However, Gaggle gives users freedom over which items to label. AL, on the other hand, relies on existing labels in the training data and only asks users to confirm labels for certain data instances when needed (e.g., when the classifier is less confident in its prediction for a data instance). While Gaggle's approach gives users more agency over the process, it may be less suitable for larger datasets, where AL techniques could present users with the items that most need feedback.
+
Extending the Model Space Navigation: The interactive model space navigation technique, which translates user interactions into classification and ranking metrics, can be extended to other ML models. For example, besides random forest models, we have tested Gaggle with an SVM model for the classification task and the RankSVM technique for the ranking task. Likewise, Gaggle could be used with a boosting model for the classification task and a weighted ranking model built from each component model of the boosted ensemble for the ranking task.
+
+## 8 CONCLUSION
+
+In this paper, we present an interactive model space navigation approach for helping people perform classification and ranking tasks. Current VA techniques rely on a pre-selected model for a designated task or problem. However, these systems may fail if the selected model does not suit the task or the user's goals. As a solution, our technique helps users find a model suited to their goals by interactively navigating the high-dimensional model space. Using this approach, we prototyped Gaggle, a VA system to facilitate classification and ranking of data items. Further, with a qualitative user study, we collected and analyzed user feedback to understand the usability and effectiveness of Gaggle. The study results show that users agree that Gaggle is easy to use, intuitive, and helps them interactively navigate the model space to find an optimal classification and ranking model.
+
+## 9 ACKNOWLEDGEMENTS
+
+Support for the research is partially provided by DARPA FA8750- 17-2-0107. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.
+
+## REFERENCES
+
[1] Cars dataset. http://courses.washington.edu/hcde511/s14/datasets/cars.xls. Accessed: 2018-12-11.
+
[2] Cities dataset. http://graphics.cs.wisc.edu/Vis/Explainers/data.html. Accessed: 2018-12-11.
+
[3] Movies dataset. https://www.kaggle.com/carolzhangdc/imdb-5000-movie-dataset#movie_metadata.csv. Accessed: 2018-12-11.
+
[4] S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4):105-120, 2014.
+
[5] S. Amershi, J. Fogarty, A. Kapoor, and D. S. Tan. Effective end-user interaction with machine learning. In AAAI, 2011.
+
+[6] S. Amershi, J. Fogarty, and D. Weld. Regroup: Interactive machine learning for on-demand group creation in social networks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 21-30. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2207680
+
+[7] D. Arendt, E. Saldanha, R. Wesslen, S. Volkova, and W. Dou. Towards rapid interactive machine learning: Evaluating tradeoffs of classification without representation. In Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI '19, pp. 591-602. ACM, New York, NY, USA, 2019. doi: 10.1145/3301275.3302280
+
+[8] J. Bergstra, D. Yamins, and D. D. Cox. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, pp. 13-20, 2013.
+
+[9] J. Bernard, M. Zeppelzauer, M. Sedlmair, and W. Aigner. A Unified Process for Visual-Interactive Labeling. In M. Sedlmair and C. Tominski, eds., EuroVis Workshop on Visual Analytics (EuroVA). The Eurograph-ics Association, 2017. doi: 10.2312/eurova.20171123
+
+[10] J. Bernard, M. Zeppelzauer, M. Sedlmair, and W. Aigner. Vial: a unified process for visual interactive labeling. The Visual Computer, pp. 73-77, Mar 2018.
+
+[11] F. Bouali, A. Guettala, and G. Venturini. Vizassist: An interactive user assistant for visual data mining. Vis. Comput., 32(11):1447-1463, Nov. 2016. doi: 10.1007/s00371-015-1132-9
+
+[12] N. Boukhelifa, A. Bezerianos, I. C. Trelea, N. M. Perrot, and E. Lutton. An exploratory study on visual exploration of model simulations by multiple types of experts. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 644:1-644:14. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300874
+
+[13] L. Bradel, C. North, L. House, and S. Leman. Multi-model semantic interaction for text analytics. In 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 163-172, Oct 2014. doi: 10.1109/VAST.2014.7042492
+
+[14] E. Brown, Y. Sriram, K. Cook, R. Chang, and A. Endert. ModelSpace: Visualizing the Trails of Data Models in Visual Analytics Systems, 2018.
+
+[15] E. T. Brown, J. Liu, C. E. Brodley, and R. Chang. Dis-function: Learning distance functions interactively. In Proceedings of the 2012 IEEE Conference on Visual Analytics Science and Technology (VAST), VAST '12, pp. 83-92. IEEE Computer Society, Washington, DC, USA, 2012. doi: 10.1109/VAST.2012.6400486
+
+[16] D. Cashman, S. R. Humayoun, F. Heimerl, K. Park, S. Das, J. Thompson, B. Saket, A. Mosca, J. T. Stasko, A. Endert, M. Gleicher, and R. Chang. Visual analytics for automated model discovery. CoRR, abs/1809.10782, 2018.
+
[17] M. Cavallo and Ç. Demiralp. Clustrophile 2: Guided visual clustering analysis. IEEE Transactions on Visualization and Computer Graphics, 25:267-276, 2018.
+
[18] M. Chegini, J. Bernard, P. Berger, A. Sourin, K. Andrews, and T. Schreck. Interactive labelling of a multivariate dataset for supervised machine learning using linked visualisations, clustering, and active learning. Visual Informatics, 3(1):9-17, 2019. Proceedings of PacificVAST 2019. doi: 10.1016/j.visinf.2019.03.002
+
+[19] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. J. Artif. Int. Res., 4(1):129-145, Mar. 1996.
+
+[20] R. J. Crouser, L. Franklin, A. Endert, and K. Cook. Toward theoretical techniques for measuring the use of human effort in visual analytic systems. IEEE transactions on visualization and computer graphics, 23(1):121-130, 2017.
+
+[21] P. Daee, T. Peltola, A. Vehtari, and S. Kaski. User modelling for avoiding overfitting in interactive knowledge elicitation for prediction. In 23rd International Conference on Intelligent User Interfaces, IUI '18, pp. 305-310. ACM, New York, NY, USA, 2018. doi: 10.1145/ 3172944.3172989
+
[22] S. Das, D. Cashman, R. Chang, and A. Endert. BEAMES: Interactive multi-model steering, selection, and inspection for regression tasks. In IEEE CGA, 2019.
+
+[23] D. Dingen, M. van 't Veer, P. Houthuizen, E. H. J. Mestrom, H. H. M. Korsten, A. R. A. Bouwman, and J. J. van Wijk. Regressionexplorer: Interactive exploration of logistic regression models with subgroup analysis. IEEE Transactions on Visualization and Computer Graphics, 25:246-255, 2018.
+
+[24] J. J. Dudley and P. O. Kristensson. A review of user interface design for interactive machine learning. ACM Trans. Interact. Intell. Syst., 8(2):8:1-8:37, June 2018. doi: 10.1145/3185517
+
+[25] A. Endert, L. Bradel, and C. North. Beyond Control Panels: Direct Manipulation for Visual Analytics. IEEE Computer Graphics and Applications, 33(4):6-13, 2013. doi: 10.1109/MCG.2013.53
+
+[26] A. Endert, P. Fiaux, and C. North. Semantic interaction for visual text analytics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 473-482. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2207741
+
+[27] A. Endert, C. Han, D. Maiti, L. House, S. C. Leman, and C. North. Observation-level Interaction with Statistical Models for Visual Analytics. In IEEE VAST, pp. 121-130, 2011.
+
+[28] A. Endert, W. Ribarsky, C. Turkay, B. Wong, I. Nabney, I. D. Blanco, and F. Rossi. The state of the art in integrating machine learning into visual analytics. In Computer Graphics Forum, vol. 36, pp. 458-486. Wiley Online Library, 2017.
+
+[29] E. Fehrman, A. K. Muhammad, E. M. Mirkes, V. Egan, and A. N. Gorban. The five factor model of personality and evaluation of drug consumption risk. 2017.
+
+[30] M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems, pp. 2962-2970, 2015.
+
+[31] S. Gratzl, A. Lex, N. Gehlenborg, H. Pfister, and M. Streit. LineUp: Visual analysis of multi-attribute rankings. IEEE Transactions on Visualization and Computer Graphics (InfoVis '13), 19(12):2277-2286, 2013. doi: 10.1109/TVCG.2013.173
+
+[32] F. Heimerl, S. Koch, H. Bosch, and T. Ertl. Visual classifier training for text document retrieval. IEEE Transactions on Visualization and Computer Graphics, 18(12):2839-2848, 2012.
+
+[33] A. Holzinger. Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Informatics, 3(2):119-131, Jun 2016. doi: 10.1007/s40708-016-0042-6
+
+[34] A. Holzinger, M. Plass, K. Holzinger, G. C. Crişan, C.-M. Pintea, and V. Palade. Towards interactive machine learning (iML): Applying ant colony algorithms to solve the traveling salesman problem with the human-in-the-loop approach. In Availability, Reliability, and Security in Information Systems, vol. 9817 of LNCS, pp. 81-95. Springer International Publishing, Aug 2016.
+
+[35] K. Jedoui, R. Krishna, M. S. Bernstein, and F.-F. Li. Deep bayesian active learning for multiple correct outputs. ArXiv, abs/1912.01119, 2019.
+
+[36] D. H. Jeong, C. Ziemkiewicz, B. Fisher, W. Ribarsky, and R. Chang. iPCA: An interactive system for PCA-based visual analytics. In Computer Graphics Forum, vol. 28, pp. 767-774. Wiley Online Library, 2009.
+
+[37] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '02, pp. 133-142. ACM, New York, NY, USA, 2002. doi: 10.1145/775047.775067
+
+[38] S. Kandel, A. Paepcke, J. Hellerstein, and J. Heer. Wrangler: Interactive visual specification of data transformation scripts. In ACM Human Factors in Computing Systems (CHI), 2011.
+
+[39] H. Kim, J. Choo, H. Park, and A. Endert. InterAxis: Steering scatterplot axes via observation-level interaction. IEEE Transactions on Visualization and Computer Graphics, 22(1):131-140, 2016.
+
+[40] B. Komer, J. Bergstra, and C. Eliasmith. Hyperopt-sklearn: automatic hyperparameter configuration for scikit-learn. In ICML workshop on AutoML, 2014.
+
+[41] L. Kotthoff, C. Thornton, H. H. Hoos, F. Hutter, and K. Leyton-Brown. Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA. Journal of Machine Learning Research, 17:1-5, 2016.
+
+[42] B. C. Kwon, B. Eysenbach, J. Verma, K. Ng, C. deFilippi, W. F. Stewart, and A. Perer. Clustervision: Visual supervision of unsupervised clustering. IEEE Transactions on Visualization and Computer Graphics, 24(1):142-151, 2018.
+
+[43] B. C. Kwon, H. Kim, E. Wall, J. Choo, H. Park, and A. Endert. AxiSketcher: Interactive nonlinear axis mapping of visualizations through user drawings. IEEE Trans. Vis. Comput. Graph., 23(1):221-230, 2017.
+
+[44] S. C. Leman, L. House, D. Maiti, A. Endert, and C. North. Visual to parametric interaction (V2PI). PLoS ONE, 8(3):e50474, 2013.
+
+[45] F.-F. Li and J. Li. Cloud AutoML: Making AI accessible to every business. https://www.blog.google/topics/google-cloud/cloud-automl-making-ai-accessible-every-business/. Accessed: 2019-06-30.
+
+[46] K. Matkovic, D. Gracanin, M. Jelovic, and H. Hauser. Interactive visual steering - rapid visual prototyping of a common rail injection system. IEEE Transactions on Visualization and Computer Graphics, 14(6):1699-1706, Nov. 2008.
+
+[47] T. Mühlbacher and H. Piringer. A partition-based framework for building and validating regression models. IEEE Transactions on Visualization and Computer Graphics, 19:1962-1971, 2013.
+
+[48] H. T. Nguyen and A. Smeulders. Active learning using pre-clustering. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04, p. 79. Association for Computing Machinery, New York, NY, USA, 2004. doi: 10.1145/1015330.1015349
+
+[49] S. Pajer, M. Streit, T. Torsney-Weir, F. Spechtenhauser, T. Möller, and H. Piringer. WeightLifter: Visual weight space exploration for multi-criteria decision making. IEEE Transactions on Visualization and Computer Graphics, pp. 611-620, October 2016.
+
+[50] S. Paparizos, J. M. Patel, and H. Jagadish. SIGOPT: Using schema to optimize XML query processing. In Data Engineering, 2007. ICDE 2007. IEEE 23rd International Conference on, pp. 1456-1460. IEEE, 2007.
+
+[51] M. Pelikan, D. E. Goldberg, and E. Cantú-Paz. BOA: The Bayesian optimization algorithm. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1, GECCO'99, pp. 525-532. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999.
+
+[52] H. Piringer, W. Berger, and J. Krasser. HyperMoVal: Interactive visual validation of regression models for real-time simulation. In Proceedings of the 12th Eurographics / IEEE - VGTC Conference on Visualization, EuroVis'10, pp. 983-992. The Eurographs Association & John Wiley & Sons, Ltd., Chichester, UK, 2010. doi: 10.1111/j.1467-8659.2009.01684.x
+
+[53] B. Qian, X. Wang, J. Wang, H. Li, N. Cao, W. Zhi, and I. Davidson. Fast pairwise query selection for large-scale active learning to rank. 2013 IEEE 13th International Conference on Data Mining, pp. 607-616, 2013.
+
+[54] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. J. Mach. Learn. Res., 11:1297-1322, Aug. 2010.
+
+[55] D. Ren, S. Amershi, B. Lee, J. Suh, and J. D. Williams. Squares: Supporting interactive performance analysis for multiclass classifiers. IEEE Transactions on Visualization and Computer Graphics, 23(1):61-70, Jan 2017. doi: 10.1109/TVCG.2016.2598828
+
+[56] D. Ren, T. Höllerer, and X. Yuan. iVisDesigner: Expressive interactive design of information visualizations. IEEE Transactions on Visualization and Computer Graphics, 20(12):2092-2101, Dec 2014. doi: 10.1109/TVCG.2014.2346291
+
+[57] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pp. 452-461. AUAI Press, Arlington, Virginia, United States, 2009.
+
+[58] D. Sacha, M. Sedlmair, L. Zhang, J. A. Lee, J. Peltonen, D. Weiskopf, S. C. North, and D. A. Keim. What you see is what you can change: Human-centered machine learning by interactive visualization. Neurocomputing, 268:164-175, 2017.
+
+[59] B. Saket, H. Kim, E. T. Brown, and A. Endert. Visualization by demonstration: An interaction paradigm for visual data exploration. IEEE Transactions on Visualization and Computer Graphics, 23(1):331-340, Jan 2017. doi: 10.1109/TVCG.2016.2598839
+
+[60] M. Sedlmair, C. Heinzl, S. Bruckner, H. Piringer, and T. Möller. Visual parameter space analysis: A conceptual framework. IEEE Transactions on Visualization and Computer Graphics, 20(12):2161-2170, Dec 2014. doi: 10.1109/TVCG.2014.2346321
+
+[61] B. Settles. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan Claypool Publishers, 2012.
+
+[62] J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds., Advances in Neural Information Processing Systems 25, pp. 2951-2959. Curran Associates, Inc., 2012.
+
+[63] C. D. Stolper, A. Perer, and D. Gotz. Progressive visual analytics: User-driven visual exploration of in-progress analytics. IEEE Transactions on Visualization and Computer Graphics, 20(12):1653-1662, Dec 2014. doi: 10.1109/TVCG.2014.2346574
+
+[64] S. Stumpf, V. Rajaram, L. Li, M. Burnett, T. Dietterich, E. Sullivan, R. Drummond, and J. Herlocker. Toward harnessing user feedback for machine learning. In Proceedings of the 12th International Conference on Intelligent User Interfaces, IUI '07, pp. 82-91. ACM, New York, NY, USA, 2007. doi: 10.1145/1216295.1216316
+
+[65] S. Stumpf, V. Rajaram, L. Li, W.-K. Wong, M. Burnett, T. Dietterich, E. Sullivan, and J. Herlocker. Interacting meaningfully with machine learning systems: Three experiments. Int. J. Hum.-Comput. Stud., 67(8):639-662, Aug. 2009. doi: 10.1016/j.ijhcs.2009.03.004
+
+[66] Y. Sun, E. Lank, and M. Terry. Label-and-learn: Visualizing the likelihood of machine learning classifier's success during data labeling. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, IUI '17, pp. 523-534. ACM, New York, NY, USA, 2017. doi: 10.1145/3025171.3025208
+
+[67] M. E. Taylor and P. Stone. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res., 10:1633-1685, Dec. 2009.
+
+[68] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown. Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 847-855. ACM, 2013.
+
+[69] T. Li, G. Convertino, W. Wang, and H. Most. HyperTuner: Visual analytics for hyperparameter tuning by professionals. In Machine Learning from User Interaction for Visualization and Analytics workshop at IEEE VIS 2018, 2018.
+
+[70] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. J. Mach. Learn. Res., 2:45-66, Mar. 2002. doi: 10.1162/153244302760185243
+
+[71] H. Valizadegan, R. Jin, R. Zhang, and J. Mao. Learning to rank by optimizing ndcg measure. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, eds., Advances in Neural Information Processing Systems 22, pp. 1883-1891. Curran Associates, Inc., 2009.
+
+[72] S. Van Den Elzen and J. J. van Wijk. BaobabView: Interactive construction and analysis of decision trees. In Visual Analytics Science and Technology (VAST), 2011 IEEE Conference on, pp. 151-160. IEEE, 2011.
+
+[73] J. Vanschoren, J. N. Van Rijn, B. Bischl, and L. Torgo. OpenML: Networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49-60, 2014.
+
+[74] E. Wall, S. Das, R. Chawla, B. Kalidindi, E. T. Brown, and A. Endert. Podium: Ranking data using mixed-initiative visual analytics. IEEE Transactions on Visualization and Computer Graphics, 24(1):288-297, Jan. 2018. doi: 10.1109/TVCG.2017.2745078
+
+[75] Y. Wang, L. Wang, Y. Li, D. He, T. Liu, and W. Chen. A theoretical analysis of NDCG type ranking measures. CoRR, abs/1304.6480, 2013.
+
+[76] J. Wenskovitch and C. North. Observation-level interaction with clustering and dimension reduction algorithms. In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA'17, pp. 14:1-14:6. ACM, New York, NY, USA, 2017. doi: 10.1145/3077257.3077259
+
+[77] K. Xu, M. Xia, X. Mu, Y. Wang, and N. Cao. EnsembleLens: Ensemble-based visual exploration of anomaly detection algorithms with multidimensional data. IEEE Transactions on Visualization and Computer Graphics, 25(1):109-119, Jan 2019. doi: 10.1109/TVCG.2018.2864825
+
+[78] Y. Yan, R. Rosales, G. Fung, F. Farooq, B. Rao, and J. Dy. Active learning from multiple knowledge sources. In N. D. Lawrence and M. Girolami, eds., Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, vol. 22 of Proceedings of Machine Learning Research, pp. 1350-1357. PMLR, La Palma, Canary Islands, 21-23 Apr 2012.
+
+[79] I.-C. Yeh and C.-h. Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Syst. Appl., 36(2):2473-2480, Mar. 2009. doi: 10.1016/j.eswa.2007.12.020
+
+[80] S. M. Yimam, C. Biemann, L. Majnaric, Š. Šabanović, and A. Holzinger. Interactive and iterative annotation for biomedical entity recognition. In Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, eds., Brain Informatics and Health, pp. 347-357. Springer International Publishing, Cham, 2015.
+
+[81] H. Zhang, S. S. Ravi, and I. Davidson. A graph-based approach for active learning in regression. ArXiv, abs/2001.11143, 2020.
+
+[82] Y. Zhou and G. Qiu. Random forest for label ranking. CoRR, abs/1608.07710, 2016.
+
+Index Terms: Human-centered computing; Visualization; Classification and ranking model visualization; Mixed-initiative systems
+
+§ 1 INTRODUCTION
+
+Visual analytic (VA) techniques continue to leverage machine learning (ML) to provide people with effective systems for gaining insights into data [28]. Systems such as InterAxis [39] help domain experts combine their knowledge and reasoning about a dataset or domain with the computational prowess of machine learning. These systems are traditionally designed around a single, pre-defined ML model with a carefully chosen learning algorithm and hyperparameter setting. Various combinations of learning algorithms and hyperparameters give rise to a vast number of different model types (see Table 1). These different models constitute an exhaustive model space, from which unique models can be sampled using a distinct combination of a learning algorithm and its associated hyperparameters. For example, support vector machine (SVM) models have many options for kernel functions (e.g., linear, polynomial, or radial) and hyperparameters (e.g., $C$, the regularization parameter, and $\gamma$, the kernel coefficient).
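To make the size of such a model space concrete, here is a minimal, hypothetical sketch (not Gaggle's code; the algorithms and value grids are illustrative assumptions) that enumerates candidate models as the cross product of learning algorithms and their hyperparameter settings:

```python
from itertools import product

# Hypothetical model space: each algorithm with an illustrative grid of
# hyperparameter values (the specific values are assumptions).
MODEL_SPACE = {
    "svm": {"kernel": ["linear", "poly", "rbf"], "C": [0.1, 1, 10]},
    "random_forest": {"criterion": ["gini", "entropy"], "max_depth": [4, 8, 16]},
}

def enumerate_models(space):
    """Yield (algorithm, hyperparameter dict) for every point in the space."""
    for algo, grid in space.items():
        names = sorted(grid)
        for values in product(*(grid[n] for n in names)):
            yield algo, dict(zip(names, values))

models = list(enumerate_models(MODEL_SPACE))
print(len(models))  # 3*3 SVM settings + 2*3 random forest settings = 15
```

Even this toy grid yields 15 distinct models; with continuous hyperparameters such as $C$ and $\gamma$, the space becomes effectively unbounded, which is what motivates automated navigation.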
+
+When a model is correctly chosen for the phenomena, task, data distribution, or question users try to answer, existing VA techniques can effectively support users in exploration and analysis. However, when the right model (or optimal model, as desired by the user) for a problem is not known a priori, one needs to navigate this model space to find a fitting model for the task. To address this, recent VA systems use multiple ML models to support a diverse set of user tasks, e.g., regression and clustering [15, 17, 22, 69]. For example, the VA system Clustervision [42] allows users to inspect multiple clustering models and select one based on quality and preference. Similarly, Snowcat [16] allows inspecting multiple ML models across a diverse set of tasks, such as classification, regression, and time-series forecasting. However, these multi-model systems are often more complex to use than single-model alternatives (e.g., Clustervision requires users to be well-versed in cluster model metrics and the models shown). We refer to this complex combination of parameter and hyperparameter settings as the model space, as a large number of models can be instantiated in this hyperdimensional space. Further, the interactive exploration of different parameter and hyperparameter combinations can be referred to as model space navigation. Our definition of model space is related to the work by Brown et al. [14], who presented a tool called ModelSpace to analyze how model parameters change over time during data exploration.
+
+In this paper we present Gaggle, a visual analytic tool that provides the user experience of a single-model system, yet leverages multiple models to support data exploration. Gaggle constructs multiple classification and ranking models and then, using a Bayesian optimization-based hyperparameter selection technique, automatically finds a classification and ranking model for users to inspect, simplifying the search for the model the user prefers. Furthermore, our technique utilizes simple user interactions for model space navigation to find the right model for the task. For example, users can drag data items into specific classes to record their preferences for the classification task. Similarly, users can demonstrate that specific data items should rank higher or lower within a class by dragging them on top of each other.
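As a sketch of how such demonstrations might be captured, the following hypothetical snippet (the function and variable names are our assumptions, not Gaggle's API) records drag-and-drop interactions as class labels and pairwise ranking preferences:

```python
# User feedback captured from demonstration-based interactions.
labels = {}            # item -> class label (drag item into a class bin)
rank_constraints = []  # (higher, lower) pairs (drag one item above another)

def drag_to_class(item, class_label):
    """User drags a data item into a class: record a training label."""
    labels[item] = class_label

def drag_above(item_a, item_b):
    """User drags item_a above item_b within a class: record that
    item_a should rank higher than item_b."""
    rank_constraints.append((item_a, item_b))

drag_to_class("Carl Yastrzemski", "Best Players")
drag_to_class("Ernie Banks", "Best Players")
drag_above("Carl Yastrzemski", "Ernie Banks")

print(labels["Ernie Banks"])   # Best Players
print(rank_constraints)        # [('Carl Yastrzemski', 'Ernie Banks')]
```

These recorded labels and pairwise preferences are the kind of lightweight training signal a system can feed to its classification and ranking models on the next build.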
+
+Gaggle uses ML to help users in data exploration or data structuring tasks, e.g., grouping data into self-defined categories and ranking the members of each group by how representative they are of the category. For example, a professor may want a tool to help categorize new student applications into different sets and then rank the students in each set. Similarly, a salesperson may want to cluster and rank potential clients in various groups. These problems fall under classification tasks in ML; however, unlike a conventional classification problem, our use case specifically supports interactive data exploration or data structuring, so the constructed models are not meant to predict labels for unseen data items in the future. With this workflow, we expect our technique to guard against model overfitting that could result from adjusting the models to conform to specified user preferences. Furthermore, Gaggle addresses a common problem of datasets that have inadequate ground truth labels or none at all [54, 67, 80]. To resolve this problem, Gaggle allows users to iteratively define classes and add labels: on each iteration, users label data items and then build a classifier.
+
+We conducted a qualitative user study of Gaggle to collect user feedback on the system design and usability. The results of our study indicate that users found the workflow in Gaggle intuitive, and they were able to perform classification and ranking tasks effectively. Further, users confirmed that Gaggle incorporated their feedback into the interactive model space navigation technique to find the right model for the task. Overall, the contributions of this paper include:
+
+ * A model space navigation technique facilitated by Bayesian optimization-based hyperparameter tuning and automated model selection.
+
+ * Gaggle, a VA tool that supports interactive model space navigation for classification and ranking tasks via simple demonstration-based user interactions.
+
+ * The results of a user study testing Gaggle's effectiveness to interactively build classifiers and ranking models.
+
+§ 2 RELATED WORK
+
+§ 2.1 INTERACTIONS IN VISUAL ANALYTICS
+
+Interactive model construction is a flourishing avenue of research. In general, such systems make use of both explicit user interactions, such as specifying parameters via graphical widgets (e.g., sliders), and implicit feedback, including demonstration-based interactions or eye movements, to provide guidance on model selection and steering. These systems build many kinds of models, including classification [7, 32], interactive labeling [18], metric learning [15], decision trees [72], and dimension reduction [27, 39, 43]. For example, Jeong et al. presented iPCA to show how directly manipulating attribute weights via control panels helps people adjust principal component analysis [36]. Similarly, Amershi et al. presented an overview of interactive model building [4]. Our work differs from these works in two primary ways. First, our technique searches through multiple types of models (i.e., Random Forest models with various hyperparameter settings for classification and ranking tasks). Second, our tool interprets user interaction as feedback on the full hyperparameter space using Bayesian optimization, so that hyperparameter tuning directly changes model behavior in parallel. Stumpf et al. conducted experiments to understand the interaction between users and machine-learning-based systems [65]. They found that user feedback included suggestions for re-weighting features, proposing new features, relational features, and changes to the learning algorithm. They showed that user feedback has the potential to improve ML systems, but that learning algorithms need to be extended to assimilate this feedback [64].
+
+Interactive model steering can also be done via demonstration-based interaction. The core principle in these approaches is that users do not adjust the values of model parameters directly, but instead visually demonstrate partial results from which the models learn the parameters [13, 15, 25-27, 31, 44]. For instance, Brown et al. showed how repositioning points in a scatterplot could be used to demonstrate an appropriate distance function [15], saving the user the hassle of manipulating model hyperparameters directly to reach their goal. Similarly, Kim et al. presented InterAxis [39], which showed how users could drag data objects to the high and low ends of scatterplot axes to help them interpret, define, and change the axes of a linear dimension reduction technique. Using this simple interaction, users define constraints that inform the underlying model how they are clustering the data. Wenskovitch and North used observation-level interaction by having users define clusters in the visualized dataset [76]. By visually interacting with data points, users are able to construct a projection and a clustering algorithm that incorporate their preferences. Prior work has shown benefits from directly manipulating visual glyphs, as opposed to control panels, to interact with visualizations [11, 26, 38, 46, 56, 59].
+
+Active learning (AL) appears similar to the techniques used in Gaggle, yet has a few distinct differences. AL is often used in supervised learning problems (e.g., classification) where adequate annotations are not available in the training data, so the algorithm selectively seeks labels for a set of informative training examples [35, 48, 53, 81]. Standard AL processes assume that an oracle (usually a user) can provide accurate labels or annotations for any queried data sample [19, 61, 70]. From a UI perspective, the work presented in this paper aligns closely with both AL and demonstration-based techniques: Gaggle's interaction design lets users manipulate the visual results of the models and interactively add labels to the training set to incrementally navigate the model space. However, the data items users label are not selected by the system, but by the users themselves during exploration.
+
+§ 2.2 MULTI-MODEL VISUAL ANALYTIC SYSTEMS
+
+Most current visual analytics systems focus on allowing the user to steer and interact with a single model type. However, recent work has explored the capability for a user to concurrently interact with multiple models. These systems implement a multi-model steering technique that facilitates the adjustment of model hyperparameters to incrementally construct models better suited to user goals. For instance, Das et al. showed interactive inspection and steering of multiple regression models [22]. HyperTuner [69] looked at tuning the hyperparameters of multiple machine learning models. Xu et al. enabled user interaction with many models, but instead of each individual model, users interacted with ensemble models through multiple coordinated contextual views [77]. Dingen et al. built RegressionExplorer, which allowed users to select subgroups and attributes (rows and columns) to build regression models; however, their technique does not weight the rows and columns, only including or excluding them [23]. Mühlbacher and Piringer showed a technique to rank variables and pairs of variables to support trade-off analysis, model validation, and comparison for multiple regression models [47]. HyperMoVal [52] addressed model validation of multiple regression models by visualizing model outputs through multiple 2D and 3D sub-projections of the n-dimensional function space.
+
+Table 1: User tasks, learning algorithms, hyperparameters, and parameters in Gaggle.
+
+| Task | Learning Algorithm | Hyperparameters | Parameters |
+| --- | --- | --- | --- |
+| Classification | Random Forest | Criteria, Max Depth, Min Samples | Attribute Entropy, Information Gain |
+| Ranking | Ranking Random Forest | Criteria, Max Depth, Min Samples | Attribute Entropy, Information Gain |
+
+Kwon et al. [42] demonstrated a technique to visually identify and select an appropriate cluster model from multiple clustering algorithms and parameter combinations. Clusterophile 2 [17] enabled users to explore different choices of clustering parameters and reason about clustering instances in relation to data dimensions. Similarly, StarSpire from Bradel et al. [13] showed how semantic interactions [26] can steer multiple text analytic models; while effective, their system is scoped to text analytics and handling text corpora at multiple levels of scale. Further, many of these systems target data scientists, while Gaggle is designed for users who are non-experts in ML. In addition, our work focuses on tabular data and supports interactive navigation of a model space within two classes of models (classification and ranking) by tuning the hyperparameters of each model type.
+
+§ 2.3 HUMAN-CENTERED MACHINE LEARNING
+
+Human-centered machine learning focuses on how to include people in ML processes [4-6, 58]. A related area of study is the modification of algorithms to account for human intent. Sacha et al. showed how visual analytic processes can allow interaction between automated algorithms and visualizations for effective data analysis [58]. They examined criteria for model evaluation in an interactive supervised learning system and found that users evaluate models by conventional metrics, such as accuracy and cost, as well as novel criteria such as unexpectedness. Sun et al. developed Label-and-Learn, which allows users to interactively label data; their goal was to let users assess a classifier's likelihood of success and to analyze the performance benefits of adding expert labels [66]. Many researchers have emphasized the knowledge generation process of users performing labeling tasks [9, 10, 24]. Ren et al. explained debugging multiple classifiers using an interactive tool called Squares [55].
+
+Holzinger et al. discussed how automatic machine learning methods are useful in numerous domains [33]. They note that these systems generally benefit from large static training sets, which ignores frequent use cases where extensive data generation would be prohibitively expensive or infeasible. For smaller datasets or rare events, automatic machine learning suffers from insufficient training samples, a problem they claim can be successfully solved by interactive machine learning leveraging user input [33, 34]. Crouser et al. further formalize this concept of computational models fostering human and machine collaboration [20].
+
+§ 2.4 MODEL SPACE NAVIGATION
+
+We reviewed notable works from the literature that support model space navigation or visualization to better understand the current state of the art. Sedlmair et al. [60] defined a method of varying model parameters to generate a diverse range of model outputs for each combination of parameters. This technique, called visual parameter space analysis, investigates the relationship between the input and the output within the described parameter space. Similarly, Pajer et al. [49] showed a visualization technique for exploring a weight space that ranks plausible solutions in the domain of multi-criteria decision making; however, their technique does not navigate models by adjusting hyperparameters but instead varies the weightings of user-defined criteria. Boukhelifa et al. explored model simulations by reducing the model space and presenting it in a SPLOM and linked views [12]. These works implement explicit parameter space exploration, whereas Gaggle's parameter space exploration is implicit.
+
+§ 2.5 AUTOMATED MODEL SELECTION
+
+Model building requires selecting a model type, finding a suitable library, and then searching through the hyperparameter space for an optimal setting that fits the data. For non-experts, this task can amount to many iterations of trial and error. To avoid this guessing game, non-experts can use automated model selection tools such as Auto-WEKA [41, 68], SigOpt [50], HyperOpt [8, 40], Google Cloud AutoML [45], and auto-sklearn [30]. These tools execute intelligent searches over the model and hyperparameter spaces, providing an optimal model for the given problem type and dataset. However, they all optimize an objective function that takes into account only quantifiable features or attributes, often ignoring user feedback. Instead, our work explores how to incorporate domain expertise into an automated model selection process supported by interactive navigation of the model space.
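One plausible way to fold user feedback into such an objective, sketched below under our own assumptions (this is not the paper's exact formulation), is to score each candidate model by a weighted mix of how many user-assigned labels it reproduces and how many pairwise rank preferences it satisfies:

```python
def feedback_objective(predict, user_labels, rank_constraints, alpha=0.5):
    """Score a candidate model in [0, 1].

    predict(item) is assumed to return (predicted_label, ranking_score);
    alpha trades off label agreement vs. satisfied rank constraints.
    """
    label_hits = sum(predict(i)[0] == c for i, c in user_labels.items())
    label_acc = label_hits / len(user_labels) if user_labels else 1.0
    rank_hits = sum(predict(a)[1] > predict(b)[1] for a, b in rank_constraints)
    rank_acc = rank_hits / len(rank_constraints) if rank_constraints else 1.0
    return alpha * label_acc + (1 - alpha) * rank_acc

# Toy model: fixed (label, score) per item, standing in for a trained model.
toy = {"x": ("A", 0.9), "y": ("B", 0.4), "z": ("A", 0.7)}
score = feedback_objective(lambda i: toy[i], {"x": "A", "y": "B"}, [("x", "z")])
print(score)  # 1.0: both labels agree and the single rank constraint holds
```

An automated selector can then maximize a score like this over the model space, so the chosen model reflects the user's demonstrated preferences rather than a generic accuracy metric alone.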
+
+§ 3 USAGE SCENARIO
+
+Gaggle allows users to assign data points to classes and then partially order data items within the classes to demonstrate classification and ranking. The system responds by constructing a model space and sampling multiple variants of classification and ranking models from it. Gaggle searches various sub-regions of the model space to automatically find an optimal classification and ranking model based on model performance metrics (explained later in the paper). Users can iterate by triggering Gaggle to construct new models; in this process, users provide feedback to the system through various forms of interaction (e.g., dragging rows, assigning new examples to class labels, correcting previous labels). This continues until the user is satisfied that the automatically selected model has correctly learned their subjective knowledge and interpretation of the data (Figure 2). We present a usage scenario to demonstrate the type of problem being solved and the general workflow of the tool.
+
+Problem Space: Imagine Jonathan runs a sports camp for baseball players. He has years of experience in assessing the potential of players. He not only understands which data features are important but also has prior subjective knowledge about the players. Jonathan wants to categorize and rank the players into various categories ("Best Players", "In-form Players", and "Struggling Players") based on their merit.
+
+User-Provided Labeling: Jonathan starts by importing the dataset of baseball players into Gaggle; the data is publicly available from OpenML [73]. The data contains 400 players (represented as rows) and 17 attributes of both categorical and quantitative types. The dataset does not have any ground truth labels. He sees the list of all the players in the Data Viewer (Figure 3-B). He creates the three classes mentioned above and drags players into these bins, or classes, to add labels. Knowing Carl Yastrzemski as a very highly rated player, he places him in the "Best Players" class. Gaggle shows him recommendations of similar players for labeling (Figure 3).
+
+Automated Model Generation: Jonathan clicks the build model button in the Side Bar (see Figure 1-F). Based on Jonathan's interactions so far, Gaggle constructs a model space comprising multiple classification and ranking models. Gaggle then runs its optimizer to navigate this model space and automatically find the best performing model, out of an exhaustive search over 200 random forest models. When the system responds, he finds that player Ernie Banks is misclassified. He places this player in the "In-form Players" class instead of "Struggling Players". He moves Ernie Banks and other similarly misclassified players to the correct class and asks Gaggle to find an optimal model that takes his feedback into account.
+
+Gaggle updates its model space based on the feedback provided by Jonathan, and samples a new classification and ranking model. Gaggle updates the Data Viewer with the optimal model's output. Jonathan reviews the results to find that many of the previously misclassified players are correctly labeled and pins them to ensure they do not change labels in future iterations. Next, he looks at the Attribute Viewer (Figure 1-B) in search of players with high "batting average" and "home runs". He moves players that match his criteria into respective labels (e.g., placing Sam West and Bill Madock in the "In-Form Players" class). After Gaggle responds with a new optimal model, he verifies the results returned by the model in the interacted row visualization (Figure 1-C). He accepts the classification model and moves on to rank the players within each class.
+
+Jonathan specifies examples for the ranking model by dragging players in each class up or down. After showing a set of relative ranking orderings between data instances (green highlights show interacted items), he iterates to check the full data set as ranked by Gaggle. He moves players Norm Cash and Walker Cooper to the top of the "Struggling Players" class, and moves player Hal Chase in the "Best Players" class. Observing that some data items are not ranked as expected, he specifies further ranking examples and triggers Gaggle to construct a new ranking model. Finally, Jonathan finds that Gaggle ranked most of the players in the correct spot. In this scenario, we showed how Gaggle helps a domain expert navigate the model space to classify and rank data items solely based on his prior subjective domain knowledge, following the iterative process shown in Figure 2.
+
+While this use case presented how Gaggle can be used by domain experts with a specific dataset, other datasets can also be used with Gaggle to perform classification and ranking. For example, the drug consumption dataset [29] contains personality measurements (e.g., neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), level of education, age, gender, ethnicity, etc. of 1885 respondents. Using this dataset, a narcotics expert/analyst can classify and rank the respondents into one of the seven class categories (in relation to drug use), namely: "Never Used", "Used over a Decade Ago", "Used in Last Decade", "Used in Last Year", "Used in Last Month", "Used in Last Week", and "Used in Last Day". They can also create new classes and rank the respondents based on the attribute values and their prior knowledge. Similarly, Gaggle can support a loan officer in using the credit card default payment dataset [79] to decide on approval of loan applications by classifying and ranking the applicants.
+
+§ 4 GAGGLE: SYSTEM DESCRIPTION
+
+The overarching goal driving the design of Gaggle is to let people interactively navigate a model space of classification and ranking models in a simple and usable way. More specifically, the design goals of Gaggle are:
+
+Enable interactive navigation of the model space: Gaggle should allow the exploration and navigation of the high-dimensional model space for classifiers and ranking models.
+
+
+Figure 2: The gray box on the top shows the model space from which candidate models are sampled and ranked based on metrics derived from user interactions, ultimately selecting and showing a single model.
+
+Support direct manipulation of model outputs: Model outputs should be shown visually (lists for ranking models, and bins for classifiers). User feedback should directly adjust data item ranking or class membership, not model hyperparameters.
+
+Generalize user feedback across model types: User feedback to navigate the model space should not be isolated to any specific type of model. For instance, providing visual feedback on the classification of data points might also adjust the ranking of data items.
+
+Leverage user interaction as training data: User feedback on data points should serve as training data for model creation. Data items interacted with will serve as the training set, and performance is validated against the remaining data for classification and ranking.
+
+§ 4.1 USER INTERFACE
+
+Data Viewer: The main view of Gaggle is the Data Viewer, which shows the data items within each class (Figure 1-A). Users can add, remove, or rename classes at any point during data exploration and drag data instances to bins to assign labels. Users can re-order instances by dragging them higher or lower within a bin to specify the relative ranking order of items. Gaggle marks these instances with a green highlight (see Figure 1-G). When Gaggle samples models from the model space and finds an optimal model, the Data Viewer updates the class membership and ranking of items to reflect the model's output. We chose to show a single model per iteration to simplify the user interface by removing a model comparison and selection step.
+
+Attribute Viewer: Users can hover over data items to see attribute details (Figure 1-B) on the right. Every quantitative attribute is shown as a glyph on a horizontal line. The position of the glyph on the horizontal line shows the value of the attribute in comparison to all the other data instances. The color encodes the instance's attribute quality in comparison to all other instances (i.e., green, yellow, and red encode high, mid, and low values respectively).
+
+Data Recommendations: When users drag data instances to different bins, Gaggle recommends similar data instances (found using a cosine distance metric), which can also be added (Figures 3 and 1-H). This expedites class assignment during the data exploration process. The similarity is computed from the total distance ${D}_{a}$ over each attribute ${d}_{i}$ between the moved data instance and the other instances in the data. Users can accept or ignore these recommendations.
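+The recommendation step can be sketched as a nearest-neighbor lookup under cosine distance. The function names and feature vectors below are illustrative, not Gaggle's actual implementation:
+
+```python
+import math
+
+def cosine_distance(a, b):
+    # 1 - cosine similarity between two feature vectors
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = math.sqrt(sum(x * x for x in a))
+    norm_b = math.sqrt(sum(x * x for x in b))
+    return 1.0 - dot / (norm_a * norm_b)
+
+def recommend_similar(moved, candidates, k=3):
+    # rank the remaining instances by distance to the moved item
+    # and return the k closest as labeling recommendations
+    ranked = sorted(candidates, key=lambda name: cosine_distance(moved, candidates[name]))
+    return ranked[:k]
+```
+
+Because the distance is computed over the moved instance's features, the recommendations adapt to whichever item the user just labeled.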
+
+Interacted Row Visualization: This view (Figure 1-C) shows the list of all interacted data items. With color encoding, it shows correct label matches (shown in blue) and incorrect label matches (shown in pink). The same encoding holds for ranking (blue when the predicted rank order matches the expected order, pink otherwise). It shows how many constraints were correctly predicted.
+
+
+Figure 3: Gaggle's recommendation dialog box.
+
+User Interactions: Through its interactions, Gaggle lets users give feedback that steers which models are sampled in the next iteration, adjusts model parameters and hyperparameters, and supports exploring the data to gain insight.
+
+ * Assign class labels: Users can reassign classes by dragging data items from one class to another. They can also add or remove classes. These interactions provide more constraints to steer the hyperparameters of the classifier.
+
+ * Reorder items within classes: Users can reorder data items within classes (see Figure 1-G) to change their ranking. This interaction helps users exemplify their subjective order of data instances within classes. This feedback is incorporated as training data for the ranking model.
+
+ * Pin data items: When sure of the class assignment of a data item, the user can pin it to the respective class bin (see Figure 1-I). This ensures that the data item will always be assigned that class in every subsequent iteration.
+
+ * Constrain classifier: When satisfied with the classifier, users can constrain the last best classifier. This allows them to move on to showing ranking examples, letting Gaggle focus on improving the ranking model (Figure 1-C).
+
+§ 5 TECHNIQUE
+
+Models: We define a model as a function $f : \mathcal{X} \mapsto \mathcal{Y}$ , mapping from the input space $\mathcal{X}$ to the prediction space $\mathcal{Y}$ . We are concerned primarily with semi-supervised learning models, in which we are provided with a partially labeled or unlabeled training set ${D}_{\text{ train }} =$ ${D}_{U} \cup {D}_{L}$ , where ${D}_{L}$ is labeled data and ${D}_{U}$ is unlabeled data such that if ${d}_{i} \in {D}_{L}$ , then ${d}_{i} = \left( {{\mathbf{x}}_{\mathbf{i}},{y}_{i}}\right)$ , and if ${d}_{i} \in {D}_{U}$ , then ${d}_{i} = \left( {\mathbf{x}}_{\mathbf{i}}\right)$ , where ${\mathbf{x}}_{\mathbf{i}}$ are features and ${y}_{i}$ is a label. A learning algorithm $A$ maps a training set ${D}_{\text{ train }}$ to a model $f$ by searching through a parameter space. A model is described by its parameters $\theta$ , while a learning algorithm is described by its hyperparameters $\lambda$ . A model parameter is internal to a model, where its value can be estimated from the data, while model hyperparameters are external to the model.
+
+Model Space: Varying the learning algorithm and the hyperparameters creates a diverse set of new models. The space of every possible combination of learning algorithms and hyperparameters forms a high-dimensional model space. Finding an optimal model in this high-dimensional, infinitely large space without any computational guidance or statistical methods is akin to finding a needle in a haystack. Conventionally, ML practitioners/developers navigate the model space using data science principles to test various candidate models. They search for regions (sub-spaces of the model space) that contain optimal models. For instance, one can navigate the model space by randomly sampling new models and testing their performance in terms of accuracy (or other defined metrics) to find a model that best suits the task.
+
+Gaggle constructs a model space by sampling multiple random forest models, each taking a predefined list of hyperparameters (criterion, max depth, and minimum samples to set a node as a leaf) within a set domain range (see Table 1). While Gaggle uses random forest models for the system evaluation, the general optimization method is designed to work with other learning algorithms and hyperparameter combinations as well. For instance, Gaggle's optimizer can sample multiple SVM models using a set of chosen hyperparameters such as $C$ (regularization parameter) and $\gamma$ (kernel coefficient).
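+For illustration, the model space can be viewed as the cross product of hyperparameter domain ranges, with each combination identifying one candidate model. The domain ranges below are made-up placeholders, not the actual values from Table 1:
+
+```python
+from itertools import product
+
+# hypothetical domain ranges; the real ranges come from Table 1
+SPACE = {
+    "criterion": ["gini", "entropy"],
+    "max_depth": list(range(2, 52, 2)),
+    "min_samples_leaf": list(range(1, 16)),
+}
+
+def enumerate_models(space):
+    # every combination of hyperparameter values is one candidate
+    # random forest configuration in the model space
+    keys = list(space)
+    for values in product(*(space[k] for k in keys)):
+        yield dict(zip(keys, values))
+
+candidates = list(enumerate_models(SPACE))
+```
+
+Even with these placeholder ranges the space already holds 750 configurations; adding learning algorithms or widening ranges grows it combinatorially, which is why Gaggle samples the space rather than enumerating it.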
+
+
+Figure 4: The model ranking method uses Bayesian optimization solver to rank candidate models from the model space.
+
+§ 5.1 INTERACTIVE MODEL SPACE NAVIGATION
+
+To facilitate interactive user feedback and navigation of the model space, Gaggle uses a Bayesian optimization technique [51, 62]. This navigation is initiated by randomly sampling models from the model space as shown in Figure 5. Gaggle seeds the optimization technique by providing: a learning algorithm $A$ , a domain range ${D}_{r}$ for each hyperparameter, and the total number of models $n$ to sample for both classification and ranking models. Gaggle uses a Bayesian optimization module that randomly picks a hyperparameter combination $h{p}_{1}$ , $h{p}_{2}$ , and $h{p}_{3}$ . For example, a model ${M}_{1}$ can be sampled by providing "learning algorithm" = "random forest", "criteria type" = gini, "max-depth" = 30, and "min-samples-leaf" = 12. Likewise, the Bayesian optimization module samples models ${M}_{1},{M}_{2},{M}_{3},{M}_{4}\ldots {M}_{n}$ . For each model, it also computes a score ${S}_{i}$ based on custom-defined model performance metrics inferred from user interactions.
+
+The Bayesian optimization module uses a Gaussian process to find a point of expected improvement in the search space (of hyperparameter values) over the current observations. For example, an observation corresponds to a sampled machine learning model, and the metric used to evaluate expected improvement can be its precision score or cross-validation score. Using this technique, the optimization process ensures that consistently better models are sampled, by finding regions of the model space where better performing models are more likely to be found (see Figure 2). Next, the Bayesian optimization module selects the model with the best score ${S}_{i}$ (see Figure 5). Gaggle performs this process for both classification and ranking models, driven by user-defined performance metrics.
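+The overall loop — sample a hyperparameter combination, score the resulting model ${S}_{i}$ , keep the best — can be sketched as below. For brevity this sketch draws candidates uniformly at random rather than via a Gaussian-process surrogate, so it is a simplified stand-in for the Bayesian optimization module, and the function names are invented for illustration:
+
+```python
+import random
+
+def sample_config(space, rng):
+    # draw one hyperparameter combination from its domain ranges
+    return {name: rng.choice(list(domain)) for name, domain in space.items()}
+
+def navigate_model_space(space, score_fn, n_models=200, seed=0):
+    # sample n candidate configurations, score each with the
+    # interaction-derived metric S_i, and keep the best one;
+    # a real BO module would bias sampling toward promising regions
+    rng = random.Random(seed)
+    best_cfg, best_score = None, float("-inf")
+    for _ in range(n_models):
+        cfg = sample_config(space, rng)
+        score = score_fn(cfg)
+        if score > best_score:
+            best_cfg, best_score = cfg, score
+    return best_cfg, best_score
+```
+
+Substituting the uniform sampler with an expected-improvement acquisition over a Gaussian process yields the behavior described above, where later samples concentrate in high-scoring regions.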
+
+Classification Model Technique: Gaggle follows an unconventional classifier training pipeline. As Gaggle is designed to help users explore data using ML, and not to make predictions on unseen data items, the conventional train/test split does not apply. Gaggle begins with an unlabeled dataset. As the user interacts with an input dataset $D$ of $n$ items, labels are added; e.g., if the user interacts with $e$ data items, they become part of the training set for the classification model. The remaining $n - e$ instances are used as a test set to be assigned labels by the trained model. If $e$ is lower than a threshold value $t$ , then Gaggle automatically finds $s$ data instances similar to the interacted items and places them in the training set along with the interacted data items (each of the $s$ items gets the label of its most similar labeled data item in $e$ ). The similarity is measured by the cosine distance over the features of the interacted samples. This ensures that there are enough training samples to train the classifier effectively.
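+This split-and-augment step can be sketched as follows; the data layout, threshold, and $s$ values are illustrative assumptions, not Gaggle's internals:
+
+```python
+import math
+
+def cosine_distance(a, b):
+    dot = sum(x * y for x, y in zip(a, b))
+    return 1.0 - dot / (math.sqrt(sum(x * x for x in a)) *
+                        math.sqrt(sum(x * x for x in b)))
+
+def build_training_set(data, labels, threshold=4, s=2):
+    # interacted rows (keys of `labels`) form the training set; if there
+    # are fewer than `threshold`, add the s unlabeled rows closest to a
+    # labeled row, each inheriting the label of its nearest labeled row
+    train = dict(labels)
+    if len(train) < threshold:
+        scored = []
+        for name in data:
+            if name in labels:
+                continue
+            nearest = min(labels, key=lambda m: cosine_distance(data[name], data[m]))
+            scored.append((cosine_distance(data[name], data[nearest]), name, labels[nearest]))
+        for _, name, label in sorted(scored)[:s]:
+            train[name] = label
+    test = [name for name in data if name not in train]
+    return train, test
+```
+
+Every subsequent user interaction adds keys to `labels`, so the training set grows and the augmentation step is needed less often.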
+
+As users interact with more data instances, the size of the training set grows and the test set shrinks, helping them build a more robust classifier. For each classifier, Gaggle determines the class probabilities ${P}_{ij}$ , representing the probability of data item $i$ being classified into class $j$ . The class probabilities are used to augment the ranking computation (explained below), as they represent the confidence the model has that a data item is a member of a given class. Gaggle's interactive labeling approach closely resembles active learning (AL) [78], where systems actively suggest data items users should label. In contrast, Gaggle allows users to freely label any data item in $D$ to construct a classifier. Furthermore, our technique incorporates user feedback to both classify and rank data items.
+
+Ranking Model Technique: Gaggle's approach to interactive navigation of the model space for the ranking task is inspired by [37, 74], which allow users to subjectively rank multi-attribute data instances. However, unlike them, Gaggle constructs the model space using a random forest model (an approach similar to [82]) to classify between pairs of data instances ${R}_{i}$ and ${R}_{j}$ . While we tested both of these approaches, we adhered to random forest models owing to their better performance on various datasets. Using this technique, a model predicts whether ${R}_{i}$ should be placed above or below ${R}_{j}$ . It follows the same strategy between all the interacted data samples and the rest of the data set. Further, Gaggle augments this ranking technique with a feature selection method based on the interacted rows. For example, assume a user moves ${R}_{i}$ from rank ${B}_{i}$ to ${B}_{j}$ where ${B}_{i} > {B}_{j}$ (the row is given a higher rank). Our feature selection technique checks all the quantitative attributes of ${R}_{i}$ and retrieves $m = 3$ (the value of $m$ is learned by heuristics and can be adjusted) attributes $Q = {Q}_{1},{Q}_{2}$ , and ${Q}_{3}$ which best represent why ${R}_{i}$ should be ranked higher than ${R}_{j}$ . The attribute set $Q$ contains those attributes in which ${R}_{i}$ is better than ${R}_{j}$ . If ${B}_{i} < {B}_{j}$ , Gaggle retrieves features that support the lower rank, following the same protocol.
+
+This technique performs the same operations for all the interacted rows and finally retrieves a set of features ( ${F}_{s}$ , the common features across the individually interacted rows) that defines the user's intended ranked order. If a feature satisfies one interaction but fails on another, it is left out; only the features common across interacted items get selected. If the user specifies incoherent data instances that lead to an empty or very small ${F}_{s}$ , Gaggle uses scikit-learn's SelectKBest feature selection technique to fill ${F}_{s}$ . However, this may produce models which do not adhere to the shown user interactions.
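+The intersection logic can be sketched as below, with rows represented as hypothetical attribute dictionaries; the SelectKBest fallback is omitted:
+
+```python
+def supporting_features(promoted, demoted, attributes):
+    # quantitative attributes in which the promoted row beats the demoted one
+    return {a for a in attributes if promoted[a] > demoted[a]}
+
+def select_features(interactions, attributes, m=3):
+    # interactions: (promoted_row, demoted_row) pairs from user drags;
+    # keep only features that explain every interaction, then cap at m
+    common = None
+    for promoted, demoted in interactions:
+        feats = supporting_features(promoted, demoted, attributes)
+        common = feats if common is None else common & feats
+    return sorted(common or [])[:m]
+```
+
+A feature that supports one drag but contradicts another drops out of the intersection, matching the rule described above.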
+
+The set of selected features ${F}_{s}$ is then used to build the random forest model for the ranking task, which computes a ranking score ${E}_{ij}$ ( $i$ th instance of the $j$ th class) for each data item in $D$ . Next, using the class probabilities ${P}_{ij}$ and the ranking scores ${E}_{ij}$ , Gaggle ranks the data instances within each class. A final ranking score ${G}_{ij} = {E}_{ij} * {W}_{r} + {P}_{ij} * \left( {1 - {W}_{r}}\right)$ is computed by combining the ranking score ${E}_{ij}$ of each data item in $D$ and its class probability ${P}_{ij}$ retrieved from the classifier, where ${W}_{r}$ is the weight of the rank score and $1 - {W}_{r}$ is the weight of the classification probability (see Figure 4). The weights are set based on model accuracy on various datasets. Finally, the dataset is sorted by ${G}_{ij}$ . While the described technique uses random forest models, in practice we have tested it with other ML models such as SVM. Furthermore, the weights described here are hyperparameters that need to be tuned based on the chosen model and the dataset.
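+Combining the two scores amounts to a single weighted sum per item; the weight value below is an arbitrary placeholder, since ${W}_{r}$ is tuned per model and dataset:
+
+```python
+def final_ranking(rank_scores, class_probs, w_r=0.6):
+    # G_ij = E_ij * W_r + P_ij * (1 - W_r); higher G ranks higher
+    g = {item: rank_scores[item] * w_r + class_probs[item] * (1.0 - w_r)
+         for item in rank_scores}
+    return sorted(g, key=g.get, reverse=True)
+```
+
+Blending in the class probability means an item the classifier is confident about can outrank one with a slightly higher raw ranking score.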
+
+§ 5.2 MODEL SELECTION
+
+Gaggle selects an optimal model from the model space based on the following metrics which describe each model's performance (see Figure 4). These are fed to the Bayesian optimization module to sample better models:
+
+
+Figure 5: Model space navigation approach using Bayesian optimization to find the best performing model based on user-defined metrics.
+
+Classification Metrics: Metrics used to evaluate the classifiers include the percentage of wrongly labeled interacted data instances ${C}_{u}$ and the cross-validation score from 10-fold evaluation ${C}_{v}$ (both range between $0 - 1$ ). Other metrics, such as precision or F1-score, can be specified based on the dataset and the user's request. The final metric is the weighted sum of these components, computed as ${C}_{u} * {W}_{u} + {C}_{v} * {W}_{v}$ , where ${W}_{u}$ and ${W}_{v}$ are the respective weights of the two classification metric components. Different weight values were tested during implementation and testing; we chose the set of weights that led to the best gain in model accuracy.
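+The combined classification metric is one weighted sum; the weights below are placeholders, and the sign convention (whether ${C}_{u}$ enters as an error term or via a negative weight) is left to the tuning described above:
+
+```python
+def classification_metric(c_u, c_v, w_u=0.5, w_v=0.5):
+    # C_u: fraction of interacted rows the model mislabels (0-1)
+    # C_v: mean 10-fold cross-validation score (0-1)
+    # weights are illustrative; the paper tunes them empirically
+    return c_u * w_u + c_v * w_v
+```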
+
+Ranking Metrics: To evaluate models for the ranking task, Gaggle computes three ranking metrics based on the absolute distance between a data instance's position before and after a given model ${M}_{i}$ is applied to the data. Assume a row $r$ was ranked $q$ when the user interacted with the data. After applying model ${M}_{i}$ to the data, the row $r$ is at position $p$ ; the absolute distance is then ${d}_{r} = {abs}\left( {p - q}\right)$ . The first ranking metric computes the absolute distances only between the interacted rows. It is defined as ${Z}_{u} = \left( {\mathop{\sum }\limits_{{r \in I}}{d}_{r}}\right) /l$ , where row $r$ is in the set $I$ of all $l$ interacted rows. The second metric, ${Z}_{v}$ , computes the absolute distance between the interacted rows $I$ and the immediate $h$ rows above and below each interacted row. It is defined as ${Z}_{v} = \left( {\mathop{\sum }\limits_{{r \in {I}_{l}}}\left( {\mathop{\sum }\limits_{{t \in {H}_{h}}}{d}_{tr}}\right) /h}\right) /l$ , where row $r$ is in the set $I$ of all $l$ interacted rows, and $H$ is the set of $h$ rows above and below each interacted row. This metric captures whether a ranked data item is placed in the same neighborhood of data items as intended by the user. In Gaggle, $h$ defaults to 3 (but can be adjusted). The third metric, ${Z}_{w}$ , computes the absolute distance between all the instances of the data before and after a model is applied, defined as ${Z}_{w} = \left( {\mathop{\sum }\limits_{{r \in {D}_{n}}}{d}_{r}}\right) /n$ , where row $r$ is in the set ${D}_{n}$ of all $n$ rows. A lower distance represents a better model fit.
+
+The final ranking metric is computed as the weighted sum of these metrics, ${Z}_{\text{ total }} = {Z}_{u} * {W}_{u} + {Z}_{v} * {W}_{v} + {Z}_{w} * {W}_{w}$ , where ${W}_{u},{W}_{v},{W}_{w}$ are the weights of the three ranking metrics. Weights were tested during implementation and chosen as the set which gave the best model accuracy. While ${Z}_{u}$ captures user-defined ranking interactions in the current iteration, ${Z}_{v}$ and ${Z}_{w}$ both ensure that the user's progress (over multiple iterations) is preserved in ranking the entire dataset. Furthermore, we used these metrics instead of other ranking metrics such as normalized discounted cumulative gain (NDCG) [75], as the latter relies on document relevance, which in this context seemed less useful for capturing user preferences. Also, NDCG is not derived from a ranking function but instead relies on document ranks [71]. Another metric, Bayesian personalized ranking (BPR) by Rendle et al. [57], allows ranking a recommended list of items based on users' implicit behavior. However, unlike the use case supported by BPR, our work specifically allows users to rank the data subjectively. Furthermore, unlike BPR, our metric also takes into account negative examples (i.e., when a data item is ranked lower than the rest).
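+The three distance metrics and their weighted total can be sketched as below, with rankings represented as hypothetical row-to-position maps. The reading of ${d}_{tr}$ in ${Z}_{v}$ (displacement of each interacted row's pre-model neighbors) is our interpretation, and the weights are placeholders:
+
+```python
+def z_u(before, after, interacted):
+    # mean absolute displacement of the interacted rows
+    return sum(abs(after[r] - before[r]) for r in interacted) / len(interacted)
+
+def z_v(before, after, interacted, h=3):
+    # mean displacement of the h rows ranked just above and below
+    # each interacted row (its intended neighborhood) before the model ran
+    order = sorted(before, key=before.get)
+    total = 0.0
+    for r in interacted:
+        i = order.index(r)
+        neighbors = order[max(0, i - h):i] + order[i + 1:i + 1 + h]
+        total += sum(abs(after[t] - before[t]) for t in neighbors) / len(neighbors)
+    return total / len(interacted)
+
+def z_w(before, after):
+    # mean absolute displacement over the whole dataset
+    return sum(abs(after[r] - before[r]) for r in before) / len(before)
+
+def z_total(zu, zv, zw, weights=(0.5, 0.3, 0.2)):
+    # weighted sum of the three metrics; lower means a better fit
+    return zu * weights[0] + zv * weights[1] + zw * weights[2]
+```
+
+Since all three terms are displacements, minimizing the total keeps the interacted rows, their neighborhoods, and the overall ordering stable across iterations.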
+
+§ 6 USER STUDY
+
+We conducted a user study to evaluate Gaggle's automatic model space navigation technique in support of classification and ranking tasks. Our goal was to collect user feedback on Gaggle's system features, design, and workflow. Further, by collecting observational data, we wanted to know if our technique helps users find an optimal model satisfying their goal. We designed a qualitative controlled lab study in which participants used Gaggle to perform a set of predefined tasks. At the end, they gave feedback on the system design, usability, and workflow.
+
+§ 6.1 PARTICIPANTS
+
+We recruited 22 graduate and undergraduate students (14 male). The inclusion criteria were that participants should be non-experts in ML and have adequate knowledge of movies and cities (datasets used for the study). None of the participants had used Gaggle prior to the study. We compensated the participants with a \$10 gift card. The study was conducted in a lab environment using a laptop with a 17-inch display and a mouse. The full experiment lasted 60-70 minutes.
+
+§ 6.2 STUDY DESIGN
+
+Participants were asked to complete 4 tasks: multi-class classification of items (3 classes), ranking the classified data items, binary classification of items, and ranking the classified data items. Participants performed these 4 tasks on 2 datasets, Movies [3] and Cities [2]. To reduce learning and ordering effects, the order of the datasets and tasks was randomized. In total, each participant performed 8 tasks, 4 per dataset. We began with a practice session to teach users about Gaggle. During this session, participants performed 4 tasks, which took 15 minutes and included multi-class classification and ranking, and binary classification and ranking on the Cars dataset [1]. We encouraged participants to ask as many questions as they wanted to clarify system usability or interaction issues. We proceeded to the experimental sessions only when participants were confident using Gaggle.
+
+Participants were asked to build a multi-class classifier first. This was followed by a binary classification and ranking task on the same dataset. They then repeated the same set of tasks on the other dataset. The Movies dataset had 5000 items with 11 attributes, while the Cities dataset had 140 items with 45 attributes. We asked participants to create specific classes for each dataset. The multi-class labels were sci-fi, horror/thriller, and misc for the Movies dataset, and fun-cities, work-cities, and misc for the Cities dataset. For the binary classification task, the given labels were popular and unpopular (Movies dataset), and western and non-western (Cities dataset).
+
+§ 6.3 DATA COLLECTION AND ANALYSIS
+
+We collected subjective feedback and observational data throughout the study. We encouraged participants to think aloud while they interacted with Gaggle. During the experiment sessions, we observed the participants silently and unobtrusively so as not to interrupt their flow, mitigating Hawthorne and Rosenthal effects. We audio recorded each session and video recorded every participant's screen. We collected qualitative feedback through a semi-structured interview comprising open-ended questions at the end of the study. We asked questions such as What were you thinking while using Gaggle to classify data items? and What was your experience working with Gaggle? Furthermore, after each trial per dataset, we asked participants to complete a questionnaire containing Likert-scale questions. For example, we asked: (1) On a scale of 1 to 5, how successfully was the system learning based on the interactions provided? (1 is randomly, 5 is very consistently), (2) On a scale of 1 to 5, how satisfied are you with the classification model output? (1 is not satisfied, 5 is very satisfied), (3) On a scale of 1 to 5, how satisfied are you with the ranking model output? (1 is not satisfied, 5 is very satisfied). Here, satisfaction means how well the underlying model adhered to the user's demonstrated interactions. Please refer to the supplemental material ( ${}^{1}$ ) for the study questionnaire.
+
+Figure 6: User preferences (averaged over datasets) for the four tasks.
+
+§ 6.4 USER PREFERENCES
+
+We collected user preference ratings for all four tasks (see Figure 6). The scores ranged from 1 to 5 (1 meaning least preferred, 5 meaning highly preferred). The average rating of Gaggle for the multi-class classification with ranking task was 3.97. The average rating for the binary classification with ranking task was 4.17. Though users approved of Gaggle's simplicity in allowing them to classify and rank data samples, they seemed to prefer Gaggle for the binary classification and ranking task owing to its higher accuracy and more consistent matching of users' interpretation of the data.
+
+§ 6.5 MODEL SWITCHING BEHAVIOR
+
+For all participants, we collected log data to track how models were selected as users interactively navigated the model space. We sought to understand how model hyperparameters switched during usage. For participants using the Movies dataset (multi-class classification task), the max-depth hyperparameter changed values (ranging from 3 to 18). Similarly, for the Cities dataset (multi-class classification task), the criteria hyperparameter switched between entropy and gini. The min-samples hyperparameter varied within the range of 5 to 36 for both datasets. For the binary classification task, max-depth ranged from 4 to 9 for both datasets. We also noticed the criteria hyperparameter switching from gini to entropy for both datasets in the binary classification task. On average, the hyperparameters switched $M = {9.34}\left\lbrack {{7.49},{11.19}}\right\rbrack$ times for the multi-class classification and ranking task, while the average change was $M = {5.41}\left\lbrack {{4.89},{5.93}}\right\rbrack$ for the binary classification and ranking task. These results indicate that the interactive model space navigation technique found new models as participants interacted with Gaggle.
+
+§ 6.6 QUALITATIVE FEEDBACK
+
+Drag and drop interaction: All the participants liked the drag and drop interaction for demonstrating examples to the system. "I like the drag items feature, it feels very natural to move data items around showing the system quickly what I want." (P8). However, with a long list of items in one class, it can become difficult to move single items. P18 suggested, "I would prefer to drag-drop a bunch of data items in a group." In the future, we will consider adding this functionality.
+
+Ease of system use: Most participants found the system easy to use. P12 said "The process is very fluid and interactive. It is simple and easy to learn quickly." P12 added "While the topic of classification and ranking models is new to me, I find the workflow and the interaction technique very easy to follow. I can relate to the use case and see how it [Gaggle] can help me explore data in various scenarios."
+
+Recommended items: Recommending data while dragging items into various labels helped users find correct data items to label. P12 said "I liked the recommendation feature, which most of the time was accurate to my expectation. However, I would expect something like that for ranking also." P2 added "I found many examples from the recommendation panel. I felt it was intelligent to adapt to my already shown examples."
+
+${}^{1}$ Data: https://gtvalab.github.io/projects/gaggle.html
+
+User-defined Constraints: The interacted row visualization helped users understand the constraints they placed on the classification and ranking models. P14 said "This view shows me clearly what constraints are met and what did not. I can keep track of the number of blue encodings to know how many are correctly predicted". Even though the green highlights in the Data Viewer also mark the interacted data items, the Interacted Row View shows a list of all correct/incorrect matches in terms of classification and ranking.
+
+Labeling Strategy: A few participants changed their strategy for labeling items as they interacted with Gaggle. They expected this might confuse the system. However, to their surprise, Gaggle adapted to the interactions and still satisfied most of the user-defined class definitions. P17 said "In the movies data set, I was classifying sci-fi, and thriller movies differently at first, but later I changed based on recent movies that I saw. I was surprised to see Gaggle still got almost all the expected labels right for non-interacted movies."
+
+§ 7 DISCUSSION AND LIMITATIONS
+
+Large Model Search Space: Searching for models by combining different learning algorithms and hyperparameters leads to an extremely large search space. As a result, a small set of constraints on the search process does not sufficiently reduce the space, leading to a large number of under-constrained and ill-defined solutions. How many interactions, then, are optimal for a given model space? In this work, we approached this challenge by using Bayesian optimization to rank models. However, larger search spaces may pose scalability issues, while too many user constraints may "over-constrain" models, leading to poor results.
+
+Scalability: The current interaction design is intended to support small to moderate dataset sizes. In the user study, we limited the dataset size to understand how users interact with the system and provide feedback to classification and ranking models. However, the current design is not meant to handle large datasets, e.g., tens of thousands of data items. In the future, we would like to address this concern by using Auto-ML based cloud services coupled with progressive visual analytics [63].
+
+Abrupt Model and Result Changes: As users interact to navigate the model space, each iteration of the process may find substantially different models. For example, users might find a random forest model with "criterion = gini, tree depth = 50" in one iteration and "criterion = entropy, tree depth = 2" in the next. This may entail significant changes in the results of these models. While these abrupt changes may not impact users greatly if they are unaware of the model parameterizations, showing users what changed in the output may ease these transition states.
+
+Data Exploration using ML: Supporting data exploration while creating models interactively is challenging. Users may change their task definition slightly or learn new information about their data. In these cases, user feedback may be better modeled by a different model hyperparameterization than earlier in their task. Updating the class definition or showing better examples impacts the underlying decision boundary, which the classifier needs to map correctly. For example, in earlier iterations a linear decision boundary might characterize the data; however, when new examples for classes are provided, the decision boundary might be better approximated by a polynomial or radial surface (see Figure 7). In situations like this, Gaggle helps users by finding an optimal model with new hyperparameter settings, without requiring them to change the settings manually.
+
+ML Practices and Model Overfitting: Conventionally, classifiers are trained on a training set and then validated on a test set. Our technique utilizes the full dataset as input data, the interacted data items as the training set, and the rest as the test set. As users iteratively construct classifiers, the training set grows in size and the test set shrinks. We used
+
+
+Figure 7: As users gain more knowledge through exploration, they may change their task, which might require different model hyperparameters. (Blue and orange points represent positive and negative classes; white points represent data items not interacted with.)
+
+this approach to account for user-specified preferences through iterative interactions. Nevertheless, our process follows the conventional ML principle that classifier training is done independently of the test data: the model makes predictions on the test data only after training, and Gaggle enables users to inspect the results. However, a challenge systems like Gaggle face is model overfitting [21]. An overly aggressive search through the model space might lead to a model which best serves the user's added constraints but underperforms on an unseen dataset. We believe that in use cases where ML is utilized to organize or explore the data, the problem of overfitting is less problematic, considering the constructed models are not meant to be used for unseen data.
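The growing-training-set, shrinking-test-set process described above can be sketched as follows; the toy data and round sizes are our own and not taken from the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the full dataset loaded into the tool.
X, y = make_classification(n_samples=200, n_features=8, random_state=2)
order = np.random.RandomState(2).permutation(len(X))

# Each round, the user labels more items: the training set (interacted
# items) grows and the held-out test set (everything else) shrinks.
test_scores = []
for n_labeled in (10, 30, 60):
    train, test = order[:n_labeled], order[n_labeled:]
    clf = RandomForestClassifier(n_estimators=20, random_state=2)
    clf.fit(X[train], y[train])                      # train only on interacted items
    test_scores.append(clf.score(X[test], y[test]))  # predict the non-interacted rest
```

The key property is that the classifier never sees the non-interacted items during training; it only predicts on them afterward, which is what Gaggle surfaces for inspection.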
+
+Active Learning and Gaggle: Gaggle's approach to interactive labeling is closely related to active learning (AL) strategies in ML, in which systems ask users to label the data instances on which the model is least confident. However, Gaggle allows users the freedom to choose which items to label. AL, on the other hand, relies on existing labels in the training data and only asks users to confirm labels for certain data instances when needed (e.g., when the classifier is less confident in its prediction for a data instance). While the approach incorporated in Gaggle gives users more agency over the process, it may be less suitable for larger datasets, where AL techniques could present users with the items that most need feedback.
+
+Extending Model Space Navigation: The interactive model space navigation technique that translates user interactions into classification and ranking metrics can be extended to other ML models. For example, beyond a random forest model, we have tested Gaggle with an SVM model for the classification task and the RankSVM technique for the ranking task. Likewise, Gaggle could be used with a boosting model for the classification task and, for the ranking task, a weighted combination of the rankings produced by each component model of the boosted ensemble.
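This interchangeability works because the navigation loop only needs a common fit/predict interface. A toy scikit-learn sketch (not Gaggle's actual implementation) of swapping estimator families behind that interface:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.svm import SVC

# Toy data standing in for the items in the tool.
X, y = make_classification(n_samples=100, n_features=5, random_state=3)

# Any estimator exposing fit/predict can slot into the same
# interaction-to-model loop; the navigation logic stays unchanged.
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=10, random_state=3),
    "svm": SVC(random_state=3),
    "boosting": GradientBoostingClassifier(random_state=3),
}
predictions = {name: est.fit(X, y).predict(X) for name, est in candidates.items()}
```

Each family brings its own hyperparameter subspace, so extending the model space in this way also enlarges the search problem discussed earlier.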
+
+## 8 CONCLUSION
+
+In this paper, we present an interactive model space navigation approach for helping people perform classification and ranking tasks. Current VA techniques rely on a pre-selected model for a designated task or problem. However, these systems may fail if the selected model does not suit the task or the user's goals. As a solution, our technique helps users find a model suited to their goals by interactively navigating the high-dimensional model space. Using this approach, we prototyped Gaggle, a VA system to facilitate classification and ranking of data items. Further, with a qualitative user study, we collected and analyzed user feedback to understand the usability and effectiveness of Gaggle. The study results show that users agree that Gaggle is easy to use, intuitive, and helps them interactively navigate the model space to find an optimal classification and ranking model.
+
+## 9 ACKNOWLEDGEMENTS
+
+Support for the research is partially provided by DARPA FA8750-17-2-0107. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6b14fa2a4ec6ca7ed0b5f4c77d9f6192526b8b42
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,335 @@
+# AuthAR: Concurrent Authoring of Tutorials for AR Assembly Guidance
+
+Matt Whitlock*
+
+Autodesk Research, Toronto
+
+University of Colorado Boulder
+
+George Fitzmaurice†
+
+Autodesk Research, Toronto
+
+Tovi Grossman ‡
+
+Autodesk Research, Toronto
+
+University of Toronto
+
+Justin Matejka §
+
+Autodesk Research, Toronto
+
+
+
+Figure 1: Overview of the AuthAR system setup, highlighting the key hardware components.
+
+## Abstract
+
+Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of situated instructions. These instructions can be in the form of videos, pictures, text or guiding animations; which of these media is most helpful depends strongly on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.
+
+Keywords: Augmented reality, content authoring, assembly tutorials, gaze input, voice input
+
+Index Terms: H.5.m [Information Interfaces and Presentation]: Miscellaneous
+
+## 1 INTRODUCTION
+
+Physical task guidance can be delivered via Augmented Reality (AR) since assembly often requires both hands and continuous attention to the task. Additionally, assembly tutorials have instructions directly associated with physical objects, so AR can reduce the need for excessive context switching between the instructions and the physical structure by projecting those instructions into the environment. These benefits have been demonstrated in fields such as facilities management [20], maintenance [44], and Internet of Things (IoT) device management [14, 21]. Additionally, prior work in AR assembly guidance has shown that these benefits can translate to carrying out assembly tasks [2, 18, 21, 38].
+
+While significant previous work has looked at the benefits of following tutorials in AR, much less has looked at how to author these tutorials. Beyond the technical requirements of an authoring interface, an ideal tutorial may look different depending on the end user of the tutorial. This problem is exacerbated in AR as there are many different modalities in which tutorial content can be presented. While one person may appreciate guiding animations in AR, another may prefer static text and images, and yet another may prefer video tutorials from one or multiple perspectives.
+
+With AuthAR, we present a system for building tutorials for assembly tasks that can accommodate the needs of these different types of end users. AuthAR generates video and pictorial representations semi-automatically while the tutorial author completes the task. Furthermore, AuthAR allows tutorial authors to create and refine a tutorial in situ, integrating content authoring into the process of completing the task. This approach adds little additional overhead and reduces the need for post-processing of the tutorial.
+
+This paper presents the AuthAR system for generating mixed media assembly tutorials. Informed by prior work on content/tutorial authoring, and tutorial playback and walkthrough, we build the system with an eye toward non-obtrusive content authoring and generation of important components for tutorial playback, summarized in a set of design guidelines. We validate the system's ability to create a tutorial by stepping through the process of creating a tutorial to build a laptop stand, automatically generating an XML representation of the tutorial. Initial observations suggest the tool will be valuable, and point to ways the system could be extended and refined in future iterations.
+
+## 2 RELATED WORK
+
+AuthAR builds on prior research in the areas of AR tutorials and content authoring, as well as principles of mixed media tutorial design.
+
+---
+
+*e-mail: matthew.whitlock@colorado.edu
+
+†e-mail: george.fitzmaurice@autodesk.com
+
+‡e-mail: tovi@dgp.toronto.edu
+
+§e-mail: justin.matejka@autodesk.com
+
+---
+
+### 2.1 AR Tutorials
+
+AR is often applied to assembly tasks for its ability to project instructions into the environment such that they are spatially relevant [19, 22]. Syberfeldt et al. demonstrate assembly of a 3D puzzle and provide evidence that AR supports faster assembly than traditional methods [39]. Similarly, Henderson et al. demonstrate the use of projected guidance for engine assembly [18]. Prior work on AR guidance design has shown that abstract representations (3D text, arrows) can be more effective for complex tasks than a more concrete representation (virtual models of the pieces and fasteners), which is sensible for simpler tasks [34]. In that work, the authors found information-rich 2D representations to be more effective than either of the AR representations in some cases.
+
+One theory is that AR is only justified when the task is sufficiently difficult, such that the time to process the information is insignificant compared to the time to perform the task [33]. So even for physical tasks in which instructions' spatial relevance could be increased by projecting into the environment, tutorials should provide users the ability to view the step(s) in a more familiar picture/video format when needed or preferred. The need for these mixed media tutorials is apparent; however, little work has explored the authoring of such tutorials for physical tasks.
+
+### 2.2 AR Content Authoring
+
+Outside of the realm of tutorial authoring, numerous systems have explored content creation in augmented reality to abstract low-level programming from the creator, lowering the threshold for participation with AR. Many AR content authoring systems give users a collection of 3D models to place and manipulate in an environment or to overlay on a video stream, allowing users to create AR content without the programming expertise generally required to build such scenes [4, 11, 12, 24, 26, 30]. Other AR content creation systems target specific end users for participation by domain experts in areas such as museum exhibition curation [36], tour guidance [3, 27], and assembly/maintenance [33, 41]. Roberto et al. provide a survey of existing AR content creation tools, classifying tools by their standalone nature and platform dependence [35].
+
+Of particular interest to us are authoring tools that enable creation of training experiences. Built on Amire's component-based framework [12], Zauner et al. present a tool to enable authoring of assembly task guides in augmented reality [43]. With this tool, the author puts visually tracked pieces together hierarchically to create an assembly workflow assistant in AR. Alternatively, the expert can collaborate remotely, rather than creating a training experience ahead of time [41]. In this scenario, the expert can annotate the live video feed provided by the trainee's AR headset for varying levels of guidance on-the-fly. These systems require explicit enumeration of every component to be added to the created scene, whereas our system generates content semi-automatically (segmenting video, recording changes to transforms, and detecting use of a tool) where possible. Moreover, our system only requires this manual input for augmentation and refinement of the tutorial, while the bulk of authoring is done in situ.
+
+### 2.3 Mixed Media Tutorials
+
+Within the domain of software tutorials, Chi et al. provide design guidelines for mixed media tutorial authoring with their MixT system for software tutorials [7]. They list scannable steps, legible videos, visualized mouse movement, and giving the user control over which format to view as important components of a mixed media tutorial. Carter et al. echo this sentiment with their ShowHow system for building a tutorial of videos and pictures taken from an HMD, noting that mixing different media is important in lieu of relying on a single media type [6]. Our work builds upon these concepts of mixed media tutorials but applies them to AR authoring of physical task tutorials. The rest of this subsection discusses the use of three popular media used for both software and physical task tutorials: videos, images, and interactive guidance.
+
+#### 2.3.1 Video
+
+Prior work on video-based tutorials has applied different strategies to the challenges of video segmentation and multiple perspectives. DemoCut allows for semi-automatic video segmentation such that these demonstration videos are appropriately concise without requiring significant post-processing [8]. Chronicle allows for video tutorials based on the working history of a file [16]. As the file changes, the Chronicle system generates a video tutorial of how the file changed. For physical tasks, Nakae et al. propose use of multiple video perspectives (1st person, 3rd person, overhead) to record fabrication demonstrations and semi-automatic generation of these videos [29].
+
+#### 2.3.2 Images
+
+Prior work also uses captioned and augmented images for physical task and software tutorials. Image-based tutorials can be generated automatically from demonstration, as is done with TutorialPlan, a system that builds tutorials to help novices learn AutoCAD [25]. Images can also represent groups of significant manipulations (such as multiple changes to the saturation parameter), as Grabler et al. demonstrated for GIMP tutorials [15]. For physical tasks, prior work explored the use of AR to retarget 2D technical documentation onto the object itself, providing spatially relevant augmentations [28].
+
+#### 2.3.3 Interactive Overlays
+
+Interactive tutorials guiding users where to click, place objects, and even move their hands have become increasingly popular. For example, EverTutor automatically generates tutorials for smartphone tasks such as setting a repeating alarm or changing font size based on touch events [40]. They found improved performance and a preference for these tutorials over text, video, and image tutorials. In the realm of physical assembly tutorials, use of visual and/or depth tracking allows for automatic generation of tutorials based on changes to location and rotation [5, 10, 13, 39]. Further, interactive tutorial authoring can include tracking of hand position [32], and projecting green and red onto end users' hands to indicate correct and incorrect positioning respectively [31]. DuploTrack allows users to author a tutorial for creating Duplo block models using depth sensing to infer positions and rotations automatically, and projective guidance is available to end users of the tutorial [17].
+
+## 3 DESIGN GUIDELINES
+
+Our design of the AuthAR system was grounded by our exploration of the assembly task design space, and a study of related research and investigation into the difficulties associated with the process of generating tutorials (both for "traditional" media, as well as AR). Below we describe the design guidelines we followed when making decisions about the implementation of our system.
+
+D1: Non-Intrusive/Hands-Free. It is important that the author and assembler be able to perform the assembly task without being burdened by the tutorial apparatus or interface. Though many AR/VR interfaces are either mediated by a handheld device or require use of freehand gestures for input, assembly tasks often require the use of both hands in parallel. For this reason, we prioritize hands-free interaction with the system such that users can always keep their hands free for assembly.
+
+D2: Multiple Representations. Prior studies [6, 40] have shown that different representations (text, static pictures, video, animations, etc.) can all be valuable for people following along with a tutorial. Our system should allow authors to document their tutorial using multiple media types to best capture the necessary information.
+
+D3: Adaptive Effort. Manual content creation in AR allows for high expressivity but is time-consuming and can add complexity. Automatic content creation tools are easier to use but have the side effect of limiting the author's creative control. Our tool should let authors move between automatic and manual creation modes to get the benefits of both. In the most "automatic" case, an author should be able to generate a tutorial by doing little more than simply completing the assembly task as they would normally.
+
+
+
+Figure 2: Design Space for AR Assembly tutorials. The design decisions we made are highlighted in black.
+
+D4: Real Time and In Situ Authoring. With the ability to generate, refine and augment tutorial step representations, our tool should allow authors to create the tutorial while performing the task and make necessary tweaks directly after each step. This form of contextual and in situ editing allows authors to add callouts or make changes while they are easy to remember and reduces the need for post-processing of tutorials at a desktop computer.
+
+## 4 DESIGN SPACE OF AR ASSEMBLY TASK TUTORIALS
+
+The design space of tutorials for assembly tasks in augmented reality is broad, with many dimensions of variability both in how the instructions are presented to an end-user and how they are authored. To inform the design of our system and explore how to best achieve our design goals, we first mapped out some areas of this design space most relevant to our work (Figure 2).
+
+## Presented Content
+
+Perhaps the most important dimension of variability in the AR-based assembly task tutorial design space is how the tutorial information is presented within the AR environment. Like more "traditional" tutorial delivery mediums, the AR environment is able to present static text and images, as well as show explanatory videos. The most straightforward (and still useful) application of AR technology for sharing assembly tutorials would be to display relevant information about the task as a heads-up display (HUD) in the user's headset, leaving their hands free to perform the task. Unique to AR, however, is the ability to spatially associate these "traditional" elements with points or objects in the physical space. Further, an AR tutorial can present the assembly instructions "live", by displaying assembly guidance graphics or animations interwoven into the physical space. With our system we chose to use traditional text, picture, and video presentation methods in addition to dynamic instructions, since they each have their unique benefits (D2).
+
+Because the HoloLens must listen for the keyword "Stop Recording" during first/third person video recording, it cannot simultaneously record dictation to complement the video. For this reason, the current form of AuthAR records muted videos; future iterations with additional microphone input capability would rectify this. With audio capture during content authoring infeasible, text serves to supplement the muted video.
+
+
+
+Figure 3: AuthAR System Diagram. A message passing server sends position data of materials and a screwdriver from coordinated Optitrack cameras to the HoloLens. The HoloLens sends video segmenting commands to the Android tablet through the server.
+
+## Authoring Location
+
+When it comes to authoring the tutorial, the content could be constructed in situ, that is, in the same physical space as the task is being performed, or in an external secondary location such as a desktop computer. Our work explores in situ authoring to reduce the required effort (D3) and maintain context (D4).
+
+## Content Creation
+
+The content for the tutorial can be automatically captured as the author moves through the assembly steps, or manually created by explicitly specifying what should be happening at each step. Automatically generating the tutorials streamlines the authoring and content creation process, but limits expressivity of the tutorial instructions. We automatically capture as much as possible, while allowing manual additions where the author thinks they would be helpful (D3).
+
+## Content Editing
+
+Another aspect of tutorial authoring is how the content is edited. A traditional practice is to capture the necessary content first, then go back and edit for time and clarity in post-processing. Alternately, the content can be edited concurrently with the process of content collection, which, if implemented poorly, could negatively impact the flow of the creation process. In the best case, this results in having a completed tutorial ready to share as soon as the author has completed the task themselves. We focus on creating a well-designed concurrent editing process (D1).
+
+## Interaction Techniques
+
+Interacting with the tutorial system, both in the creation of the tutorial as well as when following along, can be accomplished through many means. AR applications can use touch controls, gestures, or dedicated hardware controllers. However, for assembly tasks it is desirable to have both hands available at all times, rather than have them occupied by control of the tutorial authoring system (D1). For that reason, we exclusively use voice and gaze controls.
+
+
+
+Figure 4: Example configuration of tracked materials in the physical environment (top) and transform data streaming from Optitrack to the Message Passing Server to provide positional tracking of invisible renderings (bottom). Within AuthAR, the virtual components are overlaid on the physical pieces, such that the physical components are interactive.
+
+## 5 THE AUTHAR SYSTEM
+
+AuthAR is a suite of software tools that allows tutorial authors to generate AR content (Figure 1). Optitrack motion capture cameras visually track marked assembly materials and a screwdriver, automatically adding changes to piece positions and rotations, and the locations of screws driven with the tracked screwdriver, to the tutorial. The HoloLens captures first person videos and images while a mounted Android tablet simultaneously captures third person video. The HoloLens also guides authors through the process of creating the tutorial and allows for gaze- and voice-based interaction to add and update content. This includes the addition of augmentations such as location-specific callout points, marking locations of untrackable screws, and specification of images as "negative examples" (steps that should be avoided) or warnings.
+
+### 5.1 System Architecture
+
+AuthAR's system architecture consists of three components: the Microsoft HoloLens, a Samsung Tab A 10.1 Android tablet, and a server running on a desktop computer, all on the same network (Figure 3). As a proxy for sufficient object recognition and point-cloud generation directly from the headset, we use relatively small (approximately 1 cm) Optitrack visual markers for detection of material position and rotation. In developing AuthAR, we envisioned headsets of the future having onboard object recognition [9, 24]. As a proxy for such capabilities, we implement a networked system to coordinate object positions generated by Optitrack's Motive software and the HoloLens to make the physical objects interactive. To provide this interactivity, the HoloLens registers virtual replicas of each piece and overlays these models as invisible renderings at the position and rotation of the tracked object. This invisible object takes the same shape as the physical object but has its visual rendering component disabled. It engages the HoloLens raycast and gives the user the illusion of placing virtual augmentations on physical objects. Additionally, we add a tracked handle to a screwdriver to infer screw events.
+
+
+
+Figure 5: Simultaneous 3rd person video recording from the Android tablet (left) and 1st person video recording from the HoloLens (right).
+
+The server connects to the Optitrack host via UDP and continually updates object and tool transforms. The server sends the data to the HoloLens via 64-character messages, and the HoloLens' representation of the transform is updated accordingly. The Android tablet simply serves to record 3rd person video of the tutorial. When the HoloLens starts and stops recording the 1st person demonstration, a message is passed to the server and then to the Android tablet to toggle recording. Throughout the paper, we demonstrate usage of AuthAR to build a tutorial for assembly of an Ikea laptop stand¹. Parts have been outfitted with Optitrack visual markers and defined as rigid bodies in Optitrack's Motive software. Simple representations of these parts are loaded by the HoloLens as invisible colliders, so the physical components act as interactive objects in AR (Figure 4).
+
+Though this configuration is specific to the laptop stand, this approach is easily extensible to other assembly workflows. Users simply define combinations of visual markers as rigid bodies and build simplified models of the individual parts. In this case, these models are combinations of cubes that have been scaled along the X, Y and Z axes, giving rough approximations of the parts' shapes. This initial step of predefining the shapes of these parts allows fully in situ editing.
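For concreteness, one way the fixed-width 64-character transform messages mentioned above could be packed is sketched below; the field layout (a 4-character id followed by seven fixed-width floats and space padding) is purely our assumption, as the paper does not specify the wire format:

```python
def pack_transform(obj_id, position, quaternion):
    """Pack one object's transform into a fixed-width 64-character message.

    Layout (illustrative, not AuthAR's actual format): a right-aligned
    4-char object id, then x, y, z and a rotation quaternion (qx, qy, qz, qw)
    each as a signed 8-char float, padded with spaces to 64 characters.
    """
    fields = f"{obj_id:>4d}" + "".join(f"{v:+08.3f}" for v in (*position, *quaternion))
    return fields.ljust(64)

# A rigid body at (0.12, 1.5, -0.3) with identity rotation.
msg = pack_transform(7, (0.120, 1.500, -0.300), (0.0, 0.0, 0.0, 1.0))
```

A fixed-width format like this keeps parsing on the receiving HoloLens trivial: each field lives at a known character offset, so no delimiter scanning is needed per message.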
+
+### 5.2 Interaction Paradigm
+
+To avoid encumbering the assembly process with tutorial generation steps, interaction with AuthAR involves only voice, gaze, and use of materials and tools. This allows the user to always keep their hands free to build. By using only voice- and gaze-based interaction, we also eliminate the need for an occluding visual interface of menus and buttons, avoiding interference with the physical tutorial tasks. For flexibility, the user can add augmentations at any point while building the tutorial. However, to encourage faster onboarding, we guide the user through a two-phase process for each step: Step Recording and Step Review.
+
+To provide this guidance, we implemented a three-part Heads-Up Display (HUD). The top-right always displays both the current state and the command to advance to the next stage, the top-left shows available commands within Step Review mode, and the middle is reserved for notifications and prompts for dictation. A quick green flash indicates that the user has successfully moved to the next state.
+
+## Step Recording: Automatically Generated Features
+
+---
+
+${}^{1}$ https://www.ikea.com/us/en/p/vittsjoe-laptop-stand-black-brown-glass-00250249/
+
+---
+
+
+
+Figure 6: Example usage of callout points in paper-based instructions. Callout points draw attention to the alignment of the holes on the materials (left). Instructions can convey negative examples of incorrect object orientation (right). Images from the assembly instructions for an Ikea laptop stand.
+
+When the user says "Start Recording", AuthAR begins recording changes to object transforms such that moving the physical objects maps directly to manipulating the virtual representations of those objects in the tutorial. This command also initiates video recording from the HoloLens' built-in camera and the tablet's 3rd person perspective (Figure 5). AuthAR also records when the screwdriver's tip comes in contact with a piece and generates a screw hole on that object (though not displayed until Step Review). To do this we add a tracked attachment to the handle, similar to what was done for DodecaPen [42] and SymbiosisSketch [1], to provide the position of the tip based on the orientation of the attachment.
+
+Given real-time streaming of position data of the screwdriver's handle and a priori knowledge of the handle-to-tip length of 10.5 cm, we calculate the position of the screwdriver's tip as 10.5 cm forward from the handle. When the user says "Finish Recording", the HoloLens prompts the user for a step description and records dictation. When the description is complete, the user enters an idle state until they are ready for review.
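The tip computation above amounts to offsetting the tracked handle pose along its forward axis. A minimal sketch, assuming the rotation arrives as a 3x3 matrix and that +Z is the handle's local forward axis (neither of which is specified in the paper):

```python
import numpy as np

TIP_OFFSET_M = 0.105  # known handle-to-tip length (10.5 cm)

def screwdriver_tip(handle_position, handle_rotation):
    """Tip = handle position plus the rotated local forward axis,
    scaled by the known handle-to-tip offset."""
    forward_world = handle_rotation @ np.array([0.0, 0.0, 1.0])  # assumed +Z forward
    return np.asarray(handle_position, dtype=float) + TIP_OFFSET_M * forward_world

# With an identity rotation the tip sits 10.5 cm straight ahead of the handle.
tip = screwdriver_tip([0.0, 0.0, 0.0], np.eye(3))
```

Detecting a screw event then reduces to checking whether this computed tip point lies on (or within a small tolerance of) a tracked piece's collider.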
+
+## Step Review: Manually Added Features
+
+After a tutorial author has completed a step recording, they can say "Review Step" to enter review mode for the step. The 1st person video just recorded plays on a loop across from the author, automatically repositioning itself such that the author can look up at any point and see the video directly in front of them. This allows the author to draw upon their own experience when adding manual augmentations. Existing augmentations (e.g., callout points and fasteners) shift into focus by getting larger or expanding when the user is looking closer to that augmentation than any other; this eliminates the need to focus on small points (approximately 3 cm) to engage with them. When looking toward a particular object, the available commands to update the augmentation are shown in the top-right of the Heads-up Display.
+
+After recording the step, the tutorial author may want to draw the tutorial user's attention to a particular point, similar to how this is done in 2D paper-based instructions (Figure 6). To do this, the author focuses the gaze-based cursor on the specific point on the tracked object where the callout point should go and says "Add Point". This adds a small virtual sphere anchored by its relative position on the object. The process of adding a captioned image (Figure 7) begins when the user looks toward a callout point and says "Add Picture". The system then starts a countdown from three in the middle of the heads-up display to indicate that a picture is about to be taken, and then displays "Hold still!" when the countdown is complete, at which point the HoloLens captures the current frame.
+
+The collected image is saved and immediately loaded over the callout point. The author can then say "Add Text", and a prompt in the middle of the heads-up display shows "Speak the image caption" and the HoloLens begins listening for recorded dictation. After a pause of 3 seconds, the HoloLens finishes recording and immediately associates the spoken text as the image caption.
+
+
+
+Figure 7: After adding a callout point, that point has a canvas to fill in (top). The author can add a picture (left), and a caption (right) and then has a completed callout point (bottom).
+
+Authors can mark a callout point as a negative example or a warning simply by looking toward the callout point and saying "Warning". This designates the callout point as a warning or negative example and prompts tutorial users to be extra attentive when traversing the tutorial. The callout point turns red and the associated image is given a red border (Figure 8). Authors are also able to move the point along the surface of any tracked piece or delete it entirely.
+
+Though fastening components together is an important aspect of an assembly tutorial, fastener objects (e.g., screws, nails) are too small for traditional object tracking technologies. During step recording, AuthAR records use of the tracked screwdriver and detects when it was used on a tracked material, generating a virtual screw hole at that location. Making use of these generated screw holes, the author can associate a virtual screw with a screw hole by looking toward the hole and saying "Add Screw" (Figure 9). The author then cycles through possible virtual screw representations by saying "Next" and "Previous".
+
+The author is able to hold the physical screw up to the virtual one for comparison and say "This One" to associate the screw with that hole. Authors can also manually add new fasteners in areas that cannot be tracked. To do so, the author once again says "Add Screw", pulling up the same menu, but once a screw has been selected, the ray-casted, gaze-based cursor allows the author to manually place the virtual screw where it was physically placed. This is useful for the laptop stand, for example, as it has rubber feet that need to be screwed in by hand rather than with the tracked screwdriver. Saying "Finish Review" returns the author to the original idle state, completing one iteration of the step-building process.
+
+
+
+Figure 8: Tutorial author setting a warning about fragile glass using a red callout point and a red border.
+
+## 6 Discussion
+
+Toward validating AuthAR, we discuss our initial observations in testing with tutorial authors, present an example application that parses and displays the generated tutorial for end users, and explain extensibility beyond the presented use case. In doing so, we consider improvements to AuthAR, and design considerations for other in situ AR content authoring tools.
+
+### 6.1 Initial User Feedback
+
+To gather initial observations and feedback of the system, we asked two users to generate a tutorial for an Ikea laptop stand. Though we have not formally evaluated AuthAR for usability, this initial feedback provides insight into possible improvements to AuthAR and generalized takeaways in building such systems. The standard paper instructions for the stand consist of four steps: fastening the legs to the bottom base, fastening the top to the legs, adding screw-on feet to the bottom of the structure and adding glass to the top. We guided the users through using the system while they created an AR tutorial for assembling this piece of furniture.
+
+Both testers found the system helpful for generating tutorials, with one noting that "being able to take pictures with the head for annotations was really useful." This suggests that the embodied gaze-based interaction is particularly well-suited to picture and video recording. Most of the functionality for making refinements to the tutorial is enabled by the user looking anywhere near the objects; however, adding new callout points requires accurately hovering the cursor on the object of interest while speaking a command. One user mentioned that it was "kind awkward to point at certain points with the head". In systems that require precise placement of virtual objects on physical components, pointing at and touching the position where the callout point should go would be a useful improvement over a gaze-only approach.
+
+Though it fulfills the hands-free requirement of a tutorial generation system (D1), AuthAR's use of dictation recognition for text entry was particularly challenging for users, in part due to the automated prompting for step descriptions and titles. One participant was surprised by the immediate prompt for a step description and said that "it was hard to formulate something articulate to say by the time it had finished recording", so future iterations will likely let users explicitly start dictation recognition for a title so they are prepared to give one.
+
+
+
+Figure 9: User adding virtual screws to the tutorial. The user can hold the physical screw up to the virtual one for comparison (top), and if the screw hole was not automatically generated, the user can place the screw via ray-cast from the headset (middle).
+
+While we focus on authoring the tutorial in real time, the users wanted to view and refine previous steps. One possible enhancement to an in situ content authoring tool would be a "navigation prompt to let you jump between steps and review your work." With AuthAR, this would provide users the ability to review and refine the full tutorial at a high level, including previous steps.
+
+### 6.2 Tutorial Playback/Walkthrough
+
+Though we focus on authoring of assembly tutorials, we also implemented and tested a playback mode to validate AuthAR’s ability to generate working tutorials. The tutorial author can save a tutorial after finishing a step, and AuthAR generates an XML representation of the entire tutorial. We load this XML into the playback application, also built for the HoloLens, and components of the steps built earlier are loaded and displayed for the end user to follow.
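As a rough illustration of this serialization step, a tutorial could be written out with Python's standard `xml.etree.ElementTree`. AuthAR's actual schema is not published, so every tag and attribute name below is hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of tutorial serialization; the tag and attribute
# names are assumptions, not AuthAR's published schema.

def tutorial_to_xml(steps):
    """Serialize a list of step dicts to an XML string."""
    root = ET.Element("tutorial")
    for i, step in enumerate(steps):
        s = ET.SubElement(root, "step", index=str(i))
        ET.SubElement(s, "description").text = step["description"]
        # Videos are referenced by file path, as noted in the
        # discussion of playback.
        ET.SubElement(s, "video", path=step["video_path"])
        for c in step.get("callouts", []):
            ET.SubElement(s, "callout", warning=str(c.get("warning", False)))
    return ET.tostring(root, encoding="unicode")
```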
+
+The guidance techniques are simple but demonstrate the success of the authoring process and the portability of the generated tutorial. Our playback application projects lines from each piece's location to where that piece needs to go (Figure 10a). When the end user of the tutorial has correctly placed the object, a notification appears in front of them (Figure 10b). The user also receives guidance on where to add screws, and when the playback application recognizes a screw event at that location, the screw augmentation disappears (Figure 10c).
+
+
+
+Figure 10: Example playback of a generated tutorial.
+
+Both first person and third person video representations play on a loop across the table from the user (Figure 10d). As the user walks around the table, the videos adjust their positions such that the user can always look up and see the videos straight across from them. Use of the third person video is currently the only tutorial component that requires post-processing after the in situ authoring process is complete. Because the XML representation of the tutorial uses file paths for videos, the author needs to manually move the video from the Android tablet to the headset's file system. Future iterations could automatically stream these videos between components.
+
+### 6.3 AuthAR Extensibility
+
+We demonstrate AuthAR with assembly of an Ikea laptop stand but note that it could be extended to any physical task for which simplified virtual models of the pieces can be built or obtained. All of the pieces used for the laptop stand were scaled cubes or combinations of scaled cubes, with disabled "Renderer" components to create the illusion of adding virtual augmentations to physical parts, when in reality invisible objects were overlaid on top of the physical pieces. Simply loading in pieces and disabling their visual renderings thus allows for extensibility to virtually any assembly task.
+
+While demonstrated with an augmented screwdriver, AuthAR could be extended to support different tools, or an integrated workspace of smart tools [23, 37]. We employed very simple logic: whenever the tip of the screwdriver hovers near a piece, AuthAR adds a screw hole. Future iterations could employ more advanced logic for detecting tool usage. The need for a large truss with 10 OptiTrack cameras enables very accurate tracking of visual markers but limits AuthAR's widespread deployment in its current state. For practical use, we envision the localization of objects being done with the headset only, or with cheaper external optical tracking, perhaps with black-and-white fiducial markers. In this scenario, tracked pieces could be established in real time through user-assisted object recognition [24], rather than defining their shapes prior to running AuthAR.
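That hover heuristic can be expressed in a few lines. In the sketch below, the 1 cm contact threshold and the piece representation are assumptions for illustration, not values from the paper:

```python
import math

CONTACT_THRESHOLD_M = 0.01  # hypothetical 1 cm "hover" distance

def detect_screw_holes(tip_pos, pieces, holes):
    """Record a screw hole on any tracked piece the screwdriver tip hovers near.

    `pieces` is a list of {"name": ..., "position": (x, y, z)} dicts (an
    assumed representation); detected holes are appended to `holes`.
    """
    for piece in pieces:
        if math.dist(tip_pos, piece["position"]) < CONTACT_THRESHOLD_M:
            holes.append({"piece": piece["name"], "position": tip_pos})
    return holes
```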
+
+The in situ authoring approach offered by AuthAR allows the user to craft the tutorial concurrently while assembling the pieces. However, the gaze/voice multimodal interface does not provide users with efficient tools to fine-tune the generated tutorial. To this end, AuthAR should be complemented by a 2D interface for precise editing of individual components of the tutorial, similar to tutorial generation tools described in previous work [6, 7, 16]. Trimming 1st and 3rd person perspective videos, cropping images, and editing text are currently not well-supported by AuthAR and would be better suited to a mouse and keyboard interface. This complementary approach would also allow users to focus on coarse-grain tutorial generation and object assembly without being burdened by smaller edits that can easily be done after the fact.
+
+## 7 CONCLUSION
+
+AuthAR enables tutorial authors to semi-automatically generate mixed media tutorials that guide end users through an assembly process. We automatically record expert demonstration where possible and allow for in situ editing for refinements and additions. We built AuthAR with several design guidelines in mind, validated it by authoring a tutorial for assembling a laptop stand, and discussed its extensibility to other assembly tasks by simply loading different virtual models into AuthAR. We see AuthAR enabling the authoring of tutorials that could reach a widespread population, with mixed media content flexible to the preferences of each individual user.
+
+## REFERENCES
+
+[1] R. Arora, R. Habib Kazi, T. Grossman, G. Fitzmaurice, and K. Singh. Symbiosissketch: Combining 2d & 3d sketching for designing detailed 3d objects in situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173759
+
+[2] K. M. Baird and W. Barfield. Evaluating the effectiveness of augmented reality displays for a manual assembly task. Virtual Real., 4(4):250-259, Dec. 1999. doi: 10.1007/BF01421808
+
+[3] N. Barrena, A. Navarro, S. García, and D. Oyarzun. Cooltour: Vr and ar authoring tool to create cultural experiences. In G. D. Pietro, L. Gallo, R. J. Howlett, and L. C. Jain, eds., Intelligent Interactive Multimedia Systems and Services 2016, pp. 483-489. Springer International Publishing, Cham, 2016.
+
+[4] M. Bauer, B. Bruegge, G. Klinker, A. MacWilliams, T. Reicher, S. Riss, C. Sandor, and M. Wagner. Design of a component-based augmented reality framework. In Proceedings IEEE and ACM International Symposium on Augmented Reality, pp. 45-54, 2001.
+
+[5] B. Bhattacharya and E. Winer. A method for real-time generation of augmented reality work instructions via expert movements. In M. Dolinsky and I. E. McDowall, eds., The Engineering Reality of Virtual Reality 2015, vol. 9392, pp. 109 - 121. International Society for Optics and Photonics, SPIE, 2015. doi: 10.1117/12.2081214
+
+[6] S. A. Carter, P. Qvarfordt, M. Cooper, A. Komori, and V. Mäkelä. Tools for online tutorials: comparing capture devices, tutorial representations, and access devices. CoRR, abs/1801.08997, 2018.
+
+[7] P.-Y. Chi, S. Ahn, A. Ren, M. Dontcheva, W. Li, and B. Hartmann. Mixt: Automatic generation of step-by-step mixed media tutorials. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, p. 93-102. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2380116.2380130
+
+[8] P.-Y. Chi, J. Liu, J. Linder, M. Dontcheva, W. Li, and B. Hartmann. Democut: Generating concise instructional videos for physical demonstrations. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, p. 141-150. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2501988.2502052
+
+[9] A. I. Comport, E. Marchand, and F. Chaumette. A real-time tracker for markerless augmented reality. In The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings., pp. 36-45, 2003.
+
+[10] D. Damen, T. Leelasawassuk, and W. Mayol-Cuevas. You-do, i-learn: Egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance. Computer Vision and Image Understanding, 149:98-112, 2016. Special issue on Assistive Computer Vision and Robotics.
+
+[11] M. de Sá and E. Churchill. Mobile augmented reality: Exploring design and prototyping techniques. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, p. 221-230. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371608
+
+[12] R. Dörner, C. Geiger, M. Haller, and V. Paelke. Authoring Mixed Reality - A Component and Framework-Based Approach, pp. 405-413. Springer US, Boston, MA, 2003. doi: 10.1007/978-0-387-35660-0_49
+
+[13] M. Funk, T. Kosch, and A. Schmidt. Interactive worker assistance: Comparing the effects of in-situ projection, head-mounted displays, tablet, and paper instructions. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '16, p. 934-939. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2971648.2971706
+
+[14] J. A. Garcia Macias, J. Alvarez-Lozano, P. Estrada, and E. Aviles Lopez. Browsing the internet of things with sentient visors. Computer, 44(5):46-52, 2011.
+
+[15] F. Grabler, M. Agrawala, W. Li, M. Dontcheva, and T. Igarashi. Generating photo manipulation tutorials by demonstration. In ACM SIGGRAPH 2009 Papers, SIGGRAPH '09. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1576246.1531372
+
+[16] T. Grossman, J. Matejka, and G. Fitzmaurice. Chronicle: Capture, exploration, and playback of document workflow histories. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST '10, p. 143-152. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1866029.1866054
+
+[17] A. Gupta, D. Fox, B. Curless, and M. Cohen. Duplotrack: A real-time system for authoring and guiding duplo block assembly. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, p. 389-402. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2380116.2380167
+
+[18] S. J. Henderson and S. K. Feiner. Augmented reality in the psychomotor phase of a procedural task. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 191-200, 2011.
+
+[19] L. Hou, X. Wang, and M. Truijens. Using augmented reality to facilitate piping assembly: an experiment-based evaluation. Journal of Computing in Civil Engineering, 29(1):05014007, 2015.
+
+[20] J. Irizarry, M. Gheisari, G. Williams, and B. N. Walker. Infospot: A mobile augmented reality method for accessing building information through a situation awareness approach. Automation in Construction, 33:11-23, 2013. Augmented Reality in Architecture, Engineering, and Construction. doi: 10.1016/j.autcon.2012.09.002
+
+[21] M. Jahn, M. Jentsch, C. R. Prause, F. Pramudianto, A. Al-Akkad, and R. Reiners. The energy aware smart home. In 2010 5th International Conference on Future Information Technology, pp. 1-8, 2010.
+
+[22] B. M. Khuong, K. Kiyokawa, A. Miller, J. J. La Viola, T. Mashita, and H. Takemura. The effectiveness of an ar-based context-aware assembly support system in object assembly. In 2014 IEEE Virtual Reality (VR), pp. 57-62, 2014.
+
+[23] J. Knibbe, T. Grossman, and G. Fitzmaurice. Smart makerspace: An immersive instructional space for physical tasks. In Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, ITS '15, p. 83-92. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2817721.2817741
+
+[24] T. Lee and T. Hollerer. Hybrid feature tracking and user interaction for markerless augmented reality. In 2008 IEEE Virtual Reality Conference, pp. 145-152, 2008.
+
+[25] W. Li, Y. Zhang, and G. Fitzmaurice. Tutorialplan: Automated tutorial generation from cad drawings. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, p. 2020-2027. AAAI Press, 2013.
+
+[26] B. MacIntyre, M. Gandy, S. Dow, and J. D. Bolter. Dart: a toolkit for rapid design exploration of augmented reality experiences. In Proceedings of the 17th annual ACM symposium on User Interface Software and Technology, pp. 197-206. ACM, 2004.
+
+[27] N. Mavrogeorgi, S. Koutsoutos, A. Yannopoulos, T. Varvarigou, and G. Kambourakis. Cultural heritage experience with virtual reality according to user preferences. In 2009 Second International Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies, and Services, pp. 13-18, 2009.
+
+[28] P. Mohr, B. Kerbl, M. Donoser, D. Schmalstieg, and D. Kalkofen. Retargeting technical documentation to augmented reality. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 3337-3346. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702490
+
+[29] K. Nakae and K. Tsukada. Support system to review manufacturing workshop through multiple videos. In Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, IUI'18 Companion. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3180308.3180312
+
+[30] J.-S. Park. Ar-room: a rapid prototyping framework for augmented reality applications. Multimedia tools and applications, 55(3):725-746, 2011.
+
+[31] N. Petersen, A. Pagani, and D. Stricker. Real-time modeling and tracking manual workflows from first-person vision. In 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 117-124, 2013.
+
+[32] N. Petersen and D. Stricker. Learning task structure from video examples for workflow tracking and authoring. In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 237-246, 2012.
+
+[33] R. Radkowski. Investigation of visual features for augmented reality assembly assistance. In R. Shumaker and S. Lackey, eds., Virtual, Augmented and Mixed Reality, pp. 488-498. Springer International Publishing, Cham, 2015.
+
+[34] R. Radkowski, J. Herrema, and J. Oliver. Augmented reality-based manual assembly support with visual features for different degrees of difficulty. International Journal of Human-Computer Interaction, 31(5):337-349, 2015. doi: 10.1080/10447318.2014.994194
+
+[35] R. A. Roberto, J. P. Lima, R. C. Mota, and V. Teichrieb. Authoring tools for augmented reality: An analysis and classification of content design tools. In A. Marcus, ed., Design, User Experience, and Usability: Technological Contexts, pp. 237-248. Springer International Publishing, Cham, 2016.
+
+[36] D. Rumiński and K. Walczak. Creation of interactive ar content on mobile devices. In W. Abramowicz, ed., Business Information Systems Workshops, pp. 258-269. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.
+
+[37] E. Schoop, M. Nguyen, D. Lim, V. Savage, S. Follmer, and B. Hartmann. Drill sergeant: Supporting physical construction projects through an ecosystem of augmented tools. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA '16, p. 1607-1614. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2851581.2892429
+
+[38] H. Seichter, J. Looser, and M. Billinghurst. Composar: An intuitive tool for authoring ar applications. In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 177-178, 2008.
+
+[39] A. Syberfeldt, O. Danielsson, M. Holm, and L. Wang. Visual assembling guidance using augmented reality. Procedia Manufacturing, 1:98-109, 2015. 43rd North American Manufacturing Research Conference, NAMRC 43, 8-12 June 2015, UNC Charlotte, North Carolina, United States. doi: 10.1016/j.promfg.2015.09.068
+
+[40] C.-Y. Wang, W.-C. Chu, H.-R. Chen, C.-Y. Hsu, and M. Y. Chen. Evertutor: Automatically creating interactive guided tutorials on smart-phones by user demonstration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 4027-4036. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557407
+
+[41] S. Webel, U. Bockholt, T. Engelke, N. Gavish, M. Olbrich, and C. Preusche. An augmented reality training platform for assembly and maintenance skills. Robotics and Autonomous Systems, 61(4):398 - 403, 2013. Models and Technologies for Multi-modal Skill Training. doi: 10.1016/j.robot.2012.09.013
+
+[42] P.-C. Wu, R. Wang, K. Kin, C. Twigg, S. Han, M.-H. Yang, and S.-Y. Chien. Dodecapen: Accurate 6dof tracking of a passive stylus. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17, p. 365-374. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3126594.3126664
+
+[43] J. Zauner, M. Haller, A. Brandl, and W. Hartmann. Authoring of a mixed reality furniture assembly instructor. In ACM SIGGRAPH 2003 Sketches & Applications, SIGGRAPH '03, p. 1. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/965400.965448
+
+[44] J. Zhu, S. Ong, and A. Nee. A context-aware augmented reality system to assist the maintenance operators. International Journal on Interactive Design and Manufacturing (IJIDeM), 8(4):293-304, 2014.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5ecb26fefe847c7943923d86008f76b8bddc3b5e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/La2hggaEcW/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,231 @@
+§ AUTHAR: CONCURRENT AUTHORING OF TUTORIALS FOR AR ASSEMBLY GUIDANCE
+
+Matt Whitlock*
+
+Autodesk Research, Toronto
+
+University of Colorado Boulder
+
+George Fitzmaurice †
+
+Autodesk Research, Toronto
+
+Tovi Grossman ‡
+
+Autodesk Research, Toronto
+
+University of Toronto
+
+Justin Matejka §
+
+Autodesk Research, Toronto
+
+
+Figure 1: Overview of the AuthAR system setup, highlighting the key hardware components.
+
+§ ABSTRACT
+
+Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of situated instructions. These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.
+
+Keywords: Augmented reality, content authoring, assembly tutorials, gaze input, voice input
+
+Index Terms: H.5.m [Information Interfaces and Presentation]: Miscellaneous
+
+§ 1 INTRODUCTION
+
+Physical task guidance can be delivered via Augmented Reality (AR) since assembly often requires both hands and continuous attention to the task. Additionally, assembly tutorials have instructions directly associated with physical objects, so AR can reduce the need for excessive context switching between the instructions and the physical structure by projecting those instructions into the environment. These benefits have been demonstrated in fields such as facilities management [20], maintenance [44], and Internet of Things (IoT) device management [14, 21]. Additionally, prior work in AR assembly guidance has shown that these benefits can translate to carrying out assembly tasks [2, 18, 21, 38].
+
+While significant previous work has looked at the benefits of following tutorials in AR, much less has looked at how to author these tutorials. Beyond the technical requirements of an authoring interface, an ideal tutorial may look different depending on the end user of the tutorial. This problem is exacerbated in AR as there are many different modalities in which tutorial content can be presented. While one person may appreciate guiding animations in AR, another may prefer static text and images, and yet another may prefer video tutorials from one or multiple perspectives.
+
+With AuthAR, we present a system for building tutorials for assembly tasks that can accommodate the needs of these different types of end users. AuthAR generates video and pictorial representations semi-automatically while the tutorial author completes the task. Furthermore, AuthAR allows tutorial authors to create and refine a tutorial in situ, integrating content authoring into the process of completing the task. This approach adds little additional overhead and reduces the need for post-processing of the tutorial.
+
+This paper presents the AuthAR system for generating mixed media assembly tutorials. Informed by prior work on content/tutorial authoring and on tutorial playback and walkthrough, we built the system with an eye toward non-obtrusive content authoring and generation of important components for tutorial playback, summarized in a set of design guidelines. We validate the system's ability to create a tutorial by stepping through the process of creating a tutorial to build a laptop stand, automatically generating an XML representation of the tutorial. Initial observations suggest the tool will be valuable and point to ways the system could be extended and refined in future iterations.
+
+§ 2 RELATED WORK
+
+AuthAR builds on prior research in the areas of AR tutorials and content authoring, as well as principles of mixed media tutorial design.
+
+*e-mail: matthew.whitlock@colorado.edu
+
+†e-mail: george.fitzmaurice@autodesk.com
+
+‡e-mail: tovi@dgp.toronto.edu
+
+§e-mail: justin.matejka@autodesk.com
+
+§ 2.1 AR TUTORIALS
+
+AR is often applied to assembly tasks for its ability to project instructions into the environment such that they are spatially relevant [19, 22]. Syberfeldt et al. demonstrate assembly of a 3D puzzle and provide evidence that AR supports faster assembly than traditional methods [39]. Similarly, Henderson et al. demonstrate the use of projected guidance for engine assembly [18]. Prior work on AR guidance design has shown that abstract representations (3D text, arrows) can be more effective for complex tasks than a more concrete representation (virtual models of the pieces and fasteners), which is sensible for simpler tasks [34]. In that work, the authors found information-rich 2D representations to be more effective than either of the AR representations in some cases.
+
+One theory is that AR is only justified when the task is sufficiently difficult, such that the time to process the information is insignificant compared to the time to perform the task [33]. So even for physical tasks in which instructions' spatial relevance could be increased by projecting them into the environment, tutorials should provide users the ability to view the step(s) in a more familiar picture/video format when needed or preferred. The need for these mixed media tutorials is apparent; however, little work has explored the authoring of such tutorials for physical tasks.
+
+§ 2.2 AR CONTENT AUTHORING
+
+Outside of the realm of tutorial authoring, numerous systems have explored content creation in augmented reality to abstract low-level programming away from the creator, lowering the threshold for participation with AR. Many AR content authoring systems give users a collection of 3D models to place and manipulate in an environment or to overlay on a video stream, allowing users to create AR content without the programming expertise generally required to build such scenes [4, 11, 12, 24, 26, 30]. Other AR content creation systems target participation by domain experts in areas such as museum exhibition curation [36], tour guidance [3, 27], and assembly/maintenance [33, 41]. Roberto et al. provide a survey of existing AR content creation tools, classifying tools by standalone nature and platform dependence [35].
+
+Of particular interest to us are authoring tools that enable the creation of training experiences. Built on AMIRE's component-based framework [12], Zauner et al. present a tool that enables authoring of assembly task guides in augmented reality [43]. With this tool, the author puts visually tracked pieces together hierarchically to create an assembly workflow assistant in AR. Alternatively, the expert can collaborate remotely rather than creating a training experience ahead of time [41]. In this scenario, the expert can annotate the live video feed provided by the trainee's AR headset for varying levels of on-the-fly guidance. These systems require explicit enumeration of every component to be added to the created scene, whereas our system generates content semi-automatically (segmenting video, recording changes to transforms, and detecting use of a tool) where possible. Moreover, our system only requires manual input for augmentation and refinement of the tutorial, while the bulk of authoring is done in situ.
+
+§ 2.3 MIXED MEDIA TUTORIALS
+
+Within the domain of software tutorials, Chi et al. provide design guidelines for mixed media tutorial authoring with their MixT system for software tutorials [7]. They list scannable steps, legible videos, visualized mouse movement, and giving the user control over which format to view as important components of a mixed media tutorial. Carter et al. echo this sentiment with their ShowHow system for building a tutorial of videos and pictures taken from an HMD, noting that mixing different media is important in lieu of relying on a single media type [6]. Our work builds upon these concepts of mixed media tutorials but applies them to AR authoring of physical task tutorials. The rest of this subsection discusses the use of three popular media for both software and physical task tutorials: videos, images, and interactive guidance.
+
+§ 2.3.1 VIDEO
+
+Prior work on video-based tutorials has applied different strategies to the challenges of video segmentation and multiple perspectives. DemoCut allows for semi-automatic video segmentation such that these demonstration videos are appropriately concise without requiring significant post-processing [8]. Chronicle allows for video tutorials based on the working history of a file [16]. As the file changes, the Chronicle system generates a video tutorial of how the file changed. For physical tasks, Nakae et al. propose use of multiple video perspectives (1st person, 3rd person, overhead) to record fabrication demonstrations and semi-automatic generation of these videos [29].
+
+§ 2.3.2 IMAGES
+
+Prior work also uses captioned and augmented images for physical task and software tutorials. Image-based tutorials can be generated automatically from demonstration, as is done with TutorialPlan, a system that builds tutorials to help novices learn AutoCAD [25]. Images can also represent groups of significant manipulations (such as multiple changes to the saturation parameter), as Grabler et al. demonstrated for GIMP tutorials [15]. For physical tasks, prior work has explored the use of AR to retarget 2D technical documentation onto the object itself, providing spatially relevant augmentations [28].
+
+§ 2.3.3 INTERACTIVE OVERLAYS
+
+Interactive tutorials guiding users where to click, place objects, and even move their hands have become increasingly popular. For example, EverTutor automatically generates tutorials for smartphone tasks such as setting a repeating alarm or changing font size based on touch events [40]. They found improved performance and preference toward these tutorials over text, video, and image tutorials. In the realm of physical assembly tutorials, use of visual and/or depth tracking allows for automatic generation of tutorials based on changes to location and rotation [5, 10, 13, 39]. Further, interactive tutorial authoring can include tracking of hand position [32] and projecting green and red light onto end users' hands to indicate correct and incorrect positioning respectively [31]. DuploTrack allows users to author a tutorial for creating Duplo block models using depth sensing to infer positions and rotations automatically, and projective guidance is available to end users of the tutorial [17].
+
+§ 3 DESIGN GUIDELINES
+
+Our design of the AuthAR system was grounded by our exploration of the assembly task design space, a study of related research, and an investigation into the difficulties associated with generating tutorials (both for "traditional" media and for AR). Below we describe the design guidelines we followed when making decisions about the implementation of our system.
+
+D1: Non-Intrusive/Hands-Free. It is important that the author and assembler be able to perform the assembly task without being burdened by the tutorial apparatus or interface. Though many AR/VR interfaces are either mediated by a handheld device or require use of freehand gestures for input, assembly tasks often require the use of both hands in parallel. For this reason, we prioritize hands-free interaction with the system such that users can always keep their hands free for assembly.
+
+D2: Multiple Representations. Prior studies [6, 40] have shown that different representations (text, static pictures, video, animations, etc.) can all be valuable for people following along with a tutorial. Our system should allow authors to document their tutorial using multiple media types to best capture the necessary information.
+
+D3: Adaptive Effort. Manual content creation in AR allows for high expressivity but is time-consuming and can add complexity. Automatic content creation tools are easier to use but limit the author's creative control. Our tool should let authors move between automatic and manual creation modes to get the benefits of both. In the most "automatic" case, an author should be able to generate a tutorial by doing little more than simply completing the assembly task as they would normally.
+
+
+Figure 2: Design Space for AR Assembly tutorials. The design decisions we made are highlighted in black.
+
+D4: Real Time and In Situ Authoring. With the ability to generate, refine and augment tutorial step representations, our tool should allow authors to create the tutorial while performing the task and make necessary tweaks directly after each step. This form of contextual and in situ editing allows authors to add callouts or make changes while they are easy to remember and reduces the need for post-processing of tutorials at a desktop computer.
+
+§ 4 DESIGN SPACE OF AR ASSEMBLY TASK TUTORIALS
+
+The design space of tutorials for assembly tasks in augmented reality is broad, with many dimensions of variability both in how the instructions are presented to an end-user and how they are authored. To inform the design of our system and explore how to best achieve our design goals, we first mapped out some areas of this design space most relevant to our work (Figure 2).
+
+§ PRESENTED CONTENT
+
+Perhaps the most important dimension of variability in the AR-based assembly task tutorial design space is how the tutorial information is presented within the AR environment. Like more "traditional" tutorial delivery mediums, the AR environment is able to present static text and images, as well as show explanatory videos. The most straightforward (and still useful) application of AR technology for sharing assembly tutorials would be to display relevant information about the task as a heads-up display (HUD) in the user's headset, leaving their hands free to perform the task. Unique to AR, however, is the ability to spatially associate these "traditional" elements with points or objects in the physical space. Further, an AR tutorial can present the assembly instructions "live", by displaying assembly guidance graphics or animations interwoven into the physical space. With our system we chose to use traditional text, picture, and video presentation methods in addition to dynamic instructions, since they each have unique benefits (D2).
+
+Because the HoloLens must listen for the keyword "Stop Recording" during first/third person video recording, it cannot simultaneously record dictation to complement the video. For this reason, the current form of AuthAR records muted videos; future iterations with additional microphone input capability would rectify this. With audio capture infeasible during recording, text serves to supplement the muted video.
+
+
+Figure 3: AuthAR System Diagram. A message passing server sends position data of materials and a screwdriver from coordinated Optitrack cameras to the HoloLens. The HoloLens sends video segmenting commands to the Android tablet through the server.
+
+§ AUTHORING LOCATION
+
+When it comes to authoring the tutorial, the content could be constructed in situ, that is, in the same physical space as the task is being performed, or in an external secondary location such as a desktop computer. Our work explores in situ authoring to reduce the required effort (D3) and maintain context (D4).
+
+§ CONTENT CREATION
+
+The content for the tutorial can be automatically captured as the author moves through the assembly steps, or manually created by explicitly specifying what should be happening at each step. Automatically generating the tutorials streamlines the authoring and content creation process but limits the expressivity of the tutorial instructions. We automatically capture as much as possible, while allowing manual additions where the author thinks they would be helpful (D3).
+
+§ CONTENT EDITING
+
+Another aspect of tutorial authoring is how the content is edited. A traditional practice is to capture the necessary content first, then go back and edit for time and clarity in post-processing. Alternately, the content can be edited concurrently with the process of content collection, which, if implemented poorly, could negatively impact the flow of the creation process. In the best case, this results in having a completed tutorial ready to share as soon as the author has completed the task themselves. We focus on creating a well-designed concurrent editing process (D1).
+
+§ INTERACTION TECHNIQUES
+
+Interacting with the tutorial system, both in the creation of the tutorial as well as when following along, can be accomplished through many means. AR applications can use touch controls, gestures, or dedicated hardware controllers. However, for assembly tasks it is desirable to have both hands available at all times for the assembly task - rather than have them occupied by control of the tutorial authoring system (D1). For that reason, we exclusively use voice and gaze controls.
+
+
+Figure 4: Example configuration of tracked materials in the physical environment (top) and transform data streaming from Optitrack to the Message Passing Server to provide positional tracking of invisible renderings (bottom). Within AuthAR, the virtual components are overlaid on the physical pieces, such that the physical components are interactive.
+
+§ 5 THE AUTHAR SYSTEM
+
+AuthAR is a suite of software tools that allow tutorial authors to generate AR content (Figure 1). Optitrack motion capture cameras visually track marked assembly materials and a screwdriver, automatically adding changes to position and rotation, as well as locations of screws added with the tracked screwdriver, to the tutorial. The HoloLens captures first person videos and images while a mounted Android tablet simultaneously captures third person video. The HoloLens also guides authors through the process of creating the tutorial and allows for gaze- and voice-based interaction to add and update content. This includes addition of augmentations such as location-specific callout points, marking locations of untrackable screws, and specification of images as "negative examples" (steps that should be avoided) or warnings.
+
+§ 5.1 SYSTEM ARCHITECTURE
+
+AuthAR's system architecture consists of three components: the Microsoft HoloLens, a Samsung Tab A 10.1 Android tablet, and a server running on a desktop computer, all on the same network (Figure 3). In developing AuthAR, we envisioned headsets of the future having onboard object recognition [9, 24]. As a proxy to such capabilities, we use relatively small (approximately 1 cm) Optitrack visual markers for detection of material position and rotation, and implement a networked system to coordinate object positions generated by Optitrack's Motive software and the HoloLens to make the physical objects interactive. To provide this interactivity, the HoloLens registers virtual replicas of each piece and overlays these models as invisible renderings at the position and rotation of the tracked object. This invisible object takes the same shape as the physical object but has its visual rendering component disabled; it engages the HoloLens raycast and gives the user the illusion of placing virtual augmentations on physical objects. Additionally, we add a tracked handle to a screwdriver to infer screw events.
+
+
+Figure 5: Simultaneous 3rd person video recording from the Android tablet (left) and 1st person video recording from the HoloLens (right).
+
+The server connects to the Optitrack host via UDP and continually updates object and tool transforms. The server sends this data to the HoloLens via 64-character messages, and the HoloLens' representation of each transform is updated accordingly. The Android tablet simply serves to record 3rd person video of the tutorial. When the HoloLens starts and stops recording the 1st person demonstration, a message is passed to the server and then to the Android tablet to toggle recording. Throughout the paper, we demonstrate usage of AuthAR to build a tutorial for assembly of an Ikea laptop stand¹. Parts have been outfitted with Optitrack visual markers and defined as rigid bodies in Optitrack's Motive software. Simple representations of these parts are loaded by the HoloLens as invisible colliders, so the physical components act as interactive objects in AR (Figure 4).
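As an illustration of this message passing, a fixed-width transform message might be encoded as below. The field layout (a 4-character rigid-body id followed by seven 8-character floats, padded to 64 characters) is our own assumption for this sketch; the paper does not specify AuthAR's actual wire format.

```python
# Hypothetical 64-character transform message: 4-char body id,
# then position (x, y, z) and rotation quaternion (x, y, z, w),
# each as an 8-character float field (assumes values fit in 8 chars).
FIELD = "{:>8.3f}"

def encode_transform(body_id, position, rotation):
    """Pack one rigid-body transform into a fixed 64-character message."""
    msg = "{:>4}".format(body_id)
    for v in tuple(position) + tuple(rotation):
        msg += FIELD.format(v)
    return msg.ljust(64)          # 4 + 7*8 = 60 chars, padded to 64

def decode_transform(msg):
    """Unpack a message back into (body_id, position, rotation)."""
    body_id = msg[:4].strip()
    vals = [float(msg[4 + 8 * i: 4 + 8 * (i + 1)]) for i in range(7)]
    return body_id, tuple(vals[:3]), tuple(vals[3:])
```

A fixed-width format like this keeps parsing on the HoloLens trivial, at the cost of limited numeric precision per field.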
+
+Though this configuration is specific to the laptop stand, the approach is easily extensible to other assembly workflows. Users simply define combinations of visual markers as rigid bodies and build simplified models of the individual parts. In this case, these models are combinations of cubes that have been scaled along the X, Y, and Z axes, giving rough approximations of the parts' shapes. This initial step of predefining the shapes of these parts allows fully in situ editing.
+
+§ 5.2 INTERACTION PARADIGM
+
+To avoid encumbering the assembly process with tutorial generation steps, interaction with AuthAR involves only voice, gaze, and use of materials and tools. This allows the user to always keep their hands free to build. By using only voice- and gaze-based interaction, we also eliminate the need for an occluding visual interface of menus and buttons, avoiding interference with the physical tutorial tasks. For flexibility, the user can add augmentations at any point while building the tutorial. However, to encourage faster onboarding, we guide the user through a two-phase process for each step: Step Recording and Step Review.
+
+To provide this guidance, we implemented a three-part Heads-Up Display (HUD). The top-right always displays both the current state and the command to advance to the next stage, the top-left shows available commands within Step Review mode, and the middle is reserved for notifications and prompts for dictation. A quick green flash indicates that the user has successfully moved to the next state.
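The two-phase flow above can be summarized as a small voice-command state machine. The state names and transitions below follow the commands described in this paper ("Start Recording", "Finish Recording", "Review Step", "Finish Review"); the implementation itself is only an illustrative sketch, not AuthAR's actual code.

```python
# Sketch of the idle -> recording -> idle -> review -> idle authoring loop.
# Unrecognized commands in a given state are ignored, as the HUD only
# advertises the commands valid for the current stage.
TRANSITIONS = {
    ("idle", "Start Recording"): "recording",
    ("recording", "Finish Recording"): "idle",   # then prompts for dictation
    ("idle", "Review Step"): "review",
    ("review", "Finish Review"): "idle",
}

class AuthoringFlow:
    def __init__(self):
        self.state = "idle"

    def on_voice(self, command):
        nxt = TRANSITIONS.get((self.state, command))
        if nxt is not None:
            self.state = nxt     # a green HUD flash would confirm this
        return self.state
```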
+
+§ STEP RECORDING: AUTOMATICALLY GENERATED FEATURES
+
+¹ https://www.ikea.com/us/en/p/vittsjoe-laptop-stand-black-brown-glass-00250249/
+
+
+Figure 6: Example usage of callout points in paper-based instructions. Callout points draw attention to the alignment of the holes on the materials (left). Instructions can convey negative examples of incorrect object orientation (right). Images from the assembly instructions for an Ikea laptop stand.
+
+When the user says "Start Recording", AuthAR begins recording changes to object transforms such that moving the physical objects maps directly to manipulating the virtual representations of those objects in the tutorial. This command also initiates video recording from the HoloLens' built-in camera and the tablet's 3rd person perspective (Figure 5). AuthAR also records when the screwdriver's tip comes in contact with an object piece and generates a screw hole on that object (though not displayed until Review Mode). To do this we add a tracked attachment to the handle, similar to what was done for DodecaPen [42] and SymbiosisSketch [1], to provide the position of the tip based on the orientation of the attachment.
+
+Given real-time streaming of position data of the screwdriver's handle and a priori knowledge of the length from the handle to the tip (10.5 cm), we calculate the position of the screwdriver's tip as 10.5 cm forward from the handle. When the user says "Finish Recording", the HoloLens prompts the user for a step description and records dictation. When the description is complete, the user enters an idle state until they are ready for review.
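This tip-offset calculation can be sketched as follows, assuming the handle pose arrives as a position plus a unit quaternion (x, y, z, w) and that the tip lies along the handle's local forward (+z) axis; the axis convention is our own assumption.

```python
TIP_OFFSET_M = 0.105  # 10.5 cm from tracked handle origin to screwdriver tip

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    # t = 2 * cross(q.xyz, v); v' = v + w*t + cross(q.xyz, t)
    t = (2 * (y * v[2] - z * v[1]),
         2 * (z * v[0] - x * v[2]),
         2 * (x * v[1] - y * v[0]))
    return (v[0] + w * t[0] + y * t[2] - z * t[1],
            v[1] + w * t[1] + z * t[0] - x * t[2],
            v[2] + w * t[2] + x * t[1] - y * t[0])

def tip_position(handle_pos, handle_rot):
    """Tip = handle position + 10.5 cm along the handle's forward axis."""
    fwd = rotate(handle_rot, (0.0, 0.0, 1.0))
    return tuple(p + TIP_OFFSET_M * f for p, f in zip(handle_pos, fwd))
```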
+
+§ STEP REVIEW: MANUALLY ADDED FEATURES
+
+After a tutorial author has completed a step recording, they can say "Review Step" to enter review mode for the step. The 1st person video just recorded plays on a loop across from the author, automatically repositioning itself such that the author can look up at any point and see the video directly in front of them. This allows the author to draw upon their own experience when adding manual augmentations. Existing augmentations (e.g., callout points and fasteners) shift into focus by getting larger or expanding when the user is looking closer to that augmentation than any other; this eliminates the need to focus on small points (approximately 3 cm) to engage with them. When looking toward a particular object, the available commands to update the augmentation are shown in the top-right of the heads-up display.
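The expand-on-gaze behavior amounts to a nearest-augmentation query: among all augmentation positions, focus the one closest to the gaze direction. The angular criterion below is our own illustrative choice; the paper only states that the augmentation the user is looking closest to expands.

```python
import math

def focused_augmentation(head, gaze, augmentations):
    """Return the index of the augmentation nearest the gaze ray.

    head: head position; gaze: unit gaze direction;
    augmentations: list of augmentation positions.
    """
    best, best_cos = None, -1.0
    for i, pos in enumerate(augmentations):
        to_aug = [p - h for p, h in zip(pos, head)]
        norm = math.sqrt(sum(c * c for c in to_aug))
        if norm == 0.0:
            continue
        cos = sum(g * c for g, c in zip(gaze, to_aug)) / norm
        if cos > best_cos:        # larger cosine = smaller angle to gaze
            best, best_cos = i, cos
    return best
```

Ranking by angle rather than absolute distance means the author never has to hold the cursor exactly on a ~3 cm sphere to select it.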
+
+After recording the step, the tutorial author may want to draw the tutorial user's attention to a particular point, similar to how this is done in 2D paper-based instructions (Figure 6). To do this, the author focuses the gaze-based cursor on the specific point on the tracked object where the callout point should go and says "Add Point". This adds a small virtual sphere anchored by its relative position on the object. The process of adding a captioned image (Figure 7) begins when the user looks toward a callout point and says "Add Picture". The system then starts a countdown from three in the middle of the heads-up display to indicate that a picture is about to be taken, and then displays "Hold still!" when the countdown is complete, at which point the HoloLens captures the current frame.
+
+The collected image is saved and immediately loaded over the callout point. The author can then say "Add Text", and a prompt in the middle of the heads-up display shows "Speak the image caption" and the HoloLens begins listening for recorded dictation. After a pause of 3 seconds, the HoloLens finishes recording and immediately associates the spoken text as the image caption.
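Anchoring a callout point "by its relative position on the object" amounts to storing it in object-local coordinates, so that it follows the physical piece as it moves. A minimal sketch, assuming object poses are given as a position plus a unit quaternion (x, y, z, w):

```python
def conjugate(q):
    """Inverse of a unit quaternion (x, y, z, w)."""
    x, y, z, w = q
    return (-x, -y, -z, w)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    t = (2 * (y * v[2] - z * v[1]),
         2 * (z * v[0] - x * v[2]),
         2 * (x * v[1] - y * v[0]))
    return (v[0] + w * t[0] + y * t[2] - z * t[1],
            v[1] + w * t[1] + z * t[0] - x * t[2],
            v[2] + w * t[2] + x * t[1] - y * t[0])

def world_to_local(obj_pos, obj_rot, world_point):
    """Store a callout point relative to the tracked object."""
    d = tuple(wp - op for wp, op in zip(world_point, obj_pos))
    return rotate(conjugate(obj_rot), d)

def local_to_world(obj_pos, obj_rot, local_point):
    """Re-place the callout point as the object moves."""
    r = rotate(obj_rot, local_point)
    return tuple(op + c for op, c in zip(obj_pos, r))
```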
+
+
+Figure 7: After adding a callout point, that point has a canvas to fill in (top). The author can add a picture (left), and a caption (right) and then has a completed callout point (bottom).
+
+Authors can describe a callout point as a negative example or a warning simply by looking toward the callout point and saying "Warning". This defines the callout point as a warning or negative example and prompts tutorial users to be extra attentive when traversing the tutorial. The callout point turns red and the associated image is given a red border (Figure 8). Authors are also able to move the point along the surface of any tracked piece or delete it entirely.
+
+Though fastening components together is an important aspect of an assembly tutorial, fastener objects (e.g., screws, nails) are too small for traditional object tracking technologies. During step recording, AuthAR records use of the tracked screwdriver and detects when it was used on a tracked material, generating a virtual screw hole at that location. Making use of these generated screw holes, the author can associate a virtual screw with a hole by looking toward the hole and saying "Add Screw" (Figure 9). The user cycles through possible virtual screw representations by saying "Next" and "Previous".
+
+The author is able to hold the physical screw up to the virtual one for comparison and say "This One" to associate the screw with that hole. Authors can also manually add new fasteners in areas that cannot be tracked. To do so, the author once again says "Add Screw", pulling up the same menu, but when a screw has been selected, the ray-casted, gaze-based cursor allows the user to manually place the virtual screw where it was physically placed. This is useful for the laptop stand, for example, as there are rubber feet that need to be screwed in by hand rather than with the tracked screwdriver. Saying "Finish Review" brings the user back to the original idle state, at which point the user has completed one iteration of the step building process.
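The screw-hole generation described during step recording might look like the sketch below: whenever a tip sample comes within a contact threshold of a tracked piece, a hole is recorded there, deduplicating nearby hits. The 1 cm threshold and the point-sampled piece surface are our own assumptions; the paper does not give exact values.

```python
import math

HOLE_THRESHOLD_M = 0.01   # assumed tip-to-piece contact distance (1 cm)

def detect_screw_holes(tip_samples, piece_points):
    """Return deduplicated hole locations where the tip touched the piece.

    tip_samples: screwdriver tip positions over time;
    piece_points: sampled surface points of a tracked piece.
    """
    holes = []
    for tip in tip_samples:
        for p in piece_points:
            if math.dist(tip, p) < HOLE_THRESHOLD_M and all(
                    math.dist(p, h) > HOLE_THRESHOLD_M for h in holes):
                holes.append(p)    # new hole, not near an existing one
    return holes
```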
+
+
+Figure 8: Tutorial author setting a warning about fragile glass using a red callout point and a red border.
+
+§ 6 DISCUSSION
+
+Toward validating AuthAR, we discuss our initial observations in testing with tutorial authors, present an example application that parses and displays the generated tutorial for end users, and explain extensibility beyond the presented use case. In doing so, we consider improvements to AuthAR, and design considerations for other in situ AR content authoring tools.
+
+§ 6.1 INITIAL USER FEEDBACK
+
+To gather initial observations and feedback of the system, we asked two users to generate a tutorial for an Ikea laptop stand. Though we have not formally evaluated AuthAR for usability, this initial feedback provides insight into possible improvements to AuthAR and generalized takeaways in building such systems. The standard paper instructions for the stand consist of four steps: fastening the legs to the bottom base, fastening the top to the legs, adding screw-on feet to the bottom of the structure and adding glass to the top. We guided the users through using the system while they created an AR tutorial for assembling this piece of furniture.
+
+Both testers found the system helpful for generating tutorials, with one noting that "being able to take pictures with the head for annotations was really useful." This suggests that the embodied gaze-based interaction is particularly well-suited to picture and video recording. Most of the functionality for making refinements in the tutorial is enabled by the user looking anywhere near the objects; however, adding new callout points requires accurately hovering the cursor on the object of interest while speaking a command. One user mentioned that it was "kind of awkward to point at certain points with the head". In such systems that require precise placement of virtual objects on physical components, pointing at or touching the position where the callout point should go would be a useful improvement over a gaze-only approach.
+
+Though it fulfills the hands-free requirement of a tutorial generation system (D1), AuthAR's use of dictation recognition for text entry was particularly challenging for users, in part due to the automated prompting for step descriptions and titles. One participant was surprised by the immediate prompt for a step description, saying that "it was hard to formulate something articulate to say by the time it had finished recording". Future iterations will likely have users explicitly start dictation recognition so they are prepared to give a description.
+
+
+Figure 9: User adding virtual screws to the tutorial. The user can hold the physical screw up to the virtual one for comparison (top), and if the screw hole was not automatically generated, the user can place the screw via ray-cast from the headset (middle).
+
+While we focus on authoring the tutorial in real time, the users wanted to view and refine previous steps. One possible enhancement to an in situ content authoring tool would be a "navigation prompt to let you jump between steps and review your work." With AuthAR, this would provide users the ability to review and refine the full tutorial at a high level, including previous steps.
+
+§ 6.2 TUTORIAL PLAYBACK/WALKTHROUGH
+
+Though we focus on authoring of assembly tutorials, we also implemented and tested a playback mode to validate AuthAR’s ability to generate working tutorials. The tutorial author can save a tutorial after finishing a step, and AuthAR generates an XML representation of the entire tutorial. We load this XML into the playback application, also built for the HoloLens, and components of the steps built earlier are loaded and displayed for the end user to follow.
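A minimal sketch of such an XML serialization using the Python standard library is shown below. The element and attribute names are illustrative only; the paper does not document AuthAR's actual schema.

```python
import xml.etree.ElementTree as ET

def tutorial_to_xml(steps):
    """Serialize a list of step dicts into an XML string.

    Each step is assumed to carry a description, a first person video
    path, and optional screw locations (hypothetical field names).
    """
    root = ET.Element("tutorial")
    for i, step in enumerate(steps):
        s = ET.SubElement(root, "step", index=str(i))
        ET.SubElement(s, "description").text = step["description"]
        # videos are referenced by file path, as described above
        ET.SubElement(s, "video", perspective="first",
                      path=step["first_person_video"])
        for x, y, z in step.get("screws", []):
            ET.SubElement(s, "screw", x=str(x), y=str(y), z=str(z))
    return ET.tostring(root, encoding="unicode")
```

Because videos are referenced by path rather than embedded, the playback device must have the referenced files on its own file system, which is why the third person video currently needs a manual copy.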
+
+The guidance techniques are simple but demonstrate the success of the authoring process and the portability of the generated tutorial. Our playback application projects lines from each piece's location to where that piece needs to go (Figure 10a). When the end user of the tutorial has correctly placed the object, a notification appears in front of them (Figure 10b). The user also receives guidance on where to add screws, and when the playback application recognizes a screw event in that location, the screw augmentation disappears (Figure 10c).
+
+
+Figure 10: Example playback of a generated tutorial.
+
+Both first person and third person video representations play on a loop across the table from the user (Figure 10d). As the user walks around the table, the videos adjust their positions such that the user can always look up and see the videos straight across from them. Use of the third person video is currently the only tutorial component that requires post-processing after the in situ authoring process is complete. Because the XML representation of the tutorial uses file paths for videos, the author needs to manually move the video from the Android tablet to the headset's file system. Future iterations could automatically stream these videos between components.
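Keeping the videos "straight across" from the user can be sketched as placing the panel on the far side of the table center along the horizontal user-to-center direction. The fixed panel radius below is an assumed value, not taken from the paper.

```python
import math

PANEL_RADIUS_M = 1.5   # assumed distance from table center to video panel

def panel_position(user_pos, table_center):
    """Place the video panel opposite the user across the table center."""
    dx = table_center[0] - user_pos[0]
    dz = table_center[2] - user_pos[2]
    norm = math.hypot(dx, dz) or 1.0        # avoid divide-by-zero
    return (table_center[0] + PANEL_RADIUS_M * dx / norm,
            table_center[1],                 # keep the panel at table height
            table_center[2] + PANEL_RADIUS_M * dz / norm)
```

Recomputing this every frame (and facing the panel back toward the user) yields the behavior described above: as the user walks around the table, the videos stay directly across from them.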
+
+§ 6.3 AUTHAR EXTENSIBILITY
+
+We demonstrate AuthAR with assembly of an Ikea laptop stand but note that it could be extended to any physical task for which simplified virtual models of the pieces can be built or obtained. All of the pieces used for the laptop stand were scaled cubes or combinations of scaled cubes, with disabled "Renderer" components to create the illusion of adding virtual augmentations to physical parts, when in reality there were invisible objects overlaid on top of the physical pieces. So simply loading in pieces and disabling the visual renderings allows for extensibility to virtually any assembly task.
+
+While demonstrated with an augmented screwdriver, AuthAR could be extended to support different tools, or an integrated workspace of smart tools [23, 37]. We employed very simple logic: whenever the tip of the screwdriver hovers near a piece, AuthAR adds a screw hole. Future iterations could employ more advanced logic for detecting tool usage. The need for a large truss with 10 Optitrack cameras enables very accurate tracking of visual markers but limits AuthAR's widespread deployment in its current state. For practical use we envision the localization of objects being done with the headset only, or with cheaper external optical tracking, perhaps with black and white fiducial markers. In this scenario, tracked pieces could be established in real time through user-assisted object recognition [24], rather than defining their shapes prior to running AuthAR.
+
+The in situ authoring approach offered by AuthAR allows the user to craft the tutorial concurrently while assembling the pieces. However, the gaze/voice multimodal interface does not provide users with efficient tools to fine-tune the generated tutorial. To this end, AuthAR should be complemented by a 2D interface for precise editing of individual components of the tutorial, similar to tutorial generation tools described in previous work [6, 7, 16]. Trimming 1st and 3rd person perspective videos, cropping images, and editing text are currently not well supported by AuthAR and would be better suited to a mouse and keyboard interface. This complementary approach would also allow users to focus on coarse-grain tutorial generation and object assembly without being burdened by smaller edits that can easily be done after the fact.
+
+§ 7 CONCLUSION
+
+AuthAR enables tutorial authors to generate mixed media tutorials semi-automatically to guide end users through an assembly process. We automatically record expert demonstration where possible and allow for in situ editing for refinements and additions. We built AuthAR with several design guidelines in mind, validated it by authoring a tutorial for assembling a laptop stand, and discussed its extensibility to other assembly tasks by simply loading different virtual models into AuthAR. We see AuthAR enabling mixed media tutorials that can reach a widespread population while remaining flexible to the preferences of each individual user.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/NeCyD0NqHs/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/NeCyD0NqHs/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..3106a517673536c4fd5a8ab4a6e0446c4d241442
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/NeCyD0NqHs/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,380 @@
+# Fine Feature Reconstruction in Point Clouds by Adversarial Domain Translation
+
+Prashant Raina* Tiberiu Popa Sudhir Mudur
+
+Department of Computer Science and Software Engineering
+
+Concordia University
+
+## Abstract
+
+Point cloud neighborhoods are unstructured and often lacking in fine details, particularly when the original surface is sparsely sampled. This has motivated the development of methods for reconstructing these fine geometric features before the point cloud is converted into a mesh, usually by some form of upsampling of the point cloud. We present a novel data-driven approach to reconstructing fine details of the underlying surfaces of point clouds at the local neighborhood level, along with normals and locations of edges. This is achieved by an innovative application of recent advances in domain translation using GANs. We "translate" local neighborhoods between two domains: point cloud neighborhoods and triangular mesh neighborhoods. This allows us to obtain some of the benefits of meshes at training time, while still dealing with point clouds at the time of evaluation. By resampling the translated neighborhood, we can obtain a denser point cloud equipped with normals that allows the underlying surface to be easily reconstructed as a mesh. Our reconstructed meshes preserve fine details of the original surface better than the state of the art in point cloud upsampling techniques, even at different input resolutions. In addition, the trained GAN can generalize to operate on low resolution point clouds even without being explicitly trained on low-resolution data. We also give an example demonstrating that the same domain translation approach we use for reconstructing local neighborhood geometry can also be used to estimate a scalar field at the newly generated points, thus reducing the need for expensive recomputation of the scalar field on the dense point cloud.
+
+Index Terms: Computing methodologies-Computer graphics-Shape modeling-Point-based models; Computing methodologies-Machine learning-Machine learning approaches-Neural networks
+
+## 1 INTRODUCTION
+
+Point clouds and meshes are two representations of 3D surfaces that have long coexisted in the fields of computer graphics and computer vision. Point clouds are easier to acquire from the real world; however, they lack most of the geometric information that makes 3D meshes indispensable. The connectivity information provided by meshes allows one to easily calculate normals, curvatures, and other geometric properties of the underlying surface. Meshes can also be remeshed or resampled to arbitrary precision. This gap between point clouds and meshes has traditionally been bridged by fitting parametric or implicit surfaces to point cloud neighborhoods. However, recent advances in deep learning have now made it possible to propose data-driven approaches to transforming data between these two domains.
+
+We present here a simple and elegant approach to reconstructing fine features of surfaces sampled as point clouds. By "fine features", we refer specifically to features smaller than the separation between sampled points (an example would be the eye in Figure 1). Naturally, it is impossible to recover all fine features in the general case, as undersampling causes information to be lost. However, the field of deep learning enables us to train a machine learning model on a large dataset and obtain a learned prior which will allow us to "hallucinate" fine details that would be expected from the underlying distribution of the dataset.
+
+
+
+Figure 1: A sparse point cloud (red points) sampled from a detailed surface. Fine features such as the eye that lie between the sampled points are challenging to reconstruct.
+
+Our approach leverages generative adversarial networks (GANs) for domain translation. GANs are a class of deep neural networks that have shown great potential in synthesizing realistic novel images that appear to be sampled from a particular domain of image data. One variant of the GAN architecture that is particularly interesting to us is the conditional GAN architecture. Conditional GANs have been successfully applied to domain translation, i.e. transforming images between two very different but related domains [11]. Some examples of domain translation include translating sketches of handbags to photographs of handbags, street maps to satellite images, or summer landscape photographs to winter landscape photographs. We tackle the fine feature reconstruction problem at the local neighborhood level, by framing it as a domain translation problem between two kinds of local heightmaps: "sparse" heightmaps, which are sampled from point clouds, and "dense" heightmaps, which are sampled from meshes using raycasting. This is an atypical domain translation problem because quantitative accuracy is extremely important; by contrast, the results in the aforementioned traditional domain translation problems need only appear qualitatively plausible.
+
+Our main contribution is in adapting existing work on domain translation to the problem of reconstructing fine features from low-resolution point clouds. Our feature reconstruction results are superior to the state-of-the-art methods [22, 28], which use a completely different patch-based approach along with much more complex neural network architectures. Our method also runs in significantly less time than the most recent state-of-the-art method [22].
+
+---
+
+*e-mail: prashantraina2005@gmail.com
+
+---
+
+Furthermore, our method generalizes well to low-resolution point clouds, even when all the training inputs are sampled from high-resolution point clouds. This highlights the robustness of our domain translation approach. The implications of this method go beyond point positions, and we show that it can easily be extended to interpolate values of a scalar field at the newly created points in the dense point cloud. As a bonus, we can also easily obtain normals at the points in a dense heightmap.
+
+Our paper is organized as follows: Section 2 recaps the most relevant related work in point cloud consolidation, as well as domain translation. Section 3 describes in detail how we generate the two types of heightmaps and perform domain translation. Section 4 shows the surface reconstruction results we obtain from applying heightmap domain translation to point cloud neighborhoods. A detailed quantitative evaluation of the results and comparison with previous methods is given in Section 5. We briefly describe an application to upsampling of a scalar field in Section 6, before giving our conclusions in Section 7. There are also two appendices, A and B, which give neural network training details and additional figures of results.
+
+## 2 RELATED WORK
+
+Image-to-image translation using deep learning has gained a lot of attention since Isola et al published their seminal work [11] on using conditional adversarial networks for translating between image domains given a training set of paired examples. This quickly led to more work on unpaired image translation [31], as well as image translation between more than two domains [2]. There have also been attempts to bring GAN-based domain translation methods to other domains such as natural language text [27], audio [9] and voxel-based 3D data [4]. Some recent work on point clouds such as FoldingNet [26] and AtlasNet [6] can also, in some sense, be regarded as precursors to domain translation between point clouds and explicit surfaces.
+
+Reconstructing fine features from point clouds can take different forms depending on how the point cloud is sampled. In some cases, the points are essentially pixels obtained from a depth image [10,23], which means that the points have a 2D grid structure that can be treated as a Euclidean domain for discrete convolution. Furthermore, depth images are almost invariably accompanied by color or intensity images, which provide vital information that is exploited in past work. In our work, we focus on the more general problem of reconstructing detailed surfaces from completely unstructured point clouds by first increasing the density of the point clouds before attempting to reconstruct the surface. This is most closely related to the problem of point cloud consolidation, where the goal is to obtain a point cloud representation from which an accurate 3D mesh can be reconstructed [7]. A variety of procedural point cloud consolidation methods have been proposed, including LOP [15], WLOP [7], EAR [8] and deep points consolidation [24]. All of these methods involve fitting local geometry to point clouds, and WLOP and EAR have been incorporated into popular geometry processing libraries.
+
+Recent years have seen a wave of interest in applying deep learning to point clouds, sparked by the success of PointNet [16] and its multi-scale variant, PointNet++ [17], in the field of point cloud classification and semantic segmentation. This has led to point cloud consolidation [20] and upsampling [30] approaches based on PointNet, as well as a family of point cloud upsampling techniques based on PointNet++ that includes PU-Net [29], EC-Net [28] and 3PU [22]. Li et al have concurrently developed a GAN-based point cloud upsampling method [14] which uses generator and discriminator architectures loosely based on PU-Net and 3PU. By contrast, our work uses local heightmaps, much like Roveri et al [20]. However, Roveri et al focus on consolidation of already dense point clouds with 50 thousand points, while we are mainly interested in reconstructing fine features from much sparser point clouds (5000 points
+
+
+
+Figure 2: Heightmaps sampled from the fandisk model. The sparse and dense heightmaps have different color maps to reflect the fact that sparse heightmaps are not occupied at all pixels. Top row: sparse heightmaps obtained from the point cloud. Middle row: dense heightmaps predicted by our GAN. Bottom row: ground truth heightmaps obtained by casting rays onto the original mesh.
+
+or less). Moreover, they do not use domain translation, since their ground truth data is also obtained from point clouds. We compare our work to the publicly available implementations of EC-Net [28] and 3PU [22], since they are the most recently published state-of-the-art works. 3PU is claimed to have superior results to all previous point cloud upsampling work.
+
+## 3 HEIGHTMAP DOMAIN TRANSLATION
+
+The key idea behind our method is that reconstructing fine features of point cloud neighborhoods can be reduced to an image-to-image translation problem between two kinds of local heightmaps (examples shown in Figure 2):
+
+1. Sparse heightmaps, which are sampled from point clouds. By "sparse", we mean that not all pixels in the heightmap are occupied. This follows naturally from the fact that point clouds have gaps between points. In practice, we also set an upper limit on the number of points contributing to a sparse heightmap, which makes our method more robust while also speeding up computation of the sparse heightmaps. Therefore, they are also sparse in the conventional sense that there are $O(1)$ points represented in the heightmap.
+
+2. Dense heightmaps, which are sampled from meshes. Since a mesh defines the surface continuously over a neighborhood, we can obtain a heightmap by casting rays onto the mesh. It is worth noting that this raycasting approach can easily be modified to obtain other kinds of local "maps", such as scalar or vector fields defined over the mesh.
+
+In this section, we first explain how these two types of heightmaps are computed (Section 3.1). We then describe how we translate heightmaps from sparse to dense, using a well-known image-to-image translation approach that exploits a conditional GAN (Section 3.2). We finally show how our overall approach has additional benefits for estimating normals (Section 3.3).
+
+### 3.1 Local Heightmap Computation
+
+Some aspects of heightmap computation are common to both the sparse and dense local heightmaps. The heightmap is assigned a size of $k \times k$ pixels, where we generally select $k = 64$. The side of the heightmap corresponds to a length of $2r$ in the space of the input shape, where $r$ is the search radius used to collect nearest neighbors from the point cloud. The local coordinate frame of the neighborhood is centered on a particular point of the point cloud, which has a corresponding oriented normal $\mathbf{n}$. The horizontal and vertical directions of the local frame are given by an arbitrary tangent $\mathbf{t}$ and its corresponding bitangent $\mathbf{b} = \mathbf{n} \times \mathbf{t}$. In our experiments, the tangents are randomly rotated in the tangent plane at both training and testing time, so that there is no bias introduced by the choice of tangent direction.
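
As an illustration of this frame construction, the sketch below builds an orthonormal frame $(\mathbf{t}, \mathbf{b}, \mathbf{n})$ around an oriented normal and applies a random in-plane rotation to the tangent. The function name `local_frame` and the choice of seed vector for the initial tangent are our own assumptions, not the paper's implementation.

```python
import numpy as np

def local_frame(n, rng=None):
    """Build a local tangent frame (t, b, n) around an oriented normal n.

    The tangent direction is arbitrary, so (as described in the paper) it is
    randomly rotated in the tangent plane to avoid biasing the heightmap
    orientation. Illustrative sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = n / np.linalg.norm(n)
    # Pick any vector not parallel to n and project it into the tangent plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = a - np.dot(a, n) * n
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    # Random in-plane rotation of the tangent, applied at train and test time.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    t, b = np.cos(theta) * t + np.sin(theta) * b, -np.sin(theta) * t + np.cos(theta) * b
    return t, b, n
```

The rotated pair still satisfies $\mathbf{b} = \mathbf{n} \times \mathbf{t}$, so the frame remains right-handed regardless of the random angle.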
+
+#### 3.1.1 Sparse Heightmap Computation
+
+For computing sparse heightmaps from point clouds, we adapt a simple and efficient representation used in earlier works [19,20]. A random set of neighboring points (limited to 100) is chosen from within the search radius $r$. Since our sampled points have consistent associated normals, we can omit neighbors with back-facing normals. The neighborhood is scaled by a factor of $1/r$ to reduce dependence on scale. These points are then projected orthogonally onto the local tangent plane, i.e. the plane of the heightmap image. For each pixel in the heightmap image, we can easily compute the corresponding pixel center in the local coordinate frame of the neighborhood. We can then compute the intensity of each pixel as the weighted average of the signed distances of nearby projected points from their original positions. The unnormalized weights have a Gaussian falloff $w_i = \exp\left(-\frac{d_i^2}{2\sigma^2}\right)$, where $d_i$ are the distances of the projected points from the pixel center in the image plane (we set $\sigma = 5r/k$). A constant value 1 is added to all projection heights, so that the value 0 is reserved for unoccupied pixels. As shown in Section 6, the same approach can also be used to generate a sparse map, not for height, but for a scalar field.
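
The splatting procedure above can be sketched as follows. This is a hypothetical implementation: the function name, the small-weight cutoff used to keep far pixels unoccupied, and the omission of the back-facing-normal test are our assumptions, not the authors' code.

```python
import numpy as np

def sparse_heightmap(points, center, t, b, n, r, k=64, max_points=100, rng=None):
    """Splat up to max_points neighbors of `center` into a k x k sparse
    heightmap (sketch of Section 3.1.1). The neighborhood is scaled by 1/r,
    heights are Gaussian-weighted averages of signed distances + 1, and
    unoccupied pixels stay 0. Back-facing-normal culling is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = points - center
    nearby = d[np.einsum('ij,ij->i', d, d) <= r * r]
    if len(nearby) > max_points:
        nearby = nearby[rng.choice(len(nearby), max_points, replace=False)]
    nearby = nearby / r                        # scale by 1/r
    u, v, h = nearby @ t, nearby @ b, nearby @ n
    c = (np.arange(k) + 0.5) * (2.0 / k) - 1.0  # pixel centers in [-1, 1]
    cu, cv = np.meshgrid(c, c, indexing='ij')
    sigma = 5.0 / k                            # sigma = 5r/k, in units of r
    num = np.zeros((k, k))
    den = np.zeros((k, k))
    for ui, vi, hi in zip(u, v, h):
        w = np.exp(-((cu - ui) ** 2 + (cv - vi) ** 2) / (2.0 * sigma ** 2))
        w = np.where(w < 1e-6, 0.0, w)         # assumed cutoff: far pixels stay empty
        num += w * (hi + 1.0)                  # +1 reserves 0 for unoccupied pixels
        den += w
    img = np.zeros((k, k))
    np.divide(num, den, out=img, where=den > 0)
    return img
```

Because the weights are normalized, a lone point on the tangent plane produces intensity exactly 1 at nearby pixels (height 0 plus the constant offset).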
+
+#### 3.1.2 Dense Heightmap Computation
+
+Given a point on a surface represented by a mesh (which need not be a vertex), and its corresponding normal, we can use raycasting to generate a dense heightmap. We first transform the center of each heightmap pixel to the space of the mesh, using the local tangent frame mentioned earlier. From each pixel center, we shoot two rays in opposite directions perpendicular to the tangent plane. If both rays intersect the mesh, we choose the nearer intersection point. The intensity of the pixel is the signed distance of the pixel center to the intersection point, or a fixed large value (10) in the event that neither ray intersects the mesh. These raycasting operations are performed efficiently using the Embree raycasting framework [21], which also provides us with the intersected triangle ID and the barycentric coordinates of the intersection point. This additional information can be used to interpolate scalar or vector fields previously computed on the vertices of the mesh, in order to generate other kinds of dense local maps.
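
A minimal sketch of this raycasting step is shown below, with Embree replaced by a brute-force Möller–Trumbore ray/triangle test purely for illustration; all function names are our own, and normalization of the heights is omitted.

```python
import numpy as np

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = o - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def dense_heightmap(tris, center, t_axis, b_axis, n, r, k=64, miss=10.0):
    """Raycast a k x k dense heightmap from a triangle soup (stand-in for
    Embree, Section 3.1.2). Each pixel center is transformed into mesh space,
    rays are shot along +n and -n, and the nearer hit gives the signed
    distance; pixels missed by both rays get the fixed value `miss`."""
    img = np.full((k, k), miss)
    c = (np.arange(k) + 0.5) * (2.0 * r / k) - r   # pixel centers in [-r, r]
    for i in range(k):
        for j in range(k):
            o = center + c[i] * t_axis + c[j] * b_axis
            hits = []
            for d in (n, -n):
                for v0, v1, v2 in tris:
                    h = ray_triangle(o, d, v0, v1, v2)
                    if h is not None:
                        hits.append(h if d is n else -h)
            if hits:
                img[i, j] = min(hits, key=abs)      # nearer hit, signed
    return img
```

An Embree-backed version would additionally return the triangle ID and barycentric coordinates of each hit, which is what enables interpolating precomputed fields over the mesh.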
+
+### 3.2 Image-to-Image Translation
+
+We use a conditional generative adversarial network (cGAN) to perform image-to-image translation between our two heightmap domains, following the approach by Isola et al [11]. The outputs of both the generator $G(x, z)$ and discriminator $D(x, y)$ are conditioned on the input image $x$. The discriminator does not attempt to classify the entire input image as real or fake, but rather classifies individual patches ($8 \times 8$ in the case of our dense heightmaps). The generator must minimize two losses:
+
+1. The GAN loss, $\mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]$, which preserves high-frequency similarities between $G(x, z)$ and the corresponding ground-truth image $y$.
+
+2. The $\ell_1$ loss, $\mathbb{E}_{x,y,z}\left[\left\lVert y - G(x, z)\right\rVert_1\right]$, which preserves low-frequency similarities with the ground truth.
+
+In practice, the noise vector $z$ is introduced implicitly by random dropout of neurons with 50% probability. It is worth noting that the sparse heightmaps we use contain incomplete information which
+
+
+
+Figure 3: Examples of training models from SketchFab.
+
+
+
+Figure 4: a) Ground truth head model. b) 5000 points sampled to act as testing input. c) Poisson reconstruction of the input point cloud, showing the loss of features due to undersampling.
+
+could imply multiple different dense heightmaps. We therefore have the option of enabling neuron dropout at the time of evaluating the network, in order to obtain a stochastic output.
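
The combined generator objective can be sketched numerically as below. This is a hedged illustration: the non-saturating form of the GAN term and the weight `lam = 100` are the common pix2pix defaults, not values taken from this paper (its hyperparameters are in Appendix A), and the function name is our own.

```python
import numpy as np

def generator_loss(d_fake, fake, real, lam=100.0):
    """pix2pix-style generator objective: a GAN term over the discriminator's
    per-patch outputs plus a weighted L1 term against the ground-truth dense
    heightmap. d_fake holds discriminator probabilities D(x, G(x, z)) for
    each patch; lam = 100 is an assumed default, not the paper's value."""
    eps = 1e-12
    gan = -np.mean(np.log(d_fake + eps))   # fool the patch discriminator
    l1 = np.mean(np.abs(real - fake))      # low-frequency fidelity to ground truth
    return gan + lam * l1
```

When the discriminator is fully fooled (`d_fake` near 1) and the translation matches the ground truth, both terms vanish; any residual $\ell_1$ error dominates through the large weight.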
+
+### 3.3 Prediction of Normals
+
+The regular grid structure of a heightmap has the additional benefit of allowing easy computation of normals at each point on the heightmap. Given a dense heightmap, we can use backward differences to estimate gradients in the tangent and bitangent directions. This gives us the approximate normal at each point as the direction of the vector $\left(\frac{\partial h}{\partial x}, \frac{\partial h}{\partial y}, \frac{2r}{k}\right)$. This approximate normal map proved to be sufficient for our purposes, although it could also be refined using a neural network.
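
A small sketch of this normal estimation follows. We read the gradients as per-pixel backward differences, with the third component $2r/k$ (the spatial size of one pixel) bringing all three components to a consistent scale; the function name and the boundary handling are our assumptions.

```python
import numpy as np

def heightmap_normals(h, r, k=None):
    """Per-pixel normals of a dense heightmap via backward differences
    (Section 3.3): the direction of (dh/dx, dh/dy, 2r/k). Heights are
    assumed to be in the same units as r; border pixels reuse a zero
    difference, an assumption of this sketch."""
    k = h.shape[0] if k is None else k
    s = 2.0 * r / k                        # spatial size of one pixel
    dx = np.zeros_like(h)
    dy = np.zeros_like(h)
    dx[1:, :] = h[1:, :] - h[:-1, :]       # backward difference along tangent
    dy[:, 1:] = h[:, 1:] - h[:, :-1]       # backward difference along bitangent
    n = np.stack([dx, dy, np.full_like(h, s)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

On a flat heightmap every normal collapses to the frame normal, as expected.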
+
+## 4 FINE FEATURE RECONSTRUCTION
+
+We trained our GAN on local neighborhoods sampled from a set of 90 meshes of statues obtained from SketchFab. The meshes are identical to the training set used by Wang et al [22] for training 3PU. These meshes were generated by 3D-scanning statues of people and animals (Figure 3). Training details such as hyperparameters are given in Appendix A. For evaluation, we used a separate set of 16 meshes of statues from SketchFab. These include all 13 testing meshes used by Wang et al [22], as well as 3 additional meshes that we procured. All ground truth meshes have several hundred thousand vertices.
+
+To obtain a sparse set of points for reconstructing fine geometric features, we randomly sample a fixed number of points from a test mesh using Poisson disk sampling (using the implementation available in the VCG library [1]). Figure 4 shows an example of sampling these points. Normals for these points are estimated using PCA, based on the 30 nearest neighbors from the sparse point cloud.
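
The PCA normal estimation can be sketched as below, using a brute-force nearest-neighbor search in place of a spatial index. Note that this yields unoriented normals; consistently orienting them is a separate step not shown here, and the function name is our own.

```python
import numpy as np

def pca_normals(points, k=30):
    """Estimate an unoriented normal per point as the eigenvector with the
    smallest eigenvalue of the covariance of its k nearest neighbors
    (the paper uses 30 neighbors). Brute-force O(n^2) search, for clarity."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]        # k nearest, including the point
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        # eigh returns eigenvalues in ascending order, so column 0 is the
        # direction of least variance, i.e. the surface normal.
        w, v = np.linalg.eigh(q.T @ q)
        normals[i] = v[:, 0]
    return normals
```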
+
+After generating $64 \times 64$ sparse heightmaps for these sampled points using the method described in Section 3.1.1, we use our trained model to suggest a plausible $64 \times 64$ dense heightmap. Given
+
+
+
+Figure 5: Our domain translation and reconstruction results for the 5000 points in Figure 4.b, using different modes. a, c and e correspond to the detailed, superdense and even modes listed in Section 4. b, d and f are their respective reconstructions using screened Poisson surface reconstruction.
+
+the search radius as well as the normal and tangent vectors used to obtain the sparse heightmap, we can easily transform pixels of the dense heightmap back into points in the space of the original point cloud. Rather than converting all 4096 pixels into points, we take points from an $8 \times 8$ or $16 \times 16$ square in the middle of the generated heightmap. The rationale for this is that the conditional GAN is transforming heightmaps at the local level with no global information, and therefore we cannot expect the extremities of the newly generated heightmap to be accurate, even if it appears to be a plausible translation of the given sparse heightmap. Once normals are computed using the method in Section 3.3, the surface can then be easily reconstructed using a conventional surface reconstruction algorithm. In our case, we use screened Poisson reconstruction [12].
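
This back-transformation of the central square can be sketched as follows; the function name and argument layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def heightmap_to_points(h, center, t, b, n, r, crop=8, stride=2):
    """Convert the central crop x crop square of a k x k dense heightmap back
    into 3D points (the 'detailed' mode uses crop=8, stride=2). Pixel (i, j)
    with height h[i, j] maps to center + u*t + v*b + h[i, j]*n, where (u, v)
    is the pixel center in the local frame. Illustrative sketch."""
    k = h.shape[0]
    lo = (k - crop) // 2
    idx = np.arange(lo, lo + crop, stride)
    c = (idx + 0.5) * (2.0 * r / k) - r        # pixel centers in [-r, r]
    pts = []
    for i, u in zip(idx, c):
        for j, v in zip(idx, c):
            pts.append(center + u * t + v * b + h[i, j] * n)
    return np.array(pts)
```

With `crop=8` and `stride=2`, each input point yields 16 output points, i.e. 16x upsampling; `crop=16, stride=1` gives the 256x superdense variant.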
+
+We experimented with different combinations of heightmap search radius, size of the central square, and stride (a stride of 2 implies taking every alternate row and column). Note that $\delta$ is the median distance between a point and its nearest neighbor in the input point cloud, which we use as a measure of scale. Out of these combinations, we found three to be interesting:
+
+1. detailed mode: Radius $4\delta$, central $8 \times 8$, stride 2 (16x upsampling). This mode produces the most detailed-looking mesh when screened Poisson reconstruction is applied.
+
+
+
+Figure 6: Results for the statue Cupid Fighting, reconstructed from a sample of 5000 points from the ground truth mesh.
+
+2. superdense mode: Radius $4\delta$, central $16 \times 16$, stride 1 (256x upsampling). This mode produces an extremely dense point cloud, from which a reasonably accurate mesh can be reconstructed.
+
+3. even mode: Radius $8\delta$, central $8 \times 8$, stride 2 (16x upsampling). The high-resolution point cloud obtained with this mode appears to be the most evenly sampled.
+
+Figure 5 compares the results for the three modes, after processing the sampled points from Figure 4.b.
+
+### 4.1 Reconstruction Results
+
+Examples of our surface reconstruction results are shown in Figures 6 to 8. Note that no post-processing operations such as smoothing were applied in any of our figures. In Figures 6 and 7, we can see that our method is able to reconstruct fine details such as facial features from a sample of only 5000 points. In Figure 7, we can see that the eye of the lion from Figure 1 has been reconstructed, along with other fine features such as the teeth. Figure 8 shows an extreme example where we upsample a set of only 625 points, while still being able to reconstruct features such as the wings on the helmet. For comparison, we show the corresponding result using the latest state-of-the-art method by Wang et al [22], informally called 3PU. This method has already been shown to be superior to previous state-of-the-art approaches such as EC-Net [28] and PU-Net [29]. Additional results can be found in Appendix B.
+
+
+
+Figure 7: Reconstruction results for the statue Lion Étouffant Un Serpent. The 5000 input points are overlaid on the ground truth mesh.
+
+
+
+Figure 8: Reconstruction results for an extremely low-resolution sample of 625 points (in red) from Cupid Fighting. We compare against the best results we were able to obtain for 3PU, across multiple random samplings of 625 points.
+
+During our experiments on extremely low resolution point clouds, we naturally found that different random samplings of 625 input points for the same testing mesh give slightly different resulting meshes. Our method produced fairly consistent results across different random samplings, whereas the variation is much larger for the PointNet++-based approaches. Therefore, we have selected the best result we obtained for 3PU [22] to compare with our result in Figure 8. Figure 14 in Appendix B compares our results with 3PU for multiple different random samplings of 625 input points.
+
+## 5 EVALUATION
+
+We evaluate our surface reconstruction results quantitatively based on three metrics:
+
+1. Distance of each point in the high-resolution point cloud to the ground truth mesh (D2M).
+
+2. Hausdorff distance (HD): the maximum distance between a point on the reconstructed mesh and its nearest neighbor on the ground truth mesh:
+
+$$
+\max \left( \max_{p \in P} \min_{q \in Q} \left\lVert p - q \right\rVert^{2}, \; \max_{q \in Q} \min_{p \in P} \left\lVert p - q \right\rVert^{2} \right) \tag{1}
+$$
+
+3. Chamfer distance (CD) [5]: the mean distance between a point on the reconstructed mesh and its nearest neighbor on the ground truth mesh:
+
+$$
+\frac{1}{2} \left( \frac{1}{\left| P \right|} \sum_{p \in P} \min_{q \in Q} \left\lVert p - q \right\rVert^{2} + \frac{1}{\left| Q \right|} \sum_{q \in Q} \min_{p \in P} \left\lVert p - q \right\rVert^{2} \right) \tag{2}
+$$
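
The two symmetric metrics can be computed directly from two point samplings, as in this sketch. It uses a brute-force nearest-neighbor search and omits Meshlab's 5% bounding-box-diagonal outlier cutoff described below; the function name is our own.

```python
import numpy as np

def hausdorff_chamfer(P, Q):
    """Symmetric Hausdorff (Eq. 1) and Chamfer (Eq. 2) distances between two
    point samplings, using squared distances as in the equations above.
    Brute force; no outlier cutoff, unlike the Meshlab-based procedure."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    pq = d2.min(axis=1)                 # each p in P to its nearest q in Q
    qp = d2.min(axis=0)                 # each q in Q to its nearest p in P
    hd = max(pq.max(), qp.max())        # maximum of the two maxima
    cd = 0.5 * (pq.mean() + qp.mean())  # mean of the two means
    return hd, cd
```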
+
+We used Meshlab [3] to automate computation of all three metrics. The Hausdorff distance and Chamfer distance can be computed simultaneously by first randomly sampling a large number of points from the vertices, edges and faces of one mesh. The number of points sampled is equal to the number of vertices of the sampled mesh (several hundred thousand). We then find the mean and maximum distances between these points and the nearest vertices on the other mesh. Pairs of points are discarded if their separation is greater than 5% of the diagonal of the bounding box. The procedure is then repeated in the opposite direction, giving us two maxima and two means: the Hausdorff distance is the maximum of the two maxima, and the Chamfer distance is the mean of the two means.
+
+Note that for performing screened Poisson reconstruction on point clouds upsampled using EC-Net and 3PU, we estimate normals using PCA based on 30 nearest neighbors. This is not necessary for our method, since we obtain normals from the dense heightmap as mentioned in Section 3.3.
+
+In Table 1, we compare the results for the three modes for sampling our dense heightmaps, as defined in Section 4. To give a sense of scale, note that the point coordinates were normalized to lie within a unit cube. We first consider the detailed and even modes, which both produce 16 times the number of input points. Although the detailed mode produces the best reconstructed mesh, it is noteworthy that the even mode is not far behind, and even surpasses the detailed mode for very low resolution point clouds. Those seeking to apply our method to obtain a point cloud, and not a mesh, have a choice between using our even mode, or alternatively performing Poisson disk sampling on the mesh produced using our detailed mode. The superdense mode, which produces 256 times the number of input points, suffers somewhat when it comes to the distance to the ground truth mesh. However, it is not far behind the other modes when it comes to the Hausdorff and Chamfer distances.
+
+| Mode | D2M | HD | CD |
+|---|---|---|---|
+| Detailed – 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| Even – 5000 points | 3.96E-04 | 3.48E-02 | 1.42E-03 |
+| Superdense – 5000 points | 5.89E-04 | 3.35E-02 | 1.37E-03 |
+| Detailed – 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| Even – 2500 points | 4.38E-04 | 4.08E-02 | 2.11E-03 |
+| Superdense – 2500 points | 8.39E-04 | 4.00E-02 | 2.25E-03 |
+| Detailed – 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+| Even – 625 points | 1.10E-03 | 5.03E-02 | 5.25E-03 |
+| Superdense – 625 points | 2.93E-03 | 5.81E-02 | 6.23E-03 |
+
+Table 1: Quantitative comparison of reconstruction results using our three modes for domain translation of input point clouds with 5000, 2500 and 625 points. Note that the superdense mode produces 256 times the number of points, while the others increase the density by a factor of 16.
+
+| Size of point clouds | D2M | HD | CD |
+|---|---|---|---|
+| >300K: 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| 5K: 5000 points | 8.04E-04 | 3.26E-02 | 1.30E-03 |
+| 625: 5000 points | 1.17E-03 | 3.38E-02 | 1.54E-03 |
+| >300K: 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| 5K: 2500 points | 9.63E-04 | 3.72E-02 | 1.90E-03 |
+| 625: 2500 points | 1.46E-03 | 3.96E-02 | 2.17E-03 |
+| >300K: 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+| 5K: 625 points | 1.09E-03 | 5.04E-02 | 4.32E-03 |
+| 625: 625 points | 1.97E-03 | 5.54E-02 | 4.58E-03 |
+
+Table 2: Quantitative comparison of using different point cloud sizes when training: all mesh vertices (over 300K points), 5000 points sampled using Poisson disk sampling, or 625 points. The resulting metrics are compared for input point clouds with 5000, 2500 and 625 points.
+
+| Method | D2M | HD | CD |
+|---|---|---|---|
+| EC-Net – 5000 points | 3.42E-04 | 6.30E-02 | 3.89E-03 |
+| 3PU – 5000 points | **2.91E-04** | 3.63E-02 | 1.32E-03 |
+| Ours – 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| EC-Net – 2500 points | 6.56E-04 | 6.57E-02 | 5.80E-03 |
+| 3PU – 2500 points | 3.16E-04 | 4.86E-02 | 2.13E-03 |
+| Ours – 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| EC-Net – 625 points | 1.85E-03 | 5.84E-02 | 8.12E-03 |
+| 3PU – 625 points | 1.31E-03 | 5.55E-02 | 4.96E-03 |
+| Ours – 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+
+Table 3: Quantitative comparison of reconstructed surfaces after 16x upsampling using EC-Net, 3PU and our detailed mode for point clouds with 5000, 2500 and 625 points.
+
+| Method | D2M | HD | CD |
+|---|---|---|---|
+| 3PU – 5000 points | 0.081 | 1.83 | 0.440 |
+| Ours, Detailed – 5000 points | 0.066 | 1.26 | 0.426 |
+| Ours, Even – 5000 points | 0.113 | 1.36 | 0.443 |
+| 3PU – 2500 points | 0.161 | 2.77 | 0.485 |
+| Ours, Detailed – 2500 points | 0.112 | 1.71 | 0.442 |
+| Ours, Even – 2500 points | 0.181 | 1.88 | 0.478 |
+| 3PU – 625 points | 0.593 | 8.13 | 0.902 |
+| Ours, Detailed – 625 points | 0.325 | 3.48 | 0.599 |
+| Ours, Even – 625 points | 0.492 | 4.03 | 0.756 |
+
+Table 4: Quantitative comparison of reconstructed surfaces after 16x upsampling using 3PU, our detailed mode and our even mode on the ABC dataset [13], for point clouds with 5000, 2500 and 625 points.
+
+
+
+Figure 9: Two examples of reconstruction results obtained on the ABC dataset [13], after upsampling samples of 2500 points. Top row: ground truth. Bottom row: our results using the detailed mode.
+
+A unique feature of our method is that we can obtain good results on low resolution point clouds, even after training on high-resolution point clouds. Our training meshes are very dense, and each has over 300 thousand vertices. They can also be randomly downsampled to a lower resolution such as 5000 or 625 points during training. We have therefore investigated how different resolutions of the training point clouds affect the surface reconstruction results. After all, the resolution of the training point clouds does affect the variety of the sparse heightmaps that will be seen during training. Table 2 shows that using the entire set of mesh vertices during training usually produces the best results. Even in the case where the testing point clouds have 625 points, it is a GAN trained on a higher resolution (5000 points) which gives the best results. This is likely due to a combination of two factors: i) we use raycasting onto meshes to obtain our ground truth dense heightmaps, and ii) we randomly sample only 100 points from each local neighborhood to contribute to the sparse heightmap, thereby making our domain translation more robust.
+
+Table 3 compares the accuracy of the meshes produced by applying screened Poisson reconstruction on the dense output point cloud of our domain translation method, as well as other recent methods for point cloud upsampling. We choose the detailed mode of domain translation for comparison, since it upsamples point clouds by 16 times (thereby allowing a fair comparison with 3PU), and it also produces the most accurate reconstructed meshes. We compare our work with the results obtained using EC-Net [28], as well as the recent state-of-the-art method called 3PU by Wang et al [22]. Wang et al claim that 3PU gives better results than all previous work for 16x upsampling of point clouds. In order to upsample point clouds 16x using EC-Net, we apply 4x upsampling with EC-Net twice in succession, as Yu et al recommended to Wang et al [22]. As mentioned earlier, we use the same training data as Wang et al. We do not retrain EC-Net, as we do not have the edge annotations in our training data which are required by EC-Net. From Table 3, we can see that across all three metrics, our quantitative results are clearly superior to those of EC-Net. Our results are also superior to 3PU for the Hausdorff distance and Chamfer distance metrics, while still being competitive for the D2M metric.
+
+We have additionally performed a large-scale quantitative evaluation of our method on the first 1,000 OBJ files in the ABC dataset [13], whose results are summarized in Table 4. Figure 9 shows examples of our results on this dataset. We did not re-train our GAN or the 3PU network on this dataset, but rather used the same networks which were already trained on the aforementioned 90 models from SketchFab. The 3D models in the ABC dataset are densely sampled triangular meshes obtained from parametric CAD models. For models containing multiple connected components, only the largest connected component was used for the evaluation. The coordinates of these models are not normalized. Therefore, in order to make a fair aggregation of the results, we divided the computed distance metrics for each model by the average distance between pairs of neighboring mesh vertices. We also compute the median of each metric over all 1,000 models, to reduce the effect of outlier cases where the point clouds produced by 3PU do not work well with the normal estimation method (there were no such cases among the SketchFab testing models). The results show that our method gives reconstructed surfaces that are more accurate than those obtained using 3PU. Furthermore, the worst-case error given by the Hausdorff distance is comparable to the normalized distance of 1 between nearest neighbors in the ground-truth dense mesh, while the average-case errors are significantly smaller than 1.
+
+We also found our method to be extremely fast, taking only 3 minutes to upsample a large point cloud of 160K points with an unoptimized implementation. By contrast, the far more complicated neural network of 3PU took 214 minutes to perform the same task using the code provided by the authors. We note, however, that both methods have comparable speed for small point clouds. For instance, they both take around 20 seconds to upsample 5000 points.
+
+The aforementioned state-of-the-art point cloud upsampling methods are ultimately based on PointNet++ [17]. It is therefore worth noting that all upsampling methods based on PointNet++ share certain weaknesses:
+
+1. They are not invariant to permutations of the point cloud. This is because they all rely on farthest-point sampling as an initial step for their neural networks (see [25]). Our sparse heightmaps, on the other hand, are completely invariant to permutations of the points in the local neighborhood.
+
+2. They rely on large and complicated neural network architectures, which affects the computational efficiency. By contrast, we use a simple convolutional U-Net architecture for the GAN.
+
+3. They do not perform well in regions with unusually dense sampling. This is because after the farthest-point sampling step in each set abstraction layer, K-nearest neighbors are taken as representatives of the local neighborhood. Therefore, regions of high density will result in too many neighbors that are very close to the center point, thus giving a skewed picture of the local neighborhood. In our case, having many points mapping to the same pixels of the sparse heightmap will not significantly affect the pixel intensities.
+
+## 6 Scalar Field Upsampling
+
+Our method also has the potential to use the conditional GAN to predict values of a scalar or vector field corresponding to the dense heightmap at a given point. As an example, we show an application to the sharpness field defined by Raina et al. [18, 19]. The feature-aware smoothing method described in [19] is mainly concerned with the local maxima of the field, which are scale-invariant. This makes it simple to apply our GAN to locally predict the value of the sharpness field of a point cloud. If values of the sharpness field are pre-computed for the low-resolution point cloud, we can obtain a sparse image of the sharpness values simultaneously with the sparse heightmap (top row of Figure 10). By obtaining ground truth sharpness fields on meshes, we can then train a separate conditional GAN to predict the values of the sharpness field corresponding to all points of the dense heightmap (middle row of Figure 10).
+
+
+
+Figure 10: Sharpness field sampled from the fandisk model. Top row: sparse sampling of the sharpness field precomputed on the point cloud. Middle row: dense sharpness field predicted by our GAN. Bottom row: ground truth sharpness field obtained by casting rays onto the original mesh.
+
+In order to predict values of the sharpness field at the dense output points from our domain translation method, we trained a separate GAN to predict sharpness field values at the output points, using the same training procedure and data augmentation as the heightmap translation GAN. The only difference is that we obtained better results with a larger patch size of $16 \times 16$ for the discriminator. The training data is a set of meshes of simple geometric shapes, whose sharpness fields are computed using dihedral angles as described in [19]. We then pre-computed the sharpness field of the blade point cloud (80K points) using the CNN with a spatial transformer as recommended in [19], before performing domain translation using the even mode. The results we have obtained (Figure 11) show that our GAN gives a sharpness field with similar properties to a sharpness field computed from scratch on the dense point cloud after domain translation. Furthermore, the entire domain translation procedure along with the sharpness field estimation takes around 7 minutes, compared to 20 minutes if the sharpness field is recomputed from scratch on the dense point cloud. The combination of the dense heightmap, the normal map and the sharpness field together provide all the information necessary for downstream methods to accurately reconstruct the surface along with sharp features. We believe that our domain translation approach to upsampling has the potential for extension to other scalar fields and vector fields.
+
+## 7 CONCLUSION, LIMITATIONS AND FUTURE WORK
+
+In this work, we have applied GAN-based domain translation to enable us to reconstruct fine features of low-resolution point clouds. We have obtained results superior to the state of the art, while providing a very different approach from previous PointNet++-based work. We have also given a simple example demonstrating that the same method has the potential to be extended to upsampling of scalar fields. Our work demonstrates that tangible benefits can be obtained by applying the domain translation paradigm to 3D geometry problems, and not merely the typical domains associated with GAN-based domain translation.
+
+While our method is good at reconstructing low-level details, it has a slight tendency to add unnecessary detail in undersampled regions. Since we are performing domain translation, the GAN is forced to hallucinate a realistic-looking dense heightmap even in the presence of insufficient data. This problem opens up possibilities for future work.
+
+
+
+Figure 11: Upsampling results for the blade model, along with its sharpness field. Left: using our GAN to estimate the sharpness field. Right: recomputing the sharpness field from scratch on the dense point cloud.
+
+In our problem, we are able to obtain paired training data from the two domains. The same approach can be extended to unpaired data in domains where it is difficult to align the data of multiple domains (e.g. heightmaps obtained from depth cameras and from laser scanners). The raycasting-based approach to generating ground truth data also opens up the possibility of obtaining training data from implicit or parametric surfaces such as CAD models, instead of meshes. There is also scope for extending our method to estimate other scalar or vector fields in tandem with upsampling.
+
+## REFERENCES
+
+[1] VCG library. http://vcg.isti.cnr.it/vcglib/index.html, 2004. Accessed: 2019-03-08.
+
+[2] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8789-8797, June 2018.
+
+[3] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. MeshLab: an Open-Source Mesh Processing Tool. In V. Scarano, R. D. Chiara, and U. Erra, eds., Eurographics Italian Chapter Conference. The Eurographics Association, 2008. doi: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136
+
+[4] N. Dehmamy, L. Stornaiuolo, and M. Martino. Vox2net: From 3D shapes to network sculptures. In NeurIPS Workshop on Machine Learning for Creativity and Design, December 2018.
+
+[5] H. Fan, H. Su, and L. J. Guibas. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 605-613, 2017.
+
+[6] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry. A papier-mâché approach to learning 3D surface generation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 216-224, June 2018.
+
+[7] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-Or. Consolidation of unorganized point clouds for surface reconstruction. ACM Transactions on Graphics (TOG), 28(5):176, 2009.
+
+[8] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and H. R. Zhang. Edge-aware point set resampling. ACM Transactions on Graphics (TOG), 32(1):9, 2013.
+
+[9] S. Huang, Q. Li, C. Anil, X. Bao, S. Oore, and R. B. Grosse. TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) pipeline for musical timbre transfer. In International Conference on Learning Representations (ICLR), May 2019.
+
+[10] T.-W. Hui, C. C. Loy, and X. Tang. Depth map super-resolution by deep multi-scale guidance. In European Conference on Computer Vision (ECCV), pp. 353-369. Springer, 2016.
+
+[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1125-1134, 2017.
+
+[12] M. Kazhdan and H. Hoppe. Screened poisson surface reconstruction. ACM Transactions on Graphics (ToG), 32(3):29, 2013.
+
+[13] S. Koch, A. Matveev, Z. Jiang, F. Williams, A. Artemov, E. Burnaev, M. Alexa, D. Zorin, and D. Panozzo. ABC: A big CAD model dataset for geometric deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9601-9611, 2019.
+
+[14] R. Li, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-GAN: A point cloud upsampling adversarial network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7203-7212, 2019.
+
+[15] Y. Lipman, D. Cohen-Or, D. Levin, and H. Tal-Ezer. Parameterization-free projection for geometry reconstruction. In ACM Transactions on Graphics (TOG), vol. 26, p. 22. ACM, 2007.
+
+[16] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 652-660, 2017.
+
+[17] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5099-5108, 2017.
+
+[18] P. Raina, S. Mudur, and T. Popa. MLS$^2$: Sharpness field extraction using CNN for surface reconstruction. In Proceedings of Graphics Interface 2018, GI 2018, pp. 66-75. Canadian Human-Computer Communications Society, 2018.
+
+[19] P. Raina, S. Mudur, and T. Popa. Sharpness fields in point clouds using deep learning. Computers & Graphics, 78:37-53, 2019.
+
+[20] R. Roveri, A. C. Öztireli, I. Pandele, and M. Gross. PointProNets: Consolidation of point clouds with convolutional neural networks. In Computer Graphics Forum, vol. 37, pp. 87-99. Wiley Online Library, 2018.
+
+[21] I. Wald, S. Woop, C. Benthin, G. S. Johnson, and M. Ernst. Embree: A kernel framework for efficient CPU ray tracing. ACM Trans. Graph., 33(4):143:1-143:8, July 2014. doi: 10.1145/2601097.2601199
+
+[22] Y. Wang, S. Wu, H. Huang, D. Cohen-Or, and O. Sorkine-Hornung. Patch-based progressive 3D point set upsampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5958-5967, 2019.
+
+[23] Y. Wen, B. Sheng, P. Li, W. Lin, and D. D. Feng. Deep color guided coarse-to-fine convolutional network cascade for depth image super-resolution. IEEE Transactions on Image Processing, 28(2):994-1006, 2019.
+
+[24] S. Wu, H. Huang, M. Gong, M. Zwicker, and D. Cohen-Or. Deep points consolidation. ACM Transactions on Graphics (ToG), 34(6):176, 2015.
+
+[25] J. Yang, Q. Zhang, B. Ni, L. Li, J. Liu, M. Zhou, and Q. Tian. Modeling point clouds with self-attention and gumbel subset sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3323-3332, 2019.
+
+[26] Y. Yang, C. Feng, Y. Shen, and D. Tian. FoldingNet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 206-215, June 2018.
+
+[27] Z. Yang, Z. Hu, C. Dyer, E. P. Xing, and T. Berg-Kirkpatrick. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems (NeurIPS), pp. 7298-7309, December 2018.
+
+[28] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. EC-Net: An edge-aware point set consolidation network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 386-402, September 2018.
+
+[29] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-Net: Point cloud upsampling network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2790-2799, June 2018.
+
+[30] W. Zhang, H. Jiang, Z. Yang, S. Yamakawa, K. Shimada, and L. B. Kara. Data-driven upsampling of point clouds. CoRR, abs/1807.02740, 2018.
+
+[31] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2223-2232, 2017.
+
+## APPENDIX A TRAINING DETAILS
+
+### A.1 Generator architecture
+
+The generator used in our GAN has a U-Net architecture with the following layers:
+
+1. $4 \times 4$ convolution, stride 2, 64 kernels, leaky ReLU (slope = 0.2)
+
+2. $4 \times 4$ convolution, stride 2, 128 kernels, instance norm., leaky ReLU (slope = 0.2)
+
+3. $4 \times 4$ convolution, stride 2, 256 kernels, instance norm., leaky ReLU (slope = 0.2)
+
+4. $4 \times 4$ convolution, stride 2, 512 kernels, instance norm., leaky ReLU (slope = 0.2), 50% dropout
+
+5. $4 \times 4$ convolution, stride 2, 512 kernels, instance norm., leaky ReLU (slope = 0.2), 50% dropout
+
+6. $4 \times 4$ convolution, stride 2, 512 kernels, leaky ReLU (slope = 0.2), 50% dropout
+
+7. $4 \times 4$ transposed convolution, stride 2, 512 kernels, instance norm., ReLU, 50% dropout; applied to the concatenated outputs of layers 5 and 6
+
+8. $4 \times 4$ transposed convolution, stride 2, 512 kernels, instance norm., ReLU, 50% dropout; applied to the concatenated outputs of layers 4 and 7
+
+9. $4 \times 4$ transposed convolution, stride 2, 256 kernels, instance norm., ReLU; applied to the concatenated outputs of layers 3 and 8
+
+10. $4 \times 4$ transposed convolution, stride 2, 128 kernels, instance norm., ReLU; applied to the concatenated outputs of layers 2 and 9
+
+11. $4 \times 4$ transposed convolution, stride 2, 64 kernels, instance norm., ReLU; applied to the concatenated outputs of layers 1 and 10
+
+12. 2x upsampling
+
+13. $4 \times 4$ convolution with bias, stride 1 with zero padding, 1 kernel, Tanh
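The spatial sizes implied by this layer list can be traced with a short sketch (channel counts are noted in comments; the pad-1 convention for the stride-2 convolutions is an assumption of this sketch, not stated in the layer list):

```python
def down(n):
    """4x4 convolution, stride 2, pad 1: halves the spatial size."""
    return n // 2

def up(n):
    """4x4 transposed convolution, stride 2, pad 1: doubles the spatial size."""
    return n * 2

sizes = [64]                      # 64x64 input heightmap
for _ in range(6):                # layers 1-6 (64, 128, 256, 512, 512, 512 kernels)
    sizes.append(down(sizes[-1]))
for _ in range(5):                # layers 7-11 (512, 512, 256, 128, 64 kernels)
    sizes.append(up(sizes[-1]))
sizes.append(2 * sizes[-1])       # layer 12: 2x upsampling
# layer 13 (4x4 conv, stride 1, zero-padded) preserves the 64x64 output

print(sizes)  # [64, 32, 16, 8, 4, 2, 1, 2, 4, 8, 16, 32, 64]
```

The trace confirms that the encoder bottlenecks a $64 \times 64$ heightmap down to $1 \times 1$ and the decoder restores the original resolution.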
+
+### A.2 Discriminator architecture
+
+Our discriminator uses the PatchGAN architecture proposed by Isola et al. [11]:
+
+1. $4 \times 4$ convolution with bias, stride 2, 512 kernels, leaky ReLU (slope = 0.2)
+
+2. $4 \times 4$ convolution with bias, stride 2, 1024 kernels, instance norm., leaky ReLU (slope = 0.2)
+
+3. $4 \times 4$ convolution with bias, stride 2, 2048 kernels, instance norm., leaky ReLU (slope = 0.2)
+
+4. $4 \times 4$ convolution, stride 1 with zero padding, 1 kernel, linear activation
+
+We use a patch size of $8 \times 8$ for our heightmap domain translation experiments. For scalar field domain translation, we obtained better results with $16 \times 16$ patches; in that case, layer 3 is omitted.
+
+### A.3 Training hyperparameters
+
+The following hyperparameters are identical for both the generator and the discriminator:
+
+- batch size = 16
+- optimizer: Adam
+- learning rate = $3 \times 10^{-4}$
+- momentum: $\beta_1 = 0.5$, $\beta_2 = 0.999$
+
+The discriminator loss is the mean squared error of classifying each patch as real or fake. The generator has to maximize the discriminator loss, while also minimizing the $\ell_1$ error of the predicted image. The $\ell_1$ error is given a weight of 10 for heightmap domain translation, and 0.5 in the case of scalar field domain translation.
+
+## APPENDIX B ADDITIONAL RESULTS
+
+The following pages contain figures of additional results on testing shapes. Figures 12 and 13 show results obtained on inputs of 5000 sampled points. Figure 12 compares the reconstructed meshes, while Figure 13 shows the point clouds. As mentioned in Section 5 of the main paper, if the final objective is to consume a point cloud and not a mesh, our best results are obtained by either using the even mode, or by performing Poisson disk sampling on the mesh reconstructed using the detailed mode.
+
+In Figure 14, we have also shown an example of how multiple random samplings of 625 points give slightly different reconstructed surfaces. Our results are compared with 3PU, which we found to produce less consistent results. We have omitted examples where PCA failed to give good enough normals for the 3PU output point cloud, resulting in failure of the screened Poisson reconstruction. Our method provides normals for output points, so no additional normal estimation step is required, and our computed normals never caused screened Poisson reconstruction to fail in our experiments.
+
+
+
+Figure 12: Additional results for surface reconstruction using our detailed mode domain translation, compared with 3PU and EC-Net.
+
+
+
+Figure 13: Dense point clouds obtained using domain translation using the even mode, compared with state-of-the-art point cloud upsampling methods. The last column shows results from Poisson disk sampling of our reconstructed surface, using the detailed mode.
+
+Figure 14: Comparison of surface reconstruction results obtained from multiple random samplings of 625 points of the Cupid Fighting model. Left columns: 3PU; right columns: ours.
+
+§ FINE FEATURE RECONSTRUCTION IN POINT CLOUDS BY ADVERSARIAL DOMAIN TRANSLATION
+
+Prashant Raina* Tiberiu Popa Sudhir Mudur
+
+Department of Computer Science and Software Engineering
+
+Concordia University
+
+§ ABSTRACT
+
+Point cloud neighborhoods are unstructured and often lacking in fine details, particularly when the original surface is sparsely sampled. This has motivated the development of methods for reconstructing these fine geometric features before the point cloud is converted into a mesh, usually by some form of upsampling of the point cloud. We present a novel data-driven approach to reconstructing fine details of the underlying surfaces of point clouds at the local neighborhood level, along with normals and locations of edges. This is achieved by an innovative application of recent advances in domain translation using GANs. We "translate" local neighborhoods between two domains: point cloud neighborhoods and triangular mesh neighborhoods. This allows us to obtain some of the benefits of meshes at training time, while still dealing with point clouds at the time of evaluation. By resampling the translated neighborhood, we can obtain a denser point cloud equipped with normals that allows the underlying surface to be easily reconstructed as a mesh. Our reconstructed meshes preserve fine details of the original surface better than the state of the art in point cloud upsampling techniques, even at different input resolutions. In addition, the trained GAN can generalize to operate on low resolution point clouds even without being explicitly trained on low-resolution data. We also give an example demonstrating that the same domain translation approach we use for reconstructing local neighborhood geometry can also be used to estimate a scalar field at the newly generated points, thus reducing the need for expensive recomputation of the scalar field on the dense point cloud.
+
+Index Terms: Computing methodologies-Computer graphics-Shape modeling-Point-based models; Computing methodologies-Machine learning-Machine learning approaches-Neural networks
+
+§ 1 INTRODUCTION
+
+Point clouds and meshes are two representations of 3D surfaces that have long coexisted in the fields of computer graphics and computer vision. Point clouds are easier to acquire from the real world. However, they lack most of the geometric information which makes 3D meshes indispensable. The connectivity information provided by meshes allows one to easily calculate normals, curvatures and other geometric properties of the underlying surface. They can also be remeshed, or resampled to arbitrary precision. This gap between point clouds and meshes has traditionally been bridged by fitting parametric or implicit surfaces to point cloud neighborhoods. However, recent advances in deep learning have now made it possible to propose data-driven approaches to transforming data between these two domains.
+
+We present here a simple and elegant approach to reconstructing fine features of surfaces sampled as point clouds. By "fine features", we refer specifically to features smaller than the separation between sampled points (an example would be the eye in Figure 1). Naturally, it is impossible to recover all fine features in the general case, as undersampling causes information to be lost. However, the field of deep learning enables us to train a machine learning model on a large dataset and obtain a learned prior which will allow us to "hallucinate" fine details that would be expected from the underlying distribution of the dataset.
+
+
+Figure 1: A sparse point cloud (red points) sampled from a detailed surface. Fine features such as the eye that lie between the sampled points are challenging to reconstruct.
+
+Our approach leverages generative adversarial networks (GANs) for domain translation. GANs are a class of deep neural networks that has shown great potential in synthesizing realistic novel images that appear to be sampled from a particular domain of image data. The variant most relevant to our work is the conditional GAN, which has been successfully applied to domain translation, i.e. transforming images between two very different but related domains [11]. Some examples of domain translation include translating sketches of handbags to photographs of handbags, street maps to satellite images, or summer landscape photographs to winter landscape photographs. We tackle the fine feature reconstruction problem at the local neighborhood level, by framing it as a domain translation problem between two kinds of local heightmaps: "sparse" heightmaps, which are sampled from point clouds, and "dense" heightmaps, which are sampled from meshes using raycasting. This is an atypical domain translation problem because quantitative accuracy is extremely important; by contrast, the results in the traditional domain translation problems mentioned above need only appear qualitatively plausible.
+
+Our main contribution is in adapting existing work on domain translation to the problem of reconstructing fine features from low-resolution point clouds. Our feature reconstruction results are superior to the state-of-the-art methods [22, 28], which use a completely different patch-based approach along with much more complex neural network architectures. Our method also runs in significantly less time than the most recent state-of-the-art method [22].
+
+*e-mail: prashantraina2005@gmail.com
+
+Furthermore, our method generalizes well to low-resolution point clouds, even when all the training inputs are sampled from high-resolution point clouds. This highlights the robustness of our domain translation approach. The implications of this method go beyond point positions, and we show that it can easily be extended to interpolate values of a scalar field at the newly created points in the dense point cloud. As a bonus, we can also easily obtain normals at the points in a dense heightmap.
+
+Our paper is organized as follows: Section 2 recaps the most relevant related work in point cloud consolidation, as well as domain translation. Section 3 describes in detail how we generate the two types of heightmaps and perform domain translation. Section 4 shows the surface reconstruction results we obtain from applying heightmap domain translation to point cloud neighborhoods. A detailed quantitative evaluation of the results and comparison with previous methods is given in Section 5. We briefly describe an application to upsampling of a scalar field in Section 6, before giving our conclusions in Section 7. There are also two appendices, A and B, which give neural network training details and additional figures of results.
+
+§ 2 RELATED WORK
+
+Image-to-image translation using deep learning has gained a lot of attention since Isola et al. published their seminal work [11] on using conditional adversarial networks for translating between image domains given a training set of paired examples. This quickly led to more work on unpaired image translation [31], as well as image translation between more than two domains [2]. There have also been attempts to bring GAN-based domain translation methods to other domains such as natural language text [27], audio [9] and voxel-based 3D data [4]. Some recent work on point clouds such as FoldingNet [26] and AtlasNet [6] can also, in some sense, be regarded as precursors to domain translation between point clouds and explicit surfaces.
+
+Reconstructing fine features from point clouds can take different forms depending on how the point cloud is sampled. In some cases, the points are essentially pixels obtained from a depth image [10, 23], which means that the points have a 2D grid structure that can be treated as a Euclidean domain for discrete convolution. Furthermore, depth images are almost invariably accompanied by color or intensity images, which provide vital information that is exploited in past work. In our work, we focus on the more general problem of reconstructing detailed surfaces from completely unstructured point clouds by first increasing the density of the point clouds before attempting to reconstruct the surface. This is most closely related to the problem of point cloud consolidation, where the goal is to obtain a point cloud representation from which an accurate 3D mesh can be reconstructed [7]. A variety of procedural point cloud consolidation methods have been proposed, including LOP [15], WLOP [7], EAR [8] and deep points consolidation [24]. All of these methods involve fitting local geometry to point clouds, and WLOP and EAR have been incorporated into popular geometry processing libraries.
+
+Recent years have seen a wave of interest in applying deep learning to point clouds, sparked by the success of PointNet [16] and its multi-scale variant, PointNet++ [17], in the field of point cloud classification and semantic segmentation. This has led to point cloud consolidation [20] and upsampling [30] approaches based on PointNet, as well as a family of point cloud upsampling techniques based on PointNet++ that includes PU-Net [29], EC-Net [28] and 3PU [22]. Li et al. have concurrently developed a GAN-based point cloud upsampling method [14] which uses generator and discriminator architectures loosely based on PU-Net and 3PU. By contrast, our work uses local heightmaps, much like Roveri et al. [20]. However, Roveri et al. focus on consolidation of already dense point clouds with 50 thousand points, while we are mainly interested in reconstructing fine features from much sparser point clouds (5000 points or less). Moreover, they do not use domain translation, since their ground truth data is also obtained from point clouds. We compare our work to the publicly available implementations of EC-Net [28] and 3PU [22], since they are the most recently published state-of-the-art works; 3PU is claimed to outperform all previous point cloud upsampling work.
+
+Figure 2: Heightmaps sampled from the fandisk model. The sparse and dense heightmaps have different color maps to reflect the fact that sparse heightmaps are not occupied at all pixels. Top row: sparse heightmaps obtained from the point cloud. Middle row: dense heightmaps predicted by our GAN. Bottom row: ground truth heightmaps obtained by casting rays onto the original mesh.
+
+§ 3 HEIGHTMAP DOMAIN TRANSLATION
+
+The key idea behind our method is that reconstructing fine features of point cloud neighborhoods can be reduced to an image-to-image translation problem between two kinds of local heightmaps (examples shown in Figure 2):
+
+1. Sparse heightmaps, which are sampled from point clouds. By "sparse", we mean that not all pixels in the heightmap are occupied. This follows naturally from the fact that point clouds have gaps between points. In practice, we also set an upper limit on the number of points contributing to a sparse heightmap, which makes our method more robust while also speeding up computation of the sparse heightmaps. Therefore, they are also sparse in the conventional sense that there are $O(1)$ points represented in the heightmap.
+
+2. Dense heightmaps, which are sampled from meshes. Since the mesh is typically defined in a continuous manner over a neighborhood, we can obtain a heightmap by casting rays onto the mesh. It is worth noting that this raycasting approach can easily be modified to obtain other kinds of local "maps", such as scalar or vector fields defined over the mesh.
+
+In this section, we first explain how these two types of heightmaps are computed (Section 3.1). We then describe how we translate heightmaps from sparse to dense, using a well-known image-to-image translation approach that exploits a conditional GAN (Section 3.2). We finally show how our overall approach has additional benefits for estimating normals (Section 3.3).
+
+§ 3.1 LOCAL HEIGHTMAP COMPUTATION
+
+Some aspects of heightmap computation are common to both the sparse and dense local heightmaps. The heightmap is assigned a size of $k \times k$ pixels, where we generally select $k = 64$. The side of the heightmap corresponds to a length of $2r$ in the space of the input shape, where $r$ is the search radius used to collect nearest neighbors from the point cloud. The local coordinate frame of the neighborhood is centered on a particular point of the point cloud, which has a corresponding oriented normal $\mathbf{n}$. The horizontal and vertical directions of the local frame are given by an arbitrary tangent $\mathbf{t}$ and its corresponding bitangent $\mathbf{b} = \mathbf{n} \times \mathbf{t}$. In our experiments, the tangents are randomly rotated in the tangent plane at both training and testing time, so that there is no bias introduced by the choice of tangent direction.
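The local frame construction can be sketched as follows (the helper-axis choice for the arbitrary tangent is an assumption of this sketch; the text only requires some tangent $\mathbf{t}$ with $\mathbf{b} = \mathbf{n} \times \mathbf{t}$):

```python
import numpy as np

def tangent_frame(n):
    """Return an arbitrary unit tangent t and bitangent b = n x t
    for a (not necessarily unit) normal n."""
    n = n / np.linalg.norm(n)
    # Pick a helper axis that is not nearly parallel to n.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(helper, n)
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return t, b

t, b = tangent_frame(np.array([0.0, 0.0, 1.0]))
```

A random in-plane rotation of `t` (and the corresponding `b`) at training and testing time would then remove any bias from this arbitrary choice.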
+
+§ 3.1.1 SPARSE HEIGHTMAP COMPUTATION
+
+For computing sparse heightmaps from point clouds, we adapt a simple and efficient representation used in earlier works [19, 20]. A random set of neighboring points (limited to 100) is chosen from within the search radius $r$. Since our sampled points have consistent associated normals, we can omit neighbors with back-facing normals. The neighborhood is scaled by a factor of $1/r$ to reduce dependence on scale. These points are then projected orthogonally onto the local tangent plane, i.e. the plane of the heightmap image. For each pixel in the heightmap image, we can easily compute the corresponding pixel center in the local coordinate frame of the neighborhood. We can then compute the intensity of each pixel as the weighted average of the signed distances of nearby projected points from their original positions. The unnormalized weights have a Gaussian falloff $w_i = \exp\left(-\frac{d_i^2}{2\sigma^2}\right)$, where $d_i$ are the distances of the projected points from the pixel center in the image plane (we set $\sigma = 5r/k$). A constant value of 1 is added to all projection heights, so that the value 0 is reserved for unoccupied pixels. As shown in Section 6, the same approach can also be used to generate a sparse map, not for height, but for a scalar field.
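A minimal numpy sketch of this splatting procedure follows. The Gaussian cutoff radius and the pixel-center convention are assumptions of the sketch (the text says "nearby projected points" without specifying a cutoff); with $\sigma = 5r/k$, the falloff is 2.5 pixels wide in image space:

```python
import numpy as np

def sparse_heightmap(points_local, k=64, sigma_px=2.5, cutoff_px=7.5):
    """Splat neighborhood points into a k x k sparse heightmap.

    points_local: (N, 3) coordinates in the local tangent frame, already
    scaled by 1/r so the plane spans [-1, 1]^2; column 2 is the signed height.
    """
    centers = (np.arange(k) + 0.5) * 2.0 / k - 1.0      # pixel centers in [-1, 1]
    cx, cy = np.meshgrid(centers, centers, indexing="ij")
    px = 2.0 / k                                        # pixel size in the scaled plane
    num = np.zeros((k, k))
    den = np.zeros((k, k))
    for x, y, h in points_local[:100]:                  # limit of 100 neighbors
        d2 = ((cx - x) ** 2 + (cy - y) ** 2) / px ** 2  # squared distance in pixels
        w = np.where(d2 < cutoff_px ** 2, np.exp(-d2 / (2.0 * sigma_px ** 2)), 0.0)
        num += w * (h + 1.0)                            # +1 so that 0 marks "unoccupied"
        den += w
    img = np.zeros((k, k))
    occupied = den > 1e-12
    img[occupied] = num[occupied] / den[occupied]
    return img

img = sparse_heightmap(np.array([[0.0, 0.0, 0.0]]))
```

For a single point at the origin with height 0, pixels near the image center receive the shifted value 1, while far-away pixels stay at the unoccupied value 0.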
+
+§ 3.1.2 DENSE HEIGHTMAP COMPUTATION
+
+Given a point on a surface represented by a mesh (which need not be a vertex), and its corresponding normal, we can use raycasting to generate a dense heightmap. We first transform the center of each heightmap pixel to the space of the mesh, using the local tangent frame mentioned earlier. From each pixel center, we shoot two rays in opposite directions perpendicular to the tangent plane. If both rays intersect the mesh, we choose the nearer intersection point. The intensity of the pixel is the signed distance of the pixel center to the intersection point, or a fixed large value (10) in the event that neither ray intersects the mesh. These raycasting operations are performed efficiently using the Embree raycasting framework [21], which also provides us with the intersected triangle ID and the barycentric coordinates of the intersection point. This additional information can be used to interpolate scalar or vector fields previously computed on the vertices of the mesh, in order to generate other kinds of dense local maps.
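The per-pixel selection logic reduces to the following small sketch (the function name is hypothetical, and the sign convention of positive heights along $+\mathbf{n}$ and negative along $-\mathbf{n}$ is an assumption):

```python
def pixel_height(hit_up, hit_down, miss_value=10.0):
    """Signed height of a pixel from two opposite rays cast perpendicular
    to the tangent plane; each argument is a hit distance, or None on a miss."""
    if hit_up is None and hit_down is None:
        return miss_value                 # fixed large value: neither ray hit
    if hit_down is None or (hit_up is not None and hit_up <= hit_down):
        return hit_up                     # nearer hit on the +n side
    return -hit_down                      # nearer hit on the -n side
```

In the full pipeline the two hit distances would come from an Embree ray query at each transformed pixel center.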
+
+§ 3.2 IMAGE-TO-IMAGE TRANSLATION
+
+We use a conditional generative adversarial network (cGAN) to perform image-to-image translation between our two heightmap domains, following the approach of Isola et al. [11]. The outputs of both the generator $G(x,z)$ and discriminator $D(x,y)$ are conditioned on the input image $x$. The discriminator does not attempt to classify the entire input image as real or fake, but rather classifies individual patches ($8 \times 8$ in the case of our dense heightmaps). The generator must minimize two losses:
+
+1. The GAN loss, $\mathbb{E}_{x,y}\left[ D(x,y) \right] + \mathbb{E}_{x,z}\left[ 1 - D(x, G(x,z)) \right]$, which preserves high-frequency similarities between $G(x,z)$ and the corresponding ground-truth image $y$.
+
+2. The $\ell_1$ loss, $\mathbb{E}_{x,y,z}\left[ \left\| y - G(x,z) \right\|_1 \right]$, which preserves low-frequency similarities with the ground truth.
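Putting the two terms together, the generator objective can be sketched in numpy. This is an illustration only: the patch scores stand in for $D(x, G(x,z))$, and the weight `lam` balancing the two terms is a placeholder assumption, not a value from the paper.

```python
import numpy as np

def generator_objective(d_fake_patches, fake_img, real_img, lam=100.0):
    """Combined generator loss: adversarial term plus L1 term.

    d_fake_patches: discriminator scores D(x, G(x, z)) for each patch
    (8 x 8 for our dense heightmaps); the generator pushes them toward 1.
    lam weights the L1 term and is a placeholder, not the paper's value.
    """
    gan_term = np.mean(1.0 - np.asarray(d_fake_patches))                     # E[1 - D(x, G(x,z))]
    l1_term = np.mean(np.abs(np.asarray(real_img) - np.asarray(fake_img)))   # E||y - G(x,z)||_1
    return gan_term + lam * l1_term
```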
+
+In practice, the noise vector $z$ is introduced implicitly by random dropout of neurons with $50\%$ probability. It is worth noting that the sparse heightmaps we use contain incomplete information which
+
+
+Figure 3: Examples of training models from SketchFab.
+
+
+Figure 4: a) Ground truth head model. b) 5000 points sampled to act as testing input. c) Poisson reconstruction of the input point cloud, showing the loss of features due to undersampling.
+
+could imply multiple different dense heightmaps. We therefore have the option of enabling neuron dropout at the time of evaluating the network, in order to obtain a stochastic output.
+
+§ 3.3 PREDICTION OF NORMALS
+
+The regular grid structure of a heightmap has the additional benefit of allowing easy computation of normals at each point on the heightmap. Given a dense heightmap, we can use backward differences to estimate gradients in the tangent and bitangent directions. This gives us the approximate normal at each point as the direction of the vector $\left( \frac{\partial h}{\partial x}, \frac{\partial h}{\partial y}, \frac{2r}{k} \right)$. This approximate normal map proved to be sufficient for our purposes, although it could also be refined using a neural network.
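A minimal numpy sketch of this normal estimate; the border handling and in-plane sign conventions are our assumptions.

```python
import numpy as np

def heightmap_normals(h, r):
    """Per-pixel normals of a k x k dense heightmap via backward differences.

    Follows the direction (dh/dx, dh/dy, 2r/k) from the text; the map
    spans 2r over k pixels, so 2r/k is the pixel spacing.
    """
    k = h.shape[0]
    dx = np.zeros_like(h)
    dy = np.zeros_like(h)
    dx[:, 1:] = h[:, 1:] - h[:, :-1]   # backward difference along the tangent axis
    dx[:, 0] = dx[:, 1]                # replicate at the border
    dy[1:, :] = h[1:, :] - h[:-1, :]   # backward difference along the bitangent axis
    dy[0, :] = dy[1, :]
    n = np.stack([dx, dy, np.full_like(h, 2.0 * r / k)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

On a constant heightmap the gradients vanish and every normal points along the local surface normal, as expected.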
+
+§ 4 FINE FEATURE RECONSTRUCTION
+
+We trained our GAN on local neighborhoods sampled from a set of 90 meshes of statues obtained from SketchFab. The meshes are identical to the training set used by Wang et al. [22] for training 3PU. These meshes were generated by 3D-scanning statues of people and animals (Figure 3). Training details such as hyperparameters are given in Appendix A. For evaluation, we used a separate set of 16 meshes of statues from SketchFab. These include all 13 testing meshes used by Wang et al. [22], as well as 3 additional meshes that we procured. All ground truth meshes have several hundred thousand vertices.
+
+To obtain a sparse set of points for reconstructing fine geometric features, we randomly sample a fixed number of points from a test mesh using Poisson disk sampling (using the implementation available in the VCG library [1]). Figure 4 shows an example of sampling these points. Normals for these points are estimated using PCA, based on 30 nearest neighbors from the sparse point cloud.
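The PCA normal estimate mentioned here is standard; below is a brute-force sketch (a KD-tree would replace the pairwise distance matrix for larger clouds, and normal orientation is left unresolved).

```python
import numpy as np

def pca_normals(points, k=30):
    """Estimate one normal per point from its k nearest neighbors via PCA.

    The normal is the direction of least variance of the neighborhood,
    i.e. the right-singular vector for the smallest singular value.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]               # k nearest (including self)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        nb = points[idx] - points[idx].mean(axis=0)  # centered neighborhood
        _, _, vt = np.linalg.svd(nb, full_matrices=False)
        normals[i] = vt[-1]                          # least-variance direction
    return normals
```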
+
+After generating $64 \times 64$ sparse heightmaps for these sampled points using the method described in Section 3.1.1, we use our trained model to suggest a plausible $64 \times 64$ dense heightmap. Given
+
+
+Figure 5: Our domain translation and reconstruction results for the 5000 points in Figure 4b, using different modes. a, c and e correspond to the detailed, superdense and even modes listed in Section 4. b, d and f are their respective reconstructions using screened Poisson surface reconstruction.
+
+the search radius as well as the normal and tangent vectors used to obtain the sparse heightmap, we can easily transform pixels of the dense heightmap back into points in the space of the original point cloud. Rather than converting all 4096 pixels into points, we take points from an $8 \times 8$ or $16 \times 16$ square in the middle of the generated heightmap. The rationale for this is that the conditional GAN is transforming heightmaps at the local level with no global information, and therefore we cannot expect the extremities of the newly generated heightmap to be accurate, even if it appears to be a plausible translation of the given sparse heightmap. Once normals are computed using the method in Section 3.3, the surface can then be easily reconstructed using a conventional surface reconstruction algorithm. In our case, we use screened Poisson reconstruction [12].
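The central-square extraction can be sketched as follows. This is an illustrative helper; the frame convention, the [-1, 1] pixel grid, and the +1 height offset from Section 3.1.1 are our assumptions about the layout.

```python
import numpy as np

def lift_central_pixels(dense_hm, center, frame, r, square=8, stride=2):
    """Lift the central square of a dense heightmap back to 3D points.

    Only a square x square block in the middle is trusted, sampled with
    the given stride (stride 2 = every alternate row and column).
    frame: rows are (tangent, bitangent, normal) of the local frame.
    """
    k = dense_hm.shape[0]
    lo = (k - square) // 2
    pts = []
    for i in range(lo, lo + square, stride):
        for j in range(lo, lo + square, stride):
            u = (j + 0.5) * 2.0 / k - 1.0   # pixel center in the scaled plane
            v = (i + 0.5) * 2.0 / k - 1.0
            h = dense_hm[i, j] - 1.0        # undo the +1 occupancy offset
            pts.append(center + r * (u * frame[0] + v * frame[1] + h * frame[2]))
    return np.array(pts)
```

With the default `square=8, stride=2`, a $64 \times 64$ dense heightmap yields 16 points per query point, matching the 16x upsampling of the detailed mode described next.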
+
+We experimented with different combinations of heightmap search radius, size of the central square, and stride (a stride of 2 implies taking every alternate row and column). Here, $\delta$ denotes the median distance between a point and its nearest neighbor in the input point cloud, which we use as a measure of scale. Out of these combinations, we found three to be interesting:
+
+1. detailed mode: Radius $4\delta$, central $8 \times 8$, stride 2 (16x upsampling). This mode produces the most detailed-looking mesh when screened Poisson reconstruction is applied.
+
+Figure 6: Results for the statue Cupid Fighting, reconstructed from a sample of 5000 points from the ground truth mesh (panels: ground truth, low-res, 16x ours, 16x 3PU).
+
+2. superdense mode: Radius $4\delta$, central $16 \times 16$, stride 1 (256x upsampling). This mode produces an extremely dense point cloud, from which a reasonably accurate mesh can be reconstructed.
+
+3. even mode: Radius $8\delta$, central $8 \times 8$, stride 2 (16x upsampling). The high-resolution point cloud obtained with this mode appears to be the most evenly sampled.
+
+Figure 5 compares the results for the three modes, after processing the sampled points from Figure 4b.
+
+§ 4.1 RECONSTRUCTION RESULTS
+
+Examples of our surface reconstruction results are shown in Figures 6 to 8. Note that no post-processing operations such as smoothing were applied in any of our figures. In Figures 6 and 7, we can see that our method is able to reconstruct fine details such as facial features from a sample of only 5000 points. In Figure 7, we can see that the eye of the lion from Figure 1 has been reconstructed, along with other fine features such as the teeth. Figure 8 shows an extreme example where we upsample a set of only 625 points, while still being able to reconstruct features such as the wings on the helmet. For comparison, we show the corresponding result using the latest state-of-the-art method by Wang et al. [22], informally called 3PU. This method has already been shown to be superior to state-of-the-art approaches such as EC-Net [28] and PU-Net [29]. Additional results can be found in Appendix B.
+
+Figure 7: Reconstruction results for the statue Lion Étouffant Un Serpent (panels: ground truth, low resolution, 16x 3PU, 16x ours). The 5000 input points are overlaid on the ground truth mesh.
+
+Figure 8: Reconstruction results for an extremely low-resolution sample of 625 points (in red) from Cupid Fighting (panels: 625 points, ground truth, low resolution, 16x 3PU, 16x ours). We compare against the best results we were able to obtain for 3PU, across multiple random samplings of 625 points.
+
+During our experiments on extremely low-resolution point clouds, we found that different random samplings of 625 input points from the same testing mesh give slightly different resulting meshes. Our method produced fairly consistent results across different random samplings, whereas the variation is much larger for the PointNet++-based approaches. Therefore, we have selected the best result we obtained for 3PU [22] to compare with our result in Figure 8. Figure 14 in Appendix B compares our results with 3PU for multiple different random samplings of 625 input points.
+
+§ 5 EVALUATION
+
+We evaluate our surface reconstruction results quantitatively based on three metrics:
+
+1. Distance of each point in the high-resolution point cloud to the ground truth mesh (D2M).
+
+2. Hausdorff distance (HD): the maximum distance between a point on the reconstructed mesh and its nearest neighbor on the ground truth mesh:
+
+$$
+\max \left( \max_{p \in P} \min_{q \in Q} \| p - q \|^2,\; \max_{q \in Q} \min_{p \in P} \| p - q \|^2 \right) \tag{1}
+$$
+
+3. Chamfer distance (CD) [5]: the mean distance between a point on the reconstructed mesh and its nearest neighbor on the ground truth mesh:
+
+$$
+\frac{1}{2}\left( \frac{1}{|P|}\sum_{p \in P} \min_{q \in Q} \| p - q \|^2 + \frac{1}{|Q|}\sum_{q \in Q} \min_{p \in P} \| p - q \|^2 \right) \tag{2}
+$$
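Both definitions translate directly to numpy; an illustrative brute-force reference (squared distances as written above, with a KD-tree advisable for real mesh samples):

```python
import numpy as np

def hausdorff_chamfer(P, Q):
    """Symmetric Hausdorff (Eq. 1) and Chamfer (Eq. 2) distances.

    P, Q: (n, 3) and (m, 3) point samples of the two surfaces.
    """
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    p_to_q = d2.min(axis=1)   # nearest Q-point for each p
    q_to_p = d2.min(axis=0)   # nearest P-point for each q
    hd = max(p_to_q.max(), q_to_p.max())
    cd = 0.5 * (p_to_q.mean() + q_to_p.mean())
    return hd, cd
```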
+
+We used Meshlab [3] to automate computation of all three metrics. The Hausdorff distance and Chamfer distance can be computed simultaneously by first randomly sampling a large number of points from the vertices, edges and faces of one mesh. The number of points sampled is equal to the number of vertices on the sampled mesh (several hundred thousand). We then find the mean and maximum distances between these points and the nearest vertices on the other mesh. Pairs of points are discarded if their separation is greater than $5\%$ of the diagonal of the bounding box. The above procedure is then repeated in the opposite direction, giving us two maximums and two means. The Hausdorff distance is the maximum of the two maximums, and the Chamfer distance is the mean of the two means.
+
+Note that for performing screened Poisson reconstruction on point clouds upsampled using EC-Net and 3PU, we estimate normals using PCA based on 30 nearest neighbors. This is not necessary for our method, since we obtain normals from the dense heightmap as mentioned in Section 3.3.
+
+In Table 1, we compare the results for the three modes for sampling our dense heightmaps, as defined in Section 4. To give a sense of scale, note that the point coordinates were normalized to lie within a unit cube. We first consider the detailed and even modes, which both produce 16 times the number of input points. Although the detailed mode produces the best reconstructed mesh, it is noteworthy that the even mode is not far behind, and even surpasses the detailed mode for very low-resolution point clouds. Those seeking to apply our method to obtain a point cloud, and not a mesh, have a choice between using our even mode, or alternatively performing Poisson disk sampling on the mesh produced using our detailed mode. The superdense mode, which produces 256 times the number of input points, suffers a bit when it comes to the distance to the ground truth mesh. However, it is not far behind the other modes when it comes to the Hausdorff and Chamfer distances.
+
+| Mode | D2M | HD | CD |
+| --- | --- | --- | --- |
+| Detailed – 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| Even – 5000 points | 3.96E-04 | 3.48E-02 | 1.42E-03 |
+| Superdense – 5000 points | 5.89E-04 | 3.35E-02 | 1.37E-03 |
+| Detailed – 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| Even – 2500 points | 4.38E-04 | 4.08E-02 | 2.11E-03 |
+| Superdense – 2500 points | 8.39E-04 | 4.00E-02 | 2.25E-03 |
+| Detailed – 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+| Even – 625 points | 1.10E-03 | 5.03E-02 | 5.25E-03 |
+| Superdense – 625 points | 2.93E-03 | 5.81E-02 | 6.23E-03 |
+
+Table 1: Quantitative comparison of reconstruction results using our three modes for domain translation of input point clouds with 5000, 2500 and 625 points. Note that the superdense mode produces 256 times the number of points, while the others increase the density by a factor of 16.
+
+| Size of point clouds | D2M | HD | CD |
+| --- | --- | --- | --- |
+| >300K: 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| 5K: 5000 points | 8.04E-04 | 3.26E-02 | 1.30E-03 |
+| 625: 5000 points | 1.17E-03 | 3.38E-02 | 1.54E-03 |
+| >300K: 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| 5K: 2500 points | 9.63E-04 | 3.72E-02 | 1.90E-03 |
+| 625: 2500 points | 1.46E-03 | 3.96E-02 | 2.17E-03 |
+| >300K: 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+| 5K: 625 points | 1.09E-03 | 5.04E-02 | 4.32E-03 |
+| 625: 625 points | 1.97E-03 | 5.54E-02 | 4.58E-03 |
+
+Table 2: Quantitative comparison of using different point cloud sizes when training: all mesh vertices (over 300K points), 5000 points sampled using Poisson disk sampling, or 625 points. The resulting metrics are compared for input point clouds with 5000, 2500 and 625 points.
+
+| Method | D2M | HD | CD |
+| --- | --- | --- | --- |
+| EC-Net – 5000 points | 3.42E-04 | 6.30E-02 | 3.89E-03 |
+| 3PU – 5000 points | **2.91E-04** | 3.63E-02 | 1.32E-03 |
+| Ours – 5000 points | 3.42E-04 | 3.24E-02 | 1.03E-03 |
+| EC-Net – 2500 points | 6.56E-04 | 6.57E-02 | 5.80E-03 |
+| 3PU – 2500 points | 3.16E-04 | 4.86E-02 | 2.13E-03 |
+| Ours – 2500 points | 4.00E-04 | 3.66E-02 | 1.53E-03 |
+| EC-Net – 625 points | 1.85E-03 | 5.84E-02 | 8.12E-03 |
+| 3PU – 625 points | 1.31E-03 | 5.55E-02 | 4.96E-03 |
+| Ours – 625 points | 1.16E-03 | 5.20E-02 | 3.86E-03 |
+
+Table 3: Quantitative comparison of reconstructed surfaces after 16x upsampling using EC-Net, 3PU and our detailed mode for point clouds with 5000, 2500 and 625 points.
+
+| Method | D2M | HD | CD |
+| --- | --- | --- | --- |
+| 3PU – 5000 points | 0.081 | 1.83 | 0.440 |
+| Ours, Detailed – 5000 points | 0.066 | 1.26 | 0.426 |
+| Ours, Even – 5000 points | 0.113 | 1.36 | 0.443 |
+| 3PU – 2500 points | 0.161 | 2.77 | 0.485 |
+| Ours, Detailed – 2500 points | 0.112 | 1.71 | 0.442 |
+| Ours, Even – 2500 points | 0.181 | 1.88 | 0.478 |
+| 3PU – 625 points | 0.593 | 8.13 | 0.902 |
+| Ours, Detailed – 625 points | 0.325 | 3.48 | 0.599 |
+| Ours, Even – 625 points | 0.492 | 4.03 | 0.756 |
+
+Table 4: Quantitative comparison of reconstructed surfaces after 16x upsampling using 3PU, our detailed mode and our even mode on the ABC dataset [13], for point clouds with 5000, 2500 and 625 points.
+
+
+Figure 9: Two examples of reconstruction results obtained on the ABC dataset [13], after upsampling samples of 2500 points. Top row: ground truth. Bottom row: our results using the detailed mode.
+
+A unique feature of our method is that we can obtain good results on low-resolution point clouds, even after training on high-resolution point clouds. Our training meshes are very dense, and each has over 300 thousand vertices. They can also be randomly downsampled to a lower resolution, such as 5000 or 625 points, during training. We have therefore investigated how different resolutions of the training point clouds affect the surface reconstruction results. After all, the resolution of the training point clouds affects the variety of the sparse heightmaps that will be seen during training. Table 2 shows that using the entire set of mesh vertices during training usually produces the best results. Even in the case where the testing point clouds have 625 points, it is a GAN trained on a higher resolution (5000 points) which gives the best results. This is likely due to a combination of two factors: i) we use raycasting onto meshes to obtain our ground truth dense heightmaps, and ii) we randomly sample only 100 points from each local neighborhood to contribute to the sparse heightmap, thereby making our domain translation more robust.
+
+Table 3 compares the accuracy of the meshes produced by applying screened Poisson reconstruction on the dense output point cloud of our domain translation method, as well as other recent methods for point cloud upsampling. We choose the detailed mode of domain translation for comparison, since it upsamples point clouds by 16 times (thereby allowing a fair comparison with 3PU), and it also produces the most accurate reconstructed meshes. We compare our work with the results obtained using EC-Net [28], as well as the recent state-of-the-art method called 3PU by Wang et al. [22]. Wang et al. claim that 3PU gives better results than all previous work for 16x upsampling of point clouds. In order to upsample point clouds 16x using EC-Net, we apply 4x upsampling with EC-Net twice in succession, as recommended by Yu et al. [28]. As mentioned earlier, we use the same training data as Wang et al. We do not retrain EC-Net, as we do not have edge annotations in our training data, which are required by EC-Net. From Table 3, we can see that across all three metrics, our quantitative results are clearly superior to those of EC-Net. Our results are also superior to 3PU for the Hausdorff distance and Chamfer distance metrics, while still being competitive for the D2M metric.
+
+We have additionally performed a large-scale quantitative evaluation of our method on the first 1,000 OBJ files in the ABC dataset [13], whose results are summarized in Table 4. Figure 9 shows examples of our results on this dataset. We did not re-train our GAN or the 3PU network on this dataset, but rather used the same networks which were already trained on the aforementioned 90 models from SketchFab. The 3D models in the ABC dataset are densely sampled triangular meshes obtained from parametric CAD models. For models containing multiple connected components, only the largest connected component was used for the evaluation. The coordinates of these models are not normalized. Therefore, in order to make a fair aggregation of the results, we divided the computed distance metrics for each model by the average distance between pairs of neighboring mesh vertices. We also compute the median of each metric over all 1,000 models, to reduce the effect of outlier cases where the point clouds produced by 3PU do not work well with the normal estimation method (there were no such cases among the SketchFab testing models). The results show that our method gives reconstructed surfaces that are more accurate than those obtained using 3PU. Furthermore, the worst-case error given by the Hausdorff distance is comparable to the normalized distance of 1 between nearest neighbors in the ground-truth dense mesh, while the average-case errors are significantly smaller than 1.
+
+We also found our method to be extremely fast, taking only 3 minutes to upsample a large point cloud of 160K points with an unoptimized implementation. By contrast, the far more complicated neural network of 3PU took 214 minutes to perform the same task using the code provided by the authors. We note, however, that both methods have comparable speed for small point clouds. For instance, they both take around 20 seconds to upsample 5000 points.
+
+The aforementioned state-of-the-art point cloud upsampling methods are ultimately based on PointNet++ [17]. It is therefore worth noting that all upsampling methods based on PointNet++ share certain weaknesses:
+
+1. They are not invariant to permutations of the point cloud. This is because they all rely on farthest-point sampling as an initial step for their neural networks (see [25]). Our sparse heightmaps, on the other hand, are completely invariant to permutations of the points in the local neighborhood.
+
+2. They rely on large and complicated neural network architectures, which affects the computational efficiency. By contrast, we use a simple convolutional U-Net architecture for the GAN.
+
+3. They do not perform well in regions with unusually dense sampling. This is because after the farthest-point sampling step in each set abstraction layer, K-nearest neighbors are taken as representatives of the local neighborhood. Therefore, regions of high density will result in too many neighbors that are very close to the center point, thus giving a skewed picture of the local neighborhood. In our case, having many points mapping to the same pixels of the sparse heightmap will not significantly affect the pixel intensities.
+
+§ 6 SCALAR FIELD UPSAMPLING
+
+Our method also has the potential for using the conditional GAN to predict values of a scalar or vector field corresponding to the dense heightmap at a given point. As an example, we show an application to the sharpness field defined by Raina et al. [18, 19]. The feature-aware smoothing method described in [19] is mainly concerned with the local maxima of the field, which are scale-invariant. This makes it simple to apply our GAN to locally predict the value of the sharpness field of a point cloud. If values of the sharpness field are pre-computed for the low-resolution point cloud, we can obtain a sparse image of the sharpness values simultaneously with the sparse heightmap (top row of Figure 10). By obtaining ground truth sharpness fields on meshes, we can then train a separate conditional GAN to predict the values of the sharpness field corresponding to all points of the dense heightmap (middle row of Figure 10).
+
+
+Figure 10: Sharpness field sampled from the fandisk model. Top row: sparse sampling of the sharpness field precomputed on the point cloud. Middle row: dense sharpness field predicted by our GAN. Bottom row: ground truth sharpness field obtained by casting rays onto the original mesh.
+
+In order to predict values of the sharpness field at the dense output points from our domain translation method, we trained a separate GAN using the same training procedure and data augmentation as the heightmap translation GAN. The only difference is that we obtained better results with a larger patch size of $16 \times 16$ for the discriminator. The training data is a set of meshes of simple geometric shapes, whose sharpness fields are computed using dihedral angles as described in [19]. We then pre-computed the sharpness field of the blade point cloud (80K points) using the CNN with a spatial transformer as recommended in [19], before performing domain translation using the even mode. The results we have obtained (Figure 11) show that our GAN gives a sharpness field with similar properties to a sharpness field computed from scratch on the dense point cloud after domain translation. Furthermore, the entire domain translation procedure along with the sharpness field estimation takes around 7 minutes, compared to 20 minutes if the sharpness field is recomputed from scratch on the dense point cloud. The combination of the dense heightmap, the normal map and the sharpness field together provides all the information necessary for downstream methods to accurately reconstruct the surface along with sharp features. We believe that our domain translation approach to upsampling has the potential for extension to other scalar and vector fields.
+
+§ 7 CONCLUSION, LIMITATIONS AND FUTURE WORK
+
+In this work, we have applied GAN-based domain translation to enable us to reconstruct fine features of low-resolution point clouds. We have obtained results superior to the state of the art, while providing a very different approach from previous PointNet++-based work. We have also given a simple example demonstrating that the same method has the potential to be extended to upsampling of scalar fields. Our work demonstrates that tangible benefits can be obtained by applying the domain translation paradigm to 3D geometry problems, and not merely the typical domains associated with GAN-based domain translation.
+
+While our method is good at reconstructing low-level details, it has a slight tendency to add unnecessary detail in undersampled regions. Since we are performing domain translation, the GAN is forced to hallucinate a realistic-looking dense heightmap even in the presence of insufficient data. This problem opens up possibilities for future work.
+
+
+Figure 11: Upsampling results for the blade model, along with its sharpness field. Left: using our GAN to estimate the sharpness field. Right: recomputing the sharpness field from scratch on the dense point cloud.
+
+In our problem, we are able to obtain paired training data from the two domains. The same approach can be extended to unpaired data in domains where it is difficult to align the data of multiple domains (e.g. heightmaps obtained from depth cameras and from laser scanners). The raycasting-based approach to generating ground truth data also opens up the possibility of obtaining training data from implicit or parametric surfaces such as CAD models, instead of meshes. There is also scope for extending our method to estimate other scalar or vector fields in tandem with upsampling.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c01512d5adeae23513413460b711158d859c63ee
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,445 @@
+# Peephole Steering: Speed Limitation Models for Steering Performance in Restricted View Sizes
+
+Shota Yamanaka*
+
+Yahoo Japan Corporation
+
+Hiroki Usuba
+
+Meiji University
+
+Haruki Takahashi
+
+Meiji University
+
+Homei Miyashita
+
+Meiji University
+
+
+
+Figure 1: Examples of steering through narrow paths with limited forward views. (a) Lasso operation for selecting multiple objects in illustration software. The user's hand occludes the forward path to be passed through. The viewable forward distance is between the stylus tip and the user's hand. (b) Map navigation in a zoomed-in view proposed in a previous study [18]. When the cursor moves downwards, the viewable forward distance is between the cursor and the window bottom.
+
+## Abstract
+
+The steering law is a model for predicting the time and speed for passing through a constrained path. When people can view only a limited range of the path forward, they limit their speed in preparation for possibly needing to turn at a corner. However, few studies have focused on how limited views affect steering performance, and no quantitative models have been established. The results of a mouse steering study showed that speed was linearly limited by the path width and was limited by the square root of the viewable forward distance. While a baseline model showed an adjusted $R^2 = 0.144$ for predicting the speed, our best-fit model showed an adjusted $R^2 = 0.975$ with only one additional coefficient, demonstrating a comparatively high prediction accuracy for given viewable forward distances.
+
+Index Terms: H.5.2 [User Interfaces]: User Interfaces-Graphical user interfaces (GUI); H.5.m [Information Interfaces and Presentation]: Miscellaneous
+
+## 1 INTRODUCTION
+
+The steering law [1, 14, 27] is a model for predicting the time and speed needed to pass through a constrained path, such as navigation through a hierarchical menu. In HCI, the validity of the steering law has typically been confirmed in a desktop environment, such as by maneuvering a mouse cursor or stylus tip through a path drawn on a display (e.g., [2, 31]). Under such conditions, participants can view the entire path or a substantial portion of it before the trial begins, and they can thus determine the appropriate movement speed for a given path width.
+
+However, conditions under which users can view enough of a long path forward represent an ideal situation. Imagine a user operating a stylus pen in illustration software to select multiple objects with a lasso tool as shown in Fig. 1a. When a right-handed user moves the stylus rightwards, the viewable forward distance is limited due to occlusion by the user's hand. Therefore, to avoid selecting unwanted objects, the user has to move the stylus slowly. In contrast, if the user moves the stylus leftwards through the objects, the movement speed should be less limited because the viewable forward distance is not restricted.
+
+Therefore, for path steering tasks, we assume that the viewable forward distance limits the movement speed as the path width does. Although the effect of limited view sizes has been investigated several times in HCI studies [10, 16, 21, 30], the main interest has been target selection. Furthermore, for steering tasks in which the view of the forward path may be limited, such as map navigation with a magnified view (Fig. 1b, [18]), we found few papers on this topic.
+
+If we can derive models of the relationship between task conditions and outcomes (path width and viewable forward distance vs. movement speed), it would contribute to a better understanding of human motor behavior. However, the robustness of the steering law model under an additional constraint (viewable forward distance) has not been well investigated; this motivated us to conduct this work. In our user study, we conducted a path-steering experiment with a mouse and determined the best-fit model from among candidate formulations. Our key contributions are as follows.
+
+(a) We provide empirical evidence that the viewable forward distance $S$ significantly affects the steering speed. We also justify why the relationship between $S$ and speed can be represented by the power law.
+
+(b) We develop refined models to predict movement speed on the basis of the path width and $S$, which had significant main effects and interaction effects. Our model predicts the speed with an adjusted $R^2 > 0.97$. We also show that the movement time while steering through a view-limited area can be predicted with $R^2 > 0.97$.
+
+We also discuss other findings, e.g., the reason we obtained a conclusion opposite to that of previous studies on peephole pointing [21], in which the speed increased with a narrower $S$.
+
+## 2 RELATED WORK
+
+### 2.1 Steering Law Models
+
+Rashevsky [27, 28], Drury [14], and Accot and Zhai [1] each proposed a mathematically equivalent model to predict the movement speed when passing through a constant-width path:
+
+$$
+V = \text{const} \times W \tag{1}
+$$
+
+where $V$ is the speed and $W$ is the path width. Typically, participants are instructed to perform the task as quickly and accurately as possible. Hence, there are several interpretations of $V$: the possible maximum safe speed $V_{\max}$ in Rashevsky's model, the average speed $V_{avg}$ over a given path length in Drury's model (i.e., $V_{avg} = A/MT$, where the path length is $A$ and the movement time needed is $MT$), and the instantaneous speed at a given moment in Accot and Zhai's model.
+
+---
+
+*e-mail: syamanak@yahoo-corp.jp
+
+---
+
+The validity of this model ($V \propto W$) has been empirically confirmed for (e.g.) car driving [8, 12, 15], pen tablets [39], and mice [31, 37]. Because $V_{avg}$ is defined as $A/MT$, the following equation for predicting $MT$ is also valid [14, 20]:
+
+$$
+{MT} = b\left( {A/{V}_{avg}}\right) \tag{2}
+$$
+
+where $b$ is a constant (hereafter, $a$-$e$ indicate regression coefficients, with or without prime marks, as in $b^{\prime}$). Since $V_{avg} = \text{const} \times W$, Equation 2 can be written as follows.
+
+$$
+MT = b\frac{A}{\text{const} \times W} = b^{\prime}\frac{A}{W} \quad \left( \text{let } b^{\prime} = b/\text{const} \right) \tag{3}
+$$
+
+For predicting both $V_{avg}$ and $MT$, these no-intercept forms are theoretically valid, although the contribution of the intercept is often statistically significant [20], giving the following forms.
+
+$$
+{V}_{\text{avg }} = a + {bW} \tag{4}
+$$
+
+$$
+{MT} = a + b\left( {A/{V}_{avg}}\right) \tag{5}
+$$
+
+$$
+{MT} = a + b\left( {A/W}\right) \tag{6}
+$$
+
+The steering law models for $V_{avg}$ and $MT$ hold when $W$ is narrow relative to the path length. Otherwise, users do not have to pay attention to path boundaries, in which case $W$ does not limit the speed [1, 20, 33]. For mouse steering tasks, $W$ limits the speed when the steering law difficulty ($A/W$) is greater than 10 [31, 33]. Hence, in our user study, we chose the range of $A/W$ ratios for the speed measurement area to include values less than and greater than 10, so that the priority for limiting the movement speed would change between $W$ and $S$. That is, if $W$ is small and $A/W$ is greater than 10, we assume that $W$ strongly limits the speed, whereas if $W$ is sufficiently large that the path width does not restrict the speed, we assume that $S$ restricts the speed more.
+
+### 2.2 Steering Operations with Cornering
+
To accurately predict the $MT$ for steering around a corner as shown in Fig. 1a and b, Pastel [26] refined the model by adding a Fitts' law difficulty term [17] as follows.
+
+$$
+{MT} = a + b\frac{2A}{W} + {cID} \tag{7}
+$$
+
where the first and second path segments before and after the corner have the same length ($A$) and width ($W$), and $ID$ is the index of difficulty in Fitts' law (Pastel used the Shannon formulation [23]: $ID = \log_2(A/W + 1)$). Fitts' law was originally a model for pointing to a target of width $W$ at distance $A$. Therefore, in addition to the difficulty of steering through the entire path, this model also considers the difficulty of decelerating to turn at the corner. However, if users cannot see an approaching corner because of a restricted view, it is difficult to start decelerating with appropriate timing.
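Pastel's corner model can be sketched numerically as follows; only the Shannon $ID$ formulation comes from the text, while the coefficients are illustrative placeholders.

```python
import math

def shannon_id(A, W):
    """Shannon formulation of Fitts' index of difficulty [23]."""
    return math.log2(A / W + 1)

def mt_corner_pastel(A, W, a=0.0, b=0.05, c=0.2):
    """Equation 7: MT = a + b * (2A / W) + c * ID, for two path segments
    of equal length A and width W joined at a corner.
    a, b, and c are hypothetical regression constants."""
    return a + b * (2 * A / W) + c * shannon_id(A, W)
```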
+
### 2.3 Effect of Viewable Forward Distance on Task Performance
+
Peephole pointing [10,21] and magic lens pointing [30] are examples of UI operations with restricted view sizes. The most popular task for peephole pointing is map navigation. When users want to see information about a landmark in a map application on a smartphone or PC, they first scroll the map (search phase) and then select the intended location (selection phase). Because Fitts' law [17] holds for 1D scrolling tasks performed to bring a target into the viewing area [19], the $MT$ changes with $S$, and the total time can be predicted by the sum of the search and selection phases [10,30]. Models for peephole pointing have been validated with a mouse [10], spatially aware phone [30], handheld projector [16,21], and touchscreen [41].
+
+Although the importance of user performance models for the peephole situation is explained in these papers, their main focus has unfortunately been on target selection. An exception that studied the effect of the viewable range in steering-law tasks was the work of Gutwin and Skopik [18], in which an area around the cursor was zoomed in on with radar-view tools (see Figure 3 in [18]). The cursor and view window were moved concurrently, and users moved the window to steer the cursor through a path.
+
There are two differences between the work of Gutwin and Skopik [18] and ours. First, they fixed the window size of the radar view; thus, as the zoom level increased, the corresponding viewable forward distance $S$ decreased. In contrast, in our intended tasks (Fig. 1), $S$ changes, but there is no zooming. They reported that the zoom level did not substantially change $MT$, which is the opposite of the conclusion reached in the peephole pointing studies. In other words, no consistent effect of $S$ on $MT$ was observed across steering and pointing; we revisit this point in the Discussion. The second difference is that the entire view was provided as a miniature view (see Fig. 1b), which assisted the timing of deceleration in preparation for the next corner.
+
In summary, the quantitative relationship between steering performance ($V_{\text{avg}}$ or $MT$) and $S$ is unclear. Yet, knowing this relationship would be beneficial for understanding user behavior in restricted-view situations, which are realistic for some tasks as described in the Introduction. We tackle this challenge through a user study.
+
## 3 MODEL DEVELOPMENT FOR UNCERTAIN CORNERING TIMING
+
As the baseline model for predicting the movement speed, we test Equation 4 ($V_{\text{avg}} = a + bW$) on our experimental data. Also, to check the effect of an additional task parameter (here, $S$) on the estimated result ($V_{\text{avg}}$), the simplest method is to add the additional factor, plus the interaction term between the two predictor variables if it is significant, to the baseline model${}^{1}$. Thus, we test:
+
+$$
+{V}_{\text{avg }} = a + {bW} + {cS} \tag{8}
+$$
+
+$$
+{V}_{\text{avg }} = a + {bW} + {cS} + {dWS} \tag{9}
+$$
+
We next discuss how users limit their speed as they prepare for a corner. As a first step toward deriving a more general model, in this study we fix the width of the second path segment, $W_2$, and give the participants prior knowledge (PK) that $W_2$ is fixed. Nevertheless, the participants do not know the length of the first path segment, and thus the corner appears at an uncertain time.
+
We build on Pastel's model, in which users must decelerate in the first path segment when approaching the corner to safely enter the second path segment [26]. As a more general case, the first and second path segments have different lengths and widths, as shown in Fig. 2a. Pastel's idea for integrating Fitts' $ID$ is that the cursor must stop within the second path area, which has a width of $W_2$, after traveling over the first path segment. Hence, Equation 7 can be rewritten as:
+
+$$
+{MT} = a + b\frac{{A}_{1}}{{W}_{1}} + {cID} + d\frac{{A}_{2}}{{W}_{2}} \tag{10}
+$$
+
where $ID = \log_2(A_1/W_2 + 1)$. That is, the movement amplitude for pointing is the length of the first path segment, and the target size is the width of the second path segment.
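Equation 10 can be transcribed directly, again with placeholder coefficients rather than fitted values:

```python
import math

def mt_corner_general(A1, W1, A2, W2, a=0.0, b=0.05, c=0.2, d=0.05):
    """Equation 10: two unequal segments. The pointing amplitude is the
    first-segment length A1 and the 'target' width is the second-segment
    width W2. All coefficients are hypothetical regression constants."""
    ID = math.log2(A1 / W2 + 1)
    return a + b * (A1 / W1) + c * ID + d * (A2 / W2)
```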
+
+---
+
${}^{1}$ This is explained in introductory statistics textbooks and websites, e.g., https://web.archive.org/web/20190617154140/https://www.cscu.cornell.edu/news/statnews/stnews40.pdf.
+
+---
+
+
+
Figure 2: Steering operations with cornering in which the first and second path segments have different sizes. (a) No-mask and (b) masked conditions. The left and right masks were opaque in the study, rather than semi-transparent as shown here.
+
As shown in Fig. 2b, when a corner has not yet been revealed from behind the forward mask, users can at least move over the viewable forward distance $S$. In case the corner lies just beyond the viewable forward distance, users must adjust their speed as if the second path tolerance ranged from $S$ to $S + W_2$; the "target" center of the second path segment is located at a distance of $S + 0.5W_2$ from the cursor position. Following Pastel, the time needed to perform this pointing motion is modeled by Fitts' law with $S + 0.5W_2$ as the target amplitude and $W_2$ as the width. Another definition of the amplitude in Fitts' law is the distance to the closer edge of the target, which holds empirically [3,4,6]. Using $S$ as the amplitude is thus a simpler choice that does not degrade model fitness.
+
If $W_2$ is sufficiently wide, users can move the cursor rapidly in the first path segment because they can decelerate appropriately as soon as they notice the corner. However, such a task is not considered a steering task in a constrained path, and the path widths ($W_1$ and $W_2$) must not be extremely wide for the steering law to hold [1,14,20]. In this study, we therefore set a reasonably narrow $W_2$ that necessitates careful movements to safely turn at the corner.
+
Another model for pointing tasks is that of Meyer et al. [25]:
+
+$$
+{MT} = b\sqrt{A/W} \tag{11}
+$$
+
where $A$ is the distance to the target center. While the mathematical equivalence between this power model and Fitts' logarithmic model is questioned by Rioul and Guiard [29], they agree that the two models approximate each other well.
+
+On the basis of this discussion, we assume that in practice the ${MT}$ for pointing to the second path segment, which might be just beyond the front mask and which ranged from $S$ to $S + {W}_{2}$ , can be regressed as follows:
+
+$$
+{MT} = b\sqrt{S/{W}_{2}} \tag{12}
+$$
+
Again, the original model of Meyer et al. uses the distance to the target center as the target amplitude ($S + 0.5W_2$), but using $S$ as the amplitude fits well too. The average speed for this movement is defined as the distance to be traveled divided by the time needed to travel it.
+
+$$
+{V}_{avg} = \frac{S}{MT} = \frac{S}{b\sqrt{S/{W}_{2}}} = {b}^{\prime }\sqrt{S \times {W}_{2}}\left( {\text{ let }{b}^{\prime } = 1/b}\right) \tag{13}
+$$
+
+In our experiment, to focus on the new factor $S$ , we fixed the value of ${W}_{2}$ . Equation 13 can thus be further simplified:
+
+$$
+{V}_{\text{avg }} = {b}^{\prime }\sqrt{S \times {W}_{2}} = {b}^{\prime \prime }\sqrt{S}\left( {\text{ let }{b}^{\prime \prime } = {b}^{\prime }\sqrt{{W}_{2}}}\right) \tag{14}
+$$
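The derivation above can be checked numerically; `b` is an arbitrary Meyer-style constant, and the point of the check is that with $W_2$ fixed, the predicted $V_{\text{avg}}$ grows exactly with $\sqrt{S}$.

```python
import math

def v_avg_masked(S, W2=19.0, b=0.05):
    """Equations 12-13: MT = b * sqrt(S / W2), so
    V_avg = S / MT = (1 / b) * sqrt(S * W2).
    b is a hypothetical coefficient; W2 = 19 px as in the experiment."""
    mt = b * math.sqrt(S / W2)
    return S / mt
```

For instance, quadrupling $S$ from 100 to 400 pixels doubles the predicted $V_{\text{avg}}$, as Equation 14 requires.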
+
+
+
Figure 3: Visual stimuli used in the experiment. The left and right masks were opaque in the study, rather than semi-transparent as shown here.
+
In summary, we hypothesize that when the viewable forward distance is limited to $S$, users have to limit their speed in case the second path segment lies just beyond the viewable forward distance, and this behavior is expected to be modeled as $V_{avg} = b\sqrt{S}$. While this model is justified by existing theoretical and empirical evidence, we need to test the validity of our hypothesis empirically. We therefore conduct a path-steering study to evaluate the model combined with the steering law.
+
+## 4 EXPERIMENT
+
+### 4.1 Participants
+
Twelve university students participated (3 females and 9 males; $M = 21.6$, $SD = 1.32$ years). All were right-handed and had normal or corrected-to-normal vision. Six were daily mouse users.
+
+### 4.2 Apparatus
+
The PC was a Sony Vaio Z (2.1 GHz; 8-GB RAM; Windows 7). The display was manufactured by I-O DATA (1920 × 1080 pixels, 527.04 mm × 296.46 mm; 60-Hz refresh rate). A Logitech optical mouse was used (model: G300r; 1000 dpi; 2.05-m cable) on a large mouse pad (42 cm × 30 cm). The experimental system was implemented with Hot Soup Processor 3.5 and was used in full-screen mode. The system read and processed input approximately 125 times per second.
+
The cursor speed was set to the default: the pointer-speed slider was set to the center in the Control Panel. Pointer acceleration, i.e., the Enhance Pointer Precision setting in Windows 7, was enabled to allow mouse operations to be performed with higher ecological validity [11]. Using pointer acceleration does not violate Fitts' law or the steering law [2,36]. The large mouse pad and long mouse cable were used to avoid clutching (repositioning of the mouse) during trials. This was to exclude unwanted factors during model evaluation: if we had allowed clutching and the model fit was poor, we could not have determined whether the poor fit was due to the model formulation or to clutching. No recognizable latency was reported by the participants.
+
+### 4.3 Task
+
The participants had to click on the blue starting line, horizontally steer through a white path of width $W_1$, turn downward at a corner, and then enter a green end area (Fig. 3). After that, an orange area labeled "Next" appeared on the left-side screen edge; entering this area started the next trial. Because the direction of cornering was not a focus of this study, and because a previous study showed that the $MT$ for downward turns is shorter than that for upward turns [33], we chose downward movement for the second path segment to shorten the experiment.
+
+If the cursor entered the gray out-of-path areas, a beep sounded, and the trial was flagged as a steering error $E{R}_{\text{steer }}$ and retried later. If the cursor did not deviate from the blue and white path segments, the trial was flagged as a success, and when entering the green end area a bell sounded (no clicking was needed). The left and right masks moved alongside the cursor.
+
+The participants were instructed to not make any errors and to move the cursor to the end area in as short a time as possible. In addition, we asked them to refrain from clutching while steering. If the participants accidentally clutched or if the mouse reached the right edge of the mouse pad, they were instructed to press the mouse button. Such trials were flagged as invalid and removed from the data analysis. If a steering error or an invalid trial was observed, a beep sounded, and the trial was presented again later in a randomized order.
+
+The measurement area of distance $A$ for recording ${MT}$ and speed is shown in Fig. 3c; that is, it ranges from (b) when the cursor reaches the left edge of the white path to (c) when the viewable range is one pixel away from the second path segment. While in this area, the participants did not know the position of the corner and thus had to move the cursor carefully to avoid deviating from the path. We measured the ${MT}$ spent in the measurement area, and the average speed, which was the dependent variable, was computed as ${V}_{\text{avg }} = A/{MT}$ . Because in the measurement area the participants could not see the corner, the only operation required in this area was to steer through a constrained path with a restricted view.
+
To avoid revealing the position of the corner before the trial began, (1) the cursor had to be moved to the "Next" area at the left edge of the screen at the end of every trial, and (2) the cursor had to stop at the blue starting line and could not move further rightward until the line was clicked. Because the speed when clicking on the blue starting line was zero, we provided a run-up area of 50 pixels (Fig. 3a) so that the cursor was already moving at some speed when the speed measurement began.
+
+### 4.4 Design and Procedure
+
This experiment had a $6 \times 5$ within-subjects design with the following independent variables and levels. We tested six $S$ conditions: 25, 50, 100, 200, and 400 pixels, plus the no-mask condition, which was included to measure baseline performance. The $W_1$ values were 19, 27, 37, 49, and 63 pixels. The movement time and speed were measured in the area shown in Fig. 3c. The average speed was computed as $V_{\text{avg}} = A/MT$ and used as the dependent variable.
+
The width of the end area $W_2$ was fixed at 19 pixels. To prevent participants from noticing that the corner appeared at several fixed positions, we used various $A$ values, and in every trial the starting line had a random offset, ranging from 100 to 400 pixels, from the left-side screen edge. The y-coordinate of the white path center had a random offset ranging from -150 to 150 pixels from the screen center. The $A$ values for the measurement area were 300, 500, and 800 pixels and were not included as an independent variable. For the no-mask condition (baseline), the white-area distance was set to $A + 400 + W_2$ pixels (i.e., the same as in the largest $S$ condition).
+
+The ratio of $A/{W}_{1}$ ranged from 4.76 to 42.1. As discussed in Related Work, we chose the $A$ and ${W}_{1}$ values so that the $A/{W}_{1}$ ratio would range from less than to greater than 10 in order to change the motivation for limiting the speed between ${W}_{1}$ and $S$ . The ${W}_{2}$ value was then set to the smallest ${W}_{1}$ value to require a careful cornering motion.
+
Among the combinations of $6_S \times 5_{W_1} \times 3_A = 90$ patterns, 10 trials were randomly selected as practice trials. The participants then attempted three sessions of 90 data-collection trials. In total, we recorded $90_{\text{patterns}} \times 3_{\text{sessions}} \times 12_{\text{participants}} = 3240$ successful data points. The study took approximately 40 min per participant.
+
+### 4.5 Results
+
+For the error rate analysis, we used non-parametric ANOVA with the Aligned Rank Transform (ART) [35] and Tukey’s method for p-value adjustment in posthoc comparisons. For the speed data analysis, we used repeated-measures ANOVA and the Bonferroni correction as the p-value adjustment method. Note that ANOVA is robust even when experimental data are non-normal [7].
+
+
+
+Figure 4: Results for error rate over the entire trial of the experiment.
+
+#### 4.5.1 Errors
+
Steering Error over the Entire Trial. The numbers of $ER_{\text{steer}}$ errors for the smallest to largest $S$ values were 144, 103, 41, 49, 37, and 46, respectively. The numbers of $ER_{\text{steer}}$ errors for the narrowest to widest $W_1$ values were 126, 85, 70, 74, and 65, respectively. The mean $ER_{\text{steer}}$ rate was 11.5%, which was slightly higher than that found in previous work on mouse steering tasks (9% [2]). Based on the experimenter's observation, the point with the highest error rate was the corner.
+
We observed main effects of $S$ ($F_{5,55} = 7.830$, $p < 0.001$, $\eta_p^2 = 0.42$) and $W_1$ ($F_{4,44} = 3.529$, $p < 0.05$, $\eta_p^2 = 0.24$) on the $ER_{\text{steer}}$ rate over the entire trial. Fig. 4 shows these results. Post-hoc tests showed significant differences between six pairs of $S$ values: (25, 100), (25, 200), (25, 400), (25, no-mask), (50, 100), and (50, 400), with $p < 0.05$ for all pairs. $W_1 = 19$ and 49 also showed a significant difference ($p < 0.05$). No significant interaction was found between $S$ and $W_1$ ($F_{20,220} = 1.539$, $p = 0.07055$, $\eta_p^2 = 0.12$).
+
We expected the $ER_{\text{steer}}$ rate to decrease for greater $S$ values because the risk of deviating from the first path segment is lower in those cases. However, at the same time, the participants were able to move the cursor more rapidly with greater $S$, as shown in Section 4.5.2. Rapid movement increased the possibility of deviating from the path, and thus Fig. 4 does not show a monotonic tendency.
+
Steering Error in the Measurement Area. The numbers of $ER_{\text{steer}}$ errors for the smallest to largest $S$ values were 12, 9, 18, 15, 9, and 10, respectively. The numbers of $ER_{\text{steer}}$ errors for the narrowest to widest $W_1$ values were 40, 13, 9, 9, and 2, respectively. The mean $ER_{\text{steer}}$ rate was 2.20%. We observed main effects of $S$ ($F_{5,55} = 15.05$, $p < 0.001$, $\eta_p^2 = 0.58$) and $W_1$ ($F_{4,44} = 9.362$, $p < 0.05$, $\eta_p^2 = 0.46$) on the $ER_{\text{steer}}$ rate in the measurement area. Fig. 5 shows these results. Post-hoc tests showed significant differences between eight pairs of $S$ values: (25, 100), (25, 400), (50, 100), (50, 200), (100, 200), (100, 400), (100, no-mask), and (200, 400), with $p < 0.05$ for all pairs. Also, four pairs showed significant differences between $W_1 = 19$ and the other values with $p < 0.05$. The interaction of $S \times W_1$ was significant ($F_{20,220} = 3.262$, $p < 0.001$, $\eta_p^2 = 0.23$). Fig. 6 shows this result.
+
+#### 4.5.2 Average Speed
+
We found significant main effects of $S$ ($F_{5,55} = 69.98$, $p < 0.001$, $\eta_p^2 = 0.86$) and $W_1$ ($F_{4,44} = 180.6$, $p < 0.001$, $\eta_p^2 = 0.94$) on $V_{\text{avg}}$. The $V_{\text{avg}}$ values for $S = 25$ to 400 pixels and the no-mask condition were 154, 224, 320, 420, 520, and 545 pixels/s, respectively. Post-hoc tests showed significant differences between all $S$ pairs (at least $p < 0.01$) except for one pair ($S = 400$ and the no-mask condition). The $V_{\text{avg}}$ values for $W_1 = 19$ to 63 pixels were 240, 302, 365, 428, and 485 pixels/s, respectively. Significant differences were found for all $W_1$ pairs ($p < 0.001$).
+
Figure 5: Results for error rate in the measurement area of the experiment (* $p < 0.05$; error bars denote SD across participants).
+
+
+
Figure 6: Interaction of error rate in the measurement area of the experiment. The six bars in each cluster show the results for $S = 25$, 50, 100, 200, and 400 pixels and the no-mask condition, respectively, from left to right.
+
The interaction of $S \times W_1$ was significant ($F_{20,220} = 48.70$, $p < 0.001$, $\eta_p^2 = 0.82$). As shown in Fig. 7a, we observed the following results.
+
+- For $S = {200}$ and 400 pixels and the no-mask condition, ${V}_{\text{avg }}$ decreased as ${W}_{1}$ decreased. This means that when the viewable forward distance is long, the path width can restrict the speed following the steering law.
+
- However, as $S$ decreased, the effect of $W_1$ in limiting the speed tended to be smaller, i.e., the $V_{\text{avg}}$ differences became non-significant for more $W_1$ pairs. This indicates that when the viewable forward distance is short, the speed is already limited by $S$ and thus is not greatly affected by changes in $W_1$.
+
Furthermore, Fig. 7b shows that for all five $W_1$ values, there were no significant differences between $S = 400$ pixels and the no-mask condition. Therefore, in our experimental setting, $S = 400$ pixels was sufficient to eliminate the effect of the masks on $V_{\text{avg}}$. If we had included longer $A$ values, however, $V_{avg}$ for the no-mask condition might have been much higher. We therefore avoid concluding that the steering performance for $S = 400$ pixels is equivalent to that for the no-mask condition.
+
+To analyze the speed profiles in the measurement area, we resampled the cursor trajectory every 25 pixels to reduce noise in the raw data. Fig. 8a shows that speed changes for all five ${W}_{1}$ values are not evident for the narrowest $S$ , similarly to Fig. 7a. Then, as $S$ increases, the effects of ${W}_{1}$ on ${V}_{\text{avg }}$ are exhibited more clearly (Fig. 8b and c). In the same manner, Fig. 8d-f show that the effects of $S$ become clearer as ${W}_{1}$ increases.
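The resampling step can be sketched as follows; this is our own reconstruction of a simple arc-length resampler, not the authors' analysis code.

```python
import math

def resample_by_distance(points, step=25.0):
    """Resample a cursor trajectory (list of (x, y) tuples) at fixed
    arc-length intervals (here every `step` pixels), as done to reduce
    noise in the speed profiles. Keeps the first sample, then emits one
    point each time the cursor has traveled `step` pixels along the path."""
    if not points:
        return []
    out = [points[0]]
    dist_to_next = step  # path distance remaining until the next sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while seg >= dist_to_next:
            # Interpolate the sample point dist_to_next along this segment.
            t = dist_to_next / seg
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= dist_to_next
            out.append((x0, y0))
            dist_to_next = step
        dist_to_next -= seg
    return out
```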
+
Fig. 9a and b show that the power model is more appropriate than the linear model for predicting $V_{\text{avg}}$ from $S$. Here, we merged the five $W_1$ values to clearly illustrate the prediction accuracy when using $S$ and $\sqrt{S}$. Similarly, Fig. 9c shows a high correlation between $V_{\text{avg}}$ and $W_1$ in the restricted-view conditions; this plot merges the five $S$ conditions (excluding the no-mask condition).
+
+
+
Figure 7: Interaction of $S \times W_1$ on $V_{\text{avg}}$. Pairs that are not significantly different are annotated; for all other pairs, $p < 0.05$.
+
+
+
+Figure 8: Speed profiles in the measurement area for $A = {800}$ . (a-c) Speeds for five ${W}_{1}$ values for a given $S$ . (d-f) Speeds for six $S$ values for a given ${W}_{1}$ .
+
We also confirmed that the steering law in the form $V_{\text{avg}} = a + bW_1$ (Equation 4) fit reasonably well for each $S$, as shown in Fig. 10. This indicates that the steering law holds when $S$ is fixed. In addition, the slopes decreased as $S$ decreased; this means that a small $S$ prevents wider $W_1$ values from increasing $V_{\text{avg}}$.
+
Fig. 11 shows that the power model ($V_{\text{avg}} = a + b\sqrt{S}$, Equation 14 with an intercept) held for large $W_1$ values because the speed was mainly limited by $S$ rather than $W_1$ in this case. However, when $W_1$ was small (19 or 27 pixels), the model fits degraded to $R^2 = 0.91$. This is because, as statistically shown in Fig. 7b, for small values of $W_1$ the speed was already limited by $W_1$. This result supports the necessity of accounting for the interaction effect between $S$ and $W_1$ on the speed.
+
When we used a single steering-law regression line for $N = 25$ data points ($V_{\text{avg}} = a + bW_1$ for $5_S \times 5_{W_1}$, without the no-mask condition), the fit was poor: $R^2 = 0.180$. This is because $S$ significantly changed $V_{\text{avg}}$. Because we have shown theoretically and empirically that both $W_1$ and $S$ limit the speed, we would like to integrate the two factors.
+
Table 1: Model fitting results for predicting $V_{\text{avg}}$ for $N = 25$ data points ($5_S \times 5_{W_1}$) with adjusted $R^2$ (higher is better) and $AIC$ (lower is better) values. $a$ through $d$ are estimated coefficients with their significance levels (*** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$, and no asterisk for $p > 0.05$) and 95% CIs [lower, upper].
+
| Model | $a$ | $b$ | $c$ | $d$ | Adj. $R^2$ | $AIC$ |
|---|---|---|---|---|---|---|
| (#1) $a + bW_1$ | 162 [-2.20, 327] | 4.24* [0.331, 8.15] | | | 0.144 | 327 |
| (#2) $a + bS + cW_1$ | 20.5 [-66.8, 108] | 0.914*** [0.695, 1.13] | 4.24*** [2.33, 6.16] | | 0.796 | 292 |
| (#3) $a + bS + cW_1 + dW_1S$ | 167*** [87.2, 248] | -0.0342 [-0.423, 0.355] | 0.473 [-1.44, 2.38] | 0.0243*** [0.0151, 0.0336] | 0.912 | 272 |
| (#4) $a + bW_1S$ | 182*** [155, 208] | 0.0241*** [0.0211, 0.0272] | | | 0.916 | 269 |
| (#5) $a + bW_1 + cW_1S$ | 162*** [110, 214] | 0.590 [-0.746, 1.93] | 0.0236*** [0.0202, 0.0269] | | 0.916 | 270 |
| (#6) $a + b\sqrt{S} + cW_1$ | -111* [-197, -24.6] | 24.3*** [19.6, 29.0] | 4.24*** [2.63, 5.86] | | 0.855 | 283 |
| (#7) $a + b\sqrt{S} + cW_1 + dW_1\sqrt{S}$ | 162*** [95.3, 229] | 0.00463 [-5.36, 5.37] | -2.76** [-4.35, -1.17] | 0.622*** [0.495, 0.750] | 0.974 | 241 |
| (#8) $a + bW_1\sqrt{S}$ | 95.5*** [62.8, 128] | 0.529*** [0.467, 0.592] | | | 0.927 | 265 |
| (#9) $a + bW_1 + cW_1\sqrt{S}$ | 162*** [134, 190] | -2.76*** [-3.60, -1.91] | 0.623*** [0.576, 0.669] | | 0.975 | 239 |
+
+
+
Figure 9: Model fitting results on $V_{\text{avg}}$ with (a) $S$, (b) $\sqrt{S}$, and (c) $W_1$.
+
+
+
+Figure 10: Model fitting results of ${V}_{\text{avg }} = a + b{W}_{1}$ for each $S$ .
+
+### 4.6 Model Fitness Comparison
+
To statistically determine the best model, we compared the adjusted $R^2$ and Akaike information criterion (AIC) [5]. The AIC balances the number of regression coefficients against the goodness of fit to identify the comparatively best model: (a) a model with a lower AIC value is better, (b) a model with $AIC \leq (AIC_{\text{minimum}} + 2)$ may be as good as the one with the minimum AIC, and (c) a model with $AIC \geq (AIC_{\text{minimum}} + 10)$ can safely be rejected [9].
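These criteria are straightforward to compute from a least-squares fit. The sketch below assumes a Gaussian OLS likelihood, so absolute AIC values differ from those of statistics packages by a constant, but AIC differences, which are all that matter here, do not.

```python
import math

def adjusted_r2(ss_res, ss_tot, n, k):
    """Adjusted R^2 for n observations and k predictors (intercept excluded)."""
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def aic_gaussian_ols(ss_res, n, k):
    """AIC = n * ln(SSE / n) + 2 * p, where p counts the k slopes,
    the intercept, and the error variance."""
    return n * math.log(ss_res / n) + 2 * (k + 2)

def comparable(aic_a, aic_b):
    """Rule (b) above: models within 2 AIC units are comparable [9]."""
    return abs(aic_a - aic_b) <= 2.0
```

By this rule, Models #7 (AIC = 241) and #9 (AIC = 239) in Table 1 are comparable, while Model #1 (AIC = 327) is safely rejected.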
+
Table 1 lists the model fitting results. Model #1 is the baseline steering-law model. Models #2 to #5 add the new factor ($S$) as a linear term. Because we found a significant main effect of $S$ on $V_{\text{avg}}$, Model #2, which adds $S$ to the baseline, improves the fit over Model #1. Model #3 adds the interaction factor $S \times W_1$; because this term was significant, this model fits better than Model #2.
+
+Models #2 and #3 are simply derived following the statistical analysis method. Note that in Model #3, the main effects of $S$ and ${W}_{1}$ were not significant $\left( {p > {0.05}}\right)$ . This means that the effects of $S$ and ${W}_{1}$ on increasing ${V}_{avg}$ can be captured by the interaction factor. Therefore, we also tested the fitness of Model #4. Model #5 was tested for the sake of completeness and consistency with the power model (described below).
+
Models #6 to #9 are power functions based on Equation 14 with an intercept. Models #6 and #7 are derived similarly to the linear ones. In Model #7, $\sqrt{S}$ was not a significant contributor ($p = 0.999$); Model #9 tests the fit after eliminating this term. For consistent comparison with the linear models, we also tested an interaction-factor-only model (#8). Model #5 was likewise tested as a comparison with #9.
+
+
+
+Figure 11: Model fitting results of ${V}_{\text{avg }} = a + b\sqrt{S}$ for each ${W}_{1}$ .
+
Models #7 and #9 are the best-fit models according to their AIC values; the difference between their AIC values was 1.997 (< 2). Furthermore, the difference in their adjusted $R^2$ values was less than 1%. When prediction accuracies are not significantly different, a model with fewer free parameters has better utility; we therefore recommend using Model #9 to predict $V_{\text{avg}}$.
+
+By applying Model #9 to Equation 5 (i.e., ${MT} = a + b\left\lbrack {A/{V}_{\text{avg }}}\right\rbrack$ ), we can also predict ${MT}$ for the measurement area as follows.
+
+$$
+{MT} = a + b\frac{A}{c + d{W}_{1} + e{W}_{1}\sqrt{S}}
+$$
+
+$$
+= a + \frac{b}{e} \times \frac{A}{c/e + {W}_{1}\left( {d/e + \sqrt{S}}\right) }
+$$
+
+$$
+= a + {b}^{\prime }\frac{A}{{c}^{\prime } + {W}_{1}\left( {{d}^{\prime } + \sqrt{S}}\right) } \tag{15}
+$$
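Using the Model #9 point estimates from Table 1, the $V_{\text{avg}}$ prediction and the corresponding Equation 15 form can be sketched as follows; the $MT$ coefficients `a` and `b` in `mt_eq15` are hypothetical placeholders ($a = 0$, $b = 1$), not fitted values.

```python
import math

def v_avg_model9(W1, S, a=162.0, b=-2.76, c=0.623):
    """Model #9 (Table 1): V_avg = a + b * W1 + c * W1 * sqrt(S),
    with the reported point estimates as defaults (pixels, pixels/s)."""
    return a + b * W1 + c * W1 * math.sqrt(S)

def mt_eq15(A, W1, S, a=0.0, b=1.0):
    """Equation 15: MT = a + b * A / (c' + W1 * (d' + sqrt(S))), written
    here via the un-rescaled Model #9 denominator. a and b are hypothetical."""
    return a + b * A / v_avg_model9(W1, S)
```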
+
For $N = 75$ data points ($= 5_S \times 5_{W_1} \times 3_A$), the baseline steering law model (Equation 6: $MT = a + b[A/W_1]$) showed a poor fit (Fig. 12a). Note that because we discuss performance in the measurement area, the second-path steering difficulty is irrelevant to the $MT$ prediction presented here; Equation 10, $MT = a + b(A_1/W_1) + cID + d(A_2/W_2)$, is not used. Our model, in which the new steering difficulty is $A/(c^{\prime} + W_1[d^{\prime} + \sqrt{S}])$, predicts $MT$ more accurately (Fig. 12b).
+
+## 5 Discussion
+
+### 5.1 Findings and Result Validity
+
In two previous studies on peephole pointing, the error rate was smallest for the smallest $S$ and greatest for the largest $S$ [10,21]. It was assumed that the participants were overly careful with small $S$ values and overly relaxed with large $S$ values when selecting the target. In contrast, we observed that the error rates tended to be higher for smaller $S$ values. This inconsistency could be due to differing error definitions. In the previous two studies, conventional pointing misses were counted as errors, so overshooting the target while aiming was permitted, whereas in our study, overshooting at the end of the first path segment was not allowed (i.e., cornering misses were counted as errors).
+
+
+
+Figure 12: (a) Model fitting for ${MT}$ using the baseline formulation (Equation 6, ${MT} = a + b\left( {A/{W}_{1}}\right)$ ). (b) Model fitting for ${MT}$ using our proposed formulation (Equation 15). (c) Predicting ${MT}$ as mentioned in the Discussion.
+
Kaufmann and Ahlström also showed that the movement speed tended to decrease as $S$ increased, both with and without prior knowledge (PK) of the target position [21]. They explained this as follows: "With small peepholes, participants were eager to uncover the target location by scanning the workspace as quickly as possible; accepting that they would overshoot." Hence, prior to our work, it was thought in the HCI field that for peephole interactions the speed decreases as $S$ increases. In our experiment, users could not perform such "quick scanning" because they had to safely turn at the corner.
+
In previous studies on peephole pointing [10,21], only the targets were drawn on the workspace, and thus participants could easily recognize targets via quick scanning. In contrast, in peephole steering such as map navigation [18], quick scanning cannot be performed; if users lose sight of the current road from the peephole window, they have to find the previous road from among several roads and then return to the navigation task. Therefore, we present a new finding on peephole interactions: a larger $S$ increases the speed in overshoot-prohibited conditions (our results) but decreases the speed in overshoot-permitted conditions [10,21]. This finding contributes to a better understanding of users' strategies in peephole situations.
+
+### 5.2 Other Experimental Design Choices
+
+Our research question regarded the effect of viewable forward distance on the path-steering speed when users have to react to a corner. An alternative is to use a dead end: users must steer through a path and then stop in front of a wall without overshooting. This is called a targeted-steering task $\left\lbrack {{13},{22}}\right\rbrack$ , in which the stopping motion is modeled by Fitts' law. We thus assume that the appropriate model for this task will be similar to our proposed models, but this requires further experiments.
+
+Using only the right-side mask is another possibility for the experiment. We included the left-side mask for consistency with previous works on peephole pointing. Regarding a lasso selection task using a direct input pen tablet (Fig. 1a), the width of the forward mask ${W}_{\text{mask }}$ would affect the speed because the user’s hand would occlude the forward path, but beyond the hand, the path would be visible.
+
+While more experimental designs are possible and the resultant speed would change, our experimental data were internally valid. Thus, the fact that "other experimental designs are possible" does not undermine the validity of our models. If user performance under new conditions were to yield different conclusions, that would provide further contributions to the field.
+
+### 5.3 Implications for HCI-related Tasks
+
+Based on our findings, for mouse steering tasks, the speed and ${MT}$ should change with $S$, but a related work on map navigation with a radar view showed no clear changes in ${MT}$ [18]. Currently, we have no answer as to whether this inconsistency comes from the fact that they used a miniature view to show the entire map and/or used magnification, or from inaccuracies or blind spots in our own models. The interaction between $S$ and magnification levels on ${V}_{\text{avg}}$ is also unclear and thus should be investigated. As demonstrated in this discussion, our work motivates us to rethink the validity of existing work and opens up new topics to be studied.
+
+Our models could be beneficial in reducing the effort needed to measure users' operation speed for given screen sizes. Once test users operate a map application as in Gutwin and Skopik's study [18] with several screen sizes, the resultant ${V}_{avg}$ values can be recorded, and we can then predict the ${V}_{avg}$ for other screen sizes. For example, when we tested only $S = 25$ and 400 pixels ($N = 2_S \times 5_{W_1} = 10$ data points), Model #9 yielded $a = 130$, $b = -2.47$, and $c = 0.622$ with ${R}^{2} = 0.998$. Using these constants, we can predict ${V}_{\text{avg}}$ for $S = 50$, 100, and 200 pixels ($N = 15$), with ${R}^{2} > 0.96$ for predicted vs. observed ${V}_{\text{avg}}$ values (Fig. 12c). Hence, depending on the new screen sizes, the ${V}_{\text{avg}}$ at which test users can perform can be estimated accurately. Importantly, as we showed that the relationship between $S$ and ${V}_{avg}$ was not linear and that the interaction of $S \times {W}_{1}$ was significant, it is difficult to accurately predict ${V}_{\text{avg}}$ for a given $S$ and ${W}_{1}$ without our proposed model.
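The two-step procedure above (fit on two $S$ values, then predict the rest) can be sketched as follows. The functional form, factor levels, and "measured" speeds are illustrative assumptions, since the algebraic form of Model #9 is not restated in this paragraph; only the fit-then-predict workflow is the point.

```python
import numpy as np

# Sketch of the calibration workflow: fit a three-constant speed model on
# data from only two S values, then predict V_avg for unseen S values.
# The model form below (intercept + W1 term + sqrt(S*W1) term) is an
# illustrative assumption, not necessarily the paper's Model #9.
def design_matrix(S, W1):
    return np.column_stack([np.ones_like(S), W1, np.sqrt(S * W1)])

def fit(S, W1, V):
    coeffs, *_ = np.linalg.lstsq(design_matrix(S, W1), V, rcond=None)
    return coeffs

def predict(coeffs, S, W1):
    return design_matrix(S, W1) @ coeffs

# Synthetic "measured" speeds generated from hypothetical true constants.
true = np.array([130.0, 2.0, 0.6])
W1_levels = np.array([15.0, 25.0, 35.0, 45.0, 55.0])  # hypothetical widths
S_cal = np.repeat([25.0, 400.0], 5)                   # 2_S x 5_W1 = 10 points
W1_cal = np.tile(W1_levels, 2)
coeffs = fit(S_cal, W1_cal, predict(true, S_cal, W1_cal))

# Predict speeds for the untested sizes S = 50, 100, and 200 (N = 15).
S_new = np.repeat([50.0, 100.0, 200.0], 5)
V_pred = predict(coeffs, S_new, np.tile(W1_levels, 3))
print(np.round(coeffs, 3), V_pred.shape)
```

With noiseless synthetic data the least-squares fit recovers the generating constants exactly; in practice the two-$S$ calibration would of course carry measurement noise.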
+
+### 5.4 Limitations and Future Work
+
+Our results and discussion are somewhat limited due to the task conditions used in the study, e.g., we did not test circular paths [2] or different curvatures [38]. Also, $A$ ranged from 300 to 800 pixels, but if $A$ is too short or ${W}_{1}$ is too wide, the task finishes before the speed reaches its potential maximum value $\left\lbrack {{32},{34}}\right\rbrack$. We limited these values so that they were not extremely short or wide in order to observe the effects of $S$. Investigating the valid ranges of $S$, ${W}_{1}$, and $A$ at which our models hold remains for our future work.
+
+The width of the second path segment ${W}_{2}$ was fixed at 19 pixels and thus was not dealt with as an independent variable. In the derivation of Equation 14 ( ${V}_{\text{avg }} = {b}^{\prime }\sqrt{S \times {W}_{2}} = {b}^{\prime \prime }\sqrt{S}$ ), ${V}_{\text{avg }}$ was originally assumed to increase with ${W}_{2}$ . This may be true: if users know that ${W}_{2}$ is wide, such as 200 pixels, the necessity for quick deceleration would decrease. However, this depends on whether users have prior knowledge ${PK}$ of ${W}_{2}$ . If users do not know ${W}_{2}$ , they have to begin to decelerate as soon as part of the end area is revealed in preparation for a narrow ${W}_{2}$ . Hence, we have to account for human online response skills, i.e., immediate hand-movement correction in response to a given visual stimulus [24,40].
+
+${PK}$ of the position or timing of when a corner appears would also affect the speed. We tested only a no- ${PK}$ condition regarding the corner position, which corresponds to conditions in which users do not know the layout of objects in lassoing tasks. In contrast, if users know the layout, control is possible at much higher speeds while avoiding unintended selection. A kind of medium-PK is also possible in our experiment. That is, although the participants did not know $A$ beforehand, if the second path segment did not appear on the left half of the screen, the participants realized that they needed to decelerate because the corner must have been in the remaining space. Employing a complete no- ${PK}$ condition to evaluate peephole pointing and steering would therefore be difficult for desktop environments, although this technical limitation has not been explicitly mentioned in the literature $\left\lbrack {{10},{16},{21},{30}}\right\rbrack$ .
+
+## 6 CONCLUSION
+
+We presented an experiment to investigate the effects of viewable forward distance $S$ on path-steering speeds. In the path-steering tasks with cornering at an uncertain time, the relationship between $S$ and the speed followed a power law (square root), and the interaction between path width ${W}_{1}$ and $S$ was accounted for to accurately predict the speed. The best-fit model showed an adjusted ${R}^{2} = {0.975}$ with only one additional constant added to the baseline steering law, which also yielded an accurate model for task completion times. Interestingly, opposite conclusions were derived depending on the task requirements; a shorter $S$ increased the speed in peephole pointing $\left\lbrack {{10},{21}}\right\rbrack$ but decreased it in our path-steering experiment. Although few studies have focused on the effects of $S$ on user performance, the importance of this topic will increase with the growth of devices with limited view areas, such as smartphones and tablets, and thus, we hope this topic is revisited by many more researchers in the future.
+
+## REFERENCES
+
+[1] J. Accot and S. Zhai. Beyond Fitts' law: models for trajectory-based HCI tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '97), pp. 295-302, 1997. doi: 10.1145/258549.258760
+
+[2] J. Accot and S. Zhai. Performance evaluation of input devices in trajectory-based tasks: An application of the steering law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99, pp. 466-472. ACM, New York, NY, USA, 1999. doi: 10.1145/302979.303133
+
+[3] J. Accot and S. Zhai. More than dotting the i's - foundations for crossing-based interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '02), pp. 73-80, 2002. doi: 10.1145/503376.503390
+
+[4] J. Accot and S. Zhai. Refining Fitts' law models for bivariate pointing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, pp. 193-200. ACM, New York, NY, USA, 2003. doi: 10.1145/642611.642646
+
+[5] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, Dec 1974. doi: 10.1109/TAC.1974.1100705
+
+[6] G. Apitz, F. Guimbretière, and S. Zhai. Foundations for designing and evaluating user interfaces based on the crossing paradigm. ACM Trans. Comput.-Hum. Interact., 17(2):9:1-9:42, May 2010. doi: 10.1145/1746259.1746263
+
+[7] M. Blanca, R. Alarcón, J. Arnau, R. Bono, and R. Bendayan. Non-normal data: Is ANOVA still a valid option? Psicothema, 29(4):552-557, 2017. doi: 10.7334/psicothema2016.383
+
+[8] D. J. Bottoms. The interaction of driving speed, steering difficulty and lateral tolerance with particular reference to agriculture. Ergonomics, 26(2):123-139, 1983. doi: 10.1080/00140138308963324
+
+[9] K. P. Burnham and D. R. Anderson. Model selection and multimodel inference: a practical information-theoretic approach. Springer, 2 ed., 2002.
+
+[10] X. Cao, J. J. Li, and R. Balakrishnan. Peephole pointing: Modeling acquisition of dynamically revealed targets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1699-1708. ACM, New York, NY, USA, 2008. doi: 10.1145/1357054.1357320
+
+[11] G. Casiez and N. Roussel. No more bricolage!: Methods and tools to characterize, replicate and compare pointing transfer functions. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pp. 603-614. ACM, New York, NY, USA, 2011. doi: 10.1145/2047196.2047276
+
+[12] K. DeFazio, D. Wittman, and C. Drury. Effective vehicle width in self-paced tracking. Applied Ergonomics, 23(6):382-386, 1992. doi: 10.1016/0003-6870(92)90369-7
+
+[13] J. T. Dennerlein, D. B. Martin, and C. Hasser. Force-feedback improves performance for steering and combined steering-targeting tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '00, pp. 423-429. ACM, New York, NY, USA, 2000. doi: 10.1145/332040.332469
+
+[14] C. G. Drury. Movements with lateral constraint. Ergonomics, 14(2):293-305, 1971. doi: 10.1080/00140137108931246
+
+[15] C. G. Drury and P. Dawson. Human factors limitations in fork-lift truck performance. Ergonomics, 17(4):447-456, 1974. doi: 10.1080/00140137408931376
+
+[16] B. Ens, D. Ahlström, and P. Irani. Moving ahead with peephole pointing: Modelling object selection with head-worn display field of view limitations. In Proceedings of the 2016 Symposium on Spatial User Interaction, SUI '16, pp. 107-110. ACM, New York, NY, USA, 2016. doi: 10.1145/2983310.2985756
+
+[17] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381-391, 1954. doi: 10.1037/h0055392
+
+[18] C. Gutwin and A. Skopik. Fisheyes are good for large steering tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, pp. 201-208. ACM, New York, NY, USA, 2003. doi: 10.1145/642611.642648
+
+[19] K. Hinckley, E. Cutrell, S. Bathiche, and T. Muss. Quantitative analysis of scrolling techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, pp. 65-72. ACM, New York, NY, USA, 2002. doi: 10.1145/503376.503389
+
+[20] E. R. Hoffmann. Review of models for restricted-path movements. International Journal of Industrial Ergonomics, 39(4):578-589, 2009. doi: 10.1016/j.ergon.2008.02.007
+
+[21] B. Kaufmann and D. Ahlström. Revisiting peephole pointing: A study of target acquisition with a handheld projector. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services, MobileHCI '12, pp. 211-220. ACM, New York, NY, USA, 2012. doi: 10.1145/2371574.2371607
+
+[22] S. Kulikov and W. Stuerzlinger. Targeted steering motions. In CHI '06 Extended Abstracts on Human Factors in Computing Systems, CHI EA '06, pp. 983-988. ACM, New York, NY, USA, 2006. doi: 10.1145/1125451.1125640
+
+[23] I. S. MacKenzie. Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction, 7(1):91-139, 1992. doi: 10.1207/s15327051hci0701_3
+
+[24] M. McGuffin and R. Balakrishnan. Acquisition of expanding targets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, pp. 57-64. ACM, New York, NY, USA, 2002. doi: 10.1145/503376.503388
+
+[25] D. Meyer, R. Abrams, S. Kornblum, C. Wright, and J. E. Keith Smith. Optimality in human motor performance: Ideal control of rapid aimed movements. Psychological Review, 95(3):340-370, 1988. doi: 10.1037/0033-295X.95.3.340
+
+[26] R. Pastel. Measuring the difficulty of steering through corners. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pp. 1087-1096. ACM, New York, NY, USA, 2006. doi: 10.1145/1124772.1124934
+
+[27] N. Rashevsky. Mathematical biophysics of automobile driving. Bulletin of Mathematical Biophysics, 21(4):375-385, 1959. doi: 10.1007/BF02477896
+
+[28] N. Rashevsky. Mathematical biophysics of automobile driving IV. Bulletin of Mathematical Biophysics, 32(1):71-78, 1970. doi: 10.1007/BF02476794
+
+[29] O. Rioul and Y. Guiard. Power vs. logarithmic model of Fitts' law: A mathematical analysis. Mathematical Social Sciences, 199:85-96, 2012. doi: 10.4000/msh.12317
+
+[30] M. Rohs and A. Oulasvirta. Target acquisition with camera phones when used as magic lenses. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1409-1418. ACM, New York, NY, USA, 2008. doi: 10.1145/1357054.1357275
+
+[31] R. Senanayake and R. S. Goonetilleke. Pointing device performance in steering tasks. Perceptual and Motor Skills, 122(3):886-910, 2016. doi: 10.1177/0031512516649717
+
+[32] R. Senanayake, E. R. Hoffmann, and R. S. Goonetilleke. A model for combined targeting and tracking tasks in computer applications. Experimental Brain Research, 231(3):367-379, Nov 2013. doi: 10.1007/s00221-013-3700-4
+
+[33] N. Thibbotuwawa, R. S. Goonetilleke, and E. R. Hoffmann. Constrained path tracking at varying angles in a mouse tracking task. Human Factors, 54(1):138-150, 2012. doi: 10.1177/0018720811424743
+
+[34] N. Thibbotuwawa, E. R. Hoffmann, and R. S. Goonetilleke. Open-loop and feedback-controlled mouse cursor movements in linear paths. Ergonomics, 55(4):476-488, 2012. doi: 10.1080/00140139.2011.644587
+
+[35] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins. The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 143-146. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1978963
+
+[36] S. Yamanaka. Mouse cursor movements towards targets on the same screen edge. In Proceedings of Graphics Interface 2018, GI 2018, pp. 106-113. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2018. doi: 10.20380/GI2018.14
+
+[37] S. Yamanaka. Steering performance with error-accepting delays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 570:1-570:9. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300800
+
+[38] S. Yamanaka and H. Miyashita. Modeling pen steering performance in a single constant-width curved path. In Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces, ISS '19, pp. 65-76. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3343055.3359697
+
+[39] S. Yamanaka, W. Stuerzlinger, and H. Miyashita. Steering through sequential linear path segments. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 232-243. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025836
+
+[40] S. Zhai, S. Conversy, M. Beaudouin-Lafon, and Y. Guiard. Human on-line response to target expansion. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, pp. 177-184. ACM, New York, NY, USA, 2003. doi: 10.1145/642611.642644
+
+[41] J. Zhao, R. W. Soukoreff, X. Ren, and R. Balakrishnan. A model of scrolling on touch-sensitive displays. International Journal of Human-Computer Studies, 72(12):805-821, 2014. doi: 10.1016/j.ijhcs.2014.07.003
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7ce30c3c5399530d593e3ba4122b04357b0eb799
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/PYdm4i9o062/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,378 @@
+§ PEEPHOLE STEERING: SPEED LIMITATION MODELS FOR STEERING PERFORMANCE IN RESTRICTED VIEW SIZES
+
+Shota Yamanaka*
+
+Yahoo Japan Corporation
+
+Hiroki Usuba
+
+Meiji University
+
+Haruki Takahashi
+
+Meiji University
+
+Homei Miyashita
+
+Meiji University
+
+
+Figure 1: Examples of steering through narrow paths with limited forward views. (a) Lasso operation for selecting multiple objects in illustration software. The user's hand occludes the forward path to be passed through. The viewable forward distance is between the stylus tip and the user's hand. (b) Map navigation in a zoomed-in view proposed in a previous study [18]. When the cursor moves downwards, the viewable forward distance is between the cursor and the window bottom.
+
+§ ABSTRACT
+
+The steering law is a model for predicting the time and speed for passing through a constrained path. When people can view only a limited range of the path ahead, they limit their speed in preparation for a possible turn at a corner. However, few studies have focused on how limited views affect steering performance, and no quantitative models have been established. The results of a mouse steering study showed that speed was linearly limited by the path width and was limited by the square root of the viewable forward distance. While a baseline model showed an adjusted ${R}^{2} = {0.144}$ for predicting the speed, our best-fit model showed an adjusted ${R}^{2} = {0.975}$ with only one additional coefficient, demonstrating a comparatively high prediction accuracy for given viewable forward distances.
+
+Index Terms: H.5.2 [User Interfaces]: User Interfaces-Graphical user interfaces (GUI); H.5.m [Information Interfaces and Presentation]: Miscellaneous
+
+§ 1 INTRODUCTION
+
+The steering law $\left\lbrack {1,{14},{27}}\right\rbrack$ is a model for predicting the time and speed needed to pass through a constrained path, such as navigation through a hierarchical menu. In HCI, the validity of the steering law has typically been confirmed in a desktop environment, such as by maneuvering a mouse cursor or stylus tip through a path drawn on a display (e.g., $\left\lbrack {2,{31}}\right\rbrack )$ . Under such conditions, participants can view the entire path or a substantial portion of it before the trial begins, and they can thus determine the appropriate movement speed for a given path width.
+
+However, conditions under which users can view enough of a long path forward represent an ideal situation. Imagine a user operating a stylus pen in illustration software to select multiple objects with a lasso tool as shown in Fig. 1a. When a right-handed user moves the stylus rightwards, the viewable forward distance is limited due to occlusion by the user's hand. Therefore, to avoid selecting unwanted objects, the user has to move the stylus slowly. In contrast, if the user moves the stylus leftwards through the objects, the movement speed should be less limited because the viewable forward distance is not restricted.
+
+Therefore, for path-steering tasks, we assume that the viewable forward distance limits the movement speed just as the path width does. Although the effect of limited view sizes has been investigated several times in HCI studies $\left\lbrack {{10},{16},{21},{30}}\right\rbrack$, the main interest has been target selection. Furthermore, for steering tasks in which the view of the forward path is limited, we found few papers on the topic; one exception involves map navigation tasks with a magnified view (Fig. 1b, [18]).
+
+If we can derive models of the relationship between task conditions and outcomes - path width and viewable forward distance vs. movement speed - it would contribute to better understanding of human motor behavior. However, evaluating the model robustness of the steering law against an additional constraint (viewable forward distance) has not been investigated well; this motivated us to conduct this work. In our user study, we conducted a path-steering experiment with a mouse and determined the best-fit model from among candidate formulations. Our key contributions are as follows.
+
+(a) We provide empirical evidence that the viewable forward distance $S$ significantly affects the steering speed. We also justify why the relationship between $S$ and speed can be represented by the power law.
+
+(b) We develop refined models to predict movement speed on the basis of the path width and $S$ , which had significant main effects and interaction effects. Our model predicts the speed with an adjusted ${R}^{2} > {0.97}$ . We also show that the movement time while steering through a view-limited area can be predicted with ${R}^{2} > {0.97}$ .
+
+We also discuss other findings, e.g., why we obtained a conclusion opposite to those of previous studies: in peephole pointing, the speed increased as $S$ narrowed [21].
+
+§ 2 RELATED WORK
+
+§ 2.1 STEERING LAW MODELS
+
+Rashevsky [27, 28], Drury [14], and Accot and Zhai [1] proposed a mathematically equivalent model to predict the movement speed when passing through a constant-width path:
+
+$$
+V = \text{ const } \times W \tag{1}
+$$
+
+where $V$ is the speed and $W$ is the path width. Typically, participants are instructed to perform the task as quickly and accurately as possible. Hence, there are several interpretations of $V$: the possible maximum safe speed ${V}_{\max}$ in Rashevsky’s model, the average speed ${V}_{avg}$ over a given path length in Drury’s model (i.e., ${V}_{avg} = A/{MT}$, where the path length is $A$ and the movement time needed is ${MT}$), and the instantaneous speed at a given moment in Accot and Zhai's model.
+
+*e-mail: syamanak@yahoo-corp.jp
+
+The validity of this model $\left( {V \propto W}\right)$ has been empirically confirmed for (e.g.) car driving $\left\lbrack {8,{12},{15}}\right\rbrack$ , pen tablets $\left\lbrack {39}\right\rbrack$ , and mice [31,37]. Because ${V}_{\text{ avg }}$ is defined as $A/{MT}$ , the following equation for predicting ${MT}$ is also valid [14,20]:
+
+$$
+{MT} = b\left( {A/{V}_{avg}}\right) \tag{2}
+$$
+
+where $b$ is a constant (hereafter, $a$ to $e$ denote regression coefficients, with or without prime marks, as in ${b}^{\prime}$). Since ${V}_{\text{avg}} = \text{const} \times W$, Equation 2 can be written as follows.
+
+$$
+{MT} = b\frac{A}{\text{ const } \times W} = {b}^{\prime }\frac{A}{W}\text{ (let }{b}^{\prime } = b/\text{ const) } \tag{3}
+$$
+
+For predicting both ${V}_{\text{avg}}$ and ${MT}$, these no-intercept forms are theoretically valid, although the contribution of the intercept is often statistically significant [20]; the intercept versions are as follows.
+
+$$
+{V}_{\text{ avg }} = a + {bW} \tag{4}
+$$
+
+$$
+{MT} = a + b\left( {A/{V}_{avg}}\right) \tag{5}
+$$
+
+$$
+{MT} = a + b\left( {A/W}\right) \tag{6}
+$$
+
+The steering law models on ${V}_{\text{ avg }}$ and ${MT}$ hold when $W$ is narrow relative to the path length. Otherwise, users do not have to pay attention to path boundaries, in which case $W$ does not limit the speed $\left\lbrack {1,{20},{33}}\right\rbrack$ . For mouse steering tasks, $W$ limits the speed when the steering law difficulty $\left( {A/W}\right)$ is greater than ${10}\left\lbrack {{31},{33}}\right\rbrack$ . Hence, in our user study, we chose the range of $A/W$ ratios for the speed measurement area to include values less than and greater than 10 so that the priority for limiting the movement speed would change between $W$ and $S$ . That is, if $W$ is small and $A/W$ is greater than 10, we assume that $W$ strongly limits the speed, whereas if $W$ is sufficiently large such that the path width does not restrict the speed, we assume that $S$ restricts the speed more.
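The intercept forms above (Equations 4 and 6) can be illustrated with a minimal sketch; the coefficients below are hypothetical placeholders, not fitted values from any experiment.

```python
# Hedged sketch of the intercept forms of the steering law; all numeric
# coefficients here are illustrative placeholders, not fitted values.
def v_avg(W, a=50.0, b=8.0):
    """Equation 4: average speed (px/s) grows linearly with path width W (px)."""
    return a + b * W

def mt(A, W, a=0.2, b=0.004):
    """Equation 6: movement time (s) grows with the steering difficulty A/W."""
    return a + b * (A / W)

# A narrower path (larger A/W) yields a longer predicted movement time.
assert mt(800, 20) > mt(800, 40)
print(v_avg(20), mt(800, 20))
```

With these placeholder constants, a width of 20 px and a length of 800 px give a steering difficulty of $A/W = 40$, above the threshold of 10 at which $W$ is said to limit the speed.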
+
+§ 2.2 STEERING OPERATIONS WITH CORNERING
+
+To accurately predict the ${MT}$ for steering around a corner as shown in Fig. 1a and b, Pastel [26] refined the model by adding the Fitts' law difficulty [17] as follows.
+
+$$
+{MT} = a + b\frac{2A}{W} + {cID} \tag{7}
+$$
+
+where the first and second path segments before/after the corner have the same length ($A$) and same width ($W$), and ${ID}$ is the index of difficulty in Fitts' law (Pastel used the Shannon formulation [23]: ${ID} = {\log }_{2}\left( {A/W + 1}\right)$). Fitts’ law was originally a model for pointing to a target with width $W$ at distance $A$. Therefore, in addition to considering the difficulty of steering in order to pass through the entire path, this model also considers the difficulty of decelerating to turn at a corner. However, if users cannot see an approaching corner due to the restricted view, it is difficult to start to decelerate with appropriate timing.
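Pastel's formulation (Equation 7) can be sketched as follows; the Shannon ${ID}$ term follows the cited definition, while the coefficients $a$, $b$, and $c$ are illustrative placeholders.

```python
import math

# Sketch of Pastel's corner model (Equation 7): a steering-difficulty term
# plus a Fitts' law term for decelerating at the corner. The coefficients
# a, b, c below are illustrative placeholders, not reported values.
def fitts_id(A, W):
    """Shannon formulation of the index of difficulty [23]."""
    return math.log2(A / W + 1)

def mt_corner(A, W, a=0.1, b=0.003, c=0.15):
    """Equation 7: MT = a + b*(2A/W) + c*ID for two equal path segments."""
    return a + b * (2 * A / W) + c * fitts_id(A, W)

# Longer segments (at the same width) are predicted to take more time.
assert mt_corner(600, 19) > mt_corner(300, 19)
print(round(fitts_id(300, 19), 2))
```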
+
+§ 2.3 EFFECT OF VIEWABLE FORWARD DISTANCE ON TASK PERFORMANCE
+
+Peephole pointing $\left\lbrack {{10},{21}}\right\rbrack$ and magic lens pointing $\left\lbrack {30}\right\rbrack$ are examples of UI operations with restricted view sizes. The most popular task for peephole pointing is map navigation. When users want to see information about a landmark on a map application using a smartphone or PC, they first scroll the map (search phase) and then select an intended location (selection phase). Because Fitts' law [17] holds for 1D scrolling tasks done to capture a target into the viewing area [19], the ${MT}$ changes due to $S$ , and the total time can be predicted by the sum of the search and selection phases $\left\lbrack {{10},{30}}\right\rbrack$ . Models for peephole pointing have been validated with a mouse [10], spatially aware phone [30], handheld projector [16, 21], and touchscreen [41].
+
+Although the importance of user performance models for the peephole situation is explained in these papers, their main focus has unfortunately been on target selection. An exception that studied the effect of the viewable range in steering-law tasks was the work of Gutwin and Skopik [18], in which an area around the cursor was zoomed in on with radar-view tools (see Figure 3 in [18]). The cursor and view window were moved concurrently, and users moved the window to steer the cursor through a path.
+
+There are two differences between the work of Gutwin and Skopik [18] and ours. First, they fixed the window size of the radar view. Thus, as the zoom level increased, the corresponding viewable forward distance $S$ decreased. In contrast, in our intended tasks (Fig. 1), $S$ changes, but there is no zooming. They reported that the zoom level did not substantially change ${MT}$ , which is the opposite conclusion of that reached in the peephole pointing studies. In other words, a consistent effect of $S$ on ${MT}$ was not observed for steering and pointing; we revisit this point in the Discussion. The second difference is that the entire view was provided as a miniature view (see Fig. 1b), which assisted the timing for deceleration in preparation for the next corner.
+
+In summary, the quantitative relationship between steering performance (${V}_{\text{avg}}$ or ${MT}$) and $S$ is unclear. Yet, knowing this relationship would be beneficial for understanding user behavior in restricted-view situations, which are realistic for some tasks as described in the Introduction. We tackle this challenge through a user study.
+
+§ 3 MODEL DEVELOPMENT FOR UNCERTAIN CORNERING TIMING
+
+As the baseline model for predicting the movement speed, we test Equation 4 $\left( {{V}_{\text{ avg }} = a + {bW}}\right)$ on our experimental data. Also, to check the effect of an additional task parameter (here, $S$ ) on the estimated result $\left( {V}_{\text{ avg }}\right)$ , the simplest method is to add the additional factor and the interaction term between the two predictor variables (if the interaction term is significant) to the baseline model ${}^{1}$ . Thus, we test:
+
+$$
+{V}_{\text{ avg }} = a + {bW} + {cS} \tag{8}
+$$
+
+$$
+{V}_{\text{ avg }} = a + {bW} + {cS} + {dWS} \tag{9}
+$$
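Equations 8 and 9 amount to ordinary least squares without and with the $W \times S$ interaction column. The following sketch, on synthetic data generated from a hypothetical relation that does contain an interaction, illustrates why the interaction term matters when it is present.

```python
import numpy as np

# Sketch of Equations 8 and 9 as ordinary least squares, without and with
# the W x S interaction column. The data below are synthetic, generated
# from a hypothetical "true" relation that includes an interaction.
rng = np.random.default_rng(0)
W = rng.uniform(15, 55, 40)
S = rng.uniform(25, 400, 40)
V = 40.0 + 6.0 * W + 0.3 * S + 0.02 * W * S  # illustrative relation

X8 = np.column_stack([np.ones_like(W), W, S])          # Eq. 8
X9 = np.column_stack([np.ones_like(W), W, S, W * S])   # Eq. 9

def r_squared(X, y):
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return 1.0 - resid.var() / y.var()

# The model with the interaction term recovers the noiseless data exactly.
print(round(r_squared(X8, V), 3), round(r_squared(X9, V), 3))
```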
+
+We next discuss how users limit the speed as they prepare for a corner. As the first step towards deriving a more general model, in this study we fix the width of the second path segment ${W}_{2}$, and we give the experimental participants prior knowledge (PK) that ${W}_{2}$ is fixed. Nevertheless, the participants do not know the amplitude of the first path segment, and thus the corner appears at an uncertain time.
+
+We incorporate Pastel's model in which users must decelerate in the first path segment when approaching the corner to safely enter the second path segment [26]. As a more general case, the first and second path segments have different lengths and widths as shown in Fig. 2a. Pastel’s idea for integrating Fitts’ ${ID}$ is that the cursor must stop within the second path area, which has a width of ${W}_{2}$, after traveling over the first path segment. Hence, Equation 7 can be rewritten as:
+
+$$
+{MT} = a + b\frac{{A}_{1}}{{W}_{1}} + {cID} + d\frac{{A}_{2}}{{W}_{2}} \tag{10}
+$$
+
+where ${ID} = {\log }_{2}\left( {{A}_{1}/{W}_{2} + 1}\right)$ . That is, the movement amplitude for pointing is the distance of the first path and the target size is the width of the second path.
+
+${}^{1}$ This is explained in introductory statistics textbooks or websites, e.g., https://web.archive.org/web/20190617154140/https: //www.cscu.cornell.edu/news/statnews/stnews40.pdf.
+
+
+Figure 2: Steering operations with cornering in which the first and second path segments have different sizes. (a) No-mask and (b) masked conditions. Left and right masks were opaque in the study, rather than semi-transparent as shown here.
+
+As shown in Fig. 2b, when a corner has not yet been revealed from the forward mask, users can at least move over a viewable forward distance $S$. In the case that the corner is just beyond the viewable forward distance, users must adjust the speed as if the second path tolerance ranged from $S$ to $S + {W}_{2}$; the "target" center of the second path segment is located at a distance of $S + {0.5}{W}_{2}$ from the cursor position. Following Pastel, the time needed to perform this pointing motion is modeled by Fitts’ law with $S + {0.5}{W}_{2}$ as the target amplitude and ${W}_{2}$ as the width. Another definition of the amplitude in Fitts' law is the distance to the closer edge of the target, which holds empirically $\left\lbrack {3,4,6}\right\rbrack$. Using $S$ as the amplitude is thus a simpler choice that does not degrade model fitness.
+
+If ${W}_{2}$ is sufficiently wide, it is possible for users to move the cursor rapidly in the first path segment because they can appropriately decelerate as soon as they notice the corner. However, such a task is not considered to be a steering task in a constrained path, and it is necessary that the path widths $\left( {W}_{1}\right.$ and $\left. {W}_{2}\right)$ are not extremely wide for the steering law to hold $\left\lbrack {1,{14},{20}}\right\rbrack$ . In this study, we therefore set a reasonably narrow ${W}_{2}$ that necessitates careful movements to safely turn at the corner.
+
+Another model for pointing tasks is by Meyer et al. [25].
+
+$$
+{MT} = b\sqrt{A/W} \tag{11}
+$$
+
+where $A$ is the distance to the target center. While the mathematical equivalency between this power model and Fitts' logarithmic model is questioned by Rioul and Guiard [29], they agree that these models are well approximated.
+
+On the basis of this discussion, we assume that in practice the ${MT}$ for pointing to the second path segment, which might be just beyond the front mask and whose position ranges from $S$ to $S + {W}_{2}$, can be regressed as follows:
+
+$$
+{MT} = b\sqrt{S/{W}_{2}} \tag{12}
+$$
+
+Again, the original model of Meyer et al. uses the distance to the target center as the target amplitude $\left( {S + {0.5}{W}_{2}}\right)$ , but using $S$ as amplitude would also fit well. The average speed for this movement is defined as the distance to be traveled divided by the time needed for travel.
+
+$$
+{V}_{avg} = \frac{S}{MT} = \frac{S}{b\sqrt{S/{W}_{2}}} = {b}^{\prime }\sqrt{S \times {W}_{2}}\left( {\text{ let }{b}^{\prime } = 1/b}\right) \tag{13}
+$$
+
+In our experiment, to focus on the new factor $S$ , we fixed the value of ${W}_{2}$ . Equation 13 can thus be further simplified:
+
+$$
+{V}_{\text{ avg }} = {b}^{\prime }\sqrt{S \times {W}_{2}} = {b}^{\prime \prime }\sqrt{S}\left( {\text{ let }{b}^{\prime \prime } = {b}^{\prime }\sqrt{{W}_{2}}}\right) \tag{14}
+$$
+
+Figure 3: Visual stimuli used in the experiment. Left and right masks are opaque in study, rather than semi-transparent as shown here.
+
+In summary, we hypothesize that when the viewable forward distance is limited to $S$, users have to limit their speed in case the second path segment is just beyond the viewable forward distance, and this behavior is expected to be modeled as ${V}_{avg} = b\sqrt{S}$. While this model is justified based on existing theoretical and empirical evidence, we need to test the validity of our hypothesis empirically. We therefore conduct a path-steering study to evaluate the model combined with the steering law.
+
+§ 4 EXPERIMENT
+
+§ 4.1 PARTICIPANTS
+
+Twelve university students participated (3 females and 9 males; $M = 21.6$, $SD = 1.32$ years). All were right-handed and had normal or corrected-to-normal vision. Six were daily mouse users.
+
+§ 4.2 APPARATUS
+
+The PC was a Sony Vaio Z (2.1 GHz; 8-GB RAM; Windows 7). The display was manufactured by I-O DATA ($1920 \times 1080$ pixels; 527.04 mm $\times$ 296.46 mm; 60-Hz refresh rate). A Logitech optical mouse was used (model: G300r; 1000 dpi; 2.05-m cable) on a large mouse pad (42 cm $\times$ 30 cm). The experimental system was implemented with Hot Soup Processor 3.5 and was used in full-screen mode. The system read and processed input approximately 125 times per second.
+
+The cursor speed was set to the default: the pointer-speed slider was set to the center in the Control Panel. Pointer acceleration, or the Enhance Pointer Precision setting in Windows 7, was enabled to allow mouse operations to be performed with higher ecological validity [11]. Using pointer acceleration does not violate Fitts' law or the steering law $\left\lbrack {2,{36}}\right\rbrack$. The large mouse pad and long mouse cable were used to avoid clutching (repositioning of the mouse) during trials. This was to omit unwanted factors during model evaluation. If we had allowed clutching and the model fit was poor, we would not have been able to determine whether the poor fit was due to the model formulation or to clutching. No recognizable latency was reported by the participants.
+
+§ 4.3 TASK
+
+The participants had to click on the blue starting line, horizontally steer through a white path of width ${W}_{1}$, turn downwards at a corner, and then enter a green end area (Fig. 3). After that, an orange area labeled "Next" appeared on the left-side screen edge; entering this area started the next trial. Because the direction of cornering was not a focus of this study and because a previous study showed that the ${MT}$ for downward turns was shorter than that for upward turns [33], we chose downward movement for the second path segment to shorten the duration of the experiment.
+
+If the cursor entered the gray out-of-path areas, a beep sounded, and the trial was flagged as a steering error $E{R}_{\text{ steer }}$ and retried later. If the cursor did not deviate from the blue and white path segments, the trial was flagged as a success, and when entering the green end area a bell sounded (no clicking was needed). The left and right masks moved alongside the cursor.
+
+The participants were instructed to not make any errors and to move the cursor to the end area in as short a time as possible. In addition, we asked them to refrain from clutching while steering. If the participants accidentally clutched or if the mouse reached the right edge of the mouse pad, they were instructed to press the mouse button. Such trials were flagged as invalid and removed from the data analysis. If a steering error or an invalid trial was observed, a beep sounded, and the trial was presented again later in a randomized order.
+
+The measurement area of distance $A$ for recording ${MT}$ and speed is shown in Fig. 3c; that is, it ranges from (b) when the cursor reaches the left edge of the white path to (c) when the viewable range is one pixel away from the second path segment. While in this area, the participants did not know the position of the corner and thus had to move the cursor carefully to avoid deviating from the path. We measured the ${MT}$ spent in the measurement area, and the average speed, which was the dependent variable, was computed as ${V}_{\text{ avg }} = A/{MT}$ . Because in the measurement area the participants could not see the corner, the only operation required in this area was to steer through a constrained path with a restricted view.
+
+To avoid revealing the position of the corner before the trial began, (1) the cursor had to be moved to the "Next" area at the left edge of the screen at the end of every trial, and (2) the cursor had to stop at the blue starting line and could not move further rightwards until the line was clicked. We provided a run-up area of 50 pixels (Fig. 3a) because the speed when clicking on the blue starting area was zero. When the speed measurement began, the cursor was already moving at some speed.
+
+§ 4.4 DESIGN AND PROCEDURE
+
+This experiment had a $6 \times 5$ within-subjects design with the following independent variables and levels. We tested six $S$ conditions: 25, 50, 100, 200, and 400 pixels, plus the no-mask condition, which was included to measure baseline performance. The ${W}_{1}$ values were 19, 27, 37, 49, and 63 pixels. The movement time and speed were measured in the area shown in Fig. 3c. The average speed was computed as ${V}_{\text{avg}} = A/{MT}$ and used as the dependent variable.
+
+The width of the end area ${W}_{2}$ was fixed at 19 pixels. To prevent participants from noticing that the corner appeared at several fixed positions, we used various $A$ values, and in every trial the starting line had a random offset, ranging from 100 to 400 pixels, from the left-side screen edge. The y-coordinate of the white path center had a random offset ranging from $-150$ to 150 pixels from the screen center. The $A$ values for the measurement area were 300, 500, and 800 pixels and were not included as an independent variable. For the no-mask condition (baseline), the white-area distance was set to $A + 400 + {W}_{2}$ pixels (i.e., the same as in the largest $S$ condition).
+
+The ratio of $A/{W}_{1}$ ranged from 4.76 to 42.1. As discussed in Related Work, we chose the $A$ and ${W}_{1}$ values so that the $A/{W}_{1}$ ratio would range from less than to greater than 10 in order to change the motivation for limiting the speed between ${W}_{1}$ and $S$ . The ${W}_{2}$ value was then set to the smallest ${W}_{1}$ value to require a careful cornering motion.
+
+Among the ${6}_{S} \times {5}_{{W}_{1}} \times {3}_{A} = 90$ patterns, 10 trials were randomly selected as practice trials. The participants then completed three sessions of 90 data-collection trials. In total, we recorded ${90}_{\text{patterns}} \times {3}_{\text{sessions}} \times {12}_{\text{participants}} = 3240$ successful data points. The study took approximately 40 min per participant.
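As a concrete illustration, the trial-generation scheme described above (90 patterns per session in randomized order, with random start-line and path-center offsets to hide the corner position) can be sketched as follows. The condition levels and offset ranges come from Sections 4.3 and 4.4; the function and variable names are our own.

```python
import itertools
import random

# Conditions from Sec. 4.4; None encodes the no-mask (baseline) condition.
S_VALUES = [25, 50, 100, 200, 400, None]   # viewable forward distance (px)
W1_VALUES = [19, 27, 37, 49, 63]           # first-path widths (px)
A_VALUES = [300, 500, 800]                 # measurement-area distances (px)

def make_session(rng):
    """One session: all 6 x 5 x 3 = 90 patterns in random order, each with
    the random offsets that keep the corner position unpredictable."""
    trials = []
    for s, w1, a in itertools.product(S_VALUES, W1_VALUES, A_VALUES):
        trials.append({
            "S": s,
            "W1": w1,
            "A": a,
            "start_offset": rng.randint(100, 400),    # px from left screen edge
            "path_y_offset": rng.randint(-150, 150),  # px from screen center
        })
    rng.shuffle(trials)
    return trials

session = make_session(random.Random(42))
print(len(session))  # 90
```

In the actual study, trials flagged as steering errors or invalid (clutched) were re-queued at a random later position, which this sketch omits.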
+
+§ 4.5 RESULTS
+
+For the error rate analysis, we used non-parametric ANOVA with the Aligned Rank Transform (ART) [35] and Tukey's method for p-value adjustment in post-hoc comparisons. For the speed data analysis, we used repeated-measures ANOVA with the Bonferroni correction as the p-value adjustment method. Note that ANOVA is robust even when experimental data are non-normal [7].
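For readers unfamiliar with the repeated-measures ANOVA used for the speed data, the core F-statistic computation for a single within-subjects factor can be sketched from a subjects-by-conditions matrix as below. This is a generic textbook computation, not the authors' analysis pipeline: it omits the ART step used for error rates, the two-factor design, and all p-value adjustments.

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, n_conditions) array with one measurement per cell.
    Returns (F, df_effect, df_error)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # Variance explained by the condition (e.g., S level) and by subjects.
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj  # residual after removing subject effects
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error
```

With the paper's 12 participants and 6-level $S$ factor, the degrees of freedom come out to $(5, 55)$, matching the reported $F_{5,55}$ statistics.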
+
+Figure 4: Results for error rate over the entire trial of the experiment.
+
+§ 4.5.1 ERRORS
+
+Steering Error over the Entire Trial. The numbers of $E{R}_{\text{steer}}$ errors for the smaller to larger $S$ values were 144, 103, 41, 49, 37, and 46, respectively. The numbers of $E{R}_{\text{steer}}$ errors for narrower to wider ${W}_{1}$ values were 126, 85, 70, 74, and 65, respectively. The mean $E{R}_{\text{steer}}$ rate was 11.5%, which was slightly higher than that found in previous work on mouse steering tasks (9% [2]). Based on the experimenter's observations, the point with the highest error rate was at the corner.
+
+We observed main effects of $S$ (${F}_{5,55} = 7.830$, $p < 0.001$, ${\eta}_{p}^{2} = 0.42$) and ${W}_{1}$ (${F}_{4,44} = 3.529$, $p < 0.05$, ${\eta}_{p}^{2} = 0.24$) on the $E{R}_{\text{steer}}$ rate over the entire trial. Fig. 4 shows these results. Post-hoc tests showed significant differences between six pairs of $S$ values: (25, 100), (25, 200), (25, 400), (25, no-mask), (50, 100), and (50, 400), with $p < 0.05$ for all pairs. ${W}_{1} = 19$ and 49 also showed a significant difference ($p < 0.05$). No significant interaction was found between $S$ and ${W}_{1}$ (${F}_{20,220} = 1.539$, $p = 0.07055$, ${\eta}_{p}^{2} = 0.12$).
+
+We expected the $E{R}_{\text{steer}}$ rate to decrease for greater $S$ values because the risk of deviating from the first path segment is lower in those cases. However, the participants were also able to move the cursor more rapidly with greater $S$, as shown in Section 4.5.2. Rapid movement increased the possibility of deviating from the path, and thus Fig. 4 does not show a monotonic tendency.
+
+Steering Error in the Measurement Area. The numbers of $E{R}_{\text{steer}}$ errors for smaller to larger $S$ were 12, 9, 18, 15, 9, and 10, respectively. The numbers of $E{R}_{\text{steer}}$ errors for narrower to wider ${W}_{1}$ were 40, 13, 9, 9, and 2, respectively. The mean $E{R}_{\text{steer}}$ rate was 2.20%. We observed main effects of $S$ (${F}_{5,55} = 15.05$, $p < 0.001$, ${\eta}_{p}^{2} = 0.58$) and ${W}_{1}$ (${F}_{4,44} = 9.362$, $p < 0.05$, ${\eta}_{p}^{2} = 0.46$) on the $E{R}_{\text{steer}}$ rate in the measurement area. Fig. 5 shows these results. Post-hoc tests showed significant differences between eight pairs of $S$ values: (25, 100), (25, 400), (50, 100), (50, 200), (100, 200), (100, 400), (100, no-mask), and (200, 400), with $p < 0.05$ for all pairs. Also, the four pairs of ${W}_{1} = 19$ versus each of the other values showed significant differences with $p < 0.05$. The interaction of $S \times {W}_{1}$ was significant (${F}_{20,220} = 3.262$, $p < 0.001$, ${\eta}_{p}^{2} = 0.23$). Fig. 6 shows this result.
+
+§ 4.5.2 AVERAGE SPEED
+
+We found significant main effects of $S$ (${F}_{5,55} = 69.98$, $p < 0.001$, ${\eta}_{p}^{2} = 0.86$) and ${W}_{1}$ (${F}_{4,44} = 180.6$, $p < 0.001$, ${\eta}_{p}^{2} = 0.94$) on ${V}_{\text{avg}}$. The ${V}_{\text{avg}}$ values for $S = 25$ to 400 pixels and the no-mask condition were 154, 224, 320, 420, 520, and 545 pixels/sec, respectively. Post-hoc tests showed significant differences between all $S$ pairs (at least $p < 0.01$) except for one pair ($S = 400$ and the no-mask condition). The ${V}_{\text{avg}}$ values for ${W}_{1} = 19$ to 63 pixels were 240, 302, 365, 428, and 485 pixels/sec, respectively. Significant differences were found for all ${W}_{1}$ pairs ($p < 0.001$).
+
+Figure 5: Results for error rate in the measurement area of the experiment. (* $p < 0.05$; error bars show the SD across participants.)
+
+Figure 6: Interaction of error rate in the measurement area of the experiment. The six bars in each cluster show results for $S = 25$, 50, 100, 200, and 400 pixels and the no-mask condition, respectively, from left to right.
+
+The interaction of $S \times {W}_{1}$ was significant (${F}_{20,220} = 48.70$, $p < 0.001$, ${\eta}_{p}^{2} = 0.82$). As shown in Fig. 7a, we observed the following results.
+
+ * For $S = {200}$ and 400 pixels and the no-mask condition, ${V}_{\text{ avg }}$ decreased as ${W}_{1}$ decreased. This means that when the viewable forward distance is long, the path width can restrict the speed following the steering law.
+
+ * However, as $S$ decreased, the effect of ${W}_{1}$ in limiting the speed tended to be smaller, i.e., the ${V}_{\text{ avg }}$ differences became insignificant for more ${W}_{1}$ pairs. This indicates that when the viewable forward distance is short, the speed is already limited by $S$ , and thus the speed is not largely affected by changes in ${W}_{1}$ .
+
+Furthermore, Fig. 7b shows that for all five ${W}_{1}$ values, there were no significant differences between $S = {400}$ pixels and the no-mask condition. Therefore, in our experimental setting, $S = {400}$ pixels was sufficient to eliminate the effect of the masks on ${V}_{\text{ avg }}$ . If we had included longer $A$ values, however, it is possible that ${V}_{avg}$ for the no-mask condition would have been much higher. It is thus fair to avoid concluding that the steering performance for $S = {400}$ pixels is equivalent to that for the no-mask condition.
+
+To analyze the speed profiles in the measurement area, we resampled the cursor trajectory every 25 pixels to reduce noise in the raw data. Fig. 8a shows that speed changes for all five ${W}_{1}$ values are not evident for the narrowest $S$ , similarly to Fig. 7a. Then, as $S$ increases, the effects of ${W}_{1}$ on ${V}_{\text{ avg }}$ are exhibited more clearly (Fig. 8b and c). In the same manner, Fig. 8d-f show that the effects of $S$ become clearer as ${W}_{1}$ increases.
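The 25-pixel resampling used for the speed profiles can be sketched as below, assuming a monotonically increasing x-trajectory with per-sample timestamps; the authors' exact implementation is not specified in the text, so this is a plausible reconstruction rather than their code.

```python
import numpy as np

def speed_profile(xs, ts, step=25.0):
    """Resample a cursor trajectory every `step` pixels along x and return
    (sample_positions, speeds).
    xs: increasing x-positions in pixels; ts: corresponding timestamps in s."""
    xs = np.asarray(xs, dtype=float)
    ts = np.asarray(ts, dtype=float)
    grid = np.arange(xs[0], xs[-1] + 1e-9, step)
    t_at = np.interp(grid, xs, ts)   # time at which each grid position was crossed
    speeds = step / np.diff(t_at)    # px/s over each 25-px segment
    return grid[1:], speeds
```

Computing speed over fixed 25-px segments, rather than between raw ~125 Hz samples, is what reduces the noise mentioned above.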
+
+Fig. 9a and b show that the power model is more appropriate than the linear model for predicting ${V}_{\text{avg}}$ with $S$. Here, we merged the five ${W}_{1}$ values for the purposes of clearly illustrating the prediction accuracy when using $S$ and $\sqrt{S}$. Similarly, Fig. 9c shows a high correlation between ${V}_{\text{avg}}$ and ${W}_{1}$ in the restricted-view conditions; this plot merges the five $S$ conditions (excluding the no-mask condition).
+
+Figure 7: Interaction of $S \times {W}_{1}$ on ${V}_{\text{avg}}$. Pairs that are not significantly different are annotated; for all other pairs, $p < 0.05$.
+
+Figure 8: Speed profiles in the measurement area for $A = {800}$ . (a-c) Speeds for five ${W}_{1}$ values for a given $S$ . (d-f) Speeds for six $S$ values for a given ${W}_{1}$ .
+
+We also confirmed that the steering law in the form ${V}_{\text{avg}} = a + b{W}_{1}$ (Equation 4) fit reasonably well for each $S$, as shown in Fig. 10. This indicates that the steering law holds when a fixed $S$ is used. In addition, the slopes decreased as $S$ decreased; this means that a small $S$ prevents wider ${W}_{1}$ values from increasing ${V}_{\text{avg}}$.
+
+Fig. 11 shows that the power law (${V}_{\text{avg}} = a + b\sqrt{S}$, Equation 14 with an intercept) held for large ${W}_{1}$ values because the speed was mainly limited by $S$ rather than ${W}_{1}$ in this case. However, when ${W}_{1}$ was small (19 or 27 pixels), the model fits degraded to ${R}^{2} = 0.91$. This is because, as statistically shown in Fig. 7b, for small values of ${W}_{1}$, the speed was already saturated by ${W}_{1}$. This result supports the necessity of accounting for the interaction effect between $S$ and ${W}_{1}$ on the speed.
+
+When we used a single regression line of the steering law for $N = 25$ data points (${V}_{\text{avg}} = a + b{W}_{1}$ for ${5}_{S} \times {5}_{{W}_{1}}$, without the no-mask condition), the fit was poor: ${R}^{2} = 0.180$. This is because $S$ significantly changed ${V}_{\text{avg}}$. Because we have theoretically and empirically shown that both ${W}_{1}$ and $S$ limit the speed, we integrate the two factors in the models below.
+
+Table 1: Model-fitting results for predicting ${V}_{\text{avg}}$ for $N = 25$ data points (${5}_{S} \times {5}_{{W}_{1}}$), with adjusted ${R}^{2}$ (higher is better) and ${AIC}$ (lower is better) values. $a$ to $d$ are estimated coefficients with their significance levels (*** $p < 0.001$, ** $p < 0.01$, * $p < 0.05$; no asterisk for $p > 0.05$) and 95% CIs [lower, upper].
+
+| Model | $a$ | $b$ | $c$ | $d$ | Adj. ${R}^{2}$ | ${AIC}$ |
+|---|---|---|---|---|---|---|
+| (#1) $a + b{W}_{1}$ | 162 [-2.20, 327] | 4.24* [0.331, 8.15] | X | X | 0.144 | 327 |
+| (#2) $a + bS + c{W}_{1}$ | 20.5 [-66.8, 108] | 0.914*** [0.695, 1.13] | 4.24*** [2.33, 6.16] | X | 0.796 | 292 |
+| (#3) $a + bS + c{W}_{1} + d{W}_{1}S$ | 167*** [87.2, 248] | -0.0342 [-0.423, 0.355] | 0.473 [-1.44, 2.38] | 0.0243*** [0.0151, 0.0336] | 0.912 | 272 |
+| (#4) $a + b{W}_{1}S$ | 182*** [155, 208] | 0.0241*** [0.0211, 0.0272] | X | X | 0.916 | 269 |
+| (#5) $a + b{W}_{1} + c{W}_{1}S$ | 162*** [110, 214] | 0.590 [-0.746, 1.93] | 0.0236*** [0.0202, 0.0269] | X | 0.916 | 270 |
+| (#6) $a + b\sqrt{S} + c{W}_{1}$ | -111* [-197, -24.6] | 24.3*** [19.6, 29.0] | 4.24*** [2.63, 5.86] | X | 0.855 | 283 |
+| (#7) $a + b\sqrt{S} + c{W}_{1} + d{W}_{1}\sqrt{S}$ | 162*** [95.3, 229] | 0.00463 [-5.36, 5.37] | -2.76** [-4.35, -1.17] | 0.622*** [0.495, 0.750] | 0.974 | 241 |
+| (#8) $a + b{W}_{1}\sqrt{S}$ | 95.5*** [62.8, 128] | 0.529*** [0.467, 0.592] | X | X | 0.927 | 265 |
+| (#9) $a + b{W}_{1} + c{W}_{1}\sqrt{S}$ | 162*** [134, 190] | -2.76*** [-3.60, -1.91] | 0.623*** [0.576, 0.669] | X | 0.975 | 239 |
+
+Figure 9: Model-fitting results on ${V}_{\text{avg}}$ with (a) $S$, (b) $\sqrt{S}$, and (c) ${W}_{1}$.
+
+Figure 10: Model fitting results of ${V}_{\text{ avg }} = a + b{W}_{1}$ for each $S$ .
+
+§ 4.6 MODEL FITNESS COMPARISON
+
+To statistically determine the best model, we compared the adjusted ${R}^{2}$ and Akaike information criterion (AIC) [5]. The AIC balances the number of regression coefficients against the goodness of fit to identify the comparatively best model: (a) a model with a lower ${AIC}$ value is better; (b) a model with ${AIC} \leq ({AIC}_{\text{minimum}} + 2)$ may be as good as the model with the minimum ${AIC}$; and (c) a model with ${AIC} \geq ({AIC}_{\text{minimum}} + 10)$ can safely be rejected [9].
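For a least-squares fit with Gaussian errors, both quantities can be computed directly from the residual sum of squares. The helper below is our own sketch (not the paper's code); it uses the common AIC form $n \ln(RSS/n) + 2k$ with additive constants dropped, which is harmless because only AIC differences between models fitted on the same data matter.

```python
import numpy as np

def fit_ols(y, X):
    """Least-squares fit y ~ X. Returns (adjusted R^2, AIC).
    AIC = n*ln(RSS/n) + 2k, constant terms dropped."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    tss = float(((y - y.mean()) ** 2).sum())
    adj_r2 = 1.0 - (rss / (n - k)) / (tss / (n - 1))
    return adj_r2, n * np.log(rss / n) + 2 * k

def aic_verdicts(aics):
    """Rules of thumb (a)-(c) above: within 2 of the minimum is competitive;
    10 or more above the minimum is safely rejected."""
    best = min(aics)
    return ["competitive" if a - best <= 2
            else "rejected" if a - best >= 10
            else "weak" for a in aics]

# Applied to the AIC values of Models #9, #7, and #8 from Table 1:
print(aic_verdicts([239, 241, 265]))
```

On these Table 1 values, Models #9 and #7 come out competitive and Model #8 is rejected, matching the comparison in Section 4.6.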
+
+Table 1 lists the model-fitting results. Model #1 is the baseline steering law. Models #2 to #5 add the new factor ($S$) as a linear term. Because we found a significant main effect of $S$ on ${V}_{\text{avg}}$, Model #2, which adds $S$ to the baseline, improved the fit compared with Model #1. Model #3 adds the interaction factor $S \times {W}_{1}$. Because this term was significant, this model fits better than Model #2.
+
+Models #2 and #3 are simply derived following the statistical analysis method. Note that in Model #3, the main effects of $S$ and ${W}_{1}$ were not significant $\left( {p > {0.05}}\right)$ . This means that the effects of $S$ and ${W}_{1}$ on increasing ${V}_{avg}$ can be captured by the interaction factor. Therefore, we also tested the fitness of Model #4. Model #5 was tested for the sake of completeness and consistency with the power model (described below).
+
+Models #6 to #9 are power functions based on Equation 14 with an intercept. Models #6 and #7 are derived similarly to the linear ones. In Model #7, $\sqrt{S}$ was not a significant contributor $\left( {p = {0.999}}\right)$ ; Model #9 tests the fit after eliminating this term. For consistent comparison with the linear models, we also tested an interaction-factor-only model (#8). Model #5 was also tested as a comparison with #9.
+
+Figure 11: Model fitting results of ${V}_{\text{ avg }} = a + b\sqrt{S}$ for each ${W}_{1}$ .
+
+Models #7 and #9 are the best-fit models according to their AIC values; the difference between their AIC values was 1.997 ($< 2$). Furthermore, the difference in their adjusted ${R}^{2}$ values was less than 1%. If the prediction accuracy is not significantly different, a model with fewer free parameters has better utility; we thus recommend using Model #9 to predict ${V}_{\text{avg}}$.
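Because Model #9 is linear in its coefficients, it can be fitted by ordinary least squares on a design matrix with columns $[1, W_1, W_1\sqrt{S}]$. The sketch below demonstrates the procedure on synthetic, noise-free data generated from the Table 1 coefficients ($a = 162$, $b = -2.76$, $c = 0.623$); it recovers them exactly, but it is an illustration of the method, not the paper's analysis.

```python
import numpy as np

S_vals = [25, 50, 100, 200, 400]     # px (no-mask condition excluded)
W1_vals = [19, 27, 37, 49, 63]       # px
a_true, b_true, c_true = 162.0, -2.76, 0.623   # Table 1, Model #9

# Build the 25-point design matrix and synthetic V_avg observations.
X, y = [], []
for s in S_vals:
    for w1 in W1_vals:
        X.append([1.0, w1, w1 * np.sqrt(s)])
        y.append(a_true + b_true * w1 + c_true * w1 * np.sqrt(s))

# Ordinary least squares: V_avg = a + b*W1 + c*W1*sqrt(S)
(a, b, c), *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
print(round(a, 2), round(b, 2), round(c, 3))
```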
+
+By applying Model #9 to Equation 5 (i.e., ${MT} = a + b\left\lbrack {A/{V}_{\text{ avg }}}\right\rbrack$ ), we can also predict ${MT}$ for the measurement area as follows.
+
+$$
+{MT} = a + b\frac{A}{c + d{W}_{1} + e{W}_{1}\sqrt{S}}
+$$
+
+$$
+= a + \frac{b}{e} \times \frac{A}{c/e + {W}_{1}\left( {d/e + \sqrt{S}}\right) }
+$$
+
+$$
+= a + {b}^{\prime }\frac{A}{{c}^{\prime } + {W}_{1}\left( {{d}^{\prime } + \sqrt{S}}\right) } \tag{15}
+$$
+
+For $N = 75$ data points ($= {5}_{S} \times {5}_{{W}_{1}} \times {3}_{A}$), the baseline steering law model (Equation 6: ${MT} = a + b\left\lbrack {A/{W}_{1}}\right\rbrack$) showed a poor fit (Fig. 12a). Note that because we discuss performance in the measurement area, the second-path steering difficulty is irrelevant to the ${MT}$ prediction presented here; Equation 10, ${MT} = a + b\left( {{A}_{1}/{W}_{1}}\right) + c \cdot {ID} + d\left( {{A}_{2}/{W}_{2}}\right)$, is not used. Our model, in which the new steering difficulty is $A/\left( {{c}^{\prime} + {W}_{1}\left\lbrack {{d}^{\prime} + \sqrt{S}}\right\rbrack }\right)$, predicts ${MT}$ more accurately (Fig. 12b).
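To make the chain from speed to time concrete, the sketch below evaluates Equation 15's structure by plugging Model #9 (coefficients from Table 1) into Equation 5. The intercept and slope of Equation 5 (here $a = 0$ s, $b = 1$) are hypothetical placeholders, since the fitted Equation 15 constants are not listed in the text.

```python
import math

def v_avg_model9(W1, S, a=162.0, b=-2.76, c=0.623):
    """Model #9 speed prediction (Table 1 coefficients), px/s."""
    return a + b * W1 + c * W1 * math.sqrt(S)

def mt_measurement_area(A, W1, S, a=0.0, b=1.0):
    """Equation 5 with Model #9 as the speed term (Equation 15's structure).
    a, b are placeholder regression constants, not fitted values."""
    return a + b * A / v_avg_model9(W1, S)

# MT grows with distance A and shrinks as the viewable distance S grows:
print(round(mt_measurement_area(800, 37, 100), 2), "s at S = 100 px")
print(round(mt_measurement_area(800, 37, 400), 2), "s at S = 400 px")
```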
+
+§ 5 DISCUSSION
+
+§ 5.1 FINDINGS AND RESULT VALIDITY
+
+In two previous studies on peephole pointing, the error rate was smallest for the smallest $S$ and greatest for the largest $S\left\lbrack {{10},{21}}\right\rbrack$ . It was assumed that the participants were overly careful with small $S$ values and overly relaxed with large $S$ values when selecting the target. In contrast, we observed that the error rates tended to be higher for smaller $S$ values. This inconsistency could be because of differing error definitions. In the previous two studies, conventional pointing misses were considered errors, and thus overshooting the target while aiming was permitted, whereas in our study, overshooting at the end of the first path segment was not allowed (i.e., cornering misses were considered errors).
+
+Figure 12: (a) Model fitting for ${MT}$ using the baseline formulation (Equation 6, ${MT} = a + b\left( {A/{W}_{1}}\right)$ ). (b) Model fitting for ${MT}$ using our proposed formulation (Equation 15). (c) Predicting ${MT}$ as mentioned in the Discussion.
+
+Kaufmann and Ahlström also showed that the movement speed tended to decrease as $S$ increased, both with and without prior knowledge (PK) of the target position [21]. They explained this as follows: "With small peepholes, participants were eager to uncover the target location by scanning the workspace as quickly as possible; accepting that they would overshoot." Hence, prior to our work, it was thought in the HCI field that for peephole interactions the speed decreases as $S$ increases. In our experiment, users could not perform such "quick scanning" because they had to safely turn at the corner.
+
+In previous studies on peephole pointing $\left\lbrack {{10},{21}}\right\rbrack$ , only the targets were drawn on the workspace, and thus, participants could easily recognize targets via quick scanning. In contrast, in peephole steering such as map navigation [18], quick scanning cannot be performed; if users lose sight of the current road from the peephole window, they have to find the previous road from among several roads and then return to the navigation task. Therefore, we present a new finding on peephole interactions: a larger $S$ increases the speed in overshoot-prohibited conditions [our results] but decreases the speed in overshoot-permitted conditions $\left\lbrack {{10},{21}}\right\rbrack$ . This finding contributes to better understanding of users' strategies in the peephole situation.
+
+§ 5.2 OTHER EXPERIMENTAL DESIGN CHOICES
+
+Our research question regarded the effect of viewable forward distance on the path-steering speed when users have to react to a corner. An alternative is to use a dead end: users must steer through a path and then stop in front of a wall without overshooting. This is called a targeted-steering task $\left\lbrack {{13},{22}}\right\rbrack$ , in which the stopping motion is modeled by Fitts' law. We thus assume that the appropriate model for this task will be similar to our proposed models, but this requires further experiments.
+
+Using only the right-side mask is another possibility for the experiment. We included the left-side mask for consistency with previous works on peephole pointing. Regarding a lasso selection task using a direct input pen tablet (Fig. 1a), the width of the forward mask ${W}_{\text{ mask }}$ would affect the speed because the user’s hand would occlude the forward path, but beyond the hand, the path would be visible.
+
+While more experimental designs are possible and the resultant speed would change, our experimental data were internally valid. Thus, the fact that "other experimental designs are possible" does not undermine the validity of our models. If user performance under new conditions were to yield different conclusions, that would provide further contributions to the field.
+
+§ 5.3 IMPLICATIONS FOR HCI-RELATED TASKS
+
+Based on our findings, for mouse steering tasks, the speed and ${MT}$ should change with $S$, but related work on map navigation with a radar view showed no clear changes in ${MT}$ [18]. Currently, we have no answer as to whether this inconsistency arises because that study used a miniature view to show the entire map and/or used magnification, or from inaccuracies or blind spots in our own models. The interaction between $S$ and magnification levels on ${V}_{\text{avg}}$ is also unclear and thus should be investigated. As demonstrated in this discussion, our work motivates rethinking the validity of existing work and opens up new topics to be studied.
+
+Our models could be beneficial in reducing the effort needed to measure users' operation speed for given screen sizes. Once test users operate a map application, as in Gutwin and Skopik's study [18], with several screen sizes and the resultant ${V}_{avg}$ values are recorded, we can predict ${V}_{avg}$ for other screen sizes. For example, when we used only $S = 25$ and 400 pixels ($N = {2}_{S} \times {5}_{{W}_{1}} = 10$ data points), Model #9 yielded $a = 130$, $b = -2.47$, and $c = 0.622$ with ${R}^{2} = 0.998$. Using these constants, we can predict ${V}_{\text{avg}}$ for $S = 50$, 100, and 200 pixels ($N = 15$), with ${R}^{2} > 0.96$ for predicted vs. observed ${V}_{\text{avg}}$ values (Fig. 12c). Hence, depending on the new screen sizes, the ${V}_{\text{avg}}$ at which test users can perform can be estimated accurately. Importantly, because we showed that the relationship between $S$ and ${V}_{avg}$ was not linear and that the interaction of $S \times {W}_{1}$ was significant, it is difficult to accurately predict ${V}_{\text{avg}}$ for a given $S$ and ${W}_{1}$ without our proposed model.
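The extrapolation described above can be reproduced mechanically from the reported two-condition fit ($a = 130$, $b = -2.47$, $c = 0.622$). In the sketch below, the ${W}_{1} = 37$ px value is one of the experiment's widths, chosen arbitrarily for illustration; the printed speeds are our own evaluation of the formula, not numbers taken from the paper.

```python
import math

a, b, c = 130.0, -2.47, 0.622   # Model #9 fitted on S = 25 and 400 px only

def predict_v_avg(W1, S):
    """Predicted average speed (px/s) for an untested viewable distance S."""
    return a + b * W1 + c * W1 * math.sqrt(S)

for s in (50, 100, 200):        # the held-out S conditions
    print(f"S = {s} px: {predict_v_avg(37, s):.0f} px/s")
```

As a sanity check, evaluating the same formula at $S = 25$ gives roughly 154 px/s for ${W}_{1} = 37$, close to the observed mean speed for that condition.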
+
+§ 5.4 LIMITATIONS AND FUTURE WORK
+
+Our results and discussion are somewhat limited due to the task conditions used in the study, e.g., we did not test circular paths [2] or different curvatures [38]. Also, $A$ ranged from 300 to 800 pixels, but if $A$ is too short or ${W}_{1}$ is too wide, the task finishes before the speed reaches its potential maximum value $\left\lbrack {{32},{34}}\right\rbrack$ . We limited these values to not be extremely short or wide in order to observe the effects of $S$ . Investigating valid ranges of $S,{W}_{1}$ , and $A$ at which our models hold is to be included in our future work.
+
+The width of the second path segment ${W}_{2}$ was fixed at 19 pixels and thus was not dealt with as an independent variable. In the derivation of Equation 14 ( ${V}_{\text{ avg }} = {b}^{\prime }\sqrt{S \times {W}_{2}} = {b}^{\prime \prime }\sqrt{S}$ ), ${V}_{\text{ avg }}$ was originally assumed to increase with ${W}_{2}$ . This may be true: if users know that ${W}_{2}$ is wide, such as 200 pixels, the necessity for quick deceleration would decrease. However, this depends on whether users have prior knowledge ${PK}$ of ${W}_{2}$ . If users do not know ${W}_{2}$ , they have to begin to decelerate as soon as part of the end area is revealed in preparation for a narrow ${W}_{2}$ . Hence, we have to account for human online response skills, i.e., immediate hand-movement correction in response to a given visual stimulus [24,40].
+
+${PK}$ of the position or timing of when a corner appears would also affect the speed. We tested only a no- ${PK}$ condition regarding the corner position, which corresponds to conditions in which users do not know the layout of objects in lassoing tasks. In contrast, if users know the layout, control is possible at much higher speeds while avoiding unintended selection. A kind of medium-PK is also possible in our experiment. That is, although the participants did not know $A$ beforehand, if the second path segment did not appear on the left half of the screen, the participants realized that they needed to decelerate because the corner must have been in the remaining space. Employing a complete no- ${PK}$ condition to evaluate peephole pointing and steering would therefore be difficult for desktop environments, although this technical limitation has not been explicitly mentioned in the literature $\left\lbrack {{10},{16},{21},{30}}\right\rbrack$ .
+
+§ 6 CONCLUSION
+
+We presented an experiment to investigate the effects of viewable forward distance $S$ on path-steering speeds. In the path-steering tasks with cornering at an uncertain time, the relationship between $S$ and the speed followed a power law (square root), and the interaction between path width ${W}_{1}$ and $S$ was accounted for to accurately predict the speed. The best-fit model showed an adjusted ${R}^{2} = {0.975}$ with only one additional constant added to the baseline steering law, which also yielded an accurate model for task completion times. Interestingly, opposite conclusions were derived depending on the task requirements; a shorter $S$ increased the speed in peephole pointing $\left\lbrack {{10},{21}}\right\rbrack$ but decreased it in our path-steering experiment. Although few studies have focused on the effects of $S$ on user performance, the importance of this topic will increase with the growth of devices with limited view areas, such as smartphones and tablets, and thus, we hope this topic is revisited by many more researchers in the future.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..adf607a70921009a349810c6a29d5076fbcf359d
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,385 @@
+# Part-Based 3D Face Morphable Model with Anthropometric Local Control
+
+Donya Ghafourzadeh*
+
+Ubisoft La Forge, Montreal, Canada
+
+Cyrus Rahgoshay ${}^{ \dagger }$
+
+Ubisoft La Forge, Montreal, Canada
+
+Andre Beauchamp ${}^{§}$
+
+Ubisoft La Forge, Montreal, Canada
+
+Adeline Aubame ${}^{¶}$
+
+Ubisoft La Forge, Montreal, Canada
+
+Tiberiu Popa ${}^{\|}$
+
+Concordia University, Montreal, Canada
+
+Sahel Fallahdoust ${}^{ \ddagger }$
+
+Ubisoft La Forge, Montreal, Canada
+
+Eric Paquette**
+
+École de technologie supérieure, Montreal, Canada
+
+Figure 1: Our 3D facial morphable model workflow. In an offline stage, we extract PCA eigenvectors and select the best ones. We also select the best subset of anthropometric measurements. The relationship between the eigenvectors and measurements is encoded in a mapping matrix. All of these are used in the online stage, where the mapper collects user-prescribed anthropometric measurement values and applies the mapping matrices to reconstruct the parts. The last step produces the edited face through a smooth blending of the parts.
+
+## Abstract
+
+We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMMs suffer from a lack of local control. We thus create a 3DMM by combining local part-based 3DMMs for the eyes, nose, mouth, ears, and facial mask regions. Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMMs to ensure that the combined 3DMM is expressive while allowing accurate reconstruction. The editing controls we provide to the user are intuitive, as they are extracted from anthropometric measurements found in the literature. Out of a large set of possible anthropometric measurements, we filter those that have meaningful generative power given the face data set. We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans. Our part-based 3DMM is compact yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and allows the user intuitive local control.
+
+Index Terms: Computing methodologies—Computer graphics—Shape modeling—Mesh models
+
+## 1 INTRODUCTION
+
+The authoring of realistic 3D faces with intuitive controls is used in a broad range of computer graphics applications, such as video games, person identification, facial plastic surgery, and virtual reality. This process is particularly time-consuming, given the intricate details found in the eyes, nose, mouth, and ears. Consequently, it would be convenient to use high-level controls, such as anthropometric measurements, to edit human-like character heads.
+
+Many methods use 3D morphable face models (3DMM) for animation (blend shapes), face capture, and face editing. Even though face animation concerns are important, our work focuses on the editing of facial meshes. 3DMMs are typically constructed by computing a Principal Component Analysis (PCA) on a data set of scans sharing the same mesh topology. New 3D faces are generated by changing the relative weights of the individual eigenvectors. These methods are popular due to the simplicity and efficiency of the approach, but suffer from two fundamental limitations: they impose global control on the newly generated meshes, making it impossible to edit a localized region of the face, and the control mechanism is unintuitive. Some methods compute localized 3DMMs, but those focus on facial animation instead of face modeling. We compared our approach to previous works relying on facial animation and found that their automatic localized basis construction works well for animation purposes (considering a data set composed of animations of a single person), but performs worse than our approach for modeling purposes (considering a data set made of neutral faces from different persons).
+
+We propose an approach to construct realistic 3DMMs. We increase the controllability of our faces by segmenting them into independent sub-regions and selecting the most dominant eigenvectors per part. Furthermore, we rely on facial anthropometric measurements to derive useful controls to use in our 3DMM for editing faces. We propose a measurement selection technique to bind the essential measurements to the 3DMM eigenvectors. Our approach allows the user to edit faces by adjusting the facial parts using sliders controlling the values of anthropometric measurements. The measurements are mapped to eigenvector weights, allowing us to compute the individual parts matching the values selected by the user. Finally, the reconstructed parts are seamlessly blended together to generate the desired 3D face.
+
+---
+
+*e-mail: donya.ghafourzadeh@ubisoft.com
+
+${}^{ \dagger }$ e-mail: cyrus.rahgoshay@ubisoft.com
+
+${}^{ \ddagger }$ e-mail: sahel.fallahdoust@ubisoft.com
+
+${}^{§}$ e-mail: andre.beauchamp@ubisoft.com
+
+${}^{¶}$ e-mail: adeline.aubame@ubisoft.com
+
+${}^{\|}$ e-mail: tiberiu.popa@concordia.ca
+
+**e-mail: eric.paquette@etsmtl.ca
+
+Graphics Interface Conference 2020, 28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+---
+
+## 2 RELATED WORK
+
+3D morphable models are powerful statistical models widely used in many applications in Computer Vision and Computer Graphics. One of the most well-known previous works in this regard is that of Blanz and Vetter [3]. Their pioneering work proposes a model using PCA on face scans. Although they propose a multi-segment model and decompose a face into four parts to augment expressiveness, the PCA decomposition is computed globally on the whole face. Other global PCA methods have been proposed [1, 4, 5, 8, 17, 18]. A downside of global PCA-based methods is that they exhibit global support: when we adjust the eye, the nose may also undergo undesirable changes. Another downside is a lack of intuitive user control for face editing. While the eigenvectors are good at extracting the dominant modes of variation of the data, they offer little intuitive interpretation.
+
+To address the former problem, local models have been proposed. They segment the face into independent sub-regions and select the most dominant eigenvectors per part. Tena et al. [26] propose a method to create localized clustered PCA models for animation. They select the location of the basis using spectral clustering on the geodesic distance and a correlation of vertex displacements considering variations in the expressions. Their method requires a manual step to adjust the boundaries of the segments, making it somewhat similar to ours, where the parts are user-specified. Chi et al. [9] adaptively segment the face model into soft regions based on user interaction and coherency coefficients. Afterwards, they estimate the blending weights which satisfy the user constraints, as well as the spatio-temporal properties of the face set. Here too, the required user intervention renders the segmentation somewhat similar to our user-provided segments. SPLOCS [19] uses sparse matrix decomposition to produce localized deformations from an animated mesh sequence. It selects the basis in a greedy fashion using vertex displacements in Euclidean coordinates. We noticed that when considering variation in identity instead of variation in expression, the greedy selection leads to bases which are far less local than those obtained from both our method and Tena et al.'s [26]. These papers address facial animation instead of face modeling and therefore assume large, yet localized, deformations caused by facial expressions, which differ from our context, where each face is globally significantly different from the others.
+
+Like Tena et al. [26], Cao et al. [7] segment the face with the same spectral clustering, followed by manual adjustment. While their method focuses mostly on expression, they also provide some identity modeling, as they rely on the FaceWarehouse [8] global model, which they decompose using the segments defined by spectral clustering. In their case, the goal is to adapt a 3DMM to a face from a video feed, in real time. While their method works remarkably well for the real-time "virtual makeup" application, it lags behind ours in terms of providing a very detailed facial model, and it does not support a face editing workflow.
+
+Other papers supplement decomposition approaches with the extraction of fine details, allowing reconstruction of a faithful facial model [6, 13, 22]. The major problem with these approaches is that they work for a specific person and do not provide editing capabilities. The Phace [14] method allows the user to edit fat or muscle maps in texture space on the face. While this provides a physically-based adjustment, the control is implicit: the user modifies the texture, and then the system simulates muscles and fat to obtain the result.
+
+Wu et al. [27] propose an anatomically-constrained local deformation model to improve the fidelity of monocular facial animation. Their model uses 1000 overlapping parts, and then decouples the rigid pose of the part from its non-rigid deformation. While this approach works particularly well for reconstruction, the parts are too small for editing semantic face parts such as the nose or the eyes.
+
+Contrary to the methods described thus far, the Allen et al. [1] and BodyTalk [25] methods greatly facilitate editing by mapping intuitive features to modifications of global 3DMM eigenvector weights. In particular, BodyTalk [25] relates transformations of the meshes to keywords such as "fit" and "sturdy". While the mapping between the words and the deformations is not perfect, it still makes it reasonably intuitive to edit the mesh of the body. One problem with this method is that it provides words for bodies, not faces. A second major problem is the inability to make local adjustments: adjustments that increase the length of the legs also result in changes to other regions, such as the torso and arms. In contrast, our approach aims at providing local control in the editing.
+
+A downside of global PCA-based methods is that they exhibit global support: adjusting parameters to change one part has unwanted effects on other unrelated parts. To address this problem, we segment the face into independent sub-regions and provide a process to select the best set of eigenvectors, given a target number of eigenvectors. Methods that segment the face in sub-regions target facial animation instead of modeling. We will demonstrate that our approach is better-suited to the task of face editing than these methods. Another problem with most of the previous related works is that they do not allow facial model editing through the adjustment of objective measurements. In contrast, our method relies on anthropometric measurements used as controls for editing. Furthermore, we propose a process to select the right set of anthropometric measurements for each facial part.
+
+## 3 OVERVIEW
+
+In this paper, we introduce a pipeline for constructing a 3DMM. We separate the face into regions and compute independent PCA decomposition on each region. We then combine the per-region 3DMMs, paying particular attention to the selection of the most dominant eigenvectors across the eigenvectors of the different regions. While the eigenvectors are good at extracting the dominant data variation modes, they provide weak intuitive interpretation. We thus use anthropometric measurements to provide human understandable adjustments of the face. The reconstruction from the measurements is done through a mapping from the measurements to the weights that need to be applied to each eigenvector. From the set of measurements we extracted from our survey of the literature, we selected a subset which resulted in the least reconstruction error. An overview of our approach can be found in Fig. 1.
+
+The remainder of this paper is organized as follows: Sec. 4 describes how 3DMMs are constructed, including face decomposition and selection of the most dominant eigenvectors. Afterwards, we discuss how to reconstruct a face by smooth blending of different facial parts (Sec. 5). In Sec. 6, the selection of the anthropometric measurements, and the mapping between these measurements and the PCA eigenvectors are discussed. We demonstrate the results in Sec. 7, and discuss them in Sec. 8.
+
+## 4 3D MORPHABLE FACE MODEL
+
+We employ PCA on a data set of faces to construct our 3DMMs. All faces are assumed to share a common mesh topology, with vertices in semantic correspondence. We propose to segment the face into different parts in order to focus the decomposition on a part-by-part basis instead of computing the PCA decomposition on the whole face. We compute the decomposition separately for the male and female subsets. As shown in Fig. 1, we decompose the face into five parts: eyes, nose, mouth, ears, and what we refer to as the facial mask (which groups the remaining areas, such as the cheeks, jaw, forehead, and chin). We further discuss this design choice in Sec. 8.2. This face decomposition allows us to have eigenvectors for each part. The geometry of the facial parts is represented with a shape-vector ${S}_{d} = \left\lbrack {{V}_{1}\ldots {V}_{{n}_{v}}}\right\rbrack \in {R}^{3{n}_{v}}$ , where ${n}_{v}$ is the number of vertices of the $d$ th facial part, $d \in \{ 1,\ldots ,5\}$ , and ${V}_{i} = \left\lbrack {{x}_{i}{y}_{i}{z}_{i}}\right\rbrack \in {R}^{3}$ holds the $x$ , $y$ , and $z$ coordinates of the $i$ th vertex. After applying PCA, each facial part $d$ is reconstructed as:
+
+$$
+{S}_{d}^{\prime } = \overline{{S}_{d}} + \mathop{\sum }\limits_{{j = 1}}^{{n}_{e}}{P}_{j}{b}_{j} \tag{1}
+$$
+
+where $\overline{{S}_{d}}$ is the mean shape of the $d$ th facial part, ${n}_{e}$ is its number of eigenvectors, ${P}_{j}$ is an eigenvector of size $3{n}_{v}$ , $b$ is an ${n}_{e} \times 1$ vector containing the weights of the corresponding eigenvectors, and ${S}_{d}^{\prime }$ is the reconstruction, which is an approximation when not all eigenvectors are used.
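
In practice, Eq. 1 is a single matrix-vector product. The sketch below (NumPy, with hypothetical part sizes) reconstructs one part from its mean shape, column-stacked eigenvectors, and a weight vector:

```python
import numpy as np

def reconstruct_part(mean_shape, eigvecs, weights):
    """Eq. 1: S'_d = mean(S_d) + sum_j P_j * b_j, as one matrix-vector product.

    mean_shape: (3*n_v,)     mean shape of the part
    eigvecs:    (3*n_v, n_e) column-stacked eigenvectors P_j
    weights:    (n_e,)       eigenvector weights b
    """
    return mean_shape + eigvecs @ weights

# Toy part with n_v = 4 vertices and n_e = 2 eigenvectors (hypothetical sizes)
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(12)
eigvecs, _ = np.linalg.qr(rng.standard_normal((12, 2)))  # orthonormal columns
b = np.array([0.5, -1.0])
part = reconstruct_part(mean_shape, eigvecs, b)
```

Since PCA eigenvectors are orthonormal, projecting a reconstructed part back onto the basis recovers the weights, which is also how the validation faces are projected later in the paper.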
+
+Our approach selects the smallest set of eigenvectors that still reconstructs the shape accurately. We accomplish this by incrementally adding the eigenvectors, in the order of their significance, to the reconstruction until a certain accuracy is met. Even though we rely on the eigenvalues to sort the eigenvectors for each part (largest to smallest eigenvalue), we provide the user with a measurable error (in $\mathrm{{mm}}$ ), which is more precise than relying solely on eigenvalues across different parts. We determine the best set of eigenvectors to achieve a balance between the quality of the per-part reconstruction and the whole face reconstruction. To evaluate the accuracy of our selection, we construct the facial parts (Eq. 1) and blend them together (Sec. 5) to generate the whole face. Afterwards, we assess the accuracy of the reconstruction by calculating the average of the geometric error ${D}_{GE}$ between the ground truth and the blended face. We first do a rigid alignment step (rotation and translation) between the facial parts of the ground truth and the blended result. We then record the average per-vertex Euclidean distance over all vertices and per part:
+
+$$
+{D}_{G{E}_{all}}\left( {S}^{\prime }\right) = \frac{1}{{n}_{all}}\mathop{\sum }\limits_{{d = 1}}^{5}\mathop{\sum }\limits_{\substack{{{V}_{j}^{\prime } \in {S}_{d}^{\prime }} \\ {{V}_{j} \in {S}_{d}} }}\begin{Vmatrix}{{V}_{j} - \left( {{R}_{d}{V}_{j}^{\prime } + {T}_{d}}\right) }\end{Vmatrix} \tag{2}
+$$
+
+$$
+{D}_{G{E}_{\text{part }}}\left( {S}^{\prime }\right) = \frac{1}{5}\mathop{\sum }\limits_{{d = 1}}^{5}\frac{1}{{n}_{d}}\mathop{\sum }\limits_{\substack{{{V}_{j}^{\prime } \in {S}_{d}^{\prime }} \\ {{V}_{j} \in {S}_{d}} }}\begin{Vmatrix}{{V}_{j} - \left( {{R}_{d}{V}_{j}^{\prime } + {T}_{d}}\right) }\end{Vmatrix} \tag{3}
+$$
+
+$$
+{D}_{GE}\left( {S}^{\prime }\right) = \frac{{D}_{G{E}_{all}}\left( {S}^{\prime }\right) + {D}_{G{E}_{part}}\left( {S}^{\prime }\right) }{2}, \tag{4}
+$$
+
+where ${n}_{all}$ is the number of vertices of the face mesh, ${n}_{d}$ is the number of vertices of part $d$ , ${V}_{j}$ is a vertex of the ground truth, and ${V}_{j}^{\prime }$ is the corresponding vertex on the blended face. We compute averages over all vertices and per part to ensure that parts with more vertices do not end up using most of the eigenvector budget at the expense of parts with fewer vertices. We do so for the entire data set and for a set of 19 validation faces that were not part of the training data set. We compute the median data set error as well as the median validation error, and average the two into a global error. The parts of the validation faces are reconstructed by projecting each part onto the corresponding eigenvector basis (followed by the blending process).
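
The per-part rigid alignment and averaging of Eqs. 2-4 can be sketched as follows. The Kabsch/SVD alignment is a standard choice (the paper does not specify its alignment method), and the part meshes here are random stand-ins:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R_d and translation T_d mapping points P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def part_error(gt, pred):
    """Mean per-vertex distance ||V_j - (R_d V'_j + T_d)|| after alignment."""
    R, T = rigid_align(pred, gt)
    return np.linalg.norm(gt - (pred @ R.T + T), axis=1).mean()

def d_ge(gt_parts, pred_parts):
    """Eq. 4: average of the all-vertex error (Eq. 2) and per-part error (Eq. 3)."""
    errs = [part_error(g, p) for g, p in zip(gt_parts, pred_parts)]
    sizes = [len(g) for g in gt_parts]
    d_all = sum(e * n for e, n in zip(errs, sizes)) / sum(sizes)   # Eq. 2
    d_part = sum(errs) / len(errs)                                 # Eq. 3
    return 0.5 * (d_all + d_part)
```

Because the distance is measured after a rigid alignment, a part that is merely rotated or translated contributes no error, which matches the paper's intent of evaluating shape rather than pose.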
+
+At each step of our incremental eigenvector selection, we decide which of the five parts will get a new eigenvector added to its set. We compare the geometric errors resulting from each of the five candidate eigenvectors and select a candidate that strongly decreases the error. When eigenvectors from multiple parts result in similar decreases in error, instead of systematically picking the eigenvector with the lowest error, we sample from a discrete probability density function (PDF) created from the respective error decreases of the five candidate eigenvectors. This PDF-based selection creates a more even distribution of eigenvectors across the parts while maintaining a low error. As we iterate, the reconstruction error decreases. For the female and male data set faces, the average reconstruction errors are 2.00 and 2.13 mm when considering zero eigenvectors. The errors decrease to 0.75 and 0.74 mm after 80 iterations, and when considering all eigenvectors, the errors are 0 mm. We chose an error threshold of 1 mm, which balances the cost of considering too many eigenvectors against the accuracy of the reconstruction. Table 1 shows the resulting eigenvector distribution after achieving our 1 mm reconstruction accuracy. We also reconstructed the female and male validation faces using our subset of eigenvectors; the median reconstruction errors are 1.33 and 1.48 mm, respectively.
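
The PDF-based choice among the five candidate eigenvectors can be sketched as below; `pick_part` is a hypothetical helper and assumes the per-part error decreases have already been evaluated:

```python
import numpy as np

def pick_part(error_drops, rng):
    """Sample the part that receives the next eigenvector, with probability
    proportional to how much its candidate eigenvector decreases the error."""
    drops = np.maximum(np.asarray(error_drops, dtype=float), 0.0)
    if drops.sum() == 0.0:                      # no candidate helps: uniform choice
        probs = np.full(len(drops), 1.0 / len(drops))
    else:
        probs = drops / drops.sum()             # discrete PDF over the five parts
    return int(rng.choice(len(drops), p=probs))

rng = np.random.default_rng(42)
# Two parts tie for the largest improvement; both remain likely to be picked.
picks = [pick_part([0.30, 0.29, 0.0, 0.0, 0.0], rng) for _ in range(200)]
```

Sampling instead of always taking the argmin is what spreads the eigenvector budget across parts when several candidates are nearly tied.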
+
+
+
+Figure 2: Regions highlighted in green and blue contain the transition and fixed zones, respectively.
+
+Table 1: Number of eigenvectors selected for each part
+
+| Facial part | #Eigenvectors for female | #Eigenvectors for male |
+|---|---|---|
+| Facial mask | 7 | 9 |
+| Eye | 10 | 5 |
+| Nose | 6 | 6 |
+| Mouth | 9 | 10 |
+| Ear | 14 | 16 |
+
+## 5 FACE CONSTRUCTION THROUGH PARTS BLENDING
+
+This step focuses on the problem of constructing a realistic new face by blending the five segmented parts together. As opposed to methods such as those of Tena et al. [26] and Cao et al. [7], which handle the transition between the parts by relying on the adjustment of a single strip of vertices, we spread the transition across three strips of vertices. In contrast to other methods that adjust the transition by vertex averaging [7] or least-squares fitting [26], we use Laplacian blending [24] of the parts and the transition, resulting in a smooth, yet faithful global surface. The vertex positions are solved by an energy minimization which reduces the surface curvature discontinuities at the junction between the parts while maintaining the desired surface curvature. To this end, we define a transition zone made of quadrilateral strips around the parts. In our experiments, a band of two quadrilaterals (three rings of green vertices in Fig. 2) provides good results. We interpolate the Laplacian $\mathcal{L}$ (the cotangent weights) of the five facial parts weighted by ${\beta }_{d}$ , which has values of ${\beta }_{d} = 1$ inside the part, ${\beta }_{d} \in \{ {0.75},{0.5},{0.25}\}$ going outward of the part in the transition zone, and ${\beta }_{d} = 0$ elsewhere. We normalize these weights such that they sum to one for each vertex. These soft constraints allow some leeway in the transition zone. The boundary conditions of our system are set to the ring of blue vertices in Fig. 2, and we solve for the remaining vertices. To this end, we minimize the following energy function:
+
+$$
+E\left( {V}^{\prime }\right) = \mathop{\sum }\limits_{{i \in \text{ inner }}}{\begin{Vmatrix}{T}_{i}\mathcal{L}\left( {V}_{i}^{\prime }\right) - \frac{\mathop{\sum }\limits_{{d = 1}}^{5}{\beta }_{i, d}{R}_{d}\mathcal{L}\left( {V}_{i, d}\right) }{\mathop{\sum }\limits_{{b = 1}}^{5}{\beta }_{i, b}}\end{Vmatrix}}^{2}, \tag{5}
+$$
+
+
+
+Figure 3: Graph showing the evolution of the Frobenius norm of the rotation between two consecutive iterations (averaged across the five rotations ${R}_{d}$ ). For each of the meshes 1 to 4, we begin with the average parts and change the weight of one eigenvector per part. Each eigenvector is selected randomly (from the first ten eigenvectors if there are more than ten eigenvectors for the part). The new value for the weight is also randomly selected, within the range of -2 to +2 times the standard deviation for this eigenvector.
+
+
+
+Figure 4: (a) and (b) are the generated parts. As in Fig. 3, we modified each average part by changing the weight of one eigenvector selected randomly. The new value for the weight is also randomly selected. (c) shows the result of blending the facial parts.
+
+where "inner" is the set of vertices of the five parts, excluding the vertices of the boundary conditions; ${T}_{i}$ is an appropriate transformation for vertex ${V}_{i}^{\prime }$ based on the eventual new configuration of vertices ${V}_{i}$ and ${R}_{d}$ is the rotation of part $d$ .
+
+We solve Eq. 5 in a fashion similar to ARAP [23], alternately solving for the vertex positions and rotation matrices until the change is small. Fig. 3 shows that the rotation quickly converges, as the Frobenius norm of consecutive rotations is large only for the first few iterations. Given our experiments, we decided to stop iterating when the Frobenius norm fell below 0.01, or after 6 iterations. Fig. 4 shows an example of a blended face; in this case, the Frobenius norm was below 0.01 after five iterations. Fig. 5 shows the evolution of the geometric error ${D}_{GE}$ between the ground truth parts and their blended counterparts for the example of Fig. 4. As can be seen, the error quickly reaches a plateau as the rotation stabilizes.
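
The stopping rule can be sketched as follows; the actual vertex and rotation solves are elided, and `converged` is a hypothetical helper name:

```python
import numpy as np

def converged(R_prev, R_curr, tol=0.01):
    """Stop when the Frobenius norm of the rotation change, averaged over the
    five part rotations R_d, falls below tol (0.01 in our experiments)."""
    deltas = [np.linalg.norm(a - b) for a, b in zip(R_prev, R_curr)]
    return float(np.mean(deltas)) < tol

# Alternating-minimization skeleton in the style of ARAP (solves elided):
# R = [np.eye(3)] * 5
# for it in range(6):                  # hard cap of 6 iterations
#     V = solve_vertices(R)            # linear solve of Eq. 5 with rotations fixed
#     R_new = solve_rotations(V)       # per-part Procrustes with vertices fixed
#     if converged(R, R_new):
#         break
#     R = R_new
```

The cap of 6 iterations matters mostly as a safeguard: as Fig. 3 suggests, the rotations typically stabilize within the first few iterations.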
+
+## 6 SYNTHESIZING FACES FROM ANTHROPOMETRIC MEASUREMENTS
+
+PCA eigenvectors characterize the data variation space, but do not provide a clear intuitive interpretation. In this paper, we focus mainly on constructing linear regression models from data using a set of intuitive facial anthropometric measurements. Facial anthropometric measurements provide a quantitative description by means of measurements taken between specific surface landmarks defined with respect to anatomical features. We use the 33 parameters listed in Table 2. Each measurement corresponds to either a Euclidean distance or a ratio of Euclidean distances between surface positions, as specified in each paper cited in Table 2. In this section, we propose a measurement selection technique which assesses the accuracy of each measurement, resulting in the most relevant ones for each facial part.
+
+
+
+Figure 5: Graph comparing the average geometric error (in mm) between the ground truth parts and their blended counterparts, for different numbers of iterations.
+
+Table 2: Anatomical terms and corresponding abbreviations of our selected and discarded measurements.
+
+Selected measurements:
+
+| Anatomical term | Abbrev. | Ref. |
+|---|---|---|
+| Nasal Width $\div$ Root Width | NWRW | [10] |
+| Nasal Width $\div$ Length of Bridge | NWLB | [10] |
+| Nasal Width $\div$ Width of Nostril | NWWN | [10] |
+| Nasal Root Width $\div$ Tip Protrusion | NRTP | [10] |
+| Length of Nasal Bridge $\div$ Tip Protrusion | NBTP | [10] |
+| Nasal Width $\div$ Tip Protrusion | NWTP | [10] |
+| Nasal Root Width $\div$ Length of Bridge | NRLB | [10] |
+| Nasal Root Width $\div$ Width of Nostril | NRWN | [10] |
+| Length of Nasal Bridge $\div$ Width of Nostril | NBWN | [10] |
+| Width of Nose $\div$ Tip Protrusion | WNTP | [10] |
+| Philtrum Width | PW | [11] |
+| Face Height | FH | [12] |
+| Orbits Intercanthal Width | OIW | [12] |
+| Orbits Fissure Length | OFL | [12] |
+| Orbits Biocular Width | OBW | [12] |
+| Nose Height | NH | [12] |
+| Face Width | FW | [15] |
+| Bitragion Width | BW | [15] |
+| Ear Height | EH | [15] |
+| Bigonial Breadth | B | [16] |
+| Bizygomatic Breadth | BB | [16] |
+| Facial Index | F | [21] |
+| Nasal Index | N | [21] |
+| Mouth-Face Width Index | MFW | [21] |
+| Biocular Width-Total Face Height Index | BWFH | [21] |
+| Lip Length | LL | [29] |
+| Maximum Frontal Breadth | MaxFB | [29] |
+| Interpupillary Distance | ID | [29] |
+| Nose Protrusion | NP | [29] |
+| Nose Length | NL | [29] |
+| Nose Breadth | NB | [29] |
+
+Discarded measurements:
+
+| Anatomical term | Abbrev. | Ref. |
+|---|---|---|
+| Eye Fissure Index | EF | [21] |
+| Minimum Frontal Breadth | MinFB | [29] |
+
+### 6.1 Mapping Method
+
+We evaluate the measurements on the facial parts of the data set, yielding ${f}_{d, i} = \left\lbrack {{f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}}\right\rbrack$ for the $d$ th facial part of scan ${S}_{i}$ considering ${n}_{m}$ measures. The measures for all of the scans are combined into an ${n}_{m} \times {n}_{s}$ matrix, ${F}_{d} = \left\lbrack {{f}_{d,1}^{T}\ldots {f}_{d,{n}_{s}}^{T}}\right\rbrack$ , where ${n}_{s}$ is the number of scans. We learn how to adjust the weights of the PCA eigenvectors to reconstruct faces having specific characteristics corresponding to the measures. We adopt the general method of Allen et al. [1]. However, while that method learns a global mapping that adjusts the whole body, we will learn per-part local mappings. Furthermore, in Sec. 6.2 we will derive a process to select the best measures out of the set of all measures $\left\lbrack {{f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}}\right\rbrack$ , and will proceed independently for each of the five parts.
+
+We relate the measures to the PCA weights by learning a linear mapping. With the ${n}_{m}$ measures for the $d$ th facial part, the mapping is represented as an ${n}_{e} \times \left( {{n}_{m} + 1}\right)$ matrix ${M}_{d}$ :
+
+$$
+{M}_{d}{\left\lbrack {f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}1\right\rbrack }^{T} = b, \tag{6}
+$$
+
+where $b$ is the corresponding eigenvector weight vector. Collecting the measurements for the whole data set, the mapping matrix is solved as:
+
+$$
+{M}_{d} = {B}_{d}{F}_{d}^{ + }, \tag{7}
+$$
+
+where ${B}_{d}$ is an ${n}_{e} \times {n}_{s}$ matrix containing the corresponding eigenvector weights of the related facial part and ${F}_{d}^{ + }$ is the pseudoinverse of ${F}_{d}$ . As in Eq. 6, a row of 1s is appended to the measurement matrix ${F}_{d}$ to account for the y-intercepts in the regression.
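
Eq. 7 is an ordinary least-squares fit via the pseudoinverse. A minimal sketch with hypothetical sizes (`fit_mapping` is our own name, not from the paper):

```python
import numpy as np

def fit_mapping(F, B):
    """Eq. 7: M_d = B_d F_d^+, with a row of 1s appended to F_d for the intercept.

    F: (n_m, n_s) anthropometric measurements for the n_s scans
    B: (n_e, n_s) per-scan eigenvector weights of the part
    Returns M_d of shape (n_e, n_m + 1).
    """
    F1 = np.vstack([F, np.ones(F.shape[1])])
    return B @ np.linalg.pinv(F1)

# Sanity check: recover a known linear mapping (hypothetical sizes)
rng = np.random.default_rng(2)
F = rng.standard_normal((3, 20))           # n_m = 3 measures, n_s = 20 scans
M_true = rng.standard_normal((4, 4))       # n_e = 4 weights, n_m + 1 columns
B = M_true @ np.vstack([F, np.ones(20)])   # weights generated exactly by M_true
M = fit_mapping(F, B)
```

When the weights really are a linear function of the measurements, as in this toy setup, the pseudoinverse recovers the mapping exactly; on real scan data it yields the least-squares regression.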
+
+To construct a new facial part based on specific measurements, we use $b$ in Eq. 1, as follows:
+
+$$
+{S}_{d}^{\prime } = \overline{{S}_{d}} + {Pb}, \tag{8}
+$$
+
+where $\overline{{S}_{d}}$ is the mean shape of the $d$ th facial part and $P$ is the matrix containing the eigenvectors. Moreover, we can define delta-feature vectors of the form:
+
+$$
+\Delta {f}_{d} = {\left\lbrack \Delta {f}_{1}\ldots \Delta {f}_{{n}_{m}}0\right\rbrack }^{T}, \tag{9}
+$$
+
+where each ${\Delta f}$ contains the user-prescribed differences in measurement values. Afterwards, by adding ${\Delta b} = {M}_{d}\Delta {f}_{d}$ to the related eigenvector weights, it is possible to adjust the measurements, for example to make a face slimmer or fatter.
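
Applying a delta-feature edit (Eq. 9) then reduces to one matrix-vector product; the mapping matrix and current weights below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
n_e, n_m = 4, 3
M_d = rng.standard_normal((n_e, n_m + 1))  # learned mapping (stand-in values)
b = rng.standard_normal(n_e)               # current eigenvector weights of the part

# Increase the first measurement by 2 units; the trailing 0 (Eq. 9) leaves the
# regression intercept untouched.
delta_f = np.array([2.0, 0.0, 0.0, 0.0])
b_edited = b + M_d @ delta_f               # delta_b = M_d * delta_f_d
```

Only the eigenvector weights change; the edited part is then rebuilt with Eq. 8 and blended back into the face.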
+
+### 6.2 Measurement Selection
+
+We propose a novel technique for automatically detecting the most effective and relevant anthropometric measurements. Some might be redundant with respect to others, some might not make sense for a specific part (e.g., the "Ear Height" might not be relevant for the mouth), and some might even lead to mapping matrices that generate worse results. During our investigations, we discovered that considering more anthropometric measurements does not necessarily lead to a lower average reconstruction error. Fig. 6 illustrates that using all the measurements yields a higher error than our selected combination. To aggregate the error for a given part, we reconstruct the face relying only on the anthropometric measurements of the selected part, and then calculate the average error as in Sec. 4. Fig. 7 shows two odd-looking examples resulting from using all the measurements of the nose (a) and facial mask (b). We thus evaluate the set of relevant measurements separately for each part. We begin with an empty set of selected measurements, and we iteratively test which measurement we should add to the set by evaluating the quality of the reconstructed faces when creating the mapping matrix from the currently selected measurements together with the candidate measurement. We reconstruct a face using the mapping matrix (Eqs. 6 and 8) based only on its measurement values. The reconstructed face is treated as a prediction, and we evaluate the prediction quality in a fashion very similar to that used for eigenvector selection, by reconstructing all of the faces found in the data set of facial scans, as well as the 19 validation faces.
+
+
+
+Figure 6: Using all measurements leads to higher reconstruction errors (mm) compared to our set of selected measurements, on both the data set and validation faces.
+
+
+
+Figure 7: Using all the semantic measurements of the nose (a) and facial mask (b) often leads to odd-looking parts when editing through the adjustment of measurement values.
+
+Each candidate measurement is used together with the current set of selected measurements, and we compute the candidate mapping matrix from this set of measurements. We use the mapping matrix with the data set and validation faces, and reconstruct all of the instances of the part under consideration (e.g., all of the mouths). We then evaluate a geometric error, ${D}_{GE}$ , with the per-vertex distance between each predicted instance and its corresponding ground truth instance. The distance is calculated after a rigid alignment of the predicted instance to the ground truth instance is performed. We can thus ensure that we are evaluating the fidelity of the shape, and not its pose. If one or a few faces result in a large error, this could lead to the rejection of a measurement, which might still be beneficial for the prediction of most faces. To avoid this, we also measure the percentage ${D}_{NI}$ of faces for which an error improvement is seen. We count the number of faces whose geometric errors have been decreased by considering the candidate measurement. We then normalize ${D}_{GE}$ and ${D}_{NI}$ to the $\left\lbrack {0,1}\right\rbrack$ range and combine them into a single reconstruction quality measure:
+
+$$
+\text{quality} = \text{normalize}\left( {D}_{NI}\right) + 1 - \text{normalize}\left( {D}_{GE}\right) \text{.} \tag{10}
+$$
+
+Considering the combined geometric error and percentage of improvement of all candidate measurements, we pick the one to be added to the set of selected measurements. We stop adding measurements when we observe an increase of ${D}_{GE}$ and a value of ${D}_{NI}$ below 50%. We repeat this process for each part (eyes, nose, mouth, etc.).
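The greedy forward-selection loop might be sketched as follows. The `evaluate` callback is an assumption standing in for the expensive step of building the mapping matrix for a candidate set and reconstructing all data set and validation faces; `normalize` rescales values to $[0,1]$ across the current candidates as in Eq. 10:

```python
def normalize(v, values):
    """Rescale v to [0, 1] with respect to the candidate values."""
    lo, hi = min(values), max(values)
    return 0.0 if hi == lo else (v - lo) / (hi - lo)

def select_measurements(candidates, evaluate, dni_threshold=0.5):
    """Greedy forward selection of anthropometric measurements.

    `evaluate(selected)` returns (D_GE, D_NI): the aggregated geometric
    error and the fraction of faces whose error improved."""
    selected, prev_dge = [], float("inf")
    candidates = list(candidates)
    while candidates:
        results = {m: evaluate(selected + [m]) for m in candidates}
        dges = [r[0] for r in results.values()]
        dnis = [r[1] for r in results.values()]
        # combined reconstruction quality (Eq. 10): high D_NI, low D_GE
        best = max(results, key=lambda m: normalize(results[m][1], dnis)
                                          + 1.0 - normalize(results[m][0], dges))
        dge, dni = results[best]
        # stop once the error increases and fewer than half the faces improve
        if dge > prev_dge and dni < dni_threshold:
            break
        selected.append(best)
        candidates.remove(best)
        prev_dge = dge
    return selected
```

The stopping rule and the quality combination follow the text; the tie-breaking and the exact iteration order are illustrative choices.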
+
+The selected anthropometric measurements are enumerated in Table 3. The description of each measurement, as well as the reference to the literature from which we obtained it, are shown in Table 2, where we also list the rejected measurements (measurements that were never selected for any of the parts).
+
+Table 3: Combinations of anthropometric measurements
+
+For female
+
+| **Part** | **Selected measures** |
+| --- | --- |
+| Facial mask | B, BB, BW, BWFH, FH, MaxFB, NBWN, NH, NP, PW, WNTP |
+| Eye | B, BB, BW, BWFH, EH, F, FW, ID, LL, NB, NBTP, NH, NRLB, NRTP, NWRW, NWWN, OBW, OFL, OIW, PW |
+| Nose | EH, LL, N, NB, NBTP, NBWN, NH, NL, NP, NWTP, NWWN, PW |
+| Mouth | BWFH, F, LL, MFW, NP, NRWN, NWTP, PW |
+| Ear | EH, FH, FW, MaxFB, NB, NBTP, NBWN, NP, NRLB, NRTP, NWLB, NWTP, OBW, PW, WNTP |
+
+For male
+
+| **Part** | **Selected measures** |
+| --- | --- |
+| Facial mask | BW, BWFH, F, FH, FW, MaxFB, NRWN, OBW, OFL, PW |
+| Eye | B, BB, BW, F, FW, ID, N, NBTP, NBWN, OBW, OFL, PW |
+| Nose | BB, FW, ID, LL, MaxFB, NB, NBWN, NH, NP, NRLB, NRTP, NWLB, NWWN, OBW, OIW |
+| Mouth | B, BW, BWFH, F, LL, NBTP, NH, PW |
+| Ear | B, BB, BWFH, EH, FH, MaxFB, N, NRTP, NWRW, OBW, OIW |
+
+### 6.3 Correlation Between Measurements
+
+Defining the correlation between the measurements is important for the adjustment of faces. Accordingly, if the user adjusts one measurement, the system automatically calculates the adjustment of the other measurements as well. This greatly helps to create realistic faces by maintaining the correlation observed in the data set. Similarly to Body Talk [25], we use Pearson's correlation coefficient on $F$ to evaluate the relationship between the anthropometric measurements. Considering a facial part $d$ , the Pearson’s correlation coefficient ${\operatorname{Cor}}_{jk}$ for measurements $j$ and $k$ is expressed as:
+
+$$
+{\operatorname{Cor}}_{jk} = \frac{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}\left( {{f}_{ij} - \overline{{f}_{j}}}\right) \left( {{f}_{ik} - \overline{{f}_{k}}}\right) }{\sqrt{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}{\left( {f}_{ij} - \overline{{f}_{j}}\right) }^{2}}\sqrt{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}{\left( {f}_{ik} - \overline{{f}_{k}}\right) }^{2}}}, \tag{11}
+$$
+
+where ${f}_{ij},{f}_{ik} \in {f}_{d,i}$ are measurements of scan ${S}_{i}$, $\overline{{f}_{j}}$ and $\overline{{f}_{k}}$ are the mean values of measurements $j$ and $k$ respectively, and ${n}_{s}$ is the number of scans. The coefficient is a value between -1 and 1 that represents the correlation. When adjusting measurement $k$ by $\Delta {f}_{k}$, we get the change in the other measurements as $\Delta {f}_{j} = {\operatorname{Cor}}_{jk}\Delta {f}_{k}$. Accordingly, we can evaluate the influence of one measurement on the others, as well as condition on one or more measurements, and compute the most likely values of the other measurements.
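A minimal NumPy sketch of Eq. 11 and of the propagation rule $\Delta f_j = \operatorname{Cor}_{jk}\,\Delta f_k$; `np.corrcoef` computes exactly the columnwise Pearson coefficients of Eq. 11 (the function names are ours, for illustration):

```python
import numpy as np

def correlation_matrix(F):
    """Pearson correlation between the measurement columns of
    F (n_s scans x n_m measurements), i.e., Eq. 11 for all pairs."""
    return np.corrcoef(F, rowvar=False)

def propagate_edit(F, k, delta_fk):
    """Change induced in every measurement when measurement k is
    adjusted by delta_fk: delta_f_j = Cor_jk * delta_fk."""
    return correlation_matrix(F)[:, k] * delta_fk
```

Perfectly correlated measurements receive the full adjustment, while uncorrelated ones are left untouched, which is what keeps the edited faces close to the statistics of the data set.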
+
+## 7 RESULTS
+
+Compared to global 3DMM methods that compute one set of eigenvectors for the whole face, our 3DMM computes a set of eigenvectors for each part. This is at the root of one of the advantages of our approach: its ability to locally adjust faces. We compare our approach to other methods that rely on local 3DMM. We created mapping matrices (Eq. 7) for global 3DMMs, SPLOCS [19], clustered PCA [26], as well as our part-based 3DMMs, and tested the adjustment of measurements with these models. We used 46 eigenvectors for global 3DMM, SPLOCS, and our part-based 3DMM. For clustered PCA, we first tested using 13 clusters, as is reported in the paper, but found that this leads to a non-symmetrical result (Fig. 8b). By checking other clusterings, we selected 12 clusters (Fig. 8a). Because a clustered PCA does not allow for a different number of eigenvectors for each cluster, and to avoid having too few eigenvectors per part, we used 46 eigenvectors for each cluster (selecting the 46 with the largest eigenvalues).
+
+
+
+Figure 8: Automatic part identification of clustered PCA [26]. Note how the automatic clustering leads to non-symmetrical clusters (left eye with one cluster vs. right eye with two clusters) for 13 clusters and required us to manually check which other clustering would be usable.
+
+To compare our approach and the use of measurements with other methods, we decided on a way to use our measurements with SPLOCS and clustered PCA. We further demonstrate that with SPLOCS and our approach, we can have either more local or more global measurement control. For our approach, Table 3 shows that some measures influence more than one part. For example, the "Lip Length" is found in the lists for both the mouth and the nose. When a measurement is shared between different facial parts, our method allows the user to decide between more localized changes, by adjusting the measure for only one part, and more coherence across the parts, by adjusting all of the parts involved in the measurement. When comparing with SPLOCS, we can also balance between local and global measurements. Each measurement is based on computations involving specific measurement vertices (such as the corner of the mouth and the tip of the nose). To enforce locality, when considering a measurement, we check which SPLOCS "eigenvectors" induce significant movement at the related measurement vertices. We compute this by checking whether the eigenvector displacement vector at a measurement vertex is large enough compared to the maximum displacement vector of the eigenvector (we check if it is larger than $1\%$ of the maximum displacement over all vertices of the eigenvector). A SPLOCS eigenvector is considered for a measurement only if it meets this criterion for at least one of the measurement vertices of that measurement. To enforce more globality with SPLOCS, we use the mapping matrices for all of the eigenvectors. Fig. 9 shows an example of the globality and locality of the influence of adjusting the "Lip Length". It compares global PCA eigenvectors, local measurement and global measurement SPLOCS, clustered PCA, and our local measurement and global measurement approaches. The color coding shows the per-vertex Euclidean distance.
+Note that the colors do not represent errors, but rather, vertex movements. Thus, the goal is to have warmer colors around the location where the editing is intended, and colder colors in unrelated regions. Our method allows a global measurement edit, by adjusting the measure for both the nose and the mouth parts, as well as more localized changes, by adjusting only the mouth (Fig. 9c and 9f). Contrary to our approach, both global measurement and local measurement SPLOCS resulted in similar deformations all over the face, while the expected result was a modification focused around the mouth (Fig. 9b and 9d).
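The SPLOCS locality criterion (keep a component only if it moves one of the measurement vertices by more than 1% of its own maximum per-vertex displacement) can be sketched as follows; the array layout and function names are assumptions for illustration:

```python
import numpy as np

def local_eigenvectors(eigvecs, measurement_vertices, rel_threshold=0.01):
    """Indices of the components considered local to a measurement.

    eigvecs: (n_components, n_vertices, 3) per-vertex displacements.
    A component qualifies if its displacement magnitude at any
    measurement vertex exceeds rel_threshold times its own maximum."""
    keep = []
    for i, e in enumerate(eigvecs):
        mags = np.linalg.norm(e, axis=1)  # per-vertex displacement magnitude
        if (mags[measurement_vertices] > rel_threshold * mags.max()).any():
            keep.append(i)
    return keep
```

Only the kept components would then enter the mapping matrix for that measurement, while the full set of components is used for the global variant.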
+
+We will now focus on local measurement editing. Figs. 10-12 show the adjustment of the same anthropometric measurement using the global 3DMM, local measurement SPLOCS, clustered PCA, and our local measurement approach. In Fig. 10f and 10g, we can see that even though we wanted to adjust the "Nose Breadth", the adjustment using the global eigenvectors and local measurement SPLOCS resulted in significant deformations all over the face, while clustered PCA and our approach could focus the deformation around the nose, as expected (Fig. 10h and 10i). We can observe similar unwanted global deformations of the face in Fig. 11f and 11g. Also note that the automatic segmentation of clustered PCA does not provide the desired deformation in some cases, such as in Fig. 11h and 12h. We consistently outperform clustered PCA in terms of local deformation where it is expected. The results shown in Figs. 10-12 highlight the difficulty of locally controlling the face deformation, and the power of our approach in locally adjusting the face with respect to the anthropometric measurements.
+
+
+
+Figure 9: Comparison of the globality vs. locality of the adjustments (editing by increasing the "Lip Length"): (a) global PCA eigenvectors, (b) global measurement SPLOCS, (c) our global measurement approach, (d) local measurement SPLOCS, (e) clustered PCA, and (f) our local measurement approach. The colors respectively represent per-vertex Euclidean distance (blue $= 0\mathrm{\;{mm}}$ , red $= {8.5}\mathrm{\;{mm}}$ ). Note how our local measurement and global measurement approaches induce significant and local surface deformation to achieve the desired editing. In comparison, global PCA and SPLOCS induce non-local deformation, and clustered PCA induces much less deformation.
+
+
+
+Figure 10: "Nose Breadth" adjustment results: (a) nose of a female from validation faces adjusted using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate respective per-vertex Euclidean distance (blue $= 0\mathrm{\;{mm}}$ , red $= 5\mathrm{\;{mm}}$ ).
+
+In the accompanying video, we show multiple edits on multiple parts, starting from the average face, while Fig. 13 shows edits starting from four real faces. We can see that our approach allows capturing the essence of the anthropometric measurements, providing an easy-to-use workflow.
+
+## 8 DISCUSSION
+
+In this section, we discuss different aspects of our approach. We present different comparisons highlighting the impact of the eigenvector and measurement selection. We then discuss the face segmentation choice, and end by describing the procedure used to bring all of our scans to a common face mesh.
+
+
+
+Figure 11: "Lip Length" increase results: (a) mouth of a male from validation faces edited using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate respective per-vertex Euclidean distance (blue $= 0$ $\mathrm{{mm}}$ , red $= 8\mathrm{\;{mm}}$ ).
+
+
+
+Figure 12: "Bizygomatic Breadth" (the bizygomatic width of the face) increase results: (a) a male from the validation faces edited using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate respective per-vertex Euclidean distance (blue $= 0\mathrm{\;{mm}}$ , red $= {14}\mathrm{\;{mm}}$ ).
+
+### 8.1 Measurement Error
+
+To verify the robustness of our 3DMMs and of our set of selected measurements, we reconstruct real faces, relying on their anthropometric measurements to compute their eigenvector weights (Eq. 6). We then generate the face with our approach, including the blending procedure (Sec. 5), and compute its resulting anthropometric measurements. We compute the quality of the reconstruction as the absolute value of the difference between the ground truth measurement and the measurement from the reconstructed face. Since measurements correspond either to a Euclidean distance or to a ratio of Euclidean distances, we normalize all the measurements to the $[0\%, 100\%]$ range. Fig. 14 shows that the average percentage of error is low when using "our measurements". This means that both the selection of eigenvectors and the mapping matrix work well. Furthermore, it shows that when using "all measurements" to compute the mapping matrix (Eq. 7), we get larger average errors as compared to ground truth measurements. When calculating the error in Fig. 14 for "our measurements", we calculate the average error over our selected measurements only (Table 3). The error shown in Fig. 14 for "all measurements" also considers only our selected measurements (if the error across all of the measurements is considered, the comparison is even more in favor of our selected measurements).
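The normalized measurement error might be computed as in the following sketch. The per-measurement bounds `lo`/`hi` defining the $[0\%, 100\%]$ range are an assumption on our part (e.g., the minimum and maximum observed over the data set), as the exact normalization bounds are not detailed here:

```python
import numpy as np

def measurement_errors_percent(gt, rec, lo, hi):
    """Absolute error between ground truth and reconstructed measurement
    values, after mapping each measurement to a [0, 100] percent range."""
    gt_pct = 100.0 * (gt - lo) / (hi - lo)
    rec_pct = 100.0 * (rec - lo) / (hi - lo)
    return np.abs(gt_pct - rec_pct)
```

Averaging the returned errors over the selected measurements and over all faces would yield the percentages reported in Fig. 14.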
+
+
+
+Figure 13: We generated random faces (left faces (a)-(d)) and edited them by increasing ("+") or decreasing ("-") the value of some of the indicated anthropometric measurements
+
+
+
+Figure 14: Using our subset of measurements on the data set and validation faces leads to lower errors (percentage), as compared to using "all measurements"
+
+We evaluated how our approach compares to SPLOCS and clustered PCA with respect to achieving the measurement values prescribed by edit operations. We created a set of 1,000 random edits on 135 face meshes. We took the resulting edited face mesh from our approach, SPLOCS, and clustered PCA, and evaluated the difference between the measurement value prescribed by the editing and the measurement value calculated from the edited mesh. Overall, our approach performed best, with the resulting measurement being closest to the prescribed measurement. SPLOCS was second, and clustered PCA presented the greatest differences (see Fig. 15).
+
+Even though our approach is the one that is closest (on average) to the prescribed measurements, there is a limitation due to the blending of the synthesized parts. This blending sometimes affects the mesh in a way that prevents it from achieving the exact prescribed effect for the editing. Fig. 16 shows an example where the blending does not maintain the "Nose Height" of the synthesized nose as it deforms it through the blending process.
+
+
+
+Figure 15: Starting from one face, we adjust one of its measurements to match the value of that measurement for another face. We then compute the difference between the prescribed measurement value and the measurement value calculated from the mesh. We do so for 1,000 such edits. Our approach leads to a smaller error (percentage), as compared to the clustered PCA, and to slightly better results when compared to local measurement SPLOCS.
+
+
+
+Figure 16: (a) Average male head. Its "Nose Height" is ${45.09}\mathrm{\;{mm}}$ . (b) A synthesized nose with its "Nose Height" edited to ${70.11}\mathrm{\;{mm}}$ . (c) Result of blending the nose. While this is an extreme case, it still reflects the fact that the approach is not always able to achieve the prescribed measurement (the value decreased to ${58.53}\mathrm{\;{mm}}$ for this example).
+
+### 8.2 Face Decomposition
+
+Our face segmentation was motivated by several facial animation artists with whom we worked, and who strongly prefer having control over the face patches in order to make sure they match the morphology of the face and muscle locations. This type of control is impossible to achieve with an automatic method, which is typically agnostic to the underlying anatomical structure. It is important to note that this manual way of selecting the regions is no more cumbersome than the current state-of-the-art methods. The state-of-the-art method of Tena et al. [26] requires a post-processing step to fix occasional artifacts in the segmentation method. Furthermore, as illustrated in Fig. 8b, segmentation boundaries can occasionally occur across important semantic regions such as the eyes, leading to complications further down the pipeline.
+
+### 8.3 Data Set
+
+The quality of the input mesh data set is crucial for the reconstruction of good 3D face models. As do many existing methods, we assume that the meshes share a common mesh topology. Mapping raw 3D scans to a common base mesh is typically done using a surface mapping method [2, 20, 28]. We established this correspondence with the commercial solution, R3DS WRAP. We plan to release a subset of our data set for other researchers.
+
+## 9 CONCLUSION
+
+In this paper, we designed a new local 3DMM for face editing. We demonstrated the difficulty of locally editing the face with global 3DMMs; we thus segmented the face into five parts and combined the 3DMMs for each part into a single 3DMM by selecting the best eigenvectors through prediction error measurements. We then proposed the use of established anthropometric measurements as a basis for face editing. We mapped the anthropometric measurements to the 3DMM through a mapping matrix. We proposed a process to select the best set of anthropometric measurements, leading to improved reconstruction accuracy and the removal of conflicting measurements. From a list of 33 anthropometric measurements we surveyed from the literature, we identified 31 that lead to an improvement of the reconstruction and rejected 2 as they decreased the quality of the reconstruction. Note that the anthropometric measurement selection process would apply equally well to a different 3DMM from the one proposed in this paper, as well as to a different set of anthropometric measurements. We demonstrated this by applying our set of measurements to both SPLOCS [19] and clustered PCA [26]. This also demonstrated that our approach produces results superior to those of established methods proposing automatic segmentation and different ways to construct the eigenvector basis. We also presented several pieces of experimental evidence demonstrating the superiority of our approach, especially in terms of local control, as compared to the typical global 3DMM.
+
+A limitation of our approach lies in the mapping matrices, which assume a linear relationship between anthropometric measurements and the eigenvector weights. An interesting avenue for future work would be to apply machine learning to identify non-linear mappings. Also, our measurements are based on distances between points on the surface. Future work could consider measurements based on the curvature over the face, such as measurements specifying the angle formed at the tip of the chin.
+
+Although anthropometric measurements generate plausible facial geometric variations, they do not consider fine-scale or coarse-scale features. Regarding the fine-scale details, our approach does not model realistic variations of wrinkles, and that could be an interesting direction for future research. Regarding coarse-scale features, we could reconstruct a skull based on the anthropometric measurements, and then generate the facial mask based on an energy minimization of the skin thickness considering the skull and the measurements.
+
+## ACKNOWLEDGMENTS
+
+This work was supported by Ubisoft Inc., the Mitacs Accelerate Program, and École de technologie supérieure. We would like to thank the anonymous reviewers for their valuable comments.
+
+## REFERENCES
+
+[1] B. Allen, B. Curless, and Z. Popović. The space of human body shapes: Reconstruction and parameterization from range scans. In ACM SIGGRAPH 2003 Papers, SIGGRAPH '03, p. 587-594. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/1201775.882311
+
+[2] B. Amberg, S. Romdhani, and T. Vetter. Optimal step nonrigid ICP algorithms for surface registration. In 2007 IEEE Conf. on Comp. Vision and Pattern Recognition, pp. 1-8. IEEE, 2007.
+
+[3] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, p. 187-194. ACM Press/Addison-Wesley Publishing Co., USA, 1999. doi: 10.1145/311535.311556
+
+[4] J. Booth, A. Roussos, A. Ponniah, D. Dunaway, and S. Zafeiriou. Large scale 3d morphable models. Int. J. Comput. Vision, 126(2-4):233-254, Apr. 2018. doi: 10.1007/s11263-017-1009-7
+
+[5] J. Booth, A. Roussos, S. Zafeiriou, A. Ponniah, and D. Dunaway. A 3d morphable model learnt from 10,000 faces. In Proc. of the IEEE Conf. on Comp. Vision and Pattern Recognition, pp. 5543-5552, 2016.
+
+[6] C. Cao, D. Bradley, K. Zhou, and T. Beeler. Real-time high-fidelity facial performance capture. ACM Trans. Graph., 34(4), July 2015. doi: 10.1145/2766943
+
+[7] C. Cao, M. Chai, O. Woodford, and L. Luo. Stabilized real-time face tracking via a learned dynamic rigidity prior. ACM Trans. Graph., 37(6), Dec. 2018. doi: 10.1145/3272127.3275093
+
+[8] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3D facial expression database for visual computing. IEEE Transactions on Visualization & Computer Graphics, 20(03):413-425, Mar. 2014. doi: 10.1109/TVCG.2013.249
+
+[9] J. Chi, S. Gao, and C. Zhang. Interactive facial expression editing based on spatio-temporal coherency. Vis. Comput., 33(6-8):981-991, June 2017. doi: 10.1007/s00371-017-1387-4
+
+[10] A. Etöz and I. Ercan. Anthropometric analysis of the nose. In Handbook of Anthropometry, pp. 919-926. Springer, 2012.
+
+[11] L. G. Farkas. Anthropometry of the Head and Face. Raven Pr, 1994.
+
+[12] L. G. Farkas, M. J. Katic, and C. R. Forrest. International anthropometric study of facial morphology in various ethnic groups/races. Journal of Craniofacial Surgery, 16(4):615-646, 2005.
+
+[13] P. Garrido, M. Zollhöfer, D. Casas, L. Valgaerts, K. Varanasi, P. Pérez, and C. Theobalt. Reconstruction of personalized 3D face rigs from monocular video. ACM Trans. Graph., 35(3), May 2016. doi: 10.1145/2890493
+
+[14] A.-E. Ichim, P. Kadleček, L. Kavan, and M. Pauly. Phace: Physics-based face modeling and animation. ACM Trans. Graph., 36(4), July 2017. doi: 10.1145/3072959.3073664
+
+[15] D. Lacko, T. Huysmans, P. M. Parizel, G. De Bruyne, S. Verwulgen, M. M. Van Hulle, and J. Sijbers. Evaluation of an anthropometric shape model of the human scalp. Applied Ergonomics, 48:70-85, May 2015. doi: 10.1016/j.apergo.2014.11.008
+
+[16] J.-H. Lee, S.-J. Shin, and C. Istook. Analysis of human head shapes in the United States. International Journal of Human Ecology, 7, Jan. 2006.
+
+[17] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero. Learning a model of facial shape and expression from 4d scans. ACM Trans. Graph., 36(6), Nov. 2017. doi: 10.1145/3130800.3130813
+
+[18] M. Lüthi, T. Gerig, C. Jud, and T. Vetter. Gaussian process morphable models. IEEE Transactions on pattern analysis and machine intelligence, 40(8):1860-1873, 2018.
+
+[19] T. Neumann, K. Varanasi, S. Wenger, M. Wacker, M. Magnor, and C. Theobalt. Sparse localized deformation components. ACM Trans. Graph., 32(6), Nov. 2013. doi: 10.1145/2508363.2508417
+
+[20] S. Ramachandran, D. Ghafourzadeh, M. de Lasa, T. Popa, and E. Paquette. Joint planar parameterization of segmented parts and cage deformation for dense correspondence. Computers & Graphics, 74:202-212, 2018.
+
+[21] N. Ramanathan and R. Chellappa. Modeling age progression in young faces. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1, CVPR '06, p. 387-394. IEEE Computer Society, USA, 2006. doi: 10.1109/CVPR.2006.187
+
+[22] F. Shi, H.-T. Wu, X. Tong, and J. Chai. Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Trans. Graph., 33(6), Nov. 2014. doi: 10.1145/2661229.2661290
+
+[23] O. Sorkine and M. Alexa. As-rigid-as-possible surface modeling. In Proceedings of the Fifth Eurographics Symposium on Geometry Processing, SGP '07, p. 109-116. Eurographics Association, Goslar, DEU, 2007.
+
+[24] O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl, and H.-P. Seidel. Laplacian surface editing. In Proc. of the 2004 Eurographics/ ACM SIGGRAPH Symposium on Geometry Processing, SGP '04, pp. 175-184. ACM, 2004.
+
+[25] S. Streuber, M. A. Quiros-Ramirez, M. Q. Hill, C. A. Hahn, S. Zuffi, A. O'Toole, and M. J. Black. Body talk: Crowdshaping realistic 3D avatars with words. ACM Trans. Graph., 35(4), July 2016. doi: 10.1145/2897824.2925981
+
+[26] J. R. Tena, F. De la Torre, and I. Matthews. Interactive region-based linear 3d face models. In ACM SIGGRAPH 2011 Papers, SIGGRAPH '11. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1964921.1964971
+
+[27] C. Wu, D. Bradley, M. Gross, and T. Beeler. An anatomically-constrained local deformation model for monocular face capture. ACM Trans. Graph., 35(4), July 2016. doi: 10.1145/2897824.2925882
+
+[28] E. Zell and M. Botsch. Elastiface: Matching and blending textured faces. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '13, pp. 15-24. ACM, New York, NY, USA, 2013. doi: 10.1145/2486042.2486045
+
+[29] Z. Zhuang, D. Landsittel, S. Benson, R. Roberge, and R. Shaffer. Facial anthropometric differences among gender, ethnicity, and age groups. The Annals of Occupational Hygiene, 54:391-402, Mar. 2010. doi: 10.1093/annhyg/meq007
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4f2d9df28e7f15172ade16b95d32a3b0383d3300
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Q7Cy_qHg6Y/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,480 @@
+§ PART-BASED 3D FACE MORPHABLE MODEL WITH ANTHROPOMETRIC LOCAL CONTROL
+
+Donya Ghafourzadeh*
+
+Ubisoft La Forge, Montreal, Canada
+
+Cyrus Rahgoshay ${}^{ \dagger }$
+
+Ubisoft La Forge, Montreal, Canada
+
+Andre Beauchamp ${}^{§}$
+
+Ubisoft La Forge, Montreal,
+
+Canada
+
+Adeline Aubame ${}^{¶}$
+
+Ubisoft La Forge, Montreal,
+
+Canada
+
+Tiberiu Popa ${}^{\parallel}$
+
+Concordia University,
+
+Montreal, Canada
+
+Figure 1: Our 3D facial morphable model workflow. In an offline stage, we extract PCA eigenvectors and select the best ones. We also select the best subset of anthropometric measurements. The relationship between the eigenvectors and measurements is encoded in a mapping matrix. All of these are used in the online stage, where the mapper collects user-prescribed anthropometric measurement values, and applies the mapping matrices to reconstruct the parts. The last step provides the edited face through a smooth blending of the parts.
+
+Sahel Fallahdoust ${}^{ \ddagger }$
+
+Ubisoft La Forge, Montreal, Canada
+
+Eric Paquette**
+
+École de technologie supérieure,
+
+Montreal, Canada
+
+§ ABSTRACT
+
+We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMMs suffer from a lack of local control. We thus create a 3DMM by combining local part-based 3DMMs for the eyes, nose, mouth, ears, and facial mask regions. Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMMs to ensure that the combined 3DMM is expressive, while allowing accurate reconstruction. The editing controls we provide to the user are intuitive, as they are extracted from anthropometric measurements found in the literature. Out of a large set of possible anthropometric measurements, we filter those that have meaningful generative power given the face data set. We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans. Our part-based 3DMM is compact, yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and allows the user intuitive local control.
+
+Index Terms: Computing methodologies—Computer graphics—Shape modeling—Mesh models
+
+§ 1 INTRODUCTION
+
+The authoring of realistic 3D faces with intuitive controls is used in a broad range of computer graphics applications, such as video games, person identification, facial plastic surgery, and virtual reality. This process is particularly time-consuming, given the intricate details found in the eyes, nose, mouth, and ears. Consequently, it would be convenient to use high-level controls, such as anthropometric measurements, to edit human-like character heads.
+
+Many methods use 3D morphable face models (3DMM) for animation (blend shapes), face capture, and face editing. Even though face animation concerns are important, our work focuses on the editing of facial meshes. 3DMMs are typically constructed by computing a Principal Component Analysis (PCA) on a data set of scans sharing the same mesh topology. New 3D faces are generated by changing the relative weights of the individual eigenvectors. These methods are popular due to the simplicity and efficiency of the approach, but suffer from two fundamental limitations: they impose global control on the new generated meshes, making it impossible to edit a localized region of the face, and the control mechanism is very unintuitive. Some methods compute localized 3DMMs but those focus on facial animation instead of face modeling. We compared our approach to previous works relying on facial animation and saw that their automatic localized basis construction works well for animation purposes (considering a data set composed of animations for a single person), but performs worse than our approach for modeling purposes (considering a data set made of neutral faces from different persons).
+
+We propose an approach to construct realistic 3DMMs. We increase the controllability of our faces by segmenting them into independent sub-regions and selecting the most dominant eigenvectors per part. Furthermore, we rely on facial anthropometric measurements to derive useful controls for editing faces with our 3DMM. We propose a measurement selection technique to bind the essential measurements to the 3DMM eigenvectors. Our approach allows the user to edit faces by adjusting the facial parts using sliders controlling the values of anthropometric measurements. The measurements are mapped to eigenvector weights, allowing us to compute the individual parts matching the values selected by the user. Finally, the reconstructed parts are seamlessly blended together to generate the desired 3D face.
+
+*e-mail: donya.ghafourzadeh@ubisoft.com
+
+${}^{ \dagger }$ e-mail: cyrus.rahgoshay@ubisoft.com
+
+${}^{ \ddagger }$e-mail: sahel.fallahdoust@ubisoft.com
+
+${}^{§}$e-mail: andre.beauchamp@ubisoft.com
+
+${}^{¶}$e-mail: adeline.aubame@ubisoft.com
+
+${}^{\parallel}$e-mail: tiberiu.popa@concordia.ca
+
+**e-mail: eric.paquette@etsmtl.ca
+
+Graphics Interface Conference 2020, 28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+§ 2 RELATED WORK
+
+3D morphable models are powerful statistical models widely used in many applications in Computer Vision and Computer Graphics. One of the most well-known previous works in this regard is that by Blanz and Vetter [3]. Their pioneering work proposes a model using PCA from face scans. Although they propose a multi-segment model and decompose a face into four parts to augment expressiveness, the PCA decomposition is computed globally on the whole face. Other global PCA methods have been proposed [1, 4, 5, 8, 17, 18]. A downside of global PCA-based methods is that they exhibit global support: when we adjust the eye, the nose may also undergo undesirable changes. Another downside is a lack of intuitive user control for face editing. While the eigenvectors are good at extracting the dominant modes of variation of the data, they provide little intuitive interpretation.
+
+To address the former problem, local models have been proposed. They segment the face into independent sub-regions and select the most dominant eigenvectors per part. Tena et al. [26] propose a method to create localized clustered PCA models for animation. They select the location of the basis using spectral clustering on the geodesic distance and a correlation of vertex displacement considering variations in the expressions. Their method requires a manual step to adjust the boundaries of the segments, making it somewhat similar to ours, where the parts are user-specified. Chi et al. [9] adaptively segment the face model into soft regions based on user interaction and coherency coefficients. Afterwards, they estimate the blending weights that satisfy the user constraints, as well as the spatio-temporal properties of the face set. Here too, the required user intervention renders the segmentation somewhat similar to our user-provided segments. SPLOCS [19] uses sparse matrix decomposition to produce localized deformation components from an animated mesh sequence. It uses vertex displacements in Euclidean coordinates to select the basis in a greedy fashion. We noticed that when considering variation in identity instead of variation in expression, the greedy selection leads to bases that are far less local than those obtained from both our method and Tena et al.'s [26]. These papers address facial animation instead of face modeling, and therefore assume large, yet localized, deformations caused by facial expressions, which are different from our context, where each face is globally significantly different from the others.
+
+Like Tena et al. [26], Cao et al. [7] segment the face with the same spectral clustering, followed by manual adjustment. While their method focuses mostly on expression, they also provide some identity modeling, as they rely on the FaceWarehouse [8] global model, which they decompose using the segments defined by spectral clustering. In their case, the goal is to adapt a 3DMM to a face from a video feed, in real time. While their method works remarkably well for the real-time "virtual makeup" application, it lags behind ours in terms of providing a very detailed facial model, and it does not support a face editing workflow.
+
+Other papers supplement decomposition approaches with the extraction of fine details, allowing the reconstruction of a faithful facial model [6, 13, 22]. The major problem with these approaches is that they work for a specific person and do not provide editing capabilities. The Phace [14] method allows the user to edit fat or muscle maps in texture space on the face. While this provides a physically-based adjustment, the control is implicit: the user modifies the texture, and the system then simulates muscles and fat to obtain the result.
+
+Wu et al. [27] propose an anatomically-constrained local deformation model to improve the fidelity of monocular facial animation. Their model uses 1000 overlapping parts, and then decouples the rigid pose of the part from its non-rigid deformation. While this approach works particularly well for reconstruction, the parts are too small for editing semantic face parts such as the nose or the eyes.
+
+Contrary to the methods described thus far, the Allen et al. [1] and BodyTalk [25] methods greatly facilitate editing by mapping intuitive features to modifications of global 3DMM eigenvector weights. In particular, BodyTalk [25] relates transformations of the meshes to keywords such as "fit" and "sturdy". While the mapping between the words and the deformations is not perfect, it still makes it reasonably intuitive to edit the mesh of the body. One problem with this method is that it provides words for bodies, not faces. A second major problem is the inability to make local adjustments: an adjustment that increases the length of the legs will also change other regions such as the torso and arms. In contrast, our approach aims at providing local control during editing.
+
+A downside of global PCA-based methods is that they exhibit global support: adjusting parameters to change one part has unwanted effects on other unrelated parts. To address this problem, we segment the face into independent sub-regions and provide a process to select the best set of eigenvectors, given a target number of eigenvectors. Methods that segment the face in sub-regions target facial animation instead of modeling. We will demonstrate that our approach is better-suited to the task of face editing than these methods. Another problem with most of the previous related works is that they do not allow facial model editing through the adjustment of objective measurements. In contrast, our method relies on anthropometric measurements used as controls for editing. Furthermore, we propose a process to select the right set of anthropometric measurements for each facial part.
+
+§ 3 OVERVIEW
+
+In this paper, we introduce a pipeline for constructing a 3DMM. We separate the face into regions and compute independent PCA decomposition on each region. We then combine the per-region 3DMMs, paying particular attention to the selection of the most dominant eigenvectors across the eigenvectors of the different regions. While the eigenvectors are good at extracting the dominant data variation modes, they provide weak intuitive interpretation. We thus use anthropometric measurements to provide human understandable adjustments of the face. The reconstruction from the measurements is done through a mapping from the measurements to the weights that need to be applied to each eigenvector. From the set of measurements we extracted from our survey of the literature, we selected a subset which resulted in the least reconstruction error. An overview of our approach can be found in Fig. 1.
+
+The remainder of this paper is organized as follows: Sec. 4 describes how 3DMMs are constructed, including face decomposition and selection of the most dominant eigenvectors. Afterwards, we discuss how to reconstruct a face by smooth blending of different facial parts (Sec. 5). In Sec. 6, the selection of the anthropometric measurements, and the mapping between these measurements and the PCA eigenvectors are discussed. We demonstrate the results in Sec. 7, and discuss them in Sec. 8.
+
+§ 4 3D MORPHABLE FACE MODEL
+
+We employ PCA on a data set of faces to construct our 3DMMs. All faces are assumed to share a common mesh topology, with vertices in semantic correspondence. We propose to segment the face into different parts in order to focus the decomposition on a part-by-part basis instead of computing the PCA decomposition on the whole face. We compute the decomposition separately for the male and female subsets. As shown in Fig. 1, we decompose the face into five parts: eyes, nose, mouth, ears, and what we refer to as the facial mask (which groups the remaining areas such as cheeks, jaws, forehead, and chin). We further discuss this design choice in Sec. 8.2. This face decomposition allows us to have eigenvectors for each part. The geometry of the facial parts is represented with a shape-vector $S_d = [V_1 \ldots V_{n_v}] \in \mathbb{R}^{3 n_v}$, where $n_v$ is the number of vertices of the $d$th facial part, $d \in \{1, \ldots, 5\}$, and $V_i = [x_i \; y_i \; z_i] \in \mathbb{R}^3$ defines the $x$, $y$, and $z$ coordinates of the $i$th vertex. After applying PCA, each facial part $d$ is reconstructed as:
+
+$$
+{S}_{d}^{\prime } = \overline{{S}_{d}} + \mathop{\sum }\limits_{{j = 1}}^{{n}_{e}}{P}_{j}{b}_{j} \tag{1}
+$$
+
+where $\overline{S_d}$ is the mean shape of the $d$th facial part, $n_e$ is its number of eigenvectors, $P_j$ is an eigenvector of size $3 n_v$, $b$ is an $n_e \times 1$ vector containing the weights of the corresponding eigenvectors, and $S_d'$ is the reconstruction, which is an approximation when not all eigenvectors are used.
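The per-part decomposition and the reconstruction of Eq. 1 can be sketched with NumPy as follows. This is a minimal illustration, assuming each part's shape-vectors are stacked row-wise; the function names are ours, not from the paper:

```python
import numpy as np

def fit_part_pca(shapes):
    """Fit a PCA basis to one facial part.

    shapes: (n_s, 3*n_v) array, each row a flattened shape-vector S_d.
    Returns the mean shape and eigenvectors sorted by decreasing eigenvalue.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data: the rows of vt are the PCA eigenvectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt  # vt[j] is eigenvector P_j

def reconstruct_part(mean, eigvecs, b):
    """Eq. 1: S'_d = mean + sum_j P_j * b_j, using the first len(b) eigenvectors."""
    return mean + eigvecs[: len(b)].T @ b
```

With all eigenvectors, projecting a training shape and reconstructing it recovers the shape exactly; truncating `b` gives the approximation discussed above.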
+
+Our approach selects the smallest set of eigenvectors that still reconstructs the shape accurately. We accomplish this by incrementally adding the eigenvectors, in the order of their significance, to the reconstruction until a certain accuracy is met. Even though we rely on the eigenvalues to sort the eigenvectors for each part (largest to smallest eigenvalue), we provide the user with a measurable error (in $\mathrm{{mm}}$ ), which is more precise than relying solely on eigenvalues across different parts. We determine the best set of eigenvectors to achieve a balance between the quality of the per-part reconstruction and the whole face reconstruction. To evaluate the accuracy of our selection, we construct the facial parts (Eq. 1) and blend them together (Sec. 5) to generate the whole face. Afterwards, we assess the accuracy of the reconstruction by calculating the average of the geometric error ${D}_{GE}$ between the ground truth and the blended face. We first do a rigid alignment step (rotation and translation) between the facial parts of the ground truth and the blended result. We then record the average per-vertex Euclidean distance over all vertices and per part:
+
+$$
+{D}_{G{E}_{all}}\left( {S}^{\prime }\right) = \frac{1}{{n}_{all}}\mathop{\sum }\limits_{{d = 1}}^{5}\mathop{\sum }\limits_{\substack{{{V}_{j}^{\prime } \in {S}_{d}^{\prime }} \\ {{V}_{j} \in {S}_{d}} }}\begin{Vmatrix}{{V}_{j} - \left( {{R}_{d}{V}_{j}^{\prime } + {T}_{d}}\right) }\end{Vmatrix} \tag{2}
+$$
+
+$$
+{D}_{G{E}_{\text{ part }}}\left( {S}^{\prime }\right) = \frac{1}{5}\mathop{\sum }\limits_{{d = 1}}^{5}\frac{1}{{n}_{d}}\mathop{\sum }\limits_{\substack{{{V}_{j}^{\prime } \in {S}_{d}^{\prime }} \\ {{V}_{j} \in {S}_{d}} }}\begin{Vmatrix}{{V}_{j} - \left( {{R}_{d}{V}_{j}^{\prime } + {T}_{d}}\right) }\end{Vmatrix} \tag{3}
+$$
+
+$$
+{D}_{GE}\left( {S}^{\prime }\right) = \frac{{D}_{G{E}_{all}}\left( {S}^{\prime }\right) + {D}_{G{E}_{part}}\left( {S}^{\prime }\right) }{2}, \tag{4}
+$$
+
+where $n_{all}$ is the number of vertices of the face mesh, $n_d$ is the number of vertices for part $d$, $V_j$ is on the ground truth, and $V_j'$ is the corresponding point on the blended face. We compute averages over all vertices and per part to ensure that parts with more vertices do not end up using most of the eigenvector budget at the expense of parts with fewer vertices. We do so for the entire data set and for a set of 19 validation faces that were not part of the training data set. We compute the median data set error as well as the median validation error, and we average the two into a global error. The parts of the validation faces are reconstructed by projecting each part onto the corresponding eigenvector basis (followed by the blending process).
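A minimal sketch of the error evaluation of Eqs. 2-4, assuming the per-part rigid alignment is computed with the standard SVD-based (Kabsch) method; the function names are illustrative:

```python
import numpy as np

def rigid_align(src, dst):
    """Best rotation R and translation T mapping src -> dst (Kabsch method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

def geometric_error(parts_gt, parts_rec):
    """Eqs. 2-4: D_GE averaged over all vertices (Eq. 2) and per part (Eq. 3),
    after rigidly aligning each reconstructed part to its ground truth."""
    dists = []
    for gt, rec in zip(parts_gt, parts_rec):
        r, t = rigid_align(rec, gt)
        dists.append(np.linalg.norm(gt - (rec @ r.T + t), axis=1))
    d_all = np.concatenate(dists).mean()            # Eq. 2
    d_part = np.mean([d.mean() for d in dists])     # Eq. 3
    return 0.5 * (d_all + d_part)                   # Eq. 4
```

Aligning before measuring ensures the error reflects shape differences rather than pose differences.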
+
+At each step of our incremental eigenvector selection, we decide which of the five parts will get a new eigenvector added to its set. We compare the geometric errors resulting from each of the five candidate eigenvectors, and we select a candidate eigenvector with a large impact on decreasing the error. When eigenvectors from multiple parts result in similar decreases in error, instead of systematically picking the eigenvector with the lowest error, we select by sampling from a discrete probability density function (PDF) created from the respective decreases in error of the five candidate eigenvectors. This PDF selection process creates a more even distribution of eigenvectors across the parts while maintaining a low error. As we iterate, the reconstruction error decreases. For the female and male data set faces, the average reconstruction errors are 2.00 and 2.13 mm when considering zero eigenvectors. The errors decrease to 0.75 and 0.74 mm after 80 iterations, and when considering all eigenvectors the errors are 0 mm. We chose an error threshold of 1 mm, which balances the cost of considering too many eigenvectors against the accuracy of the reconstruction. Table 1 shows the resulting eigenvector distribution after achieving our 1 mm reconstruction accuracy. We experimented with reconstructing the female and male validation faces using our subset of eigenvectors. The median reconstruction errors are 1.33 and 1.48 mm, respectively.
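The PDF-based selection step could be sketched as follows, with a placeholder `error_fn` standing in for the full reconstruct-blend-measure pipeline of Secs. 4 and 5 (all names are illustrative):

```python
import numpy as np

def select_eigenvectors(error_fn, n_parts=5, target_mm=1.0, max_iter=200, seed=0):
    """Incremental per-part eigenvector selection (sketch).

    error_fn(counts) -> reconstruction error (mm) when part d uses counts[d]
    eigenvectors. The part receiving one more eigenvector is sampled from a
    PDF built from the per-part error decreases.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_parts, dtype=int)
    err = error_fn(counts)
    step = np.eye(n_parts, dtype=int)
    for _ in range(max_iter):
        if err <= target_mm:
            break
        cand_err = np.array([error_fn(counts + step[d]) for d in range(n_parts)])
        gains = np.maximum(err - cand_err, 0.0)
        if gains.sum() == 0.0:
            d = int(np.argmin(cand_err))   # no candidate improves: take the best anyway
        else:
            d = int(rng.choice(n_parts, p=gains / gains.sum()))
        counts[d] += 1
        err = cand_err[d]
    return counts, err
```

Sampling proportionally to the error decrease, rather than always taking the argmin, is what spreads the eigenvector budget more evenly across the five parts.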
+
+Figure 2: Regions highlighted in green and blue contain the transition and fixed zones, respectively.
+
+Table 1: Number of eigenvectors selected for each part
+
+| Facial part | #Eigenvectors for female | #Eigenvectors for male |
+| --- | --- | --- |
+| Facial mask | 7 | 9 |
+| Eye | 10 | 5 |
+| Nose | 6 | 6 |
+| Mouth | 9 | 10 |
+| Ear | 14 | 16 |
+
+§ 5 FACE CONSTRUCTION THROUGH PARTS BLENDING
+
+This step focuses on the problem of constructing a realistic new face by blending the five segmented parts together. As opposed to methods such as those of Tena et al. [26] and Cao et al. [7], which handle the transition between the parts by relying on the adjustment of a single strip of vertices, we spread the transition across three strips of vertices. In contrast to other methods that adjust the transition by vertex averaging [7] or least-squares fitting [26], we use Laplacian blending [24] of the parts and the transition, resulting in a smooth, yet faithful global surface. The vertex positions are solved by an energy minimization which reduces the surface curvature discontinuities at the junction between the parts while maintaining the desired surface curvature. To this end, we define a transition zone made of quadrilateral strips around the parts. In our experiments, a band of two quadrilaterals (three rings of green vertices in Fig. 2) provides good results. We interpolate the Laplacian $\mathcal{L}$ (the cotangent weights) of the five facial parts weighted by ${\beta }_{d}$ , which has values of ${\beta }_{d} = 1$ inside the part, ${\beta }_{d} \in \{ {0.75},{0.5},{0.25}\}$ going outward of the part in the transition zone, and ${\beta }_{d} = 0$ elsewhere. We normalize these weights such that they sum to one for each vertex. These soft constraints allow some leeway in the transition zone. The boundary conditions of our system are set to the ring of blue vertices in Fig. 2, and we solve for the remaining vertices. To this end, we minimize the following energy function:
+
+$$
+E\left( {V}^{\prime }\right) = \mathop{\sum }\limits_{{i \in \text{ inner }}}{\begin{Vmatrix}{T}_{i}\mathcal{L}\left( {V}_{i}^{\prime }\right) - \frac{\mathop{\sum }\limits_{{d = 1}}^{5}{\beta }_{i,d}{R}_{d}\mathcal{L}\left( {V}_{i,d}\right) }{\mathop{\sum }\limits_{{b = 1}}^{5}{\beta }_{i,b}}\end{Vmatrix}}^{2}, \tag{5}
+$$
+
+Figure 3: Graph showing the evolution of the Frobenius norm of the rotation between two consecutive iterations (averaged across the five rotations $R_d$). For each of the meshes 1 to 4, we begin with the average parts and change the weight of one eigenvector per part. Each eigenvector is selected randomly (from the first ten eigenvectors if the part has more than ten). The new value for the weight is also randomly selected within the range of $-2$ to $+2$ times the standard deviation for this eigenvector.
+
+Figure 4: (a) and (b) are the generated parts. As in Fig. 3, we modified each average part by changing the weight of one eigenvector selected randomly. The new value for the weight is also randomly selected. (c) shows the result of blending the facial parts.
+
+where "inner" is the set of vertices of the five parts, excluding the vertices of the boundary conditions; $T_i$ is an appropriate transformation for vertex $V_i'$ based on the eventual new configuration of vertices $V_i$; and $R_d$ is the rotation of part $d$.
+
+We solve Eq. 5 in a similar fashion to ARAP [23], alternating between solving for the vertex positions and the rotation matrices until the change is small. Fig. 3 shows that the rotation quickly converges, as the Frobenius norm of consecutive rotations is large only for the first few iterations. Given our experiments, we decided to stop iterating when the Frobenius norm fell below 0.01, or after 6 iterations. Fig. 4 shows an example of a blended face. In this case, the Frobenius norm was below 0.01 after five iterations. Fig. 5 shows the evolution of the geometric error $D_{GE}$ between the ground truth parts and their blended counterparts for the example of Fig. 4. As can be seen, the error quickly reaches a plateau as the rotation stabilizes.
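The alternation and its stopping criterion can be sketched as below; `solve_positions` and `solve_rotations` are placeholders for the actual linear solve of Eq. 5 and the per-part rotation fit, which we do not reproduce here:

```python
import numpy as np

def alternate_solve(solve_positions, solve_rotations, r_init, max_iter=6, tol=0.01):
    """ARAP-style alternation (sketch of Sec. 5): alternate between solving the
    vertex positions (Eq. 5) and the per-part rotations, stopping once the mean
    Frobenius norm of the rotation change drops below `tol`, or after
    `max_iter` iterations."""
    rotations = r_init
    verts = None
    for _ in range(max_iter):
        verts = solve_positions(rotations)
        new_rotations = solve_rotations(verts)
        # Mean Frobenius norm of the change across the five part rotations.
        delta = np.mean([np.linalg.norm(rn - ro)
                         for rn, ro in zip(new_rotations, rotations)])
        rotations = new_rotations
        if delta < tol:
            break
    return verts, rotations
```

The defaults (`tol=0.01`, `max_iter=6`) mirror the stopping values reported in the text.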
+
+§ 6 SYNTHESIZING FACES FROM ANTHROPOMETRIC MEASUREMENTS
+
+PCA eigenvectors characterize the data variation space, but do not provide a clear intuitive interpretation. In this paper, we focus mainly on constructing linear regression models from data using a set of intuitive facial anthropometric measurements. Facial anthropometric measurements provide a quantitative description by means of measurements taken between specific surface landmarks defined with respect to anatomical features. We use the 33 parameters listed in Table 2. Each measurement corresponds to either a Euclidean distance or a ratio of Euclidean distances between surface positions, as specified in each paper cited in Table 2. In this section, we propose a measurement selection technique which assesses the accuracy of each measurement, resulting in the most relevant ones for each facial part.
+
+Figure 5: Graph comparing the average geometric error (in $\mathrm{{mm}}$ ) between the ground truth parts and their blended counterparts, for different numbers of iterations.
+
+Table 2: Anatomical terms and corresponding abbreviations of our selected and discarded measurements.
+
+Selected measurements:
+
+| Anatomical term | Abbrev. | Ref. |
+| --- | --- | --- |
+| Nasal Width ÷ Root Width | NWRW | [10] |
+| Nasal Width ÷ Length of Bridge | NWLB | [10] |
+| Nasal Width ÷ Width of Nostril | NWWN | [10] |
+| Nasal Root Width ÷ Tip Protrusion | NRTP | [10] |
+| Length of Nasal Bridge ÷ Tip Protrusion | NBTP | [10] |
+| Nasal Width ÷ Tip Protrusion | NWTP | [10] |
+| Nasal Root Width ÷ Length of Bridge | NRLB | [10] |
+| Nasal Root Width ÷ Width of Nostril | NRWN | [10] |
+| Length of Nasal Bridge ÷ Width of Nostril | NBWN | [10] |
+| Width of Nose ÷ Tip Protrusion | WNTP | [10] |
+| Philtrum Width | PW | [11] |
+| Face Height | FH | [12] |
+| Orbits Intercanthal Width | OIW | [12] |
+| Orbits Fissure Length | OFL | [12] |
+| Orbits Biocular Width | OBW | [12] |
+| Nose Height | NH | [12] |
+| Face Width | FW | [15] |
+| Bitragion Width | BW | [15] |
+| Ear Height | EH | [15] |
+| Bigonial Breadth | B | [16] |
+| Bizygomatic Breadth | BB | [16] |
+| Facial Index | F | [21] |
+| Nasal Index | N | [21] |
+| Mouth-Face Width Index | MFW | [21] |
+| Biocular Width-Total Face Height Index | BWFH | [21] |
+| Lip Length | LL | [29] |
+| Maximum Frontal Breadth | Max FB | [29] |
+| Interpupillary Distance | ID | [29] |
+| Nose Protrusion | NP | [29] |
+| Nose Length | NL | [29] |
+| Nose Breadth | NB | [29] |
+
+Discarded measurements:
+
+| Anatomical term | Abbrev. | Ref. |
+| --- | --- | --- |
+| Eye Fissure Index | EF | [21] |
+| Minimum Frontal Breadth | Min FB | [29] |
+
+§ 6.1 MAPPING METHOD
+
+We evaluate the measurements on the facial parts of the data set, yielding ${f}_{d,i} = \left\lbrack {{f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}}\right\rbrack$ for the $d$ th facial part of scan ${S}_{i}$ considering ${n}_{m}$ measures. The measures for all of the scans are combined into an ${n}_{m} \times {n}_{s}$ matrix, ${F}_{d} = \left\lbrack {{f}_{d,1}^{T}\ldots {f}_{d,{n}_{s}}^{T}}\right\rbrack$ , where ${n}_{s}$ is the number of scans. We learn how to adjust the weights of the PCA eigenvectors to reconstruct faces having specific characteristics corresponding to the measures. We adopt the general method of Allen et al. [1]. However, while that method learns a global mapping that adjusts the whole body, we will learn per-part local mappings. Furthermore, in Sec. 6.2 we will derive a process to select the best measures out of the set of all measures $\left\lbrack {{f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}}\right\rbrack$ , and will proceed independently for each of the five parts.
+
+We relate the measures to the PCA weights by learning a linear mapping. With the $n_m$ measures for the $d$th facial part, the mapping is represented as an $n_e \times (n_m + 1)$ matrix, $M_d$:
+
+$$
+{M}_{d}{\left\lbrack {f}_{{i}_{1}}\ldots {f}_{{i}_{{n}_{m}}}1\right\rbrack }^{T} = b, \tag{6}
+$$
+
+where $b$ is the corresponding eigenvector weight vector. Collecting the measurements for the whole data set, the mapping matrix is solved as:
+
+$$
+{M}_{d} = {B}_{d}{F}_{d}^{ + }, \tag{7}
+$$
+
+where $B_d$ is an $n_e \times n_s$ matrix containing the corresponding eigenvector weights of the related facial part, and $F_d^+$ is the pseudoinverse of $F_d$. As in Eq. 6, a row of ones is appended to the measurement matrix $F_d$ to account for the y-intercepts of the regression.
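Eqs. 6 and 7 amount to an ordinary least-squares regression from measurements to eigenvector weights, which can be sketched with NumPy as follows. The function names are illustrative, and we assume measurements and weights are stored with one column per scan:

```python
import numpy as np

def learn_mapping(measures, weights):
    """Eq. 7: M_d = B_d F_d^+, with a row of ones appended to the measurement
    matrix for the regression intercept.

    measures: (n_m, n_s) matrix F_d of per-scan measurement values.
    weights:  (n_e, n_s) matrix B_d of per-scan eigenvector weights.
    """
    f = np.vstack([measures, np.ones(measures.shape[1])])  # (n_m + 1, n_s)
    return weights @ np.linalg.pinv(f)                     # (n_e, n_m + 1)

def predict_weights(mapping, f_new):
    """Eq. 6: b = M_d [f_1 ... f_nm 1]^T for a new set of measurements."""
    return mapping @ np.append(f_new, 1.0)
```

The predicted weight vector `b` is then plugged into Eq. 8 to synthesize the facial part.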
+
+To construct a new facial part based on specific measurements, we use $b$ in Eq. 1, as follows:
+
+$$
+{S}_{d}^{\prime } = \overline{{S}_{d}} + {Pb}, \tag{8}
+$$
+
+where $\overline{{S}_{d}}$ is the mean shape of the $d$ th facial part and $P$ is the matrix containing the eigenvectors. Moreover, we can define delta-feature vectors of the form:
+
+$$
+\Delta {f}_{d} = {\left\lbrack \Delta {f}_{1}\ldots \Delta {f}_{{n}_{m}}0\right\rbrack }^{T}, \tag{9}
+$$
+
+where each $\Delta f$ contains the user-prescribed differences in measurement values. Afterwards, by adding $\Delta b = M_d \Delta f_d$ to the related eigenvector weights, it is possible to adjust a measurement, for example to make a face slimmer or fatter.
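A minimal sketch of this delta-feature edit; the intercept slot of Eq. 9 is set to zero so that only the prescribed measurement changes act on the weights (function name is ours):

```python
import numpy as np

def edit_part(b, mapping, deltas):
    """Eq. 9: update the eigenvector weights with user-prescribed measurement
    changes, b' = b + M_d [df_1 ... df_nm 0]^T.

    mapping: (n_e, n_m + 1) matrix M_d; deltas: length-n_m change vector.
    """
    delta_f = np.append(deltas, 0.0)   # zero in the intercept slot
    return b + mapping @ delta_f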
+
+§ 6.2 MEASUREMENT SELECTION
+
+We propose a novel technique for automatically detecting the most effective and relevant anthropometric measurements. Some might be redundant with respect to others, some might not make sense for a specific part (e.g., the "Ear Height" might not be relevant for the mouth), and some might even lead to mapping matrices that generate worse results. During our investigations, we discovered that considering more anthropometric measurements does not necessarily lead to a lower average reconstruction error. Fig. 6 illustrates that using all the measurements yields a higher error than our selected combination. To aggregate the error for a given part, we reconstruct the face relying only on the anthropometric measurements of the selected part, and then calculate the average error, as in Sec. 4. Fig. 7 shows two odd-looking examples obtained from using all the measurements of the nose (a) and facial mask (b). We thus evaluate the set of relevant measurements separately for each part. We begin with an empty set of selected measurements, and we iteratively test which measurement we should add to the set by evaluating the quality of the reconstructed faces when creating the mapping matrix, considering the currently selected measurements together with the candidate measurement. We reconstruct a face using the mapping matrix (Eqs. 6 and 8) based only on its measurement values. The reconstructed face is considered as a prediction, and thus we evaluate the prediction quality in a fashion very similar to that used for eigenvector selection, by reconstructing all of the faces found in the data set of facial scans, as well as the 19 validation faces.
+
+Figure 6: Using all measurements leads to higher reconstruction errors (mm) as compared to our set of selected measurements on the data set and validation faces.
+
+Figure 7: Using all the semantic measurements of the nose (a) and facial mask (b) often leads to odd-looking parts when editing through the adjustment of measurement values.
+
+Each candidate measurement is used together with the current set of selected measurements, and we compute the candidate mapping matrix from this set of measurements. We use the mapping matrix with the data set and validation faces, and reconstruct all of the instances of the part under consideration (e.g., all of the mouths). We then evaluate a geometric error, ${D}_{GE}$ , with the per-vertex distance between each predicted instance and its corresponding ground truth instance. The distance is calculated after a rigid alignment of the predicted instance to the ground truth instance is performed. We can thus ensure that we are evaluating the fidelity of the shape, and not its pose. If one or a few faces result in a large error, this could lead to the rejection of a measurement, which might still be beneficial for the prediction of most faces. To avoid this, we also measure the percentage ${D}_{NI}$ of faces for which an error improvement is seen. We count the number of faces whose geometric errors have been decreased by considering the candidate measurement. We then normalize ${D}_{GE}$ and ${D}_{NI}$ to the $\left\lbrack {0,1}\right\rbrack$ range and combine them into a single reconstruction quality measure:
+
+$$
+\text{ quality } = \text{ normalize }\left( {D}_{NI}\right) + 1 - \text{ normalize }\left( {D}_{GE}\right) \text{ . } \tag{10}
+$$
+
+Considering the combined geometric error and percentage of improvement of all candidate measurements, we pick the one which will be added to the set of selected measurements. We stop adding measurements when we observe an increase of $D_{GE}$ together with a value of $D_{NI}$ below 50%. We repeat this process for each part (eyes, nose, mouth, etc.).
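The greedy selection loop with the quality score of Eq. 10 might look as follows; `eval_fn` is a stand-in for the reconstruct-and-compare evaluation described above, and all names are illustrative:

```python
import numpy as np

def normalize(x):
    """Scale values to [0, 1]; a constant input maps to zeros."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def select_measurements(candidates, eval_fn, stop_dni=0.5):
    """Greedy measurement selection (sketch of Sec. 6.2).

    eval_fn(selected) -> (D_GE, D_NI): geometric error and fraction of faces
    improved when building the mapping from `selected`. Candidates are scored
    with Eq. 10; we stop once D_GE rises while D_NI falls below `stop_dni`.
    """
    selected, prev_ge = [], np.inf
    remaining = list(candidates)
    while remaining:
        scores = [eval_fn(selected + [m]) for m in remaining]
        ge = np.array([s[0] for s in scores])
        ni = np.array([s[1] for s in scores])
        quality = normalize(ni) + 1.0 - normalize(ge)     # Eq. 10
        best = int(np.argmax(quality))
        if ge[best] > prev_ge and ni[best] < stop_dni:    # stopping criterion
            break
        prev_ge = ge[best]
        selected.append(remaining.pop(best))
    return selected
```

Tracking $D_{NI}$ alongside $D_{GE}$ is what prevents a few badly reconstructed faces from vetoing a measurement that helps most faces.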
+
+The selected anthropometric measurements are enumerated in Table 3. The description of each measurement, as well as the reference to the literature from which we obtained it, are shown in Table 2, where we also list the measurements we rejected (measurements which were never selected for any of the segments).
+
+Table 3: Combinations of anthropometric measurements
+
+For female:
+
+| Part | Selected measures |
+| --- | --- |
+| Facial mask | B, BB, BW, BWFH, FH, MaxFB, NBWN, NH, NP, PW, WNTP |
+| Eye | B, BB, BW, BWFH, EH, F, FW, ID, LL, NB, NBTP, NH, NRLB, NRTP, NWRW, NWWN, OBW, OFL, OIW, PW |
+| Nose | EH, LL, N, NB, NBTP, NBWN, NH, NL, NP, NWTP, NWWN, PW |
+| Mouth | BWFH, F, LL, MFW, NP, NRWN, NWTP, PW |
+| Ear | EH, FH, FW, MaxFB, NB, NBTP, NBWN, NP, NRLB, NRTP, NWLB, NWTP, OBW, PW, WNTP |
+
+For male:
+
+| Part | Selected measures |
+| --- | --- |
+| Facial mask | BW, BWFH, F, FH, FW, MaxFB, NRWN, OBW, OFL, PW |
+| Eye | B, BB, BW, F, FW, ID, N, NBTP, NBWN, OBW, OFL, PW |
+| Nose | BB, FW, ID, LL, MaxFB, NB, NBWN, NH, NP, NRLB, NRTP, NWLB, NWWN, OBW, OIW |
+| Mouth | B, BW, BWFH, F, LL, NBTP, NH, PW |
+| Ear | B, BB, BWFH, EH, FH, MaxFB, N, NRTP, NWRW, OBW, OIW |
+
+§ 6.3 CORRELATION BETWEEN MEASUREMENTS
+
+Defining the correlation between the measurements is important for the adjustment of faces. Accordingly, if the user adjusts one measurement, the system automatically calculates the adjustment of the other measurements as well. This greatly helps to create realistic faces by maintaining the correlations observed in the data set. Similarly to BodyTalk [25], we use Pearson's correlation coefficient on $F$ to evaluate the relationship between the anthropometric measurements. Considering a facial part $d$, the Pearson's correlation coefficient $\operatorname{Cor}_{jk}$ for measurements $j$ and $k$ is expressed as:
+
+$$
+{\operatorname{Cor}}_{jk} = \frac{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}\left( {{f}_{ij} - \overline{{f}_{j}}}\right) \left( {{f}_{ik} - \overline{{f}_{k}}}\right) }{\sqrt{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}{\left( {f}_{ij} - \overline{{f}_{j}}\right) }^{2}}\sqrt{\mathop{\sum }\limits_{{i = 1}}^{{n}_{s}}{\left( {f}_{ik} - \overline{{f}_{k}}\right) }^{2}}}, \tag{11}
+$$
+
+where $f_{ij}, f_{ik} \in f_{d,i}$ are measurements of scan $S_i$, $\overline{f_j}$ and $\overline{f_k}$ are the mean values of measurements $j$ and $k$ respectively, and $n_s$ is the number of scans. The coefficient is a value between -1 and 1 that represents the correlation. When adjusting measurement $k$ by $\Delta f_k$, we obtain the change in the other measures as $\Delta f_j = \operatorname{Cor}_{jk} \Delta f_k$. Accordingly, we can evaluate the influence of one measurement on the others, condition on one or more measurements, and generate the most likely values of the other measurements.
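Since the rows of $F$ hold the measurements, Eq. 11 is the row-wise correlation that `np.corrcoef` computes; a sketch of the correlation-based propagation (function names are ours):

```python
import numpy as np

def measurement_correlations(F):
    """Eq. 11: Pearson correlation between measurements.

    F: (n_m, n_s) matrix of per-scan measurement values (rows = measurements,
    which matches np.corrcoef's default rowvar=True convention).
    """
    return np.corrcoef(F)

def propagate_edit(F, k, delta_fk):
    """When the user adjusts measurement k by delta_fk, adjust every other
    measurement j by Cor_jk * delta_fk to stay consistent with the data set."""
    cor = measurement_correlations(F)
    return cor[:, k] * delta_fk
```

Perfectly correlated measurements move by the full amount, anti-correlated ones by the opposite amount, and uncorrelated ones stay put.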
+
+§ 7 RESULTS
+
+Compared to global 3DMM methods that compute one set of eigenvectors for the whole face, our 3DMM computes a set of eigenvectors for each part. This is at the root of one of the advantages of our approach: its ability to locally adjust faces. We compare our approach to other methods that rely on local 3DMM. We created mapping matrices (Eq. 7) for global 3DMMs, SPLOCS [19], clustered PCA [26], as well as our part-based 3DMMs, and tested the adjustment of measurements with these models. We used 46 eigenvectors for global 3DMM, SPLOCS, and our part-based 3DMM. For clustered PCA, we first tested using 13 clusters, as is reported in the paper, but found that this leads to a non-symmetrical result (Fig. 8b). By checking other clusterings, we selected 12 clusters (Fig. 8a). Because a clustered PCA does not allow for a different number of eigenvectors for each cluster, and to avoid having too few eigenvectors per part, we used 46 eigenvectors for each cluster (selecting the 46 with the largest eigenvalues).
+
+Figure 8: Automatic part identification of clustered PCA [26]. Note how the automatic clustering leads to non-symmetrical clusters (left eye with one cluster vs. right eye with two clusters) for 13 clusters and required us to manually check which other clustering would be usable.
+
+To compare our approach and the use of measurements with other methods, we decided on a way to use our measurements with SPLOCS and clustered PCA. We further demonstrate that with SPLOCS and our approach, we can have more local or more global measurement control. For our approach, Table 3 shows that some measures influence more than one part. For example, the "Lip Length" is found in the lists for both the mouth and the nose. When a measurement is shared between different facial parts, our method allows the user to choose between more localized changes, by adjusting the measure for only one part, and more coherence across the parts, by adjusting all of the parts involved in the measurement. When comparing with SPLOCS, we can also balance between local and global measurements. Each measurement is based on computations involving specific measurement vertices (such as the corner of the mouth and the tip of the nose). To enforce locality, when considering a measurement, we check which SPLOCS "eigenvectors" induce significant movement at the related measurement vertices. We compute this by checking whether the eigenvector displacement at a measurement vertex is large enough compared to the maximum displacement of the eigenvector (we check if it is larger than 1% of the maximum displacement over all vertices of the eigenvector). A SPLOCS eigenvector is considered for a measurement only if it meets the criterion for one of the measurement vertices of that measurement. To enforce more globality with SPLOCS, we use the mapping matrices for all of the eigenvectors. Fig. 9 shows an example of the globality and locality of the influence of adjusting the "Lip Length". It compares global PCA eigenvectors, local measurement and global measurement SPLOCS, clustered PCA, and our local measurement and global measurement approaches. The color coding shows the per-vertex Euclidean distance. Note that the colors do not represent errors, but rather vertex movements. Thus, the goal is to have warmer colors around the location where the editing is intended, and colder colors in unrelated regions. Our method allows a global measurement influence by adjusting the measure for both the nose and the mouth parts, as well as more localized changes by adjusting only the mouth (Fig. 9c-9f). Contrary to our approach, both global measurement and local measurement SPLOCS resulted in similar deformations all over the face, while the expected result was a modification focused around the mouth (Fig. 9b-9d).
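The 1% displacement test used to decide which SPLOCS components participate in a measurement could be sketched as follows (illustrative, assuming components are stored as per-vertex displacement fields):

```python
import numpy as np

def eigvecs_for_measurement(eigvecs, measurement_vertices, ratio=0.01):
    """Locality test sketch (Sec. 7): keep a SPLOCS component only if its
    displacement at some measurement vertex exceeds `ratio` times the
    component's maximum per-vertex displacement.

    eigvecs: (n_e, n_v, 3) per-vertex displacement of each component.
    measurement_vertices: indices of the vertices defining the measurement.
    Returns the indices of the components to keep.
    """
    norms = np.linalg.norm(eigvecs, axis=2)                # (n_e, n_v)
    threshold = ratio * norms.max(axis=1, keepdims=True)   # per-component cutoff
    active = (norms[:, measurement_vertices] > threshold).any(axis=1)
    return np.where(active)[0]
```

Only the mapping-matrix columns of the retained components are then used for local control, while the global variant simply keeps all components.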
+
+We will now focus on local measurement editing. Figs. 10-12 show the adjustment of the same anthropometric measurement using global 3DMM, local measurement SPLOCS, clustered PCA, and our local measurement approach. In Fig. 10f-10g, we can see that even though we wanted to adjust the "Nose Breadth", the adjustment using the global eigenvectors and local measurement SPLOCS resulted in significant deformations all over the face, while clustered PCA and our approach could focus the deformation around the nose, as expected (Fig. 10h-10i). We can observe similar unwanted global deformations of the face in Fig. 11f-11g. Also note that the automatic segmentation of clustered PCA does not provide the desired deformation in some cases, such as in Figs. 11h and 12h. We consistently outperform clustered PCA in terms of local deformation where expected. The results shown in Figs. 10-12 highlight the difficulty of locally controlling the face deformation, and the power of our approach in locally adjusting the face with respect to the anthropometric measurements.
+
+Figure 9: Comparison of the globality vs. locality of the adjustments (editing by increasing the "Lip Length"): (a) global PCA eigenvectors, (b) global measurement SPLOCS, (c) our global measurement approach, (d) local measurement SPLOCS, (e) clustered PCA, and (f) our local measurement approach. The colors represent per-vertex Euclidean distance (blue = 0 mm, red = 8.5 mm). Note how our local measurement and global measurement approaches induce significant and local surface deformation to achieve the desired editing. In comparison, global PCA and SPLOCS induce non-local deformation, and clustered PCA induces much less deformation.
+
+Figure 10: "Nose Breadth" adjustment results: (a) nose of a female from the validation faces adjusted using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate the respective per-vertex Euclidean distance (blue = 0 mm, red = 5 mm).
+
+In the accompanying video, we show multiple edits on multiple parts, starting from the average face, while Fig. 13 shows edits starting from four real faces. We can see that our approach allows capturing the essence of the anthropometric measurements, providing an easy-to-use workflow.
+
+§ 8 DISCUSSION
+
+In this section, we discuss different aspects of our approach. We present different comparisons highlighting the impact of the eigenvector and measurement selection. We then discuss the face segmentation choice, and end by describing the procedure used to bring all of our scans to a common face mesh.
+
+Figure 11: "Lip Length" increase results: (a) mouth of a male from the validation faces edited using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate the respective per-vertex Euclidean distance (blue = 0 mm, red = 8 mm).
+
+Figure 12: "Bizygomatic Breadth" (the bizygomatic width of the face) increase results: (a) a male from the validation faces edited using (b) global PCA eigenvectors, (c) SPLOCS, (d) clustered PCA, and (e) our approach. The color mapped renderings (f)-(i) indicate the respective per-vertex Euclidean distance (blue = 0 mm, red = 14 mm).
+
+§ 8.1 MEASUREMENT ERROR
+
+To verify the robustness of our 3DMMs and of our set of selected measurements, we reconstruct real faces, relying on their anthropometric measurements to compute their eigenvector weights (Eq. 6). We then generate the face with our approach, including the blending procedure (Sec. 5), and compute its resulting anthropometric measurements. We quantify the reconstruction quality as the absolute difference between the ground truth measurement and the measurement from the reconstructed face. Since measurements correspond either to a Euclidean distance or to a ratio of Euclidean distances, we normalize all the measurements to the [0%, 100%] range. Fig. 14 shows that the average error percentage is low when using "our measurements", which means that both the selection of eigenvectors and the mapping matrix work well. Furthermore, it shows that using "all measurements" to compute the mapping matrix (Eq. 7) yields larger average errors with respect to the ground truth measurements. When calculating the error in Fig. 14 for "our measurements", we average the error over our selected measurements only (Table 3). The error shown in Fig. 14 for "all measurements" also considers only our selected measurements (if the error across all of the measurements is considered, the comparison is even more in favor of using our selected measurements).
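As a sketch of this error computation (the function and the per-measurement normalization bounds are our own assumptions; the text only states that distances and ratios are normalized to a common [0%, 100%] range), the average error could be computed as:

```python
import numpy as np

def mean_measurement_error(gt, recon, lo, hi):
    """Average absolute error, as a percentage of each measurement's range.

    gt, recon : ground-truth and reconstructed measurement values.
    lo, hi    : per-measurement bounds used for the [0%, 100%]
                normalization (hypothetical; not specified in the text).
    """
    gt, recon = np.asarray(gt, float), np.asarray(recon, float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    # Normalize each absolute difference by its measurement's range.
    return float(np.mean(np.abs(gt - recon) / (hi - lo) * 100.0))
```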
+
+Figure 13: We generated random faces (faces (a)-(d), left) and edited them by increasing ("+") or decreasing ("-") the value of some of the indicated anthropometric measurements.
+
+Figure 14: Using our subset of measurements on the data set and validation faces leads to lower errors (percentage), as compared to using "all measurements".
+
+We evaluated how our approach compares to SPLOCS and clustered PCA with respect to achieving the measurement values prescribed by edit operations. We created a set of 1,000 random edits on 135 face meshes. We took the resulting edited face mesh from our approach, SPLOCS, and clustered PCA, and evaluated the difference between the measurement value prescribed by the editing and the measurement value calculated from the edited mesh. Overall, our approach performed best, with the resulting measurement being closest to the prescribed measurement; SPLOCS was second, and clustered PCA presented the greatest differences (see Fig. 15).
+
+Even though our approach comes closest (on average) to the prescribed measurements, it has a limitation due to the blending of the synthesized parts. This blending sometimes affects the mesh in a way that prevents it from achieving the exact prescribed effect of the editing. Fig. 16 shows an example where the blending does not maintain the "Nose Height" of the synthesized nose, as it deforms the nose through the blending process.
+
+Figure 15: Starting from one face, we adjust one of its measurements to match the value of that measurement for another face. We then compute the difference between the prescribed measurement value and the measurement value calculated from the mesh. We do so for 1,000 such edits. Our approach leads to a smaller error (percentage) compared to clustered PCA, and to slightly better results compared to local measurement SPLOCS.
+
+Figure 16: (a) Average male head. Its "Nose Height" is 45.09 mm. (b) A synthesized nose with its "Nose Height" edited to 70.11 mm. (c) Result of blending the nose. While this is an extreme case, it still reflects the fact that the approach is not always able to achieve the prescribed measurement (the value decreased to 58.53 mm for this example).
+
+§ 8.2 FACE DECOMPOSITION
+
+Our face segmentation was motivated by several facial animation artists with whom we worked, who strongly prefer having control over the face patches to ensure that the patches match the morphology of the face and the muscle locations. This type of control is impossible to achieve with an automatic method, which is typically agnostic to the underlying anatomical structure. It is important to note that this manual way of selecting the regions is no more cumbersome than current state-of-the-art methods: the state-of-the-art method of Tena et al. [26] requires a post-processing step to fix occasional artifacts in the segmentation. Furthermore, as illustrated in Fig. 8b, segmentation boundaries can occasionally cut across important semantic regions such as the eyes, leading to complications further down the pipeline.
+
+§ 8.3 DATA SET
+
+The quality of the input mesh data set is crucial for the reconstruction of good 3D face models. As do many existing methods, we assume that the meshes share a common mesh topology. Mapping raw 3D scans to a common base mesh is typically done using a surface mapping method [2, 20, 28]. We established this correspondence with the commercial solution, R3DS WRAP. We plan to release a subset of our data set for other researchers.
+
+§ 9 CONCLUSION
+
+In this paper, we designed a new local 3DMM for face editing. We demonstrated the difficulty of locally editing the face with global 3DMMs; we thus segmented the face into five parts and combined the 3DMMs for each part into a single 3DMM by selecting the best eigenvectors through prediction error measurements. We then proposed the use of established anthropometric measurements as a basis for face editing, mapping the anthropometric measurements to the 3DMM through a mapping matrix. We proposed a process to select the best set of anthropometric measurements, leading to improved reconstruction accuracy and the removal of conflicting measurements. From a list of 33 anthropometric measurements surveyed from the literature, we identified 31 that improve the reconstruction and rejected 2 that decreased its quality. Note that the anthropometric measurement selection process would apply equally when using a different 3DMM from the one proposed in this paper, as well as when considering a different set of anthropometric measurements. We demonstrated this by applying our set of measurements to both SPLOCS [19] and clustered PCA [26]. This also demonstrated that our approach produces results superior to those of established methods proposing automatic segmentation and different ways to construct the eigenvector basis. We also presented several pieces of experimental evidence demonstrating the superiority of our approach, especially in terms of local control, as compared to the typical global 3DMM.
+
+A limitation of our approach lies in the mapping matrices, which assume a linear relationship between anthropometric measurements and the eigenvector weights. An interesting avenue for future work would be to apply machine learning to identify non-linear mappings. Also, our measurements are based on distances between points on the surface. Future work could consider measurements based on the curvature over the face, such as measurements specifying the angle formed at the tip of the chin.
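For reference, the linear relationship assumed by the mapping matrices corresponds to an ordinary least-squares fit; a minimal sketch (the matrix names and shapes are our own assumptions, not the paper's notation) is:

```python
import numpy as np

def fit_mapping_matrix(W, M):
    """Least-squares linear map from measurements to eigenvector weights.

    W : (N, K) eigenvector weights of N training faces (assumed layout).
    M : (N, D) anthropometric measurements of the same faces.
    Returns A of shape (D, K) minimizing ||M @ A - W||, so that new
    measurements m map to weights via m @ A.
    """
    A, *_ = np.linalg.lstsq(M, W, rcond=None)
    return A
```

A non-linear alternative, as suggested above, would replace this fit with a learned regressor.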
+
+Although anthropometric measurements generate plausible facial geometric variations, they do not consider fine-scale or coarse-scale features. Regarding the fine-scale details, our approach does not model realistic variations of wrinkles, and that could be an interesting direction for future research. Regarding coarse-scale features, we could reconstruct a skull based on the anthropometric measurements, and then generate the facial mask based on an energy minimization of the skin thickness considering the skull and the measurements.
+
+§ ACKNOWLEDGMENTS
+
+This work was supported by Ubisoft Inc., the Mitacs Accelerate Program, and École de technologie supérieure. We would like to thank the anonymous reviewers for their valuable comments.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a5b94b38a58014a41b18ec60b89804ba7a0108b
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,423 @@
+# Exploring Video Conferencing for Doctor Appointments in the Home: A Scenario-Based Approach from Patients' Perspectives
+
+Carman Neustaedter‡
+
+School of Interactive Arts and Technology, Simon Fraser University
+
+## Abstract
+
+We are beginning to see changes to health care systems where patients are now able to visit their doctor using video conferencing appointments. Yet we know little of how such systems should be designed to meet patients' needs. We used a scenario-based design method with video prototyping and conducted patient-centered contextual interviews with people to learn about their reactions to futuristic video-based appointments. Results show that video-based appointments differ from face-to-face consultations in terms of accessibility, relationship building, camera work, and privacy issues. These results illustrate design challenges for video calling systems that can support video-based appointments between doctors and patients with an emphasis on providing adequate camera control, support for showing empathy, and mitigating privacy concerns.
+
+Keywords: Mobile video communication, doctor appointments, domestic settings, computer-mediated communication.
+
+Index Terms: Human-centered computing-Empirical studies in HCI
+
+## 1 INTRODUCTION
+
+Telemedicine involves the use of video conferencing systems to support remote consultations with patients. Telemedicine systems can be valuable as people who live far away from medical resources or face health challenges (e.g., chronic illness, mobility issues) may find it very hard or even impossible to see a doctor in person [1]-[3]. People are now able to have video-based appointments with general practitioners using commercially available technologies like Skype, FaceTime or specialized telemedicine video systems [4]-[6]. For example, we are now seeing a proliferation of apps for video-based doctor appointments, such as MDLive and Babylon. With this comes a strong need to ensure video conferencing systems are designed appropriately in order to meet the needs of both patients and doctors.
+
+Historically, telemedicine systems have been studied with a strong focus on specialist appointments, for example, certain chronic diseases [7]-[9] or surgery [10], [11]. In contrast, there has been less focus on system designs for patient visits with general practitioners and even less focus on understanding the design needs of patients for such systems. For example, studies have explored people's level of satisfaction and convenience with remote doctor appointments [12], [13], rather than explorations of the socio-technical challenges involved in video-based appointments and the design challenges that exist for video conferencing systems aimed at supporting appointments. This makes it unclear how to design systems that move past basic video chat software (e.g., Skype, FaceTime) capabilities.
+
+For these reasons, our work explores in-home video appointments between people and their family physician. We were interested in understanding how patients would react to appointments focused on a range of topics from common colds to privacy invasive situations, where one could use a mobile phone and video chat software (e.g., Skype) to meet with their doctor from home. Our overarching goal was to understand what design needs and opportunities exist for video conferencing systems focused on home-based doctor appointments to meet the needs of patients, though clearly future work is needed from the perspective of doctors. We also focused specifically on conducting our study in a manner that did not expose patients directly to privacy-invasive appointments. Here we relied on scenario-based design methods [14], [15] that allow participants to examine interactions with future technologies in a grounded way.
+
+We conducted an exploratory study with twenty-two participants who have visited doctors for general medical conditions. We were purposely broad with our sample and included diverse age groups, occupations, cultural and ethnic backgrounds. The goal was to raise as many design challenges and opportunities as possible, which comes from sampling a broad spectrum of participants. Future work should consider narrowing in on particular populations and types of visits, informed by our work that helps point to cases and situations that would be useful to explore further. We first interviewed participants about their past in-person experiences. This allowed us to learn where challenges exist, and help inform our understanding of patient needs for video-based appointments. We then used six video scenarios depicting video appointments to conduct focused interview conversations with our participants. The videos ranged from non-invasive situations such as a cold to privacy intrusive cases such as a physical exam of one's private parts. In contrast to other study approaches where we may have investigated actual video-based appointments or role-plays, the scenarios allowed us to gauge participants' reactions to privacy sensitive situations without putting them directly in harm's way and risking their own privacy.
+
+Our results show that video-mediated appointments could raise issues around accessibility, relationship building, camera work to capture visuals of one's body, and privacy concerns about private information disclosure. Thus, while video-based appointments could be valuable for patients, systems to support them must be carefully designed to address these concerns. Existing commercial video conferencing systems (e.g., Skype, FaceTime) are not mapped well to the needs of patients for video-based appointments and more nuanced designs are required.
+
+---
+
+* email: dongqih@sfu.ca
+
+† email: yheshmat@sfu.ca
+
+‡ email: carman@sfu.ca
+
+---
+
+## 2 RELATED WORK
+
+### 2.1 Medical Healthcare over Distance
+
+Telemedicine systems were created to help remote populations with limited medical resources connect with physicians and specialists in urban centers [16]-[18]. They have also been designed to support people who are unable to visit a doctor in person due to factors such as age, disability, or disease [3]. Doctors have been able to communicate with patients via text message [19], [20], phone call [9], or video call [21]-[23]. Telemedicine has also advanced over the years to serve a broader spectrum of users, not just those in rural areas or with mobility issues [24], [25]. This has allowed doctors to provide more attention to patients over relatively long periods of time [10], [26], such as patients with chronic diseases [9], [27], [28].
+
+In addition to telemedicine systems, ubiquitous monitoring instruments have been designed and deployed in home environments to aid health care [29]-[31]. Sensors have been embedded into furniture such as beds [32] and couches [33], or attached to the human body [7], [34], [35] to monitor physiological signals. Traditional diagnosis or treatment procedures become different when direct physical contact is unavailable [36]. For example, physical interaction systems can be used to transfer haptic feedback between physicians and patients [37]. Computer-aided virtual guidance has been applied to help patients conduct physiotherapy exercises [38], [39]. Factors such as system usefulness and ease of use, policy and management support, and patients' relationships with health providers have been found to be key to telemedicine system success and acceptance [40]-[42]. Security and privacy concerns have also been explored in relation to telemedicine, considering the confidentiality of medical information [43], [44]. Researchers have tried to resolve security concerns by strengthening access control [45]-[47].
+
+Most closely related to our work, researchers have explored video-based doctor appointments through questionnaires and interviews where respondents provided their general reactions to the idea of having a video-based appointment. From this work, we know that people feel video visits will lessen travel time and costs [48] and like the idea of having an appointment from the comfort of their home [49], [50]. Several researchers have also studied actual video-based doctor appointments. Powell et al. [13] interviewed patients after they had a video-based appointment in a medical clinic office. Patients reported that video was convenient and raised only minor privacy concerns about people overhearing the call [13]. Dixon and Stahl [12] rated patients' experiences with a video visit compared to an in-person visit after patients had one of each in a clinic. People preferred in-person visits but were generally satisfied with video visits [12]. In all cases, the appointments covered fairly mundane topics, and privacy sensitive situations were not explored.
+
+We build on these studies by exploring why people have specific technology preferences and social needs, along with descriptions of the concerns people have with video appointments. This helps inform user interface and system design. Our work also differs in that we explore a range of appointment scenarios, some with potentially large privacy risks, which are not easy to explore with real appointments given ethical concerns. In addition, our work studies in-home usage rather than video conferencing usage in a clinic or doctor's office, in contrast to prior work [12], [13]. Usage in a home may raise different concerns and reactions because users give the doctor visual access into their home and are without medical instruments or assistance, as opposed to being in a doctor's office.
+
+### 2.2 Video Communications
+
+Video conferencing has been widely used amongst family and friends and in work and educational contexts [51]-[56]. People share views or activities via video calls, which can help create stronger feelings of connection over distance and a greater sense of awareness of others [57]-[61]. Applications range from supporting casual conversation to formal meetings [56], [58], [62]. Despite the benefits of video communication, it can still be difficult to generate the same feelings and situations via video calls as found in face-to-face communication [51]. First, people can feel that there is a barrier when watching via a computer [55], [63]. Factors such as narrow fields of view and a lack of mobility can cause users to be aware of the distance between people in video calls [55], [64]. It can also be difficult to maintain eye contact because of displacements between cameras and the video view of the remote user [40]. There are also issues with feeling like one has to continually show their face on the video call [65].
+
+Some researchers have explored ways to increase feelings of connection over distance. For example, this has involved presenting a larger camera view and additional camera control to improve engagement with remote scenes [66], deploying interactions to support virtual shared activities over distance [59], [67], or sharing first-person views to enhance feelings of copresence [68]. When mobile phones are used for video conferencing, one of the main challenges is 'camera work,' the continual reorienting of the camera by moving one's smartphone in order to ensure the remote person has a good view [62], [69], [70]. Local users streaming the video via their phones desire hands-free cameras that are easy to move [71]. Remote users desire the ability to gesture at things in the scene [69]. We explore the camera work needed for home-based video appointments with doctors, which has not been explored in prior studies.
+
+We also know that video conferencing systems have been fraught with privacy concerns, despite their benefits. Online privacy issues typically relate to how users' information is mediated by media [72]. Privacy theory in video communication deconstructs privacy into three inter-related aspects: solitude, confidentiality, and autonomy [73]. Solitude relates to having control over one's availability (e.g., can a person gain enough time on their own?) [73]. Confidentiality concerns how information is disclosed to others (e.g., is any sensitive background material shown on camera?) [73]. Lastly, autonomy pertains to having control over how one can interact in a video-mediated communication system (e.g., can a person choose when to use various features and for what reasons?) [73]. For example, with video calling, it can be easy to stand out of the camera's view yet still possible to see what is on the video screen or overhear the video call's audio [74], [75]. Situations like these infringe on people's confidentiality and autonomy at the same time. Across the literature, privacy concerns with video-mediated communication systems often relate to issues around showing the background of one's environment (e.g., a messy room) or a person's appearance not looking good on camera [53], [59], [74], [75]. Privacy challenges in relation to solitude, confidentiality, and autonomy have not been thoroughly explored for video-based primary care appointments in the home. Our study builds on past research that explores privacy in work and family communication situations while using video communication systems.
+
+## 3 EXPLORATORY STUDY METHOD
+
+We conducted an exploratory study to understand what aspects of appointments patients feel are important and what benefits or challenges exist for video-based doctor appointments from one's home. Our study was approved by our university research ethics board and we took great care and caution to conduct our study in a manner that did not increase privacy risk for patients.
+
+### 3.1 Participants
+
+The study enrolled a total of 22 participants (17 females, 5 males) who had visited doctors. We recruited participants through snowball sampling, posting advertisements on social networks and university mailing lists. The gender imbalance was unintentional and based solely on who responded to our participant call and was willing to participate. Seventeen interviews were done in person, either on our university campus or at participants' homes, whichever they felt comfortable with; five of the interviews were done over Skype. The participants were all adults within the age range of 19-71 (average = 37, SD = 16). Participants had a range of cultural and ethnic backgrounds, including individuals of European, Asian, and Middle Eastern descent. To obtain a diverse data set, we recruited participants from different age groups, occupations, and cultural and ethnic backgrounds. We purposely chose a diverse group of participants so that we could uncover as many benefits and challenges as possible in terms of designing technology for supporting video-based appointments. Thus, the goal was to be exploratory, such that future studies could investigate the areas of opportunity and concern revealed by our study in more detail. Some of the participants visited the doctor regularly for conditions such as blood pressure checks, gout, anxiety control, arthritis, depression, and digestive system issues. Others visited the doctor only when sick or for checkups.
+
+### 3.2 Method
+
+We used semi-structured interviews to gain an in-depth understanding of patients' experiences with in-person doctor appointments and thoughts about video-based appointments. Each interview contained two sections. In the first section, participants talked about their past doctor appointment experiences. In the second section, they were shown six video scenarios, each of which dealt with a distinct medical condition, and we interviewed them about their reactions. Participants could choose between a female or male interviewer in order to feel more comfortable sharing their private medical experiences or their personal opinions. Eleven participants (7 females, 4 males) had an interviewer of the same gender; 10 female participants had a male interviewer and 1 male participant had a female interviewer. The interviews lasted between 50 and 90 minutes.
+
+#### 3.2.1 In-Person Experiences
+
+To learn more about patients' face-to-face appointments with doctors, we asked them to talk about their past appointments that they felt went well or not. The goal was to use this knowledge to understand what aspects of in-person appointments should be maintained or improved in a video-based appointment. Furthermore, the shift of doctor appointments from in-person to online might bring both challenges and opportunities that were unknown to us. Thus, an understanding of in-person doctor appointments would benefit our thoughts on how to design video conferencing systems. It would also act as a form of baseline.
+
+We purposely grounded this interview phase in questions about specific appointments, as opposed to more general thoughts, to acquire detailed and specific data. When recalling these visits, participants were asked to describe the details, such as their conditions, the examinations performed, how the diagnoses were made, what treatments were provided, and whether they had follow-up visits. In this way, the questions helped them recall as much information as possible about the appointment. As examples, we asked: "Can you tell me about a visit you felt went very well (or not well)?", "How did you describe the situation to your doctor?", "What worked well?", and "What did not work well about the visit?" At the end of this interview section, participants were asked about their general opinions on the necessity of face-to-face office visits (as opposed to video-based appointments) and what factors they believed were important during the appointments. This section of the interview lasted 20 to 30 minutes.
+
+#### 3.2.2 Future Scenarios
+
+##### 3.2.2.1 Scenario Planning and Production
+
+Next, we conducted scenario-based interviews with participants. We selected this method after careful thought and planning. We wanted participants to understand the concept of video conferencing between a doctor and a patient so that we could ask them about specific attributes of such appointments, which may otherwise be hard to imagine. Yet we did not want to infringe on the privacy of our participants, as we wanted to explore topics that were both commonplace and privacy intrusive. For these reasons, we employed an approach similar to scenario-based design [14], [15], which is often used in the early stages of design cycles in the field of human-computer interaction. The goal is to elucidate the use of novel technologies that might not yet exist or be widely used. This approach engages users in exploring both the design opportunities and challenges that might exist for a technology through storytelling and conversation. With this type of method, participants can be shown pre-recorded videos of people and design artifacts, which are then used as a conversation piece to discuss future technology usage. In our case, we wanted to illustrate aspects such as what could be seen on video during an appointment, including the environment, facial expressions, gestures, and one's body, and how smartphones might need to be oriented or used to capture such information.
+
+Other study options might have involved investigating real video-based appointments or letting participants role-play rather than showing them videos; however, we felt these posed serious ethical challenges. First, it would be difficult and highly privacy intrusive to observe real appointments about privacy-sensitive topics, e.g., talking about drug usage or domestic abuse, or conducting a visual exam of one's private areas. Second, role-playing such appointments could similarly be awkward and privacy intrusive. In contrast, we felt that pre-recorded video scenarios would allow us to gauge participants' reactions to privacy sensitive situations without putting them directly in harm's way and risking their own privacy. Pre-recorded videos would also give us control over what participants saw, as each participant would see the same situation. This meant we could learn about everyone's reactions to the same situations, and we could explore multiple appointment topics with each participant rather than just one.
+
+Prior to the study, we planned and pre-recorded six sample doctor-patient appointments using a video conferencing system. This involved brainstorming possible appointments and the likely benefits and challenges that might exist for patients and doctors. We narrowed down a large list of scenarios to a set of six that we felt mapped to a range of experiences. We then iteratively generated scripts and storyboards for each video. These were reviewed with a doctor who conducted video-based appointments to ensure the appointments we depicted were realistic.
+
+We chose scenarios based on several aspects. First, we wanted the scenarios to cover common medical conditions where appointments would normally be conducted in a clinic, including conversation between the patient and doctor, visual examinations, or physical touching. Second, we selected scenarios that would require a variety of camera work to facilitate the video call, e.g., orienting the camera to have a view of the patient's whole body, face, or particular areas like the mouth. Third, we selected scenarios with different levels of potential privacy concerns to receive a variety of reactions from participants. Some appointments were felt to be somewhat mundane and nonproblematic (e.g., a cold), while others were purposely meant to offer problematic situations in different ways (e.g., problems with conversations, problems with what is shown on camera). The resulting scenarios were:
+
+
+
+Figure 1: Images depicting the video scenarios that were shown to participants. In each video, from left to right: third-person view of patient, camera view of patient, camera view of doctor.
+
+1) Cold: The patient had a cold and sore throat. The doctor asked the patient to explain the symptoms and show their throat with the mobile phone camera.
+
+2) Fall while jogging: The patient described falling down while jogging and was asked to show the injuries. The patient followed the instructions of the doctor to uncover their stomach region and press on different locations to inspect for internal injuries.
+
+3) Sleeplessness: The patient explained that they were having sleepless nights. They were asked about alcohol and coffee intake. The doctor said they would send a referral to a counseling office.
+
+4) Drugs: The patient had a rash on their arm. After excluding the common possible causes, the doctor asked the patient about using drugs which made the patient feel awkward as they didn't realize the possible connection.
+
+5) Domestic abuse: The patient described being dizzy and having a bruise on their forehead. They were asked questions about their cognitive competence, then the patient confided in the doctor that there was partner abuse.
+
+6) Private parts: The patient and doctor discussed results from an annual physical exam. The doctor asked about the patient's sexual history and a lump on the patient's groin area. The doctor instructed the patient to show their private parts and the doctor performed a visual examination.
+
+Next, we recorded each video scenario twice within a home setting in our lab, once with male actors for both the patient and doctor, and once with female actors for both. The videos used the same basic script. Figure 1 shows an image from the female versions of the videos. In each video, participants were shown: 1) a third-person view of the patient holding their mobile phone so they could understand the camera work that was needed to capture the video (left side of the video); 2) the camera view of the patient (the middle of the video); and, 3) the camera view of the doctor (right side of the video). When the videos revealed the patient's body, we blurred parts that might normally be hidden under clothes. This was to protect the privacy of the actors in the videos. For Scene 6, we did not record the portion of the video showing private areas. Instead, we showed a masked portion of video. Videos were between 1.5 and 2 minutes each.
+
+##### 3.2.2.2 Scenario-Based Interview Method
+
+In the study, we showed and asked participants questions about the six scenarios, one at a time. Participants watched the videos that mapped to their gender selection. We felt that this mapping might generate stronger empathy from our participants and help them to imagine how they would feel if they were in the same situation as the actor. We did not counterbalance the ordering of the scenes as we wanted to ease the participants into the idea of video appointments with somewhat mundane situations first, and our work was meant to be exploratory rather than a carefully controlled experiment. This does have the limitation that the order of the scenes could have affected participants' thoughts about them.
+
+After watching a video, participants were asked to provide reactions to the specific situation, where we asked what they saw as the benefits or challenges of using a video call for the appointment. These questions included, for example, "How would you feel if you were the patient in the video?", "How would you compare an in-person appointment with that in the video call?" We then repeated this for each video, one-by-one. We also asked for their opinions on camera control, privacy concerns, and the use of different types of technologies. The scenario-based interview lasted about 40 minutes. Participants received $20 for participating.
+
+### 3.3 Data Collection and Analysis
+
+All the interviews were audio-recorded and fully transcribed. We recorded videos of the participants watching the scenarios and talking about them (with permission). Two researchers analyzed and coded the data. Each researcher coded the data separately, then one researcher merged the coding and conducted additional analysis. The researchers also discussed their analysis and coding. We used open coding to label all the findings in the transcripts. Afterwards, through the use of axial coding, we formed categories such as privacy, benefits and challenges of using video calls, trust, physical examination, etc. Lastly, selective coding was used to find high-level themes including accessibility, empathy and trust, camera work and privacy concerns. In the following sections, we describe our findings under the four main themes. Quotes from participants are presented with participant ID as P#. We intermix descriptions of patients' past experiences and their opinions of video appointments as a way to further analyze video appointments.
+
+## 4 ACCESSIBILITY
+
+Participants felt that video appointments could create a lower barrier for accessing one's doctor than in-person appointments. Participants described visiting their doctor based on their own judgements around when it was important to do so. Many of them said they would not bother to see a doctor for what they felt were minor things (e.g., a general cold or bruises) and would instead assess the problem themselves, sometimes with the aid of web searches. Participants said that often they were not sure whether they should visit a doctor or not. Some felt that a lower barrier to meeting with one's doctor might make it easier for them to meet about more situations where they were unsure as to whether an appointment was necessary.
+
+Instead of you waiting for a week to visit the doctor you can use this system to have primary comfort to know how serious or not the problem is until you find an appointment time. -P11, female, 33
+
+## 5 EMPATHY AND TRUST
+
+Relationship building is one of the essential aspects of doctor-patient communication. Similar to prior research [76], participants told us that body language was important during conversations with their doctor when in-person. Conversations involved eye contact and body gestures. By looking patients in their eyes, nodding while listening, or using hand gestures when explaining things to them, the doctor could let patients feel that their conditions were heard, their feelings were understood, and their problems were being addressed.
+
+Participants felt that a video-based appointment would cause changes to the ways that body language was conveyed and seen. For example, doctors could be multi-tasking on their computer or not fully paying attention.
+
+I think over a video call it's hard to know if the person's attention is only on you because they might have other tabs open and stuff...Whereas if you're in-person, you know through their body language and through their eye contact that they're actually focusing on you. - P1, female, 19
+
+The user interface in our video scenarios tended to only show the doctor's face and shoulders, akin to a typical video chat (Skype) call. Yet participants described wanting to see more parts of the doctor's body during appointments. For example, one participant wanted to see the doctor's face and upper body, including their arms and hands in case the doctor gestured with them. This would make the appointment feel 'more real'. One participant said it might be difficult to have eye contact with the doctor if the camera was not at the right angle.
+
+## 6 CAMERA WORK
+
+We talked with participants about the camera work that would be needed within video-based appointments and they saw various aspects of it in the video scenarios. By camera work we refer to the orienting of the smartphone camera such that it can capture the information desired by doctors.
+
+### 6.1 Visual Examinations
+
+First, all participants talked about the doctor doing visual examinations of their body or parts of it during their previous appointments. When it came to video-based appointments, participants expressed concerns about whether the camera could clearly show body parts in order to support visual examinations by the doctor. There were several conditions in our video scenarios that contained visual exams, e.g., showing the inside of one's throat, wounded legs and foreheads, rashes, and genitals. In these cases, participants felt that the camera resolution, color accuracy, network quality, and light intensity could be a problem. Participants felt that visual checks would be less accurate over a video call than in-person. This could cause them to lose some trust when it came to their diagnosis. Based on their past video chatting experiences, several participants noted that the quality of static images was much better than that of video, as video resolution is highly limited by network bandwidth. Thus, they felt that images may be better for information sharing in some situations. One caveat is that this could require careful camera work in order to hold the phone steady for a picture.
+
+I think if there's a camera that can just take a snapshot of your throat...just like when you go and get your x-ray of your mouth for your teeth...which could automatically be sent to the doctor rather than you go "aww". - P13, female, 34
+
+We asked participants if they would have different reactions to aspects of camera work if they were using a desktop computer with a webcam or a 360-degree camera that could automatically capture the entire scene. In this case, the doctor could look at various parts of the patient's body without the patient having to move the camera around. Participants generally felt that the extra wide field of view provided by a 360-degree camera would not aid visual examinations. Participants felt that mobile phones were better for situations when they wanted to show body parts as their phone was highly mobile and they could bring it close to their body. However, one participant said a mobile phone camera would be inconvenient when she needed to perform certain actions with two hands, such as lifting her shirt and pressing her abdomen at the same time (e.g., fall while jogging scenario). She also pointed out that it would be tiring to hold the phone the whole time when talking with the doctor.
+
+I can see that, given a long consultation, the patient probably gets tired that she has to hold the phone and it's not comfortable anyway...The patient only has two hands to set and hit the body. With the mobile phone, she really needs one hand. - P17, female, 42
+
+Lastly, participants talked about the importance of the doctor seeing everything that occurred in a doctor's office. This ranged from the way that the patient sat in an office chair to how they moved to an examination table.
+
+The physical examination of the patients starts when the patient opens the door...you take a look at their appearance, the way that they walk, if they are so tired, how they carry their bodies, how they walk. The general appearance of a patient is so helpful. When you are Skyping with someone or you are Face Timing with someone, it's really impossible to get the general idea of how the patient is walking, how the patient is doing stuff. - P8, female, 31
+
+Participants felt that every subtle detail was important for the doctor to see because it could relate to things that the patient did not think to tell the doctor. For example, one might not think to tell the doctor that their foot was sore after a bicycle fall but this could be noticed when a person walked. Participants noted that such aspects might not be visible during a video-based appointment since a mobile phone's camera would likely be pointed at the patient's face rather than their entire body. The room's lighting or Internet bandwidth may also compromise what was visible.
+
+You can find so many precious points about so much precious information about the patients by doing physical examination. For example, sometimes patients forget. You are doing the physical examination on their chest and you see a scar on their sternum, and you ask them what this scar is, and the patient is like, 'Oh, now I remember. I had a surgery 20 years ago or something. I forgot doctor. Sorry.' - P8, female, 31
+
+Participants also talked about the chance of patients lying or hiding details from their doctor. While nobody admitted to doing so, participants felt it would be easier to lie in a video-based appointment. Supposed indicators of lying, such as subtle eye movements or discomfort while sitting, may be harder to notice. Some commented that people may also be more inclined to lie over a video call because they were 'online' and not in-person. These issues were seen as compromising a doctor's diagnosis.
+
+### 6.2 Physical Touch
+
+Participants' past in-person appointments often involved palpation to feel their body. This was directly explored in the falling while jogging scenario where the patient in the video had to press their own abdomen following the doctor's instructions. Some participants believed that the patient could perform this action on their own as long as the doctor could clearly see the patient's actions. The challenge, as previously mentioned, was that this could require very careful camera work in order to both capture the action on video and touch oneself at the same time. Participants also talked about not being properly trained in some cases and having a hard time following a doctor's instructions when it came to physical touches; thus, people were apprehensive about doing this work as the patient.
+
+The patient could probably apply less pressure than needed to feel versus a doctor. A doctor can physically tell if it's serious or not instead of having patients to let him know. - P6, male, 24
+
+## 7 Privacy Concerns
+
+Participants had several privacy concerns when it came to video-based appointments like those in our scenarios, including issues with both video and audio.
+
+### 7.1 Private Visuals
+
+First, we talked with participants about their reactions to showing visuals of themselves on camera that might be considered privacy sensitive. All participants were fine with showing non-private areas of their body to remote doctors, as they saw in our scenarios. Yet when it came to showing private body parts (e.g., groin area, chest) over video, all participants expressed concerns about privacy, in particular their confidentiality and autonomy, and preferred to visit their doctor in-person. Many participants thought it was weird to show their private parts over a video call as they had never experienced it before. They were concerned about the security of the video link and worried that it may get 'hijacked', again an issue of confidentiality. Several participants also said that they did not know if anybody was in the doctor's office but off-camera and able to see or hear the appointment. Thus, they had concerns in relation to autonomy and their ability to participate in the video-mediated space in a way that they desired. In contrast, they felt that when in a doctor's office in-person, the patient would know for sure who was in the room because they could see all areas within it.
+
+Participants also had privacy concerns when it came to situations such as the domestic abuse scenario and raised several specific issues, albeit these varied across groups of participants. Participants talked about what it would be like to be in a situation involving domestic abuse. Several participants said that staying at home and having a video appointment was a better choice as the private information, bruises in this case, would not be visible to people other than the doctor. They thought that as a patient they might feel uncomfortable outside their house and be noticed by people on their way to the doctor's office or in the waiting room. Other participants talked about how a video appointment at home could present additional risk since an abusive partner could come home unexpectedly. They felt that when in-person, only the patient and doctor would be in the doctor's office. The patient would be safe, and the conversation would be private as well.
+
+Because you don't want neighbors to see anything or a random stranger to think, 'Oh my god, she got beat up. She's in a bad situation.' And in the conferencing, she could just talk more openly and say, 'Okay, I'm sharing this with you. You're the only one that sees it.'. - P21, female, 68
+
+Consulting with a doctor at home will increase the risk of abuse again. - P3, female, 21
+
+Given the privacy concerns that participants expressed, we asked them about possible ways of mitigating their concerns. For example, we talked with them about the possibility of blurring their face in the video feed during situations such as the private parts and domestic abuse scenarios so they would feel more comfortable with the appointments and have a video call in more of an anonymous fashion. This was seen as being valuable by eight participants, though two of the remaining participants pointed out that it could make it harder to get accurate diagnoses since the doctor would not know the patient's history. Doctors may also not be able to understand the patient's facial expressions, which could help them assess the severity of a situation.
+
+I think [blurring faces] is very good. For example, when you want to go there and talk about drinking or marijuana or private parts, these kinds of things. I know people that don't go to doctor at all just because they don't want to talk about it with another person. - P12, female, 33
+
+You can read the expressions of the people's face, eyes. 'Okay, this lady is really scared ... or she knows it's a minor thing, so she's not really worried about it'. - P21, female, 68
+
+Participants were asked if they would feel any different in terms of privacy if the doctor was a different gender than they were. Five female participants explained that they preferred a doctor of the same gender for health problems that they felt were private. All male participants felt okay with doctors of both genders regardless of the situation.
+
+Especially if it's not my regular family doctor, I would not want a male there. Actually, even if it was my family doctor, I usually try to find the public nurses, like female. - P9, female, 32
+
+I guess I don't really care what gender my doctor is, as long as they're professional. - P7, male, 27
+
+### 7.2 Private Conversations
+
+Second, we talked with participants about the kinds of conversations that were occurring over the video appointment scenarios and how comfortable they would be in having such conversations with a remote doctor; responses varied. Some participants believed that telling the doctor about their medical conditions was not embarrassing as the doctor was a professional and they should be honest with them and explain everything. In contrast, other participants felt embarrassed about the conversations in all of the scenarios except cold and fall while jogging. Most female participants felt embarrassed talking about sensitive issues such as relationships, sexual practices, abuse, and drug consumption. None of the male participants had the same concerns. Participants also commented that they felt some people may be less inclined to have conversations about sensitive topics due to cultural backgrounds and taboo topics.
+
+People have such different cultural backgrounds that something that is not taboo with some person could be really taboo to another, and really affecting their ability to communicate what's really going on. - P16, female, 38
+
+### 7.3 Camera Control
+
+Third, we probed participants about privacy in relation to video capture and who had control of the camera, them or the doctor. We asked participants whether they would be okay giving up control of the camera to the remote doctor, if it was possible. For example, one could imagine placing their phone in a stand that had remote-controlled pan and tilt features. In general, the responses were based on the amount of trust that a person had built up with their doctor and how strong they felt their relationship was with this person. The majority of participants said that giving up control of the camera to their doctor was fine because they trusted their doctor and felt that, with control of the camera, the doctor could more easily acquire their preferred viewpoint.
+
+He would know exactly what he's looking for. Or he'd be able to focus it better to take a look at what it is that he needs to look at or tell you exactly where to press to figure out what is the extent of the injury or just so that he has enough information to make the correct diagnosis. - P20, female, 61
+
+A few participants said they would only give up control if it was necessary. One wanted to be informed before and during the call about what the doctor wanted to see. P11 likened this to an experience she had where she needed online services to help her fix her computer. She felt that having patients be able to monitor what the remote person was doing with the camera would dispel privacy concerns.
+
+[Online support] always asked me if they can control my computer...The first time I did it, well that's a pretty big step for me. But then I realized I can see exactly what they're doing and then they're working in an office space. I think I can trust them. - P1, female, 19
+
+One participant talked about giving the doctor more control, such as the ability to capture images and draw annotations on them. This could help illustrate things to patients.
+
+Maybe if the doctor could take screenshots and then annotate them, and then show those to the patient, saying like 'Oh, you need to take care of this part of your mouth, like this tooth,' ...Or 'Oh, I see this here,' and then they circle it. 'Can you apply this kind of medicine to that part of your mouth?' - P7, male, 27
+
+We also asked participants about newer technologies that might be used as a part of video appointments to give the doctor a better view of the patient or their environment. For example, we asked about 360-degree and wide field of view cameras. Some participants said they felt it was unnecessary for a doctor to see an entire room. Others were okay, again, if they knew what a doctor was looking at and if it was useful for the appointment and a diagnosis.
+
+### 7.4 Video Recording
+
+Participants talked about the possibility of the video calls being recorded and found this troubling. Ten of them expressed concerns that they did not want their video-based appointments to be recorded. Moreover, some expressed concerns that the doctor might capture screenshots of the video without their knowledge or permission. For example, one participant talked about the potential for misuse that exists when people have access to private information:
+
+I worked in a computer networking in [organization name], we could access passwords of the users, but we were not allowed to tell this to users. We never used it. But we could have. - P12, female, 33
+
+Two participants said that it could be valuable to have video recordings of appointments in order to have a more complete history of one's medical record, yet there were large concerns over who would have access to this video data.
+
+## 8 Discussion
+
+We now discuss our method and results to explore the challenges and design possibilities for video-based appointments between doctors and patients.
+
+### 8.1 Scenario-Based Design
+
+We employed a study method that built upon scenario-based design given the difficulties and privacy risks in observing and talking with participants about real appointments. By presenting participants with vivid and graphically rich video clips, we were able to illustrate a series of scenarios that we wanted to explore in detail. This was far beyond what we likely would have been able to achieve through verbal descriptions alone. Participants reacted positively to the method and were able to engage in detailed conversations with us as researchers. Thus, we feel our approach is especially helpful when the situations one wants to explore are non-existent or rare at present; while some people do use video-based appointments today, it is unlikely that they are using them for risky and privacy-sensitive situations. Of course, having participants participate in actual doctor appointments would move reactions beyond the types of speculations that participants had in our study. However, it would be critical that studies of such appointments be carefully designed to mitigate the possible effects and ethical dilemmas found with privacy-intrusive studies. We feel that one value coming out of our study is that we now have a better understanding of how people will react to privacy-intrusive video-based appointments, which can help researchers understand how to plan studies with actual appointments in a way that minimizes privacy and ethical risks.
+
+### 8.2 Privacy Aspects in Sensitive Situations
+
+It is clear from our results that video visits are currently not a replacement for all types of doctor-patient appointments. Participants saw video-based appointments as being supplementary to seeing their doctor in person. Some privacy-invasive situations, for example, those involving showing one's private areas or talking about sexually related topics, are clearly not good candidates for video-based appointments for now. Some people are also unwilling to disclose private information, be it visually or aurally communicated, and rightfully so. Thus, they are concerned about the confidentiality of information and how it is disclosed over video [73]. Several concepts related to confidentiality were reflected in our findings, including information sensitivity, concerns about video fidelity, and control over its capture [73]. First, there were issues related to impression management. Having patients reveal sensitive information could create embarrassment or identity issues with self-esteem. Some of our participants felt it would be valuable to blur their faces, which would allow them to 'detach' themselves from their identity and make the conversation less personal. Second, fidelity involves the persistency of information [73]. In the case of video-based appointments, this could include the recording of conversations. Future design work could explore how to reduce patient anxiety about the possibility of video recording. Third, control involves knowing that video is only being seen and heard by the doctor and patient, and no other parties [73]. In face-to-face appointments, patients are able to know who is in the physical space of the office. This can become difficult to know over video. It reflects a common issue that has resonated in the video communication literature around people being disembodied in a video-mediated environment (being able to see / hear while off-camera) [75].
+This suggests the design of cameras with larger fields of view for the doctor's office so that patients can maintain an awareness of the space. Our results also reveal that there could be some situations, even those involving very private information (e.g., conversations about domestic abuse), where some people may feel more comfortable with a video appointment compared to an in-person one so that they can do so from a location of their choosing. This may make the appointment more comfortable for them and avoid being exposed to others outside of their home. In this way, the video appointment could help control the confidentiality of sensitive information. We caution though that none of our participants reported having experienced domestic abuse, so these findings should be validated through further study.
+
+### 8.3 Visuals of the Patient and Doctor
+
+Patients saw value in having the doctor see their entire body and the area around them rather than just their face, in case there were things that the doctor might notice that they didn't think to talk about or explain. Yet there were hesitations around cameras that might have wider fields of view, such as 360-degree cameras, because of what else might be captured by the camera in their home. This suggests design explorations into methods that might provide doctors with broader views while still balancing the privacy concerns of patients. For example, one might imagine systems that allow patients to selectively blur or replace the background [77] in video feeds that they feel are private, but this would need to be done in a way that still allows doctors to understand what is happening in the video for proper diagnosis. Patients were generally fine giving up control of the camera to their doctor, if there was trust in the relationship and they knew what the doctor was looking at. Thus, there were few concerns with autonomy over how one participates in the video-mediated space. This illustrates the importance of doctor-patient relationships given that patients are willing to give up some autonomy based on their trust in doctors. As is conceptualized by Palen and Dourish [72], privacy involves the management of boundaries, which can be dynamic according to contexts or actions. Doctor-patient relationships work as boundaries within video appointments. A well-established relationship could transfer autonomy from the patient to the doctor in order to support the examination of the patient. This could factor into design solutions where a doctor may be given greater control to, for example, remotely pan a camera around the patient's environment to see them better, provided that the patient knows what the doctor is looking at.
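As a minimal sketch of the selective background blurring discussed above, the following NumPy-only function blurs only the pixels outside a foreground mask. This is purely illustrative: it assumes a per-pixel patient mask is already available (e.g., from a person-segmentation model), and the function name and naive separable mean filter are our own choices, not a system from this work.

```python
import numpy as np

def blur_background(frame: np.ndarray, mask: np.ndarray, k: int = 15) -> np.ndarray:
    """Blur pixels outside the foreground mask, keeping the patient sharp.

    frame: H x W x 3 uint8 image; mask: H x W bool, True = foreground (patient).
    """
    kernel = np.ones(k, dtype=np.float32) / k
    blurred = frame.astype(np.float32)
    # Separable mean filter: blur along rows, then columns (naive but clear;
    # image borders darken slightly due to zero padding in np.convolve).
    for axis in (0, 1):
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), axis, blurred
        )
    out = frame.copy()
    out[~mask] = blurred[~mask].astype(np.uint8)  # replace background pixels only
    return out
```

A real system would likely use a hardware-accelerated blur or full background replacement, but the mask-then-composite structure would be the same; the diagnostic region stays untouched while surrounding context is obscured.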
+
+Relationship building was considered essential in doctor-patient communication [76]. Turning to patients' views of the doctor's environment, participants wanted to see body language and ensure eye contact with the doctor. This reflects the challenge of building rapport using non-verbal behaviors on camera. While such visuals are important in mobile video calls with family/friends [51], [55], the element of trust that they evoke with patients feels different and, in some ways, more critical in building the doctor-patient relationship over video [78]. As mentioned, designs that include a camera with a larger field of view might allow one to see the whole body of the doctor, including body language, which could help build rapport with patients. However, such designs should be carefully thought through as multiple factors and challenges are intertwined.
+
+### 8.4 Appointment Accessibility and Awareness
+
+We note that the flexibility that might come with video appointments could easily bring caveats. For example, participants tended to feel that doctors would be more accessible to them if video appointments were available. Mobile apps which provide virtual visit services may encourage patients to meet with unknown doctors online in the present moment as opposed to waiting a few days to see their own doctors. This might cause an overutilization of video appointments and a loss in the continuity of care over time. This suggests there are design opportunities to better link medical records with apps that permit video-based appointments so that information can more easily be shared between providers.
+
+According to our findings, patients had challenges knowing whether their situations would be appropriate for video visits. This suggests that there are design opportunities for exploring systems that may help patients screen themselves to see if a video appointment would be appropriate. This could be implemented using questionnaires that patients answer along with decision tree algorithms, or with medical professional assistance (e.g., a nurse asks screening questions over a short video call).
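A questionnaire-plus-decision-tree screen like the one described could be sketched as a few chained rules. The question keys and routing logic below are illustrative assumptions only, not clinical guidance and not a system evaluated in this work:

```python
# Hypothetical pre-appointment self-screening: a tiny decision tree that
# routes a patient toward urgent care, an in-person visit, or a video
# visit based on yes/no questionnaire answers.

def screen_appointment(answers: dict) -> str:
    """Suggest an appointment type from yes/no questionnaire answers."""
    if answers.get("emergency_symptoms", False):
        return "urgent-care"
    # Palpation by the doctor cannot be replicated over video.
    if answers.get("needs_physical_exam", False):
        return "in-person"
    # Respect patient comfort around privacy-sensitive visuals.
    if answers.get("involves_private_areas", False) and not answers.get(
        "comfortable_on_camera", False
    ):
        return "in-person"
    return "video"
```

In practice such rules would need to be authored with clinicians, and could hand off to the nurse-assisted triage option mentioned above when answers are ambiguous.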
+
+### 8.5 Camera Work and Visuals of the Patient
+
+Like mobile phone research for video calls between family and friends [62], [69], [70], we, too, saw challenges around camera work and easily capturing the right information for doctors. There were pragmatic issues like camera lighting, but also issues around holding phones to show body parts while also being able to see the doctor's reactions. Compared to the related literature [51], [53], [54], the complexities around camera work seemed more difficult in our study situation. Mobile phone calls with family/friends tend not to involve showing specific body parts and instead focus on faces or surrounding contexts [54], [58]. Video calls with doctors may require highly accurate images of one's body parts such as the neck, abdomen, or back. These areas can be difficult to capture using ordinary mobile phones. In addition, capture can require high-quality lighting and proper camera orientation. This contrasts with casual conversations with family or friends. Video appointments might also require that patients perform particular camera movements that they typically do not do on a tablet, laptop, or phone when using existing video chat tools (e.g., Skype, FaceTime). Because the camera is coupled with the display showing the camera's view (e.g., the phone), it can be hard to direct the camera to a particular area while also looking at the screen to see what is in view.
+
+These challenges suggest the need for tools that make it easier for patients to perform the necessary camera work during a video appointment. Tools could focus on ways to hold and move a camera for easier capture. For example, video conferencing software may include on-screen visuals [79] to show patients where to move the camera to capture an area of one's body, or augmented visuals [80], [81] to guide patients to perform certain actions. One could imagine customized versions that help users capture areas of their body after selecting a particular body part, e.g., clicking on 'knee' in the application could trigger visuals that guide the user to capture all views of a knee.
+
+One could also think about hardware tools or devices that would make it easier to hold a mobile phone camera, or to set it down, in order to capture body parts at awkward angles or locations. People already commonly use 'selfie' sticks or phone stands to take pictures. One could imagine custom-designed apparatus for video-based doctor appointments that lets a person more easily hold or set down their mobile phone to capture a body part on camera. Designs could also explore decoupling the camera device from the display device. For example, it may be easier to perform camera work during a video appointment if the camera could be held in one hand to show a body part while the user looks at a separate display to see what the camera is capturing. This is not normally done with existing video chat tools, since people often use devices with cameras built into them. External cameras could thus be highly valuable for video appointments.
+
+## 9 CONCLUSIONS AND FUTURE WORK
+
+Overall, our research points to the value that video-based appointments could bring to patients, from a patient-centric perspective. We have pointed to a variety of opportunities for additional design work to mitigate camera challenges and privacy concerns. While we are cautious not to suggest more specific design directions and implications without additional design work, existing video chat technologies (e.g., Skype) clearly do not map to the specific needs of video-based appointments. Their two-way calling model would mean that patients could try to connect with the doctor at inopportune times. They are also limited when it comes to the camera work that would be necessary and valuable for video-based appointments, including features to help mitigate privacy concerns and guide the user in capturing areas of their body on camera.
+
+Naturally, future work should study the needs and experiences of doctors when it comes to video-based appointments. Doctors may have different needs than the patients in our study, which might suggest further design accommodations and balances to address the needs of both groups of users. Our study is limited in that, by chance, we had a large number of female participants; few males contacted us to participate. Future work should further explore the concerns of males as well as others. We also recognize that other cultures might feel differently about video-based appointments. The patients we studied were mostly participating in Canada's publicly funded health care system, where they can visit a doctor at any time without paying for the visit. This could have affected their viewpoints in our study.
+
+## REFERENCES
+
+[1] M. Mars, Telemedicine and advances in urban and rural healthcare delivery in Africa, Prog. Cardiovasc. Dis., vol. 56, no. 3, pp. 326-335, 2013.
+
+[2] C. S. Kruse, S. Bouffard, M. Dougherty, and J. S. Parro, Telemedicine Use in Rural Native American Communities in the Era of the ACA: a Systematic Literature Review, Journal of Medical Systems, vol. 40, no. 6. Springer US, p. 145, 27-Jun-2016.
+
+[3] A. Mehrotra, A. B. Jena, A. B. Busch, J. Souza, L. Uscher-Pines, and B. E. Landon, Utilization of telemedicine among rural medicare beneficiaries, JAMA - Journal of the American Medical Association, vol. 315, no. 18. American Medical Association, pp. 2015-2016, 10-May-2016.
+
+[4] D. G. Armstrong, N. Giovinco, J. L. Mills, and L. C. Rogers, FaceTime for Physicians: Using Real Time Mobile Phone-Based Videoconferencing to Augment Diagnosis and Care in Telemedicine., Eplasty, vol. 11, p. e23, May 2011.
+
+[5] N. R. Armfield, M. Bradford, and N. K. Bradford, The clinical use of Skype-For which patients, with which problems and in which settings? A snapshot review of the literature, International Journal of Medical Informatics, vol. 84, no. 10. pp. 737-742, 2015.
+
+[6] J. McElligott et al., Toward a More Usable Home-Based Video Telemedicine System: A Heuristic Evaluation of the Clinician User Interfaces of Home-Based Video Telemedicine Systems, JMIR Hum. Factors, vol. 4, no. 2, p. e11, Apr. 2017.
+
+[7] J. M. Fisher, N. Y. Hammerla, L. Rochester, P. Andras, and R. W. Walker, Body-Worn Sensors in Parkinson's Disease: Evaluating Their Acceptability to Patients, Telemed. e-Health, vol. 22, no. 1, pp. 63-69, Jan. 2016.
+
+[8] K. Steel, D. Cox, and H. Garry, Therapeutic videoconferencing interventions for the treatment of long-term conditions, J. Telemed. Telecare, vol. 17, no. 3, pp. 109-117, Apr. 2011.
+
+[9] N. Suksomboon, N. Poolsup, and Y. L. Nge, Impact of Phone Call Intervention on Glycemic Control in Diabetes Patients: A Systematic Review and Meta-Analysis of Randomized, Controlled Trials, PLoS One, vol. 9, no. 2, p. e89207, Feb. 2014.
+
+[10] A. G. Del Signore, R. Dang, A. Yerasi, A. M. Iloreta, and B. D. Malkin, Videoconferencing for the pre-operative interaction between patient and surgeon, J. Telemed. Telecare, vol. 20, no. 5, pp. 267-271, 2014.
+
+[11] S. Canon et al., A pilot study of telemedicine for post-operative urological care in children, J. Telemed. Telecare, vol. 20, no. 8, pp. 427-430, Dec. 2014.
+
+[12] R. F. Dixon and J. E. Stahl, Virtual Visits in a General Medicine Practice: A Pilot Study, Telemed. e-Health, vol. 14, no. 6, pp. 525-530, Aug. 2008.
+
+[13] R. E. Powell, J. M. Henstenburg, G. Cooper, J. E. Hollander, and K. L. Rising, Patient Perceptions of Telehealth Primary Care Video Visits., Ann. Fam. Med., vol. 15, no. 3, pp. 225-229, May 2017.
+
+[14] J. Carroll, Making use: scenario-based design of human-computer interactions. MIT press, 2000.
+
+[15] T. Erickson, Notes on Design Practice: Stories and Prototypes as Catalysts for Communication, Scenar. Des., pp. 37-58, 1995.
+
+[16] L. Watts and A. Monk, Telemedical consultation, Proc. SIGCHI Conf. Hum. factors Comput. Syst. - CHI '97, pp. 534-535, 1997.
+
+[17] M. J. Field, Ed., Telemedicine: A Guide to Assessing Telecommunications for Health Care. National Academy Press, 1996.
+
+[18] P. Zanaboni and R. Wootton, Adoption of routine telemedicine in Norwegian hospitals: progress over 5 years, BMC Health Serv. Res., vol. 16, no. 1, p. 496, Dec. 2016.
+
+[19] H. Cole-Lewis and T. Kershaw, Text Messaging as a Tool for Behavior Change in Disease Prevention and Management, Epidemiol. Rev., vol. 32, no. 1, pp. 56-69, Apr. 2010.
+
+[20] J. Wei, I. Hollin, and S. Kachnowski, A review of the use of mobile phone text messaging in clinical and healthy behaviour interventions, J. Telemed. Telecare, vol. 17, no. 1, pp. 41-48, Jan. 2011.
+
+[21] K. Bykachev, O. Turunen, M. Sormunen, J. Karppi, K. Kumpulainen, and H. Turunen, Booking system, video conferencing (VC) solution and online forms for improving child psychiatric services in Pohjois-Savo region, Finnish J. eHealth eWelfare, vol. 9, no. 2-3, p. 259, May 2017.
+
+[22] J. P. Hubble, R. Pahwa, D. K. Michalek, C. Thomas, and W. C. Koller, Interactive video conferencing: A means of providing interim care to parkinson's disease patients, Mov. Disord., vol. 8, no. 3, pp. 380-382, Jan. 1993.
+
+[23] P. J. Klutke et al., Practical evaluation of standard-based low-cost video conferencing in telemedicine and epidemiological applications, Med. Inform. Internet Med., vol. 24, no. 2, pp. 135-145, Jan. 1999.
+
+[24] M. J. Field and J. Grigsby, Telemedicine and remote patient monitoring, Journal of the American Medical Association, vol. 288, no. 4. American Medical Association, pp. 423-425, 24-Jul-2002.
+
+[25] A. Banbury, L. Parkinson, S. Nancarrow, J. Dart, L. C. Gray, and J. Buckley, Delivering patient education by group videoconferencing into the home: Lessons learnt from the Telehealth Literacy Project, J. Telemed. Telecare, vol. 22, no. 8, pp. 483-488, 2016.
+
+[26] C. Robinson, A. Gund, B.-A. Sjöqvist, and K. Bry, Using telemedicine in the care of newborn infants after discharge from a neonatal intensive care unit reduced the need of hospital visits, Acta Paediatr., vol. 105, no. 8, pp. 902-909, Aug. 2016.
+
+[27] K. L. Bagot, N. Moloczij, K. Barclay-Moss, M. Vu, C. F. Bladin, and D. A. Cadilhac, Sustainable implementation of innovative, technology-based health care practices: A qualitative case study from stroke telemedicine, J. Telemed. Telecare, p. 1357633X1879238, Sep. 2018.
+
+[28] G. A. Constantinescu, D. G. Theodoros, T. G. Russell, E. C. Ward, S. J. Wilson, and R. Wootton, Home-based speech treatment for Parkinson's disease delivered remotely: A case report, J. Telemed. Telecare, vol. 16, no. 2, pp. 100-104, Mar. 2010.
+
+[29] J. H. Shin, G. S. Chung, K. K. Kim, J. S. Kim, B. S. Hwang, and K. S. Park, Ubiquitous House and Unconstrained Monitoring Devices for Home Healthcare System, in 2007 6th International Special Topic Conference on Information Technology Applications in Biomedicine, 2007, pp. 201-204.
+
+[30] L. Catarinucci et al., An IoT-Aware Architecture for Smart Healthcare Systems, IEEE Internet Things J., vol. 2, no. 6, pp. 515-526, Dec. 2015.
+
+[31] T. Tamura, T. Togawa, M. Ogawa, and M. Yoda, Fully automated health monitoring system in the home, Med. Eng. Phys., vol. 20, no. 8, pp. 573-579, Oct. 1998.
+
+[32] M. Ishijima, Monitoring of electrocardiograms in bed without utilizing body surface electrodes, IEEE Trans. Biomed. Eng., vol. 40, no. 6, pp. 593-594, Jun. 1993.
+
+[33] Yong Gyu Lim, Ko Keun Kim, and Suk Park, ECG measurement on a chair without conductive contact, IEEE Trans. Biomed. Eng., vol. 53, no. 5, pp. 956-959, May 2006.
+
+[34] C. C. Y. Poon, Yuan-Ting Zhang, and Shu-Di Bao, A novel biometrics method to secure wireless body area sensor networks for telemedicine and m-health, IEEE Commun. Mag., vol. 44, no. 4, pp. 73-81, Apr. 2006.
+
+[35] G. Appelboom et al., Smart wearable body sensors for patient self-assessment and monitoring, Arch. Public Heal., vol. 72, no. 1, p. 28, Dec. 2014.
+
+[36] M. E. Williams, T. C. Ricketts, and B. G. Thompson, Telemedicine and Geriatrics: Back to the Future, J. Am. Geriatr. Soc., vol. 43, no. 9, pp. 1047-1051, Sep. 1995.
+
+[37] Z. Tang, X. Guo, and B. Prabhakaran, Virtual rehabilitation system, in Proceedings of the ACM international conference on Health informatics - IHI '10, 2010, p. 833.
+
+[38] L. Piron et al., Exercises for paretic upper limb after stroke: A combined virtual-reality and telemedicine approach, J. Rehabil. Med., vol. 41, no. 12, pp. 1016-102, 2009.
+
+[39] R. Tang et al., Physio@Home: Exploring visual guidance and feedback techniques for physiotherapy exercises, in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15, 2015, pp. 4123-4132.
+
+[40] M. J. Rho, I. young Choi, and J. Lee, Predictive factors of telemedicine service acceptance and behavioral intention of physicians, Int. J. Med. Inform., vol. 83, no. 8, pp. 559-571, Aug. 2014.
+
+[41] S. Zailani, M. S. Gilani, D. Nikbin, and M. Iranmanesh, Determinants of Telemedicine Acceptance in Selected Public Hospitals in Malaysia: Clinical Perspective, J. Med. Syst., vol. 38, no. 9, p. 111, Sep. 2014.
+
+[42] R. Pietro Ricci et al., Long-term patient acceptance of and satisfaction with implanted device remote monitoring, Europace, vol. 12, no. 5, pp. 674-679, May 2010.
+
+[43] V. Garg and J. Brewer, Telemedicine Security: A Systematic Review, J. Diabetes Sci. Technol., vol. 5, no. 3, pp. 768-777, May 2011.
+
+[44] M. Terry, Medical Identity Theft and Telemedicine Security, Telemed. e-Health, vol. 15, no. 10, pp. 928-932, Dec. 2009.
+
+[45] M. L. Mat Kiah, S. H. Al-Bakri, A. A. Zaidan, B. B. Zaidan, and M. Hussain, Design and Develop a Video Conferencing Framework for Real-Time Telemedicine Applications Using Secure Group-Based Communication Architecture, J. Med. Syst., vol. 38, no. 10, p. 133, Oct. 2014.
+
+[46] F. Kargl, E. Lawrence, M. Fischer, and Y. Y. Lim, Security, Privacy and Legal Issues in Pervasive eHealth Monitoring Systems, in 2008 7th International Conference on Mobile Business, 2008, pp. 296-304.
+
+[47] A. Appari and M. E. Johnson, Information security and privacy in healthcare: current state of research, Int. J. Internet Enterp. Manag., vol. 6, no. 4, p. 279, 2010.
+
+[48] P. Sevean, S. Dampier, M. Spadoni, S. Strickland, and S. Pilatzke, Patients and families experiences with video telehealth in rural/remote communities in northern Canada, J. Clin. Nurs., vol. 18, no. 18, pp. 2573-2579, Sep. 2009.
+
+[49] A. L. Grubaugh, G. D. Cain, J. D. Elhai, S. L. Patrick, and B. C. Frueh, Attitudes Toward Medical and Mental Health Care Delivered Via Telehealth Applications Among Rural and Urban Primary Care Patients, J. Nerv. Ment. Dis., vol. 196, no. 2, pp. 166-170, Feb. 2008.
+
+[50] M. R. Gardner, S. M. Jenkins, D. A. O'Neil, D. L. Wood, B. R. Spurrier, and S. Pruthi, Perceptions of Video-Based Appointments from the Patient's Home: A Patient Survey, Telemed. e-Health, vol. 21, no. 4, pp. 281-285, Apr. 2015.
+
+[51] T. K. Judge and C. Neustaedter, Sharing conversation and sharing life, in Proceedings of the 28th international conference on Human factors in computing systems - CHI '10, 2010, p. 655.
+
+[52] A. W. T. Bates, Technology, e-learning and distance education. 2005.
+
+[53] C. Neustaedter, J. Procyk, A. Chua, A. Forghani, and C. Pang, Mobile Video Conferencing for Sharing Outdoor Leisure Activities Over Distance, Human-Computer Interaction, 2017.
+
+[54] M. G. Ames, J. Go, J. Jofish Kaye, and M. Spasojevic, Making love in the network closet: the benefits and work of family videochat, in Proceedings of the 2010 ACM conference on Computer supported cooperative work - CSCW '10, 2010, p. 145.
+
+[55] D. S. Kirk, A. Sellen, and X. Cao, Home video communication: mediating 'closeness', in Proceedings of the 2010 ACM conference on Computer supported cooperative work - CSCW '10, 2010, p. 135.
+
+[56] J. R. Brubaker, G. Venolia, and J. C. Tang, Focusing on shared experiences: moving beyond the camera in video communication, in Proceedings of the Designing Interactive Systems Conference on - DIS '12, 2012, p. 96.
+
+[57] U. Baishya and C. Neustaedter, In Your Eyes: Anytime, Anywhere Video and Audio Streaming for Couples, in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW '17, 2017, pp. 84-97.
+
+[58] T. K. Judge, C. Neustaedter, S. Harrison, and A. Blose, Family portals: connecting families through a multifamily media space, in Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11, 2011, p. 1205.
+
+[59] H. Raffle et al., Family story play: reading with young children (and elmo) over a distance, in Proceedings of the 28th international conference on Human factors in computing systems - CHI '10, 2010, p. 1583.
+
+[60] A. Tang, M. Pahud, K. Inkpen, H. Benko, J. C. Tang, and B. Buxton, Three's Company: Understanding Communication Channels in Three-way Distributed Collaboration, Proc. 2010 ACM Conf. Comput. Support. Coop. Work - CSCW '10, pp. 271-280, 2010.
+
+[61] A. Forghani and C. Neustaedter, The routines and needs of grandparents and parents for grandparent-grandchild conversations over distance, in Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI '14, 2014.
+
+[62] C. Neustaedter et al., Sharing Domestic Life through Long-Term Video Connections, ACM Trans. Comput. Interact., 2015.
+
+[63] K. Hasan, Ç. Ayça, and T. Y. Emrah, PERCEPTIONS OF STUDENTS WHO TAKE SYNCHRONOUS COURSES THROUGH, TOJET Turkish Online J. Educ. Technol., vol. 10, no. 4, pp. 276-293, 2011.
+
+[64] D. T. Nguyen and J. Canny, More than face-to-face: empathy effects of video framing, in Proceedings of the 27th international conference on Human factors in computing systems - CHI 09, 2009, p. 423.
+
+[65] R. Harper, S. Rintel, R. Watson, and K. O'Hara, The 'interrogative gaze': Making video calling and messaging 'accountable,' Pragmatics, vol. 27, no. 3, pp. 319-350, 2017.
+
+[66] A. Tang, O. Fakourfar, C. Neustaedter, and S. Bateman, Collaboration with 360° Videochat: Challenges and Opportunities, in Proceedings of the 2017 Conference on Designing Interactive Systems - DIS '17, 2017, pp. 1327-1339.
+
+[67] S. Follmer, H. Raffle, J. Go, R. Ballagas, and H. Ishii, Video play: playful interactions in video conferencing for long-distance families with young children, in Proceedings of the 9th International Conference on Interaction Design and Children - IDC '10, 2010, p. 49.
+
+[68] R. Pan, S. Singhal, B. E. Riecke, E. Cramer, and C. Neustaedter, 'Myeyes': The design and evaluation of first person view video streaming for long-distance Couples, DIS 2017 - Proc. 2017 ACM Conf. Des. Interact. Syst., 2017.
+
+[69] B. Jones, A. Witcraft, S. Bateman, C. Neustaedter, and A. Tang, Mechanics of Camera Work in Mobile Video Collaboration, in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15, 2015.
+
+[70] K. O'Hara, A. Black, and M. Lipson, Everyday practices with mobile video telephony, in Proceedings of the SIGCHI conference on Human Factors in computing systems, 2006.
+
+[71] K. Inkpen, B. Taylor, S. Junuzovic, J. Tang, and G. Venolia, Experiences2Go: sharing kids' activities outside the home with remote family members, in Proceedings of the 2013 conference on Computer supported cooperative work - CSCW '13, 2013, p. 1329.
+
+[72] L. Palen and P. Dourish, Unpacking 'privacy' for a networked world, in Proceedings of the conference on Human factors in computing systems - CHI '03, 2003, p. 129.
+
+[73] M. Boyle and S. Greenberg, The language of privacy: Learning from video media space analysis and design, ACM Transactions on Computer-Human Interaction, vol. 12, no. 2. ACM, pp. 328-370, 01-Jun-2005.
+
+[74] M. Boyle, C. Neustaedter, and S. Greenberg, Privacy Factors in Video-Based Media Spaces, in Media Space: 20+ Years of Mediated Life, Springer, London, 2009, pp. 97-122.
+
+[75] V. Bellotti and A. Sellen, Design for Privacy in Ubiquitous Computing Environments, in Proceedings of the Third European Conference on Computer-Supported Cooperative Work 13-17 September 1993, Milan, Italy ECSCW '93, 1993.
+
+[76] L. M. L. Ong, J. C. J. M. de Haes, A. M. Hoos, and F. B. Lammes, Doctor-patient communication: A review of the literature, Soc. Sci. Med., vol. 40, no. 7, pp. 903-918, Apr. 1995.
+
+[77] M. Nießner, J. Thies, M. Stamminger, M. Zollhöfer, and C. Theobalt, Face2Face: Real-Time Face Capture and Reenactment of RGB Videos, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2387-2395.
+
+[78] E. A. Miller, The technical and interpersonal aspects of telemedicine: Effects on doctor-patient communication, Journal of Telemedicine and Telecare, vol. 9, no. 1. SAGE Publications, London, England, pp. 1-7, 24-Feb-2003.
+
+[79] V. Domova, E. Vartiainen, and M. Englund, Designing a remote video collaboration system for industrial settings, in ITS 2014 - Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces, 2014, pp. 229-238.
+
+[80] D. S. Kirk and D. S. Fraser, Comparing remote gesture technologies for supporting collaborative physical tasks, in Conference on Human Factors in Computing Systems - Proceedings, 2006, vol. 2, pp. 1191-1200.
+
+[81] R. Sodhi, H. Benko, and A. D. Wilson, LightGuide: Projected visualizations for hand movement guidance, in Conference on Human Factors in Computing Systems - Proceedings, 2012, pp. 179-188.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..995174d92b997fac80a3ff9a79dba9df1eece01e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QADTLUoEIZ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,247 @@
+§ EXPLORING VIDEO CONFERENCING FOR DOCTOR APPOINTMENTS IN THE HOME: A SCENARIO-BASED APPROACH FROM PATIENTS' PERSPECTIVES
+
+Carman Neustaedter‡
+
+School of Interactive Arts and Technology, Simon Fraser University
+
+§ ABSTRACT
+
+We are beginning to see changes to health care systems where patients are now able to visit their doctor using video conferencing appointments. Yet we know little of how such systems should be designed to meet patients' needs. We used a scenario-based design method with video prototyping and conducted patient-centered contextual interviews with people to learn about their reactions to futuristic video-based appointments. Results show that video-based appointments differ from face-to-face consultations in terms of accessibility, relationship building, camera work, and privacy issues. These results illustrate design challenges for video calling systems that can support video-based appointments between doctors and patients with an emphasis on providing adequate camera control, support for showing empathy, and mitigating privacy concerns.
+
+Keywords: Mobile video communication, doctor appointments, domestic settings, computer-mediated communication.
+
+Index Terms: Human-centered computing-Empirical studies in HCI
+
+§ 1 INTRODUCTION
+
+Telemedicine involves the use of video conferencing systems to support remote consultations with patients. Telemedicine systems can be valuable as people who live far away from medical resources or face health challenges (e.g., chronic illness, mobility issues) may find it very hard or even impossible to see a doctor in person [1]-[3]. People are now able to have video-based appointments with general practitioners using commercially available technologies like Skype, FaceTime or specialized telemedicine video systems [4]-[6]. For example, we are now seeing a proliferation of apps for video-based doctor appointments, such as MDLive and Babylon. With this comes a strong need to ensure video conferencing systems are designed appropriately in order to meet the needs of both patients and doctors.
+
+Historically, telemedicine systems have been studied with a strong focus on specialist appointments, for example, certain chronic diseases [7]-[9] or surgery [10], [11]. In contrast, there has been less focus on system designs for patient visits with general practitioners and even less focus on understanding the design needs of patients for such systems. For example, studies have explored people's level of satisfaction and convenience with remote doctor appointments [12], [13], rather than explorations of the socio-technical challenges involved in video-based appointments and the design challenges that exist for video conferencing systems aimed at supporting appointments. This makes it unclear how to design systems that move past basic video chat software (e.g., Skype, FaceTime) capabilities.
+
+For these reasons, our work explores in-home video appointments between people and their family physician. We were interested in understanding how patients would react to appointments focused on a range of topics from common colds to privacy invasive situations, where one could use a mobile phone and video chat software (e.g., Skype) to meet with their doctor from home. Our overarching goal was to understand what design needs and opportunities exist for video conferencing systems focused on home-based doctor appointments to meet the needs of patients, though clearly future work is needed from the perspective of doctors. We also focused specifically on conducting our study in a manner that did not expose patients directly to privacy-invasive appointments. Here we relied on scenario-based design methods [14], [15] that allow participants to examine interactions with future technologies in a grounded way.
+
+We conducted an exploratory study with twenty-two participants who had visited doctors for general medical conditions. We were purposely broad with our sample and included diverse age groups, occupations, and cultural and ethnic backgrounds. The goal was to raise as many design challenges and opportunities as possible, which comes from sampling a broad spectrum of participants. Future work should consider narrowing in on particular populations and types of visits, informed by our work, which points to cases and situations that would be useful to explore further. We first interviewed participants about their past in-person experiences. This allowed us to learn where challenges exist and helped inform our understanding of patient needs for video-based appointments. We then used six video scenarios depicting video appointments to conduct focused interview conversations with our participants. The videos ranged from non-invasive situations such as a cold to privacy-intrusive cases such as a physical exam of one's private parts. In contrast to other study approaches, where we might have investigated actual video-based appointments or role-plays, the scenarios allowed us to gauge participants' reactions to privacy-sensitive situations without putting them directly in harm's way and risking their own privacy.
+
+Our results show that video-mediated appointments could raise issues around accessibility, relationship building, camera work to capture visuals of one's body, and privacy concerns about private information disclosure. Thus, while video-based appointments could be valuable for patients, systems to support them must be carefully designed to address these concerns. Existing commercial video conferencing systems (e.g., Skype, FaceTime) are not mapped well to the needs of patients for video-based appointments and more nuanced designs are required.
+
+* email: dongqih@sfu.ca
+
+† email: yheshmat@sfu.ca
+
+‡ email: carman@sfu.ca
+
+§ 2.1 MEDICAL HEALTHCARE OVER DISTANCE
+
+Telemedicine systems were created to help remote populations with limited medical resources connect with physicians and specialists in urban centers [16]-[18]. They have also been designed to support people who are unable to visit a doctor in person due to difficulties such as age, disability, or diseases [3]. Doctors have been able to communicate with patients via text message [19], [20], phone call [9], or video call [21]-[23]. Telemedicine uses have also advanced over the years to serve a broader spectrum of users and not just those in rural areas with mobility issues [24], [25]. This has allowed doctors to provide more attention to patients over relatively long periods of time [10], [26] such as patients with chronic diseases [9], [27], [28].
+
+In addition to telemedicine systems, ubiquitous monitoring instruments have been designed and deployed in home environments to aid health care [29]-[31]. Sensors have been embedded into furniture such as beds [32] and couches [33], or attached to the human body [7], [34], [35] to monitor physiological signals. Traditional diagnosis or treatment procedures become different when direct physical contact is unavailable [36]. For example, physical interaction systems can be used to transfer haptic feedback between physicians and patients [37]. Computer-aided virtual guidance has been applied to help patients conduct physiotherapy exercises [38], [39]. Factors such as system usefulness and ease of use, policy and management support, and patients' relationships with health providers have been found to be key to telemedicine system success and acceptance [40]-[42]. Security and privacy concerns have also been explored in relation to telemedicine, considering the confidentiality of medical information [43], [44]. Researchers have tried to resolve security concerns by strengthening access control [45]-[47].
+
+Most closely related to our work, researchers have explored video-based doctor appointments through questionnaires and interviews where respondents have provided their general reactions to the idea of having a video-based appointment. From this work, we know that people feel video visits will lessen travel time and costs [48] and like the idea of having an appointment from the comfort of their home [49], [50]. Several researchers have also studied actual video-based doctor appointments. Powell et al. [13] interviewed patients after having a video-based appointment in a medical clinic office. Users reported video being convenient and only having minor privacy concerns with people overhearing the call [13]. Dixon and Stahl [12] rated patients' experiences using a video visit compared to an in-person visit after having one of both in a clinic. People preferred in-person visits but were generally satisfied with video visits [12]. In all cases, appointments were related to fairly mundane topics and privacy sensitive situations were not explored.
+
+We build on these studies by exploring why people have specific technology preferences and social needs, along with descriptions of the concerns people have with video appointments. This helps inform user interface and system design. Our work also differs in that we explore a range of appointment scenarios, some with potentially large privacy risks, which are not easy to explore with real appointments given ethical concerns. In addition, our work studies in-home usage rather than video conferencing usage in a clinic or doctor's office; this contrasts with prior work [12], [13]. Usage in a home may see different concerns and reactions because users are giving the doctor visual access into their home and are without medical instruments or assistance, unlike in a doctor's office.
+
+§ 2.2 VIDEO COMMUNICATIONS
+
+Video conferencing has been widely used amongst family and friends and in work and educational contexts [51]-[56]. People share views or activities via video calls, which can help create stronger feelings of connection over distance and a greater sense of awareness of others [57]-[61]. Applications range from supporting casual conversation to formal meetings [56], [58], [62]. Despite the benefits of video communication, it can still be difficult to generate the same feelings and situations via video calls as found in face-to-face communication [51]. First, people can feel that there is a barrier when watching via a computer [55], [63]. Factors such as narrow fields of view and a lack of mobility can cause users to be aware of the distance between people in video calls [55], [64]. It can also be difficult to maintain eye contact because of displacements between cameras and the video view of the remote user [40]. There are also issues with feeling like one has to continually show their face on the video call [65].
+
+Some researchers have explored ways to increase feelings of connection over distance. For example, this has involved presenting a larger camera view and additional camera control to improve engagement with remote scenes [66], deploying interactions to support virtual shared activities over distance [59], [67], or sharing first-person views to enhance feelings of copresence [68]. When mobile phones are used for video conferencing, one of the main challenges is 'camera work,' the continual reorienting of the camera by moving one's smartphone in order to ensure the remote person has a good view [62], [69], [70]. Local users streaming the video via their phones desire hands-free cameras that are easy to move [71]. Remote users desire the ability to gesture at things in the scene [69]. We explore the camera work needed for home-based video appointments with doctors, which has not been explored in prior studies.
+
+We also know that video conferencing systems have been fraught with privacy concerns, despite their benefits. Online privacy issues typically relate to how users' information is mediated by media [72]. Privacy theory in video communication deconstructs privacy into three inter-related aspects: solitude, confidentiality, and autonomy [73]. Solitude relates to having control over one's availability (e.g., can a person gain enough time on their own?) [73]. Confidentiality concerns how information is disclosed to others (e.g., is any sensitive background material shown on camera?) [73]. Lastly, autonomy pertains to having control over how one can interact in a video-mediated communication system (e.g., can a person choose when to use various features and for what reasons?) [73]. For example, with video calling, it can be easy to stand out of the camera's view yet still possible to see what is on the video screen or overhear the video call's audio [74], [75]. Situations like these infringe on people's confidentiality and autonomy at the same time. Across the literature, privacy concerns with video-mediated communication systems often relate to issues around showing the background of one's environment (e.g., a messy room) or a person's appearance not looking good on camera [53], [59], [74], [75]. Privacy challenges in relation to solitude, confidentiality, and autonomy have not been thoroughly explored for video-based primary care appointments in the home. Our study builds on past research that explores privacy in work and family communication situations while using video communication systems.
+
+§ 3 EXPLORATORY STUDY METHOD
+
+We conducted an exploratory study to understand what aspects of appointments patients feel are important and what benefits or challenges exist for video-based doctor appointments from one's home. Our study was approved by our university research ethics board and we took great care and caution to conduct our study in a manner that did not increase privacy risk for patients.
+
+§ 3.1 PARTICIPANTS
+
+The study enrolled a total of 22 participants (17 females, 5 males) who had visited doctors. We recruited participants through snowball sampling, posting advertisements on social networks and university mailing lists. The gender imbalance was unintentional and based solely on who responded to our participant call and was willing to participate. Seventeen interviews were done in person either on our university campus or at participants' homes, whichever they felt comfortable with. Five of the interviews were done over Skype. The participants were all adults within the age range of 19-71 (average = 37, SD = 16). Participants had a range of cultural and ethnic backgrounds, including individuals of European, Asian, and Middle Eastern descent. To reach a diverse data set, we recruited participants from different age groups, occupations, and cultural and ethnic backgrounds. We purposely chose a diverse group of participants so that we could find out as many benefits and challenges as possible in terms of designing technology for supporting video-based appointments. Thus, the goal was to be exploratory such that future studies could then investigate the areas of opportunity and concern revealed by our study in more detail. Some of the participants visited the doctor regularly for reasons such as blood pressure checks, gout, anxiety control, arthritis, depression, and digestive system issues. Others visited the doctor only when sick or for checkups.
+
+§ 3.2 METHOD
+
+We used semi-structured interviews to gain an in-depth understanding of patients' experiences with in-person doctor appointments and thoughts about video-based appointments. Each interview contained two sections. In the first section, participants talked about their past doctor appointment experiences. In the second section, they were shown six video scenarios, each of which dealt with a distinct medical condition, and we interviewed them about their reactions. Participants could choose between a female or male interviewer in order to feel more comfortable sharing their private medical experiences or their personal opinions. Eleven participants (7 females, 4 males) had an interviewer of the same gender; 10 female participants had a male interviewer and 1 male participant had a female interviewer. The interviews lasted between 50 and 90 minutes.
+
+§ 3.2.1 IN-PERSON EXPERIENCES
+
+To learn more about patients' face-to-face appointments with doctors, we asked them to talk about their past appointments that they felt went well or not. The goal was to use this knowledge to understand what aspects of in-person appointments should be maintained or improved in a video-based appointment. Furthermore, the shift of doctor appointments from in-person to online might bring both challenges and opportunities that were unknown to us. Thus, an understanding of in-person doctor appointments would benefit our thoughts on how to design video conferencing systems. It would also act as a form of baseline.
+
+We purposely grounded this interview phase in questions about specific appointments, as opposed to more general thoughts, to acquire detailed and specific data. When recalling these visits, participants were asked to describe the details, such as their conditions, the examinations performed, how the diagnoses were made, what treatments were provided, whether they had follow-up visits, etc. In this way, the questions would help them recall as much information as possible about the appointment. As examples, we asked, "Can you tell me about a visit you felt went very well (or not well)?", "How did you describe the situation to your doctor?", "What worked well?", and "What did not work well about the visit?" At the end of this interview section, participants were asked about their general opinions on the necessity of face-to-face office visits (as opposed to video-based appointments) and what factors they believed were important during the appointments. This section of the interview lasted 20 to 30 minutes.
+
+§ 3.2.2 FUTURE SCENARIOS
+
+§ 3.2.2.1 SCENARIO PLANNING AND PRODUCTION
+
+Next, we conducted scenario-based interviews with participants. We wanted participants to understand the concept of video conferencing between a doctor and a patient and to ask them about specific attributes of such appointments, which may be hard to imagine. At the same time, we wanted to explore topics that were both commonplace and privacy intrusive without infringing on the privacy of our participants. For these reasons, we employed an approach similar to scenario-based design [14], [15], which is often used in the early stages of design cycles in the field of human-computer interaction. The goal is to elucidate the use of novel technologies that might not yet exist or be widely used. This approach engages users in exploring both the design opportunities and challenges that might exist for a technology through storytelling and conversation. With this type of method, participants can be shown pre-recorded videos of people and design artifacts, which are then used as a conversation piece to discuss future technology usage. In our case, we wanted to illustrate aspects such as what could be seen on video during an appointment, including the environment, facial expressions, gestures, and one's body, and how smartphones might need to be oriented or used to capture such information.
+
+Other study options might involve investigating real video-based appointments or letting participants role-play as opposed to showing them videos; however, we felt these posed serious ethical challenges. First, it would be difficult and highly privacy intrusive to observe real appointments about privacy-sensitive topics, e.g., talking about drug usage or domestic abuse, or conducting a visual exam of one's private areas. Second, role-playing such appointments could similarly be awkward and privacy intrusive. In contrast, we felt that pre-recorded video scenarios would allow us to gauge participants' reactions to privacy-sensitive situations without putting them directly in harm's way and risking their own privacy. Pre-recorded videos would also give us control over what participants saw, as they would each see the same situation. This meant we could learn about everyone's reactions to the same situations, and we could explore multiple appointment topics with each participant rather than just one.
+
+Prior to the study, we planned and pre-recorded six sample doctor-patient appointments using a video conferencing system. This involved brainstorming possible appointments and the likely benefits and challenges that might exist for patients and doctors. We narrowed down a large list of scenarios to a set of six that we felt mapped to a range of experiences. We then iteratively generated scripts and storyboards for each video. These were reviewed with a doctor who conducted video-based appointments to ensure the appointments we depicted were realistic.
+
+We chose scenarios based on several aspects. First, we wanted the scenarios to cover common medical conditions where appointments would normally be conducted in a clinic, including conversation between the patient and doctor, visual examinations, or physical touching. Second, we selected scenarios that would require a variety of camera work to facilitate the video call, e.g., orienting the camera to have a view of the patient's whole body, face, or particular areas like the mouth. Third, we selected scenarios with different levels of potential privacy concerns to receive a variety of reactions from participants. Some appointments were felt to be somewhat mundane and nonproblematic (e.g., a cold), while others were purposely meant to offer problematic situations in different ways (e.g., problems with conversations, problems with what is shown on camera). The resulting scenarios were:
+
+Figure 1: Images depicting the video scenarios that were shown to participants. In each video, from left to right: third-person view of patient, camera view of patient, camera view of doctor.
+
+1) Cold: The patient had a cold and sore throat. The doctor asked the patient to explain the symptoms and show their throat with the mobile phone camera.
+
+2) Fall while jogging: The patient described falling down while jogging and was asked to show the injuries. The patient followed the instructions of the doctor to uncover their stomach region and press on different locations to inspect for internal injuries.
+
+3) Sleeplessness: The patient explained that they were having sleepless nights. They were asked about alcohol and coffee intake. The doctor said they would send a referral to a counseling office.
+
+4) Drugs: The patient had a rash on their arm. After excluding the common possible causes, the doctor asked the patient about using drugs which made the patient feel awkward as they didn't realize the possible connection.
+
+5) Domestic abuse: The patient described being dizzy and having a bruise on their forehead. They were asked questions about their cognitive competence, then the patient confided in the doctor that there was partner abuse.
+
+6) Private parts: The patient and doctor discussed results from an annual physical exam. The doctor asked about the patient's sexual history and a lump on the patient's groin area. The doctor instructed the patient to show their private parts and the doctor performed a visual examination.
+
+Next, we recorded each video scenario twice within a home setting in our lab, once with male actors playing both the patient and the doctor, and once with female actors playing both roles. The videos used the same basic script. Figure 1 shows an image from the female versions of the videos. In each video, participants were shown: 1) a third-person view of the patient holding their mobile phone so they could understand the camera work that was needed to capture the video (left side of the video); 2) the camera view of the patient (the middle of the video); and, 3) the camera view of the doctor (right side of the video). When the videos revealed the patient's body, we blurred parts that would normally be hidden under clothes. This was to protect the privacy of the actors in the videos. For Scenario 6, we did not record the portion of the video showing private areas. Instead, we showed a masked portion of video. Videos were between 1.5 and 2 minutes each.
+
+§ 3.2.2.2 SCENARIO-BASED INTERVIEW METHOD
+
+In the study, we showed and asked participants questions about the six scenarios, one at a time. Participants watched the videos that mapped to their gender selection. We felt that this mapping might generate stronger empathy from our participants and help them to imagine how they would feel if they were in the same situation as the actor. We did not counterbalance the ordering of the scenes as we wanted to ease the participants into the idea of video appointments with somewhat mundane situations first and our work was meant to be exploratory rather than a carefully controlled experiment. This does have the limitation that the order of the scenes could have affected participants' thoughts about them.
+
+After watching a video, participants were asked to provide reactions to the specific situation, where we asked what they saw as the benefits or challenges of using a video call for the appointment. These questions included, for example, "How would you feel if you were the patient in the video?", "How would you compare an in-person appointment with that in the video call?" We then repeated this for each video, one-by-one. We also asked for their opinions on camera control, privacy concerns, and the use of different types of technologies. The scenario-based interview lasted about 40 minutes. Participants received $20 for participating.
+
+§ 3.3 DATA COLLECTION AND ANALYSIS
+
+All the interviews were audio-recorded and fully transcribed. We recorded videos of the participants watching the scenarios and talking about them (with permission). Two researchers analyzed and coded the data. Each researcher coded the data separately, then one researcher merged the coding and conducted additional analysis. The researchers also discussed their analysis and coding. We used open coding to label all the findings in the transcripts. Afterwards, through the use of axial coding, we formed categories such as privacy, benefits and challenges of using video calls, trust, physical examination, etc. Lastly, selective coding was used to find high-level themes including accessibility, empathy and trust, camera work and privacy concerns. In the following sections, we describe our findings under the four main themes. Quotes from participants are presented with participant ID as P#. We intermix descriptions of patients' past experiences and their opinions of video appointments as a way to further analyze video appointments.
+
+§ 4 ACCESSIBILITY
+
+Participants felt that video appointments could create a lower barrier for accessing one's doctor than in-person appointments. Participants described visiting their doctor based on their own judgements about when it was important to do so. Many of them said they would not bother to see a doctor for what they felt were minor issues (e.g., a general cold or bruises), instead assessing the problem themselves, sometimes with the aid of web searches. Participants said that often they were not sure whether they should visit a doctor or not. Some felt that a lower barrier to meeting with one's doctor might make it easier for them to meet about more situations where they were unsure as to whether an appointment was necessary.
+
+Instead of you waiting for a week to visit the doctor you can use this system to have primary comfort to know how serious or not the problem is until you find an appointment time. -P11, female, 33
+
+§ 5 EMPATHY AND TRUST
+
+Relationship building is one of the essential aspects of doctor-patient communication. Similar to prior research [76], participants told us that body language was important during conversations with their doctor when in-person. Conversations involved eye contact and body gestures. By looking patients in their eyes, nodding while listening, or using hand gestures when explaining things to them, the doctor could let patients feel like their conditions were heard, their feelings were understood, and their problems were trying to be solved.
+
+Participants felt that a video-based appointment would cause changes to the ways that body language was conveyed and seen. For example, doctors could be multi-tasking on their computer or not fully paying attention.
+
+I think over a video call it's hard to know if the person's attention is only on you because they might have other tabs open and stuff...Whereas if you're in-person, you know through their body language and through their eye contact that they're actually focusing on you. - P1, female, 19
+
+The user interface in our video scenarios tended to only show the doctor's face and shoulders, akin to a typical video chat (Skype) call. Yet participants described wanting to see more parts of the doctor's body during appointments. For example, one participant wanted to see the doctor's face and upper body, including their arms and hands in case the doctor gestured with them. This would make the appointment feel 'more real'. One participant said it might be difficult to have eye contact with the doctor if the camera was not at the right angle.
+
+§ 6 CAMERA WORK
+
+We talked with participants about the camera work that would be needed within video-based appointments and they saw various aspects of it in the video scenarios. By camera work we refer to the orienting of the smartphone camera such that it can capture the information desired by doctors.
+
+§ 6.1 VISUAL EXAMINATIONS
+
+First, all participants talked about the doctor doing visual examinations of their body or parts of it during their previous appointments. When it came to video-based appointments, participants expressed concerns about whether the camera could clearly show body parts in order to support visual examinations by the doctor. There were several conditions in our video scenarios that contained visual exams, e.g., showing down one's throat, wounded legs and foreheads, rashes, and genitals. In these cases, participants felt that the camera resolution, color accuracy, network quality, and light intensity could be a problem. Participants felt that visual checks would be less accurate over a video call than in-person. This could cause them to lose some trust when it came to their diagnosis. Based on their past video chatting experiences, several participants noted that the quality of static images was much better than that of showing via video, as video resolution is highly limited by network bandwidth. Thus, they felt that images may be better for information sharing in some situations. One caveat is that this could require careful camera work in order to hold the phone steady for a picture.
+
+I think if there's a camera that can just take a snapshot of your throat...just like when you go and get your x-ray of your mouth for your teeth...which could automatically be sent to the doctor rather than you go "aww". - P13, female, 34
+
+We asked participants if they would have different reactions to aspects of camera work if they were using a desktop computer with a webcam or a 360-degree camera that could automatically capture the entire scene. In this case, the doctor could look at various parts of the patient's body without the patient having to move the camera around. Participants generally felt that the extra wide field of view provided by a 360-degree camera would not aid visual examinations. Participants felt that mobile phones were better for situations when they wanted to show body parts as their phone was highly mobile and they could bring it close to their body. However, one participant said a mobile phone camera would be inconvenient when she needed to perform certain actions with two hands, such as lifting her shirt and pressing her abdomen at the same time (e.g., the fall while jogging scenario). She also pointed out that it would be tiring to hold the phone the whole time when talking with the doctor.
+
+I can see that, given a long consultation, the patient probably gets tired that she has to hold the phone and it's not comfortable anyway...The patient only has two hands to set and hit the body. With the mobile phone, she really needs one hand. - P17, female, 42
+
+Lastly, participants talked about the importance of the doctor seeing everything that occurred in a doctor's office, from the way the patient sat in an office chair to how they moved to an examination table.
+
+The physical examination of the patients starts when the patient opens the door...you take a look at their appearance, the way that they walk, if they are so tired, how they carry their bodies, how they walk. The general appearance of a patient is so helpful. When you are Skyping with someone or you are Face Timing with someone, it's really impossible to get the general idea of how the patient is walking, how the patient is doing stuff. - P8, female, 31
+
+Participants felt that every subtle detail was important for the doctor to see because it could relate to things that the patient did not think to tell the doctor. For example, one might not think to tell the doctor that their foot was sore after a bicycle fall but this could be noticed when a person walked. Participants noted that such aspects might not be visible during a video-based appointment since a mobile phone's camera would likely be pointed at the patient's face rather than their entire body. The room's lighting or Internet bandwidth may also compromise what was visible.
+
+You can find so many precious points about so much precious information about the patients by doing physical examination. For example, sometimes patients forget. You are doing the physical examination on their chest and you see a scar on their sternum, and you ask them what this scar is, and the patient is like, 'Oh, now I remember. I had a surgery 20 years ago or something. I forgot doctor. Sorry.' - P8, female, 31
+
+Participants also talked about the chance of patients lying or hiding details from their doctor. While nobody admitted to doing so, participants felt it would be easier to lie in a video-based appointment. Supposed indicators of lying, such as subtle eye movements or discomfort while sitting, may be harder to notice. Some commented that people may also be more inclined to lie over a video call because they were 'online' and not in-person. These issues were seen as compromising a doctor's diagnosis.
+
+§ 6.2 PHYSICAL TOUCH
+
+Participants' past in-person appointments often involved palpation to feel their body. This was directly explored in the falling while jogging scenario where the patient in the video had to press their own abdomen following the doctor's instructions. Some participants believed that the patient could perform this action on their own as long as the doctor could clearly see the patient's actions. The challenge, as previously mentioned, was that this could require very careful camera work in order to both capture the action on video and touch oneself at the same time. Participants also talked about not being properly trained in some cases and having a hard time following a doctor's instructions when it came to physical touches; thus, people were apprehensive about doing this work as the patient.
+
+The patient could probably apply less pressure than needed to feel versus a doctor. A doctor can physically tell if it's serious or not instead of having patients to let him know. - P6, male, 24
+
+§ 7 PRIVACY CONCERNS
+
+Participants had several privacy concerns when it came to video-based appointments like those in our scenarios, including issues with both video and audio.
+
+§ 7.1 PRIVATE VISUALS
+
+First, we talked with participants about their reactions to showing visuals of themselves on camera that might be considered privacy sensitive. All participants were fine with showing non-private areas of their body to remote doctors, as they saw in our scenarios. Yet when it came to showing private body parts (e.g., groin area, chest) over video, all participants expressed concerns about privacy, in particular their confidentiality and autonomy, and preferred to visit their doctor in-person. Many participants thought it was weird to show their private parts over a video call as they had never experienced it before. They were concerned about the security of the video link and worried that it may get 'hijacked', again an issue of confidentiality. Several participants also said that they did not know if anybody was in the doctor's office but off-camera and able to see or hear the appointment. Thus, they had concerns in relation to autonomy and their ability to participate in the video-mediated space in a way that they desired. In contrast, they felt that when in a doctor's office in-person, the patient would know for sure who was in the room because they could see all areas within it.
+
+Participants also had privacy concerns when it came to situations such as the domestic abuse scenario and raised several specific issues, albeit these varied across groups of participants. Participants talked about what it would be like to be in a situation involving domestic abuse. Several participants said that staying at home and having a video appointment was a better choice as the private information, bruises in this case, would not be visible to people other than the doctor. They thought that as a patient they might feel uncomfortable outside their house and be noticed by people on their way to the doctor's office or in the waiting room. Other participants talked about how a video appointment at home could present additional risk since an abusive partner could come home unexpectedly. They felt that when in-person, only the patient and doctor would be in the doctor's office. The patient would be safe, and the conversation would be private as well.
+
+Because you don't want neighbors to see anything or a random stranger to think, 'Oh my god, she got beat up. She's in a bad situation.' And in the conferencing, she could just talk more openly and say, 'Okay, I'm sharing this with you. You're the only one that sees it.'. - P21, female, 68
+
+Consulting with a doctor at home will increase the risk of abuse again. - P3, female, 21
+
+Given the privacy concerns that participants expressed, we asked them about possible ways of mitigating their concerns. For example, we talked with them about the possibility of blurring their face in the video feed during situations such as the private parts and domestic abuse scenarios so they would feel more comfortable with the appointments and could have a video call in a more anonymous fashion. This was seen as valuable by eight participants, though two of the remaining participants pointed out that it could make it harder to get accurate diagnoses since the doctor would not know the patient's history. Doctors may also not be able to understand the patient's facial expression, which could help them assess the severity of a situation.
+
+I think [blurring faces] is very good. For example, when you want to go there and talk about drinking or marijuana or private parts, these kinds of things. I know people that don't go to doctor at all just because they don't want to talk about it with another person. - P12, female, 33
+
+You can read the expressions of the people's face, eyes. 'Okay, this lady is really scared ... or she knows it's a minor thing, so she's not really worried about it'. - P21, female, 68
+
+Participants were asked if they would feel any different in terms of privacy if the doctor was a different gender than they were. Five female participants explained that they preferred a doctor of the same gender for health problems that they felt were private. All male participants felt okay with doctors of both genders regardless of the situation.
+
+Especially if it's not my regular family doctor, I would not want a male there. Actually, even if it was my family doctor, I usually try to find the public nurses, like female. - P9, female, 32
+
+I guess I don't really care what gender my doctor is, as long as they're professional. - P7, male, 27
+
+§ 7.2 PRIVATE CONVERSATIONS
+
+Second, we talked with participants about the kinds of conversations occurring over the video appointment scenarios and how comfortable they would be having such conversations with a remote doctor; responses varied. Some participants believed that telling the doctor about their medical conditions was not embarrassing as the doctor was a professional, and that they should be honest and explain everything. In contrast, other participants felt embarrassed about the conversations in all of the scenarios except the cold and fall while jogging. Most female participants felt embarrassed talking about sensitive issues such as relationships, sexual practices, abuse, and drug consumption. None of the male participants had the same concerns. Participants also commented that some people may be less inclined to have conversations about sensitive topics due to cultural backgrounds and taboo topics.
+
+People have such different cultural backgrounds that something that is not taboo with some person could be really taboo to another, and really affecting their ability to communicate what's really going on. - P16, female, 38
+
+§ 7.3 CAMERA CONTROL
+
+Third, we probed participants about privacy in relation to video capture and who had control of the camera, them or the doctor. We asked participants whether they would be okay giving up control of the camera to the remote doctor, if it was possible. For example, one could imagine placing the phone in a stand with remote-controlled pan and tilt features. In general, responses were based on the amount of trust a person had built up with their doctor and how strong they felt their relationship was. The majority of participants said that giving up control of the camera was fine because they trusted their doctor and felt that, with control of the camera, the doctor could more easily acquire their preferred viewpoint.
+
+He would know exactly what he's looking for. Or he'd be able to focus it better to take a look at what it is that he needs to look at or tell you exactly where to press to figure out what is the extent of the injury or just so that he has enough information to make the correct diagnosis. - P20, female, 61
+
+A few participants said they would only give up control if it was necessary. One wanted to be informed before and during the call about what the doctor wanted to see. P1 likened this to an experience she had where she needed online services to help her fix her computer. She felt that having patients be able to monitor what the remote person was doing with the camera would dispel privacy concerns.
+
+[Online support] always asked me if they can control my computer...The first time I did it, well that's a pretty big step for me. But then I realized I can see exactly what they're doing and then they're working in an office space. I think I can trust them. - P1, female, 19
+
+One participant talked about giving the doctor more control, such as the ability to capture images and draw annotations on them. This could help illustrate things to patients.
+
+Maybe if the doctor could take screenshots and then annotate them, and then show those to the patient, saying like 'Oh, you need to take care of this part of your mouth, like this tooth,' ...Or 'Oh, I see this here,' and then they circle it. 'Can you apply this kind of medicine to that part of your mouth?' - P7, male, 27
+
+We also asked participants about newer technologies that might be used as a part of video appointments to give the doctor a better view of the patient or their environment. For example, we asked about 360-degree and wide field of view cameras. Some participants said they felt it was unnecessary for a doctor to see an entire room. Others were okay, again, if they knew what a doctor was looking at and if it was useful for the appointment and a diagnosis.
+
+§ 7.4 VIDEO RECORDING
+
+Participants talked about the possibility of video calls being recorded and found this troubling. Ten of them said they did not want their video-based appointments to be recorded. Moreover, some expressed concerns that the doctor might capture screenshots of the video without their knowledge or permission. For example, one participant talked about the potential for misuse that exists when people have access to private information:
+
+I worked in a computer networking in [organization name], we could access passwords of the users, but we were not allowed to tell this to users. We never used it. But we could have. - P12, female, 33
+
+Two participants said that it could be valuable to have video recordings of appointments in order to have a more complete history of one's medical record, yet there were large concerns over who would have access to this video data.
+
+§ 8 DISCUSSION
+
+We now discuss our method and results to explore the challenges and design possibilities for video-based appointments between doctors and patients.
+
+§ 8.1 SCENARIO-BASED DESIGN
+
+We employed a study method that built upon scenario-based design, given the difficulties and privacy risks in observing and talking with participants about real appointments. By presenting participants with vivid and graphically rich video clips, we were able to illustrate a series of scenarios that we wanted to explore in detail. This was far beyond what we likely would have been able to achieve through verbal descriptions alone. Participants reacted positively to the method and were able to engage in detailed conversations with us as researchers. Thus, we feel our approach is especially helpful when the situations one wants to explore are non-existent or rare at present; even people who already make video calls are unlikely to be using them for risky, privacy-sensitive situations. Of course, having participants take part in actual doctor appointments would move reactions beyond the types of speculations that participants offered in our study. However, it would be critical that studies of such appointments be carefully designed to counterbalance the possible effects and ethical dilemmas found with privacy-intrusive studies. We feel that one value coming out of our study is a better understanding of how people will react to privacy-intrusive video-based appointments, which can help researchers plan studies of actual appointments in a way that minimizes privacy and ethical risks.
+
+§ 8.2 PRIVACY ASPECTS IN SENSITIVE SITUATIONS
+
+It is clear from our results that video visits are currently not a replacement for all types of doctor-patient appointments. Participants saw video-based appointments as supplementary to seeing their doctor in person. Some privacy-invasive situations, for example, those involving showing one's private areas or talking about sexual matters, are clearly not good candidates for video-based appointments for now. Some people are also unwilling to disclose private information, be it visually or aurally communicated, and rightfully so. Thus, they are concerned about the confidentiality of information and how it is disclosed over video [73]. Several concepts related to confidentiality were reflected in our findings, including information sensitivity, concerns about video fidelity, and control over its capture [73]. First, there were issues related to impression management. Having patients reveal sensitive information could create embarrassment or identity issues with self-esteem. Some of our participants felt it would be valuable to blur their faces, which would allow them to 'detach' themselves from their identity and make the conversation less personal. Second, fidelity involves the persistency of information [73]. In the case of video-based appointments, this could include the recording of conversations. Future design work could explore how to reduce patient anxiety about the possibility of video recording. Third, control involves knowing that video is only being seen and heard by the doctor and patient, and no other parties [73]. In face-to-face appointments, patients are able to know who is in the physical space of the office. This can become difficult to know over video. It reflects a common issue in the video communication literature around people being disembodied in a video-mediated environment (being able to see / hear while off-camera) [75]. This suggests the design of cameras with larger fields of view for the doctor's office so that patients can maintain an awareness of the space. Our results also reveal that there could be some situations, even those involving very private information (e.g., conversations about domestic abuse), where some people may feel more comfortable with a video appointment compared to an in-person one because they can participate from a location of their choosing. This may make the appointment more comfortable for them and avoid exposure to others outside of their home. In this way, the video appointment could help control the confidentiality of sensitive information. We caution, though, that none of our participants reported having experienced domestic abuse, so these findings should be validated through further study.
+
+§ 8.3 VISUALS OF THE PATIENT AND DOCTOR
+
+Patients saw value in having the doctor see their entire body and the area around them, rather than just their face, in case there were things the doctor might notice that they didn't think to talk about or explain. Yet there were hesitations around cameras with wider fields of view, such as 360-degree cameras, because of what else might be captured in their home. This suggests design explorations into methods that might provide doctors with broader views while still balancing the privacy concerns of patients. For example, one might imagine systems that allow patients to selectively blur or replace the background [77] in video feeds that they feel are private, but this would need to be done in a way that still allows doctors to understand what is happening in the video for proper diagnosis. Patients were generally fine giving up control of the camera to their doctor if there was trust in the relationship and they knew what the doctor was looking at. Thus, there were few concerns with autonomy over how one participates in the video-mediated space. This illustrates the importance of doctor-patient relationships, given that patients are willing to give up some autonomy based on their trust in doctors. As conceptualized by Palen and Dourish [72], privacy involves the management of boundaries, which can be dynamic according to contexts or actions. Doctor-patient relationships work as boundaries within video appointments. A well-established relationship could transfer autonomy from the patient to the doctor in order to support the examination of the patient. This could factor into design solutions where a doctor may be given greater control to, for example, remotely pan a camera around the patient's environment to see them better, provided that the patient knows what the doctor is looking at.
+
+Relationship building is considered essential in doctor-patient communication [76]. Turning to patients' views of the doctor's environment, participants wanted to see body language and ensure eye contact with the doctor. This reflects the challenge of building rapport using non-verbal behaviors on camera. While such visuals are important in mobile video calls with family and friends [51], [55], the element of trust that they evoke with patients feels different and, in some ways, more critical in building the doctor-patient relationship over video [78]. As mentioned, designs that include a camera with a larger field of view might allow one to see the whole body of the doctor, including body language, which could help build rapport with patients. However, such designs should be carefully thought through, as multiple factors and challenges are intertwined.
+
+§ 8.4 APPOINTMENT ACCESSIBILITY AND AWARENESS
+
+We note that the flexibility that might come with video appointments could also bring caveats. For example, participants tended to feel that doctors would be more accessible to them if video appointments were available. Mobile apps that provide virtual visit services may encourage patients to meet with unknown doctors online in the moment, as opposed to waiting a few days to see their own doctors. This might cause an overutilization of video appointments and a loss in the continuity of care over time. This suggests there are design opportunities to better link medical records with apps that permit video-based appointments so that information can more easily be shared between providers.
+
+According to our findings, patients had challenges knowing whether their situations would be appropriate for video visits. This suggests that there are design opportunities for exploring systems that may help patients screen themselves to see if a video appointment would be appropriate. This could be implemented using questionnaires that patients answer along with decision tree algorithms, or with medical professional assistance (e.g., a nurse asks screening questions over a short video call).
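As a rough illustration only, such a questionnaire-plus-decision-tree screener might look like the sketch below. The questions, routing, and recommendations are hypothetical examples of the idea, not drawn from the paper or any clinical guideline.

```python
# Hypothetical self-screening sketch: a few yes/no questions routed through a
# simple decision tree to suggest whether a video appointment may be suitable.
# The questions and routing below are illustrative, not clinically validated.

def screen_for_video_visit(answers: dict) -> str:
    """answers maps question identifiers to booleans (unanswered -> False)."""
    if answers.get("emergency_symptoms"):   # e.g. chest pain, severe bleeding
        return "seek in-person or emergency care"
    if answers.get("needs_physical_exam"):  # e.g. palpation or lab work required
        return "book an in-person visit"
    if answers.get("visible_symptom") or answers.get("follow_up"):
        return "video appointment likely appropriate"
    return "ask a nurse (short screening call)"

print(screen_for_video_visit({"visible_symptom": True}))
# prints "video appointment likely appropriate"
```

In a real system, the final fallback branch would correspond to the paper's suggestion of medical professional assistance when automated screening is inconclusive.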
+
+§ 8.5 CAMERA WORK AND VISUALS OF THE PATIENT
+
+Like mobile phone research on video calls between family and friends [62], [69], [70], we, too, saw challenges around camera work and easily capturing the right information for doctors. There were pragmatic issues like camera lighting, but also issues around holding phones to show body parts while also being able to see the doctor's reactions. Compared to the related literature [51], [53], [54], the complexities around camera work seemed more difficult in our study situation. Mobile phone calls with family and friends tend not to involve trying to show specific body parts and instead focus on faces or surrounding contexts [54], [58]. Video calls with doctors may require highly accurate images of one's body parts, such as the neck, abdomen, or back. These areas can be difficult to capture using ordinary mobile phones. In addition, capture can require high-quality lighting and proper camera orientation. This contrasts with casual conversations with family or friends. Video appointments might also require that patients perform camera movements that they typically do not do on a tablet, laptop, or phone when using existing video chat tools (e.g., Skype, FaceTime). Because the camera is coupled with the display showing the camera's view (e.g., the phone), it can be hard to direct the camera to a particular area while also looking at the screen to see what is in view.
+
+These challenges suggest the need for tools that make it easier for patients to perform the necessary camera work during a video appointment. Tools could focus on ways to hold and move a camera for easier capture. For example, video conferencing software may include on-screen visuals [79] to show patients where to move the camera to capture an area of one's body, or augmented visuals [80], [81] to guide patients to perform certain actions. One could imagine customized versions that help users capture areas of their body after selecting a particular body part, e.g., clicking on 'knee' in the application could trigger visuals that guide the user to capture all views of a knee.
+
+One could also think about hardware tools or devices that would make it easier to hold a mobile phone camera or set it down in order to capture body parts that are at awkward angles or locations. People commonly use 'selfie' sticks or phone stands to capture pictures presently. One could imagine custom designed apparatus for video-based doctor appointments that let a person more easily hold or set down their mobile phone to capture a body part on camera. Designs could also explore the decoupling of the camera device from the display device. For example, it may be easier to perform camera work during a video appointment if the camera could be held in one hand to show a body part, while the user looks at a separate display to see what the camera is capturing. This is not normally done with existing video chat tools since people often use devices with cameras built into them. External cameras could be highly valuable for video appointments.
+
+§ 9 CONCLUSIONS AND FUTURE WORK
+
+Overall, our research points to the value that video-based appointments could bring to patients, from a patient-centric perspective. We have pointed to a variety of opportunities for additional design work to mitigate camera challenges and privacy concerns. While we are cautious not to suggest more specific design directions without additional design work, existing video chat technologies (e.g., Skype) clearly do not map to the specific needs of video-based appointments. Their two-way calling model would mean that patients could try to connect with the doctor at inopportune times. They are also limited when it comes to the camera work that would be necessary and valuable for video-based appointments, including features to help mitigate privacy concerns and guide the user in capturing areas of their body on camera.
+
+Naturally, future work should study the needs and experiences of doctors when it comes to video-based appointments. Doctors may have different needs than the patients in our study, which might suggest further design accommodations and balances to address the needs of both groups of users. Our study is limited in that we had a large number of female participants by chance; few males contacted us to participate. Future work should further explore the concerns of males as well as others. We also recognize that other cultures might feel differently about video-based appointments. The patients we studied were mostly participating in Canada's publicly funded health care system, where they can visit the doctor at any time without paying for the visit. This could have affected their viewpoints in our study.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..aaa11aed5fa010b487ee5fcdb4c412f0495dbcf8
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,511 @@
+# Computer Vision Applications and their Ethical Risks in the Global South
+
+Charles-Olivier Dufresne-Camaro*, Fanny Chevalier†, Syed Ishtiaque Ahmed‡
+
+*†‡Department of Computer Science, †Department of Statistical Sciences, University of Toronto
+
+## Abstract
+
+We present a study of recent advances in computer vision (CV) research for the Global South to identify the main uses of modern CV and its most significant ethical risks in the region. We review 55 research papers and analyze them along three principal dimensions: where the technology was designed, the needs addressed by the technology, and the potential ethical risks arising following deployment. Results suggest that: 1) CV is most used in policy planning and surveillance applications, 2) privacy violation is the most likely and most severe risk to arise from modern CV systems designed for the Global South, and 3) researchers from the Global North differ from researchers from the Global South in their uses of CV to solve problems in the Global South. Results of our risk analysis also differ from previous work on CV risk perception in the West, suggesting locality to be a critical component of each risk's importance.
+
+Index Terms: General and reference–Document types–Surveys and overviews; Computing methodologies–Artificial intelligence–Computer vision–Computer vision tasks; Social and professional topics–User characteristics; Social and professional topics–Computing / technology policy
+
+## 1 INTRODUCTION
+
+In recent years, computer vision (CV) systems have become increasingly ubiquitous in many parts of the world, with applications ranging from assisted photography on smartphones, to drone-monitored agriculture. Current popular problems in the field include autonomous driving, affective computing and general activity recognition; each application has the potential to significantly impact the way we live in society.
+
+Computer vision has also been leveraged for more controversial applications, such as sensitive attribute predictors for sexual orientation [80] and body weight [42], DeepFakes [78, 81], and automated surveillance systems [53]. In particular, this last application has been shown to have the potential to severely restrict people's privacy and freedom, and compromise security. As one example, we note the case of China and its current use of surveillance systems to identify, track, and target Uighurs [53], a minority group primarily of Muslim faith. Similar systems with varying functionalities have also been exported to countries such as Ecuador [54].
+
+While advances in CV have led to considerable technological improvements, they also raise a great deal of ethical issues that cannot be disregarded. We stress that it is critical that potential societal implications are studied as new technologies are developed. We recognize that this is a particularly difficult endeavour, as one often has very little control over how technology is used downstream. Moreover, recent focus on designing for global markets makes it near impossible to consider all local uses and adaptations: a contribution making a positive impact in the Global North may turn out to be a harmful, restrictive one in the Global South¹.
+
+Several groups have come forward in the last decade to raise ethical concerns over certain smart systems, CV and general AI systems alike, with reports of discrimination [14, 15, 44, 48] and privacy violations [26, 50, 58, 62]. One notable example comes from Joy Buolamwini, who exposed discriminatory issues in several face analysis systems [15]. However, very few have explored the more focused problem of ethics in CV. We believe this to be an important problem, as restricting the scope to CV allows more domain-specific actionable solutions to emerge, while keeping sight of the big-picture problems in the field.
+
+To our knowledge, only two attempts to study this problem have been undertaken in modern CV research [45, 74]. While insightful, these works paint ethical issues in CV with a coarse brush, discarding subtleties of how these risks may vary across different local contexts. Furthermore, the surveyed literature in both works focuses extensively on CV in the West and major tech hubs, raising concerns over the generalization of their findings to other, distinct communities. Considering that people in the Global South are important users of technology, most notably smartphones, we believe it is of utmost importance to extend the study of CV-driven risks to their communities.
+
+In this work, we conduct a survey of CV research literature with applications in the Global South to gain a better understanding of CV uses and the underlying problems in this area. While studying research literature may provide a weaker understanding of the true problem space compared to studying existing systems in the field, we believe our method forms a strong preliminary step in studying this important problem. In doing so, we aim to provide further guidance to designers in building safe context-appropriate solutions with CV components.
+
+To guide our search in identifying potential risks, we use Skirpan and Yeh's moral compass [74] (Sect. 2). Through our analysis (Sect. 3-5), we aim to answer the following research questions:
+
+RQ1. What are the main applications of modern computer vision when used in the Global South?
+
+RQ2. What are the most significant computer vision-driven ethical risks in a context of use in the Global South?
+
+RQ3. How do research interests of Global North researchers differ from those of Global South researchers in the context of computer vision research for the Global South?
+
+RQ4. How do the risks associated with these technologies when used in the Global South differ from those in the Global North?
+
+It is our belief that while certain CV technologies and algorithms are equally available to people in the North and South, the resulting uses, implications, and risks will differ. Our contributions are fourfold: 1) we present an overview of research where CV technology is deployed or studied in a geographical context within the Global South, and the needs it aims to address; 2) we identify the principal ethical risks at play in recent CV research in this context; 3) we show that research interests in CV applications for the Global South differ between Global South and Global North researchers; and 4) we show that the importance of ethical risks in CV systems differs across contexts. Finally, we also reflect on our findings and propose design considerations to minimize risks in CV systems for the Global South.
+
+---
+
+*e-mail: camaro@cs.toronto.edu
+
+†e-mail: fanny@cs.toronto.edu
+
+‡e-mail: ishtiaque@cs.toronto.edu
+
+¹We use the United Nations' M49 Standard [77] to define the Global North (developed countries) and the Global South (developing countries).
+
+---
+
+## 2 FRAMEWORK
+
+We are aware of only two studies aiming at identifying ethical risks in computer vision. The first, by Lauronen [45], surveys recent CV literature and briefly identifies six themes of ethical issues in CV: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation.
+
+The second, by Skirpan and Yeh [74], takes the form of a moral compass (framework) aimed at guiding CV researchers in protecting the public. In this framework, CV risks fall into five categories: privacy violations, discrimination, security breaches, spoofing and adversarial inputs, and psychological harms. Using those categories, the authors propose 22 risk scenarios based on recent research and real-life events. In addition, they present a likelihood of occurrence risk evaluation along with a preliminary risk perception evaluation based on uncertainty (i.e. how much is known about it) and severity (i.e. how impactful it is).
+
+To guide our search in evaluating CV in a Global South context, we use Skirpan and Yeh's framework for two reasons: 1) we find its set of risk categories generalizes better to non-Western countries, and 2) it allows us to directly compare the results of our risk analysis in the Global South to the results of Skirpan and Yeh's risk analysis in the West [74]. Nevertheless, we emphasize that the risk categories are by no means exhaustive of every current risk related to modern CV systems. Indeed, the field is still evolving at a considerable rate, and new risks may not all fit well into these categories, e.g. DeepFakes.
+
+To illustrate the scope of each risk category, we present below one risk scenario extracted from the framework [74] along with a brief description of the risk. Note that we make no distinction between deliberate and accidental presences of a risk, e.g. unplanned discrimination caused by the use of biased data to train a CV system.
+
+Privacy violations
+
+"Health insurance premium is increased due to inferences from online photos." - Scenario #1 [74]
+
+Privacy violations are defined as having one's private information collected without the person's consent by a third party via the use of a CV system. This includes data such as body measurements, sexual orientation, income, relationships, travels and activities.
+
+Discrimination
+
+"Xenophobia leads police forces to track and target foreigners." — Scenario #9 [74]
+
+Discrimination occurs when someone is treated unjustly by a CV system due to certain sensitive characteristics that are irrelevant to the situation. Such characteristics include religious beliefs, race, gender and sexual orientation.
+
+Security breaches
+
+"Security guard sells footage of public official typing in a password to allow for a classified information leak." — Scenario #5 [74]
+
+Skirpan and Yeh broadly define security breaches as malicious actors exploiting vulnerabilities in CV systems to disable them, steal private information, or employ them in cyberattacks.
+
+## Spoofing and adversarial inputs
+
+"Automated public transportation that uses visual verification systems is attacked causing a crash." — Scenario #8 [74]
+
+In this context, the objective of spoofing and adversarial inputs is to push a CV system to react confidently in an erroneous and harmful manner. Examples include misclassifying people or situations, e.g. recognizing a robber as a bank employee, or misclassifying fraudulent documents as authentic.
+
+## Psychological harms
+
+"People will not attend protests they agree with due to fear of recognition by cameras and subsequent punishment." — Scenario #2 [74]
+
+At a high level, any negative change in how people live based on their interactions with CV systems is considered a psychological harm. Unlike the other categories, this risk is more passive and results from longer-term interactions with CV systems.
+
+We stress that the risk scenarios and their analysis were intended to be general and to reflect "normal" life in the Global North. Therefore, they likely do not generalize well to a myriad of areas in the Global South. For example, scenario #1, used to illustrate privacy violations, takes for granted that access to health insurance is available and that people have some sort of online presence. It also assumes that a health insurance company has access to enough data on the population to make any kind of reliable inference.
+
+For this reason, we only use the risk categories to guide our analysis. We believe those generalize well in all areas with CV systems, even though they may not cover the full risk space, and what constitutes sensitive or private information may vary across communities [2, 6].
+
+## 3 METHODOLOGY
+
+In the context of this work, we consider a prior work (system) to be relevant to our study if it identifies or addresses specific needs of Global South communities through the use of CV techniques. This criterion can be satisfied in multiple ways: direct deployment of a system in the area, use of visual data related to the area, description of local problems related to CV research, and so forth. Works exploring specific problems in more general settings are excluded, e.g. discrimination at large in CV-driven surveillance systems. We also exclude systems designed for China, given its current economic situation.
+
+To find relevant research papers, we conducted a search on four digital libraries: ACM Digital Library, IEEE Xplore Digital Library, ScienceDirect and JSTOR. We have chosen these four platforms to maximize our search coverage on both applied research conferences and journals, and more fundamental research publication venues.
+
+We limited our search to research papers published between 2010 and 2019 to focus on recent advances in the field. We included all publication venues as we wished to be inclusive of all CV researchers focusing on problems in the Global South. We consider inclusion to be critical in achieving a thorough understanding of the many different needs addressed by CV researchers, and the ethical risks of CV research in the Global South.
+
+To identify appropriate search queries, we first began with an exploratory search on ACM Digital Library. We defined our search queries as two components: one CV term and one contextual term, e.g. "smart camera" AND "Global South". To build our initial list of search queries, we selected as CV terms common words and phrases used in CV literature, e.g. "camera", "drone" and "machine vision". As contextual terms, we chose common words referring to HCI4D/ICTD research areas and locations, and ethical risks in technology, e.g. "privacy", "development", "poverty" and "Bangladesh". Our initial list consisted of 15 CV terms and 24 contextual terms, resulting in 360 search queries.
+
+
+
+Figure 1: Distribution of publication years for the research papers in our corpus (n = 55).
+
+We selected papers based on their title, looking only at the first 100 results (ranked using ACM DL's relevance criterion). We then discarded terms based on the relevancy of the selected papers. Terms that were too general and returned too many irrelevant articles (e.g. CV term "mobile phone" or the contextual term "development") or that continuously returned no relevant content (e.g. CV term "webcam" or contextual term "racism") were discarded.
+
+We retained 5 CV terms (Computer Vision, Drone, Smart Camera, Machine Vision, Face Recognition) and 10 contextual terms (ICTD, ICT4D, Global South, Surveillance, Ethics, Pakistan, Bangladesh, India, Africa, Ethiopia), for a total of 50 search queries per platform. This selection allowed us to efficiently cover a wide spectrum of CV research areas (CV terms) focused on Global South countries (contextual terms). Additionally, it provided us with an alternative way of covering relevant work by focusing on those considering risks and ethics (contextual terms) in CV.
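As a minimal sketch, the query construction described above amounts to a Cartesian product of the two retained term lists. The term lists are taken from the text; the exact quoting and AND syntax per platform is our assumption.

```python
from itertools import product

# Retained search terms from the methodology: 5 CV terms x 10 contextual terms.
cv_terms = ["Computer Vision", "Drone", "Smart Camera",
            "Machine Vision", "Face Recognition"]
contextual_terms = ["ICTD", "ICT4D", "Global South", "Surveillance", "Ethics",
                    "Pakistan", "Bangladesh", "India", "Africa", "Ethiopia"]

# Each query pairs one CV term with one contextual term,
# e.g. '"Drone" AND "India"'. The quoting style is illustrative.
queries = [f'"{cv}" AND "{ctx}"' for cv, ctx in product(cv_terms, contextual_terms)]

print(len(queries))  # 5 * 10 = 50 queries per platform
```

Running the same 50 queries on each of the four platforms gives the 200 searches behind the 470 collected papers.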
+
+A total of 470 papers were collected across all four platforms using our defined search queries. We then attentively read each abstract and skimmed through each paper to evaluate its relevance using our aforementioned criterion. Ultimately, 55 unique research papers were deemed relevant.
+
+For each paper, we noted the publication year, the country the CV system was designed for (target location), and the country (researchers' location) and region (Global South or Global North) of the researchers who designed the systems (i.e. the country where the authors' institution is located²). Additionally, we noted the main topic of the visual data used by the system (e.g. cars), the application area (i.e. the addressed needs, e.g. healthcare), and the ethical risks at play defined in Sect. 2. For all risks, we only considered likely occurrences, i.e. cases where a system appears to exhibit certain direct risks following normal use.
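To make the coding scheme concrete, each paper's annotations could be represented as a record like the sketch below. The field names and example values are our own illustration, not the authors' actual table columns.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRecord:
    """One row of the per-paper coding table (illustrative field names)."""
    year: int
    target_location: str       # country the CV system was designed for
    researcher_location: str   # country of the authors' institution
    researcher_region: str     # "Global South" or "Global North"
    data_topic: str            # e.g. "satellite", "human", "medicine"
    application_area: str      # the addressed needs, e.g. "healthcare"
    risks: List[str] = field(default_factory=list)  # categories from Sect. 2

# Hypothetical example record, not an entry from the authors' corpus.
example = PaperRecord(2018, "India", "India", "Global South",
                      "satellite", "policy planning", ["privacy violations"])
print(example.researcher_region)
```

A table of such records directly supports the aggregations reported in Sect. 4 (counts by location, data topic, and risk).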
+
+We did not collect more fine-grained characteristics such as precise deployment areas in our data collection process due to the eclecticism of our corpus: deployments, if they occurred, were not always discussed in detail.
+
+The first author then performed open and axial coding [17] on both the application areas and data topics to extract the principal categories best representing the corpus.
+
+## 4 RESULTS
+
+In this section, we describe the characteristics of the research papers collected (n = 55) for our analysis. Table 1 shows an excerpt of our final data collection table used to generate the results below. The complete table is included in the supplemental material. Publication years are shown in Fig. 1.
+
+### 4.1 Location
+
+Researchers' location. Of the 55 articles, 41 were authored by researchers from the Global South and 14 by researchers from the Global North. Specifically, Global South researchers were located in India (20), Bangladesh (10), Pakistan (6), Brazil (1), Iran (1), Iraq (1), Kenya (1), and Malaysia (1). Global North researchers were located in US (11), Germany (1), Japan (1), and South Korea (1).
+
+Target location. For papers authored by Global South researchers, the systems were designed for, or deployed in, the same country as the researchers' institutions. For the 14 systems designed by Global North researchers, the target locations were varied: Ethiopia (2), India (2), Kenya (1), Mozambique (1), Pakistan (1), Senegal (1), and Zimbabwe (1). We also note five systems designed to cover broader areas: Global South (4) and Africa (1).
+
+### 4.2 Data topics
+
+Using open and axial coding [17], we extracted six broad categories covering all visual data topics found in our corpus: characters, human, medicine, outdoor scenes, products, and satellite. Each paper is labeled with only one category: the category that fits best with the specific data topic of the paper.
+
+Characters. This category encompasses any visual data that can be used as the input of an optical character recognition system. Systems using text [64], license plates [38, 43], checks [7], and banknotes [55] fit into this category.
+
+Human. Included in this category are any visual data where humans are the main subjects. Such data is typically used in face recognition [12, 39], sign recognition [36, 65], tracking [67], detection [66, 70], and attribute estimation [33]. In our corpus, faces [12, 31, 39] and actions [36, 65, 66, 70] represent the main objects of study using such data.
+
+Medicine. Medical data is tied to the study of people's health. We include in this category any image or video of people with a focus on healthcare-related attributes [47], medical documents [21], and diagnostic test images [20, 22].
+
+Outdoor scenes. Any visual data in which the main focus is on outdoor scenes is grouped under this category. This includes data collected by humans [51], but also by cameras mounted on vehicles [11, 18, 79] and surveillance cameras [23, 49]. However, we exclude airborne and satellite imagery (i.e. high-altitude, top-down views) from this category.
+
+Products. We group in this category visual data focusing on objects resulting from human labour and fabrication processes such as fruit [30,68], fish [37,72], silk [63], and leather [24].
+
+Satellite. Airborne and satellite imagery data such as multi-spectral imagery [59], are grouped under this category.
+
+The distribution of these topics in our corpus is shown in Fig. 2. Fig. 3 shows how this distribution varies with the researchers' region.
+
+### 4.3 Application areas
+
+Through the use of open and axial coding [17], we identified seven broad application area categories representing all applications found in our corpus: agriculture and fishing, assistive technology, healthcare, policy planning, safety, surveillance, and transportation.
+
+Agriculture and fishing. We include in this category applications related to agriculture, fishing, and industrial uses of goods from these areas. For example, systems were designed to automatically recognize papaya diseases [30], identify fish exposed to heavy metals [37], and estimate crop areas in Ethiopia using satellite data [56].
+
+---
+
+${}^{2}$ For papers with authors from different countries, we noted the location most shared by the authors.
+
+---
+
+| Title | Year | Target Location | Researchers' Location (Region) | Data Topic | Application Areas | Risks |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cross border intruder detection in hilly terrain in dark environment [66] | 2016 | India | India (Global South) | Human | Policy plan., Surveillance | Discrimination, Privacy, Spoofing |
+| Field evaluation of a camera-based mobile health system in low-resource settings [22] | 2014 | Zimbabwe | United States (Global North) | Medicine | Healthcare | N/A |
+Table 1: Excerpt from the final data collection log used in this study. The complete table is included in the supplemental material.
+
+
+
+Figure 2: Distribution of the visual data topics in our corpus $\left( {n = {55}}\right)$.
+
+
+
+Figure 3: Normalized distributions of data topics for systems designed by researchers from the Global North (blue, $n = {14}$ ) and from the Global South (orange, $n = {41}$ ) .
+
+Assistive technology. Research papers in this category study the design of CV systems assisting people with disabilities in doing everyday tasks. This includes Bengali sign language recognition systems [36, 65], the Indian Spontaneous Expression Database used for emotion recognition [31], and a Bangla currency recognition system to assist people with visual impairments [55].
+
+Healthcare. CV research focused on healthcare-related topics falls into this category. We note adaptive games for stroke rehabilitation patients in Pakistan using CV technology to track players [40]. We also include systems used to analyze diagnostic tests [22] and digitize medical forms [21].
+
+Policy planning. We include in this category policies tied to urban planning, development planning, environment monitoring, traffic control, and general quality-of-life improvements. We note SpotGarbage used to detect garbage in the streets of India [51], a visual pollution detector in Bangladesh [3], and various traffic monitoring systems in Kenya [41] and Bangladesh [23,60]. We also highlight uses of satellite data to estimate the poorest villages in Kenya to guide cash transfers towards people in need [1], and to gain a better understanding of rural populations of India [34].
+
+Safety. In the context of our work, we define safety as the protection of people from general harms. We include in this category CV systems for bank check fraud detection [7], food toxicity detection [37], and informal settlement detection [28].
+
+
+
+Figure 4: Distribution of application area categories $\left( {n = {80}}\right)$ in our corpus $\left( {n = {55}}\right)$ .
+
+
+
+Figure 5: Normalized distributions of application areas for systems designed by researchers from the Global North (blue, $n = {23}$ ) and from the Global South (orange, $n = {57}$ ).
+
+Surveillance. Surveillance applications refer to systems monitoring people or activities to protect a specific group from danger. We make no distinction between explicit and secret acts of surveillance, i.e. being monitored without one's consent. We found systems for theft detection [70], border intrusion detection in India [66], automatic license plate recognition [38], and tracking customers [67].
+
+Transportation. CV systems in this category focus on improving and understanding transport conditions in the Global South. The majority of the systems identified in our corpus were aimed at monitoring and reducing traffic [49, 60] and accidents [23]. We also note systems addressing the problem of autonomous driving [11, 79].
+
+| | Agri. & Fishing | Assistive Tech. | Healthcare | Policy Plan. | Safety | Surveillance | Transportation |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Characters | 0 | 2 | 0 | 0 | 1 | 3 | 3 |
+| Human | 0 | 3 | 2 | 2 | 0 | 9 | 0 |
+| Medicine | 0 | 0 | 4 | 0 | 0 | 0 | 0 |
+| Outdoor scenes | 0 | 0 | 0 | 7 | 4 | 2 | 8 |
+| Products | 12 | 0 | 0 | 1 | 1 | 0 | 0 |
+| Satellite | 1 | 0 | 0 | 9 | 5 | 1 | 0 |
+
+Figure 6: Co-occurrence frequencies between data topics and application areas in our corpus $\left( {n = {55}}\right)$ .
+
+
+
+Figure 7: Distribution of risks $\left( {n = {80}}\right)$ in our corpus $\left( {n = {55}}\right)$ . 18 papers had no identified risks.
+
+
+
+Figure 8: Normalized distributions of risks $\left( {n = {80}}\right)$ for systems designed by researchers from the Global North (blue, $n = {16}$ ) and from the Global South (orange, $n = {64}$ ).
+
+Each paper was tagged with at least one and at most three categories (median $= 1$). The distribution of application areas is shown in Fig. 4, followed by the normalized distributions by researchers' region in Fig. 5. We present the global co-occurrence frequencies between data topics and application areas in Fig. 6.
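As a sketch of how such a co-occurrence matrix can be computed from the tagged corpus (hypothetical paper records below; not the authors' actual data or code):

```python
from collections import Counter

# Hypothetical excerpt of the tagging log: each paper carries exactly one
# data topic and one to three application-area tags (median 1).
papers = [
    {"topic": "Human",    "areas": ["Policy Plan.", "Surveillance"]},
    {"topic": "Human",    "areas": ["Surveillance"]},
    {"topic": "Products", "areas": ["Agri. & Fishing"]},
]

# Each (topic, area) pair contributes one count to the matrix (cf. Fig. 6).
cooccurrence = Counter((p["topic"], area)
                       for p in papers
                       for area in p["areas"])
print(cooccurrence[("Human", "Surveillance")])  # 2
```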
+
+### 4.4 Risks
+
+We briefly illustrate how we assessed the presence of risks in CV systems (Sect. 3), using two examples from our corpus. Note that we did not consider intent in our analysis; instead, we focused strictly on the possibility of enabling certain risks.
+
+First, we consider a system using satellite imagery designed to identify informal settlements in the Global South, informing NGOs of the regions that could benefit the most from their aid [28]. While the system focuses on areas rather than individuals, it nevertheless collects information on vulnerable communities without their consent (privacy violations). Additionally, a malicious actor aware of the system's functionalities could modify settlements to hide them in satellite imagery, e.g. by covering rooftops with different materials (spoofing). For these reasons, we tagged the system with two risks.
+
+Second, we examine a sign language digit recognition system [36], which we tagged with no risk. As the system appears to solely rely on hand gestures and focuses on a simple limited task, the presence of all 5 risks appeared minimal.
+
+Fig. 7 shows the overall risk distribution for our corpus. Papers were tagged with at most four risks; 18 had no risks (median $= 1$). Normalized distributions by researchers' region appear in Fig. 8. We show the global co-occurrence frequencies between application areas and risks in Fig. 9, and between data topics and risks in Fig. 10. Finally, we present the conditional probability of a risk being present, given the presence of another, in Fig. 11.
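The conditional probabilities in Fig. 11 follow from the per-paper risk tags as p(row | column) = |papers tagged with both risks| / |papers tagged with the column risk|. A minimal sketch, using hypothetical risk tags rather than the study's actual data:

```python
# Hypothetical per-paper risk tags (illustrative only).
risk_tags = [
    {"Privacy", "Discrimination"},
    {"Privacy", "Spoofing"},
    {"Privacy"},
    {"Spoofing"},
]

def p_given(row, column, tags):
    """p(row | column): share of papers carrying the column risk that also
    carry the row risk. Undefined (shown as '-' in Fig. 11) when the
    column risk never occurs."""
    with_column = [t for t in tags if column in t]
    if not with_column:
        return None
    return sum(row in t for t in with_column) / len(with_column)

print(p_given("Privacy", "Discrimination", risk_tags))  # 1.0
print(round(p_given("Spoofing", "Privacy", risk_tags), 2))  # 0.33
```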
+
+| | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Agri. & Fishing | 0 | 1 | 0 | 0 | 6 |
+| Assistive Tech. | 1 | 1 | 1 | 0 | 2 |
+| Healthcare | 0 | 0 | 0 | 0 | 0 |
+| Policy Plan. | 4 | 14 | 0 | 0 | 12 |
+| Safety | 1 | 7 | 0 | 0 | 8 |
+| Surveillance | 9 | 15 | 10 | 0 | 14 |
+| Transportation | 0 | 9 | 3 | 0 | 9 |
+
+Figure 9: Co-occurrence frequencies between application areas and risks in our corpus $\left( {n = {55}}\right)$ .
+
+| | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Characters | 0 | 3 | 3 | 0 | 5 |
+| Human | 10 | 10 | 7 | 0 | 9 |
+| Medicine | 0 | 0 | 0 | 0 | 0 |
+| Outdoor scenes | 0 | 6 | 0 | 0 | 8 |
+| Products | 0 | 0 | 0 | 0 | 5 |
+| Satellite | 2 | 7 | 0 | 0 | 5 |
+
+Figure 10: Co-occurrence frequencies between data topics and risks in our corpus $\left( {n = {55}}\right)$ .
+
+| | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Discrimination | 1 | 0.46 | 0.70 | - | 0.34 |
+| Privacy | 1 | 1 | 1 | - | 0.66 |
+| Psych. Harms | 0.58 | 0.38 | 1 | - | 0.28 |
+| Security Breaches | 0 | 0 | 0 | - | 0 |
+| Spoofing | 0.92 | 0.81 | 0.90 | - | 1 |
+
+Figure 11: Conditional probability of a risk being present given the presence of another risk, $p(\mathrm{row} \mid \mathrm{column})$, in our corpus $\left( {n = {55}}\right)$.
+
+## 5 DISCUSSION
+
+Overall, we find that the majority of surveyed CV systems were designed to address specific needs of a community, ranging from food quality control to healthcare, traffic minimization, and improved security. Moreover, we observe that currently popular research areas in the West, such as autonomous vehicles and affective computing, have not achieved the same penetration level so far in the Global South.
+
+We present below our most notable findings and compare them with the findings of Skirpan and Yeh for the Global North [74]. We then present design considerations to minimize each risk, followed by a discussion on the limitations of our study.
+
+### 5.1 Key findings
+
+CV research for the Global South, by the Global South? 75% of the papers in our corpus were authored by researchers from the Global South. We find this proportion concerning, as it highlights an ever-present risk of colonization of methods and knowledge [75]. This is especially important because both the interpretation of visual data and its associated applications are highly dependent on the context in which the data is situated [25]. Thus, we believe this proportion raises an important ethical concern around the interpretation of visual data and the politics tied to the downstream CV applications [35].
+
+Interests differ between regional research groups. Grouping the distributions of data topics and application areas by researchers' regions (Fig. 3 and 5) reveals significant differences in interests. We note, for example, the data topic of satellite imagery, used in 50% of all Global North systems but only 5% of Global South systems. We speculate that this type of data may be less utilized by Global South researchers as it is typically used to solve larger fundamental problems rather than directly addressing specific needs. Conversely, we find Global North researchers to focus more on data that can be used to solve broader problems.
+
+These distinctions are also visible in application areas. Global North researchers appear to favor applications requiring significant technical expertise and carrying broader ramifications, often employing modalities neither aligned with nor originating from local methods [75] (e.g. policy planning applications identifying at-risk areas through satellite imagery [1, 28]), over applications focused on specific local problems, which require a thorough understanding of the community but less technical expertise (e.g. applications for transportation, agriculture, and surveillance). In contrast, the distribution for Global South researchers is much more uniform, with greater interest in those local problems. These disparities may also stem from a lack of deeper knowledge among the Global North CV community about the local problems of the Global South [35].
+
+Finally, we note that such differences in interests may also be caused by accessibility issues, e.g. data being made available only to certain researchers, the high cost of developing new systems compared to importing existing ones, and limited computational power.
+
+Surveillance applications - promising, but not without risks. Globally, we find surveillance to be the second most popular application area behind policy planning, an area mostly popular due to its encompassing definition. We attribute this popularity most notably to one factor: surveillance systems are a natural solution to partially satisfy the essential need for security, albeit at the cost of restricting liberties. This trade-off is closely tied to the government systems in place, which vary widely in the Global South [6]. Moreover, we note that the motivations for surveillance vary between cultures, ranging from care, nurture, and protection to control and oppression.
+
+We emphasize that researchers should be especially careful in designing surveillance systems: most systems studied here could easily be misused to increase control over people rather than improve security. Indeed, we found surveillance to be the application area most frequently tied to any risk, and the one with the most diverse set of risks (Fig. 9).
+
+The impact of machine learning on CV risks. 73% of all CV systems analyzed used machine learning techniques to achieve their goals. This has several implications for the presence of certain risks in CV systems. First, it amplifies the risk of spoofing and adversarial inputs, the most frequent risk in our corpus, appearing in nearly all data topics and application areas (Fig. 9 and 10). Indeed, research has shown neural networks to be easily misled by adversarial inputs [27, 57, 61].
+
+Second, the use of machine learning amplifies the risk of discrimination, in particular the unplanned type, i.e. CV systems using sensitive attributes for decision making without being designed to do so. We found fewer examples of this risk in our corpus as discrimination occurs strictly against people; the risk was mainly found in surveillance and policy planning applications using human data.
+
+Privacy violations - the common denominator. In our analysis, we found privacy violations to be the second most common risk, far ahead of discrimination and psychological harms. Specifically, we found the risk of privacy violations most present in surveillance and policy planning applications, both areas requiring in-depth knowledge of people.
+
+Additionally, we have observed that certain risks were typically present only when privacy violations were too. In particular, the risk of privacy violations was always present when the risks of discrimination and psychological harms were found (Fig. 11). While our analysis does not robustly assess the relations between each risk, this finding suggests an almost hierarchical relation between certain risks, beginning with privacy violations: e.g., for automated discrimination to occur, sensitive attributes of people must first be known (privacy violations). Similarly, the risk of psychological harms depends directly on the presence of other risks. We note, however, that the risk of spoofing appears to be tied more closely to certain algorithms than to other risks, although we still expect its frequency to rise as access to sensitive data (privacy violations) increases.
+
+Privacy violations frequency - a likely underestimate. Research on privacy and technology [2, 6] has shown privacy to be regionally constructed, differing between cultural regions: e.g. people in the Global South often perceive privacy differently than people in Western liberal societies. Our interpretation of the general definition of privacy violations used in our analysis (Sect. 2) likely does not encompass every cultural definition of privacy in the Global South. As such, there may exist many unforeseen privacy-related risks tied to CV systems, leading us to believe the actual frequency of the risk is even higher than what we have found in our analysis.
+
+Not all risks could be reliably perceived. Two risks could not be reliably estimated in our analysis: security breaches and psychological harms. In fact, we were unable to find a single case of the former. We attribute this primarily to the method used for analysis: detecting the risk would require research papers to present an in-depth description of a system's implementation, highlighting certain vulnerabilities. Similarly, while we have found some systems with risks of psychological harms, we emphasize that we have likely underestimated its actual frequency: detection requires both an in-depth understanding of the system and of its subsequent deployments. Moreover, compared to the other four risks, we re-emphasize that the risk of psychological harms results from longer-term interactions and may only become discernible as risk-driven harms begin to occur.
+
+Frequency versus severity. We first stress that the absence of certain risks does not indicate their non-existence. On the contrary, this informs us to be more wary of these risks, as we may have little means to diagnose them in this context. Additionally, we note that the risk frequency estimated here is a different concept than the frequency of harm occurrences tied to the aforementioned risk. For example, while spoofing was found to be the most common risk in our corpus, harm occurrences linked to this risk require the existence of a knowledgeable malicious actor attacking the system, which may be much more uncommon.
+
+Furthermore, we emphasize that each risk can have severe impacts, regardless of their frequency. For example, while the risk of security breaches may be infrequent, a single occurrence could lead to disastrous consequences, from large-scale information leaks to casualties. Thus, while this study may highlight common pitfalls tied to the design of recent CV systems, we urge designers to consider the potential impacts caused by all risks equally.
+
+### 5.2 Comparison with Global North risk analysis
+
+#### 5.2.1 Risk probability
+
+In their work, Skirpan and Yeh [74] identified discrimination, security breaches and psychological harms as the most probable risks. In the present study, we instead identify privacy violations and spoofing as the most probable risks in the Global South. We attribute these differences to two reasons.
+
+First, each work focused on a different body of work. Skirpan and Yeh searched through both research literature and current news, leading to a more diverse set of systems and problems. We, on the other hand, limited our search to research papers focusing specifically on the Global South to obtain a more grounded understanding. Thus, risks that cannot be reliably estimated from research papers (e.g. security breaches) did not appear frequently in our study. However, this alone is insufficient to suggest the risk space to be different in the Global South: the absence of a risk in research literature does not imply its non-existence in the real world.
+
+Second, we believe these differences to be primarily related to the availability and the acceptance of CV systems in both regions. CV systems in the West, which have become nearly ubiquitous, are often designed to be considerate of the history and politics of the region. However, in the Global South, fewer systems are designed and, in most cases, they involve transferring existing Northern technologies to the Global South. The designed systems thus tend to be less considerate of the culture and politics of the Global South [35], limiting among other things their level of adoption by the community. Moreover, we re-emphasize the pseudo-hierarchical nature of the risks studied here: privacy violations appear to serve as a springboard for the other risks to arise. Thus, in areas where CV technology is relatively recent, we should expect to find privacy violations to be more frequent than other risks. This, combined with the extensive use of machine learning techniques, can also explain why the risk of spoofing and adversarial inputs was found to be more frequent than the three risks highlighted by Skirpan and Yeh.
+
+#### 5.2.2 Risk Perception
+
+Skirpan and Yeh [74] have also evaluated risks in terms of severity and uncertainty, leading them to identify discrimination and security breaches as the potentially most important risks. While we have not formally performed this type of analysis in our work, we speculate the problem space in the Global South to be different.
+
+We believe that at this time, the most important risk is privacy violations, in part due to its gateway property to other risks and the current availability of CV systems in the Global South. We note privacy violations can not only enable automated discrimination in CV systems, but also broader discrimination and security risks. For example, by surveilling and collecting specific sensitive attributes of its people, a state could identify and arrest, displace, or mistreat certain marginalized groups [53].
+
+Additionally, our perception aligns with a growing body of HCI4D/ICTD research on the more general problem of privacy and technology [2, 4-6, 32, 73] showing privacy to be regionally constructed and highlighting a general lack of understanding from technology designers. This in turn suggests there may be important privacy (and security) concerns, notably at gender and community levels, unforeseen in the Global North.
+
+### 5.3 Design considerations for risk minimization
+
+As a general consideration to minimize risks in CV systems, we urge researchers to ensure their designs are context-appropriate for the area of focus. Designing such systems requires expertise in technical domains such as CV and design, but also in the humanities and social sciences to identify potential risks for a given community. As such, we recommend involving more local researchers in the design process. Aside from risk minimization, the participation of local people is critical in ensuring the goals of the system are aligned with the true needs of the community. We refer designers to Irani et al.'s work on postcolonial computing [35] for more in-depth guidance on this design process. For the rest of this section, we provide design considerations for each ethical risk considered in our study.
+
+Privacy violations. We briefly highlight two common privacy-related issues in modern CV systems. First, decision-making processes have become increasingly obfuscated: decisions could unknowingly be based on private attributes rather than the ones expected by the designer and consented to by the user. Second, non-users in shared spaces may unwittingly be treated as users. We refer to a growing body of work on privacy-preserving CV for concrete solutions [16, 69, 71, 76], such as Ren et al.'s face anonymizer for action detection [69].
+
+Discrimination. Discrimination in smart CV systems and general AI systems has been extensively researched in recent years [15, 83]. The use of biased, imbalanced, or mis-adapted data and training schemes is typically the cause of this risk. We present two strategies to address these issues. First, datasets can be adapted and balanced by collecting new context-appropriate data, e.g. ensuring that every gender and ethnicity is equally represented in a human dataset [19]. Second, learning schemes can be modified to achieve certain definitions of fairness [10, 52, 82]. We refer to Barocas et al.'s book [13] for a more in-depth analysis of general fairness-based solutions to minimize discrimination.
+
+Spoofing and adversarial inputs. In general, risk minimization can be achieved by improving system robustness. Solutions include accounting for external conditions that could affect performance (e.g. variable lighting), using sets of robust discriminative features in decision making, and improving measurement quality. However, not all systems are equally prone to the same types of attacks. For example, systems involving neural networks are typically more vulnerable to precisely crafted adversarial inputs [27, 57, 61]. In this context, we refer to recent advances in the field for specific solutions [8, 9, 29, 46].
+
+Security breaches. Our considerations here are limited as the minimization of this risk depends heavily on the type of system. We strongly encourage designers to engage with experts in this field to develop robust context-specific solutions. Nevertheless, general solutions include avoiding system deployments in areas with easily accessible sensitive information and limiting access to the system.
+
+Psychological harms. We urge designers to engage with the communities and involve local experts to determine what constitutes an acceptable and helpful CV system, and to follow these guidelines during design and deployment. We again emphasize how this risk is closely tied to all four others: any occurrence of harm related to the other risks will inevitably lead people to modify their lifestyle. Thus, the minimization of this risk depends directly on the minimization of the other four risks.
+
+### 5.4 Limitations
+
+Data collection. Our analysis does not take into account every use of CV in the Global South. Indeed, we have only considered certain systems that were published as research papers in specific digital libraries and tagged as relevant during our search. We thus only present a partial view of the real practical uses of CV in the Global South, i.e. systems encountered by the public in their daily lives, and the subsequent ethical risks.
+
+Furthermore, we note that our relevance criterion can be interpreted and applied differently. For example, "use of visual data related to the area" could refer to the use of a publicly-available dataset representative of the target location. However, it could also refer to any imagery that appears to be related to the area. Due to the scarcity of such datasets, we opted for the latter definition, which resulted in a more relaxed definition of relevancy.
+
+Finally, we acknowledge that the collected characteristics are for the most part surface-level, in part due to the eclecticism of the corpus: e.g. precise deployment areas (e.g. office spaces) could not be collected uniformly as deployments were not always discussed.
+
+Risk evaluation. First, we were unable to reliably assess the presence of the risks of security breaches and psychological harms in our corpus. While we have attempted to mitigate this issue by estimating the risks' frequency based on our understanding of the systems, our evaluation likely does not represent well their actual presence in the Global South.
+
+Second, we note that our corpus was heavily skewed towards certain Global South countries: 69% of the systems were designed for India, Bangladesh, and Pakistan. We thus acknowledge that our results do not represent every community in the Global South equally well, and may not generalize to all of them.
+
+Finally, we note that our risk frequency and perception analyses directly reflect our views of the problem, namely as designers and researchers from the Global North, which are themselves inspired by previous work [74]. Nevertheless, we believe that our overview of the typical CV-related ethical risks can serve as a basis to assist researchers in designing safe context-appropriate CV systems for the Global South.
+
+## 6 CONCLUSION
+
+We have presented an overview of applications and ethical risks tied to recent advances in CV research for the Global South. Our results indicate that surveillance and policy planning are the most explored application areas. We found research interests to differ notably between regions, with researchers from the North focusing more on broader problems requiring complex technical solutions. Risk-wise, privacy violations appear to be the most likely and most severe risk to arise in CV systems. This last result differs from those of Skirpan and Yeh [74], which were obtained by analyzing both research literature and news events with a focus on the Global North. Taken together, our findings suggest that the actual uses of CV and the importance of its associated ethical risks are region-specific, depending heavily on a community's needs, norms, culture, and resources.
+
+As future work, this study can be extended to non-research CV applications, e.g. commercial systems, to gain a more thorough understanding of the situation in the Global South. Additionally, similarly to what Skirpan and Yeh proposed in their work [74], we believe the next major step in improving our understanding of these issues is to survey direct and indirect users of CV systems and study their interactions in specific areas of the Global South. Only then can we gain a more precise and representative understanding of the importance and the frequency of each of those ethical risks.
+
+## REFERENCES
+
+[1] B. Abelson, K. R. Varshney, and J. Sun. Targeting direct cash transfers to the extremely poor. In Proc. of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pp. 1563-1572. ACM, NY, USA, 2014. doi: 10.1145/2623330.2623335
+
+[2] N. Abokhodair and S. Vieweg. Privacy & social media in the context of the arab gulf. In Proc. of the 2016 ACM Conference on Designing Interactive Systems, DIS '16, p. 672-683. ACM, NY, USA, 2016. doi: 10.1145/2901790.2901873
+
+[3] N. Ahmed, M. N. Islam, A. S. Tuba, M. Mahdy, and M. Sujauddin. Solving visual pollution with deep learning: A new nexus in environmental management. Journal of Environmental Management, 248:109253, October 2019. doi: 10.1016/j.jenvman.2019.07.024
+
+[4] S. Ahmed, M. Haque, J. Chen, and N. Dell. Digital privacy challenges with shared mobile phone use in bangladesh. Proc. of the ACM on Human-Computer Interaction, 1(CSCW), November 2017. doi: 10.1145/3134652
+
+[5] S. I. Ahmed, S. Guha, M. R. Rifat, F. H. Shezan, and N. Dell. Privacy in repair: An analysis of the privacy challenges surrounding broken digital artifacts in bangladesh. In Proc. of the Eighth International Conference on Information and Communication Technologies and Development, ICTD '16. ACM, NY, USA, 2016. doi: 10.1145/2909609.2909661
+
+[6] S. I. Ahmed, M. R. Haque, S. Guha, M. R. Rifat, and N. Dell. Privacy, security, and surveillance in the global south: A study of biometric mobile sim registration in bangladesh. In Proc. of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 906-918. ACM, NY, USA, 2017. doi: 10.1145/3025453.3025961
+
+[7] Y. Akbari, M. J. Jalili, J. Sadri, K. Nouri, I. Siddiqi, and C. Djeddi. A novel database for automatic processing of persian handwritten bank checks. Pattern Recognition, 74(C):253-265, February 2018. doi: 10.1016/j.patcog.2017.09.011
+
+[8] N. Akhtar, J. Liu, and A. Mian. Defense against universal adversarial perturbations. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3389-3398. IEEE, June 2018. doi: 10.1109/CVPR.2018.00357
+
+[9] N. Akhtar and A. Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. doi: 10.1109/ACCESS.2018.2807385
+
+[10] L. Anne Hendricks, K. Burns, K. Saenko, T. Darrell, and A. Rohrbach. Women also snowboard: Overcoming bias in captioning models. In The European Conference on Computer Vision (ECCV), pp. 793-811. Springer, September 2018. doi: 10.1007/978-3-030-01219-9_47
+
+[11] A. Badshah, N. Islam, D. Shahzad, B. Jan, H. Farman, M. Khan, G. Jeon, and A. Ahmad. Vehicle navigation in gps denied environment for smart cities using vision sensors. Computers, Environment and Urban Systems, 77:101281, September 2019. doi: 10.1016/j.compenvurbsys.2018.09.001
+
+[12] S. Banerjee, A. Bhattacharjee, and S. Das. Deep domain adaptation for face recognition using images captured from surveillance cameras. In 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1-12. IEEE, Sep. 2018. doi: 10.23919/BIOSIG.2018.8553278
+
+[13] S. Barocas, M. Hardt, and A. Narayanan. Fairness and Machine Learning. fairmlbook.org, 2019. http://www.fairmlbook.org.
+
+[14] A. L. Blank. Computer vision machine learning and future-oriented ethics. Honors Projects, 107:1-15, 2019.
+
+[15] J. Buolamwini and T. Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (FAT), vol. 81 of Proc. of Machine Learning Research, pp. 77-91. PMLR, 2018.
+
+[16] J. Chen, J. Konrad, and P. Ishwar. Vgan-based image representation learning for privacy-preserving facial expression recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018.
+
+[17] J. M. Corbin and A. Strauss. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative sociology, 13(1):3-21, 1990. doi: 10.1007/BF00988593
+
+[18] A. Danti, J. Y. Kulkarni, and P. Hiremath. An image processing approach to detect lanes, pot holes and recognize road signs in indian roads. International Journal of Modeling and Optimization, 2(6):658-662, December 2012. doi: 10.7763/IJMO.2012.V2.204
+
+[19] T. de Vries, I. Misra, C. Wang, and L. van der Maaten. Does object recognition work for everyone? In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019.
+
+[20] N. Dell and G. Borriello. Mobile tools for point-of-care diagnostics in the developing world. In Proc. of the 3rd ACM Symposium on Computing for Development, ACM DEV '13, pp. 9:1-9:10. ACM, NY, USA, 2013. doi: 10.1145/2442882.2442894
+
+[21] N. Dell, J. Crawford, N. Breit, T. Chaluco, A. Coelho, J. McCord, and G. Borriello. Integrating ODK scan into the community health worker supply chain in mozambique. In Proc. of the Sixth International Conference on Information and Communication Technologies and Development, ICTD '13, pp. 228-237. ACM, NY, USA, 2013. doi: 10.1145/2516604.2516611
+
+[22] N. Dell, I. Francis, H. Sheppard, R. Simbi, and G. Borriello. Field evaluation of a camera-based mobile health system in low-resource settings. In Proc. of the 16th International Conference on Human-computer Interaction with Mobile Devices & Services, MobileHCI '14, pp. 33-42. ACM, NY, USA, 2014. doi: 10.1145/2628363.2628366
+
+[23] M. M. L. Elahi, R. Yasir, M. A. Syrus, M. S. Q. Z. Nine, I. Hossain, and N. Ahmed. Computer vision based road traffic accident and anomaly detection in the context of bangladesh. In 2014 International Conference on Informatics, Electronics Vision (ICIEV), pp. 1-6. IEEE, May 2014. doi: 10.1109/ICIEV.2014.6850780
+
+[24] U. Farooq, M. U. Asad, F. Rafiq, G. Abbas, and A. Hanif. Application of machine vision for performance enhancement of footing machine used in leather industry of pakistan. In Eighth International Conference on Digital Information Management (ICDIM 2013), pp. 149-154. IEEE, Sep. 2013. doi: 10.1109/ICDIM.2013.6694000
+
+[25] V. Flusser and N. A. Roth. Into the Universe of Technical Images, vol. 32. University of Minnesota Press, ned - new edition ed., 2011.
+
+[26] C. Garvie, A. Bedoya, and J. Frankle. The perpetual line-up: Unregulated police face recognition in america. October 2016.
+
+[27] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
+
+[28] B. J. Gram-Hansen, P. Helber, I. Varatharajan, F. Azam, A. Coca-Castro, V. Kopackova, and P. Bilinski. Mapping informal settlements in developing countries using machine learning and low resolution multi-spectral data. In Proc. of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, pp. 361-368. ACM, NY, USA, 2019. doi: 10.1145/3306618.3314253
+
+[29] C. Guo, M. Rana, M. Cisse, and L. van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations, pp. 1-12, 2018.
+
+[30] M. T. Habib, A. Majumder, A. Jakaria, M. Akter, M. S. Uddin, and F. Ahmed. Machine vision based papaya disease recognition. Journal of King Saud University - Computer and Information Sciences, 2018. doi: 10.1016/j.jksuci.2018.06.006
+
+[31] S. L. Happy, P. Patnaik, A. Routray, and R. Guha. The indian spontaneous expression database for emotion recognition. IEEE Transactions on Affective Computing, 8(1):131-142, January-March 2017. doi: 10.1109/TAFFC.2015.2498174
+
+[32] S. M. T. Haque, P. Saha, M. S. Rahman, and S. I. Ahmed. Of ulti, "hajano", and 'matachetar otanetak datam': Exploring local practices of exchanging confidential and sensitive information in urban bangladesh. Proc. ACM Hum.-Comput. Interact., 3(CSCW), November 2019. doi: 10.1145/3359275
+
+[33] S. Helleringer, C. You, L. Fleury, L. Douillot, I. Diouf, C. T. Ndiaye, V. Delaunay, and R. Vidal. Improving age measurement in low- and middle-income countries through computer vision: A test in senegal. Demographic Research, 40:219-260, January 2019. doi: 10.4054/DemRes.2019.40.9
+
+[34] W. Hu, J. H. Patel, Z.-A. Robert, P. Novosad, S. Asher, Z. Tang, M. Burke, D. Lobell, and S. Ermon. Mapping missing population in rural india: A deep learning approach with satellite imagery. In Proc. of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, pp. 353-359. ACM, NY, USA, 2019. doi: 10.1145/3306618.3314263
+
+[35] L. Irani, J. Vertesi, P. Dourish, K. Philip, and R. E. Grinter. Postcolonial computing: A lens on design and development. In Proc. of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 1311-1320. ACM, NY, USA, 2010. doi: 10.1145/1753326.1753522
+
+[36] S. Islam, S. S. S. Mousumi, A. S. A. Rabby, S. A. Hossain, and S. Abujar. A potent model to recognize bangla sign language digits using convolutional neural network. Procedia Computer Science, 143:611-618, 2018. doi: 10.1016/j.procs.2018.10.438
+
+[37] A. Issac, A. Srivastava, A. Srivastava, and M. K. Dutta. An automated computer vision based preliminary study for the identification of a heavy metal (hg) exposed fish-channa punctatus. Computers in Biology and Medicine, 111:103326, August 2019. doi: 10.1016/j.compbiomed.2019.103326
+
+[38] V. Jain, Z. Sasindran, A. Rajagopal, S. Biswas, H. S. Bharadwaj, and K. R. Ramakrishnan. Deep automatic license plate recognition system. In Proc. of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP '16, pp. 6:1-6:8. ACM, NY, USA, 2016. doi: 10.1145/3009977.3010052
+
+[39] I. Kalra, M. Singh, S. Nagpal, R. Singh, M. Vatsa, and P. B. Sujit. Dronesurf: Benchmark dataset for drone-based face recognition. In 2019 14th IEEE International Conference on Automatic Face Gesture Recognition (FG 2019), pp. 1-7. IEEE, May 2019. doi: 10.1109/FG.2019.8756593
+
+[40] H. A. Khan, M. Parvez, S. Shahid, and A. Javaid. A comparative study on the effectiveness of adaptive exergames for stroke rehabilitation in pakistan. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA '18, pp. LBW044:1-LBW044:6. ACM, NY, USA, 2018. doi: 10.1145/3170427.3188523
+
+[41] A. Kinai, R. E. Bryant, A. Walcott-Bryant, E. Mibuari, K. Weldemariam, and O. Stewart. Twende-twende: A mobile application for traffic congestion awareness and routing. In Proc. of the 1st International Conference on Mobile Software Engineering and Systems, MOBILESoft 2014, pp. 93-98. ACM, NY, USA, 2014. doi: 10.1145/2593902.2593926
+
+[42] E. Kocabey, M. Camurcu, F. Ofli, Y. Aytar, J. Marin, A. Torralba, and I. Weber. Face-to-bmi: using computer vision to infer body mass index on social media. In Eleventh International AAAI Conference on Web and Social Media. AAAI Press, 2017.
+
+[43] T. Kumar, S. Gupta, and D. S. Kushwaha. An efficient approach for automatic number plate recognition for low resolution images. In Proc. of the Fifth International Conference on Network, Communication and Computing, ICNCC '16, pp. 53-57. ACM, NY, USA, 2016. doi: 10.1145/3033288.3033332
+
+[44] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. How we analyzed the compas recidivism algorithm - propublica, 2016.
+
+[45] M. Lauronen. Ethical issues in topical computer vision applications. November 2017.
+
+[46] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1778-1787. IEEE, June 2018. doi: 10.1109/CVPR.2018.00191
+
+[47] B. C. Loh and P. H. Then. Mobile Imagery EXchange (MIX) toolkit: Data sharing for the unconnected. Personal and Ubiquitous Computing, 19(3-4):723-740, July 2015. doi: 10.1007/s00779-015-0835-2
+
+[48] K. Lum and W. Isaac. To predict and serve? Significance, 13(5):14-19, October 2016. doi: 10.1111/j.1740-9713.2016.00960.x
+
+[49] K. Maeda, T. Morimura, T. Katsuki, and M. Teraguchi. Frugal signal control using low resolution web-camera and traffic flow estimation. In Proc. of the 2014 Winter Simulation Conference, WSC '14, pp. 2082-2091. IEEE, Piscataway, NJ, USA, 2014. doi: 10.5555/2693848.2694108
+
+[50] M. McFarland. Terrorist or pedophile? This start-up says it can out secrets by analyzing faces. The Washington Post, May 2016.
+
+[51] G. Mittal, K. B. Yagnik, M. Garg, and N. C. Krishnan. Spotgarbage: Smartphone app to detect garbage using deep learning. In Proc. of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 940-945. ACM, NY, USA, 2016. doi: 10.1145/2971648.2971731
+
+[52] A. Morales, J. Fiérrez, and R. Vera-Rodríguez. Sensitivenets: Learning agnostic representations with application to face recognition. arXiv, arXiv/1902.00334, 2019.
+
+[53] P. Mozur. One month, 500,000 face scans: How china is using a.i. to profile a minority. The New York Times, April 2019.
+
+[54] P. Mozur, J. M. Kessel, and M. Chan. Made in china, exported to the world: The surveillance state. The New York Times, April 2019.
+
+[55] H. Murad, N. I. Tripto, and M. E. Ali. Developing a bangla currency recognizer for visually impaired people. In Proc. of the Tenth International Conference on Information and Communication Technologies and Development, pp. 56:1-56:5. ACM, NY, USA, 2019. doi: 10.1145/3287098.3287152
+
+[56] C. S. Neigh, M. L. Carroll, M. R. Wooten, J. L. McCarty, B. F. Powell, G. J. Husak, M. Enenkel, and C. R. Hain. Smallholder crop area mapped with wall-to-wall worldview sub-meter panchromatic image texture: A test case for tigray, ethiopia. Remote Sensing of Environment, 212:8-20, 2018. doi: 10.1016/j.rse.2018.04.025
+
+[57] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427-436. IEEE, 2015.
+
+[58] S. J. Oh, R. Benenson, M. Fritz, and B. Schiele. Faceless person recognition: Privacy implications in social media. In European Conference on Computer Vision, pp. 19-35. Springer, 2016. doi: 10.1007/978-3-319-46487-9_2
+
+[59] B. Oshri, A. Hu, P. Adelson, X. Chen, P. Dupas, J. Weinstein, M. Burke, D. Lobell, and S. Ermon. Infrastructure quality assessment in africa using satellite imagery and deep learning. In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pp. 616-625. ACM, NY, USA, 2018. doi: 10.1145/3219819.3219924
+
+[60] T. Osman, S. S. Psyche, J. M. S. Ferdous, and H. U. Zaman. Intelligent traffic management system for cross section of roads using computer vision. In 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), pp. 1-7. IEEE, Jan 2017. doi: 10.1109/CCWC.2017.7868350
+
+[61] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy (EuroS P), pp. 372-387. IEEE, 2016.
+
+[62] J. Pierce. Smart home security cameras and shifting lines of creepiness: A design-led inquiry. In Proc. of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. ACM, NY, USA, 2019. doi: 10.1145/3290605.3300275
+
+[63] P. Prasobhkumar, C. Francis, and S. S. Gorthi. Automated quality assessment of cocoons using a smart camera based system. Engineering in Agriculture, Environment and Food, 11(4):202-210, October 2018. doi: 10.1016/j.eaef.2018.05.002
+
+[64] A. S. A. Rabby, S. Haque, S. Islam, S. Abujar, and S. A. Hossain. Bornonet: Bangla handwritten characters recognition using convolutional neural network. Procedia Computer Science, 143:528-535, 2018. doi: 10.1016/j.procs.2018.10.426
+
+[65] M. A. Rahaman, M. Jasim, M. H. Ali, and M. Hasanuzzaman. Real-time computer vision-based bengali sign language recognition. In 2014 17th International Conference on Computer and Information Technology (ICCIT), pp. 192-197. IEEE, Dec 2014. doi: 10.1109/ICCITechn.2014.7073150
+
+[66] J. L. Raheja, S. Deora, and A. Chaudhary. Cross border intruder detection in hilly terrain in dark environment. Optik, 127(2):535-538, January 2016. doi: 10.1016/j.ijleo.2015.08.234
+
+[67] H. G. Rai, K. Jonna, and P. R. Krishna. Video analytics solution for tracking customer locations in retail shopping malls. In Proc. of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, pp. 773-776. ACM, NY, USA, 2011. doi: 10.1145/2020408.2020533
+
+[68] H. T. Rauf, B. A. Saleem, M. I. U. Lali, M. A. Khan, M. Sharif, and S. A. C. Bukhari. A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data in Brief, 26:104340, October 2019. doi: 10.1016/j.dib.2019.104340
+
+[69] Z. Ren, Y. Jae Lee, and M. S. Ryoo. Learning to anonymize faces for privacy preserving action detection. In The European Conference on Computer Vision (ECCV). Springer, 2018. doi: 10.1007/978-3-030-01246-5_38
+
+[70] D. Roy and K. M. C. Snatch theft detection in unconstrained surveillance videos using action attribute modelling. Pattern Recognition Letters, 108:56-61, June 2018. doi: 10.1016/j.patrec.2018.03.004
+
+[71] M. S. Ryoo, B. Rothrock, C. Fleming, and H. J. Yang. Privacy-preserving human activity recognition from extreme low resolution. In AAAI Conference on Artificial Intelligence. AAAI Press, 2017.
+
+[72] S. Z. H. Shah, H. T. Rauf, M. IkramUllah, M. S. Khalid, M. Farooq, M. Fatima, and S. A. C. Bukhari. Fish-pak: Fish species dataset from pakistan for visual features based classification. Data in Brief, 27:104565, December 2019. doi: 10.1016/j.dib.2019.104565
+
+[73] R. Singh and S. J. Jackson. From margins to seams: Imbrication, inclusion, and torque in the aadhaar identification project. In Proc. of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 4776-4824. ACM, NY, USA, 2017. doi: 10.1145/3025453.3025910
+
+[74] M. Skirpan and T. Yeh. Designing a moral compass for the future of computer vision using speculative analysis. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1368-1377, July 2017. doi: 10.1109/CVPRW.2017.179
+
+[75] L. T. Smith. Decolonizing methodologies: Research and indigenous peoples. Zed Books Ltd., 2013.
+
+[76] P. Speciale, J. L. Schonberger, S. B. Kang, S. N. Sinha, and M. Pollefeys. Privacy preserving image-based localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5488-5498. IEEE, June 2019. doi: 10.1109/CVPR.2019.00564
+
+[77] U. N. Statistics Division. Standard country or area codes for statistical use (M49), 2017.
+
+[78] S. Suwajanakorn, S. M. Seitz, and I. Kemelmacher-Shlizerman. Synthesizing obama: Learning lip sync from audio. ACM Trans. Graph., 36(4):95:1-95:13, July 2017. doi: 10.1145/3072959.3073640
+
+[79] G. Varma, A. Subramanian, A. Namboodiri, M. Chandraker, and C. Jawahar. IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1743-1751. IEEE, 2019. doi: 10.1109/WACV.2019.00190
+
+[80] Y. Wang and M. Kosinski. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of personality and social psychology, 114(2):246-257, February 2018. doi: 10.1037/pspa0000098
+
+[81] E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky. Few-shot adversarial learning of realistic neural talking head models. arXiv, arXiv:1905.08233, 2019.
+
+[82] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In International Conference on Machine Learning, vol. 28 of Proceedings of Machine Learning Research, pp. 325-333. PMLR, Atlanta, Georgia, USA, 2013. doi: 10.5555/3042817.3042973
+
+[83] J. Zou and L. Schiebinger. AI can be sexist and racist - it's time to make it fair. Nature, 559(7714):324-326, July 2018. doi: 10.1038/ d41586-018-05707-8
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0b4c32966f7afdbb3248cf83bff8dc7448d0bc33
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QLFSDNIvI/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,419 @@
+# COMPUTER VISION APPLICATIONS AND THEIR ETHICAL RISKS IN THE GLOBAL SOUTH
+
+Charles-Olivier Dufresne-Camaro*, Fanny Chevalier†, Syed Ishtiaque Ahmed‡
+
+*†‡ Department of Computer Science, † Department of Statistical Sciences, University of Toronto
+
+## ABSTRACT
+
+We present a study of recent advances in computer vision (CV) research for the Global South to identify the main uses of modern CV and its most significant ethical risks in the region. We review 55 research papers and analyze them along three principal dimensions: where the technology was designed, the needs addressed by the technology, and the potential ethical risks arising following deployment. Results suggest: 1) CV is most used in policy planning and surveillance applications, 2) privacy violation is the most likely and most severe risk to arise from modern CV systems designed for the Global South, and 3) researchers from the Global North differ from researchers from the Global South in their uses of CV to solve problems in the Global South. The results of our risk analysis also differ from previous work on CV risk perception in the West, suggesting locality to be a critical component of each risk's importance.
+
+Index Terms: General and reference-Document types-Surveys and overviews; Computing methodologies-Artificial intelligence-Computer vision-Computer vision tasks; Social and professional topics-User characteristics; Social and professional topics-Computing / technology policy
+
+## 1 INTRODUCTION
+
+In recent years, computer vision (CV) systems have become increasingly ubiquitous in many parts of the world, with applications ranging from assisted photography on smartphones, to drone-monitored agriculture. Current popular problems in the field include autonomous driving, affective computing and general activity recognition; each application has the potential to significantly impact the way we live in society.
+
+Computer vision has also been leveraged for more controversial applications, such as sensitive attributes predictors for sexual orientation [80] and body weight [42], DeepFakes [78,81] and automated surveillance systems [53]. In particular, this last application has been shown to have the potential to severely restrict people's privacy and freedom, and compromise security. As one example, we note the case of China and their current use of surveillance systems to identify, track and target Uighurs [53], a minority group primarily of Muslim faith. Similar systems with varying functionalities have also been exported to countries such as Ecuador [54].
+
+While advances in CV have led to considerable technological improvements, they also raise a great deal of ethical issues that cannot be disregarded. We stress that it is critical that potential societal implications be studied as new technologies are developed. We recognize that this is a particularly difficult endeavour, as one often has very little control over how technology is used downstream. Moreover, the recent focus on designing for global markets makes it near impossible to consider all local uses and adaptations: a contribution making a positive impact in the Global North may turn out to be a harmful, restrictive one in the Global South.¹
+
+Several groups have come forward in the last decade to raise ethical concerns over certain smart systems, CV and general-AI systems alike, with reports of discrimination [14, 15, 44, 48] and privacy violations [26, 50, 58, 62]. One notable example comes from Joy Buolamwini, who exposed discriminatory issues in several face analysis systems [15]. However, very few have explored the more focused problem of ethics in CV. We believe this to be an important problem, as restricting the scope to CV allows more domain-specific, actionable solutions to emerge while keeping sight of the big-picture problems in the field.
+
+To our knowledge, only two attempts to study this problem have been undertaken in modern CV research [45, 74]. While insightful, these works paint ethical issues in CV with a coarse brush, discarding subtleties of how these risks may vary across local contexts. Furthermore, the literature surveyed in both works focuses extensively on CV in the West and major tech hubs, raising concerns over how well their findings generalize to other, distinct communities. Considering that people in the Global South are important users of technology, most notably smartphones, we believe it is of utmost importance to extend this study of CV-driven risks to their communities.
+
+In this work, we conduct a survey of CV research literature with applications in the Global South to gain a better understanding of CV uses and the underlying problems in this area. While studying research literature may provide a weaker understanding of the true problem space compared to studying existing systems in the field, we believe our method forms a strong preliminary step in studying this important problem. In doing so, we aim to provide further guidance to designers in building safe context-appropriate solutions with CV components.
+
+To guide our search in identifying potential risks, we use Skirpan and Yeh's moral compass [74] (Sect. 2). Through our analysis (Sect. 3-5), we aim to answer the following research questions:
+
+RQ1. What are the main applications of modern computer vision when used in the Global South?
+
+RQ2. What are the most significant computer vision-driven ethical risks in a context of use in the Global South?
+
+RQ3. How do research interests of Global North researchers differ from those of Global South researchers in the context of computer vision research for the Global South?
+
+RQ4. How do the risks associated with these technologies when used in the Global South differ from those in the Global North?
+
+It is our belief that while certain CV technologies and algorithms are equally available to people in the North and South, the resulting uses, implications, and risks will differ. Our contributions are fourfold: 1) we present an overview of research where CV technology is deployed or studied in a geographical context within the Global South, and the needs it aims to address; 2) we identify the principal ethical risks at play in recent CV research in this context; 3) we show that research interests in CV applications for the Global South differ between Global South and Global North researchers; and 4) we show that the importance of ethical risks in CV systems differs across contexts. Finally, we also reflect on our findings and propose design considerations to minimize risks in CV systems for the Global South.
+
+* e-mail: camaro@cs.toronto.edu
+
+† e-mail: fanny@cs.toronto.edu
+
+‡ e-mail: ishtiaque@cs.toronto.edu
+
+¹ We use the United Nations' M49 Standard [77] to define the Global North (developed countries) and the Global South (developing countries).
+
+## 2 FRAMEWORK
+
+We are aware of only two studies aiming at identifying ethical risks in computer vision. The first, by Lauronen [45], surveys recent CV literature and briefly identifies six themes of ethical issues in CV: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation.
+
+The second, by Skirpan and Yeh [74], takes the form of a moral compass (framework) aimed at guiding CV researchers in protecting the public. In this framework, CV risks fall into five categories: privacy violations, discrimination, security breaches, spoofing and adversarial inputs, and psychological harms. Using those categories, the authors propose 22 risk scenarios based on recent research and real-life events. In addition, they present a likelihood of occurrence risk evaluation along with a preliminary risk perception evaluation based on uncertainty (i.e. how much is known about it) and severity (i.e. how impactful it is).
+
+To guide our evaluation of CV in a Global South context, we adopt Skirpan and Yeh's framework for two reasons: 1) we find its set of risk categories generalizes better to non-Western countries, and 2) it allows us to directly compare the results of our risk analysis in the Global South to those of Skirpan and Yeh's risk analysis in the West [74]. Nevertheless, we emphasize that the risk categories are by no means exhaustive of every current risk related to modern CV systems. Indeed, the field is still evolving at a considerable rate, and new risks may not fit well into any of these categories, e.g. DeepFakes.
+
+To illustrate the scope of each risk category, we present below one risk scenario extracted from the framework [74], along with a brief description of the risk. Note that we make no distinction between the deliberate and accidental presence of a risk, e.g. unplanned discrimination caused by the use of biased data to train a CV system.
+
+Privacy violations
+
+"Health insurance premium is increased due to inferences from online photos." - Scenario #1 [74]
+
+Privacy violations are defined as having one's private information collected without the person's consent by a third party via the use of a CV system. This includes data such as body measurements, sexual orientation, income, relationships, travels and activities.
+
+Discrimination
+
+"Xenophobia leads police forces to track and target foreigners." — Scenario #9 [74]
+
+Discrimination occurs when someone is treated unjustly by a CV system due to certain sensitive characteristics that are irrelevant to the situation. Such characteristics include religious beliefs, race, gender and sexual orientation.
+
+Security breaches
+
+"Security guard sells footage of public official typing in a password to allow for a classified information leak." — Scenario #5 [74]
+
+Skirpan and Yeh broadly define security breaches as a malicious actor(s) exploiting vulnerabilities in CV systems to disable them, steal private information or employ them in cyberattacks.
+
+Spoofing and adversarial inputs
+
+"Automated public transportation that uses visual verification systems is attacked causing a crash." — Scenario #8 [74]
+
+In this context, the objective of spoofing and adversarial inputs is to push a CV system to react confidently in an erroneous and harmful manner. Examples include misclassifying people or situations, e.g. recognizing a robber as a bank employee, or misclassifying fraudulent documents as authentic.
+
+Psychological harms
+
+"People will not attend protests they agree with due to fear of recognition by cameras and subsequent punishment." — Scenario #2 [74]
+
+At a high level, any negative change in how people live based on their interactions with CV systems is considered a psychological harm. Unlike the other categories, this risk is more passive and results from longer-term interactions with CV systems.
+
+We stress that the risk scenarios and their analysis were intended to be general and to reflect "normal" life in the Global North. Therefore, they likely do not generalize well to many areas of the Global South. For example, scenario #1, used to illustrate privacy violations, takes for granted that access to health insurance is available and that people have some sort of online presence. It also assumes that a health insurance company has access to enough data on the population to make any kind of reliable inference.
+
+For this reason, we only use the risk categories to guide our analysis. We believe those generalize well in all areas with CV systems, even though they may not cover the full risk space, and what constitutes sensitive or private information may vary across communities [2, 6].
+
+§ 3 METHODOLOGY
+
+In the context of this work, we consider a prior work (system) to be relevant to our study if it identifies or addresses specific needs of Global South communities through the use of CV techniques. This criterion can be satisfied in multiple ways: direct deployment of a system in the area, use of visual data related to the area, description of local problems related to CV research, and so forth. Works exploring specific problems in more general settings are excluded, e.g. discrimination at large in CV-driven surveillance systems. We also exclude from our study systems designed for China, given its current economic situation.
+
+To find relevant research papers, we searched four digital libraries: ACM Digital Library, IEEE Xplore Digital Library, ScienceDirect, and JSTOR. We chose these four platforms to maximize our search coverage of both applied research conferences and journals and more fundamental research publication venues.
+
+We limited our search to research papers published between 2010 and 2019 to focus on recent advances in the field. We included all publication venues as we wished to be inclusive of all CV researchers focusing on problems in the Global South. We consider inclusion to be critical in achieving a thorough understanding of the many different needs addressed by CV researchers, and the ethical risks of CV research in the Global South.
+
+To identify appropriate search queries, we first began with an exploratory search on ACM Digital Library. We defined our search queries as two components: one CV term and one contextual term, e.g. "smart camera" AND "Global South". To build our initial list of search queries, we selected as CV terms common words and phrases used in CV literature, e.g. "camera", "drone" and "machine vision". As contextual terms, we chose common words referring to HCI4D/ICTD research areas and locations, and ethical risks in technology, e.g. "privacy", "development", "poverty" and "Bangladesh". Our initial list consisted of 15 CV terms and 24 contextual terms, resulting in 360 search queries.
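+Sketched in code, this query construction is a simple cross-product of the two term lists. The lists below are illustrative subsets, not the full 15 CV terms and 24 contextual terms used in the study:

```python
from itertools import product

# Illustrative subsets of the term lists (the full study used 15 CV
# terms and 24 contextual terms, yielding 15 * 24 = 360 queries).
cv_terms = ["computer vision", "smart camera", "drone"]
contextual_terms = ["Global South", "privacy", "Bangladesh", "ICTD"]

# Each query pairs one CV term with one contextual term.
queries = [f'"{cv}" AND "{ctx}"' for cv, ctx in product(cv_terms, contextual_terms)]

print(len(queries))  # 12 queries for these subsets
print(queries[0])    # "computer vision" AND "Global South"
```

+With the full term lists, the same cross-product yields the 360 queries reported above.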
+
+
+Figure 1: Distribution of publication years for the research papers in our corpus (n = 55).
+
+We selected papers based on their titles, looking only at the first 100 results (ranked using ACM DL's relevance criterion). We then discarded terms based on the relevance of the selected papers. Terms that were too general and returned too many irrelevant articles (e.g. the CV term "mobile phone" or the contextual term "development"), or that consistently returned no relevant content (e.g. the CV term "webcam" or the contextual term "racism"), were discarded.
+
+We retained 5 CV terms (Computer Vision, Drone, Smart Camera, Machine Vision, Face Recognition) and 10 contextual terms (ICTD, ICT4D, Global South, Surveillance, Ethics, Pakistan, Bangladesh, India, Africa, Ethiopia) for a total of 50 search queries per platform. This selection allowed us to efficiently cover a wide spectrum of CV research areas (CV terms) focused on Global South countries (contextual terms). Additionally, it provided an alternative way of covering relevant work that considers risks and ethics in CV (via the contextual terms).
+
+A total of 470 papers were collected across all four platforms using our defined search queries. We then carefully read each abstract and skimmed through each paper to evaluate its relevance using the aforementioned criterion. Ultimately, 55 unique research papers were deemed relevant.
+
+For each paper, we noted the publication year, the country the CV system was designed for (target location), and the country (researchers' location) and region (Global South or Global North) of the researchers who designed the systems (i.e. the country where the authors' institution is located²). Additionally, we noted the main topic of the visual data used by the system (e.g. cars), the application area (i.e. the addressed needs, e.g. healthcare), and the ethical risks at play defined in Sect. 2. For all risks, we only considered likely occurrences, i.e. cases where a system appears to exhibit certain direct risks following normal use.
+
+We did not collect more fine-grained characteristics such as precise deployment areas in our data collection process due to the eclecticism of our corpus: deployments, if they occurred, were not always discussed in detail.
+
+The first author then performed open and axial coding [17] on both the application areas and data topics to extract the principal categories best representing the corpus.
+
+§ 4 RESULTS
+
+In this section, we describe the characteristics of the research papers collected (n = 55) for our analysis. Table 1 shows an excerpt of our final data collection table used to generate the results below. The complete table is included in the supplemental material. Publication years are shown in Fig. 1.
+
+§ 4.1 LOCATION
+
+Researchers' location. Of the 55 articles, 41 were authored by researchers from the Global South and 14 by researchers from the Global North. Specifically, Global South researchers were located in India (20), Bangladesh (10), Pakistan (6), Brazil (1), Iran (1), Iraq (1), Kenya (1), and Malaysia (1). Global North researchers were located in US (11), Germany (1), Japan (1), and South Korea (1).
+
+Target location. For papers authored by Global South researchers, the systems were designed for, or deployed in, the same country as the researchers' institutions. For the 14 systems designed by Global North researchers, the target locations were varied: Ethiopia (2), India (2), Kenya (1), Mozambique (1), Pakistan (1), Senegal (1), and Zimbabwe (1). We also note five systems designed to cover broader areas: the Global South (4) and Africa (1).
+
+§ 4.2 DATA TOPICS
+
+Using open and axial coding [17], we extracted six broad categories covering all visual data topics found in our corpus: characters, human, medicine, outdoor scenes, products, and satellite. Each paper is labeled with only one category: the one that best fits its specific data topic.
+
+Characters. This category encompasses any visual data that can be used as the input of an optical character recognition system. Systems using text [64], license plates [38, 43], checks [7], and banknotes [55] fit into this category.
+
+Human. Included in this category are any visual data where humans are the main subjects. Such data is typically used in face recognition [12, 39], sign recognition [36, 65], tracking [67], detection [66, 70], and attribute estimation [33]. In our corpus, faces [12, 31, 39] and actions [36, 65, 66, 70] represent the main objects of study using such data.
+
+Medicine. Medical data is tied to the study of people's health. We include in this category any image or video of people with a focus on healthcare-related attributes [47], medical documents [21], and diagnostic test images [20, 22].
+
+Outdoor scenes. Any visual data in which the main focus is on outdoor scenes is grouped under this category. This includes data collected by humans [51], but also by cameras mounted on vehicles [11, 18, 79] and by surveillance cameras [23, 49]. However, we exclude airborne and satellite imagery (i.e. high-altitude, top-down views) from this category.
+
+Products. We group in this category visual data focusing on objects resulting from human labour and fabrication processes such as fruit [30,68], fish [37,72], silk [63], and leather [24].
+
+Satellite. Airborne and satellite imagery data, such as multi-spectral imagery [59], are grouped under this category.
+
+The distribution of these topics in our corpus is shown in Fig. 2. Fig. 3 shows how this distribution varies with the researchers' region.
+
+§ 4.3 APPLICATION AREAS
+
+Through the use of open and axial coding [17], we identified seven broad application area categories representing all applications found in our corpus: agriculture and fishing, assistive technology, healthcare, policy planning, safety, surveillance, and transportation.
+
+Agriculture and fishing. We include in this category applications related to agriculture, fishing, and industrial uses of goods from these areas. For example, systems were designed to automatically recognize papaya diseases [30], identify fish exposed to heavy metals [37], and estimate crop areas in Ethiopia using satellite data [56].
+
+² For papers with authors from different countries, we noted the location most shared by the authors.
+
+| Title | Year | Target Location | Researchers' Location (Region) | Data Topic | Application Areas | Risks |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cross border intruder detection in hilly terrain in dark environment [66] | 2016 | India | India (Global South) | Human | Policy plan., Surveillance | Discrimination, Privacy, Spoofing |
+| Field evaluation of a camera-based mobile health system in low-resource settings [22] | 2014 | Zimbabwe | United States (Global North) | Medicine | Healthcare | N/A |
+
+Table 1: Excerpt from the final data collection log used in this study. The complete table is included in the supplemental material.
+
+
+Figure 2: Distribution of visual data topics in our corpus (n = 55).
+
+
+Figure 3: Normalized distributions of data topics for systems designed by researchers from the Global North (blue, n = 14) and from the Global South (orange, n = 41).
+
+Assistive technology. Research papers in this category study the design of CV systems assisting people with disabilities in doing everyday tasks. This includes Bengali sign language recognition systems [36, 65], the Indian Spontaneous Expression Database used for emotion recognition [31], and a Bangla currency recognition system to assist people with visual impairments [55].
+
+Healthcare. CV research focused on healthcare-related topics fall into this category. We note adaptive games for stroke rehabilitation patients in Pakistan using CV technology to track players [40]. We also include systems used to analyze diagnostic tests [22], and digitize medical forms [21].
+
+Policy planning. We include in this category policies tied to urban planning, development planning, environment monitoring, traffic control, and general quality-of-life improvements. We note SpotGarbage used to detect garbage in the streets of India [51], a visual pollution detector in Bangladesh [3], and various traffic monitoring systems in Kenya [41] and Bangladesh [23,60]. We also highlight uses of satellite data to estimate the poorest villages in Kenya to guide cash transfers towards people in need [1], and to gain a better understanding of rural populations of India [34].
+
+Safety. In the context of our work, we define safety as the protection of people from general harms. We include in this category CV systems for bank check fraud detection [7], food toxicity detection [37], and informal settlement detection [28].
+
+
+Figure 4: Distribution of application area categories (n = 80) in our corpus (n = 55).
+
+
+Figure 5: Normalized distributions of application areas for systems designed by researchers from the Global North (blue, n = 23) and from the Global South (orange, n = 57).
+
+Surveillance. Surveillance applications refer to systems monitoring people or activities to protect a specific group from danger. We make no distinction between explicit and secret acts of surveillance, i.e. being monitored without one's consent. We found systems for theft detection [70], border intrusion detection in India [66], automatic license plate recognition [38], and tracking customers [67].
+
+Transportation. CV systems in this category focus on improving and understanding transport conditions in the Global South. The majority of the systems identified in our corpus were aimed at monitoring and reducing traffic [49, 60] and accidents [23]. We also note systems addressing the problem of autonomous driving [11, 79].
+
+| Data topic | Agri. & Fishing | Assistive Tech. | Healthcare | Policy Plan. | Safety | Surveillance | Transportation |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Characters | 0 | 2 | 0 | 0 | 1 | 3 | 3 |
+| Human | 0 | 3 | 2 | 2 | 0 | 9 | 0 |
+| Medicine | 0 | 0 | 4 | 0 | 0 | 0 | 0 |
+| Outdoor scenes | 0 | 0 | 0 | 7 | 4 | 2 | 8 |
+| Products | 12 | 0 | 0 | 1 | 1 | 0 | 0 |
+| Satellite | 1 | 0 | 0 | 9 | 5 | 1 | 0 |
+
+Figure 6: Co-occurrence frequencies between data topics and application areas in our corpus (n = 55).
+
+
+Figure 7: Distribution of risks (n = 80) in our corpus (n = 55). 18 papers had no identified risks.
+
+
+Figure 8: Normalized distributions of risks (n = 80) for systems designed by researchers from the Global North (blue, n = 16) and from the Global South (orange, n = 64).
+
+Each paper was tagged with at least one category and at most three categories (median = 1). The distribution of application areas is shown in Fig. 4, followed by the normalized distributions for each researchers' region in Fig. 5. We present the global co-occurrence frequencies between data topics and application areas in Fig. 6.
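+These co-occurrence frequencies can be reproduced mechanically from the per-paper tags. A minimal sketch, using hypothetical paper records rather than our actual coding log:

```python
from collections import Counter

# Hypothetical paper records: one data topic each, and one to three
# application areas, mirroring the coding scheme described above.
papers = [
    {"topic": "Human", "areas": ["Surveillance"]},
    {"topic": "Human", "areas": ["Policy Plan.", "Surveillance"]},
    {"topic": "Satellite", "areas": ["Policy Plan."]},
]

# Count one (topic, area) co-occurrence per area tag per paper.
cooccurrence = Counter(
    (paper["topic"], area) for paper in papers for area in paper["areas"]
)

print(cooccurrence[("Human", "Surveillance")])  # 2
```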
+
+§ 4.4 RISKS
+
+We briefly illustrate how we assessed the presence of risks in CV systems (Sect. 3), using two examples from our corpus. Note that we did not consider intent in our analysis; instead, we focused strictly on the possibility of enabling certain risks.
+
+First, we consider a system using satellite imagery designed to identify informal settlements in the Global South, informing NGOs of regions that could benefit the most from their aid [28]. While the system focuses on areas rather than individuals, it nevertheless collects information on vulnerable communities without their consent (privacy violations). Additionally, a malicious actor aware of the system's functionalities could modify settlements to hide them in satellite imagery, e.g. by covering rooftops with different materials (spoofing). For these reasons, we tagged the system with two risks.
+
+Second, we examine a sign language digit recognition system [36], which we tagged with no risk. As the system appears to rely solely on hand gestures and focuses on a simple, limited task, the presence of all five risks appeared minimal.
+
+Fig. 7 shows the overall risk distribution for our corpus. Papers were tagged with at most four risks; 18 had no risks (median = 1). Normalized distributions for each researchers' region appear in Fig. 8. We show the global co-occurrence frequencies between application areas and risks in Fig. 9, and between data topics and risks in Fig. 10. Finally, we present the conditional probability of a risk being present, given the presence of another, in Fig. 11.
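+The conditional probabilities in Fig. 11 follow from the same tags: p(row | column) is the fraction of papers tagged with the column risk that are also tagged with the row risk. A minimal sketch, using hypothetical risk tags rather than our actual coding log:

```python
# Hypothetical per-paper risk tags (each paper carries a set of risks).
papers = [
    {"Privacy", "Discrimination", "Spoofing"},
    {"Privacy", "Spoofing"},
    {"Spoofing"},
    {"Privacy", "Discrimination"},
]

def conditional(row, column, papers):
    """p(row risk present | column risk present), from co-occurrence counts."""
    with_column = [p for p in papers if column in p]
    if not with_column:
        return None  # the column risk never occurs (cf. security breaches)
    return sum(row in p for p in with_column) / len(with_column)

# In this toy corpus, privacy is present in every paper tagged with
# discrimination, echoing the hierarchical relation discussed in Sect. 5.
print(conditional("Privacy", "Discrimination", papers))  # 1.0
```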
+
+| Application area | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Agri. & Fishing | 0 | 1 | 0 | 0 | 6 |
+| Assistive Tech. | 1 | 1 | 1 | 0 | 2 |
+| Healthcare | 0 | 0 | 0 | 0 | 0 |
+| Policy Plan. | 4 | 14 | 0 | 0 | 12 |
+| Safety | 1 | 7 | 0 | 0 | 8 |
+| Surveillance | 9 | 15 | 10 | 0 | 14 |
+| Transportation | 0 | 9 | 3 | 0 | 9 |
+
+Figure 9: Co-occurrence frequencies between application areas and risks in our corpus (n = 55).
+
+| Data topic | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Characters | 0 | 3 | 3 | 0 | 5 |
+| Human | 10 | 10 | 7 | 0 | 9 |
+| Medicine | 0 | 0 | 0 | 0 | 0 |
+| Outdoor scenes | 0 | 6 | 0 | 0 | 8 |
+| Products | 0 | 0 | 0 | 0 | 5 |
+| Satellite | 2 | 7 | 0 | 0 | 5 |
+
+Figure 10: Co-occurrence frequencies between data topics and risks in our corpus (n = 55).
+
+| Risk | Discrimination | Privacy | Psych. Harms | Security Breaches | Spoofing |
+| --- | --- | --- | --- | --- | --- |
+| Discrimination | 1 | 0.46 | 0.70 | - | 0.34 |
+| Privacy | 1 | 1 | 1 | - | 0.66 |
+| Psych. Harms | 0.58 | 0.38 | 1 | - | 0.28 |
+| Security breaches | 0 | 0 | 0 | - | 0 |
+| Spoofing | 0.92 | 0.81 | 0.90 | - | 1 |
+
+Figure 11: Conditional probability of a risk being present, given the presence of another risk, p(row | column), in our corpus (n = 55).
+
+§ 5 DISCUSSION
+
+Overall, we find that the majority of surveyed CV systems were designed to address specific needs of a community, ranging from food quality control to healthcare, traffic minimization, and improved security. Moreover, we observe that currently popular research areas in the West, such as autonomous vehicles and affective computing, have not achieved the same penetration level so far in the Global South.
+
+We present below our most notable findings and compare them with the findings of Skirpan and Yeh for the Global North [74]. We then present design considerations to minimize each risk, followed by a discussion on the limitations of our study.
+
+§ 5.1 KEY FINDINGS
+
+CV research for the Global South, by the Global South? 75% of the papers in our corpus were authored by researchers from the Global South. We find this proportion concerning, as it highlights an ever-present risk of colonization of methods and knowledge [75]. This is all the more important as both the interpretation of visual data and its associated applications are highly dependent on the context in which the data is situated [25]. Thus, we believe this proportion raises an important ethical concern around the interpretation of visual data and the politics tied to downstream CV applications [35].
+
+§ INTERESTS DIFFER BETWEEN REGIONAL RESEARCH GROUPS.
+
+Grouping the distributions of data topics and application areas by researchers' regions (Fig. 3 and 5) reveals significant differences in interests. We note, for example, that satellite imagery is used in 50% of all Global North systems but only 5% of Global South systems. We speculate that this type of data may be less utilized by Global South researchers as it is typically used to solve larger fundamental problems rather than to directly address specific needs. Conversely, we find Global North researchers to focus more on data that can be used to solve broader problems.
+
+These distinctions are also visible in application areas. Global North researchers appear to focus much more on applications with broad ramifications that require significant technical expertise, often employing modalities neither aligned with nor originating from local methods [75] (e.g. policy planning applications identifying at-risk areas through satellite imagery [1, 28]), than on applications focused on specific local problems, which require a thorough understanding of the community and less technical expertise (e.g. applications for transportation, agriculture, and surveillance). In contrast, the distribution for Global South researchers is much more uniform, with greater interest in those local problems. These disparities may also stem from a lack of deeper knowledge about the local problems of the Global South among the CV community in the Global North [35].
+
+Finally, we note that such differences in interests may also be caused by accessibility issues, e.g. data being made available only to certain researchers, the high cost of developing new systems compared to importing them, and limited computational power.
+
+Surveillance applications - promising, but not without risks. Globally, we find surveillance to be the second most popular application area behind policy planning, an area popular largely due to its encompassing definition. We attribute this popularity most notably to one factor: surveillance systems are a natural solution to partly satisfy the essential need for security, albeit at the cost of restricting liberties. This trade-off is closely tied to the government systems in place, which vary widely in the Global South [6]. Moreover, we note that the motivations for surveillance vary between cultures, ranging from care, nurture, and protection to control and oppression.
+
+We emphasize that researchers should be especially careful in designing surveillance systems, as we found that most systems studied here could easily be misused to increase control over people rather than improve security; surveillance was the application area most frequently tied to any risk, and the one with the most diverse set of risks (Fig. 9).
+
+§ THE IMPACT OF MACHINE LEARNING ON CV RISKS.
+
+73% of all CV systems analyzed used machine learning techniques to achieve their goals. This has several implications for the presence of certain risks in CV systems. First, it amplifies the risk of spoofing and adversarial inputs, the most frequent risk in our corpus, appearing in nearly all data topics and application areas (Fig. 9 and 10). Indeed, research has shown that neural networks are easily misled by adversarial inputs [27, 57, 61].
+
+Second, the use of machine learning amplifies the risk of discrimination, in particular the unplanned type, i.e. CV systems using sensitive attributes for decision making without being designed to do so. We found fewer examples of this risk in our corpus as discrimination occurs strictly against people; the risk was mainly found in surveillance and policy planning applications using human data.
+
+§ PRIVACY VIOLATIONS - THE COMMON DENOMINATOR.
+
+In our analysis, we found privacy violations to be the second most common risk, far ahead of discrimination and psychological harms. Specifically, we found the risk of privacy violations to be most present in surveillance and policy planning applications, both areas requiring in-depth knowledge of people.
+
+Additionally, we observed that certain risks were typically present only when privacy violations were too. In particular, the risk of privacy violations was always present when the risks of discrimination and psychological harms were found (Fig. 11). While our analysis does not robustly assess the relations between risks, this finding suggests an almost hierarchical relation between certain risks, beginning with privacy violations: e.g., for automated discrimination to occur, sensitive attributes of people must first be known (privacy violations). Similarly, the risk of psychological harms depends directly on the presence of other risks. We note, however, that the risk of spoofing appears to be tied more closely to certain algorithms than to other risks, although we still expect its frequency to rise as access to sensitive data (privacy violations) increases.
+
+§ PRIVACY VIOLATIONS FREQUENCY – A LIKELY UNDERESTIMATE.
+
+We note that research on privacy and technology [2, 6] has shown privacy to be regionally constructed, differing between cultural regions: e.g. people in the Global South often perceive privacy differently than people in Western liberal societies. Our interpretation of the general definition of privacy violations used in our analysis (Sect. 2) likely does not encompass every cultural definition of privacy in the Global South. As such, there may exist many unforeseen privacy-related risks tied to CV systems, leading us to believe the actual frequency of the risk is even higher than what we have found in our analysis.
+
+§ NOT ALL RISKS COULD BE RELIABLY PERCEIVED.
+
+We note two risks in our analysis that could not be reliably estimated: security breaches and psychological harms. In fact, we were unable to find a single case of the former. We attribute this primarily to the method used for analysis: detecting the risk would require research papers to present an in-depth description of a system's implementation, highlighting certain vulnerabilities. Similarly, while we found some systems with risks of psychological harms, we emphasize that we have likely underestimated its actual frequency: detection requires an in-depth understanding of both the system and its subsequent deployments. Moreover, compared to the other four risks, we re-emphasize that the risk of psychological harms results from longer-term interactions and may only become discernible as risk-driven harms begin to occur.
+
+§ FREQUENCY VERSUS SEVERITY.
+
+We first stress that the absence of certain risks does not indicate their non-existence. On the contrary, it tells us to be more wary of these risks, as we may have little means to diagnose them in this context. Additionally, we note that the risk frequency estimated here is a different concept from the frequency of harm occurrences tied to the aforementioned risk. For example, while spoofing was found to be the most common risk in our corpus, harm occurrences linked to this risk require the existence of a knowledgeable malicious actor attacking the system, which may be much more uncommon.
+
+Furthermore, we emphasize that each risk can have severe impacts, regardless of their frequency. For example, while the risk of security breaches may be infrequent, a single occurrence could lead to disastrous consequences, from large-scale information leaks to casualties. Thus, while this study may highlight common pitfalls tied to the design of recent CV systems, we urge designers to consider the potential impacts caused by all risks equally.
+
+§ 5.2 COMPARISON WITH GLOBAL NORTH RISK ANALYSIS
+
+§ 5.2.1 RISK PROBABILITY
+
+In their work, Skirpan and Yeh [74] identified discrimination, security breaches and psychological harms as the most probable risks. In the present study, we instead identify privacy violations and spoofing as the most probable risks in the Global South. We attribute these differences to two reasons.
+
+First, each study examined a different body of work. Skirpan and Yeh searched through both research literature and current news, leading to a more diverse set of systems and problems. We, on the other hand, limited our search to research papers focusing specifically on the Global South to obtain a more grounded understanding. Thus, risks that cannot be reliably estimated from research papers (e.g. security breaches) did not appear frequently in our study. However, this alone is insufficient to suggest that the risk space is different in the Global South: the absence of a risk in the research literature does not imply its non-existence in the real world.
+
+Second, we believe these differences to be primarily related to the availability and acceptance of CV systems in both regions. CV systems in the West, which have become nearly ubiquitous, are often designed to be considerate of the history and politics of the region. In the Global South, however, fewer systems are designed and, in most cases, they involve transferring existing Northern technologies to the Global South. The resulting systems thus tend to be less considerate of the culture and politics of the Global South [35], limiting, among other things, their level of adoption by the community. Moreover, we re-emphasize the pseudo-hierarchical nature of the risks studied here: privacy violations appear to serve as a springboard for the other risks to arise. Thus, in areas where CV technology is relatively recent, we should expect privacy violations to be more frequent than other risks. This, combined with the extensive use of machine learning techniques, can also explain why the risk of spoofing and adversarial inputs was found to be more frequent than the three risks highlighted by Skirpan and Yeh.
+
+§ 5.2.2 RISK PERCEPTION
+
+Skirpan and Yeh [74] also evaluated risks in terms of severity and uncertainty, leading them to identify discrimination and security breaches as the potentially most important risks. While we have not formally performed this type of analysis in our work, we speculate that the problem space in the Global South is different.
+
+We believe that at this time, the most important risk is privacy violations, in part due to its gateway property to other risks and the current availability of CV systems in the Global South. We note that privacy violations can enable not only automated discrimination in CV systems, but also broader discrimination and security risks. For example, by surveilling and collecting specific sensitive attributes of its people, a state could identify and arrest, displace, or mistreat certain marginal groups [53].
+
+Additionally, our perception aligns with a growing body of HCI4D/ICTD research on the more general problem of privacy and technology [2, 4-6, 32, 73], which shows privacy to be regionally constructed and highlights a general lack of understanding among technology designers. This in turn suggests there may be important privacy (and security) concerns, notably at the gender and community levels, that are unforeseen in the Global North.
+
+§ 5.3 DESIGN CONSIDERATIONS FOR RISK MINIMIZATION
+
+As a general consideration to minimize risks in CV systems, we urge researchers to ensure their designs are appropriate to the context of the target area. Designing such systems requires expertise in technical domains such as CV and design, but also in the humanities and social sciences to identify potential risks for a given community. As such, we recommend involving more local researchers in the design process. Beyond risk minimization, the participation of local people is critical to ensuring that the goals of the system are aligned with the true needs of the community. We refer designers to Irani et al.'s work on postcolonial computing [35] for more in-depth guidance on this design process. In the rest of this section, we provide design considerations for each ethical risk considered in our study.
+
+Privacy violations. We briefly highlight two common privacy-related issues in modern CV systems. First, decision-making processes have become increasingly obfuscated: decisions could unknowingly be based on private attributes rather than the ones expected by the designer and consented to by the user. Second, non-users in shared spaces may unwittingly be treated as users. We refer to a growing body of work on privacy-preserving CV for concrete solutions [16, 69, 71, 76], such as Ren et al.'s face anonymizer for action detection [69].
+
+Discrimination. Discrimination in smart CV systems and general AI systems has been extensively researched in recent years [15, 83]. The use of biased, imbalanced, or mis-adapted data and training schemes is typically the cause of this risk. We present two strategies to address these issues. First, datasets can be adapted and balanced by collecting new context-appropriate data, e.g. ensuring that every gender and ethnicity is equally represented in a human dataset [19]. Second, learning schemes can be modified to achieve certain definitions of fairness [10, 52, 82]. We refer to Barocas et al.'s book [13] for a more in-depth analysis of general fairness-based solutions to minimize discrimination.
+
+Spoofing and adversarial inputs. In general, this risk can be minimized by improving system robustness. Solutions include accounting for external conditions that could affect performance (e.g. variable lighting), using sets of robust discriminative features in decision making, and improving measurement quality. However, not all systems are equally prone to the same types of attacks. For example, systems involving neural networks are particularly vulnerable to precisely crafted adversarial inputs [27, 57, 61]. In this context, we refer to recent advances in the field for specific solutions [8, 9, 29, 46].
+
+Security breaches. Our considerations here are limited as the minimization of this risk depends heavily on the type of system. We strongly encourage designers to engage with experts in this field to develop robust context-specific solutions. Nevertheless, general solutions include avoiding system deployments in areas with easily accessible sensitive information and limiting access to the system.
+
+Psychological harms. We urge designers to engage with the communities and involve local experts to determine what constitutes an acceptable and helpful CV system, and to follow these guidelines during design and deployment. We again emphasize how this risk is closely tied to all four others: any occurrence of harm related to the other risks will inevitably lead people to modify their lifestyle. Thus, the minimization of this risk depends directly on the minimization of the other four risks.
+
+### 5.4 Limitations
+
+Data collection. Our analysis does not take into account every use of CV in the Global South. Indeed, we have only considered systems that were published as research papers in specific digital libraries and tagged as relevant during our search. We thus present only a partial view of the real practical uses of CV in the Global South, i.e. systems encountered by the public in their daily lives, and of the subsequent ethical risks.
+
+Furthermore, we note that our relevance criterion can be interpreted and applied differently. For example, "use of visual data related to the area" could refer to the use of a publicly-available dataset representative of the target location. However, it could also refer to any imagery that appears to be related to the area. Due to the scarcity of such datasets, we opted for the latter definition, which resulted in a more relaxed definition of relevancy.
+
+Finally, we acknowledge that the collected characteristics are for the most part surface-level, partly due to the eclecticism of the corpus: precise deployment areas (e.g. office spaces), for instance, could not be collected uniformly as deployments were not always discussed.
+
+Risk evaluation. First, we were unable to reliably assess the presence of the risks of security breaches and psychological harms in our corpus. While we have attempted to mitigate this issue by estimating the risks' frequency based on our understanding of the systems, our evaluation likely does not represent their actual presence in the Global South well.
+
+Second, we note that our corpus was heavily skewed towards certain Global South countries: 69% of the systems were designed for India, Bangladesh and Pakistan. We thus acknowledge that our results may not represent equally well, and may not generalize to, every community in the Global South.
+
+Finally, we note that our risk frequency and perception analyses directly reflect our views of the problem, namely as designers and researchers from the Global North, which are themselves inspired by previous work [74]. Nevertheless, we believe that our overview of the typical CV-related ethical risks can serve as a basis to assist researchers in designing safe context-appropriate CV systems for the Global South.
+
+## 6 CONCLUSION
+
+We have presented an overview of applications and ethical risks tied to recent advances in CV research for the Global South. Our results indicate that surveillance and policy planning are the most explored application areas. We found research interests to differ notably between regions, with researchers from the Global North focusing more on broader problems requiring complex technical solutions. Risk-wise, privacy violations appear to be the most likely and most severe risk to arise in CV systems. This last result differs from those of Skirpan and Yeh [74], which were obtained by analyzing both research literature and news events with a focus on the Global North. Taken together, our findings suggest that the actual uses of CV and the importance of its associated ethical risks are region-specific, depending heavily on a community's needs, norms, culture and resources.
+
+As future work, this study can be extended to non-research CV applications, e.g. commercial systems, to gain a more thorough understanding of the situation in the Global South. Additionally, similarly to what Skirpan and Yeh proposed in their work [74], we believe the next major step in improving our understanding of these issues is to survey direct and indirect users of CV systems and study their interactions in specific areas of the Global South. Only then can we gain a more precise and representative understanding of the importance and the frequency of each of those ethical risks.
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..78bf2a37137130f2a174a5aff026e1136b8f1e5d
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,385 @@
+# Bend or PIN: Studying Bend Password Authentication with People with Vision Impairment
+
+Daniella Briotto Faustino*
+
+Carleton University
+
+Sara Nabil†
+
+Carleton University
+
+Audrey Girouard‡
+
+Carleton University
+
+## Abstract
+
+People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more 'observable'. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, limited work has evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants who live with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer greater opportunity over PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.
+
+Index Terms: Human-centered computing–Human-computer interaction (HCI); Human-centered computing–Haptic devices; Human-centered computing–User studies
+
+## 1 INTRODUCTION
+
+While accessibility features like screen magnifiers and screen readers make devices such as flat touchscreen smartphones usable for people with vision impairment, many challenges remain. Typing on smartphones, for example, is complex for users who are blind or have low vision [4, 30], often requiring them to use external physical keyboards [6, 47]. Also, screen readers read everything aloud to users, jeopardizing their privacy and requiring them to use earphones in public spaces [3, 7, 47]. Additionally, accessibility features have the drawback of making vision-impaired users vulnerable to shoulder surfing and aural eavesdropping when entering PINs [23]. Shoulder surfing can occur because screen magnifiers zoom in on the keyboard, making password entries more visible, while aural eavesdropping is possible because screen readers speak everything typed aloud, even password entries. Therefore, almost 70% of the vision-impaired are concerned about typing passwords in public spaces and prefer biometric user authentication, such as fingerprints [15]. However, biometrics may not work under environmental changes (e.g. moist hands), and thus merely act as a re-authenticator for other authentication methods, such as PINs or patterns [5]. The problem is twofold: patterns are not accessible for people with vision impairment [8, 28], and PINs are regarded by them as insecure for unlocking mobile devices [8, 15], easy to guess, inaccessible or inconvenient [7, 15, 18].
+
+To give vision-impaired users a more accessible and secure alternative to patterns and PINs, previous work has suggested the use of deformable user interactions, which support tactile interaction [36, 42, 43]. More specifically, they proposed the use of bend passwords [33], a user authentication method in which a sequence of bend gestures works as a password. However, no previous studies have investigated the usability of bend passwords for the vision-impaired.
+
+
+
+Figure 1: BendyPass prototype, where users can enter bend and fold gestures to compose a bend password.
+
+In this paper, we present our exploration-led research of bend passwords. For our study, we developed BendyPass, a flexible device to capture bend passwords that was proposed in a previous demonstration [14], and extended its capabilities into an interactive device (Figure 1). We developed it through an iterative process with deformable interface researchers and vision-impairment experts. Then, we conducted a within-subject study with vision-impaired participants to compare bend passwords and PINs, answering these research questions:
+
+- Q1. Would deformable interaction outperform touch interaction for people with vision impairment?
+
+- Q2. What can we learn (for design) about the ease of use, memorability and learnability of bend passwords versus PINs for people with vision impairment?
+
+Our paper builds on two main publications [14, 33] in three key aspects. We created a refined prototype (inspired by a demonstration [14]) that is more efficient and robust, with additional features. We targeted a specific user population, people with vision impairment, with whom no previous work has actually run studies. We evaluated both bend and PIN authentication concurrently to quantitatively compare them and explore the usability of each in a rigorous and thorough user study, breaking new ground.
+
+As such, our study is the first to explore the potential and limitations of bend passwords with people with vision impairments. Through the analysis of the data collected from 16 blind and low vision participants, our three key contributions are: (1) Refining the design and fabrication of deformable prototypes for password input; (2) Presenting insights on the usability of bend passwords compared to PINs; (3) Exploring the design opportunities and challenges of bend passwords.
+
+---
+
+*e-mail: daniella.briottofaustino@carleton.ca
+
+†e-mail: sara.nabil@carleton.ca
+
+‡e-mail: audrey.girouard@carleton.ca
+
+---
+
+## 2 RELATED WORK
+
+Researchers have explored the use of technology for assisting people with vision impairment [4, 17, 23]. While most work relied on touch interaction on common smartphones, some researchers explored the opportunity of authenticating through physical deformation rather than touchscreens for people living with blindness or vision impairment. In this section, we review the use of technology, authentication methods and deformable interfaces with respect to these needs.
+
+### 2.1 Technology and Vision Impairment
+
+Smartphones have become widely adopted not only by sighted individuals, but also by individuals with vision impairments, thanks to the rise of accessibility features and assistive applications in mainstream devices [17, 23]. The most common accessibility features are screen readers and screen magnifiers. With the increased smartphone adoption, the quality of life among the vision-impaired has improved [4], because they use their smartphones to execute tasks previously achievable only with multiple assistive technology devices. Smartphones work as assistive tool aggregators, giving users access to apps to identify bills [11], street names [12], colours [16], objects and faces [35], and to read printed text [1].
+
+Nonetheless, using smartphones has its own challenges. Entering data on smartphones can be difficult for the vision-impaired [4, 30], who might need to use external physical keyboards [6, 47]. Also, screen readers put users at risk of others listening to their private information, requiring the use of earphones in public spaces [3, 7, 47]. Braille is sometimes considered a solution, because potential attackers probably do not know how it works. However, few vision-impaired people know Braille: among vision-impaired Americans and Canadians, fewer than 10% read Braille [37, 38], and among the British, less than 1% of people with vision impairment can read Braille [41].
+
+### 2.2 User Authentication Methods
+
+User authentication methods are the way users prove their identity, gain access to their devices and accounts [33], and protect their personal information [27]. User authentication methods can be categorized as: knowledge-based (something the user knows, such as PINs and passwords), token-based (something the user has, such as key fobs), or biometric-based (something the user is, such as fingerprints) [27]. Generally, alphanumeric passwords are the standard for user authentication on websites [13]. On the other hand, PINs are the standard for unlocking smartphones [45]. For sighted Americans, for example, the PIN is the most commonly used method to unlock smartphones [40].
+
+A 2018 online survey with 325 vision-impaired people found that 75% of the smartphone users had an authentication method to unlock their devices [15], a higher percentage than those found in research from 2012 [7] and 2015 [18], where 0% and 33% used an authentication method, respectively. The 2018 survey also found that their preferred user authentication method on smartphones is the fingerprint, which they also consider the most secure authentication method [15]. On the other hand, participants considered PINs the least secure authentication method, and only 16% use them as their main method [15]. The perceived security of PINs is impacted by the fact that typing PINs while using embedded screen readers makes people with vision impairment more susceptible to aural eavesdropping [23]. Similarly, using screen magnifiers while entering passwords increases the susceptibility to visual eavesdropping or shoulder surfing [23]. However, most vision-impaired users seem unaware that even if a smartphone has a biometric method set up, a knowledge-based authentication method is still their main barrier against unauthorized access [15], as biometrics act as mere re-authenticators for knowledge-based methods [5].
+
+Vision-impaired users do not want to deal with the complexity of user authentication methods [15], but with the large volume of personal data generally stored on smartphones [27], it is essential to protect users' privacy [23]. Researchers have explored user authentication alternatives for the vision-impaired to reduce observer threats, including a password management system [10], an accessible pattern scheme for touchscreens [8], and novel user authentication methods based on the user's gait [23] and multi-finger taps on the touchscreen [7]. However, no previous study has rigorously explored the use of deformation gestures for user authentication with people with vision impairment.
+
+### 2.3 Deformable Bendable Interfaces
+
+Deformable devices allow users to physically manipulate their shapes as a form of input by bending, twisting or deforming them [2, 20]. Such interfaces are often referred to as Deformable User Interfaces (DUIs) [24] - that is, the "physical manual deformation of a display to form a curvature for the purpose of triggering a software action", including a bend gesture [26]. One of the first interfaces designed to explore physical input interaction through bending was ShapeTape [9]. Similarly, Gummi [42] was made to afford bending its physical form using flexible electronic components, including sensors able to measure deformation. Companies (such as Samsung, LG and Huawei) are currently developing foldable flexible devices, and we expect deformable devices to include bend gestures as input in the near future.
+
+Considering how blind users solely rely on non-visual feedback (e.g., tactile cues and audio), and that deformation is a tactile form of input, Ernst et al. [20] proposed deformation could be beneficial for the blind. They developed a deformable device that accepted bend gestures to control a smartphone screen reader. In their preliminary evaluation with vision-impaired participants, Ernst and Girouard [19] found bend gestures might be "easier out of the box than touch [interactions]" and could improve the accessibility of smartphones.
+
+Another possible application of bend gestures is bend passwords, first proposed by Maqsood et al. [33] as a sequence of bend gestures to authenticate the user. In a user study with sighted participants comparing bend passwords to PINs, researchers found bend passwords as easy to memorize as PINs, while potentially allowing users to rely on muscle memory to recall them [33]. Additionally, bend passwords might be harder to observe than PINs, making them potentially safer against shoulder surfing attacks [32]. Our work expands and evaluates a prototype published as a demonstration [14]. Given the tactile nature of this method, its promising opportunity, and the prospect of flexible smartphones becoming available soon, we want to explore the usability of bend passwords for people who are vision-impaired or blind.
+
+## 3 PROTOTYPE
+
+This section describes the design process resulting in our final prototype, which we later evaluate with people with vision impairments. The related previous work, especially [14, 33], was the launching point for this paper, helping to: 1) conceptualize and demonstrate the technical issues of creating an initial prototype [14]; and 2) test this interaction method with a general population (i.e. sighted people) [33].
+
+### 3.1 Design Process
+
+We started by consulting a blind expert who teaches technology to people with vision impairment in a local organization of the blind. The expert shared her concerns about vision-impaired users deciding not to use authentication methods on their smartphones due to perceived inaccessibility or complexity, and confirmed that bend passwords could have potential for the vision-impaired. Then, we developed an initial version of the prototype and presented it to a group of 10 vision-impaired people, describing bend passwords as an alternative for authenticating. Most of them indicated interest in a tactile user authentication method.
+
+Our process consisted of developing concurrent alternative prototypes, based on previous research and on feedback about the demonstration prototype from deformable interface researchers, then presenting them to the two blind experts, who participated in our iterative design process. We consulted them to define the best size, material stiffness, groove design, sensor placement and set of gestures, to create a prototype well suited to these needs. We iterated through a dozen preliminary prototypes and a total of four meetings with the blind experts before choosing our final design.
+
+### 3.2 Prior Work Inspiration
+
+Previous research showed that users prefer a deformable device the size of a smartphone rather than a tablet, to minimize fatigue and maximize comfort [29] and to minimize the need to reposition their hands to perform bend gestures [19, 31]. Users perform most deformations while holding a rectangular device horizontally [28], and they select simple bend gestures that are less physically demanding [26, 28]. Additionally, forcing the user to change their grip to perform gestures not only causes discomfort [22, 28], but also slows task completion [2, 19], raises the likelihood of false activation [22], and increases the risk of dropping the device [22]. Hence, we decided to design a rectangular device similar in size to a medium smartphone and, considering the importance of allowing users to access all corners without re-gripping, designed our interface for landscape use.
+
+Regarding device deformability, previous research showed that higher stiffness requires more physical effort [25], which negatively influences users' preference and performance when bending [24]. Finally, researchers recommend that bendable interfaces help users identify deformable areas by having grooves on the bendable points [19, 22] and by providing haptic feedback [43]. Thus, we opted for malleable silicone for our device's bendable areas, adding grooves and haptic feedback.
+
+### 3.3 Final Prototype Design
+
+This prototype's purpose is to put this opportunity in the hands of users and explore its potential in comparison to touchscreens. Our final prototype, BendyPass (Figure 1), is approximately the size of an iPod Touch (11.5 × 6 × 1 cm), is made of silicone, and has a single push-button that allows the user either to confirm the password or to delete the previous gesture entered. The button also indicates the device's orientation, helping users recognize both which side should be facing them and which side is left or right.
+
+Inspired by a recent demo [14], our prototype is composed of two silicone layers that enclose all electronic components. We 3D printed a mold including a vertical groove in its centre, four corner grooves that create triangular areas around each of its corners, and a lowered part to insert its push-button (Figure 2, top). The grooves extend from side to side to facilitate bend gestures, and they are angled to prevent users' fingers from being pinched when bending the device. To fabricate our prototype, we used two different types of silicone, making the bendable areas more flexible than the central area (Alumilite A30 and A80, respectively). This emphasizes the bendable areas while protecting the components in the center.
+
+The electronic components included five 1" Flexpoint bidirectional sensors to recognize bend gestures and a vibration motor to provide haptic feedback when an action is recognized. BendyPass components are positioned as shown in Figure 2 (bottom): the vibration motor is on the left side and the flex sensors are in the centre and in the four corners. The components are connected to an Arduino Leonardo microcontroller, itself connected to a MacBook Pro laptop. Gestures applied to our prototype become letters on the computer, while long button presses (over 1 s) activate the Enter key and short button presses activate the Backspace key. The letter mapping is invisible to the user, who only needs to perform bend gestures and operate the button.
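The event-to-key mapping just described can be sketched as follows; the gesture names, letter assignments and 1-second threshold in this sketch are illustrative assumptions, not the exact values used in BendyPass:

```python
# Hypothetical sketch of the host-side mapping: each recognized bend
# gesture arrives as a letter, while button presses map to Enter (long
# press) or Backspace (short press). Gesture names, letter assignment
# and the 1 s cutoff are illustrative assumptions.

LONG_PRESS_THRESHOLD_S = 1.0

GESTURE_LETTERS = {  # assumed assignment; the real mapping is hidden from users
    "top_left_up": "a", "top_left_down": "b",
    "fold_up": "i", "fold_down": "j",
}

def map_event(event: dict) -> str:
    """Translate a device event into the key sent to the password website."""
    if event["type"] == "gesture":
        return GESTURE_LETTERS[event["name"]]
    # Button event: duration decides between confirm and delete.
    if event["duration_s"] >= LONG_PRESS_THRESHOLD_S:
        return "ENTER"      # confirm password
    return "BACKSPACE"      # delete previous gesture

assert map_event({"type": "gesture", "name": "fold_up"}) == "i"
assert map_event({"type": "button", "duration_s": 1.4}) == "ENTER"
assert map_event({"type": "button", "duration_s": 0.2}) == "BACKSPACE"
```

Keeping the mapping on the host side means the device firmware only needs to report raw gestures and button durations.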
+
+
+
+Figure 2: BendyPass 3D mold design (top view) and its internal components housed in a thin foam layer (pink).
+
+### 3.4 Bend Passwords on BendyPass
+
+BendyPass recognizes 10 simple bend gestures (Figure 3): bending each corner upwards or downwards (8 gestures) and folding the device in half upwards or downwards (2 gestures). Our prototype was programmed to recognize fewer gestures than Maqsood et al.'s [33], because our two blind experts considered the double gestures (gestures performed on two corners at the same time) excessively complex. With 10 possible gestures, BendyPass offers the same number of options per entry as a PIN keypad, so a six-gesture bend password has the same strength against brute-force guessing as a 6-digit PIN.
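The brute-force equivalence can be checked with a one-line calculation (a sketch; the figures follow directly from the 10-gesture and 10-digit alphabets stated above):

```python
# Quick check of the password-space claim: 10 gestures per position
# gives the same space as 10 digits per position.

def password_space(symbols: int, length: int) -> int:
    """Passwords of `length` positions over `symbols` options (repeats allowed)."""
    return symbols ** length

bend = password_space(10, 6)   # six-gesture bend password
pin = password_space(10, 6)    # six-digit PIN
assert bend == pin == 1_000_000
```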
+
+Aside from the haptic feedback, BendyPass also provides optional audio feedback verbalizing the name of the gesture entered. Prior work indicated that effective communication of gestures requires information about location and direction [44]. Thus, we named our gestures using both (e.g. "top left corner up"). The exceptions are folding gestures, whose location-based names proved confusing in preliminary analysis. For example, "centre up" could trick users into moving the sides up, while the centre would move down. Thus, for these we used the name of the gesture (fold) in addition to the direction.
+
+## 4 USER STUDY
+
+### 4.1 Apparatus
+
+We designed a user study to compare two knowledge-based user authentication methods: bend passwords and PINs. Thus, in addition to our prototype BendyPass, we used an iPhone 6S for PIN entry, because most people with vision impairment use iPhones [15], thanks to their native screen magnifier and screen reader functionalities. We selected a keypad from a remote keyboard app [52] to simulate a PIN entry screen while transmitting the typed keys to a computer, where we could save them. We chose it because it worked relatively well with the screen reader VoiceOver and the Standard typing style. In this accessible typing mode, users explore the screen by either swiping or single-tapping it, and they trigger a key either by lifting their finger and double-tapping the screen or by keeping their finger on the screen and tapping it with another finger.
+
+
+
+Figure 3: Set of 10 bend gestures available on BendyPass.
+
+
+
+Figure 4: The structure of our user study.
+
+We also developed a PHP website to receive and verify passwords from both BendyPass and the smartphone. The website was connected to a MySQL database and hosted using XAMPP. Our database saves participants' IDs, password entries, and the times each input started and ended. Our website provides audio cues (such as "Create your password using the gestures learned" or "Wrong password, please try again") to help users navigate the password creation process. It also provides optional audio feedback for bend gestures or button presses. For the audio, we recorded the screen reader VoiceOver reading all messages on a MacBook Pro, using the default speed of 45. We used JavaScript to assign actions to their respective audio snippets.
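The per-attempt logging described above can be sketched as follows; this is a minimal Python/SQLite stand-in for the PHP/MySQL backend, and the table layout and field names are our assumptions:

```python
# Minimal sketch (Python + in-memory SQLite as a stand-in for PHP/MySQL)
# of what the study website stores per attempt: participant ID, the
# password entered, start/end times and correctness. The schema here
# is an assumption, not the authors' actual table.

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE entries (
    participant_id TEXT,
    entered        TEXT,
    started_at     REAL,
    ended_at       REAL,
    correct        INTEGER)""")

def log_attempt(participant_id, entered, expected, started_at, ended_at):
    """Record one password entry and return whether it matched."""
    correct = int(entered == expected)
    conn.execute("INSERT INTO entries VALUES (?, ?, ?, ?, ?)",
                 (participant_id, entered, started_at, ended_at, correct))
    return correct

t0 = time.time()
result = log_attempt("P01", "abijba", "abijba", t0, t0 + 12.3)
```

Storing both timestamps allows entry duration to be computed later during analysis, as the study does.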
+
+### 4.2 Methodology
+
+We structured our user study to be composed of two 60-minute sessions (Figure 4), following the main tasks proposed by relevant literature [33]. The first session focused on password learnability, while the second session, about a week after the first, focused on password memorability.
+
+We started the first session by interviewing participants, asking them whether they had already tried a flexible device, followed by questions on their demographics and their perception and use of user authentication. Then, we asked participants to create both 1) a bend password on BendyPass and 2) a new PIN on the smartphone, each with at least 6 gestures/digits. After creating the passwords, participants confirmed them 3 times before rehearsing them by completing 5 successful logins. Whenever participants expressed uncertainty about their passwords while confirming or rehearsing them, we asked if they wanted to create a new one and allowed them to do so. After confirmation, participants paused to answer questions about the ease of creating and remembering their password, and its perceived overall security, specifically against shoulder surfing. Participants could create a new password if they forgot theirs. Participants who chose to create a new password during rehearsal had to immediately confirm and rehearse it. The session ended with questions allowing participants to reflect on their experience, the likelihood of using bend passwords, and further insights about what worked well and what did not.
+
+After a week, we started the second session by asking participants about the ease of remembering their passwords, whether they used any strategy to memorize their password during the week, and which accessibility features and typing styles they use on their own smartphones. Then, we gave participants as many attempts as they needed to complete 5 successful logins using each of the two passwords created in the first session. Finally, we finished the session by discussing their final thoughts and reflections regarding their likelihood of using bend passwords over PINs (in general and on flexible devices), their overall experience using BendyPass, potential user groups for bend passwords, and any proposed enhancements.
+
+We presented the two devices to participants in counterbalanced order, both among participants and between sessions with the same participant. In both sessions, after interacting with each device, participants answered device-specific questions. Besides learnability and memorability, we also evaluated the other quality components of usability [39], measuring efficiency and satisfaction in session 2, and the number of errors in both sessions.
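One simple way to realize such counterbalancing can be sketched as follows; the assignment rule shown here is our assumption, since the exact procedure is not detailed:

```python
# Sketch of a simple counterbalancing rule: alternate the starting
# device across participants and reverse the order for the same
# participant in session 2. This rule is an assumption; the paper does
# not describe the exact assignment procedure.

DEVICES = ("BendyPass", "PIN")

def session_orders(participant_index: int):
    """Return (session 1 order, session 2 order) for one participant."""
    first = list(DEVICES) if participant_index % 2 == 0 else list(reversed(DEVICES))
    return first, list(reversed(first))

s1, s2 = session_orders(0)
assert s1 == ["BendyPass", "PIN"] and s2 == ["PIN", "BendyPass"]
```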
+
+In both sessions, participants verbally answered our questions about each device, regarding the ease of creating passwords, the perceived security of both password schemes, and their opinions about bend passwords (Figure 5). All questions used 10-point Likert scales, where 1 was the least favourable.
+
+### 4.3 Data Analysis
+
+We transcribed participants' comments and answers to open-ended questions using InqScribe (inqscribe.com). We performed qualitative analysis on open-ended answers and quantitative analysis on both multiple-choice and coded answers using RStudio (rstudio.com). Quantitative analysis included the analysis of 732 log records (participant × step × trial) using Wilcoxon signed-rank tests (Z) for numerical data and chi-square tests (χ²) of independence between variables for categorical data; we focus on reporting significant results. Time measured included time thinking about the passwords and time entering them. We used Grounded Theory [18] to code answers to interview questions. Whenever necessary, we coded answers under more than one theme, but we did not code unclear answers.
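For readers unfamiliar with the paired, non-parametric test used here, the following sketch computes the Wilcoxon signed-rank statistic on made-up timing data (not the study's data); a real analysis would use R or `scipy.stats.wilcoxon`, which also report the p-value:

```python
# Pure-Python sketch of the Wilcoxon signed-rank statistic used for
# paired comparisons. The timing values below are invented for
# illustration only.

def signed_rank_statistic(x, y):
    """Smaller of the positive- and negative-rank sums of the paired
    differences (zero differences dropped; rank ties handled naively)."""
    diffs = sorted((a - b for a, b in zip(x, y) if a != b), key=abs)
    pos = sum(rank for rank, d in enumerate(diffs, start=1) if d > 0)
    neg = sum(rank for rank, d in enumerate(diffs, start=1) if d < 0)
    return min(pos, neg)

# Example: hypothetical paired login times (s) for 8 participants.
bendypass_times = [32, 41, 28, 55, 39, 47, 30, 44]
pin_times       = [45, 50, 35, 60, 42, 58, 38, 49]
assert signed_rank_statistic(bendypass_times, pin_times) == 0  # every pair faster on one device
```

A statistic of 0 means every participant was faster on one device, the strongest possible paired result for this sample size.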
+
+### 4.4 Participants
+
+We recruited participants by snowball sampling, mainly through a local council of the blind and Facebook groups. Our recruitment criteria required participants to be at least 18 years old and either blind or with low vision. We held sessions either at a lab or at an office of the local council of the blind. Among our 16 participants, 10 declared they were blind, 5 declared they had low vision, and one reported having another condition. For our analysis, we grouped the latter with the blind participants because he could not see the smartphone screen. This resulted in 11 blind participants (68.7%) and 5 with low vision (31.2%), a distribution similar to previous work [15]. Most participants self-declared as male (N=10, 62.5%). Ages ranged from 22 to 76 years old (M=54.31, SD=15.38). Three participants also had another impairment, related to hearing loss (N=2), attention (N=1) or the psycho-motor system (N=1), according to the World Health Organization classification [46].
+
+
+
+Figure 5: Distribution of Likert scale responses; the first three pairs are from session 1, the last three are from session 2. SS stands for Shoulder Surfing.
+
+Almost all participants said they use assistive apps on their smartphones (N=15, 93.8%); only 5 participants said they use a Braille display, a smaller proportion than in related work [15] (31.3% vs 42.5%). Three participants had experience in studies on deformable flexible devices, though never for user authentication. Most answers to the interview matched results from the survey in prior work [15], suggesting our participants represent well the group of people with vision impairment who have access to the internet and mobile devices. All blind and most low-vision participants interacted with the smartphone using a screen reader. Only one participant used a screen magnifier. All participants returned for session 2 about a week later (M=7.28 days, Md=7, SD=0.87).
+
+## 5 FINDINGS
+
+We discuss each of our main findings, highlighting the potential and design implications learnt from studying bend passwords in use with users with vision impairment. Our reported findings combine the results from observing usability, analysing password log files, and questionnaire responses.
+
+### 5.1 Learnability
+
+#### 5.1.1 Ease of Creating
+
+Before creating their passwords, participants trained for a longer period to use bend gestures (M=165s, Md=143.5s, SD=63.94s) than to use the keypad app (M=90s, Md=91.5s, SD=30.92s) (Z=-2.97, p<.005). After training, participants took slightly longer to create their first bend password (M=59.6s) than their first PIN (M=48.8s), but the difference was not statistically significant (n.s.), as shown in Figure 6. Moreover, participants' perceived ease of creating bend passwords was not significantly different from that of creating PINs (Table 1). We also found no significant differences regarding learnability between participants who were blind and those who had low vision (n.s.).
+
+
+
+Figure 6: Creation and Login time in the first trial, in seconds. Differences are not statistically significant.
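The paired time comparisons above, and the Z statistics reported throughout this section, are consistent with a Wilcoxon signed-rank test. As a hedged illustration only, the sketch below implements the normal-approximation form of that test; the per-participant times are hypothetical placeholders, not the study's measurements.

```python
import math

def wilcoxon_z(x, y):
    """Paired Wilcoxon signed-rank test (normal approximation).

    Returns (z, one-sided p). Zero differences are dropped; ranks of
    tied absolute differences are averaged, as in the standard test.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank the absolute differences (1-based), averaging over ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2)) / 2  # one-sided p from |z|
    return z, p

# Hypothetical training times in seconds for 16 participants (NOT study data)
bend = [165, 140, 200, 150, 130, 180, 170, 160, 155, 145, 190, 175, 120, 210, 165, 150]
pin = [90, 85, 110, 95, 80, 100, 92, 88, 91, 87, 105, 98, 75, 115, 93, 89]
z, p = wilcoxon_z(pin, bend)  # negative z: PIN training times are shorter
print(f"Z={z:.2f}, p={p:.4f}")
```

With every hypothetical PIN time below the bend time, the positive-rank sum is zero and the test is strongly significant, mirroring the direction of the reported training-time result.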
+
+#### 5.1.2 Password Creation Strategies
+
+We observed how participants created passwords and asked which strategies they used to create them (Table 2). More than half of our participants used some sort of pattern (N=9), such as mirroring gestures from one side of the device to the other (N=5). Our results are similar to those from previous work [33] with sighted participants, where patterns were the main strategy used to create passwords (44%). Although almost half of our participants said they associated their PINs with series of numbers they were familiar with (N=7), no participant reported using association as a strategy to create their bend passwords.
+
+### 5.2 Memorability
+
+#### 5.2.1 Re-thinking Passwords
+
+Although participants created their first bend password without difficulty, most did not remember it. 11 participants had to re-create their bend passwords, while only 2 had to re-create their PINs, resulting in a significant difference between the number of trials to create memorable bend passwords (M=1.94, Md=2, SD=1) and PINs (M=1.12, Md=1, SD=0.34) (Z=-2.36, p<.01). Of the 11 participants who forgot their initial bend password, 9 forgot it at the confirmation step (81.8%) and 2 at the rehearsal step (18.2%). We attribute the initial difficulty in memorizing bend passwords to the lack of muscle memory for them, which required participants to create new memorization strategies. As P5 said, "It's fun but you have to suspend anything you know about passwords. You have to think in a new way." As participants had to confirm passwords 3 times after creating them, P13 said that having to go through three confirmations probably makes passwords easier to remember, "engraving them into memory", and suggested that this should be required in real life.
+
+#### 5.2.2 Ease of Recall
+
+In session 2, although 81.3% (N=13) of participants remembered their bend passwords, 1 participant was not able to enter it due to prototype errors. Thus, participants' login success rate was 75% (N=12) for bend passwords and 93.8% (N=15) for PINs, but a McNemar test with the continuity correction found no significant difference between them. Most participants successfully entered their bend passwords and PINs in the first trial (n.s.). Although the memorability of PINs was slightly better, bend passwords had similar memorability (n.s.), even though they were a novel method for participants. Additionally, while in session 1 participants rated bend passwords significantly harder to remember than PINs (Table 1), their ratings in session 2 for the same questions were not significantly different (n.s.).
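The McNemar comparison above operates only on the discordant cells of a paired 2x2 table, i.e. participants who succeeded with one method but not the other. A minimal sketch of the continuity-corrected statistic follows; the counts are hypothetical, not the study's actual data.

```python
import math

def mcnemar_cc(b, c):
    """McNemar test with continuity correction for paired binary outcomes.

    b = number succeeding with PIN but not with the bend password
    c = number succeeding with the bend password but not with the PIN
    Returns (chi2, p), using the chi-square df=1 survival function,
    which for df=1 equals erfc(sqrt(chi2 / 2)).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant counts: 4 participants succeeded only with the
# PIN, 1 succeeded only with the bend password (NOT the study's data).
chi2, p = mcnemar_cc(4, 1)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```

With so few discordant pairs the corrected statistic stays small and the test is far from significance, which is the same qualitative outcome the paper reports for login success rates.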
+
+| Question | Session | Bend Md (SD) | PIN Md (SD) | Statistics |
+| --- | --- | --- | --- | --- |
+| Ease of password creation | 1 | 6.0 (2.62) | 7.5 (2.33) | Z=-0.95, p=.17 |
+| Ease of remembering | 1 | **5.5 (2.66)** | **8.0 (1.68)** | Z=-1.88, p=.03 |
+| | 2 | 7.5 (1.98) | 9.0 (2.38) | Z=-0.97, p=.17 |
+| Confidence in remembering | 1 | 7.0 (2.47) | 8.0 (1.89) | Z=-1.56, p=.06 |
+| | 2 | 8.0 (1.89) | 9.5 (2.68) | Z=-0.86, p=.20 |
+| Perceived overall security | 1 | 6.0 (2.13) | 8.0 (1.46) | Z=-1.40, p=.08 |
+| Security against shoulder surfing | 1 | 6.5 (2.37) | 7.0 (2.15) | Z=0.68, p=.75 |
+| Likelihood to use if available | 1 | 7.5 (2.83) | - | - |
+| | 2 | 6.5 (2.69) | - | - |
+| Likelihood to use in flex. devices | 1 | 6.5 (2.49) | - | - |
+| | 2 | 5.5 (2.53) | - | - |
+
+Table 1: User questionnaire responses to 10-point Likert scale questions, where 1 represents strongly disagree. Significant differences are marked in bold.
+
+| Strategies to create passwords | Bend passwords | PINs |
+| --- | --- | --- |
+| Pattern | 5 (9) | 4 (9) |
+| Simple combination | 5 (3) | 0 (2) |
+| Repetition | 0 (3) | 2 (6) |
+| Association | 1 | 7 |
+
+Table 2: Strategies participants reported using to create memorable passwords. Numbers in parentheses express the number of times researchers observed the use of each strategy.
+
+#### 5.2.3 Confidence to Remember
+
+Participants' confidence to remember their passwords was affected by the number of errors they made in the rehearsal step. Those who had fewer incorrect trials rated their confidence significantly higher both for bend passwords (χ²(24, N=16) = 36.98, p = .04) and for PINs (χ²(10, N=16) = 23.43, p = .009). Also, participants who were more confident in remembering their passwords before the login in session 2 were significantly more likely to remember their bend passwords (χ²(6, N=16) = 85, p = .01) and their PINs (χ²(5, N=16) = 85, p = .007).
+
+#### 5.2.4 Password Memorization Strategies
+
+We asked participants in session 2 whether they used any strategies to remember their bend passwords. Four (25%) said they thought about their passwords throughout the week, for a total of 7 (43.8%) who thought about them at least once. Also, while 7 (43.8%) said the methods they used to create their bend passwords were their main strategy to memorize them, 2 (12.5%) said they did not use any strategy. Both of them forgot their bend passwords in session 2, and a participant who forgot her PIN also did not use a strategy to remember it. Association, a common strategy used to memorize PINs, was not used by participants to memorize bend passwords, potentially because of their three-dimensionality. However, results from session 2 indicate that, regardless of the strategy used to create passwords, maintaining them in memory depends on good memorization strategies, which include at least mentally rehearsing the password.
+
+#### 5.2.5 Rate of Errors
+
+Our analysis of memorability considered the number of correct logins in session 2. Thus, for analysing the number of errors, we considered the errors that were not followed by a new password creation, excluding only those caused by a prototype error. Both bend passwords and PINs had the same number of incorrect entries (N=7). Similarly, the number of participants who made incorrect entries was the same: 4 for each password type (only 1 participant had an error with both bend passwords and PINs). Thus, there was no significant difference in the number of errors (n.s.).
+
+| Type | Mean Length (SD) | Mean Unique Entries (SD) |
+| --- | --- | --- |
+| Bend Password | 6.44 (0.81) | 5.81 (1.05) |
+| PIN | 6.19 (0.54) | 4.69 (1.14) |
+
+Table 3: Unique entries are the number of unique digits (PIN) and gestures (bend password) in a password.
+
+### 5.3 Ease of Use
+
+Participants rated the likelihood of using bend passwords and suggested who might want to use them. Most participants considered BendyPass easy to use (N=10) and liked its haptic and audio feedback (N=9). We also analysed the bends used to compose each password.
+
+#### 5.3.1 Potential Users
+
+When asked who might like to use bend passwords, 12 (75%) participants said vision-impaired people in general. P5 said, "Certainly blind and low vision, or people with learning disabilities that make them have issues with numbers, and seniors or people with learning issues". This supports the idea that physical deformable interaction in general, and bendable interfaces in particular, are perceived to offer great potential to people with low vision, not only by researchers [15, 19, 33], but also by users themselves.
+
+#### 5.3.2 Bend Passwords Used
+
+We analyzed the password characteristics and the composition of all passwords created by our participants (Table 3). Both bend passwords and PINs ranged from 6 to 8 gestures/digits, but most were equal to the minimum length of 6 required from participants. The difference in password length was not statistically significant. In contrast to prior work [33], our participants used more unique gestures than unique digits (Z=-1.95, p=.03). Every bend gesture was used at least once by at least one participant to compose a bend password. However, some gestures were used more frequently than others. The top three most frequently used gestures were: top right corner up (17%), top left corner up (14%), and bottom left corner up (12%), exactly the same top three single gestures as for sighted participants [33]. The least used gestures were top right corner down (6%), fold down (7%) and fold up (8%). Participants tended to prefer upwards gestures (60.2%) over downwards gestures (39.8%), confirming previous findings [26, 31, 33], even though the difference was not significant.
+
+### 5.4 Satisfaction
+
+To evaluate satisfaction from a user perspective, we studied the time needed to log in (measuring efficiency) and asked participants about the overall experience and perceived security. Finally, we also report below on perceived drawbacks and limitations.
+
+#### 5.4.1 Overall Experience
+
+After the final session, we asked participants to tell us how they would describe their experience using bend passwords to a friend. Nine participants expressed positive experiences using BendyPass, while only 3 described it negatively. For example, P4 said it was "fun, interesting, challenging, intriguing", while P8 said, "it was easy, [there is] a little of learning curve to know how to do the bends right, but once you got that it's easy to use, even easier than swiping the touch screen to find numbers". P7, on the other hand, said, "if errors were removed, it would be OK. Primary reason for negative comments are the errors and the fact that the surface should be [...] more responsive." The errors mentioned by participants involved the non-recognition of performed gestures (N=5) or the recognition of opposite directional gestures (N=1). The latter was caused by the sensors used in BendyPass, which react not only to deformation but also to pressure, which can occur when participants grasp the prototype strongly.
+
+#### 5.4.2 Efficiency of Login
+
+Following the methodology used in relevant literature [33], we compared participants' fastest confirmation and rehearsal times to evaluate whether their performance improved with practice. Participants took significantly less time to rehearse their PINs than to confirm them (Z=-2.97, p=.001), but were not significantly faster with bend passwords (n.s.), indicating their efficiency was close to optimal from the start. Participants took slightly less time to complete a first login with their bend passwords (M=22.4s, Md=21.5s, SD=8.48s) than with their PINs (M=35s, Md=25s, SD=21.25s) (n.s.), as shown in Figure 6. We selected the fastest login time from each participant who logged in successfully. As observed in the previous steps, participants took significantly less time in their fastest login with bend passwords (M=13s, Md=12s, SD=3.1s) than with PINs (M=18.27s, Md=16s, SD=7.71s) (Z=-2.20, p=.01).
+
+#### 5.4.3 Perceived Security
+
+We found no significant difference between the perceived security of bend passwords and PINs. Interestingly, when we asked participants to justify their ratings of the security of bend passwords and PINs against shoulder surfing attacks, 7 participants said bend passwords are easy for others to see, while 5 participants said PINs are easy to see. This was also one of the most common reasons participants gave for their ratings of the overall security of bend passwords (N=5), although another 4 participants said bend passwords were difficult to see. Thus, there was no consensus amongst participants on whether bend passwords are easy or hard to observe.
+
+#### 5.4.4 Drawbacks of Using BendyPass
+
+We asked participants to point out characteristics that worked well with bend passwords, as well as aspects that should be improved. The most common disadvantage participants mentioned was having to carry an additional device (N=6). For example, P7 said, "I don't like carrying extra things, I barely remember my charging cable". On the other hand, most suggested reducing the protuberance of the button (N=11) and improving the accuracy of bend password recognition (N=10), especially for folding gestures (N=9). Although more than half of the participants liked the audio feedback provided by BendyPass, at least 3 were inclined to deactivate this option. Interestingly, 7 participants liked the form factor of BendyPass while 8 suggested reducing its size, even saying it would be nice to have it as a keychain.
+
+### 5.5 Limitations
+
+Similar to prior work [15], we recruited more blind than low-vision participants. This might be a result of a higher interest of the blind community in novel assistive technologies, but might also be indicative of the difficulty in classifying some people as blind or low vision. For example, one of our participants self-declared as blind, but said he uses inverted colours on his smartphone to better see the screen.
+
+Although we tested our prototypes and held 3 pilot sessions, we faced prototype issues during the study sessions, where BendyPass was not fully accurate at recognizing bend gestures and the smartphone app did not work with all typing styles. Although we acknowledge that neither the app keypad nor the typing method in our study is the same as most participants use on their own smartphones, we argue that this simulated well the learning process new users go through when adopting a new device or software.
+
+## 6 Discussion
+
+We presented the results of a user study on bend passwords compared to PINs with 16 participants who were blind or had low vision. We found that bend passwords were as easy to create as PINs, and participants assessed them as being as easy to remember as PINs. Participants reported being more likely to use bend passwords on a flexible device, but would also be willing to use them in a separate device for gaining access to password-secured systems and spaces. This is because bend passwords take users with low vision less time to enter than PIN passwords.
+
+### 6.1 Learnability of Bend Passwords
+
+Explaining to participants how to use BendyPass took around 2 minutes, confirming previous findings [42] that users are able to quickly understand how to interact with deformable devices. Users may need more training with BendyPass than with the smartphone, and slightly more time to create bend passwords than PINs. We attribute this to the novelty of the paradigm as opposed to the commonly used touch-based PIN passwords. Still, our results show that the difference in time needed to create a password using bends or a PIN was not statistically significant.
+
+### 6.2 Memorability of Bend Passwords
+
+Participants forgot their bend passwords significantly more often than their PINs. This may relate to their unfamiliarity with memorizing gestures compared to memorizing sequences of numbers. The initial difficulty in memorizing bend passwords is supported by participants' first-session ratings of the ease of remembering bend passwords, significantly lower than for PINs. However, the ratings for the same question in session 2 did not differ significantly. This is also supported by their success rates and the number of attempts to complete a first successful login. Still, the lack of muscle memory for bend passwords, which underpins the association memorization strategy, impacts the performance of bend authentication compared to PINs.
+
+Perhaps future work should explore muscle memory associations such as typing on a keyboard or playing a musical instrument. Other deformable authentication methods can be also used to support people with vision impairments using mental association for better memorability. Examples of such interactions could mix both deformation with alpha-numeric associations, such as enabling only a small number of bend gestures in a Morse code style sequence, stroking characters on a texture-changing edge, or twisting a shape-memory strip a number of times back and forth.
+
+### 6.3 Easiness to Use Bend Passwords
+
+Users rated bend passwords as easy to use and appropriate for people with vision impairments. Bend passwords were faster to log in with than PINs for people with low vision. This is mainly because our participants used smartphones with accessibility features, which slowed them down. In fact, research acknowledges that slowness of entry is one of the main limitations users dislike about PINs [15].
+
+### 6.4 Usability for Blind vs Low Vision
+
+We did not find significant differences between the time participants who were blind and participants who had low vision took to train how to use bend passwords or to create a bend password. This suggests that people with low vision can benefit as much as the blind from a tactile physical password.
+
+### 6.5 Overall Usability of Bend Passwords
+
+As expected, due to familiarity with PINs, learning and training how to use bend passwords can take longer. PINs are often created from numeric patterns that users remember, such as dates and phone numbers already stored in memory prior to creating a password. In contrast, people do not usually have memorized bend sequences they can draw on when creating a bend password. This explains the memorability difference between bend passwords and numeric passwords. On the other hand, bend passwords were faster to log in with than PINs and had a similar number of errors and perceived user satisfaction. Also, the deformation interaction was more accessible than touch-screen interaction, as it uses a more tactile input method and provides vibration as feedback. These learnings indicate the opportunities and limitations of bend passwords when comparing their usability with PIN passwords.
+
+### 6.6 Bend Passwords in Future Devices
+
+While the current prototype was a separate device, and would hence require the user to carry extra hardware, we do not endorse implementing bend authentication this way in future devices. Instead, we target expected near-future flexible phones: a vast amount of relevant work envisions that bend interaction will soon be integrated into mobile devices, and we explore authentication through bends accordingly. We envision that bend passwords would be integrated into such future flexible smartphones, or into a flexible phone case over a rigid smartphone (e.g., [21, 34]).
+
+## 7 CONCLUSION
+
+We explored the application of a bendable user authentication method for people with vision impairment using bend passwords. We conducted a user study with 16 people who are blind or have low vision to evaluate the learnability and memorability of bend passwords on BendyPass compared to PINs on a smartphone. Our paper extends previous work, not only by engaging experts and participants with vision impairment, but also by challenging prior work on whether we should be designing such deformable authentication methods at all. We do not argue that adopting bend passwords is the answer to touch-screen accessibility, but explore their potential and limitations as an alternative.
+
+In this study, we gained insights from situating BendyPass in the hands of its intended users and learned about its ease of use, memorability, learnability and user satisfaction. We found that bend passwords do not outperform touch-based interactions for people with vision impairment. Despite being as easy to learn as PINs, bend passwords are still not as easy to memorize. Bend passwords were significantly faster to enter than PINs on an iPhone (using the screen reader VoiceOver and the standard typing style), but were not rated by users as more secure than PINs. Such findings shed light on the limitations of the bend interaction opportunity that has often been over-promised in HCI literature.
+
+Our inquiry was to assess how people with vision impairment would use bend passwords versus the PIN passwords they already use. In accessibility applications, the key tenet is not outperformance, but providing people with multiple ways to achieve their goals in case one is ill-suited. For example, voice commands are not faster than touch, but in some scenarios, or for some users, they are more practical; it is useful to have the option to switch between interaction techniques that do not necessarily outperform each other (e.g., mouse and keyboard). Therefore, our paper does not claim that bend passwords outperform PINs, or that having a clear winner is the goal, but explores the usability of each method to show researchers and designers which refinements can provide accessibility choices and create alternatives.
+
+We envision that deformability will be integrated in smartphones soon and argue that careful considerations should be taken into account and challenged when designing such interactivity. Future work should address the challenges of deformable interaction for accessibility in use, with the target user group, including association, memorability and security against shoulder surfing attacks, beyond the initial work on this subject [33]. We plan to explore further deformable gestures, replacing bend and fold with other deformations. We also plan to integrate the prototype into a phone case connected to users' smartphones through Bluetooth. Such enhancements will allow a longitudinal study investigating the long-term usability of deformable authentication for people living with vision impairment, seeking to improve technology that currently hinders this user group.
+
+## ACKNOWLEDGMENTS
+
+This work was supported and funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery grant (2017-06300), a Discovery Accelerator Supplement (2017-507935) and the Collaborative Learning in Usability Experiences Create grant (2015-465639). We thank Kim Kilpatrick and Nolan Jenikov from the Canadian Council of the Blind in Ottawa for their great help.
+
+## REFERENCES
+
+[1] ABBYY. TextGrabber, 2018.
+
+[2] T. T. Ahmaniemi, J. Kildal, and M. Haveri. What is a device bend gesture really good for? In SIGCHI Conference on Human Factors in Computing Systems, pp. 3503-3512. ACM, 2014. doi: 10.1145/2556288.2557306
+
+[3] T. Ahmed, R. Hoyle, K. Connelly, D. Crandall, and A. Kapadia. Privacy Concerns and Behaviors of People with Visual Impairments. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15, pp. 3523-3532, 2015. doi: 10.1145/2702123.2702334
+
+[4] M. Alnfiai and S. Sampalli. An Evaluation of SingleTapBraille Keyboard: A Text Entry Method That Utilizes Braille Patterns on Touch-screen Devices. Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 161-169, 2016. doi: 10.1145/2982142.2982161
+
+[5] A. J. Aviv, J. T. Davin, F. Wolf, and R. Kuber. Towards Baselines for Shoulder Surfing on Mobile Authentication. Proceedings of the 33rd Annual Computer Security Applications Conference on - ACSAC, pp. 486-498, 2017. doi: 10.1145/3134600.3134609
+
+[6] S. Azenkot and N. B. Lee. Exploring the use of speech input by blind people on mobile devices. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 1-8, 2013. doi: 10.1145/2513383.2513440
+
+[7] S. Azenkot and K. Rector. Passchords: secure multi-touch authentication for blind people. Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility, pp. 159-166, 2012. doi: 10.1145/2384916.2384945
+
+[8] V. Balaji and K. S. Kuppusamy. Towards accessible mobile pattern authentication for persons with visual impairments. In 2017 International Conference on Computational Intelligence in Data Science(ICCIDS), pp. 1-5, 2017.
+
+[9] R. Balakrishnan, G. Fitzmaurice, G. Kurtenbach, and K. Singh. Exploring interactive curve and surface manipulation using a bend and twist sensitive input strip. In Proceedings of the Symposium on Interactive 3D Graphics, pp. 111-118, 1999.
+
+[10] N. M. Barbosa, J. Hayes, and Y. Wang. UniPass: design and evaluation of a smart device-based password manager for visually impaired users. UbiComp, pp. 49-60, 2016. doi: 10.1145/2971648.2971722
+
+[11] BEP. EyeNote App Overview, 2018.
+
+[12] BlindSquare. BlindSquare, 2017.
+
+[13] J. Bonneau, C. Herley, P. C. van Oorschot, and F. Stajano. The Quest to Replace Passwords: A Framework for Comparative Evaluation of Web Authentication Schemes. 2012 IEEE Symposium on Security and Privacy (SP), 2012:553-567, 2012. doi: 10.1080/09500690902717283
+
+[14] D. Briotto Faustino and A. Girouard. Bend Passwords on BendyPass : A User Authentication Method for People with Vision Impairment. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility - ASSETS '18, pp. 435-437. ACM Press, New York, 2018. doi: 10.1145/3234695.3241032
+
+[15] D. Briotto Faustino and A. Girouard. Understanding Authentication Method Use on Mobile Devices by People with Vision Impairment. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility - ASSETS '18, pp. 217-228. ACM Press, New York, 2018. doi: 10.1145/3234695.3236342
+
+[16] Cloud Sight.ai Image Recognition API. TapTapSee, 2019.
+
+[17] Á. Csapó, G. Wersényi, H. Nagy, and T. Stockman. A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research. Journal on Multimodal User Interfaces, 9(4):275-286, 2015. doi: 10.1007/s12193-015-0182-7
+
+[18] B. Dosono, J. Hayes, and Y. Wang. "I'm Stuck !": A Contextual Inquiry of People with Visual Impairments in Authentication. Proceedings of the eleventh Symposium On Usable Privacy and Security, pp. 151-168, 2015.
+
+[19] M. Ernst and A. Girouard. Bending Blindly: Exploring Bend Gestures for the Blind. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 2088-2096. ACM Press, New York, 2016. doi: 10.1145/2851581.2892303
+
+[20] M. Ernst, T. Swan, V. Cheung, and A. Girouard. Typhlex: Exploring Deformable Input for Blind Users Controlling a Mobile Screen Reader. IEEE Pervasive Computing, 16(4):28-35, oct 2017. doi: 10.1109/MPRV.2017.3971123
+
+[21] E. Fares, V. Cheung, and A. Girouard. Effects of bend gesture training on learnability and memorability in a mobile game. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, ISS 2017, 2017. doi: 10.1145/3132272.3134142
+
+[22] A. Girouard, J. Lo, M. Riyadh, F. Daliri, A. K. Eady, and J. Pasquero. One-Handed Bend Interactions with Deformable Smartphones. In Proc. CHI'15, pp. 1509-1518, 2015.
+
+[23] M. M. Haque, S. Zawoad, and R. Hasan. Secure Techniques and Methods for Authenticating Visually Impaired Mobile Phone Users. IEEE International Conference on Technologies for Homeland Security (Hst), pp. 735-740, 2013.
+
+[24] J. Kildal. Interacting with deformable user interfaces: Effect of material stiffness and type of deformation gesture. In C. Magnusson, D. Szymczak, and S. Brewster, eds., Haptic and Audio Interaction Design, pp. 71-80. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
+
+[25] J. Kildal and G. Wilson. Feeling it: the roles of stiffness, deformation range and feedback in the control of deformable ui. In Proceedings of the 14th ACM international conference on Multimodal interaction - ICMI '12, p. 393. ACM Press, New York, 2012. doi: 10.1145/2388676. 2388766
+
+[26] B. Lahey, A. Girouard, W. Burleson, and R. Vertegaal. PaperPhone: Understanding the Use of Bend Gestures in Mobile Devices with Flexible Electronic Paper Displays. In Proceedings of the ACM 2011 Conference on Human Factors in Computing Systems, pp. 1303-1312. Berlin, Germany, 2011. doi: 10.1145/1978942.1979136
+
+[27] J. Lazar, B. Wentz, and M. Winckler. Information Privacy and Security as a Human Right for People with Disabilities. In J. Lazar and M. Stein, eds., Disability, Human Rights, and Information Technology, pp. 199- 211. University of Pennsylvania Press, Philadelphia, 2017.
+
+[28] S.-S. Lee, S. Kim, B. Jin, E. Choi, B. Kim, X. Jia, D. Kim, and K.-p. Lee. How users manipulate deformable displays as input devices. In SIGCHI Conference on Human Factors in Computing Systems, pp. 1647-1656. ACM, New York, 2010. doi: 10.1145/1753326.1753572
+
+[29] S.-s. Lee, Y.-k. Lim, and K.-P. Lee. Exploring the effects of size on deformable user interfaces. In International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 89-94. ACM, New York, 2012. doi: 10.1145/2371664.2371682
+
+[30] B. Leporini, M. C. Buzzi, and M. Buzzi. Interacting with Mobile Devices via VoiceOver : Usability and Accessibility Issues. Proceedings of the 24th Australian Computer-Human Interaction Conference, pp. 339-348, 2012. doi: 10.1145/2414536.2414591
+
+[31] J. Lo and A. Girouard. Bendy: Exploring Mobile Gaming with Flexible Devices. Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction - TEI'17, pp. 163-172, 2017. doi: 10.1145/3024969.3024970
+
+[32] S. Maqsood. Shoulder surfing susceptibility of bend passwords. In Proceedings of the extended abstracts of the 32nd annual ACM conference on Human factors in computing systems - CHI EA '14, pp. 915-920, 2014. doi: 10.1145/2559206.2579411
+
+[33] S. Maqsood, S. Chiasson, and A. Girouard. Bend Passwords: Using Gestures to Authenticate on Flexible Devices. Personal and Ubiquitous Computing, 20(4):573-600, 2016. doi: 10.1007/s00779-016-0928-6
+
+[34] P. Marti and I. Iacono. Experience over time: evaluating the experience of use of a squeezable interface in the medium term. Multimedia Tools and Applications, 76(4):5095-5116, feb 2017. doi: 10.1007/s11042-016-3595-8
+
+[35] Microsoft. Seeing AI - Talking camera app for those with a visual impairment, 2018.
+
+[36] G. Mone. The Future Is Flexible Displays. Communications of the ACM, 56(6):16-17, 2013. doi: 10.1145/2461256.2461263
+
+[37] A. Mulholland. With new technology, few blind Canadians read braille, 2017.
+
+[38] National Federation of the Blind. Blindness Statistics - Statistical Facts about Blindness in the United States, 2019.
+
+[39] J. Nielsen. Usability 101: Introduction to Usability, 2012.
+
+[40] L. Rainie and A. Perrin. 10 facts about smartphones as the iPhone turns 10, 2017.
+
+[41] D. Rose. Braille is spreading but who's using it?, 2012.
+
+[42] C. Schwesig, I. Poupyrev, and E. Mori. Gummi: A Bendable Computer. In Proceedings of the 2004 conference on Human factors in computing systems - CHI '04, pp. 263-270, 2004. doi: 10.1145/985692.985726
+
+[43] P. Strohmeier, J. Burstyn, J. P. Carrascal, V. Levesque, and R. Vertegaal. ReFlex: A Flexible Smartphone with Active Haptic Feedback for Bend Input. In International Conference on Tangible, Embedded, and Embodied Interaction, pp. 185-192, 2016. doi: 10.1145/2839462. 2839494
+
+[44] K. Warren, J. Lo, V. Vadgama, and A. Girouard. Bending the rules: Bend gesture classification for flexible displays. In SIGCHI Conference on Human Factors in Computing Systems, pp. 607-610. ACM, 2013. doi: 10.1145/2470654.2470740
+
+[45] O. Wiese and V. Roth. See you next time. Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI '16, pp. 453-464, 2016. doi: 10. 1145/2935334.2935388
+
+[46] World Health Organisation. International Classification of impairments, disabilites and handicaps (ICIDH), 1980.
+
+[47] H. Ye, M. Malu, U. Oh, and L. Findlater. Current and future mobile and wearable device use by people with visual impairments. In SIGCHI Conference on Human Factors in Computing Systems, pp. 3123-3132. ACM Press, 2014. doi: 10.1145/2556288.2557085
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..219407e8d8e05ed07f6fdc15333daae39ccfdfbe
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/QpdI7o_pDQ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,344 @@
+§ BEND OR PIN: STUDYING BEND PASSWORD AUTHENTICATION WITH PEOPLE WITH VISION IMPAIRMENT
+
+Daniella Briotto Faustino*
+
+Carleton University
+
+Sara Nabil†
+
+Carleton University
+
+Audrey Girouard‡
+
+Carleton University
+
+§ ABSTRACT
+
+People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more 'observable'. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, limited work has evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants who live with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer greater opportunity than PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.
+
+Index Terms: Human-centered computing-Human-computer interaction (HCI); Human-centered computing-Haptic devices; Human-centered computing-User studies
+
+§ 1 INTRODUCTION
+
+While accessibility features like screen magnifiers and screen readers make devices such as flat touchscreen smartphones usable for people with vision impairment, many challenges remain. Typing on smartphones, for example, is complex for users who are blind or have low vision [4, 30], often requiring them to use external physical keyboards [6, 47]. Also, screen readers read everything aloud to users, jeopardizing their privacy and requiring the use of earphones in public spaces [3, 7, 47]. Additionally, accessibility features have the drawback of making vision-impaired users vulnerable to shoulder surfing and aural eavesdropping when entering PINs [23]. Shoulder surfing can occur because screen magnifiers zoom in on the keyboard, making password entries more visible, while aural eavesdropping is possible because screen readers read everything typed aloud, even password entries. Therefore, almost 70% of the vision-impaired are concerned with typing passwords in public spaces and prefer biometric user authentication, such as fingerprints [15]. However, biometrics may not work when there is an environmental change (e.g., moist hands), and thus act only as a re-authenticator for other authentication methods, such as PINs or patterns [5]. The problem is twofold: patterns are not accessible for people with vision impairment [8, 28], and PINs are regarded by them as insecure to unlock mobile devices [8, 15], easy to guess, inaccessible, or inconvenient [7, 15, 18].
+
+To give vision-impaired users a more accessible and secure alternative to patterns and PINs, previous work has suggested the use of deformable user interactions, which support tactile interaction [36, 42, 43]. More specifically, researchers proposed the use of bend passwords [33], a user authentication method where a sequence of bend gestures works as a password. However, no previous studies have investigated the usability of bend passwords for the vision-impaired.
+
+
+Figure 1: BendyPass prototype, where users can enter bend and fold gestures to compose a bend password.
+
+In this paper, we present our exploration-led research of bend passwords. For our study, we developed BendyPass, a flexible device to capture bend passwords, which was proposed in a previous demonstration [14]; we extended its capabilities into an interactive device (Figure 1). We developed it through an iterative process with deformable interface researchers and vision-impairment experts. Then, we conducted a within-subject study with vision-impaired participants to compare bend passwords and PINs to answer these research questions:
+
+ * Q1. Would deformable interaction outperform touch interaction for people with vision impairment?
+
+ * Q2. What can we learn - for design - about the ease of use, memorability and learnability of bend passwords versus PINs for people with vision impairment?
+
+Our paper builds on two main publications [14, 33] in three key aspects. We created a refined prototype (inspired by a demonstration [14]) that is more efficient and robust, with additional features. We targeted a specific user population, people with vision impairment, with whom no previous work has actually run studies of this method. We evaluated both bend and PIN authentication concurrently, quantitatively comparing them and exploring the usability of each in a rigorous and thorough user study, breaking new ground.
+
+As such, our study is the first to explore the potential and limitations of bend passwords with people with vision impairments. Through the analysis of the data collected from 16 blind and low vision participants, our three key contributions are: (1) Refining the design and fabrication of deformable prototypes for password input; (2) Presenting insights on the usability of bend passwords compared to PINs; (3) Exploring the design opportunities and challenges of bend passwords.
+
+*e-mail: daniella.briottofaustino@carleton.ca
+
+†e-mail: sara.nabil@carleton.ca
+
+‡e-mail: audrey.girouard@carleton.ca
+
+§ 2 RELATED WORK
+
+Researchers have explored the use of technology for assisting people with vision impairment [4, 17, 23]. While most of this work relied on touch interaction on common smartphones, some researchers have explored the opportunity of authenticating through physical deformation rather than touch-screens for people living with blindness or vision impairment. In this section, we review the use of technology, authentication methods, and deformable interfaces with respect to these needs.
+
+§ 2.1 TECHNOLOGY AND VISION IMPAIRMENT
+
+Smartphones have become widely adopted not only by sighted individuals, but also by individuals with vision impairments, thanks to the rise of accessibility features and assistive applications in mainstream devices [17, 23]. The most common accessibility features are screen readers and screen magnifiers. With the increased smartphone adoption, the quality of life among the vision-impaired has improved [4], because they use their smartphones to execute tasks previously achievable only by using multiple assistive technology devices. Smartphones work as assistive tool aggregators, giving users access to apps to identify bills [11], street names [12], colours [16], and objects and faces [35], and to read printed text [1].
+
+Nonetheless, using smartphones also has its own challenges. Entering data on smartphones can be difficult for the vision-impaired [4, 30], who might need to use external physical keyboards [6, 47]. Also, screen readers put users at risk of others listening to their private information, requiring the use of earphones in public spaces [3, 7, 47]. Braille is sometimes considered a solution, because potential attackers probably do not know how it works. However, few vision-impaired people know Braille. Among vision-impaired Americans and Canadians, fewer than 10% read Braille [37, 38], and among the British, less than 1% of people with vision impairment can read Braille [41].
+
+§ 2.2 USER AUTHENTICATION METHODS
+
+User authentication methods are the way users prove their identity, gain access to their devices and accounts [33], and protect their personal information [27]. User authentication methods can be categorized as: knowledge-based (something the user knows, such as PINs or passwords), token-based (something the user has, such as key fobs), or biometric-based (something the user is, such as fingerprints) [27]. Generally, alphanumeric passwords are the standard for user authentication on websites [13]. On the other hand, PINs are the standard for unlocking smartphones [45]. For sighted Americans, for example, the PIN is the most commonly used method to unlock smartphones [40].
+
+A 2018 online survey with 325 vision-impaired people found 75% of the smartphone users had an authentication method to unlock their devices [15], a higher percentage than the ones found in research from 2012 [7] and 2015 [18], where 0% and 33% used an authentication method, respectively. The 2018 survey also found their preferred user authentication method on smartphones is the fingerprint, which they also consider the most secure authentication method [15]. On the other hand, participants considered PINs the least secure authentication method, and only 16% use it as their main method [15]. The perceived security of PINs is impacted by the fact that typing PINs when using embedded screen readers makes people with vision impairment more susceptible to aural eavesdropping by others [23]. Similarly, using screen magnifiers while entering passwords increases the susceptibility to visual eavesdropping or shoulder surfing [23]. However, most vision-impaired users seem unaware that even if a smartphone has a biometric method set up, knowledge-based authentication is still the main barrier against unauthorized access to smartphones [15], as biometrics act as mere re-authenticators for knowledge-based methods [5].
+
+Vision-impaired users do not want to deal with the complexity of user authentication methods [15], but with the large volume of personal data generally stored in smartphones [27], it is essential to protect users' privacy [23]. Researchers have explored user authentication alternatives for the vision-impaired to reduce observer threats, including a password management system [10], an accessible pattern scheme for touch-screens [8], and novel user authentication methods based on the user's gait [23] and multi-finger taps on the touch-screen [7]. However, no previous study has rigorously explored the use of deformation gestures as a method for user authentication with people with vision impairment.
+
+§ 2.3 DEFORMABLE BENDABLE INTERFACES
+
+Deformable devices allow users to physically manipulate their shapes as a form of input, by bending, twisting, or deforming them [2, 20]. Such interfaces are often referred to as Deformable User Interfaces (DUIs) [24] - that is, the "physical manual deformation of a display to form a curvature for the purpose of triggering a software action", including a bend gesture [26]. One of the first interfaces designed to explore physical input interaction through bending was ShapeTape [9]. Similarly, Gummi [42] was made to afford bending its physical form using flexible electronic components, including sensors able to measure deformation. Companies (such as Samsung, LG, and Huawei) are currently developing foldable flexible devices, and we expect deformable devices to include bend gestures as input in the near future.
+
+Considering how blind users solely rely on non-visual feedback (e.g., tactile cues and audio), and that deformation is a tactile form of input, Ernst et al. [20] proposed deformation could be beneficial for the blind. They developed a deformable device that accepted bend gestures to control a smartphone screen reader. In their preliminary evaluation with vision-impaired participants, Ernst and Girouard [19] found bend gestures might be "easier out of the box than touch [interactions]" and could improve the accessibility of smartphones.
+
+Another possible application of bend gestures is bend passwords, first proposed by Maqsood et al. [33] as a sequence of bend gestures to authenticate the user. In a user study with sighted participants comparing bend passwords to PINs, researchers found bend passwords as easy to memorize as PINs, and noted that they might allow users to rely on their muscle memory to recall their passwords [33]. Additionally, bend passwords might be harder to observe than PINs, making them potentially safer against shoulder surfing attacks [32]. Our work expands on and evaluates a prototype published as a demonstration [14]. Due to the tactile nature of this method and its promising opportunity, and as flexible smartphones may become available soon, we want to explore the usability of bend passwords for people who are vision-impaired or blind.
+
+§ 3 PROTOTYPE
+
+This section describes the design process resulting in our final prototype, which we later evaluate with people with vision impairments. The related previous work, especially [14, 33], provided the launching points for this paper: 1) conceptualizing and demonstrating the technical issues of creating an initial prototype [14]; and 2) testing this interaction method with a generalized population (i.e., sighted people) [33].
+
+§ 3.1 DESIGN PROCESS
+
+We started by consulting a blind expert who teaches technology to people with vision impairment at a local organization of the blind. The expert shared her concerns about vision-impaired users deciding not to use authentication methods on their smartphones due to their perceived inaccessibility or complexity, and confirmed that bend passwords could have potential for the vision-impaired. Then, we developed an initial version of the prototype and presented it to a group of 10 vision-impaired people, describing bend passwords as an alternative for authenticating. Most of them indicated interest in a tactile user authentication method.
+
+Our process consisted of developing concurrent alternative prototypes, based on previous research and on feedback about the demonstration prototype received from deformable interface researchers, then presenting them to two blind experts who participated in our iterative design process. We consulted them to define the best size, material stiffness, groove design, sensor placement, and set of gestures, to create a prototype well suited to these needs. We iterated through a dozen preliminary prototypes and a total of four meetings with the blind experts before choosing our final design.
+
+§ 3.2 PRIOR WORK INSPIRATION
+
+Previous research showed that users prefer a deformable device the size of a smartphone rather than a tablet, to minimize fatigue and maximize comfort [29] and to minimize the need to reposition their hands to perform bend gestures [19, 31]. Users perform most deformations while holding a rectangular device horizontally [28], and they select simple bend gestures that are less physically demanding [26, 28]. Additionally, forcing the user to change their grip to perform gestures not only causes discomfort [22, 28], but also slows task completion [2, 19], raises the likelihood of false activation [22], and increases the risk of dropping the device [22]. Hence, we decided to design a rectangular device similar in size to a medium smartphone, and considering the importance of allowing users to access all corners without re-gripping, we designed our interface for landscape use.
+
+Regarding device deformability, previous research showed that higher stiffness requires more physical effort [25], which negatively influences users' preference and performance when bending [24]. Finally, researchers recommend that bendable interfaces help users identify deformable areas by having grooves at the bendable points [19, 22] and by providing haptic feedback [43]. Thus, we opted for malleable silicone for our device's bendable areas, adding grooves and haptic feedback.
+
+§ 3.3 FINAL PROTOTYPE DESIGN
+
+This prototype's purpose is to put this opportunity in the hands of users and explore its potential in comparison to touch-screens. Our final prototype, BendyPass (Figure 1), is approximately the size of an iPod Touch (11.5 × 6 × 1 cm), is made of silicone, and has a single push-button that allows the user to either confirm the password or delete the previous gesture entered. The button also indicates the device's orientation, helping users recognize both which side should be facing them and which side is left or right.
+
+Inspired by a recent demo [14], our prototype is composed of two silicone layers that enclose all electronic components. We 3D printed a mold including a vertical groove in its centre, four corner grooves that create triangular areas around each of its corners, and a lowered part to insert its push-button (Figure 2, top). The grooves extend from side to side to facilitate bend gestures, and they are angled to avoid pinching users' fingers when bending the device. To fabricate our prototype, we used two different types of silicone, making the bendable areas more flexible than the central area (Alumilite A30 and A80, respectively). This emphasizes the bendable areas while protecting the components in the centre.
+
+The electronic components included five 1" Flexpoint bidirectional sensors to recognize bend gestures and a vibration motor to provide haptic feedback when an action is recognized. BendyPass components are positioned as shown in Figure 2 (bottom): the vibration motor is on the left side, and the flex sensors are in the centre and in the four corners. The components are connected to an Arduino Leonardo microcontroller, itself connected to a MacBook Pro laptop. Gestures applied to our prototype become letters on the computer, while long button presses (> 1 s) activate the Enter key and short button presses activate the Backspace key. The letter mapping is invisible to the user, who only needs to perform bend gestures and operate the button.
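The input mapping described above (gestures become letters on the host; long presses act as Enter, short presses as Backspace) can be sketched as follows. The letter assignments and press-duration threshold are illustrative assumptions, not the prototype's actual firmware values:

```python
# Hypothetical gesture-to-letter mapping, in the spirit of the prototype:
# the host computer only ever sees letters, Enter, or Backspace.
GESTURE_TO_LETTER = {
    "top_left_up": "A", "top_left_down": "B",
    "top_right_up": "C", "top_right_down": "D",
    "bottom_left_up": "E", "bottom_left_down": "F",
    "bottom_right_up": "G", "bottom_right_down": "H",
    "fold_up": "I", "fold_down": "J",
}

LONG_PRESS_THRESHOLD_S = 1.0  # assumed: long press confirms, short press deletes

def button_event(press_duration_s: float) -> str:
    """Translate a button press into a key, based on its duration."""
    return "Enter" if press_duration_s >= LONG_PRESS_THRESHOLD_S else "Backspace"

print(GESTURE_TO_LETTER["fold_up"])  # -> I
print(button_event(1.2))             # -> Enter
print(button_event(0.3))             # -> Backspace
```

Keeping the mapping on the device side, as the authors did, means the password never appears on screen in a form the user must read back.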
+
+
+Figure 2: BendyPass 3D mold design (top view) and its internal components housed in a thin foam layer (pink).
+
+§ 3.4 BEND PASSWORDS ON BENDYPASS
+
+BendyPass recognizes 10 simple bend gestures (Figure 3): bending each corner upwards or downwards (8 gestures) and folding the device in half upwards or downwards (2 gestures). Our prototype was programmed to recognize fewer gestures than Maqsood et al.'s [33], because our two blind experts considered the double gestures (gestures performed on two corners at the same time) excessively complex. With 10 possible gestures, BendyPass has the same number of possible combinations as a PIN: a six-gesture bend password has the same strength against brute-force guessing as a 6-digit PIN.
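The strength equivalence is simple combinatorics, which a quick check (ours, not an artifact of the paper) confirms:

```python
import math

# With 10 distinct bend gestures, each position in a bend password has
# the same number of choices as a decimal digit in a PIN.
PIN_DIGITS = 10
BEND_GESTURES = 10
LENGTH = 6  # the study required at least 6 digits/gestures

pin_space = PIN_DIGITS ** LENGTH
bend_space = BEND_GESTURES ** LENGTH
print(pin_space, bend_space)  # 1000000 1000000

# Equivalently, both offer about 19.93 bits against brute-force guessing.
print(round(LENGTH * math.log2(BEND_GESTURES), 2))  # 19.93
```

Note this counts only the theoretical space; as the password-creation strategies reported later show, real user choices are far from uniform.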
+
+Aside from the haptic feedback, BendyPass also provides optional audio feedback verbalizing the name of the gesture entered. Prior work indicated that for effective communication of gestures, it is necessary to provide information about location and direction [44]. Thus, we named our gestures using both (e.g., "top left corner up"). The exceptions are the folding gestures, which proved confusing in preliminary analysis when named by location. For example, "centre up" could trick users into moving the sides up, while the centre would move down. Thus, for these we used the name of the gesture (fold) in addition to the direction.
+
+§ 4 USER STUDY
+
+§ 4.1 APPARATUS
+
+We designed a user study to compare two knowledge-based user authentication methods: bend passwords and PINs. Thus, in addition to our prototype BendyPass, we used an iPhone 6S for PIN entry, because most people with vision impairment use iPhones [15] due to their native screen magnifier and screen reader functionalities. We selected a keypad from a remote keyboard app [52] to simulate a PIN entry screen while transmitting the keys typed to a computer, where we could save them. We chose it because it worked relatively well with the screen reader VoiceOver and the Standard typing style. In this accessible typing mode, users explore the screen by swiping or single-tapping it, and they trigger a key either by lifting their finger and double-tapping the screen or by keeping their finger on the screen and tapping it with another finger.
+
+
+Figure 3: Set of 10 bend gestures available on BendyPass.
+
+
+Figure 4: The structure of our user study.
+
+We also developed a PHP website to receive and verify passwords from both BendyPass and the smartphone. The website was connected to a MySQL database and hosted using XAMPP. Our database saves participants' IDs, password entries, and the times each input started and ended. Our website provides audio cues (such as "Create your password using the gestures learned" or "Wrong password, please try again") to help users navigate the password creation process. It also provides optional audio feedback for bend gestures or button presses. For the audio, we recorded the screen reader VoiceOver reading all messages on a MacBook Pro, using the default speed of 45. We used JavaScript to assign actions to their respective audio snippets.
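As a sketch of the kind of logging described above, here is a minimal equivalent using Python's stdlib sqlite3; the table and column names are our assumptions, not the study's actual MySQL schema:

```python
import sqlite3
import time

# Hypothetical schema mirroring what the paper describes logging:
# participant ID, the password entry, and entry start/end timestamps.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE password_entries (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        participant TEXT NOT NULL,
        device      TEXT CHECK (device IN ('bendypass', 'pin')),
        entry       TEXT NOT NULL,   -- gesture/digit sequence as letters
        started_at  REAL NOT NULL,   -- UNIX timestamps
        ended_at    REAL NOT NULL
    )
""")

start = time.time()
conn.execute(
    "INSERT INTO password_entries (participant, device, entry, started_at, ended_at) "
    "VALUES (?, ?, ?, ?, ?)",
    ("P01", "bendypass", "ADFBCA", start, start + 12.4),
)

# Entry duration, as analysed in the study, is simply ended_at - started_at.
(duration,) = conn.execute(
    "SELECT ended_at - started_at FROM password_entries"
).fetchone()
print(f"{duration:.1f}s")
```

Storing raw start/end timestamps rather than precomputed durations keeps the log records reusable for both the creation-time and login-time analyses reported in the findings.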
+
+§ 4.2 METHODOLOGY
+
+We structured our user study as two 60-minute sessions (Figure 4), following the main tasks proposed by relevant literature [33]. The first session focused on password learnability, while the second session, about a week after the first, focused on password memorability.
+
+We started the first session by interviewing participants, asking them whether they had already tried a flexible device, followed by questions on their demographics and their perception and use of user authentication. Then, we asked participants to create both: 1) a bend password on BendyPass and 2) a new PIN on the smartphone, each with at least 6 digits/gestures. After creating the passwords, participants confirmed them 3 times before rehearsing them by completing 5 successful logins. Whenever participants expressed uncertainty about their passwords while confirming or rehearsing them, we asked if they wanted to create a new one and allowed them to do so. After confirmation, participants paused to answer questions about the ease of creating and remembering their password, and about its perceived overall security, specifically against shoulder surfing. Participants could also create a new password if they forgot theirs. Participants who chose to create a new password during rehearsal had to immediately confirm and rehearse it. The session ended with questions allowing participants to reflect on their experience, the likelihood of using bend passwords, and further insights about what worked well and what did not.
+
+After a week, we started the second session by asking participants questions about the ease of remembering their passwords, whether they used any strategy to memorize their passwords during the week, and which accessibility features and typing styles they use on their own smartphones. Then, we gave participants as many attempts as they needed to complete 5 successful logins using each of the two passwords created in the first session. Finally, we finished the session by discussing their final thoughts and reflections regarding their likelihood of using bend passwords over PINs (in general and on flexible devices), their overall experience using BendyPass, potential user groups for bend passwords, and any proposed enhancements.
+
+We presented the two devices to participants in counterbalanced order, both among participants and between sessions with the same participant. In both sessions, after interacting with each device, participants answered device-specific questions. Besides learnability and memorability, we also evaluated the other quality components of usability [39], measuring efficiency and satisfaction in session 2, and the number of errors in both sessions.
+
+In both sessions, participants verbally answered our questions about each device, regarding the ease of creating passwords, the perceived security of both password schemes, and their opinions about bend passwords (Figure 5). All questions used 10-point Likert scales, where 1 was the least favourable.
+
+§ 4.3 DATA ANALYSIS
+
+We transcribed participants' comments and answers to open-ended questions using InqScribe (inqscribe.com). We performed qualitative analysis on open-ended answers and quantitative analysis on both multiple-choice and coded answers using RStudio (rstudio.com). Quantitative analysis included the analysis of 732 log records (participant × step × trial) using Wilcoxon signed-rank tests (Z) for numerical data and chi-square tests (χ²) of independence between variables for categorical data; we focus on reporting significant results. Time measured included time thinking about the passwords and time entering them. We used Grounded Theory [18] to code answers to interview questions. Whenever necessary, we coded answers under more than one theme, but we did not code unclear answers.
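The Wilcoxon signed-rank comparisons used throughout the findings can be illustrated with a minimal pure-Python computation of the W statistic; in practice one would use R, as the authors did, or scipy.stats.wilcoxon. The paired times below are hypothetical, not the study's data:

```python
# Minimal Wilcoxon signed-rank statistic (illustrative only).

def wilcoxon_w(x, y):
    """Return W: the smaller of the positive/negative signed-rank sums."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    # Rank the absolute differences, using average ranks for ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank across the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired training times (s): BendyPass vs. keypad app.
bend = [165, 201, 143, 188, 120, 240, 150, 172]
pin = [90, 95, 88, 102, 70, 110, 85, 92]
print(wilcoxon_w(bend, pin))  # every difference is positive, so W = 0
```

A W near zero (all differences in one direction) is then compared against the test's critical values to obtain the p-values the paper reports.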
+
+§ 4.4 PARTICIPANTS
+
+We recruited participants by snowballing, mainly through a local council of the blind and Facebook groups. Our recruitment criteria required participants to be at least 18 years old and either blind or with low vision. We held sessions either at a lab or at an office at the local council of the blind. Among our 16 participants, 10 declared they were blind, 5 declared they had low vision, and one reported having another condition. For our analysis, we grouped the latter with the blind participants because he could not see the smartphone screen. This resulted in 11 blind participants (68.7%) and 5 with low vision (31.2%), a distribution similar to previous work [15]. Most participants self-declared as male (N=10, 62.5%). Participants' ages ranged from 22 to 76 years old (M=54.31, SD=15.38). Three participants also had another impairment, related to hearing loss (N=2), attention (N=1), or the psycho-motor system (N=1), according to the World Health Organization classification [46].
+
+
+Figure 5: Distribution of Likert scale responses; the first three pairs are from session 1, the last three are from session 2. SS stands for Shoulder Surfing.
+
+Almost all participants said they use assistive apps on their smartphones (N=15, 93.8%); only 5 participants said they use a Braille display, a smaller proportion than in relevant work [15] (31.3% vs. 42.5%). Three participants had experience in studies on deformable flexible devices, though never for user authentication. Most answers to the interview matched results from the survey in prior work [15], suggesting our participants represent well the group of people with vision impairment who have access to the internet and mobile devices. All blind and most low-vision participants interacted with the smartphone via a screen reader. Only one participant used a screen magnifier. All participants returned for session 2 about a week later (M=7.28 days, Md=7, SD=0.87).
+
+§ 5 FINDINGS
+
+We discuss each of our main findings, introducing the potential and design implications learnt from studying bend passwords in use with users with vision impairment. Our reported findings combine results from observations of usability, analysis of password log files, and questionnaire responses.
+
+§ 5.1 LEARNABILITY
+
+§ 5.1.1 EASE OF CREATING
+
+Before creating their passwords, participants trained for a longer period to use bend gestures (M=165s, Md=143.5s, SD=63.94s) than to use the keypad app (M=90s, Md=91.5s, SD=30.92s) (Z=-2.97, p<.005). After training, participants took slightly longer to create their first bend password (M=59.6s) than their first PIN (M=48.8s), but the difference was not statistically significant (n.s.), as shown in Figure 6. Moreover, participants' perceived ease of creating bend passwords was not statistically significantly different from that of creating PINs (Table 1). We also found no significant differences between participants who are blind and those with low vision regarding learnability (n.s.).
+
+
+Figure 6: Creation and Login time in the first trial, in seconds. Differences are not statistically significant.
+
+§ 5.1.2 PASSWORD CREATION STRATEGIES
+
+We observed how participants created passwords and asked which strategies they used to create them (Table 2). More than half of our participants used some sort of pattern (N=9), such as mirroring gestures from one side of the device to the other (N=5). Our results are similar to those from previous work [33] with sighted participants, where patterns were the main strategy used to create passwords (44%). Although almost half of our participants said they associated their PINs with series of numbers they were familiar with (N=7), no participant reported using association as a strategy to create their bend passwords.
+
+§ 5.2 MEMORABILITY
+
+§ 5.2.1 RE-THINKING PASSWORDS
+
+Although participants created their first bend password with ease, most did not remember it. 11 participants had to re-create their bend passwords, while only 2 had to re-create their PINs, resulting in a significant difference between the number of trials needed to create memorable bend passwords (M=1.94, Md=2, SD=1) and PINs (M=1.12, Md=1, SD=0.34) (Z=-2.36, p<.01). Of the 11 participants who forgot their initial bend password, 9 forgot it at the confirmation step (81.8%) and 2 at the rehearsal step (18.2%). We attribute the initial difficulty in memorizing bend passwords to the lack of muscle memory for them, which required participants to create new memorization strategies. As P5 said, "It's fun but you have to suspend anything you know about passwords. You have to think in a new way." As participants had to confirm passwords 3 times after creating them, P13 said that having to go through three confirmations probably makes passwords easier for people to remember, "engraving them into memory", and suggested that this should be required in real life.
+
+§ 5.2.2 EASE OF RECALL
+
+In session 2, although 81.3% (N=13) of participants remembered their bend passwords, 1 participant was not able to enter it due to prototype errors. Thus, participants' login success rate was 75% (N=12) for bend passwords and 93.8% (N=15) for PINs, but a McNemar test with continuity correction found no significant difference between them. Most participants successfully entered their bend passwords and PINs in the first trial (n.s.). Although the memorability of PINs was slightly better, bend passwords had similar memorability (n.s.), even though they were a novel method for participants. Additionally, while in session 1 participants rated bend passwords significantly harder to remember than PINs (Table 1), their ratings in session 2 for the same questions were not significantly different (n.s.).
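+
+The McNemar test with continuity correction mentioned above operates on the discordant pairs of a paired binary table. A minimal sketch, assuming illustrative discordant counts (the paper reports only the marginal success rates, not the full paired table):
+
+```python
+from scipy.stats import chi2
+
+# b = participants who succeeded with the PIN but failed with the bend
+# password; c = the reverse. These counts are illustrative assumptions.
+b, c = 4, 1
+
+# McNemar's chi-square statistic with Yates' continuity correction.
+stat = (abs(b - c) - 1) ** 2 / (b + c)
+p = chi2.sf(stat, df=1)
+print(f"chi2={stat:.2f}, p={p:.3f}")
+```
+
+With so few discordant pairs, the test finds no significant difference, consistent with the result reported above.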
+
+| Question | Session | Bend Md (SD) | PIN Md (SD) | Statistics |
+|---|---|---|---|---|
+| Ease of password creation | 1 | 6.0 (2.62) | 7.5 (2.33) | Z=-0.95, p=.17 |
+| Ease of remembering | 1 | **5.5 (2.66)** | **8.0 (1.68)** | **Z=-1.88, p=.03** |
+| Ease of remembering | 2 | 7.5 (1.98) | 9.0 (2.38) | Z=-0.97, p=.17 |
+| Confidence in remembering | 1 | 7.0 (2.47) | 8.0 (1.89) | Z=-1.56, p=.06 |
+| Confidence in remembering | 2 | 8.0 (1.89) | 9.5 (2.68) | Z=-0.86, p=.20 |
+| Perceived overall security | 1 | 6.0 (2.13) | 8.0 (1.46) | Z=-1.40, p=.08 |
+| Security against shoulder surfing | 1 | 6.5 (2.37) | 7.0 (2.15) | Z=0.68, p=.75 |
+| Likelihood to use if available | 1 | 7.5 (2.83) | - | - |
+| Likelihood to use if available | 2 | 6.5 (2.69) | - | - |
+| Likelihood to use in flex. devices | 1 | 6.5 (2.49) | - | - |
+| Likelihood to use in flex. devices | 2 | 5.5 (2.53) | - | - |
+
+Table 1: User questionnaire responses to 10-point Likert scale questions, where 1 represents strongly disagree. Significant differences marked in bold.
+
+| Strategies to create passwords | Bend passwords | PINs |
+|---|---|---|
+| Pattern | 5 (9) | 4 (9) |
+| Simple combination | 5 (3) | 0 (2) |
+| Repetition | 0 (3) | 2 (6) |
+| Association | 1 | 7 |
+
+Table 2: Strategies participants reported using to create memorable passwords. Numbers in parentheses express the number of times researchers observed the use of each strategy.
+
+§ 5.2.3 CONFIDENCE TO REMEMBER
+
+Participants' confidence in remembering their passwords was affected by the number of errors they made in the rehearsal step. Those who had fewer incorrect trials rated their confidence significantly higher both for bend passwords (χ²(24, N=16) = 36.98, p = .04) and for PINs (χ²(10, N=16) = 23.43, p = .009). Also, participants who were more confident in remembering their passwords before logging in in session 2 were significantly more likely to remember their bend passwords (χ²(6, N=16) = 85, p = .01) and their PINs (χ²(5, N=16) = 85, p = .007).
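+
+The associations above were tested with chi-square statistics on cross-tabulated responses. As a minimal sketch with SciPy, using an invented 2x3 contingency table (error-count group by confidence level); the cell counts are illustrative, not the study's data:
+
+```python
+import numpy as np
+from scipy.stats import chi2_contingency
+
+# Rows: few vs. many rehearsal errors; columns: low/medium/high
+# confidence ratings. Counts are illustrative placeholders (N=16).
+table = np.array([[1, 2, 6],
+                  [4, 2, 1]])
+
+stat, p, dof, expected = chi2_contingency(table)
+print(f"chi2={stat:.2f}, dof={dof}, p={p:.3f}")
+```
+
+The study's reported degrees of freedom are larger because the full 10-point Likert cross-tabulations have more cells than this compact illustration.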
+
+§ 5.2.4 PASSWORD MEMORIZATION STRATEGIES
+
+We asked participants in session 2 whether they used any strategies to remember their bend passwords. Four (25%) said they thought about their passwords throughout the week, for a total of 7 (43.8%) who thought about them at least once. Also, while 7 (43.8%) said the methods they used to create their bend passwords were their main strategy for memorizing them, 2 (12.5%) said they did not use any strategy. Both of these participants forgot their bend passwords in session 2, and a participant who forgot her PIN also did not use a strategy to remember it. Association, a common strategy used to memorize PINs, was not used by participants to memorize bend passwords, potentially because of their three-dimensionality. However, results from session 2 indicate that, regardless of the strategy used to create passwords, maintaining them in one's memory depends on good memorization strategies, which include at least mentally rehearsing the password.
+
+§ 5.2.5 RATE OF ERRORS
+
+Our analysis of memorability considered the number of correct logins in session 2. Thus, for analysing the number of errors, we considered the errors that were not followed by a new password creation, excluding only those caused by a prototype error. Both bend passwords and PINs had the same number of incorrect entries (N=7). Similarly, the number of participants who made incorrect entries was the same: 4 for each password type (only 1 participant had an error with both bend passwords and PINs). Thus, there was no significant difference in the number of errors (n.s.).
+
+| Type | Mean Length (SD) | Mean Unique Entries (SD) |
+|---|---|---|
+| Bend password | 6.44 (0.81) | 5.81 (1.05) |
+| PIN | 6.19 (0.54) | 4.69 (1.14) |
+
+Table 3: Unique entries are the number of unique digits (PIN) and gestures (bend password) in a password.
+
+§ 5.3 EASE OF USE
+
+Participants rated the likelihood of bend passwords being used by different user groups. Most participants considered BendyPass easy to use (N=10) and liked its haptic and audio feedback (N=9). We also analysed the bends used to compose each password.
+
+§ 5.3.1 POTENTIAL USERS
+
+When asked who might like to use bend passwords, 12 (75%) participants said vision-impaired people in general. P5 said, "Certainly blind and low vision, or people with learning disabilities that make them have issues with numbers, and seniors or people with learning issues". This supports the idea that physically deformable interaction in general, and bendable interfaces in particular, are perceived to offer great potential to people with low vision, not only by researchers [15, 19, 33], but also by users themselves.
+
+§ 5.3.2 BEND PASSWORDS USED
+
+We analyzed the password characteristics and the composition of all passwords created by our participants (Table 3). Both bend passwords and PINs ranged from 6 to 8 gestures/digits, but most were equal to the minimum length of 6 required from participants. The difference in password length was not statistically significant. In contrast to prior work [33], our participants used more unique gestures than unique digits (Z=-1.95, p=.03). Every bend gesture was used at least once by at least one participant to compose a bend password. However, some gestures were more frequently used than others. The three most frequently used gestures were top right corner up (17%), top left corner up (14%), and bottom left corner up (12%), exactly the same top three single gestures as for sighted participants [33]. The least used gestures were top right corner down (6%), fold down (7%), and fold up (8%). Participants tended to prefer upwards gestures (60.2%) over downwards gestures (39.8%), confirming previous findings [26, 31, 33], even though the difference was not significant.
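+
+The length and unique-entry counts summarized in Table 3 can be computed directly from recorded password sequences. A minimal sketch, with made-up gesture labels (the prototype's actual gesture encoding is an assumption here):
+
+```python
+from collections import Counter
+
+# Illustrative passwords: a bend password is a sequence of gesture codes,
+# a PIN a sequence of digits. The labels are assumptions, not the
+# prototype's encoding.
+bend_pw = ["TR-up", "TL-up", "BL-up", "TR-up", "fold-up", "BL-up"]
+pin = "112358"
+
+print(len(bend_pw), len(set(bend_pw)))  # length and unique gestures
+print(len(pin), len(set(pin)))          # length and unique digits
+
+# Tallying gestures across a corpus of passwords yields the per-gesture
+# usage percentages reported in the text.
+freq = Counter(bend_pw)
+print(freq)
+```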
+
+§ 5.4 SATISFACTION
+
+To evaluate satisfaction from a user perspective, we studied the time needed to log in (measuring efficiency) and asked participants about their overall experience and perceived security. Finally, we also report below on perceived drawbacks and limitations.
+
+§ 5.4.1 OVERALL EXPERIENCE
+
+After the final session, we asked participants to tell us how they would describe their experience using bend passwords to a friend. Nine participants expressed positive experiences using BendyPass while only 3 described it negatively. For example, P4 said it was "fun, interesting, challenging, intriguing", while P8 said, "it was easy, [there is] a little of learning curve to know how to do the bends right, but once you got that it's easy to use, even easier than swiping the touch screen to find numbers". P7, on the other hand, said, "if errors were removed, it would be OK. Primary reason for negative comments are the errors and the fact that the surface should be [...] more responsive." The errors mentioned by participants involved non-recognition of performed gestures (N=5) or recognition of opposite directional gestures (N=1). The latter was caused by the sensors used in BendyPass, which react not only to deformation but also to pressure, which can occur when participants grip the prototype firmly.
+
+§ 5.4.2 EFFICIENCY OF LOGIN
+
+Following the methodology used in relevant literature [33], we compared participants' fastest confirmation and rehearsal times to evaluate whether their performance improved with practice. Participants took significantly less time to rehearse their PINs than to confirm them (Z=-2.97, p=.001), but were not significantly faster with bend passwords (n.s.), suggesting their bend-entry efficiency was close to its ceiling from the start. Participants took slightly less time to complete a first login with their bend passwords (M=22.4s, Md=21.5s, SD=8.48s) than with their PINs (M=35s, Md=25s, SD=21.25s) (n.s.), as shown in Figure 6. We selected the fastest login time from each participant who logged in successfully. As observed in the previous steps, participants took significantly less time in their fastest login with bend passwords (M=13s, Md=12s, SD=3.1s) than with PINs (M=18.27s, Md=16s, SD=7.71s) (Z=-2.20, p=.01).
+
+§ 5.4.3 PERCEIVED SECURITY
+
+We found no significant difference between the perceived security of bend passwords and PINs. Interestingly, when we asked participants to justify their ratings for the security of bend passwords and PINs against shoulder surfing attacks, 7 participants said bend passwords are easy to see by others, while 5 participants said PINs are easy to see. This was also one of the most common reasons participants gave for their ratings of the overall security of bend passwords (N=5), although another 4 participants said bend passwords were difficult to see. Thus, there was no consensus amongst participants on whether bend passwords are easy or hard to observe.
+
+§ 5.4.4 DRAWBACKS OF USING BENDYPASS
+
+We asked participants to point out characteristics that worked well with bend passwords, as well as aspects that should be improved. The most common disadvantage participants mentioned was having to carry an additional device (N=6). For example, P7 said "I don't like carrying extra things, I barely remember my charging cable". On the other hand, most suggested reducing the protuberance of the button (N=11) and improving the accuracy of bend password recognition (N=10), especially for folding gestures (N=9). Although more than half of the participants liked the audio feedback provided by BendyPass, at least 3 were inclined to deactivate this option. Interestingly, 7 participants liked the form factor of BendyPass while 8 suggested reducing its size, even saying it would be nice to have it as a key chain.
+
+§ 5.5 LIMITATIONS
+
+Similar to prior work [15], we recruited more blind than low-vision participants. This might be a result of a higher interest of the blind community in novel assistive technologies, but might also indicate the difficulty of classifying some people as blind or low vision. For example, one of our participants self-declared as blind, but said he uses inverted colours on his smartphone to better see the screen.
+
+Although we tested our prototypes and held 3 pilot sessions, we faced prototype issues during the study sessions, where BendyPass was not fully accurate at recognizing bend gestures, and the smartphone app did not work with all typing styles. Although we acknowledge that both the app keypad and the typing method in our study are not the same as most participants use on their own smartphones, we argue that they simulated well the learning process new users go through when using a new device or software.
+
+§ 6 DISCUSSION
+
+We presented the results of a user study on bend passwords compared to PINs with 16 participants who were blind or had low vision. We found that bend passwords were as easy to create as PINs, and participants assessed them as being as easy to remember as PINs. Participants reported being more likely to use bend passwords on a flexible device, but would also be willing to use them on a separate device for gaining access to password-secured systems and spaces. This is because bend passwords take users with low vision less time to enter than PIN passwords.
+
+§ 6.1 LEARNABILITY OF BEND PASSWORDS
+
+Explaining to participants how to use BendyPass took around 2 minutes, confirming previous findings [42] that users are able to quickly understand how to interact with deformable devices. Users may need more training with BendyPass than with the smartphone, and slightly more time to create bend passwords than PINs. We attribute this to the novelty of the paradigm as opposed to the commonly used touch-based PIN passwords. Still, our results show that the difference in time needed to create a password using bends or a PIN is not statistically significant.
+
+§ 6.2 MEMORABILITY OF BEND PASSWORDS
+
+Participants forgot their bend passwords significantly more often than their PINs. This may relate to their unfamiliarity with memorizing gestures compared to memorizing sequences of numbers. The initial difficulty in memorizing bend passwords is supported by participants' first-session ratings of the ease of remembering bend passwords, which were significantly lower than for PINs. However, ratings for the same question in session 2 did not differ significantly. This is also supported by their success rates and the number of attempts needed to complete a first successful login. Still, the lack of muscle memory for bend passwords, which underpins the association strategy for memorability, impacts the performance of bend authentication compared to PINs.
+
+Perhaps future work should explore muscle memory associations such as typing on a keyboard or playing a musical instrument. Other deformable authentication methods can be also used to support people with vision impairments using mental association for better memorability. Examples of such interactions could mix both deformation with alpha-numeric associations, such as enabling only a small number of bend gestures in a Morse code style sequence, stroking characters on a texture-changing edge, or twisting a shape-memory strip a number of times back and forth.
+
+§ 6.3 EASINESS TO USE BEND PASSWORDS
+
+Users rated bend passwords as easy to use and appropriate for people with vision impairments. Bend passwords were faster to log in with than PINs for people with low vision. This is mainly because our participants used smartphones with accessibility features, which slowed them down. In fact, research acknowledges that slow entry is one of the main limitations users dislike about PINs [15].
+
+§ 6.4 USABILITY FOR BLIND VS LOW VISION
+
+We did not find significant differences between the time participants who were blind and participants who had low vision took to learn how to use bend passwords or to create a bend password. This suggests that people with low vision can benefit as much as people who are blind from a tactile physical password.
+
+§ 6.5 OVERALL USABILITY OF BEND PASSWORDS
+
+As expected, due to users' familiarity with PINs, learning and training to use bend passwords can take longer. PINs are often created from numeric patterns that users remember, such as dates and phone numbers, which are already saved in memory prior to creating a password. In contrast, people do not usually have memorized bend sequences that they can draw on when creating a bend password. This explains the memorability difference between bend passwords and numeric passwords. On the other hand, bend passwords were faster to log in with than PINs and had a similar number of errors and similar perceived user satisfaction. Also, the deformation interaction was more accessible than touch-screen interaction, as it uses a more tactile input method and provides vibration as feedback. These learnings indicate the opportunities and limitations of bend passwords when comparing their usability with PIN passwords.
+
+§ 6.6 BEND PASSWORDS IN FUTURE DEVICES
+
+While the current prototype is a separate device, and hence would require users to carry extra hardware, we do not endorse implementing bend authentication in this form in future devices. Instead, we target expected near-future flexible phones: a substantial body of relevant work envisions that bend interaction will soon be integrated into mobile devices, and we explore bend-based authentication accordingly. We envision that bend passwords would be integrated into such future flexible smartphones, or into a flexible phone case over a rigid smartphone (e.g., [21, 34]).
+
+§ 7 CONCLUSION
+
+We explored the application of a bendable user authentication method for people with vision impairment using bend passwords. We conducted a user study with 16 people who are blind or have low vision to evaluate the learnability and memorability of bend passwords on BendyPass when compared to PINs on a smartphone. Our paper extends previous work, not only by engaging experts and participants with vision impairment, but also by challenging prior work on whether we should be designing such deformable authentication methods at all. We do not argue that adopting bend passwords is necessarily the answer to touch-screen accessibility, but explore their potential and limitations as an alternative.
+
+In this study, we gained insights from situating BendyPass in the hands of its intended users and learned about its ease of use, memorability, learnability, and user satisfaction. We found that bend passwords do not outperform touch-based interactions for people with vision impairment. Despite being as easy to learn as PINs, bend passwords are still not as easy to memorize. Bend passwords were significantly faster to enter than PINs on an iPhone (using the screen reader VoiceOver and the standard typing style), but were not rated by users to be more secure than PINs. Such findings shed light on the limitations of the bend interaction opportunity that has often been over-promised in the HCI literature.
+
+Our inquiry was to assess how people with vision impairment would use bend passwords versus the PIN passwords they already use. With accessibility applications, the key tenet is not outperformance, but providing people with multiple ways to achieve their goals in case one is ill-fitted. For example, voice commands are not faster than touch, but in some scenarios, or for some users, they are more practical, and it is useful to be able to switch between interaction techniques that do not necessarily outperform each other, e.g., mouse and keyboard. Therefore, our paper does not claim that bend outperforms PIN, or that having a clear winner is a goal; rather, it explores the usability of each method to show other researchers and designers what refinements can provide accessibility choices and create alternatives.
+
+We envision that deformability will be integrated into smartphones soon, and argue that careful considerations should be taken into account and challenged when designing such interactivity. Future work should address challenges of deformable interaction for accessibility in use, with the target user group, including association, memorability, and security against shoulder surfing attacks, beyond the initial work on this subject [33]. We plan to explore further deformable gestures, replacing bend and fold with other deformations. We also plan to integrate the prototype into a phone case connected to users' smartphones through Bluetooth. Such enhancements will allow a longitudinal study investigating the long-term usability of deformable authentication for people living with vision impairment, seeking to improve technology that currently hinders them.
+
+§ ACKNOWLEDGMENTS
+
+This work was supported and funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery grant (2017-06300), a Discovery Accelerator Supplement (2017-507935), and the Collaborative Learning in Usability Experiences Create grant (2015-465639). We thank Kim Kilpatrick and Nolan Jenikov from the Canadian Council of the Blind in Ottawa for their great help.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7570e565ed8aceea77e2733049780cd588b26b7d
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,523 @@
+# A Baseline Study of Emphasis Effects in Information Visualization
+
+Aristides Mairena*, Martin Dechant†, Carl Gutwin‡, Andy Cockburn§
+
+University of Saskatchewan
+
+University of Canterbury
+
+## Abstract
+
+Emphasis effects - visual changes that make certain elements more prominent - are commonly used in information visualization to draw the user's attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and how to ensure that the user's experience with one effect will be similar to that with another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses in our first study show that there are significant differences between different kinds of effects and between levels. Our second study tested the effects in realistic visualizations taken from the MASSVIS dataset, and saw similar results. We developed a simple predictive model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. Our studies and empirical models provide new information for designers who want to understand how emphasis effects will be perceived by users.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Perception; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+## 1 INTRODUCTION
+
+Emphasis effects are visual changes that make certain elements more prominent, and are commonly used in information visualization to draw the user's attention or to indicate importance. Emphasizing important data points is a common method used by designers to support the user in gradually exploring the data, or, in narrative visualization [25], when known aspects of the data are presented to users. Effective emphasis alters a data point's visual features [4, 22] so that a viewer's attention will be guided to the region of interest [62]. A wide variety of visual variables have been considered as emphasis effects [17, 22, 23, 60]. For example, a visualization can use colour to emphasize certain data points: differences in the visual prominence of the selected data points will be achieved through variation in colour, a visual variable known to guide attention [24].
+
+Although theoretical frameworks of emphasis exist that link visually diverse emphasis effects through the idea of visual prominence compared to background elements [22], we still know little about how emphasis effects will be perceived by users. In particular, questions remain about what visual effects, and what magnitudes of those effects, will be most quickly recognized as emphasis by the viewer of a visualization; in addition, we know little about how different effects compare and what levels of two different effects will lead to similar user experiences.
+
+Many metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision, which may not apply to visualization design. These models are generally constructed using large and visually isolated stimuli under optimal conditions, and so although they are effective at predicting perceptibility in isolation, they do not work well with distractors, nor with even minor changes to the visual field [3, 59].
+
+Visualizations, in contrast, often consist of large numbers of a variety of marks viewed using a wide range of devices and environments - and designers may use a variety of techniques to emphasize data points. Current guidelines do not address how different emphasis effects are perceived by viewers in visualizations, or provide an equivalence metric for perceived emphasis so designers can choose effects correctly. Without effective models of visual prominence in visualizations, designers lack information on how different visual effects compare, and do not know what magnitude of effect to use to appropriately guide a viewer's attention to an area of interest.
+
+To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. It is important to note that the three emphasis effects are qualitatively different - for example, colour and size manipulate just the emphasized element, whereas blur/focus manipulates everything but the emphasized element - and so our goal is not simply to identify which effect is most perceivable, but rather to establish how the effects compare across a range of intensity levels.
+
+To do this, our first study established a baseline of perceived visual prominence by showing participants artificial scatterplot visualizations with one element highlighted, and measuring the time to their first fixation on the element (through eye tracking), their time to click on the element, and their subjective rating of visual prominence. We then built a model from the first study's data using logarithmic curves, which can be used to predict the relationship between the different emphasis effects. Our second study then examined perceived emphasis in a more realistic context, by looking at visual prominence in complex visualizations taken from real-world applications (the MASSVIS dataset [7]). We evaluated our model by using it to predict the results of the second study for three different measures; the model was accurate, with R² values as high as 0.96.
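+
+The predictive model described above fits logarithmic curves to the measured prominence responses across the eight strength levels. As a minimal sketch of that fitting step with SciPy, using synthetic response values in place of the study data:
+
+```python
+import numpy as np
+from scipy.optimize import curve_fit
+
+def log_model(x, a, b):
+    """Logarithmic emphasis model: y = a*ln(x) + b."""
+    return a * np.log(x) + b
+
+levels = np.arange(1, 9)              # the eight strength levels
+ratings = 2.0 * np.log(levels) + 1.0  # synthetic stand-in responses
+
+(a, b), _ = curve_fit(log_model, levels, ratings)
+print(f"a={a:.2f}, b={b:.2f}")
+```
+
+A fitted curve of this form lets a designer interpolate the expected prominence of intermediate effect magnitudes for a given emphasis effect.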
+
+Our two studies provide new findings about how people perceive three emphasis effects and their magnitudes in visualizations:
+
+- There were significant differences in both studies for emphasis effect: blur/focus was most prominent, and colour least prominent, with size in between depending on magnitude.
+
+- There were also significant differences between the magnitude levels for all effects, providing a graduated way to increase or decrease perceived prominence.
+
+---
+
+*e-mail: aristides.mairena@usask.ca
+
+†e-mail: martin.dechant@usask.ca
+
+‡e-mail: gutwin@cs.usask.ca
+
+§e-mail: andy@cosc.canterbury.ac.nz
+
+Graphics Interface Conference 2020, 28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+---
+
+- A predictive model based on logarithmic curves fit the Study 1 data well, and was accurate at predicting perceived emphasis in Study 2 (particularly in terms of subjective ratings).
+
+It is important to note that our goal in the study was not to conduct a "shoot-out" between all possible types of emphasis, but rather to determine whether different effects are perceived differently, and to provide initial empirical evidence about how effects compare. Our results provide an initial empirical foundation for understanding how visual effects operate and are experienced by viewers when used for emphasis in visualizations; and although more work is needed to refine and broaden the models, our work provides useful new information for designers who want to control how emphasis effects will be perceived by users.
+
+## 2 RELATED WORK
+
+Emphasis is essential to InfoVis and is used to highlight regions of interest in a visualization. While there is a large body of research in this domain, much of the work seeks to understand how the underlying perceptual system operates - limiting the possibility of extracting design lessons from low-level data and findings. We survey current empirical studies of perception from visualization and vision science to inform our work.
+
+### 2.1 Visual Attention and Graphical Perception
+
+There are various theories and computational models for selective visual attention, but in general, most theories agree that attention operates by alternately selecting "features" from a number of incoming subsets of sensory data for further processing [51]. Early work suggests a two-stage process: first, a bottom-up, pre-attentive stage, which is automatic and independent of a task [51], where attention is guided to the most salient items in a scene [70]; followed by a second, slower, top-down stage that is driven by current tasks and goals [34, 50, 62]. Within this model, the conjunction of basic features (such as colour and orientation) stems from "binding" features together (known as Feature Integration Theory [62]).
+
+A second theory, the Guided Search Theory, extends the two-stage process by proposing that attention can be biased toward targets of interest (e.g., a user looking for a red circle) in the top-down phase by encoding particular visual characteristics [69]: for example, assigning a higher weight to the red colour. Recently-proposed attention theories, however, challenge the two-stage model suggesting there is also bias to prioritize items that have been previously selected, thus proposing a three-stage model: current goals, selection history, and physical salience (bottom-up attention) [1].
+
+Attention is commonly examined through visual search experiments which usually ask participants to determine whether a target is present in a scene with distractors; reaction times (RT) and accuracy are used to model the relationship between the response and the number of distractors. Frequently, the term "popout" is used to describe a target item that is easily identified due to its unique visual properties in these searches. While the noticeability of specific visual characteristics such as colour and size cues have also been examined from an attention perspective [21, 44], a related area of research called graphical perception takes a more in-depth look at the suitability of different visual channels, and at how choices in visual variables for encoding data affect visualization effectiveness [15].
+
+Graphical perception studies have explored how different visual channels might support a variety of tasks for visualization [52]. Bertin was among the first to study the ability of visual variables to encode information, suggesting that variations in individual visual variables are an effective tool for encoding information and achieving noticeability [4]. In particular, Bertin suggests that selective visual variables, such as position, size, colour hue, or texture, allow viewers to immediately detect differences.
+
+Following Bertin, researchers in multiple disciplines such as cartography [42], statistics [15], and computer science [43] have conducted human-subjects experiments and have derived rankings of visual variables for nominal, ordinal, or quantitative data [15, 42, 43, 57]. In addition to comparing the effectiveness of alternative visual variables for visualization, researchers have investigated how other design factors such as aspect ratios [13], chart sizes [26], and animations [63] influence the effectiveness of charts.
+
+Graphical perception studies have focused on measuring how the visual encoding of variables affects the accuracy of estimating and understanding values of the underlying data; insights from studies in graphical perception, however, can also be applied to manipulating data points in a visualization to guide a viewer's attention to an area of particular importance.
+
+### 2.2 Emphasis in Visualization
+
+Emphasis is essential to information visualization, supporting users as they explore data - for instance, through highlighting areas of interest when brushing and linking across multiple views to emphasize relationships [35]. Emphasis is also important when presenting known aspects of data to a user through narrative visualization [54]. The goal of emphasis is to manipulate the visual features of an important data point to make it visually prominent, such that a viewer's bottom-up attention is attracted to the point [22].
+
+While distortion and magnification techniques - which create emphasis effects by simultaneously manipulating a visual variable's size and positioning - have been a focus of infovis researchers for creating emphasis [11, 32, 39], other techniques such as blur [36], motion [27], and flicker [65] have also been studied. Given the varied range of emphasis techniques, Hall et al. suggested a categorization of emphasis effects into two main groups based on how visuals change over time: time-invariant and time-variant effects [22].
+
+Time-invariant emphasis effects such as highlighting (colouring a data point in a visualization), and blurring (where a data point is shown in focus while the other elements are blurred) do not change with time, and do not use features such as fly-in, fade-in or other transitions [22]. In contrast, time-variant emphasis effects such as motion, flickering, or zooming involve time variations, commonly achieved through animations that alter the appearance of a data point [22].
+
+While there are many ways in which a data point in a visualization can be emphasized, all visual techniques generate emphasis by making the focus mark (i.e., the target) visually more prominent by making it sufficiently dissimilar from the other elements (i.e., the non-target marks) in at least one visual channel [22]. For example, blur/focus, magnification, and highlighting create emphasis by making one data point more visually prominent than others (e.g., sharper, bigger, or a different colour).
+
+There are three main properties of a visual channel that could influence the effectiveness of the visual prominence of an emphasized target mark against the set of non-emphasized marks: the similarity between targets and non-targets, the similarity of all non-targets, and the channel offset (i.e., the lowest value of the non-targets) [64]. Similarity theory shows that visual search efficiency decreases with increased target/non-target similarity and with decreased similarity between the non-targets [18].
+
+Another theory, the relational account of attention, suggests that the perceived similarity between targets and non-targets can be modeled by the magnitude of a vector in feature space pointing from the target to the closest non-target [2]. If users are given a feature direction in a visual search task (e.g., find the brightest or largest), attention will be guided to the mark that differs in the given direction from the other marks. In this theory, however, non-target similarity does not influence the visual prominence of a target.
+
+Findings from classic psychophysics and visual search experiments, however, cannot always be applied directly to data visualization. Simple changes such as adding links between dots to simulate a node-link diagram, or changes in contrast due to background luminance, have been shown to have considerable effects on the results of prior experiments [3, 59]. These results reinforce the need for empirical evaluations of visualizations to validate theory and evaluate real-world visualization applications.
+
+### 2.3 Evaluations of Perception in Visualization
+
+Evaluations of perception in visualization have focused on understanding the details of integral and separable channels [58] and the interactions between separable channels. Smart and Szafir found that separability among shape, colour, and size perception functions asymmetrically, with shape found to affect the perception of size and colour more strongly compared to size's or colour's effect on shape perception [58]. Other studies have shown that size perception is biased by specific hues, and that quantity estimation in visualizations is affected by both size and colour [14, 16].
+
+Research on visual channels in visualization has largely categorized channels with terms such as "fully separable" and "no significant interference" [45, 70]. Visualizations that utilize these separable channels can be designed by encoding data with visual variables that are known to "pop out". In a review of visual popout, Healey identified sixteen different visual variables or features that are known to pop out [24], including hue, size, orientation, and luminance.
+
+Scatterplots are one of the most effective visualizations for visual judgments due to data points being positioned along a common scale [25]. Several studies have explicitly explored graphical perception in scatterplots, with many recent techniques being developed to automate scatterplot design [12], and to predict perceptual attributes that may affect scatterplot analysis such as similarity or separability. However, these studies and techniques primarily focus on analyses over single-channel features for scatterplot design to improve legibility or its suitability for data comparison [19].
+
+Eye-tracking evaluations are a popular and effective tool for understanding how users view and visually explore visualizations [5, 6]. For example, eye-tracking has been used to understand how different tasks and visual search strategies affect cognitive processes through fixation patterns [46, 47], and has also been used to evaluate specific visualization types [10, 29, 30], for comparing multiple types of visualizations [20], and for evaluating decision making and interaction in visualization [6, 33].
+
+Free-viewing is a common technique for evaluating human perception of visual stimuli. Participants are not given a task and are instructed to freely look around the image, which avoids task-dependent effects and peripheral-vision effects. As some attention theories suggest that attention can be guided by a high-level task [1, 70], free-viewing allows attention to be guided by image elements in a bottom-up manner. This assumption has guided researchers to the use of free-viewing for collecting ground truth data for evaluating saliency and attention in visualizations.
+
+However, despite the extensive body of research from vision science on graphical perception, prior research has focused on evaluating factors that may affect the visual prominence of a specific emphasis effect [64], or on empirically ranking visual variables for encoding data [15, 26]; few guidelines discuss the issue of how different emphasis effects are perceived by viewers in visualizations, or consider issues of equivalence for perceived emphasis. Therefore, in the evaluations described next, we set out to determine the viewer's perception of visual prominence, and the effectiveness of a variety of emphasis effects at a wide range of intensity levels.
+
+## 3 GENERAL STUDY METHODS
+
+Data visualizations are used both to reveal patterns in data through exploration, and to communicate specific information to a viewer. When building visualizations for communication, a designer may need to draw a user's attention to a specific data point in order to better reveal the narrative focus of the visualization, and this can effectively be done by increasing the perceptual difference in the visual variables of the underlying data.
+
+In the following two studies, we experimentally evaluate how specific emphasis effects are experienced by a viewer. Our first study was designed to determine the baseline visual prominence of eight levels of three emphasis effects using different visual variables (blur/focus, colour, and size). In simple scatterplot visualizations, we visually emphasized one data element, and gathered eye-tracking data, mouse clicks, and subjective ratings of visual prominence. Our second study built on the first; it used a similar paradigm but increased the complexity of the visualizations by using a subset of the MASSVIS dataset - a repository of static data visualizations obtained from a variety of publicly-available online sources intended for a wide audience [7]. In addition, we developed predictive models from the first study's data, and used these to predict the results of the second study.
+
+Our theoretical starting point for these studies was the mathematical framework of emphasis effects in data visualizations developed by Hall et al., in which visually diverse emphasis effects can be linked through the idea of visual prominence compared to background elements [22]. Our first study extends this previous work to determine the visual prominence of emphasis effects through eye-tracking metrics, click data, and subjective ratings. Using eye movement data makes it possible to examine which areas of a visualization viewers attend to and how their attention can be guided by applying emphasis effects. Combining eye tracking, interaction logs, and subjective methods allowed us to collect a more diverse set of data and to analyze how participants' actions were guided by their perception of the different effects. This rich data allows us to better understand how users perceive commonly-used effects that designers can use to emphasize a particular element in a visualization.
+
+### 3.1 Emphasis Effects and Levels
+
+Many modern visualization tools and libraries use a wide range of emphasis techniques. For example, Chart.js, a commonly-used visualization library for the web, increases a mark's size when a user clicks on it to generate emphasis, while Tableau uses a combination of blur/focus and size to emphasize a mark. Based on an informal survey of visualization tools, we chose three visual variables for our study - colour, blur/focus, and size - that are commonly used to provide emphasis in many different contexts. While other common techniques exist (such as border highlighting), the variables we chose are known to be detected and processed in a pre-attentive way [24].
+
+- Colour. Emphasizing an element using colour means changing the hue of the data element to be different from the standard element colour; colour is well known to "pop out" when there is adequate difference between the highlighted item and the other elements, and colour change is widely used to indicate importance.
+
+- Size. Emphasizing an element using size means increasing the area of the data element such that it is bigger than other elements. Size also pops out, and is used in several visualization tools for interactive highlighting.
+
+- Blur/Focus. Emphasizing an element using blur/focus means applying a blur filter (e.g., Gaussian blur) to all of the elements in the visualization except for the emphasized element (which remains sharp). This effect is therefore qualitatively different from colour and size because it affects a much larger fraction of the overall view.
+
+For each type of emphasis effect, we chose several levels of the visual variable so that we could test the effect at different levels of magnitude (eight levels for Study 1, and three levels for Study 2). We sampled mark sizes, colour differences, and blur strength along increasing levels of difference between the target and the distractor - we call these levels 'magnitude of difference'. For some of the visual variables, the magnitude of difference range was constrained at both ends (e.g., there is a fixed range of hues between red and blue); for other variables, such as blur or size, the range was constrained only at one end (e.g., blur/focus and size start from the sharpness and size of the distractors and range up to an arbitrary upper end).
+
+
+
+Figure 1: Visual variables and magnitude of difference levels. Rows 1-3: Colour, Blur/Focus, Size (distractors upper, targets lower).
+
+It is important to note that the magnitude scales for each variable are different, since we do not have a way to translate perceptual equivalency across effects (investigating this equivalency is one of the goals of our studies). For colour, we chose eight magnitude levels using a colour difference metric that normalizes the colour space to provide a closer fit between perceptual and geometric differences between colours [49]. $\Delta E$ is a metric devised to understand and measure how the human eye perceives colour difference, where a difference of 2.3 is roughly equal to one Just Noticeable Difference (JND) [55]. By utilizing $\Delta E$, we can more accurately compare a wider range of colours, utilizing all the colours of a colour space to compare differences in visual perception. We use the current $\Delta E$ standard, CIEDE2000 [56], as our primary colour difference metric, which adds corrections to account for lightness, chroma, and hue. For the colour levels used in the first study, we chose eight fixed colour differences (i.e., the difference between emphasized and non-emphasized elements) ranging from $\Delta E = 10$ to $\Delta E = 45$ (see Figure 1). The empirical results we describe below confirm that the increasing $\Delta E$ values did result in increasing perceptibility of the emphasized data element (e.g., see Fig 4).
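As an illustration of what a $\Delta E$ value represents, the sketch below computes the older CIE76 formulation (a Euclidean distance in CIELAB space); the study itself used CIEDE2000, whose lightness, chroma, and hue corrections are omitted here, and the colour values are hypothetical.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (CIE76 Delta-E).

    A difference of ~2.3 is roughly one Just Noticeable Difference;
    CIEDE2000 (used in the study) adds further perceptual corrections.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical distractor/target pair in L*, a*, b* coordinates:
distractor = (50.0, 0.0, 0.0)
target = (50.0, 6.0, 8.0)
print(delta_e_cie76(distractor, target))  # -> 10.0
```

A target at this distance from the distractors would sit at the low end of the study's range ($\Delta E$ 10 to 45).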
+
+For size, we chose eight fixed size differences (geometric difference in mark area between emphasized and non-emphasized content) from 25% to 200%. As shown in Figure 1, the size differences indicate area rather than diameter. Since previous research has determined that perceived size can differ from geometric size, we calculated the perceptual size difference for each level and used the perceptual value in our analysis (again, we note that we do not currently have a correspondence between magnitude levels for different variables, and so the levels that we chose simply provide a range within which we can gather empirical evidence about perception). We used the equation $\text{Perceived size} = \text{Actual size}^{0.86}$, where actual size signifies the area of the object, as suggested by previous work [38] (see Fig 1).
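The perceptual adjustment above amounts to a one-line power law; the sketch below applies it to an illustrative area ratio (the exponent 0.86 comes from the cited prior work [38], the example values do not).

```python
def perceived_size(actual_area):
    """Power law from prior work [38]: perceived size is the
    geometric area raised to the 0.86 power."""
    return actual_area ** 0.86

# A target with 200% more area than a distractor of area 100
# is perceived as only ~2.6x larger, not 3x larger:
ratio = perceived_size(300.0) / perceived_size(100.0)
print(round(ratio, 2))  # -> 2.57
```

This compression is why the adjusted (perceptual) levels, rather than the raw area differences, were used in the analysis.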
+
+For blur/focus, we do not have a perceptual difference metric similar to colour's $\Delta E$ or to perceptual size as described above. Therefore, for blur/focus, we simply chose levels that cover a wide range of perceived prominence for all targets. We chose eight different blur intensities (applied to the non-emphasized areas of a visualization) implemented using GIMP's Gaussian Blur function, with blur radius ranging from 1 to 8. Examples of emphasized targets and corresponding distractors are shown in Figure 1.
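To illustrate how a blur radius maps onto the filter applied to non-targets, the sketch below builds a normalized 1-D Gaussian kernel for a given radius. The sigma-from-radius convention here is an assumption for illustration; GIMP's exact internal mapping differs, but the principle - larger radii spread weight over more neighbouring pixels - is the same.

```python
import math

def gaussian_kernel_1d(radius, sigma=None):
    """Normalized 1-D Gaussian kernel of width 2*radius + 1.

    Sigma defaults to radius/2 (an illustrative convention, not
    GIMP's exact mapping).
    """
    sigma = sigma if sigma is not None else max(radius / 2.0, 0.5)
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# The study's radii 1-8: radius 8 averages over far more pixels.
print(len(gaussian_kernel_1d(1)), len(gaussian_kernel_1d(8)))  # -> 3 17
```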
+
+Our first study measured the baseline perceptibility of the three emphasis effects at each effect's eight different magnitude levels, using artificial static scatterplot visualizations rendered using Chart.js ${}^{1}$ . Scatterplots were rendered on a white background using one-pixel gray axes. The second study used the same three effects, but only three of the eight levels; we used the visual variable to manipulate elements in visualizations taken from the MASSVIS dataset.
+
+### 3.2 Apparatus
+
+To record eye movement and interaction data we used an SMI RED-m eye tracker running at 60 Hz on a Dell 24-inch monitor (screen resolution of 1920 × 1080) connected to a Windows 10 PC. The viewing distance was approximately 60 cm (Figure 2). Gaze data was recorded using SMI Experiment Center and analyzed with SMI BeGaze software. Users' heads were not fixed, but they were instructed to avoid unnecessary head movements. The experiment was conducted in an indoor laboratory with normal lighting conditions. All questionnaire data was collected through web-based forms.
+
+
+
+Figure 2: Setup and visualization graphic presented to participant for one trial.
+
+## 4 STUDY 1: ESTABLISHING A BASELINE FOR PERCEIVED EMPHASIS
+
+### 4.1 Participants
+
+Twenty-one participants were recruited from the local university pool. We excluded three participants from our analysis either for self-reporting a colour vision deficiency, or for high eye-tracking deviation; this left eighteen people (7 male, 11 female) who were given a \$10 honorarium for their participation. The average age of the participants was 26 (SD 4.5). All participants continuing to the study reported normal or corrected-to-normal vision and no colour-vision deficiencies, and all were experienced with mouse-and-windows applications (10 hrs/wk). Six participants reported previous experience with information visualizations from university courses.
+
+### 4.2 Study Procedure
+
+Participants completed informed consent forms and demographic questionnaires. Participants then completed a colour vision test: we checked for colour-vision deficiency using ten Ishihara test plates [31]. Next, we used the five-point calibration procedure from the SMI experimental suite to calibrate the eye tracker. Once eye-tracker calibration was complete, participants carried out a series of trials with our scatterplot visualizations. The instructions given to participants were to visually explore each visualization and click on the element they felt was most emphasized. The monitor was blanked after each trial (after the participant clicked on an element) and the study software then asked the participant to rate the perceived visual prominence of the target mark on a 1-7 scale. After participants provided their subjective rating, the mouse position was re-centered and the next trial began.
+
+To prevent learning effects and to account for attention theories that suggest that visual attention can be guided by previously seen targets [1], all visual stimuli were presented to each participant in a random order. Each visualization contained one target mark (an emphasized stimulus) and twenty randomly-placed distractor marks, avoiding overlaps. While spatial distance between distractor marks and the target can influence colour difference perceptions [9], we elected to construct our scatterplots with variable element spacing to increase the visual complexity of the stimuli for increased ecological validity, while ensuring distractors and targets avoided overlaps. The three emphasis effect types were presented at their 8 magnitude-of-difference levels, and each emphasis level was presented 5 times. Each target maintained the same appearance for each of the 5 trials of a level, but changed location. Across all levels and locations, each visual variable was presented in 40 trials; all participants completed the 120 trials of the study. Our study setup ensured that targets were located at a maximum of 18° viewing angle from the centre of the screen (a previous study shows that participants are able to find targets at angles of 18° at least 75% of the time in under 240 ms [21]). Our target positioning and the free-viewing method used in the study meant that participants were always able to keep targets close to their central vision. We changed the target's location to evaluate the visual variables at multiple locations within a visualization, while ensuring targets would remain visible in a participant's visual field.
+
+### 4.3 Study Design and Analysis
+
+Our goals for the study were to gather empirical data about perceptibility for the different visual variables and magnitude levels, and to use that data for a predictive model. To investigate differences, we used a repeated-measures design with two factors:
+
+- Emphasis Effect (blur/focus, colour, size)
+
+- Magnitude of Difference (levels 1-8)
+
+Because magnitude levels are specific to each visual variable (e.g., the scale for colour difference is independent of the scale for size or blur), the levels are intended only to test increasing magnitudes for each effect. The ANOVA analysis for Magnitude therefore investigates whether perception of the visual variable is affected by each variable's increasing magnitude.
+
+We used four dependent measures that provide different aspects of the user's experience with emphasis: first, we tracked time to eye fixation on the target (which provides an indication of early visual attention); second, we recorded the time to the user's mouse click on the target (which measures the user's conscious decision about emphasis); third, we recorded the user's total fixation time on the target (to determine whether stronger emphasis leads to longer fixation); and fourth, we asked for the user's subjective rating of the target's emphasis (which provides a more detailed measure of the user's conscious decision).
+
+We used our analysis results to explore relative differences between the emphasis effects, and to fit curves to our empirical data in order to develop a predictive model.
+
+## 5 STUDY 1 RESULTS
+
+### 5.1 Time to Target Fixation, Time to Target Click, and Fixation Times
+
+We analyzed differences between emphasis effect and magnitude of difference on participants' time to target fixation, time to target click, and fixation time in an Area of Interest (AOI) surrounding the emphasized visual target. We report effect sizes for significant RM-ANOVA results as generalized eta-squared $\eta^2$ (considering .01 small, .06 medium, and > .14 large [41]). For all follow-up tests involving multiple comparisons, the Holm correction was used.
+
+Time to Target Fixation. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,34} = 17.73$, $p < 0.001$, $\eta^2 = 0.07$) and Magnitude of Difference ($F_{7,119} = 8.23$, $p < 0.001$, $\eta^2 = 0.19$) on time to target fixation, with no interaction between the factors ($F_{14,238} = 2.71$, $p = 0.27$). These data are shown in Fig 3; note that levels for size are adjusted in the charts to show perceptual size (but for analysis, size levels were mapped to the standard 1-8 scale). Across all Magnitudes, participants fixated on targets fastest in the Blur/Focus condition (828 ms), followed by Size (913 ms) and Colour (1242 ms). Post-hoc t-tests showed significant ($p < 0.01$) differences between Colour $\rightarrow$ Blur and Colour $\rightarrow$ Size. Across all emphasis effects, time to target fixation was fastest at magnitude 8 (613 ms), and slowest at magnitude 1 (1733 ms). A similar post-hoc t-test was applied for pairs of magnitude differences and showed significant differences for $1 \rightarrow 2\text{-}8$ ($p < 0.001$), and $3 \rightarrow 7$ ($p < 0.05$).
+
+---
+
+${}^{1}$ https://www.chartjs.org/
+
+---
+
+
+
+Figure 3: Time to Target Fixation. Empirical means (solid lines) and log curves (dashed lines). $R^2$ values for logarithmic curves: blur = 0.45, colour = 0.60, and size = 0.87. Levels for size are adjusted in the chart to show perceptual size rather than geometric size.
+
+Time to Target Click. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,34} = 40.99$, $p < 0.001$, $\eta^2 = 0.24$) and Magnitude of Difference ($F_{7,119} = 56.45$, $p < 0.001$, $\eta^2 = 0.45$) on time to target click, and an interaction between the factors ($F_{14,238} = 2.68$, $p < 0.01$, $\eta^2 = 0.08$). These data are illustrated in Fig 4. Participants clicked on focused targets fastest in the Blur condition (2051 ms), followed by Size (2141 ms) and Colour (2882 ms). Post-hoc t-tests again showed significant ($p < 0.01$) differences between Colour $\rightarrow$ Blur and Colour $\rightarrow$ Size. Averaged across all emphasis effects, time to click was fastest at magnitude 7 (1791 ms), and slowest at magnitude 1 (3748 ms).
+
+Total Target Fixation Time. RM-ANOVA showed a significant main effect of Magnitude of Difference ($F_{7,119} = 6.65$, $p < 0.001$, $\eta^2 = 0.12$) on fixation time, but no difference between Emphasis Effects ($F_{2,34} = 2.76$, $p = 0.08$). Averaged across magnitude, total fixation time was similar among the emphasis effects (1990 ms for blur/focus; 2260 ms for both size and colour). Averaged across all effects, a Magnitude of 7 had the longest fixation time at 2470 ms, while a Magnitude of 1 had the least at 1913 ms. Post-hoc t-tests showed significant (all $p < 0.05$) differences on the pairs $1 \rightarrow 5$, $1 \rightarrow 7$, $3 \rightarrow 7$, and $7 \rightarrow 8$.
+
+### 5.2 Subjective Perception of Visual Prominence
+
+After the presentation of each visualization, participants were asked to rate how visually prominent the emphasized data point appeared to them. Mean response scores are shown in Figure 5. We used the Aligned Rank Transform [68] with the ARTool package in R to enable analysis of the subjective prominence responses using RM-ANOVA. For subjective ratings of perceived emphasis there were main effects of Emphasis Effect ($F_{2,408} = 56.38$, $p < 0.001$) and Magnitude of Difference ($F_{7,408} = 24.98$, $p < 0.001$), with no interaction ($F_{14,408} = 1.43$, $p = 0.13$). Results from these analyses follow those from Time to Click: sharp objects in the Focus/Blur emphasis condition were, on average, perceived as most visually prominent, followed by Size and Colour, with perceived visual prominence increasing with the Magnitude of Difference.
+
+
+
+Figure 4: Time to Target Click. Empirical means (solid lines) and log curves (dashed lines). $R^2$ values for logarithmic curves: blur = 0.68, colour = 0.90, and size = 0.96. Levels for size are adjusted in the chart to show perceptual size rather than geometric size.
+
+### 5.3 Re-assessing Perceptual Levels for Colour and Size
+
+The magnitude levels chosen for colour used the $\Delta E$ scale, which is intended to provide perceptually-equal magnitude differences; similarly, the (adjusted) levels for size also provide linear magnitude changes in perceptual space. This means that we would expect a linear change in perceptual response to the changes in colour and size (in Figures 3, 4, and 5). However, these charts show curves for both colour and size, not linear relationships. This suggests that for visualization tasks, the perceptual scales for colour and size may not capture the diminishing gains in emphasis as the effect increases in magnitude.
+
+### 5.4 Participant Preferences and Comments
+
+At the end of the study session we asked participants to state which emphasis effect they felt was the most visually prominent and which was the least prominent, and to provide further comments on their responses. Overall, focus/blur was seen as most prominent (seven participants voted for blur/focus, six for size, and four for colour). One participant stated that no one effect stood out as most prominent. Participant comments for the three emphasis effects reflect the empirical findings, favouring blur/focus. One participant reported, "[In focus/blur] other data points were very blurry and hard to distinguish so the clear one stood out more than if the colour were different or the size were different (i.e. could only focus on the emphasized one, compared to the other types where you could still view the non-emphasized points)". Another participant stated "[blur/focus] clearly hid the other circles". Participants who favoured size reported that size may be easier for quick comparisons; one remarked "It is easier for the eye to visualize a bigger/smaller size in comparison to other dots vs trying to see a colour difference of a similar size dot".
+
+### 5.5 Building an Initial Equivalence Model
+
+We used the raw data from Study 1 to build initial predictive models of time to target fixation, time to click, and subjective rating of emphasis - and although more data will be needed to refine the predictions, we are able to capture some of the main differences between the three emphasis effects that we examined. Our models are simple functions fit to the raw empirical data; we use logarithmic functions because they are commonly used to describe human performance in signal-detection and perceptual studies [67]. We fit the functions to the data using R (`lm(mean ~ log(magnitude of difference))`); we could then use R's `predict` function to get predicted values. The fitted logarithmic curves for time to target fixation, time to click, and subjective ratings are shown in Figures 3, 4, and 5. Captions for these figures also state the $R^2$ values for the accuracy of the fitted functions to the data: for time to fixation the curve was only moderately accurate, but for time to click and subjective ratings, the accuracy was much higher.
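The fitting step can be reproduced outside R as an ordinary least-squares fit of $y = a + b\,\ln(\text{level})$. The sketch below mirrors the form of the `lm(mean ~ log(...))` call; the per-level means in the example are placeholders, not the study's data.

```python
import math

def fit_log_curve(levels, means):
    """Least-squares fit of mean = a + b * ln(level), mirroring
    the form of R's lm(mean ~ log(magnitude)). Returns (a, b)."""
    xs = [math.log(x) for x in levels]
    n = len(xs)
    mx, my = sum(xs) / n, sum(means) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, means))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def predict_log(a, b, level):
    """Predicted response at a given magnitude level."""
    return a + b * math.log(level)

# Placeholder time-to-click means for levels 1-8 (illustrative only):
levels = list(range(1, 9))
means = [3700, 3000, 2600, 2350, 2150, 2000, 1900, 1800]
a, b = fit_log_curve(levels, means)
print(round(predict_log(a, b, 4)))
```

A negative slope `b` corresponds to the pattern in Figures 3 and 4, where response times fall as magnitude increases.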
+
+
+
+Figure 5: Perceived Prominence of Emphasis Effects. Empirical means (solid lines) and log curve (dashed lines). $R^2$ values for logarithmic curves: blur = 0.80, colour = 0.97, and size = 0.98.
+
+The logarithmic curves provide a simple model that allows investigation of equivalence between the three effects. For all three measures, the models allow us to observe some main features of the relationships: first, colour is consistently less perceptible than the other two effects, both in terms of performance data and subjective ratings; second, size and blur/focus are very similar at level 3 and above of both performance measures, but at levels 1 and 2, size is somewhat weaker; third, size and blur/focus are more clearly separated in subjective ratings, with clear differences up to level 5.
+
+These models, once validated, can allow simple calculation of equivalence between effects. As an example of how the calculation works, consider a scenario where a designer needs to change from a blur/focus emphasis effect to one that uses colour; interpolation of the curves of Figure 4 indicate that to translate the perceived emphasis of level 1 of blur/focus, a designer would need to use a colour effect of approximately level 7. However, before we can consider using the models for equivalence, we need to verify that they are robust enough to work with other visualizations. We do this by predicting data from Study 2 with the models developed from Study 1, as described below.
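The equivalence calculation described above amounts to inverting one fitted log curve at the prominence value predicted by another. A minimal sketch, using hypothetical coefficients (not the paper's fitted values):

```python
import math

def equivalent_level(level_a, coef_a, coef_b):
    """Given log-curve coefficients (a, b) for two emphasis effects,
    find the magnitude level of effect B whose predicted prominence
    matches effect A at level_a.
    Solves a_B + b_B * log(l) = a_A + b_A * log(level_a) for l."""
    a_a, b_a = coef_a
    a_b, b_b = coef_b
    target = a_a + b_a * math.log(level_a)   # prominence of A at level_a
    return math.exp((target - a_b) / b_b)    # level of B with same prominence

# Hypothetical coefficients: blur starts out far more prominent.
blur = (3.0, 0.8)
colour = (1.0, 1.0)
level = equivalent_level(1, blur, colour)
```

With these made-up coefficients, blur/focus at level 1 maps to a colour level of roughly 7.4, echoing the shape of the paper's level-1-to-level-7 example.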
+
+## 6 STUDY 2: PERCEPTION OF EMPHASIS IN COMPLEX VISUALIZATIONS
+
+In contrast to the scatterplots used in Study 1, many visualizations include other visual factors such as background graphics, labels, titles, annotations and other embellishments that may affect how a user's attention is guided and ultimately how an emphasis effect is perceived. Therefore, we need to understand how users perceive emphasis effects in more complex visualizations. We designed our study following a similar method to Study 1, but evaluated emphasis effects in complex, real-world visualization graphics from the MASSVIS database [7].
+
+### 6.1 Image Data
+
+As the emphasis effects we are studying are not particularly targeted towards a specific visualization type, we chose the MASSVIS database [7] as the source for image data. The dataset contains 5000 static data visualizations that are obtained from a variety of online sources, are generated from real-world applications, and are targeted to a broad audience; MASSVIS is a popular choice for investigating how general users understand data visualizations. We selected a subset of 16 visualizations from the dataset covering a variety of visualization types, including maps and scatterplots. All of the selected graphics were chosen based on having scatterplot-like features (e.g., points on a map) to ensure consistency across our two studies. Each of the 16 visualizations had one emphasis effect applied at a time (Fig 6), and these augmented images were used to evaluate how users perceive the different emphasis effects.
+
+### 6.2 Magnitude Levels for Emphasized Stimuli
+
+We used a subset of Study 1's magnitude levels for Study 2 - we chose three uniform steps (1, 4, and 7) from Study 1, giving us coverage of the range we used in the baseline results. Example graphics with an emphasized data point are illustrated in Fig 6.
+
+### 6.3 Experimental Design and Procedure
+
+The experiment followed a similar procedure to that of Study 1. After providing informed consent and going through the eye-tracker calibration, participants were instructed to explore each visualization and to click on the area they felt was most emphasized. Similar to Study 1, to prevent learning effects and pre-attentive processing of previously seen stimuli, participants were assigned a random presentation order. Test graphics contained one randomly-placed test mark in the graphic. After each stimulus presentation, participants were asked to rate the perceived visual prominence of the emphasized point they selected. Given the variety of colours and mark sizes in our sampled visualizations, our test mark's colour and size differences are relative to those of the marks in each visualization. Individual differences for each image are considered in our discussion section. Each visual variable was presented in 48 trials (3 levels × 16 graphics), and all participants completed all 144 trials of the study.
+
+### 6.4 Participants
+
+Twenty-four new participants (none of whom participated in Study 1) were recruited from the local university pool. We excluded four participants from our analysis for high eye-tracking deviation or failure to follow experiment instructions. The remaining twenty participants (9 male, 9 female, 2 non-binary) were given a \$10 honorarium for their participation. The average age of the participants was 26 (SD 6.02) and all reported normal or corrected-to-normal vision and no colour-vision deficiencies; all were experienced with mouse-and-windows applications (10 hrs/wk), and 6 had previous visualization experience. We used the same experimental setup described in Study 1.
+
+### 6.5 Study Design and Analysis
+
+To understand how users perceive emphasis effects in more complex, real-world visualizations, and to assess our initial equivalence model, we used a repeated measures, within-participants design with the same two factors as Study 1:
+
+- Emphasis Effect (blur/focus, colour, size).
+
+- Magnitude of Difference (Levels 1, 4, and 7 from Study 1).
+
+We used the same four dependent variables: time to fixate on target, time to target click, total fixation time, and subjective rating of perceived emphasis.
+
+
+
+Figure 6: Example stimulus display for Study 2: (a) baseline, (b) focus/blur, (c) size/area, (d) colour. Rows 1-2 show magnitude of difference level 4; row 3 shows level 7.
+
+## 7 STUDY 2 RESULTS
+
+We again analyzed emphasis effect and magnitude of difference on our four dependent measures, and we again report effect sizes as generalized eta-squared ($\eta^2$) and use Holm correction for follow-up tests.
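The Holm correction used for the follow-up tests is the standard step-down procedure: sort the p-values, multiply the smallest by the number of remaining comparisons, and enforce monotonicity. A minimal sketch (the p-values below are illustrative, not the study's):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values for a family of comparisons."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Multiply by the number of hypotheses not yet rejected, cap at 1,
        # and keep adjusted p-values monotone non-decreasing.
        candidate = min((m - rank) * pvals[idx], 1.0)
        running_max = max(running_max, candidate)
        adjusted[idx] = running_max
    return adjusted

# Three pairwise comparisons (e.g., between magnitude-of-difference pairs):
adj = holm_adjust([0.001, 0.020, 0.040])
```

This matches what R's `p.adjust(..., method = "holm")` computes.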
+
+Time to Target Fixation. RM-ANOVA found no main effect of Emphasis Effect ($F_{2,38} = 1.78$, $p = 0.18$) on time to target fixation, but did find an effect of Magnitude of Difference ($F_{2,38} = 3.80$, $p < 0.01$, $\eta^2 = 0.31$), and an interaction between the factors ($F_{4,76} = 3.09$, $p < 0.01$, $\eta^2 = 0.05$). These data are shown in Fig 7. Post-hoc t-tests showed significant ($p < 0.01$) differences between each magnitude-of-difference pair. Averaged across all emphasis effects, time to fixate on an emphasized data point was fastest at Magnitude 7 (3548 ms) and slowest at Magnitude 1 (4932 ms).
+
+Time to Target Click. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,38} = 41.18$, $p < 0.01$, $\eta^2 = 0.32$) and Magnitude of Difference ($F_{2,38} = 58.04$, $p < 0.01$, $\eta^2 = 0.62$) on target click time, and an Emphasis Effect × Magnitude of Difference interaction ($F_{4,76} = 14.64$, $p < 0.01$, $\eta^2 = 0.15$). These data are illustrated in Fig 8. Similar to Study 1, focused targets in the Blur condition were clicked on fastest (3106 ms), followed by Size (4812 ms) and Colour (5787 ms). Holm-corrected post-hoc t-tests showed significant ($p < 0.01$) differences between all pairs. Averaged across all emphasis effects, time to click on an emphasized data point was fastest at magnitude 7 (4616 ms) and slowest at magnitude 1 (7012 ms).
+
+Total Target Fixation Time. RM-ANOVA showed a significant main effect of Emphasis Effect ($F_{2,38} = 3.41$, $p < 0.01$, $\eta^2 = 0.04$) and Magnitude of Difference ($F_{2,38} = 15.08$, $p < 0.01$, $\eta^2 = 0.22$) on total fixation time, and an Emphasis Effect × Magnitude of Difference interaction ($F_{4,76} = 4.30$, $p < 0.01$, $\eta^2 = 0.08$). Averaged across magnitudes of difference, fixation time for blur/focus was 1369 ms and 1240 ms for both Size and Colour. Averaged across all effects, a magnitude of difference of 7 gathered the most attention with a fixation time of 1494 ms, while a difference of 1 had the least fixation time at 1080 ms. Post-hoc t-tests showed significant (all $p < 0.01$) differences for Magnitude of Difference but no difference among Emphasis Effects.
+
+### 7.1 Subjective Perception of Visual Prominence in Complex Visualizations
+
+After the presentation of each visualization, participants were asked to rate the visual prominence of the emphasized data point. Mean response scores are shown in Fig 9. We used the Aligned Rank Transform [68] with the ARTool package in R to enable analysis of the subjective responses using RM-ANOVA. RM-ANOVA showed main effects of Emphasis Effect ($F_{2,171} = 16.05$, $p < 0.001$) and Magnitude of Difference ($F_{2,171} = 60.00$, $p < 0.001$), and an interaction between the factors ($F_{4,171} = 2.57$, $p = 0.03$). Results from these analyses are shown in Figure 9 and follow those from Time to Click, in which sharp objects in the Focus/Blur effect were perceived as most visually prominent, followed by Size and Colour - with perceived visual prominence increasing as the magnitude of difference increased.
+
+
+
+Figure 7: Time to Target Fixation in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+### 7.2 Participant Preferences and Comments
+
+After completing the study, participants provided their preferences and general comments on the emphasis effects they identified. Participant comments echoed our other findings. Participants made several comments on how the focus/blur emphasis effect helped them to rapidly identify content: for example, "It [the emphasized point] just popped out more than the rest, provided more contrast"; "Because it [blur/focus] didn't allow me to see the others, I focused all my attention to the point that was not blurry". One participant favoured size, stating "[size] always drew my eye immediately".
+
+When asked whether there were other areas of a visualization that got their attention, one participant remarked "The titles and information, I was trying to read them and see if that would have helped somehow to identify what was emphasized", while another participant stated "I occasionally looked at the titles to see what the information was representing".
+
+### 7.3 Consistency Across Studies and Model Validation
+
+We used the models built from Study 1 data to predict the data gathered for each effect and magnitude used in Study 2, and then compared the empirical data points to the predicted values (predictions are shown in Figures 7, 8 and 9 as dashed lines). Although the absolute values of the predictions are lower than the true values, the predictions do capture many of the characteristics of the Study 2 results, as discussed below. We tested the correlation between the predicted and empirical values: for time to target fixation, the correlation was 0.82 ($R^2 = 0.87$); for time to target click, the correlation was 0.92 ($R^2 = 0.94$); for subjective ratings, the correlation was 0.96 ($R^2 = 0.96$).
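The consistency check above - correlating predicted against empirical means - can be computed directly. A small sketch (the values below are illustrative, not the study's data; note that predictions offset by a constant still correlate strongly, which is why the model can under-predict absolute times yet track the pattern of results):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative: predicted means that sit below the empirical means
# by a roughly constant offset still yield a very high correlation.
predicted = [3.0, 2.5, 2.0]
empirical = [4.1, 3.5, 3.1]
r = pearson_r(predicted, empirical)
```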
+
+If equivalence models are to be useful, the perceptibility of emphasis must be reasonably reliable across different visualization situations. Our two studies involve two visual settings: plain scatterplots in Study 1, and more complex visualizations in Study 2 (with background graphics and colours, text, and multiple visual styles). Nevertheless, there are several similarities between the two sets of results (as indicated by the very strong correlation scores). In both studies, the colour effect was less perceivable (higher time to target fixation and target click time, and lower subjective ratings); however, the earlier difference between colour and size at the highest magnitude is now gone. As in Study 1, the blur/focus effect is again consistently more perceivable (and is rated as more prominent). Also as in Study 1, there was a similar improvement in performance as the magnitude of the effect increased; there was less of a clear logarithmic curve for the emphasis effects (although this would be less apparent with only three magnitude levels in Study 2).
+
+
+
+Figure 8: Time to Target Click in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+
+
+Figure 9: Perceived Prominence of Emphasis Effects in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+The most obvious difference between the predicted and real values is that times for both fixation and clicking were substantially higher with the MASSVIS visualizations. However, this was an expected difference because of the additional visual information available in each image - and because all emphasis effects were affected similarly, any equivalence calculations using the model will be unaffected.
+
+The subjective responses were particularly well predicted by the Study 1 model (see Figure 5), with the predicted points being accurate both in terms of absolute score and the relationship between the effects. This is a particularly valuable finding, because as discussed below, it may be that the user's perception of emphasis is a more important measure for designers than the user's gaze patterns or click behaviour.
+
+The main point where the predictions were inaccurate - both for performance data and for subjective ratings - was the perceptibility of the size effect at level 7. After reviewing our stimuli for this condition, there are two possible reasons for the empirical results differing from the predicted values. First, two of the visualizations (see Figure 10) contained a large number of data points and many visual elements overall, and previous research has shown that it is more difficult to recognize objects in a cluttered environment due to visual crowding, which can create a visual-perception bottleneck [40]. Second, when data points in these visualizations are dense, composite blobs with several overlapping points create marks that are larger than the default size. Although none of our target elements were in or beside these blobs, the presence of varying-size elements in the visualization may have reduced the prominence of our manipulation and forced participants to do a more careful visual search.
+
+
+
+Figure 10: Study 2 graphics. Graphic (a) contained a larger number of data points with composite blobs, leading to visual crowding. Graphic (b) has multiple visual elements (shapes, colours, and text), reducing the effect of size emphasis on a data point.
+
+This anomaly with the size effect points to another useful aspect of having a predictive model, however: that is, the identification of empirical results that are not as expected and that may need to be investigated further.
+
+## 8 DISCUSSION
+
+### 8.1 Summary of Findings
+
+We investigated how users perceive colour, size, and blur/focus when used as emphasis effects in both basic scatterplots and more complex visualizations. The magnitude scales for each variable are different because we do not have a way to translate perceptual equivalence across effects; investigating this equivalence was one of the goals of our studies. Our evaluations provide several findings:
+
+- Different effects are perceived differently. Across both studies, blur/focus led to fastest target fixation and target click, and was rated highest in terms of visual prominence by participants; size also led to fast performance and high ratings of visual prominence at higher magnitude levels (with one exception); colour led to the slowest performance and lowest ratings for prominence.
+
+- Magnitude increases emphasis, although with diminishing effects. Across both studies, increasing the magnitude of the effect consistently increased visual prominence (again, with the same one exception); all effects showed a tailing-off of the effect of magnitude.
+
+- Models of perceived emphasis work reasonably well. A predictive model based on logarithmic curves fit the Study 1 data well, and was reasonably accurate at predicting emphasis in Study 2 (particularly the subjective ratings).
+
+In the following sections, we consider possible explanations for these results, look at how our findings and models can be used to assist designers in building visualizations with emphasis, and discuss limitations and directions for future research in this area.
+
+### 8.2 Explanation of Results
+
+## Differences Between the Emphasis Effects
+
+We saw consistent differences in fixation time, click time, and subjective ratings for our three emphasis effects, and the reasons for these differences arise from each technique's fundamental properties (as introduced earlier). First, blur/focus is an effect that manipulates the entire visualization except for the emphasized data element, and so has advantages over single-element techniques like colour and size. In particular, the blur effect guarantees that there will be no inadvertent competing visual stimuli that could slow the user's visual search (as happened with size at level 7 in Study 2), because all other elements are blurred. Second, the relative advantage for size over colour in our studies can be explained by the inherent limit on colour difference (i.e., there is a maximum difference between any two colours) whereas size difference has an unlimited upper end. The study results show that our range of magnitudes for size was larger than our range for colours - which points to the need for a better understanding of equivalences between effects.
+
+## Effectiveness of the Predictive Model
+
+The model built from Study 1 data provided accurate predictions of the results in Study 2 ($R^2$ values of 0.87, 0.94, and 0.96 for fixation time, click time, and subjective rating), and the model correctly represented the overall relationships between the emphasis effects and the changes expected with increasing magnitude level. The success of the predictive model shows that perception of emphasis is consistent between our two experimental settings - plain scatterplots in Study 1, and realistic scatterplots with other visual features in Study 2. In addition, the model was a useful tool for identifying results that need further exploration, including the greater overall response times for the MASSVIS dataset, and the anomalous performance of size at level 7. Of course, these anomalies can only be spotted when there are empirical results to compare to the model, but it is likely that the model will be used together with empirical testing until it matures with the addition of more data in different settings.
+
+## The Size-at-Level-Seven Anomaly
+
+As described above, the size effect at level seven was less prominent than expected, with two possible reasons: visual crowding from other elements in the visualizations, and inadvertent size variance from overlapping data points (see Figure 10). This result clearly indicates that there can be emergent properties in real-world visualizations that interfere with the user's perception of emphasis, and thus a planned emphasis effect must be considered in light of other visual elements. These real-world interactions are another motivation to have equivalence metrics, so that designers can switch from one emphasis effect to another (and preserve the prominence of the emphasized element) when interference is discovered.
+
+While our setup of the presentation of visual stimuli ensured that distractor marks and stimuli would not overlap, changes in the distance between distractors and the emphasized elements may affect their noticeability. Because effects of visual crowding occur with a wide range of objects, colours and shapes [66], this phenomenon may have affected other individual data points as well; but our explicit decision to not control the distance between points in Study 1 means that our results provide a more valid representation of the challenges faced by designers when emphasizing elements in a crowded visualization. As noted above, global effects such as blur/focus are less affected by visual crowding, as blurring non-targets partially eliminates them from a user's view, leaving only the focused element available. We note that it is possible to quantify the overall degree of visual complexity in an image, and in future work this could be added to our models as a factor (i.e., further studies could examine perceived emphasis at different levels of crowding).
+
+A final reason that needs to be considered is the existence of possible noise in our results. We can see similar patterns from Study 1, with certain levels of a variable performing similarly to or worse than the previous lowest level (such as Size level 5 being slower than level 4). However, these differences are small (300 ms), and given our overall quick response times across our empirical measures, variations of approximately 500 ms could be attributed to normal variation in participant response times.
+
+### 8.3 Implications for Design
+
+Our findings are applicable in a number of different visualization contexts. Visualization designers often need to draw a user's attention to important data points; our studies improve understanding of how visual cues are detected as emphasis effects and offer insights to their perceived visual prominence. While the current set of visual stimuli examined was relatively small, we intend to explore further visual variables in future studies.
+
+A first design implication is that global visual effects such as blur/focus can achieve a high perceived visual prominence and remain relatively unaffected by a visualization's background. Perceived differences for other variables such as colour and size can be affected by the non-target elements, but by blurring the non-target objects in a visualization, the focused item is less likely to be affected by visual crowding. In visualizations with a large number of objects (such as different colours and shapes), blurring non-targets may achieve the highest noticeability - however, blur/focus cannot be used in visualizations where the user needs to inspect elements that are not emphasized.
+
+Second, predictive models of perceived visual prominence can be valuable tools for designers. Although our model is still only a first step, it was already able to predict the results of Study 2 reasonably well, and can already be used to consider the equivalence between perception of the three effects that we tested. (We note that the model should not be used to calculate exact conversion factors between the effects, but rather to understand general relationships and approximate relative magnitudes). As further studies are carried out and more data is added, models like ours can become resources for designers that can accelerate the design of a narrative visualization. It is interesting that the model was most accurate at predicting people's subjective ratings of prominence, which raises the question of which metric is most important. It may be that subjective perception is a better measure for a model, because when a designer adds emphasis to a visualization, they typically want the viewer to know that the item is being emphasized - that is, what the viewer thinks is being emphasized is possibly more important than what their eye is drawn to first.
+
+A third design consideration is for designers utilizing colour as a way to emphasize certain data points. It should be noted that a subset of users suffer from various genetic conditions which cause atypical forms of colour perception - in such cases, a different emphasis effect may be more appropriate. Designers may wish to use our metrics and results to evaluate the effectiveness of a different visual effect to achieve the same perceived importance. Our future work intends to evaluate the use of various visual cues for emphasis effects and compare the sets for individuals with normal vision and users with a vision deficiency.
+
+Beyond visualization, our findings can also be applicable in other domains. For example, interface designers may wish to use our results as a way of devising methods of providing visual feedback. For instance, visual feedback during "find" tasks varies across software such as web browsers and PDF readers - with some software opting to highlight an item with colour when found, while others increase its size or use a combination of both. To effectively guide a user's attention to an item, designers can use perceived visual prominence as a method to evaluate and compare different visual effects.
+
+### 8.4 Generalizability, Limitations, and Future Work
+
+Our studies tested a limited range of visualizations (i.e., scatterplot presentations), so the application of our results should be limited to that type; in Study 2, however, we did test a wide variety of different visual styles taken from real-world examples, and so we believe that our findings will be robust across a range of real scatterplots. In future work, we plan to extend our work to other types of visualizations and other real-world scenarios, with a variety of datasets. We also tested only a single emphasized data point, and an opportunity to extend our work is to investigate visualizations that emphasize multiple points. Multiple points of emphasis also provide another opportunity to test the predictions of the model - that is, if two data elements are emphasized with different effects that our model predicts should be equally prominent, which will the user fixate on first? (We note that this kind of comparison is only possible with single-element effects such as size and colour).
+
+The difference levels for the visual variables tested in our experiments are intended to be generalizable to the design of emphasized elements in typical visualizations. However, although we tested a wide range of magnitudes of difference, it is possible that our findings are influenced by the magnitudes we tested (as noted above in terms of the range of difference that is possible with each visual variable). A variety of other visual variables can be implemented as emphasis effects (see Healey for a review [24]). Visualization designers who intend to use a different range of magnitudes of difference or emphasis effects may follow methods similar to the ones presented in this paper - in particular, testing users' reaction times and their subjective ratings to determine the noticeability of their effects. We also plan to carry out studies that look at how magnitude of emphasis is affected by clutter and by other mappings of visual variables to data variables.
+
+Other factors in generalization should be considered as well. Colour perception models rely on a simplified model of the world that assumes perfect viewing conditions. While this assumption is necessary for understanding the visual system, complexities of the real world such as the viewing environment [37], lighting conditions [8, 48], and display device [53] may affect visual perception. Our experimental viewing conditions were controlled and remained stable throughout the studies; however, future work could extend these results to larger user samples and different viewing conditions, using crowd-sourcing methods [25].
+
+There are several additional opportunities for extending our findings. We explored emphasis effects with static visual variables (time-invariant in terms of Hall et al.'s framework [22]) but there are many other effects that could be tested, including depth, outline, transparency, or shape. Additionally, future research should investigate time-variant emphasis effects with dynamic visual variables such as flicker or motion and extend our results to interactive visualizations.
+
+We evaluated our emphasis effects based on empirical metrics such as time to target fixation and time to mouse click. There are other ways emphasis effects can be evaluated. For instance, the MASSVIS dataset contains a comprehensive set of user attention maps on the visualizations [7]. We intend to analyze viewers' attention maps on the visualizations, comparing each visualization's attention maps with and without an emphasis effect applied.
+
+Finally, we elected to use the CIE2000 colour-difference metric as it is commonly used in visualization and has been methodologically validated in past studies [28, 61]. Future work may consider the use of other colour-difference models or colour spaces, such as CIECAM02 [45]. We anticipate that investigating a number of different colour spaces will result in more accurate models of colour-difference perception for visualization design.
+
+## 9 CONCLUSION
+
+Emphasis is an essential component of InfoVis, and is used by designers to draw a user's attention or to indicate importance. However, it is difficult for designers to know how different emphasis effects will compare and what level of one effect is equivalent to what level of another when designing visualizations. We carried out two user studies to evaluate the visual prominence of three emphasis effects (blur/focus, colour, and size) at various strength levels, and developed a predictive model that can indicate equivalence between effects. Results from our two studies provide the beginnings of an empirical foundation for understanding how visual effects operate and are experienced by viewers when used for emphasis in visualizations, and provide new information for designers who want to control how emphasis effects will be perceived by users.
+
+## 10 ACKNOWLEDGMENTS
+
+This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and additionally by a Canada First Research Excellence Fund (CFREF) grant sponsored by the Global Institute for Food Security (GIFS).
+
+## REFERENCES
+
+[1] E. Awh, A. V. Belopolsky, and J. Theeuwes. Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in cognitive sciences, 16(8):437-443, 2012.
+
+[2] S. I. Becker. The role of target-distractor relationships in guiding attention and the eyes in visual search. Journal of Experimental Psychology: General, 139(2):247, 2010.
+
+[3] R. V. d. Berg, F. W. Cornelissen, and J. B. T. M. Roerdink. Perceptual dependencies in information visualization assessed by complex visual search. ACM Trans. Appl. Percept., 4(4):3:1-3:21, Feb. 2008. doi: 10. 1145/1278760.1278763
+
+[4] J. Bertin, W. J. Berg, and H. Wainer. Semiology of graphics: diagrams, networks, maps, vol. 1. University of Wisconsin press Madison, 1983.
+
+[5] T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf, and T. Ertl. State-of-the-art of visualization for eye tracking data. In EuroVis (STARs), 2014.
+
+[6] T. Blascheck, L. MacDonald Vermeulen, J. Vermeulen, C. Perin, W. Willett, T. Ertl, and S. Carpendale. Exploration Strategies for Discovery of Interactivity in Visualizations. IEEE Transactions on Visualization and Computer Graphics, 2626(c):1-1, 2018. doi: 10. 1109/TVCG.2018.2802520
+
+[7] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister. What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12):2306-2315, 2013.
+
+[8] D. H. Brainard and B. A. Wandell. Asymmetric color matching: how color appearance depends on the illuminant. JOSA A, 9(9):1433-1448, 1992.
+
+[9] A. Brychtová and A. Çöltekin. The effect of spatial distance on the discriminability of colors in maps. Cartography and Geographic Information Science, 44(3):229-245, 2017.
+
+[10] M. Burch, N. Konevtsova, J. Heinrich, M. Hoeferlin, and D. Weiskopf. Evaluation of traditional, orthogonal, and radial tree diagrams by an eye tracking study. IEEE Transactions on Visualization and Computer Graphics, 17(12):2440-2448, 2011.
+
+[11] M. S. T. Carpendale and C. Montagnese. A framework for unifying presentation space. In Proceedings of the 14th annual ACM symposium on User interface software and technology, pp. 61-70. ACM, 2001.
+
+[12] H. Chen, W. Chen, H. Mei, Z. Liu, K. Zhou, W. Chen, W. Gu, and K.-L. Ma. Visual abstraction and exploration of multi-class scatterplots. IEEE Transactions on Visualization and Computer Graphics, 20(12):1683-1692, 2014.
+
+[13] W. S. Cleveland. Visualizing Data. Hobart Press, 1993.
+
+[14] W. S. Cleveland and W. S. Cleveland. A color-caused optical illusion on a statistical graph. The American Statistician, 37(2):101-105, 1983.
+
+[15] W. S. Cleveland and R. McGill. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association, 79(387):531-554, 1984.
+
+[16] M. A. Correll, E. C. Alexander, and M. Gleicher. Quantity estimation in visualizations of tagged text. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 2697-2706. ACM, New York, NY, USA, 2013. doi: 10.1145/2470654.2481373
+
+[17] H. Doleisch, M. Gasser, and H. Hauser. Interactive feature specification for focus+context visualization of complex simulation data. In VisSym, vol. 3, pp. 239-248, 2003.
+
+[18] J. Duncan and G. W. Humphreys. Visual search and stimulus similarity. Psychological review, 96(3):433, 1989.
+
+[19] M. Gleicher, M. Correll, C. Nothelfer, and S. Franconeri. Perception of average value in multiclass scatterplots. IEEE transactions on visualization and computer graphics, 19(12):2316-2325, 2013.
+
+[20] J. H. Goldberg and J. I. Helfman. Comparing information graphics: a critical look at eye tracking. In Proceedings of the 3rd BELIV'10 Workshop: BEyond time and errors: novel evaLuation methods for Information Visualization, pp. 71-78. ACM, 2010.
+
+[21] C. Gutwin, A. Cockburn, and A. Coveney. Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 208-219. ACM, 2017.
+
+[22] K. W. Hall, C. Perin, P. G. Kusalik, C. Gutwin, and S. Carpendale. Formalizing emphasis in information visualization. Comput. Graph. Forum, 35(3):717-737, June 2016. doi: 10.1111/cgf.12936
+
+[23] H. Hauser. Generalizing focus+context visualization. In Scientific visualization: The visual extraction of knowledge from data, pp. 305-327. Springer, 2006.
+
+[24] C. Healey and J. Enns. Attention and visual memory in visualization and computer graphics. IEEE transactions on visualization and computer graphics, 18(7):1170-1188, 2011.
+
+[25] J. Heer and M. Bostock. Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. In Proceedings of the 28th Annual CHI Conference on Human Factors in Computing Systems, pp. 203-212, 2010. doi: 10.1145/1753326.1753357
+
+[26] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: the effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1303-1312. ACM, 2009.
+
+[27] J. Heer and G. Robertson. Animated transitions in statistical data graphics. IEEE transactions on visualization and computer graphics, 13(6):1240-1247, 2007.
+
+[28] J. Heer and M. Stone. Color naming models for color selection, image editing and palette design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, p. 1007, 2012. doi: 10.1145/2207676.2208547
+
+[29] W. Huang. Using eye tracking to investigate graph layout effects. In 2007 6th International Asia-Pacific Symposium on Visualization, pp. 97-100. IEEE, 2007.
+
+[30] W. Huang, P. Eades, and S.-H. Hong. A graph reading behavior: Geodesic-path tendency. In 2009 IEEE Pacific Visualization Symposium, pp. 137-144. IEEE, 2009.
+
+[31] S. Ishihara. Test for colour-blindness. Kanehara Tokyo, Japan, 1987.
+
+[32] A. Keahey. The generalized detail-in-context problem. In Proceedings of the 1998 IEEE Symposium on Information Visualization, INFOVIS '98, pp. 44-51. IEEE Computer Society, Washington, DC, USA, 1998.
+
+[33] S.-H. Kim, Z. Dong, H. Xian, B. Upatising, and J. S. Yi. Does an eye tracker tell the truth about visualizations?: findings while investigating visualizations for decision making. IEEE Transactions on Visualization and Computer Graphics, 18(12):2421-2430, 2012.
+
+[34] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. In Matters of intelligence, pp. 115-141. Springer, 1987.
+
+[35] R. Kosara, H. Hauser, and D. L. Gresh. An interaction view on information visualization. State-of-the-Art Report. Proceedings of EURO-GRAPHICS, pp. 123-137, 2003.
+
+[36] R. Kosara, S. Miksch, and H. Hauser. Focus+context taken literally. IEEE Computer Graphics and Applications, 22(1):22-29, 2002.
+
+[37] J. H. Krantz. Stimulus delivery on the web: What can be presented when calibration isn't possible. Dimensions of Internet science, pp. 113-130, 2001.
+
+[38] R. E. Krider, P. Raghubir, and A. Krishna. Pizzas: $\pi$ or square? psychophysical biases in area comparisons. Marketing Science, 20(4):405-425, 2001.
+
+[39] Y. K. Leung and M. D. Apperley. A review and taxonomy of distortion-oriented presentation techniques. ACM Trans. Comput.-Hum. Interact., 1(2):126-160, June 1994. doi: 10.1145/180171.180173
+
+[40] D. M. Levi. Crowding-an essential bottleneck for object recognition: A mini-review. Vision research, 48(5):635-654, 2008.
+
+[41] T. R. Levine and C. R. Hullett. Eta squared, partial eta squared, and misreporting of effect size in communication research. Human Communication Research, 28(4):612-625, 2002.
+
+[42] A. MacEachren. How Maps Work: Representation, Visualization, and Design. Guilford Publications, 2004.
+
+[43] J. Mackinlay. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics (TOG), 5(2):110-141, 1986.
+
+[44] A. Mairena, C. Gutwin, and A. Cockburn. Peripheral notifications in large displays: Effects of feature combination and task interference. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, p. 640. ACM, 2019.
+
+[45] N. Moroney, M. D. Fairchild, R. W. Hunt, C. Li, M. R. Luo, and T. Newman. The ciecam02 color appearance model. In Color and Imaging Conference, vol. 2002, pp. 23-27. Society for Imaging Science and Technology, 2002.
+
+[46] M. Pohl, M. Schmitt, and S. Diehl. Comparing the readability of graph layouts using eyetracking and task-oriented analysis. In Computational Aesthetics, pp. 49-56, 2009.
+
+[47] M. Raschke, T. Blascheck, M. Richter, T. Agapkin, and T. Ertl. Visual analysis of perceptual and cognitive processes. In 2014 International Conference on Information Visualization Theory and Applications (IVAPP), pp. 284-291. IEEE, 2014.
+
+[48] P. Rizzo, A. Bierman, and M. S. Rea. Color and brightness discrimination of white leds. In Solid State Lighting II, vol. 4776, pp. 235-246. International Society for Optics and Photonics, 2002.
+
+[49] A. R. Robertson. Historical development of cie recommended color difference equations. Color Research & Application, 15(3):167-170, 1990.
+
+[50] R. Rosenholtz. A simple saliency model predicts a number of motion popout phenomena. Vision research, 39(19):3157-3163, 1999.
+
+[51] R. Rosenholtz, J. Huang, and K. A. Ehinger. Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in psychology, 3:13, 2012.
+
+[52] B. Saket, A. Endert, and C. Demiralp. Task-based effectiveness of basic visualizations. IEEE transactions on visualization and computer graphics, 25(7):2505-2512, 2018.
+
+[53] A. Sarkar, L. Blondé, P. L. Callet, F. Autrusseau, P. Morvan, and J. Stauder. A color matching experiment using two displays: design considerations and pilot test results. In Conference on Colour in Graphics, Imaging, and Vision, vol. 2010, pp. 414-422. Society for Imaging Science and Technology, 2010.
+
+[54] E. Segel and J. Heer. Narrative visualization: Telling stories with data. IEEE transactions on visualization and computer graphics, 16(6):1139-1148, 2010.
+
+[55] G. Sharma and R. Bala. Digital color imaging handbook. CRC press, 2002.
+
+[56] G. Sharma, W. Wu, and E. N. Dalal. The ciede2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application, 30(1):21-30, 2005.
+
+[57] B. G. Shortridge. Stimulus processing models from psychology: Can we use them in cartography? The American Cartographer, 9(2):155-167, 1982.
+
+[58] S. Smart and D. A. Szafir. Measuring the Separability of Shape, Size, and Color in Scatterplots. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, pp. 1-14, 2019. doi: 10.1145/3290605.3300899
+
+[59] M. Stone and L. Bartram. Alpha, contrast and the perception of visual metadata. Jan. 2009.
+
+[60] H. Strobelt, D. Oelke, B. C. Kwon, T. Schreck, and H. Pfister. Guidelines for effective usage of text highlighting techniques. IEEE transactions on visualization and computer graphics, 22(1):489-498, 2015.
+
+[61] D. A. Szafir, M. Stone, and M. Gleicher. Adapting color difference for design. In Color and Imaging Conference, vol. 2014, pp. 228-233. Society for Imaging Science and Technology, 2014.
+
+[62] A. M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive psychology, 12(1):97-136, 1980.
+
+[63] R. Veras and C. Collins. Saliency Deficit and Motion Outlier Detection in Animated Scatterplots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, pp. 1-12, 2019. doi: 10.1145/3290605.3300771
+
+[64] M. Waldner, A. Karimov, and E. Gröller. Exploring visual prominence of multi-channel highlighting in visualizations. In Proceedings of the 33rd Spring Conference on Computer Graphics, SCCG '17, pp. 8:1-8:10. ACM, New York, NY, USA, 2017. doi: 10.1145/3154353.3154369
+
+[65] M. Waldner, M. Le Muzic, M. Bernhard, W. Purgathofer, and I. Viola. Attractive flicker-guiding attention in dynamic narrative visualizations. IEEE transactions on visualization and computer graphics, 20(12):2456-2465, 2014.
+
+[66] D. Whitney and D. M. Levi. Visual crowding: A fundamental limit on conscious perception and object recognition. Trends in cognitive sciences, 15(4):160-168, 2011.
+
+[67] F. A. Wichmann and N. J. Hill. The psychometric function: I. fitting, sampling, and goodness of fit. Perception & psychophysics, 63(8):1293-1313, 2001.
+
+[68] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 143-146. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1978963
+
+[69] J. M. Wolfe. Guided search 4.0: A guided search model that does not require memory for rejected distractors. Journal of Vision, 1(3):349-349, 2001.
+
+[70] J. M. Wolfe and T. S. Horowitz. What attributes guide the deployment of visual attention and how do they do it? Nature reviews neuroscience, 5(6):495, 2004.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d931a7190d10ab53a711dec123ffa6f66c240f4d
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/RQaTpcuiJF/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,367 @@
+§ A BASELINE STUDY OF EMPHASIS EFFECTS IN INFORMATION VISUALIZATION
+
+Aristides Mairena* Martin Dechant† Carl Gutwin‡ Andy Cockburn§
+
+University of Saskatchewan
+
+University of Canterbury
+
+§ ABSTRACT
+
+Emphasis effects - visual changes that make certain elements more prominent - are commonly used in information visualization to draw the user's attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and how to ensure that the user's experience with one effect will be similar to that with another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses in our first study show that there are significant differences between different kinds of effects and between levels. Our second study tested the effects in realistic visualizations taken from the MASSVIS dataset, and saw similar results. We developed a simple predictive model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. Our studies and empirical models provide new information for designers who want to understand how emphasis effects will be perceived by users.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Perception; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+§ 1 INTRODUCTION
+
+Emphasis effects are visual changes that make certain elements more prominent, and are commonly used in information visualization to draw the user's attention or to indicate importance. Emphasizing important data points is a common method used by designers to support the user when gradually exploring the data - or in narrative visualization [25], when known aspects of the data are presented to the users. Effective emphasis alters a data point's visual features $\left\lbrack {4,{22}}\right\rbrack$ so that a viewer’s attention will be guided to the region of interest [62]. A wide variety of visual variables have been considered as emphasis effects $\left\lbrack {{17},{22},{23},{60}}\right\rbrack$ . For example, a visualization can use colour to emphasize certain data points: differences in the visual prominence of the selected data points will be achieved through the variation in color, a visual variable known to guide attention [24].
+
+Although theoretical frameworks of emphasis exist that link visually diverse emphasis effects through the idea of visual prominence compared to background elements [22], we still know little about how emphasis effects will be perceived by users. In particular, questions remain about what visual effects, and what magnitudes of those effects, will be most quickly recognized as emphasis by the viewer of a visualization; in addition, we know little about how different effects compare and what levels of two different effects will lead to similar user experiences.
+
+Many metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. These models are generally constructed using large and visually isolated stimuli under optimal conditions, and so although they are effective at predicting perceptibility in isolation, they do not work well with distractors, or with even minor changes to the visual field [3, 59].
+
+Visualizations, in contrast, often consist of large numbers of a variety of marks viewed using a wide range of devices and environments - and designers may use a variety of techniques to emphasize data points. Current guidelines do not address how different emphasis effects are perceived by viewers in visualizations, or provide an equivalence metric for perceived emphasis so designers can choose effects correctly. Without effective models of visual prominence in visualizations, designers lack information on how different visual effects compare, and do not know what magnitude of effect to use to appropriately guide a viewer's attention to an area of interest.
+
+To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. It is important to note that the three emphasis effects are qualitatively different - for example, colour and size manipulate just the emphasized element, whereas blur/focus manipulates everything but the emphasized element - and so our goal is not simply to identify which effect is most perceivable, but rather to establish how the effects compare across a range of intensity levels.
+
+To do this, our first study established a baseline of perceived visual prominence by showing participants artificial scatterplot visualizations with one element highlighted, and measuring the time to their first fixation on the element (through eye tracking), their time to click on the element, and their subjective rating of visual prominence. We then built a model from the first study's data using logarithmic curves that can be used to predict the relationship between the different emphasis effects. Our second study then examined perceived emphasis in a more realistic context, by looking at visual prominence in complex visualizations taken from real-world applications (the MASSVIS dataset [7]). We evaluated our model by using it to predict the results of the second study for three different measures; the model was accurate, with $R^2$ values as high as 0.96.
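The logarithmic modelling approach described above can be sketched as follows. This is a minimal illustration, assuming a single-predictor form prominence = a·ln(level) + b; the data, variable names, and goodness-of-fit computation are hypothetical stand-ins, not the paper's actual fitted model.

```python
import numpy as np

def fit_log_model(levels, prominence):
    """Least-squares fit of prominence = a * ln(level) + b."""
    X = np.column_stack([np.log(levels), np.ones(len(levels))])
    (a, b), *_ = np.linalg.lstsq(X, prominence, rcond=None)
    return a, b

def r_squared(y, y_pred):
    """Coefficient of determination between observed and predicted values."""
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: mean prominence ratings at 8 strength levels
levels = np.arange(1, 9)
ratings = 2.0 * np.log(levels) + 1.0   # noiseless, for illustration only
a, b = fit_log_model(levels, ratings)
predictions = a * np.log(levels) + b
```

In practice the same fit would be run per effect type, and equivalence between two effects read off by solving one fitted curve for the level that matches the other's predicted prominence.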
+
+Our two studies provide new findings about how people perceive three emphasis effects and their magnitudes in visualizations:
+
+ * There were significant differences in both studies for emphasis effect: blur/focus was most prominent, and colour least prominent, with size in between depending on magnitude.
+
+ * There were also significant differences between the magnitude levels for all effects, providing a graduated way to increase or decrease perceived prominence.
+
+*e-mail: aristides.mairena@usask.ca
+
+†e-mail: martin.dechant@usask.ca
+
+‡e-mail: gutwin@cs.usask.ca
+
+§e-mail: andy@cosc.canterbury.ac.nz
+
+Graphics Interface Conference 2020
+
+28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+ * A predictive model based on logarithmic curves fit the Study 1 data well, and was accurate at predicting perceived emphasis in Study 2 (particularly in terms of subjective ratings).
+
+It is important to note that our goal in the study was not to conduct a "shoot-out" between all possible types of emphasis, but rather to determine whether different effects are perceived differently, and to provide initial empirical evidence about how effects compare. Our results provide an initial empirical foundation for understanding how visual effects operate and are experienced by viewers when used for emphasis in visualizations; and although more work is needed to refine and broaden the models, our work provides useful new information for designers who want to control how emphasis effects will be perceived by users.
+
+§ 2 RELATED WORK
+
+Emphasis is essential to InfoVis and is used to highlight regions of interest in a visualization. While there is a large body of research in this domain, much of the work seeks to understand how the underlying perceptual system operates - limiting the possibility of extracting design lessons from low-level data and findings. We survey current empirical studies of perception from visualization and vision science to inform our work.
+
+§ 2.1 VISUAL ATTENTION AND GRAPHICAL PERCEPTION
+
+There are various theories and computational models for selective visual attention, but in general, most theories agree that attention operates by alternately selecting "features" from a number of incoming subsets of sensory data for further processing [51]. Early work suggests a two-stage process: first, a bottom-up, pre-attentive stage, which is automatic and independent of a task [51], where attention is guided to the most salient items in a scene [70]; followed by a second, slower, top-down stage that is driven by current tasks and goals [34, 50, 62]. Within this model, the conjunction of basic features (such as colour and orientation) stems from "binding" features together (known as Feature Integration Theory [62]).
+
+A second theory, the Guided Search Theory, extends the two-stage process by proposing that attention can be biased toward targets of interest (e.g., a user looking for a red circle) in the top-down phase by encoding particular visual characteristics [69]: for example, assigning a higher weight to the red colour. Recently-proposed attention theories, however, challenge the two-stage model, suggesting that there is also a bias to prioritize items that have been previously selected, thus proposing a three-stage model: current goals, selection history, and physical salience (bottom-up attention) [1].
+
+Attention is commonly examined through visual search experiments, which usually ask participants to determine whether a target is present in a scene with distractors; reaction times (RT) and accuracy are used to model the relationship between the response and the number of distractors. Frequently, the term "popout" is used to describe a target item that is easily identified due to its unique visual properties in these searches. While the noticeability of specific visual characteristics such as colour and size cues has also been examined from an attention perspective [21, 44], a related area of research called graphical perception takes a more in-depth look at the suitability of different visual channels, and at how choices in visual variables for encoding data affect visualization effectiveness [15].
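The RT-by-set-size modelling used in such visual search experiments can be sketched as a simple linear fit, where a near-zero slope (a few milliseconds per item) is the usual signature of "popout". The data and function name here are hypothetical illustrations.

```python
import numpy as np

def search_slope(set_sizes, reaction_times):
    """Fit RT = slope * set_size + intercept (slope in ms/item).
    Near-zero slopes indicate efficient, 'popout'-style search."""
    slope, intercept = np.polyfit(set_sizes, reaction_times, 1)
    return slope, intercept

# Hypothetical reaction times (ms) for displays of 4 to 32 items
set_sizes = np.array([4, 8, 16, 32])
rts = np.array([520.0, 540.0, 580.0, 660.0])   # exactly 5 ms/item here
slope, intercept = search_slope(set_sizes, rts)
```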
+
+Graphical perception studies have explored how different visual channels might support a variety of tasks for visualization [52]. Bertin was among the first to study the ability of visual variables to encode information, suggesting that variations in individual visual variables are an effective tool for encoding information and achieving noticeability [4]. In particular, Bertin suggests that selective visual variables, such as position, size, colour hue, or texture, allow viewers to immediately detect differences.
+
+Following Bertin, researchers in multiple disciplines such as cartography [42], statistics [15], and computer science [43] have conducted human-subjects experiments and have derived rankings of visual variables for nominal, ordinal, or quantitative data [15, 42, 43, 57]. In addition to comparing the effectiveness of alternative visual variables for visualization, researchers have investigated how other design factors such as aspect ratios [13], chart sizes [26], and animations [63] influence the effectiveness of charts.
+
+Graphical perception studies have focused on measuring how the visual encoding of variables affects the accuracy of estimating and understanding values of the underlying data; insights from graphical perception, however, can also be applied when manipulating data points in a visualization to guide a viewer's attention to an area of particular importance.
+
+§ 2.2 EMPHASIS IN VISUALIZATION
+
+Emphasis is essential to information visualization: it supports users as they explore data, for instance by highlighting areas of interest when brushing and linking across multiple views to emphasize relationships [35]. Emphasis is also important when presenting known aspects of data to a user through narrative visualization [54]. The goal of emphasis is to manipulate the visual features of an important data point to make it visually prominent, such that a viewer's bottom-up attention is attracted to the point [22].
+
+While distortion and magnification techniques - which create emphasis effects by simultaneously manipulating a visual variable's size and positioning - have been a focus of infovis researchers for creating emphasis [11, 32, 39], other techniques such as blur [36], motion [27], and flicker [65] have also been studied. Given the varied range of emphasis techniques, Hall et al. suggested a categorization of emphasis effects into two main groups based on how visuals change over time: time-invariant and time-variant effects [22].
+
+Time-invariant emphasis effects such as highlighting (colouring a data point in a visualization), and blurring (where a data point is shown in focus while the other elements are blurred) do not change with time, and do not use features such as fly-in, fade-in or other transitions [22]. In contrast, time-variant emphasis effects such as motion, flickering, or zooming involve time variations, commonly achieved through animations that alter the appearance of a data point [22].
+
+While there are many ways in which a data point in a visualization can be emphasized, all visual techniques generate emphasis by making the focus mark (i.e., the target) visually more prominent by making it sufficiently dissimilar from the other elements (i.e., the non-target marks) in at least one visual channel [22]. For example, blur/focus, magnification, and highlighting create emphasis by making one data point more visually prominent than others (e.g., sharper, bigger, or a different colour).
+
+There are three main properties of a visual channel that could influence the visual prominence of an emphasized target mark against the set of non-emphasized marks: the similarity between targets and non-targets, the similarity of all non-targets, and the channel offset (i.e., the lowest value of the non-targets) [64]. Similarity theory shows that visual search efficiency decreases with increased target/non-target similarity and with decreased similarity between the non-targets [18].
+
+Another theory, the relational account of attention, also suggests that the perceived similarity between targets and non-targets can be modeled by the magnitude of a vector in feature space pointing from the target to the closest non-target [2]. If users are given a feature direction in a visual search task (e.g., find the brightest or largest), attention will be guided to the mark that differs in the given direction from the other marks. In this theory, however, non-target similarity does not have an influence on the visual prominence of a target.
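The target-to-nearest-non-target distance at the core of these accounts can be sketched as follows. The two-dimensional (size, luminance) feature space and the values are hypothetical illustrations, not a calibrated model from the literature above.

```python
import numpy as np

def target_prominence(target, non_targets):
    """Distance in feature space from the target to its nearest
    non-target; larger values suggest a more prominent target."""
    diffs = np.asarray(non_targets, dtype=float) - np.asarray(target, dtype=float)
    return float(np.linalg.norm(diffs, axis=1).min())

# Hypothetical (size, luminance) features for one target and three non-targets
target = (10.0, 0.9)
non_targets = [(10.0, 0.5), (12.0, 0.5), (9.0, 0.6)]
```

A practical caveat, visible in the code, is that the channels would need to be scaled commensurately before the Euclidean distance is meaningful.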
+
+Findings from classic psychophysics and visual search experiments, however, cannot always be applied directly to data visualization. Simple changes such as adding links between dots to simulate a node-link diagram, or changes to contrast effects due to a background luminance have been shown to have considerable effects on the results from prior experiments $\left\lbrack {3,{59}}\right\rbrack$ . These results reinforce the need for empirical evaluations of visualizations to validate theory and evaluate real-world visualization applications.
+
+§ 2.3 EVALUATIONS OF PERCEPTION IN VISUALIZATION
+
+Evaluations of perception in visualization have focused on understanding the details of integral and separable channels [58] and the interactions between separable channels. Smart and Szafir found that separability among shape, colour, and size perception functions asymmetrically, with shape affecting the perception of size and colour more strongly than size or colour affect the perception of shape [58]. Other studies have shown that size perception is biased by specific hues, and that quantity estimation in visualizations is affected by both size and colour [14, 16].
+
+Research on visual channels in visualization has largely categorized them with terms such as "fully separable" and "no significant interference" [45, 70]. Visualizations that utilize these separable visual channels can be designed by encoding data with visual variables that are known to "pop out". In a review of visual popout, Healey identified sixteen different visual variables or features that are known to pop out [24], including hue, size, orientation, and luminance.
+
+Scatterplots are one of the most effective visualizations for visual judgments due to data points being positioned along a common scale [25]. Several studies have explicitly explored graphical perception in scatterplots, with many recent techniques being developed to automate scatterplot design [12], and to predict perceptual attributes that may affect scatterplot analysis such as similarity or separability. However, these studies and techniques primarily focus on analyses over single-channel features for scatterplot design to improve legibility or its suitability for data comparison [19].
+
+Eye-tracking evaluations are a popular and effective tool for understanding how users view and visually explore visualizations [5, 6]. For example, eye-tracking has been used to understand how different tasks and visual search strategies affect cognitive processes through fixation patterns [46, 47], and has also been used to evaluate specific visualization types [10, 29, 30], for comparing multiple types of visualizations [20], and for evaluating decision making and interaction in visualization [6, 33].
+
+Free-viewing is a common technique for evaluating human perception of visual stimuli. Participants are not given a task and are instructed to freely look around the image, which avoids task-dependent effects and peripheral-vision effects. As some attention theories suggest that attention can be guided by a high-level task [1, 70], free-viewing allows attention to be guided by image elements in a bottom-up manner. This assumption has led researchers to use free-viewing for collecting ground-truth data when evaluating saliency and attention in visualizations.
+
+However, despite the extensive body of research from vision science and graphical perception, prior work has focused on evaluating factors that may affect the visual prominence of a specific emphasis effect [64], or on empirically ranking visual variables for encoding data [15, 26]; few guidelines discuss how different emphasis effects are perceived by viewers in visualizations, or consider issues of equivalence for perceived emphasis. Therefore, in the evaluations described next, we set out to determine viewers' perception of visual prominence, and the effectiveness of a variety of emphasis effects at a wide range of intensity levels.
+
+§ 3 GENERAL STUDY METHODS
+
+Data visualizations are used both to reveal patterns in data through exploration, and to communicate specific information to a viewer. When building visualizations for communication, a designer may need to draw a user's attention to a specific data point in order to better reveal the narrative focus of the visualization, and this can effectively be done by increasing the perceptual difference in the visual variables of the underlying data.
+
+In the following two studies, we experimentally evaluate how specific emphasis effects are experienced by a viewer. Our first study was designed to determine the baseline visual prominence of eight levels of three emphasis effects using different visual variables (blur/focus, colour, and size). In simple scatterplot visualizations, we visually emphasized one data element, and gathered eye-tracking data, mouse clicks, and subjective ratings of visual prominence. Our second study built on the first; it used a similar paradigm but increased the complexity of the visualizations by using a subset of the MASSVIS dataset - a repository of static data visualizations obtained from a variety of publicly-available online sources intended for a wide audience [7]. In addition, we developed predictive models from the first study's data, and used these to predict the results of the second study.
+
+Our theoretical starting point for these studies was the mathematical framework of emphasis effects in data visualizations developed by Hall et al., where visually diverse emphasis effects can be linked through the idea of visual prominence compared to background elements [22]. Our first study extends this previous work to determine the visual prominence of emphasis effects through eye-tracking metrics, click data, and subjective ratings. Using eye movement data makes it possible to examine which areas of a visualization viewers attend to and how their attention can be guided by applying emphasis effects. Combining eye tracking, interaction logs, and subjective methods gave us a more diverse set of data, allowing us to analyze how participants' actions were guided by their perception of the different effects. This rich data allows us to better understand how users perceive commonly used effects that designers can use to emphasize a particular element in a visualization.
+
+§ 3.1 EMPHASIS EFFECTS AND LEVELS
+
+Many modern visualization tools and libraries use a wide range of emphasis techniques. For example, Chart.js, a commonly-used visualization library for the web, increases a mark's size when a user clicks on it to generate emphasis, while Tableau uses a combination of blur/focus and size to emphasize a mark. Based on an informal survey of visualization tools, we chose three visual variables for our study - colour, blur/focus, and size - that are commonly used to provide emphasis in many different contexts. While other common techniques exist (such as border highlighting), the variables we chose are known to be detected and processed in a pre-attentive way [24].
+
+ * Colour. Emphasizing an element using colour means changing the hue of the data element to be different from the standard element colour; colour is well known to "pop out" when there is adequate difference between the highlighted item and the other elements, and colour change is widely used to indicate importance.
+
+ * Size. Emphasizing an element using size means increasing the area of the data element such that it is bigger than other elements. Size also pops out, and is used in several visualization tools for interactive highlighting.
+
+ * Blur/Focus. Emphasizing an element using blur/focus means applying a blur filter (e.g., Gaussian blur) to all of the elements in the visualization except for the emphasized element (which remains sharp). This effect is therefore qualitatively different from colour and size because it affects a much larger fraction of the overall view.
+
+For each type of emphasis effect, we chose several levels of the visual variable so that we could test the effect at different levels of magnitude (eight levels for Study 1, and three levels for Study 2). We sampled mark sizes, colour differences, and blur strength along increasing levels of difference between the target and the distractor - we call these levels 'magnitude of difference'. For some of the visual variables, the magnitude of difference range was constrained at both ends (e.g., there is a fixed range of hues between red and blue); for other variables, such as blur or size, the range was constrained only at one end (e.g., blur/focus and size start from the sharpness and size of the distractors and range up to an arbitrary upper end).
+
+
+Figure 1: Visual variables and magnitude of difference levels. Rows 1-3: Colour, Blur/Focus, Size (distractors upper, targets lower).
+
+It is important to note that the magnitude scales for each variable are different, since we do not have a way to translate perceptual equivalency across effects (investigating this equivalency is one of the goals of our studies). For colour, we chose eight magnitude levels using a colour difference metric that normalizes the colour space to provide a closer fit between perceptual and geometric differences between colours [49]. $\Delta E$ is a metric devised to measure how the human eye perceives colour difference, where a difference of 2.3 is roughly equal to one Just Noticeable Difference (JND) [55]. Using $\Delta E$ lets us compare differences across a wider range of colours - drawing on the full colour space - in terms of their perceptual impact. We use the current $\Delta E$ standard, CIEDE2000 [56], as our primary colour difference metric, which adds corrections to account for lightness, chroma, and hue. For the colour levels used in the first study, we chose eight fixed colour differences (i.e., the difference between emphasized and non-emphasized elements) ranging from $\Delta E = 10$ to $\Delta E = 45$ (see Figure 1). The empirical results we describe below confirm that the increasing $\Delta E$ values did result in increasing perceptibility of the emphasized data element (e.g., see Fig 4).
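+The basic form of a colour-difference metric can be sketched in a few lines of Python. The sketch below uses the simpler CIE76 formula (plain Euclidean distance in CIELAB); the CIEDE2000 metric used in our study adds corrections for lightness, chroma, and hue on top of this idea, and the colour values below are hypothetical.
+
+```python
+import math
+
+def delta_e_76(lab1, lab2):
+    """CIE76 colour difference: Euclidean distance in CIELAB space.
+    (CIEDE2000, used in the study, adds lightness/chroma/hue corrections.)"""
+    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
+
+# A difference of ~2.3 is roughly one Just Noticeable Difference (JND).
+distractor = (50.0, 0.0, 0.0)   # hypothetical L*a*b* colour of distractors
+target = (50.0, 2.3, 0.0)       # shifted along a* by one JND
+print(round(delta_e_76(distractor, target), 2))
+```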
+
+For size, we chose eight fixed size differences (geometric difference in mark area between emphasized and non-emphasized content) from 25% to 200%. As shown in Figure 1, the size differences indicate area rather than diameter. Since previous research has determined that perceived size can differ from geometric size, we calculated the perceptual size difference for each level, and used the perceptual value in our analysis (again, we note that we do not currently have a correspondence between magnitude levels for different variables, and so the levels that we chose simply provide a range within which we can gather empirical evidence about perception). We used the equation Perceived size $=$ Actual size$^{0.86}$, where actual size signifies the area of the object, as suggested by previous work [38] (see Fig 1).
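+The perceptual adjustment is straightforward to compute; the sketch below applies the power law to a hypothetical base mark area and the study's eight geometric increments (the base area is an assumption for illustration only).
+
+```python
+def perceived_size(area: float) -> float:
+    """Perceived size = actual size (area) raised to 0.86, per [38]."""
+    return area ** 0.86
+
+base_area = 100.0  # hypothetical distractor mark area (px^2)
+for level, pct in enumerate(range(25, 201, 25), start=1):
+    target_area = base_area * (1 + pct / 100)  # 25%..200% bigger than distractors
+    print(level, round(perceived_size(target_area), 1))
+```
+
+Because the exponent is below 1, equal geometric steps in area produce progressively smaller perceptual steps.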
+
+For blur/focus, we do not have a perceptual difference metric similar to colour's $\Delta E$ or to perceptual size as described above. Therefore, for blur/focus, we simply chose levels that cover a wide range of perceived prominence for all targets. We chose eight different blur intensities (applied to the non-emphasized areas of a visualization) implemented using GIMP's Gaussian Blur function, with blur radius ranging from 1 to 8. Examples of emphasized targets and corresponding distractors are shown in Figure 1.
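+For intuition about why larger radii give stronger blur, here is a minimal pure-Python sketch of the 1-D Gaussian kernel underlying a Gaussian Blur filter; the sigma-from-radius mapping below is an illustrative assumption, not GIMP's exact internal mapping.
+
+```python
+import math
+
+def gaussian_kernel(radius: int):
+    """Normalized 1-D Gaussian kernel of width 2*radius + 1.
+    sigma = radius / 2 is an illustrative choice, not GIMP's exact mapping."""
+    sigma = max(radius / 2.0, 0.5)
+    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
+               for x in range(-radius, radius + 1)]
+    total = sum(weights)
+    return [w / total for w in weights]
+
+# As the radius (blur level) grows, weight spreads away from the centre,
+# so each output pixel mixes in more of its neighbours: a stronger blur.
+for r in (1, 4, 8):
+    print(r, round(gaussian_kernel(r)[r], 3))  # centre weight shrinks
+```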
+
+Our first study measured the baseline perceptibility of the three emphasis effects at each effect's eight different magnitude levels, using artificial static scatterplot visualizations rendered using Chart.js$^1$. Scatterplots were rendered on a white background using one-pixel gray axes. The second study used the same three effects, but only three of the eight levels; we used the visual variables to manipulate elements in visualizations taken from the MASSVIS dataset.
+
+§ 3.2 APPARATUS
+
+To record eye movement and interaction data we used an SMI REDm eye tracker running at 60 Hz on a Dell 24-inch monitor (screen resolution of $1920 \times 1080$) connected to a Windows 10 PC. The viewing distance was approximately 60 cm (Figure 2). Gaze data was recorded using SMI Experiment Center and analyzed with SMI BeGaze software. Users' heads were not fixed, but they were instructed to avoid unnecessary head movements. The experiment was conducted in an indoor laboratory with normal lighting conditions. All questionnaire data was collected through web-based forms.
+
+
+Figure 2: Setup and visualization graphic presented to participant for one trial.
+
+§ 4 STUDY 1: ESTABLISHING A BASELINE FOR PERCEIVED EMPHASIS
+
+§ 4.1 PARTICIPANTS
+
+Twenty-one participants were recruited from the local university pool. We excluded three participants from our analysis either for self-reporting a colour vision deficiency, or for high eye-tracking deviation; this left eighteen people (7 male, 11 female) who were given a \$10 honorarium for their participation. The average age of the participants was 26 (SD 4.5). All remaining participants reported normal or corrected-to-normal vision and no colour-vision deficiencies, and all were experienced with mouse-and-windows applications (10 hrs/wk). Six participants reported previous experience with information visualizations from university courses.
+
+§ 4.2 STUDY PROCEDURE
+
+Participants completed informed consent forms and demographic questionnaires. Participants then completed a colour vision test: we checked for colour-vision deficiency using ten Ishihara test plates [31]. Next, we used the five-point calibration procedure from the SMI experimental suite to calibrate the eye tracker. Once the eye-tracker calibration step was completed, participants carried out a series of trials with our scatterplot visualizations. The instructions given to participants were to visually explore each visualization and click on the element they felt was most emphasized. The monitor was blanked after each trial (after the participants clicked on an element) and the study software then asked the participant to rate the perceived visual prominence of the target mark, on a 1-7 scale. After participants provided their subjective rating, mouse position was re-centered and the next trial began.
+
+To prevent learning effects and to account for attention theories that suggest that visual attention can be guided by previously seen targets [1], participants were assigned a random presentation order of all visual stimuli. Each visualization contained one target mark (an emphasized stimulus) and twenty randomly-placed distractor marks, avoiding overlaps. While spatial distance between distractor marks and the target can influence colour difference perceptions [9], we elected to construct our scatterplots with variable element spacing to increase the visual complexity of the stimuli for increased ecological validity, while ensuring distractors and targets avoided overlaps. The three emphasis effect types were presented at their 8 magnitude-of-difference levels, and each emphasis level was presented 5 times. Each target maintained the same appearance for each of the 5 trials of the level, but changed location. Across all levels and locations, each visual variable was presented in 40 trials, and all participants completed the 120 trials of the study. Our study setup ensured that targets were located at a maximum of 18° viewing angle from the centre of the screen (a previous study shows that participants are able to find targets at angles of 18° at least 75% of the time in under 240 ms [21]). Our target positioning and the free-viewing method used in the study meant that participants were always able to keep targets close to their central vision. We changed the target's location to evaluate the visual variables at multiple locations within a visualization, while ensuring targets would remain visible in a participant's visual field.
+
+§ 4.3 STUDY DESIGN AND ANALYSIS
+
+Our goals for the study were to gather empirical data about perceptibility for the different visual variables and magnitude levels, and to use that data for a predictive model. To investigate differences, we used a repeated-measures design with two factors:
+
+ * Emphasis Effect (blur/focus, colour, size)
+
+ * Magnitude of Difference (levels 1-8)
+
+Because magnitude levels are specific to each visual variable (e.g., the scale for colour difference is independent of the scale for size or blur), the levels are intended only to test increasing magnitudes for each effect. The ANOVA analysis for Magnitude therefore investigates whether perception of the visual variable is affected by each variable's increasing magnitude.
+
+We used four dependent measures that provide different aspects of the user's experience with emphasis: first, we tracked time to eye fixation on the target (which provides an indication of early visual attention); second, we recorded the time to the user's mouse click on the target (which measures the user's conscious decision about emphasis); third, we recorded the user's total fixation time on the target (to determine whether stronger emphasis leads to longer fixation); and fourth, we asked for the user's subjective rating of the target's emphasis (which provides a more detailed measure of the user's conscious decision).
+
+We used our analysis results to explore relative differences between the emphasis effects, and to fit curves to our empirical data in order to develop a predictive model.
+
+§ 5 STUDY 1 RESULTS
+
+§ 5.1 TIME TO TARGET FIXATION, TIME TO TARGET CLICK, AND FIXATION TIMES
+
+We analyzed differences between emphasis effect and magnitude of difference on participants' time to target fixation, target click, and fixation time in an Area of Interest (AOI) surrounding the emphasized visual target. We report effect sizes for significant RM-ANOVA results as general eta-squared $\eta^2$ (considering .01 small, .06 medium, and > .14 large [41]). For all follow-up tests involving multiple comparisons, the Holm correction was used.
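+Since the Holm procedure is applied throughout our follow-up comparisons, a short sketch may help: it sorts the p-values and scales each by the number of hypotheses still in play, enforcing monotonicity. This mirrors R's `p.adjust(method = "holm")`; the p-values below are made up for illustration.
+
+```python
+def holm_correction(pvals):
+    """Holm step-down adjustment: the i-th smallest p-value is multiplied
+    by (m - i) for 0-based rank i, and adjusted values are kept non-decreasing."""
+    m = len(pvals)
+    order = sorted(range(m), key=lambda i: pvals[i])
+    adjusted = [0.0] * m
+    running_max = 0.0
+    for rank, i in enumerate(order):
+        p = min(1.0, (m - rank) * pvals[i])
+        running_max = max(running_max, p)  # enforce monotonicity
+        adjusted[i] = running_max
+    return adjusted
+
+# Hypothetical raw p-values from three pairwise comparisons.
+print(holm_correction([0.01, 0.04, 0.03]))
+```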
+
+Time to Target Fixation. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,34} = 17.73$, $p < 0.001$, $\eta^2 = 0.07$) and Magnitude of Difference ($F_{7,119} = 8.23$, $p < 0.001$, $\eta^2 = 0.19$) on time to target fixation, with no interaction between the factors ($F_{14,238} = 2.71$, $p = 0.27$). These data are shown in Fig 3; note that levels for size are adjusted in the charts to show perceptual size (but for analysis, size levels were mapped to the standard 1-8 scale). Across all Magnitudes, participants fixated on targets fastest in the Blur/Focus condition (828 ms), followed by Size (913 ms) and Colour (1242 ms). Post-hoc t-tests showed significant ($p < 0.01$) differences between Colour $\rightarrow$ Blur and Colour $\rightarrow$ Size. Across all emphasis effects, time to target fixation was the fastest at magnitude 8 (613 ms), and the slowest at magnitude 1 (1733 ms). A similar post-hoc t-test was applied for pairs of magnitude differences and showed a significant difference for $1 \rightarrow 2$-$8$ ($p < 0.001$), and $3 \rightarrow 7$ ($p < 0.05$).
+
+$^1$ https://www.chartjs.org/
+
+
+Figure 3: Time to Target Fixation. Empirical means (solid lines) and log curve (dashed lines). $R^2$ values for logarithmic curves: blur = 0.45, colour = 0.60, and size = 0.87. Levels for size are adjusted in the chart to show perceptual size rather than geometric size.
+
+Time to Target Click. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,34} = 40.99$, $p < 0.001$, $\eta^2 = 0.24$) and Magnitude of Difference ($F_{7,119} = 56.45$, $p < 0.001$, $\eta^2 = 0.45$) on target click, and an interaction between the factors ($F_{14,238} = 2.68$, $p < 0.01$, $\eta^2 = 0.08$). These data are illustrated in Fig 4. Participants clicked on focused targets fastest in the Blur condition (2051 ms), followed by Size (2141 ms) and Colour (2882 ms). Post-hoc t-tests again showed significant ($p < 0.01$) differences between Colour $\rightarrow$ Blur and Colour $\rightarrow$ Size. Averaged across all emphasis effects, time to click was fastest at magnitude 7 (1791 ms), and the slowest at magnitude 1 (3748 ms).
+
+Total Target Fixation Time. RM-ANOVA showed a significant main effect of Magnitude of Difference ($F_{7,119} = 6.65$, $p < 0.001$, $\eta^2 = 0.12$) on fixation time, but no difference between Emphasis Effects ($F_{2,34} = 2.76$, $p = 0.08$). Averaged across magnitude, total fixation time was similar among the emphasis effects (1990 ms for blur/focus; 2260 ms for both size and colour). Averaged across all effects, a Magnitude of 7 had the longest fixation time at 2470 ms, while a Magnitude of 1 had the least at 1913 ms. Post-hoc t-tests showed significant (all $p < 0.05$) differences on difference pairs $1 \rightarrow 5$, $1 \rightarrow 7$, $3 \rightarrow 7$, and $7 \rightarrow 8$.
+
+§ 5.2 SUBJECTIVE PERCEPTION OF VISUAL PROMINENCE
+
+After the presentation of each visualization, participants were asked to rate how visually prominent the emphasized data point appeared to them. Mean response scores are shown in Figure 5. We used the Aligned Rank Transform [68] with the ARTool package in R to enable analysis of the subjective prominence responses using RM-ANOVA. For subjective ratings of perceived emphasis there were main effects of Emphasis Effect ($F_{2,408} = 56.38$, $p < 0.001$) and Magnitude of Difference ($F_{7,408} = 24.98$, $p < 0.001$), with no interaction ($F_{14,408} = 1.43$, $p = 0.13$). Results from these analyses follow those from Time to Click, in which sharp objects in the Focus/Blur emphasis condition were, on average, perceived as most visually prominent, followed by Size and Colour - with perceived visual prominence increasing with the Magnitude of Difference.
+
+
+Figure 4: Time to Target Click. Empirical means (solid lines) and log curve (dashed lines). $R^2$ values for logarithmic curves: blur = 0.68, colour = 0.90, and size = 0.96. Levels for size are adjusted in the chart to show perceptual size rather than geometric size.
+
+§ 5.3 RE-ASSESSING PERCEPTUAL LEVELS FOR COLOUR AND SIZE
+
+The magnitude levels chosen for colour used the $\Delta E$ scale that is intended to provide perceptually-equal magnitude differences; similarly, the (adjusted) levels for size also provide linear magnitude changes in perceptual space. This means that we would expect a linear change in perceptual response to the changes in colour and size (in Figures 3, 4, and 5). However, these charts show curves for both colour and size, not linear relationships. This suggests that for visualization tasks, the perceptual scales for colour and size may underestimate the decrease in emphasis as the effect increases in magnitude.
+
+§ 5.4 PARTICIPANT PREFERENCES AND COMMENTS
+
+At the end of the study session we asked participants to state which emphasis effect they felt was the most visually prominent, least prominent, and to provide further comments on their responses. Overall, focus/blur was seen as most prominent (seven participants voted for blur/focus, six for size, and four for colour). One participant stated that no one effect stood out as most prominent. Participant comments for the three emphasis effects reflect the empirical findings, favouring blur/focus. One participant reported, "[In focus/blur] other data points were very blurry and hard to distinguish so the clear one stood out more than if the colour were different or the size were different (i.e. could only focus on the emphasized one, compared to the other types where you could still view the non-emphasized points)". Another participant stated "[blur/focus] clearly hid the other circles". Participants that favoured size reported that size may be easier for quick comparisons; one remarked "It is easier for the eye to visualize a bigger/smaller size in comparison to other dots vs trying to see a colour difference of a similar size dot".
+
+§ 5.5 BUILDING AN INITIAL EQUIVALENCE MODEL
+
+We used the raw data from Study 1 to build initial predictive models of time to target fixation, time to click, and subjective rating of emphasis - and although more data will be needed to refine the predictions, we are able to capture some of the main differences between the three emphasis effects that we examined. Our models are simple functions fit to the raw empirical data; we use logarithmic functions because they are commonly used to describe human performance in signal-detection and perceptual studies [67]. We fit the functions to the data using R (`lm(mean ~ log(magnitude of difference))`); we could then use R's `predict` function to get predicted values. The fitted logarithmic curves for time to target fixation, time to click, and subjective ratings are shown in Figures 3, 4, and 5. Captions for these figures also state the $R^2$ values for the accuracy of the fitted functions to the data: for time to fixation the curve was only moderately accurate, but for time to click and subjective ratings, the accuracy was much higher.
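+The fitting step has a closed-form equivalent: a simple linear regression on log-transformed levels. The sketch below mirrors the `lm(mean ~ log(...))` call in plain Python; the per-level means are hypothetical stand-ins, not our empirical data.
+
+```python
+import math
+
+def fit_log_curve(levels, means):
+    """Least-squares fit of mean = a + b * log(level),
+    equivalent in form to R's lm(mean ~ log(level)). Returns (a, b)."""
+    xs = [math.log(v) for v in levels]
+    n = len(xs)
+    mx, my = sum(xs) / n, sum(means) / n
+    b = (sum((x - mx) * (y - my) for x, y in zip(xs, means))
+         / sum((x - mx) ** 2 for x in xs))
+    a = my - b * mx
+    return a, b
+
+def predict(coef, level):
+    a, b = coef
+    return a + b * math.log(level)
+
+# Hypothetical time-to-fixation means (ms) for magnitude levels 1-8.
+levels = list(range(1, 9))
+means = [1733, 1280, 1050, 950, 880, 840, 800, 613]
+coef = fit_log_curve(levels, means)
+print(round(predict(coef, 4)))  # interpolated estimate at level 4
+```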
+
+
+Figure 5: Perceived Prominence of Emphasis Effects. Empirical means (solid lines) and log curve (dashed lines). $R^2$ values for logarithmic curves: blur = 0.80, colour = 0.97, and size = 0.98.
+
+The logarithmic curves provide a simple model that allows investigation of equivalence between the three effects. For all three measures, the models allow us to observe some main features of the relationships: first, colour is consistently less perceptible than the other two effects, both in terms of performance data and subjective ratings; second, size and blur/focus are very similar at level 3 and above on both performance measures, but at levels 1 and 2, size is somewhat weaker; third, size and blur/focus are more clearly separated in subjective ratings, with clear differences up to level 5.
+
+These models, once validated, can allow simple calculation of equivalence between effects. As an example of how the calculation works, consider a scenario where a designer needs to change from a blur/focus emphasis effect to one that uses colour; interpolation of the curves of Figure 4 indicate that to translate the perceived emphasis of level 1 of blur/focus, a designer would need to use a colour effect of approximately level 7. However, before we can consider using the models for equivalence, we need to verify that they are robust enough to work with other visualizations. We do this by predicting data from Study 2 with the models developed from Study 1, as described below.
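+Under log models, this equivalence calculation reduces to inverting one fitted curve: find the level of effect B whose predicted value matches effect A's prediction at the chosen level. The coefficients below are illustrative stand-ins, not our fitted values.
+
+```python
+import math
+
+def equivalent_level(coef_a, coef_b, level_a):
+    """Solve a_B + b_B*log(x) = a_A + b_A*log(level_a) for x,
+    i.e., the level of effect B with the same predicted emphasis."""
+    a1, b1 = coef_a
+    a2, b2 = coef_b
+    target = a1 + b1 * math.log(level_a)
+    return math.exp((target - a2) / b2)
+
+# Hypothetical time-to-click models (ms): (intercept, slope on log(level)).
+blur = (3000.0, -500.0)
+colour = (3750.0, -380.0)
+# Which colour level matches blur/focus emphasis at level 1?
+print(round(equivalent_level(blur, colour, 1), 1))
+```
+
+With these illustrative coefficients the answer lands around level 7, echoing the interpolation described above.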
+
+§ 6 STUDY 2: PERCEPTION OF EMPHASIS IN COMPLEX VISUALIZATIONS
+
+In contrast to the scatterplots used in Study 1, many visualizations include other visual factors such as background graphics, labels, titles, annotations and other embellishments that may affect how a user's attention is guided and ultimately how an emphasis effect is perceived. Therefore, we need to understand how users perceive emphasis effects in more complex visualizations. We designed our study following a similar method to Study 1, but evaluated emphasis effects in complex, real-world visualization graphics from the MASSVIS database [7].
+
+§ 6.1 IMAGE DATA
+
+As the emphasis effects we are studying are not particularly targeted towards a specific visualization type, we chose the MASSVIS database [7] as the source for image data. The dataset contains 5000 static data visualizations that are obtained from a variety of online sources, are generated from real-world applications, and are targeted to a broad audience; MASSVIS is a popular choice for investigating how general users understand data visualizations. We selected a subset of 16 visualizations from the dataset covering a variety of visualization types, including maps and scatterplots. All of the selected graphics were chosen based on having scatterplot-like features (e.g., points on a map) to ensure consistency across our two studies. Each of the 16 visualizations had one emphasis effect applied at a time (Fig 6), and these augmented images were used to evaluate how users perceive the different emphasis effects.
+
+§ 6.2 MAGNITUDE LEVELS FOR EMPHASIZED STIMULI
+
+We used a subset of Study 1's magnitude levels for Study 2 - we chose three uniform steps (1, 4, and 7) from Study 1, giving us coverage of the range we used in the baseline results. Example graphics with an emphasized data point are illustrated in Fig 6.
+
+§ 6.3 EXPERIMENTAL DESIGN AND PROCEDURE
+
+The experiment followed a similar procedure to that of Study 1. After providing informed consent and going through the eye-tracker calibration, participants were instructed to explore each visualization and to click on the area they felt was most emphasized. Similar to Study 1, to prevent learning effects and pre-attentive processing of previously seen stimuli, participants were assigned a random presentation order. Test graphics contained one randomly-placed test mark in the graphic. After each stimulus presentation, participants were asked to rate the perceived visual prominence of the emphasized point they selected. Given the variety of colours and mark sizes in our sampled visualizations, our test mark colour and size differences are relative to those of the marks in each visualization. Individual differences for each image are considered in our discussion section. Each visual variable was presented in 48 trials (3 levels $\times$ 16 graphics), and all participants completed all 144 trials of the study.
+
+§ 6.4 PARTICIPANTS
+
+Twenty-four new participants (none of whom participated in Study 1) were recruited from the local university pool. We excluded four participants from our analysis for high eye-tracking deviation, or failure to follow experiment instructions. The remaining twenty participants (9 male, 9 female, 2 non-binary) were given a \$10 honorarium for their participation. The average age of the participants was 26 (SD 6.02) and all reported normal or corrected-to-normal vision and no colour-vision deficiencies; all were experienced with mouse-and-windows applications (10 hrs/wk), and 6 had previous visualization experience. We used the same experimental setup described in Study 1.
+
+§ 6.5 STUDY DESIGN AND ANALYSIS
+
+To understand how users perceive emphasis effects in more complex, real-world visualizations, and to assess our initial equivalence model, we used a repeated measures, within-participants design with the same two factors as Study 1:
+
+ * Emphasis Effect (blur/focus, colour, size).
+
+ * Magnitude of Difference (Levels 1, 4, and 7 from Study 1).
+
+We used the same four dependent variables: time to fixate on target, time to target click, total fixation time, and subjective rating of perceived emphasis.
+
+
+Figure 6: Example stimulus display for study 2: (a) baseline, (b) focus/blur, (c) size/area, (d) colour. Rows 1-2 show magnitude-of-difference level 4; row 3 shows level 7.
+
+§ 7 STUDY 2 RESULTS
+
+We again analyzed emphasis effect and magnitude of difference on our four dependent measures; we again report effect sizes as general eta-squared $\eta^2$, and use the Holm correction for follow-up tests.
+
+Time to Target Fixation. RM-ANOVA found no main effect of Emphasis Effect ($F_{2,38} = 1.78$, $p = 0.18$) on time to target fixation, but did find an effect of Magnitude of Difference ($F_{2,38} = 3.80$, $p < 0.01$, $\eta^2 = 0.31$), and an interaction between the factors ($F_{4,76} = 3.09$, $p < 0.01$, $\eta^2 = 0.05$). These data are shown in Fig 7. Post-hoc t-tests showed significant ($p < 0.01$) differences between each magnitude-of-difference pair. Averaged across all emphasis effects, time to fixate on an emphasized data point was fastest at Magnitude 7 (3548 ms), and the slowest at Magnitude 1 (4932 ms).
+
+Time to Target Click. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,38} = 41.18$, $p < 0.01$, $\eta^2 = 0.32$) and Magnitude of Difference ($F_{2,38} = 58.04$, $p < 0.01$, $\eta^2 = 0.62$) on target click time, and an Emphasis Effect $\times$ Magnitude of Difference interaction ($F_{4,76} = 14.64$, $p < 0.01$, $\eta^2 = 0.15$). These data are illustrated in Fig 8. Similar to Study 1, focused targets in the Blur condition were clicked on fastest (3106 ms), followed by Size (4812 ms) and Colour (5787 ms). Holm-corrected post-hoc t-tests showed significant ($p < 0.01$) differences between all pairs. Averaged across all emphasis effects, time to click on an emphasized data point was fastest at magnitude 7 (4616 ms), and the slowest at magnitude 1 (7012 ms).
+
+Total Target Fixation Time. RM-ANOVA showed significant main effects of Emphasis Effect ($F_{2,38} = 3.41$, $p < 0.01$, $\eta^2 = 0.04$) and Magnitude of Difference ($F_{2,38} = 15.08$, $p < 0.01$, $\eta^2 = 0.22$) on total fixation time, and an Emphasis Effect $\times$ Magnitude of Difference interaction ($F_{4,76} = 4.30$, $p < 0.01$, $\eta^2 = 0.08$). Averaged across magnitudes of difference, fixation time for blur/focus was 1369 ms, and 1240 ms for both Size and Colour. Averaged across all effects, a magnitude of difference of 7 gathered the most attention with a fixation time of 1494 ms, while a difference of 1 had the least fixation time at 1080 ms. Post-hoc t-tests showed significant (all $p < 0.01$) differences for Magnitude of Difference but no difference among Emphasis Effects.
+
+§ 7.1 SUBJECTIVE PERCEPTION OF VISUAL PROMINENCE IN COMPLEX VISUALIZATIONS
+
+After the presentation of each visualization, participants were asked to rate the visual prominence of the emphasized data point. Mean response scores are shown in Fig 9. We used the Aligned Rank Transform [68] with the ARTool package in R to enable analysis of the subjective responses using RM-ANOVA. RM-ANOVA showed there were main effects of Emphasis Effect ($F_{2,171} = 16.05$, $p < 0.001$) and Magnitude of Difference ($F_{2,171} = 60.00$, $p < 0.001$), and an interaction between the factors ($F_{4,171} = 2.57$, $p = 0.03$). Results from these analyses are shown in Figure 9 and follow those from Time to Click, in which sharp objects in the Focus/Blur effect were perceived as most visually prominent, followed by Size and Colour - with perceived visual prominence increasing as the magnitude of difference increased.
+
+
+Figure 7: Time to Target Fixation in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+§ 7.2 PARTICIPANT PREFERENCES AND COMMENTS
+
+After completing the study, participants provided their preferences and general comments on the emphasis effects they identified. Participant comments echoed our other findings. Participants made several comments on how the focus/blur emphasis effect helped them to rapidly identify content: for example, "It [the emphasized point] just popped out more than the rest, provided more contrast"; "Because it [blur/focus] didn't allow me to see the others, I focused all my attention to the point that was not blurry". One participant favoured size, stating "[size] always drew my eye immediately".
+
+When asked whether there were other areas of a visualization that got their attention, one participant remarked "The titles and information, I was trying to read them and see if that would have helped somehow to identify what was emphasized", while another participant stated "I occasionally looked at the titles to see what the information was representing".
+
+§ 7.3 CONSISTENCY ACROSS STUDIES AND MODEL VALIDATION
+
+We used the models built from Study 1 data to predict the data gathered for each effect and magnitude used in Study 2, and then compared the empirical data points to the predicted values (predictions are shown in Figures 7, 8 and 9 as dotted lines). Although the absolute values of the predictions are lower than the true values, the predictions do capture many of the characteristics of the Study 2 results, as discussed below. We tested the correlation between the predicted and empirical values: for time to target fixation, the correlation was 0.82 ($R^2 = 0.87$); for time to target click, the correlation was 0.92 ($R^2 = 0.94$); for subjective ratings, the correlation was 0.96 ($R^2 = 0.96$).
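The prediction-versus-empirical comparison can be sketched as follows. The arrays are hypothetical stand-ins for per-condition means, not the study's data; they illustrate how a high correlation can coexist with predictions that sit systematically below the empirical values:

```python
import numpy as np

def compare_predictions(predicted, empirical):
    """Return (Pearson r, coefficient of determination of the predictions)."""
    predicted = np.asarray(predicted, dtype=float)
    empirical = np.asarray(empirical, dtype=float)
    r = float(np.corrcoef(predicted, empirical)[0, 1])
    # R^2 computed as 1 - SS_res / SS_tot; unlike r, this penalizes
    # systematic bias (e.g. predictions uniformly lower than the data).
    ss_res = float(np.sum((empirical - predicted) ** 2))
    ss_tot = float(np.sum((empirical - empirical.mean()) ** 2))
    return r, 1.0 - ss_res / ss_tot

# Hypothetical per-condition means (ms); the predictions are shifted 100 ms
# below the "empirical" values, mirroring the underprediction pattern.
emp = [1100, 1200, 1350, 1150, 1280, 1480, 990, 1050, 1120]
pred = [v - 100 for v in emp]
r, r2 = compare_predictions(pred, emp)
```

With a pure shift, the correlation is perfect (r = 1.0) while the determination coefficient is noticeably lower, which is why both numbers are worth reporting.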
+
+If equivalence models are to be useful, the perceptibility of emphasis must be reasonably reliable across different visualization situations. Our two studies involve two visual settings: plain scatterplots in Study 1, and more complex visualizations in Study 2 (with background graphics and colours, text, and multiple visual styles). Nevertheless, there are several similarities between the two sets of results (as indicated by the very strong correlation scores). In both studies, the colour effect was less perceivable (higher time to target fixation and target click time, and lower subjective ratings); however, the earlier difference between colour and size at the highest magnitude is now gone. As in Study 1, the blur/focus effect is again consistently more perceivable (and is rated as more prominent). Also as in Study 1, there was a similar improvement in performance as the magnitude of the effect increased; there was less of a clear logarithmic curve for the emphasis effects (although this would be less apparent with only three magnitude levels in Study 2).
+
+
+Figure 8: Time to Target Click in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+
+Figure 9: Perceived Prominence of Emphasis Effects in Complex Visualizations. Empirical means (solid lines) and predicted means (dashed lines).
+
+The most obvious difference between the predicted and real values is that times for both fixation and clicking were substantially higher with the MASSVIS visualizations. However, this was an expected difference because of the additional visual information available in each image - and because all emphasis effects were affected similarly, any equivalence calculations using the model will be unaffected.
+
+The subjective responses were particularly well predicted by the Study 1 model (see Figure 5), with the predicted points being accurate both in terms of absolute score and the relationship between the effects. This is a particularly valuable finding, because as discussed below, it may be that the user's perception of emphasis is a more important measure for designers than the user's gaze patterns or click behaviour.
+
+The main point where the predictions were inaccurate - both for performance data and for subjective ratings - was the perceptibility of the size effect at level 7. After reviewing our stimuli for this condition, there are two possible reasons for the empirical results being different from the predicted values. First, two of the visualizations (see Figure 10) contained a large number of data points and many visual elements overall, and previous research has shown that it is more difficult to recognize objects in a cluttered environment due to visual crowding, which can create a visual-perception bottleneck [40]. Second, when data points in these visualizations are dense, composite blobs with several overlapping points create marks that are larger than the default size. Although none of our target elements were in or beside these blobs, the presence of varying-size elements in the visualization may have reduced the prominence of our manipulation and forced participants to do a more careful visual search.
+
+Figure 10: Study 2 graphics. Graphic (a) contained a larger number of data points with composite blobs, leading to visual crowding. Graphic (b) has multiple visual elements (shapes, colours, and text), reducing the effect of size emphasis on a data point.
+
+This anomaly with the size effect points to another useful aspect of having a predictive model, however: that is, the identification of empirical results that are not as expected and that may need to be investigated further.
+
+§ 8 DISCUSSION
+
+§ 8.1 SUMMARY OF FINDINGS
+
+We investigated how users perceive colour, size, and blur/focus when used as emphasis effects in both basic scatterplots and more complex visualizations. The magnitude scales for each variable are different, since we do not have a way to translate perceptual equivalency across effects; investigating this equivalency is one of the goals of our studies. Our evaluations provide several findings:
+
+ * Different effects are perceived differently. Across both studies, blur/focus led to fastest target fixation and target click, and was rated highest in terms of visual prominence by participants; size also led to fast performance and high ratings of visual prominence at higher magnitude levels (with one exception); colour led to the slowest performance and lowest ratings for prominence.
+
+ * Magnitude increases emphasis, although with diminishing effects. Across both studies, increasing the magnitude of the effect consistently increased visual prominence (again, with the same one exception); all effects showed a tailing-off of the effect of magnitude.
+
+ * Models of perceived emphasis work reasonably well. A predictive model based on logarithmic curves fit the Study 1 data well, and was reasonably accurate at predicting emphasis in Study 2 (particularly the subjective ratings).
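
The logarithmic model referred to in the last point can be fit with ordinary least squares over $\mathrm{time} = a + b\,\ln(\mathrm{magnitude})$. This sketch uses invented fixation times, not the Study 1 data:

```python
import numpy as np

def fit_log_model(magnitudes, times):
    """Fit time = a + b * ln(magnitude) by least squares; return (a, b)."""
    x = np.log(np.asarray(magnitudes, dtype=float))
    y = np.asarray(times, dtype=float)
    A = np.column_stack([np.ones_like(x), x])  # design matrix [1, ln(m)]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# Hypothetical fixation times (ms) that tail off as magnitude grows,
# the diminishing-returns shape described above.
levels = [1, 2, 3, 4, 5, 6, 7]
times = [1500, 1290, 1170, 1080, 1010, 960, 910]
a, b = fit_log_model(levels, times)
pred = a + b * np.log(np.array(levels, dtype=float))
```

A negative slope $b$ captures the improvement with magnitude, and the logarithm captures the tailing-off.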
+
+In the following sections, we consider possible explanations for these results, look at how our findings and models can be used to assist designers in building visualizations with emphasis, and discuss limitations and directions for future research in this area.
+
+§ 8.2 EXPLANATION OF RESULTS
+
+§ DIFFERENCES BETWEEN THE EMPHASIS EFFECTS
+
+We saw consistent differences in fixation time, click time, and subjective ratings for our three emphasis effects, and the reasons for these differences arise from each technique's fundamental properties (as introduced earlier). First, blur/focus is an effect that manipulates the entire visualization except for the emphasized data element, and so has advantages over single-element techniques like colour and size. In particular, the blur effect guarantees that there will be no inadvertent competing visual stimuli that could slow the user's visual search (as happened with size at level 7 in Study 2), because all other elements are blurred. Second, the relative advantage for size over colour in our studies can be explained by the inherent limit on colour difference (i.e., there is a maximum difference between any two colours), whereas size difference has an unlimited upper end. The study results show that our range of magnitudes for size was larger than our range for colours - which points to the need for a better understanding of equivalences between effects.
+
+§ EFFECTIVENESS OF THE PREDICTIVE MODEL
+
+The model built from Study 1 data provided accurate predictions of the results in Study 2 ($R^2$ values of 0.87, 0.94, and 0.96 for fixation time, click time, and subjective rating), and the model correctly represented the overall relationships between the emphasis effects and the changes expected with increasing magnitude level. The success of the predictive model shows that perception of emphasis is consistent between our two experimental settings - plain scatterplots in Study 1, and realistic scatterplots with other visual features in Study 2. In addition, the model was a useful tool for identifying results that need further exploration, including the greater overall response times for the MASSVIS dataset, and the anomalous performance of size at level 7. Of course, these anomalies can only be spotted when there are empirical results to compare to the model, but it is likely that the model will be used together with empirical testing until it matures with the addition of more data in different settings.
+
+§ THE SIZE-AT-LEVEL-SEVEN ANOMALY
+
+As described above, the size effect at level seven was less prominent than expected, with two possible reasons: visual crowding from other elements in the visualizations, and inadvertent size variance from overlapping data points (see Figure 10). This result clearly indicates that there can be emergent properties in real-world visualizations that interfere with the user's perception of emphasis, and thus a planned emphasis effect must be considered in light of other visual elements. These real-world interactions are another motivation to have equivalence metrics, so that designers can switch from one emphasis effect to another (and preserve the prominence of the emphasized element) when interference is discovered.
+
+While our presentation of the visual stimuli ensured that distractor marks and stimuli would not overlap, changes in the distance between distractors and the emphasized elements may affect their noticeability. Because effects of visual crowding occur with a wide range of objects, colours and shapes [66], this phenomenon may have affected other individual data points as well; but our explicit decision to not control the distance between points in Study 1 means that our results provide a more valid representation of the challenges faced by designers when emphasizing elements in a crowded visualization. As noted above, global effects such as blur/focus are less affected by visual crowding, as blurring non-targets partially eliminates them from a user's view, leaving only the focused element available. We note that it is possible to quantify the overall degree of visual complexity in an image, and in future work this could be added to our models as a factor (i.e., further studies could examine perceived emphasis at different levels of crowding).
+
+A final reason that needs to be considered is the existence of possible noise in our results. We can see similar patterns from Study 1, with certain levels of a variable performing similarly to or worse than the previous lower level (such as Size level 5 being slower than level 4). However, these differences are small (approximately 300 ms), and given our overall quick response times across our empirical measures, variations of approximately 500 ms could be attributed to normal variation in participant response times.
+
+§ 8.3 IMPLICATIONS FOR DESIGN
+
+Our findings are applicable in a number of different visualization contexts. Visualization designers often need to draw a user's attention to important data points; our studies improve understanding of how visual cues are detected as emphasis effects and offer insights to their perceived visual prominence. While the current set of visual stimuli examined was relatively small, we intend to explore further visual variables in future studies.
+
+A first design implication is that global visual effects such as blur/focus can achieve a high perceived visual prominence and remain relatively unaffected by a visualization's background. Perceived differences for other variables such as colour and size can be affected by the non-target elements, but by blurring the non-target objects in a visualization, the focused item is less likely to be affected by visual crowding. In visualizations with a large number of objects (such as different colours and shapes), blurring non-targets may achieve the highest noticeability - however, blur/focus cannot be used in visualizations where the user needs to inspect elements that are not emphasized.
+
+Second, predictive models of perceived visual prominence can be valuable tools for designers. Although our model is still only a first step, it was already able to predict the results of Study 2 reasonably well, and can already be used to consider the equivalence between perception of the three effects that we tested. (We note that the model should not be used to calculate exact conversion factors between the effects, but rather to understand general relationships and approximate relative magnitudes). As further studies are carried out and more data is added, models like ours can become resources for designers that can accelerate the design of a narrative visualization. It is interesting that the model was most accurate at predicting people's subjective ratings of prominence, which raises the question of which metric is most important. It may be that subjective perception is a better measure for a model, because when a designer adds emphasis to a visualization, they typically want the viewer to know that the item is being emphasized - that is, what the viewer thinks is being emphasized is possibly more important than what their eye is drawn to first.
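As a sketch of the equivalence idea (not the paper's actual tooling), two fitted logarithmic curves can be inverted to find the magnitude of one effect whose predicted prominence matches another effect at a given level; all coefficients below are invented:

```python
import math

def equivalent_magnitude(a_A, b_A, m_A, a_B, b_B):
    """Magnitude of effect B matching effect A's predicted value at m_A.

    Both effects are modelled as value = a + b * ln(magnitude);
    solve a_B + b_B * ln(m) = a_A + b_A * ln(m_A) for m.
    """
    target = a_A + b_A * math.log(m_A)
    return math.exp((target - a_B) / b_B)

# Invented coefficients for two hypothetical effects: effect A at level 4
# corresponds roughly to effect B at level ~4.9 under these curves.
m = equivalent_magnitude(a_A=1500, b_A=-300, m_A=4, a_B=1400, b_B=-200)
```

Consistent with the caveat above, such a conversion should be read as an approximate relative magnitude, not an exact conversion factor.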
+
+A third design consideration is for designers utilizing colour as a way to emphasize certain data points. It should be noted that a subset of users suffer from various genetic conditions which cause atypical forms of colour perception - in such cases, a different emphasis effect may be more appropriate. Designers may wish to use our metrics and results to evaluate the effectiveness of a different visual effect to achieve the same perceived importance. Our future work intends to evaluate the use of various visual cues for emphasis effects and compare the sets for individuals with normal vision and users with a vision deficiency.
+
+Beyond visualization, our findings can also be applicable in other domains. For example, interface designers may wish to use our results as a way of devising methods of providing visual feedback. For instance, visual feedback during "find" tasks varies across software such as web browsers and PDF readers - with some software opting to colour-highlight an item when found, while others increase its size or use a combination of both. To effectively guide a user's attention to an item, designers can use perceived visual prominence as a method to evaluate and compare different visual effects.
+
+§ 8.4 GENERALIZABILITY, LIMITATIONS, AND FUTURE WORK
+
+Our studies tested a limited range of visualizations (i.e., scatterplot presentations), so the application of our results should be limited to that type; in Study 2, however, we did test a wide variety of different visual styles taken from real-world examples, and so we believe that our findings will be robust across a range of real scatterplots. In future work, we plan to extend our work to other types of visualizations and other real-world scenarios, with a variety of datasets. We also tested only a single emphasized data point, and an opportunity to extend our work is to investigate visualizations that emphasize multiple points. Multiple points of emphasis also provide us with another opportunity to test the predictions of the model - that is, if two data elements are emphasized with different effects that our model predicts should be equally prominent, which will the user fixate on first? (We note that this kind of comparison is only possible with single-element effects such as size and colour.)
+
+The difference levels for the visual variables tested in our experiments are intended to be generalizable for the design of emphasized elements in typical visualizations. However, although we tested a wide range of magnitudes of difference, it is possible that our findings are influenced by the magnitudes we tested (as noted above in terms of the range of difference that is possible with each visual variable). A variety of other visual variables can be implemented as emphasis effects (see Healey for a review [24]). Visualization designers who intend to use a different range of magnitudes of difference or emphasis effects may follow methods similar to the ones presented in this paper - in particular, testing users' reaction times and their subjective ratings to determine the noticeability of their effects. We also plan to carry out studies that look at how the magnitude of emphasis is affected by clutter and by other mappings of visual variables to data variables.
+
+Other factors in generalization should be considered as well. Colour perception models rely on a simplified model of the world that assumes perfect viewing conditions. While this assumption is necessary for understanding the visual system, complexities of the real world such as the viewing environment [37], lighting conditions [8, 48], and display device [53] may affect visual perception. Our experimental viewing conditions were controlled and remained stable throughout the studies; however, future work could extend these results to larger user samples and different viewing conditions, using crowd-sourcing methods [25].
+
+There are several additional opportunities for extending our findings. We explored emphasis effects with static visual variables (time-invariant in terms of Hall et al.'s framework [22]) but there are many other effects that could be tested, including depth, outline, transparency, or shape. Additionally, future research should investigate time-variant emphasis effects with dynamic visual variables such as flicker or motion and extend our results to interactive visualizations.
+
+We evaluated our emphasis effects based on empirical metrics such as time to target fixation and time to mouse click. There are other ways emphasis effects can be evaluated. For instance, the MASSVIS dataset contains a comprehensive set of user attention maps on the visualizations [7]. We intend to analyze viewers' attention maps on the visualizations, comparing each visualization's attention maps with and without an emphasis effect applied.
+
+Finally, we elected to use CIE2000 as it is commonly used in visualization and has been methodologically validated in past studies [28, 61]. Future work may consider the use of other colour difference models or colour spaces, such as CIECAM02 [45]. We anticipate that investigating a number of different colour spaces will result in more accurate models of colour difference perception for visualization design.
+
+§ 9 CONCLUSION
+
+Emphasis is an essential component of InfoVis, and is used by designers to draw a user's attention or to indicate importance. However, it is difficult for designers to know how different emphasis effects will compare and what level of one effect is equivalent to what level of another when designing visualizations. We carried out two user studies to evaluate the visual prominence of three emphasis effects (blur/focus, colour, and size) at various strength levels, and developed a predictive model that can indicate equivalence between effects. Results from our two studies provide the beginnings of an empirical foundation for understanding how visual effects operate and are experienced by viewers when used for emphasis in visualizations, and provide new information for designers who want to control how emphasis effects will be perceived by users.
+
+§ 10 ACKNOWLEDGMENTS
+
+This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and additionally by a Canada First Research Excellence Fund (CFREF) grant sponsored by the Global Institute for Food Security (GIFS).
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..749b6a15a4c3e4d8cb68812ac60b78ea9a904a45
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,331 @@
+# Support System for Etching Latte Art by Tracing Procedure Based on Projection Mapping
+
+Momoka Kawai*
+
+Tokyo Denki University
+
+Shuhei Kodama†
+
+Tokyo Denki University
+
+ASTRODESIGN, Inc.
+
+Tokiichiro Takahashi‡
+
+Tokyo Denki University
+
+ASTRODESIGN, Inc.
+
+## Abstract
+
+It is difficult for beginners to create well-balanced etched latte art patterns using two fluids with different viscosities, such as foamed milk and syrup. Even while watching process videos that show the procedure, it is not easy to create well-balanced etched latte art.
+
+In this paper, we propose a system that supports beginners in creating well-balanced etched latte art by projecting the etching procedure directly onto a cappuccino.
+
+In addition, we examine the similarity between etched latte art and design templates using background subtraction. The experimental results show the progress in creating well-balanced etched latte art using our system.
+
+Keywords: Projection Mapping, Etching Latte Art, Learning Support System.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-User studies; Applied computing-Education-Computer-assisted instruction
+
+## 1 INTRODUCTION
+
+Etching latte art is the practice of drawing images on a coffee using a thin rod such as a toothpick [2]. There are several methods of etching latte art depending on tools and toppings. An easy method of etching latte art for beginners is pouring syrup directly onto milk foam and etching it to create patterns, as shown in Figure 1. The color combination obtained using syrup automatically makes a drink appear impressive. Hence, baristas are under less pressure to create a difficult design [8]. However, it is difficult for beginners to imagine how to pour syrup and etch it to create beautiful patterns because etching latte art involves two fluids with different viscosities. Furthermore, even though beginners can watch videos that show the procedure of etching latte art, latte art created through imitation does not appear well balanced. In this study, we define the etched latte art drawn in the middle of a coffee cup with unified lines as "well balanced." It is impossible to etch well-balanced latte art without repeated practice.
+
+We develop a support system that helps even beginners etch well-balanced latte art by directly projecting the etching procedure using syrup onto a cappuccino. Moreover, the deformation of a fluid with viscosity, such as syrup, is projected through animations to support beginners in understanding the deformation of such fluids. We demonstrate the usefulness of this system through a questionnaire survey and examine the similarity between etched latte art and design templates using background subtraction. To the best of the authors' knowledge, there is no system that shows the procedure of etching latte art in the manner that our system does.
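The similarity examination mentioned above is based on background subtraction. The sketch below is a minimal illustration of such a comparison using synthetic grayscale images; the threshold value and the intersection-over-union score are illustrative assumptions, not necessarily the exact pipeline used here:

```python
import numpy as np

def foreground_mask(image, background, threshold=30):
    """Binary mask of pixels that differ from the background (syrup lines)."""
    diff = np.abs(image.astype(int) - background.astype(int))
    return diff > threshold

def similarity(mask_a, mask_b):
    """Intersection-over-union of two binary masks (1.0 = identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Synthetic 8x8 example: a flat milk-foam background, an etched "art" image
# with one syrup line, and a design template whose line is slightly shorter.
bg = np.full((8, 8), 200, dtype=np.uint8)
art = bg.copy()
art[3, 1:7] = 60        # 6-pixel syrup line in the photographed result
template = bg.copy()
template[3, 2:7] = 60   # 5-pixel line in the design template
score = similarity(foreground_mask(art, bg), foreground_mask(template, bg))
```

Subtracting the background isolates the drawn syrup in both images, so the score reflects only how well the etched lines match the template.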
+
+## 2 RELATED WORK
+
+### 2.1 Support for Creating Latte Art
+
+There are two methods of creating latte art, etching and free pouring. In the former, syrup is poured on milk foam, and patterns are created via etching. In the latter, no tools are used and patterns are created using only the flow of milk, such as pouring.
+
+Hu and Chi [4] proposed a simulation method that considered the viscosity of milk to express the flow of milk in latte art. The flow of milk obtained in their research was quite similar to that in actual latte art.
+
+However, from the viewpoint of practicing latte art, users must estimate the paths of pouring milk and acquire the manipulation of a milk jug from simulation results. Moreover, it is difficult for users to understand the flow of milk unless they have advanced skills.
+
+Pikalo [7] developed an automated latte art machine that used a modified inkjet cartridge to infuse tiny droplets of colorant into the upper layer of a beverage. Latte art with any design can be easily created using this machine, which is similar to a printer.
+
+This machine enables everyone to create original latte art without any barista skills. However, the machine cannot create latte art using milk foam, such as free pour latte art. Therefore, baristas still require practice to create other kinds of latte art.
+
+Kawai et al. [5] developed a free pour latte art support system that showed the procedure of pouring milk in the form of animated lines. This system targets baristas who have experience in creating basic free pour latte art and know the amount of milk to be poured. People without any experience in creating free pour latte art must practice several times.
+
+### 2.2 Support for Creating Latte Art Using a Projector
+
+Flagg et al. [3] developed a painting support system by projecting a painting procedure on a canvas. They placed two projectors behind users to prevent the projected painting procedure from being hidden by users' shadows. This system is quite large scale and expensive.
+
+Morioka et al. [6] visually supported cooking by projecting how to chop ingredients at appropriate locations; this system can indicate how to chop ingredients and provide detailed notes and cooking procedures, which are difficult to follow while reading a recipe during cooking. However, this system is also quite large scale: it projects instructions from a hole in the ceiling, so users require a kitchen with such a hole to use it.
+
+Xie et al. [9] proposed an interactive system that enables even inexperienced people to build large-scale balloon art in an easy and enjoyable manner using spatially augmented reality. This system provides fabrication guidance to illustrate the differences between the depth maps of a target three-dimensional shape and a work in progress. In addition, they designed a shaking animation for each number to increase user immersion.
+
+Yoshida et al. [10] proposed an architecture-scale computer-assisted digital fabrication method that used a depth camera and projection mapping. This system captured a current work using a depth camera, compared a scanned geometry with a target shape, and then projected guiding information based on an evaluation. They mentioned that augmented reality devices, such as head-mounted displays, could be used. However, the calibration of places displaying instructions for each device would be required. The proposed projector-camera guidance system can be prepared using a simple calibration process, and it allows for intuitive information sharing among workers.
+
+---
+
+*e-mail: m-kawai@vcl.jp
+
+†e-mail: s-kodama@vcl.jp
+
+‡e-mail: toki@vcl.jp
+
+Copyright held by authors. Permission granted to
+
+CHCCS/SCDHM to publish in print and digital form, and
+
+ACM to publish electronically.
+
+---
+
+
+
+Figure 1: Procedure of etching latte art (left to right).
+
+As their work indicates, instructions can be projected onto the user's hands, so the difference between the instructions and the user's pick manipulation can be reduced. For this reason, we utilized a small projector to provide support for creating etched latte art.
+
+## 3 Proposed Method
+
+### 3.1 System Overview
+
+The system overview is shown in Figure 2. The system configuration is provided below (Figure 3).
+
+- A laptop computer is connected to a small projector.
+
+- The projector shows a procedure on a cappuccino.
+
+- The projector is placed on a tripod.
+
+First, users select one of the Heart Ring, Leaf, or Spider Web patterns, as shown in Figure 2(a). Next, the users manually adjust the projector such that it points at a cappuccino. The procedure of creating the selected etched latte art is repeatedly projected on the cappuccino (Figure 2(b)). Then, the animation of syrup deformation is repeatedly projected on the cappuccino (Figure 2(c)).
+
+Finally, the procedure is projected on the cappuccino once again. Users pour syrup to trace the projected image. Then, the latte art is completed via etching to trace the projected lines (Figure 2(d)).
+
+The animations at each step can be played at any time.
+
+### 3.2 Display Function of Procedure
+
+In our system, users select a pattern from three kinds of etched latte art (the first column in Table 1). There are several other patterns of etched latte art as well. However, we adopted these patterns because they include all types of syrup placements (dots, curves, and straight lines) and pick manipulations (curves, straight lines, and spirals) used for creating etched latte art. A procedure is projected on the cappuccino after a pattern is selected.
+
+First, the appropriate locations for placing syrup corresponding to the selected etched latte art are projected (the second column in Table 1). Next, the manipulation of the pick is projected (the third column in Table 1).
+
+It is difficult to understand the procedure of etching latte art through books because they only show the sequence of several frames of a process video, as shown in Figure 1. Our system separately displays how to place syrup and manipulate the pick. As the depth, angle, or speed of a tool does not significantly affect the final outcome, our system does not show any other information.
+
+Table 1: Procedure of etching latte art using syrup.
+
+| Etched Latte Art | Design Template | Syrup Placement | Pick Manipulation |
+| --- | --- | --- | --- |
+| Heart Ring | | | |
+| Leaf | | | |
+| Spider Web | | | |
+
+### 3.3 Animations of Syrup Deformation
+
+It is difficult for beginners to imagine the procedure of syrup deformation through pick manipulation. Our system helps users understand this procedure by directly projecting prepared animations (Table 2). In our system, syrup is indicated in brown and the manipulation of the pick is indicated in blue. We used actual syrup and manipulated it using the pick to observe syrup deformation. Then, we created each animation to match the actual syrup deformation by employing Adobe After Effects and considering the two fluids with different viscosities. It required approximately 30 min to create each design template and 2 h to create each animation of syrup deformation.
+
+### 3.4 System Configuration
+
+As shown in Figure 3, a cappuccino is placed in front of a user and the projector mounted on the tripod is placed on the left side of the cappuccino (it is placed on the right side if the user is left handed). The procedure of etching latte art and the animations of syrup deformation are projected from the top.
+
+## 4 EVALUATION
+
+### 4.1 Evaluation Method
+
+We conducted an experiment to evaluate our system. Twelve etched latte art beginners participated in the experiment. The participants were students in their 20s. They were not paid, and they did not aim to become baristas. Two of the participants had experience in drawing pictures (participants G and H). We divided the participants into two groups (Group 1 and Group 2). To ensure homogeneity, the participants were randomly assigned to groups, and none of them had ever practiced creating etched latte art. The participants created etched latte art patterns using two different methods (one without using the system and one with the system). Group 1 first created etched latte art without using the system and then created it using the system. In contrast, Group 2 first created etched latte art using the system and then created it without the system. We selected patterns from the three previously mentioned etched latte art patterns so that not all participants created the same pattern. The participants filled out a questionnaire after etching latte art.
+
+
+
+Figure 2: System overview.
+
+
+
+Figure 3: System configuration.
+
+Generally, in commercial use, baristas use well-steamed smooth foamed milk containing fair quality small-sized bubbles generated by a steamer attached to an espresso machine [1]. However, it is difficult to create such good quality foamed milk using a household milk frother. Milk foamed by a milk frother contains large bubbles that break easily; hence, syrup placed on this kind of milk foam spreads.
+
+Table 2: Animations indicating the syrup deformation.
+
+
+
+Table 3: Process of etching latte art.
+
+
+
+Therefore, in this experiment, participants created etched latte art using yoghurt. There is no difference between yoghurt and foamed milk in terms of the manipulation of the pick. Thus, we considered that using yoghurt would not affect the evaluation of the system, and we did not have to account for differences that would arise from using an actual cappuccino.
+
+(1) Etching Latte Art without System
+
+Participants repeatedly watched a process video of etching latte art (Table 3). Then, they created etched latte art without using the system while watching the process video.
+
+(2) Etching Latte Art with System
+
+Participants created etched latte art using our system. First, they watched the procedure projected on the cappuccino. Second, the computer graphics animations of syrup deformation (whose speed was almost the same as that of actually etching latte art) were projected. Finally, the procedure of etching latte art was projected on the cappuccino once again, and the participants created etched latte art by tracing the projected procedure. The system could advance from the syrup placement step to the pick manipulation step at any time. The experiment required approximately 60 to 100 s for each participant.
+
+Table 4: Experimental result. Group 1 first created etched latte art without using the system, whereas Group 2 first created the art using our system. The red lines highlight the undesirable parts mentioned in Subsection 4.2.
+
+
+
+### 4.2 Results of Creating Etched Latte Art
+
+The latte art etched by the participants is shown in Table 4. We compare and evaluate the etched latte art created without the system (row indicated by "Without system" in Table 4) and with our system (row indicated by "With system" in Table 4).
+
+Participants A, B, G, and H created the "Heart Ring" pattern (Table 4 A, B, G, H). Participants B and H used an excessive amount of syrup when they did not use the system; consequently, the hearts were extremely large and misshapen. However, the participants were able to adjust the amount of syrup when they used the system. As a result, the shape of each heart was clearer and the quality of the etched latte art was better.
+
+Participants C, D, I, and J created the "Leaf" pattern (Table 4 C, D, I, J). Participants C and J could not draw a line vertically, and Participant J could not keep the same distance between the syrup dots; their etched latte art appeared distorted because of these problems. Using our system, they could create well-balanced etched latte art with even spacing between the syrup dots.
+
+Participants E, F, K, and L created the "Spider Web" pattern (Table 4 E, F, K, L). Participants E, F, and K could not draw a spiral within the intended space when they did not use the system. However, they could create the spiral using the system and produced better-balanced etched latte art.
+
+Thus, the etched latte art created using our system was of good quality, which demonstrates that even beginners can create well-balanced etched latte art with our system.
+
+### 4.3 Participants' Questionnaire
+
+We conducted a questionnaire survey in which the participants compared the etched latte art they created with and without the system. The questionnaire consisted of the following five questions:
+
+Question 1. Can you imagine how to create etched latte art before watching the process video?
+
+Question 2. Is it easy to create etched latte art while watching the process video?
+
+Question 3. Can you understand the syrup deformation from the animations projected by the system?
+
+Question 4. Is the animation speed of syrup deformation appropriate?
+
+Question 5. Is it easy to create etched latte art by tracing the procedure projected by the system?
+
+The participants answered these questions on a 5-pt Likert scale (5: Yes, 4: Maybe, 3: Not confident, 2: Not too much, 1: Not at all).
+
+In addition, the participants provided feedback on the system and the areas of improvement for the system.
+
+Table 5: Results of questionnaire survey.
+
+| # | Question | 1 pt | 2 pt | 3 pt | 4 pt | 5 pt | Median |
+|---|----------|------|------|------|------|------|--------|
+| 1 | I can imagine how to create etched latte art before watching the process video. | 4 | 6 | 1 | 1 | 0 | 2 |
+| 2 | It is easy to create etched latte art while watching the process video. | 1 | 5 | 1 | 3 | 2 | 2.5 |
+| 3 | I can understand the syrup deformation from the animations projected by the system. | 0 | 0 | 0 | 2 | 10 | 5 |
+| 4 | The animation speed of syrup deformation is appropriate. | 0 | 0 | 1 | 4 | 7 | 5 |
+| 5 | It is easy to create etched latte art by tracing the procedure projected by the system. | 0 | 0 | 0 | 4 | 8 | 5 |
+
+#### 4.3.1 Questionnaire Survey Results
+
+The results of the questionnaire survey are shown in Table 5.
+
+Over 80 percent of the participants answered Question 1 with 1 pt or 2 pt, and the median was 2 pt, which is low. This result indicates that etched latte art patterns are complex for people who see them for the first time and that it is difficult for them to imagine how to create these patterns.
+
+For Question 2, five participants answered 2 pt and only two participants answered 5 pt; the median was 2.5 pt.
+
+More than 90 percent of the participants answered Questions 3 and 4 with 4 pt or 5 pt, and the median for both was 5 pt. We consider that the animations of syrup deformation provided by our system help users correctly understand how syrup deforms when it is manipulated with the pick, and that the animation speed is appropriate.
+
+All participants answered Question 5 with 4 pt or 5 pt. We consider that good quality etched latte art can be created using our system. The participants appreciated that the system projects the procedure directly onto the cappuccino; hence, they were not required to watch a separate screen displaying process videos while etching latte art.
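As a sanity check, the medians reported in Table 5 can be recomputed from the vote counts. The following sketch (vote counts copied from Table 5; Python is our illustration, not part of the original study) expands each row into the twelve individual 5-pt Likert responses and takes the median:

```python
from statistics import median

# Vote counts from Table 5: votes[q] = [n_1pt, n_2pt, n_3pt, n_4pt, n_5pt]
votes = {
    1: [4, 6, 1, 1, 0],
    2: [1, 5, 1, 3, 2],
    3: [0, 0, 0, 2, 10],
    4: [0, 0, 1, 4, 7],
    5: [0, 0, 0, 4, 8],
}

medians = {}
for q, counts in votes.items():
    # Expand the counts into the twelve individual responses (1..5 pt).
    responses = [pt for pt, n in enumerate(counts, start=1) for _ in range(n)]
    medians[q] = median(responses)

print(medians)  # {1: 2.0, 2: 2.5, 3: 5.0, 4: 5.0, 5: 5.0}
```

With twelve responses per question, `median` averages the two middle values, which is where the 2.5 for Question 2 comes from.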
+
+#### 4.3.2 Participants' Comments
+
+The participants provided the following comments about creating etched latte art without using the system:
+
+(1) I could not understand where to place the syrup because it was difficult to understand the location and the amount of syrup from the process video.
+
+(2) It was difficult to manipulate the pick, and I could not create the desired pattern.
+
+The participants provided the following comments about creating etched latte art using the system:
+
+(3) It was easy to draw a line using the pick because I was only required to trace a line projected on the cappuccino. Thus, it was clear where and how much syrup I should place.
+
+(4) I was delighted that I could create etched latte art even though I had never attempted it because the procedure was easy to understand.
+
+(5) The animation of syrup deformation indicated how the syrup deforms. Therefore, I could imagine it.
+
+The participants provided the following points of improvement for the system:
+
+(6) I might have been able to place the suitable amount of syrup if the animations showed how to place syrup.
+
+(7) At certain times, I found it difficult to trace lines from left to right because of my shadow.
+
+(8) I was slightly confused because a large amount of syrup remained at the center and the line projected on the syrup was not visible.
+
+#### 4.3.3 Discussion
+
+Based on the results in Table 4 and comments (1) to (5), we conclude that even beginners can create etched latte art more easily with our system than by watching process videos on a separate screen. This is because the system projects the procedure directly onto the cappuccino, which the participants appreciated.
+
+Users adjust the amount of the syrup by placing it on the pattern projected on the cappuccino. However, this pattern is a static image; hence, a few participants used an excessive amount of syrup, as stated in comment (6). We consider that animations showing how to place syrup would help users clearly understand the appropriate speed and amount of syrup placement.
+
+Additionally, as stated in comments (7) and (8), the projected procedure is difficult to view in a few cases owing to the position of users' hands or the color of the background. We will resolve this problem using multiple projectors.
+
+### 4.4 Evaluation of Experimental Result
+
+The purpose of latte art is to let customers enjoy not only the taste of the coffee but also its appearance; a good appearance is therefore important. Accordingly, we conducted a questionnaire survey among inexperienced people to determine, for the etched latte art created by each participant, which version (made without or with our system) appeared better balanced. Moreover, to complement the questionnaire results with quantitative values of the appearance of the etched latte art, we created foreground images through background subtraction for each design template and piece of etched latte art to determine which version was more similar to its design template.
+
+#### 4.4.1 Questionnaire Survey among Inexperienced People
+
+Sixty inexperienced people who had not participated in the experiment were asked to determine the etched latte art (created without or with the system) that was more similar to the design template. The participants were the authors' acquaintances. Their ages ranged from 19 to 58 years old. They were not paid for participation in the study, and two people had experience in creating free pour latte art.
+
+The information about whether the etched latte art was created without or with the system was not provided to the participants.
+
+The results of the questionnaire survey are shown in Figure 4.
+
+The etched latte art created by ten participants out of twelve was more similar to the design template when the system was used compared to when the system was not used.
+
+Participant G placed the appropriate amount of syrup at the suitable location even without the system. In this case, the etched latte art created without and with the system was well balanced.
+
+Participant I could not follow the procedure appropriately because he/she placed syrup too rapidly. This participant stated that it might have been easier to create well-balanced etched latte art if the animations had shown how to place syrup. We will improve the system to resolve this issue by creating new animations that show the suitable speed of placing syrup.
+
+
+
+Figure 4: Result of questionnaire survey among inexperienced people. Sixty inexperienced people were asked which etched latte art (made without or with the system) looked more similar to the design template, for the latte art etched by each participant.
+
+#### 4.4.2 Background Subtraction
+
+We created foreground images through background subtraction for each design template and etched latte art to quantitatively evaluate the etched latte art that was more similar to the design template. The white pixels in the foreground images indicated the difference between the design template and etched latte art, and black pixels indicated the parts that were the same.
+
+We normalized the black pixels to quantify the similarity between each design template and etched latte art. A larger value indicated higher similarity.
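The similarity measure described above can be sketched as follows. This is a minimal illustration of the scheme, not the authors' implementation; the grayscale inputs and the binarization threshold `thresh` are our assumptions:

```python
import numpy as np

def similarity(template: np.ndarray, art: np.ndarray, thresh: int = 30) -> float:
    """Background-subtraction similarity: white pixels mark differences
    between the design template and the etched latte art, black pixels
    mark matching parts, and the score is the normalized black-pixel
    count (1.0 = identical). Inputs are equal-sized grayscale uint8
    images; `thresh` is an assumed binarization parameter."""
    diff = np.abs(template.astype(np.int16) - art.astype(np.int16))
    foreground = diff > thresh          # white: pixels that differ
    return float((~foreground).mean())  # fraction of black (matching) pixels
```

For example, two identical images score 1.0, and images that differ in half of their pixels score 0.5.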
+
+The results of background subtraction are shown in Table 6.
+
+The etched latte art created by ten participants out of twelve was more similar to the design template when the system was used compared to when the system was not used.
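This count can be verified directly from the similarity values reported in Table 6; a small sketch (values copied from Table 6):

```python
# (without system, with system) similarity per participant, from Table 6.
scores = {
    "A": (0.803, 0.787), "B": (0.741, 0.880), "C": (0.806, 0.853),
    "D": (0.761, 0.760), "E": (0.524, 0.619), "F": (0.582, 0.603),
    "G": (0.828, 0.846), "H": (0.787, 0.850), "I": (0.799, 0.824),
    "J": (0.797, 0.830), "K": (0.556, 0.652), "L": (0.551, 0.598),
}

improved = sorted(p for p, (without, with_sys) in scores.items() if with_sys > without)
not_improved = sorted(set(scores) - set(improved))
print(len(improved), not_improved)  # 10 ['A', 'D']
```

Only participants A and D scored lower with the system, matching the discussion below.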
+
+In the etched latte art created by Participant A, the location of each heart was adjusted by our system. However, the participant used an excessive amount of syrup. As a result, there was a considerable difference between the etched latte art and the design template.
+
+In the etched latte art created by Participant D, the syrup was off to the right. As a result, there was a significant difference between the etched latte art and the design template. However, the difference between the etched latte art created without and with the system was only 0.1 percent, which is negligible.
+
+We must indicate the appropriate amount of the syrup more clearly to obtain higher similarity between the design template and etched latte art.
+
+#### 4.4.3 Discussion
+
+The results of the questionnaire survey of inexperienced people and background subtraction show that more than 80 percent of the participants created better-balanced etched latte art using our system. Two participants created better-balanced etched latte art without using the system. However, these participants were different in the cases of the questionnaire survey and background subtraction. Based on this result, we confirm there are instances where people consider that etched latte art is similar to the design template even though the result of the background subtraction shows that it is not, and vice versa. We must improve the system considering what kind of etched latte art people prefer.
+
+In addition, we examined whether the results for the participants who had experience in drawing pictures (participants G and H) differed from those of the others.
+
+As shown in Figure 4, the votes were divided for participants G and H. This may be because these participants created well-balanced etched latte art even without the system. However, the votes were also divided for participants I and L, who did not have experience in drawing pictures.
+
+As shown in Table 6, the similarities between the design template and the etched latte art created without the system were 82.8 and 78.7 percent for participants G and H, respectively, whereas they were 80.3 and 74.1 percent for participants A and B, respectively. No large difference could be confirmed between them.
+
+Based on these results, we confirmed that the art backgrounds of the participants did not significantly affect the results.
+
+In the future, we will confirm if the users make progress in creating better-balanced etched latte art by repeatedly using our system and if they can create well-balanced latte art even without the system.
+
+## 5 CONCLUSION
+
+We have developed a system that supports beginners in practicing and creating etched latte art and helps them understand syrup deformation by directly projecting the procedure and animations of syrup deformation onto a cappuccino. The participants' evaluations have verified the usefulness of our system.
+
+As mentioned in Subsection 3.2, the procedure presented by our system is considerably simple. To the best of our knowledge, no existing system shows the procedure of etching latte art in the manner that our system does. We will verify the effect of this simplification by comparing our projection against simply demonstrating the same procedure on a laptop screen.
+
+The system has certain limitations, e.g., participants complained about the lack of information while using the system. Therefore, we will redesign the system by considering participants' views. Currently, our system is particularly effective for one-time use; however, the system might be used repeatedly by adding advice or correction functions and automatically creating animations of syrup deformation.
+
+Moreover, we will examine the generalizability and scalability of our system. At present, our system supports only three patterns; however, other patterns can be supported by preparing design templates and animations. In addition, we will consider the points of improvement obtained in the survey.
+
+Table 6: Results of background subtraction. Similarities are represented by a number in the range of 0.000 to 1.000 (1.000 indicates that latte art is the same as the design template).
+
+| Group 1 (participants) | A | B | C | D | E | F |
+|---|---|---|---|---|---|---|
+| Similarity without system | 0.803 | 0.741 | 0.806 | 0.761 | 0.524 | 0.582 |
+| Similarity with system | 0.787 | 0.880 | 0.853 | 0.760 | 0.619 | 0.603 |
+
+| Group 2 (participants) | G | H | I | J | K | L |
+|---|---|---|---|---|---|---|
+| Similarity without system | 0.828 | 0.787 | 0.799 | 0.797 | 0.556 | 0.551 |
+| Similarity with system | 0.846 | 0.850 | 0.824 | 0.830 | 0.652 | 0.598 |
+
+Our system only uses a small projector. Thus, it is easy to adjust the projector to point at a canvas. Our system can be applied for decorating cookies or cakes with icing, which use materials with different viscosities, by appropriately changing the viscosity in the animation of syrup deformation.
+
+## ACKNOWLEDGMENTS
+
+The authors would like to thank Mr. Shigeaki Suzuki and ASTRODESIGN, Inc. for their kind support, the reviewers for their helpful comments, and Editage (www.editage.com) for English language editing.
+
+## REFERENCES
+
+[1] S. Aupiais. Barista basics: How to texture milk in 14 steps. https://www.perfectdailygrind.com/2018/02/barista-basics-texture-milk-14-steps/, 2018. [Online; accessed 2020-04-02].
+
+[2] I. Bez. What is latte art? https://www.latteartguide.com/2013/04/what-is-latte-art.html, 2013. [Online; accessed 2020-04-02].
+
+[3] M. Flagg and J. M. Rehg. Projector-guided painting. Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, pages 235-244, 2006.
+
+[4] C.-C. Hu and M.-T. Chi. Digital latte art. SIGGRAPH Asia 2013 Posters, (35):35:1-35:1, 2013.
+
+[5] M. Kawai, S. Kodama, and T. Takahashi. A free pour latte art support system by showing paths of pouring milk using design templates. IEVC2019 The 6th IIEEJ International Conference on Image Electronics and Visual Computing, (2A-2), 2019.
+
+[6] S. Morioka and H. Ueda. Cooking support system utilizing built-in cameras and projectors. MVA2011 IAPR Conference on Machine Vision Applications, pages 271-274, 2011.
+
+[7] O. Pikalo. Latte art machine. ACM SIGGRAPH 2008 New Tech Demos, (22):22:1-22:1, 2008.
+
+[8] H. Wilson. How to etch latte art: A video guide. https://www.perfectdailygrind.com/2016/07/ etch-latte-art-video-guide/, 2016. [Online; accessed 2020-04-02].
+
+[9] H. Xie, Y. Peng, N. Chen, D. Xie, C.-M. Chang, and K. Miyata. BalloonFAB: Digital fabrication of large-scale balloon art. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, (LBW0164):LBW0164:1-LBW0164:6, 2019.
+
+[10] H. Yoshida, T. Igarashi, Y. Obuchi, Y. Takami, J. Sato, M. Araki, M. Miki, K. Nagata, K. Sakai, and S. Igarashi. Architecture-scale human-assisted additive manufacturing. ACM Trans. Graph., 34(4):88:1-88:8, July 2015.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5e34dc62f459cc26d10448e07b34e33bf46ee7fa
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/Tu1NiBXxf0/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,427 @@
+§ SUPPORT SYSTEM FOR ETCHING LATTE ART BY TRACING PROCEDURE BASED ON PROJECTION MAPPING
+
+Momoka Kawai*
+
+Tokyo Denki University
+
+Shuhei Kodama†
+
+Tokyo Denki University
+
+ASTRODESIGN, Inc.
+
+Tokiichiro Takahashi‡
+
+Tokyo Denki University
+
+ASTRODESIGN, Inc.
+
+§ ABSTRACT
+
+It is difficult for beginners to create well-balanced etched latte art patterns using two fluids with different viscosities, such as foamed milk and syrup. Moreover, it is not easy to create well-balanced etched latte art even while watching process videos that show procedures.
+
+In this paper, we propose a system that supports beginners in creating well-balanced etched latte art by projecting the etching procedure directly onto a cappuccino.
+
+In addition, we examine the similarity between etched latte art and design templates using background subtraction. The experimental results show the progress in creating well-balanced etched latte art using our system.
+
+Keywords: Projection Mapping, Etching Latte Art, Learning Support System.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-User studies; Applied computing-Education-Computer-assisted instruction
+
+§ 1 INTRODUCTION
+
+Etching latte art is the practice of drawing images on a coffee using a thin rod such as a toothpick [2]. There are several methods of etching latte art depending on tools and toppings. An easy method of etching latte art for beginners is pouring syrup directly onto milk foam and etching it to create patterns, as shown in Figure 1. The color combination obtained using syrup automatically makes a drink appear impressive. Hence, baristas are under less pressure to create a difficult design [8]. However, it is difficult for beginners to imagine how to pour syrup and etch it to create beautiful patterns because etching latte art involves two fluids with different viscosities. Furthermore, even though beginners can watch videos that show the procedure of etching latte art, latte art created through imitation does not appear well balanced. In this study, we define the etched latte art drawn in the middle of a coffee cup with unified lines as "well balanced." It is impossible to etch well-balanced latte art without repeated practice.
+
+We develop a support system that helps even beginners etch well-balanced latte art by directly projecting the etching procedure using syrup onto a cappuccino. Moreover, the deformation of a fluid with viscosity, such as syrup, is projected through animations to support beginners in understanding the deformation of such fluids. We demonstrate the usefulness of this system through a questionnaire survey and examine the similarity between etched latte art and design templates using background subtraction. To the best of the authors' knowledge, there is no system that shows the procedure of etching latte art in the manner that our system does.
+
+§ 2 RELATED WORK
+
+§ 2.1 SUPPORT FOR CREATING LATTE ART
+
+There are two methods of creating latte art, etching and free pouring. In the former, syrup is poured on milk foam, and patterns are created via etching. In the latter, no tools are used and patterns are created using only the flow of milk, such as pouring.
+
+Hu and Chi [4] proposed a simulation method that considered the viscosity of milk to express the flow of milk in latte art. The flow of milk obtained in their research was quite similar to that in actual latte art.
+
+However, from the viewpoint of practicing latte art, users must estimate the paths of pouring milk and acquire the manipulation of a milk jug from simulation results. Moreover, it is difficult for users to understand the flow of milk unless they have advanced skills.
+
+Pikalo [7] developed an automated latte art machine that used a modified inkjet cartridge to infuse tiny droplets of colorant into the upper layer of a beverage. Latte art with any design can be easily created using this machine, which is similar to a printer.
+
+This machine enables everyone to create original latte art without any barista skills. However, the machine cannot create latte art using milk foam, such as free pour latte art. Therefore, baristas still require practice to create other kinds of latte art.
+
+Kawai et al. [5] developed a free pour latte art support system that showed the procedure of pouring milk in the form of animated lines. This system targets baristas who have experience in creating basic free pour latte art and know the amount of milk to be poured. People without any experience in creating free pour latte art must practice several times.
+
+§ 2.2 SUPPORT FOR CREATING LATTE ART USING A PROJECTOR
+
+Flagg and Rehg [3] developed a painting support system by projecting a painting procedure on a canvas. They placed two projectors behind users to prevent the projected painting procedure from being hidden by users' shadows. This system is quite large scale and expensive.
+
+Morioka and Ueda [6] visually supported cooking by projecting how to chop ingredients at appropriate locations; this system can indicate how to chop ingredients and provide detailed notes and cooking procedures, which are difficult to follow while reading a recipe during cooking. However, this system is quite large scale because it requires a kitchen with a hole in the ceiling from which instructions are projected.
+
+Xie et al. [9] proposed an interactive system that enables even inexperienced people to build large-scale balloon art in an easy and enjoyable manner using spatially augmented reality. This system provides fabrication guidance to illustrate the differences between the depth maps of a target three-dimensional shape and a work in progress. In addition, they designed a shaking animation for each number to increase user immersion.
+
+Yoshida et al. [10] proposed an architecture-scale computer-assisted digital fabrication method that used a depth camera and projection mapping. This system captured a current work using a depth camera, compared a scanned geometry with a target shape, and then projected guiding information based on an evaluation. They mentioned that augmented reality devices, such as head-mounted displays, could be used. However, the calibration of places displaying instructions for each device would be required. The proposed projector-camera guidance system can be prepared using a simple calibration process, and it allows for intuitive information sharing among workers.
+
+*e-mail: m-kawai@vcl.jp
+
+†e-mail: s-kodama@vcl.jp
+
+‡e-mail: toki@vcl.jp
+
+Copyright held by authors. Permission granted to
+
+CHCCS/SCDHM to publish in print and digital form, and
+
+ACM to publish electronically.
+
+
+Figure 1: Procedure of etching latte art (left to right).
+
+As indicated in this paper, instructions can be projected on hands. Therefore, the difference between instructions and users' pick manipulation can be reduced. For this reason, we utilized a small projector to provide support for creating etched latte art.
+
+§ 3 PROPOSED METHOD
+
+§ 3.1 SYSTEM OVERVIEW
+
+The system overview is shown in Figure 2. The system configuration is provided below (Figure 3).
+
+ * A laptop computer is connected to a small projector.
+
+ * The projector shows a procedure on a cappuccino.
+
+ * The projector is placed on a tripod.
+
+Firstly, users select a pattern among the Heart Ring, Leaf, or Spider Web patterns, as shown in Figure 2(a). Next, the users manually adjust the projector such that it points at a cappuccino. The procedure of creating the selected etched latte art is repeatedly projected on the cappuccino (Figure 2(b)). Then, the animation of syrup deformation is repeatedly projected on the cappuccino (Figure 2(c)).
+
+Finally, the procedure is projected on the cappuccino once again. Users pour syrup to trace the projected image. Then, the latte art is completed via etching to trace the projected lines (Figure 2(d)).
+
+The animations at each step can be played at any time.
+
+§ 3.2 DISPLAY FUNCTION OF PROCEDURE
+
+In our system, users select a pattern from three kinds of etched latte art (the first column in Table 1). There are several other patterns of etched latte art as well. However, we adopted these patterns because they include all types of syrup placements (dots, curves, and straight lines) and pick manipulations (curves, straight lines, and spirals) used for creating etched latte art. A procedure is projected on the cappuccino after a pattern is selected.
+
+First, the appropriate locations for placing syrup corresponding to the selected etched latte art are projected (the second column in Table 1). Next, the manipulation of the pick is projected (the third column in Table 1).
+
+It is difficult to understand the procedure of etching latte art through books because they only show the sequence of several frames of a process video, as shown in Figure 1. Our system separately displays how to place syrup and manipulate the pick. As the depth, angle, or speed of a tool does not significantly affect the final outcome, our system does not show any other information.
+
+Table 1: Procedure of etching latte art using syrup.
+
+| Etched Latte Art | Design Template | Syrup Placement | Pick Manipulation |
+|---|---|---|---|
+| Heart Ring | (image) | (image) | (image) |
+| Leaf | (image) | (image) | (image) |
+| Spider Web | (image) | (image) | (image) |
+
+§ 3.3 ANIMATIONS OF SYRUP DEFORMATION
+
+It is difficult for beginners to imagine the procedure of syrup deformation through pick manipulation. Our system helps users understand this procedure by directly projecting prepared animations (Table 2). In our system, syrup is indicated in brown and the manipulation of the pick is indicated in blue. We manipulated actual syrup using the pick to observe how it deforms, and then created the animations to match the observed deformation using Adobe After Effects, taking the two fluids with different viscosities into account. Creating each design template required approximately 30 min, and each animation of syrup deformation required approximately 2 h.
+
+§ 3.4 SYSTEM CONFIGURATION
+
+As shown in Figure 3, a cappuccino is placed in front of a user and the projector mounted on the tripod is placed on the left side of the cappuccino (it is placed on the right side if the user is left handed). The procedure of etching latte art and the animations of syrup deformation are projected from the top.
+
+§ 4 EVALUATION
+
+§ 4.1 EVALUATION METHOD
+
+We conducted an experiment to evaluate our system. Twelve etched latte art beginners participated. All of them were students in their 20s; they were not paid, and none of them aimed to become baristas. Two of the participants had experience in drawing pictures (participants G and H). We divided the participants into two groups (Group 1 and Group 2) by random assignment; because none of them had ever practiced creating etched latte art, the groups can be considered homogeneous. Each participant created etched latte art twice, once without the system and once with it: Group 1 first worked without the system and then with it, whereas Group 2 first used the system and then worked without it. We assigned patterns from the three previously mentioned etched latte art patterns so that not all participants created the same pattern. The participants filled out a questionnaire after etching latte art.
+
+
+Figure 2: System overview.
+
+
+Figure 3: System configuration.
+
+Generally, in commercial use, baristas use well-steamed smooth foamed milk containing fair quality small-sized bubbles generated by a steamer attached to an espresso machine [1]. However, it is difficult to create such good quality foamed milk using a household milk frother. Milk foamed by a milk frother contains large bubbles that break easily; hence, syrup placed on this kind of milk foam spreads.
+
+Table 2: Animations indicating the syrup deformation.
+
+Table 3: Process of etching latte art.
+
+Therefore, in this experiment, participants created etched latte art using yoghurt. There is no difference between yoghurt and foamed milk in terms of the manipulation of the pick. Thus, we considered that using yoghurt in the experiment would not affect the evaluation of the system, and we did not have to consider differences resulting from using our system with an actual cappuccino.
+
+(1) Etching Latte Art without System
+
+Participants repeatedly watched a process video of etching latte art (Table 3). Then, they created etched latte art without using the system while watching the process video.
+
+(2) Etching Latte Art with System
+
+Participants created etched latte art using our system. First, they watched a procedure projected on the cappuccino. Second, the computer graphics animations of syrup deformation (whose animation speed was almost the same as the speed of actually etching latte art) were projected. Finally, the procedure of etching latte art was projected on the cappuccino once again, and the participants created etched latte art by tracing the projected procedure. The system could advance from the syrup placement step to the pick manipulation step at any time. The experiment required approximately 60 to 100 s for each participant.
+
+Table 4: Experimental result. Group 1 created etched latte art without using the system, whereas Group 2 created the art using our system. The red lines highlight the undesirable parts mentioned in Subsection 4.2.
+
+§ 4.2 RESULTS OF CREATING ETCHED LATTE ART
+
+The latte art etched by the participants is shown in Table 4. We compare and evaluate the etched latte art created without our system (rows labeled "Without system" in Table 4) and with it (rows labeled "With system" in Table 4).
+
+Participants A, B, G, and H created the "Heart Ring" pattern (Table 4 A, B, G, H). Participants B and H used an excessive amount of syrup when they did not use the system. Therefore, the hearts were extremely large and their shape was not as required. However, the participants were able to adjust the amount of syrup when they used the system. As a result, the shape of each heart was clearer and the quality of the etched latte art was better.
+
+Participants C, D, I, and J created the "Leaf" pattern (Table 4 C, D, I, J). Participants C and J could not draw a line vertically. Participant J also could not keep the same distance between the syrup drops. Their etched latte art appeared distorted because of these problems. Using our system, they could create well-balanced etched latte art with the same distance between the syrup drops.
+
+Participants E, F, K, and L created the "Spider Web" pattern (Table 4 E, F, K, L). Participants E, F, and K could not draw a spiral within a certain space when they did not use the system. However, they could create the spiral using the system and produced better-balanced etched latte art.
+
+Thus, the etched latte art created using our system was of good quality, demonstrating that even beginners can create well-balanced etched latte art with our system.
+
+§ 4.3 PARTICIPANTS' QUESTIONNAIRE
+
+The participants compared the etched latte art created with and without the system in a questionnaire survey. The questionnaire consisted of the following five questions:
+
+Question 1. Can you imagine how to create etched latte art before watching the process video?
+
+Question 2. Is it easy to create etched latte art while watching the process video?
+
+Question 3. Can you understand the syrup deformation from the animations projected by the system?
+
+Question 4. Is the animation speed of syrup deformation appropriate?
+
+Question 5. Is it easy to create etched latte art by tracing the procedure projected by the system?
+
+The participants answered these questions on a 5-pt Likert scale (5: Yes, 4: Maybe, 3: Not confident, 2: Not too much, 1: Not at all).
+
+In addition, the participants provided feedback on the system and the areas of improvement for the system.
+
+Table 5: Results of questionnaire survey.
+
+| # | Question | 1 pt | 2 pt | 3 pt | 4 pt | 5 pt | Median |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | I can imagine how to create etched latte art before watching the process video. | 4 | 6 | 1 | 1 | 0 | 2 |
+| 2 | It is easy to create etched latte art while watching the process video. | 1 | 5 | 1 | 3 | 2 | 2.5 |
+| 3 | I can understand the syrup deformation from the animations projected by the system. | 0 | 0 | 0 | 2 | 10 | 5 |
+| 4 | The animation speed of syrup deformation is appropriate. | 0 | 0 | 1 | 4 | 7 | 5 |
+| 5 | It is easy to create etched latte art by tracing the procedure projected by the system. | 0 | 0 | 0 | 4 | 8 | 5 |
+
+§ 4.3.1 QUESTIONNAIRE SURVEY RESULTS
+
+The results of the questionnaire survey are shown in Table 5.
+
+Over 80 percent of the participants answered Question 1 with 1 pt or 2 pt, and the median was 2 pt, which is low. This result indicates that etched latte art patterns appear complex to people who see them for the first time, and it is difficult for them to imagine how to create these patterns.
+
+For Question 2, five participants answered 2 pt and only two participants answered 5 pt, and the median was 2.5 pt.
+
+More than 90 percent of the participants answered Questions 3 and 4 with 4 pt or 5 pt, and the median was 5 pt. We consider that the animations of syrup deformation provided by our system help users appropriately understand how the syrup deforms when it is manipulated with the pick, and that the animation speed of the syrup deformation is appropriate.
+
+All participants answered Question 5 with 4 pt or 5 pt. We consider that it is possible to create good quality etched latte art using our system. The participants appreciated that the system projects the procedure directly onto the cappuccino; hence, they were not required to watch another screen displaying process videos while etching latte art.
+
+§ 4.3.2 PARTICIPANTS' COMMENTS
+
+The participants provided the following comments about creating etched latte art without using the system:
+
+(1) I could not understand where to place the syrup because it was difficult to understand the location and the amount of syrup from the process video.
+
+(2) It was difficult to manipulate the pick, and I could not create the desired pattern.
+
+The participants provided the following comments about creating etched latte art using the system:
+
+(3) It was easy to draw a line using the pick because I was only required to trace a line projected on the cappuccino. Thus, it was clear where and how much syrup I should place.
+
+(4) I was delighted that I could create etched latte art even though I had never attempted it because the procedure was easy to understand.
+
+(5) The animation of syrup deformation indicated how the syrup deforms. Therefore, I could imagine it.
+
+The participants provided the following points of improvement for the system:
+
+(6) I might have been able to place the suitable amount of syrup if the animations showed how to place syrup.
+
+(7) At certain times, I found it difficult to trace lines from left to right because of my shadow.
+
+(8) I was slightly confused because a large amount of syrup remained at the center and the line projected on the syrup was not visible.
+
+§ 4.3.3 DISCUSSION
+
+Based on the results in Table 4 and comments (1) to (5), we can conclude that even beginners are able to create etched latte art more easily by utilizing our system than by watching process videos on another screen. This is because the system projects the procedure directly onto the cappuccino, which the participants appreciated.
+
+Users adjust the amount of the syrup by placing it on the pattern projected on the cappuccino. However, this pattern is a static image; hence, a few participants use an excessive amount of syrup, as stated in comment (6). We consider that preparing animations that show how to place syrup helps users clearly understand and imagine the speed and amount of syrup.
+
+Additionally, as stated in comments (7) and (8), the projected procedure is difficult to view in a few cases owing to the position of users' hands or the color of the background. We will resolve this problem using multiple projectors.
+
+§ 4.4 EVALUATION OF EXPERIMENTAL RESULT
+
+The purpose of latte art is to make customers enjoy not only the taste of coffee but also its appearance; a good appearance is therefore important for latte art. For the etched latte art created by each participant, we thus conducted a questionnaire survey among inexperienced people to determine which version (made without or with our system) appeared better balanced. Moreover, to complement the questionnaire results with quantitative values of the etched latte art's appearance, we created foreground images through background subtraction for each design template and etched latte art to determine which etched latte art was more similar to each design template.
+
+§ 4.4.1 QUESTIONNAIRE SURVEY AMONG INEXPERIENCED PEOPLE
+
+Sixty inexperienced people who had not participated in the experiment were asked to determine the etched latte art (created without or with the system) that was more similar to the design template. The participants were the authors' acquaintances. Their ages ranged from 19 to 58 years old. They were not paid for participation in the study, and two people had experience in creating free pour latte art.
+
+The information about whether the etched latte art was created without or with the system was not provided to the participants.
+
+The results of the questionnaire survey are shown in Figure 4.
+
+The etched latte art created by ten participants out of twelve was more similar to the design template when the system was used compared to when the system was not used.
+
+Participant G placed the appropriate amount of syrup at the suitable location even without the system. In this case, the etched latte art created without and with the system was well balanced.
+
+Participant I could not follow the procedure appropriately because he/she placed syrup too rapidly. This participant stated that it might have been easier to create well-balanced etched latte art if the animations had shown how to place syrup. We will improve the system to resolve this issue by creating new animations that show the suitable speed of placing syrup.
+
+Figure 4: Result of questionnaire survey among inexperienced people. Sixty inexperienced people were asked which etched latte art (made without or with the system) looked more similar to the design template, for the latte art etched by each participant.
+
+§ 4.4.2 BACKGROUND SUBTRACTION
+
+We created foreground images through background subtraction for each design template and etched latte art to quantitatively evaluate the etched latte art that was more similar to the design template. The white pixels in the foreground images indicated the difference between the design template and etched latte art, and black pixels indicated the parts that were the same.
+
+We normalized the number of black pixels by the total pixel count to quantify the similarity between each design template and etched latte art. A larger value indicates higher similarity.
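+
+As a minimal sketch, this similarity measure can be computed as follows (our Python approximation; the difference threshold and greyscale preprocessing are assumptions, since the paper does not specify them):

```python
import numpy as np

def similarity(template: np.ndarray, latte: np.ndarray, threshold: int = 30) -> float:
    """Background subtraction: pixels whose absolute greyscale difference
    exceeds `threshold` become white (foreground = different); the rest are
    black (= same). The similarity is the normalized black pixel count."""
    diff = np.abs(template.astype(np.int16) - latte.astype(np.int16))
    foreground = diff > threshold           # white pixels in the foreground image
    return 1.0 - float(foreground.mean())   # fraction of black (matching) pixels
```

+A value of 1.000 then means the etched latte art matches the design template exactly, consistent with the scale used in Table 6.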
+
+The results of background subtraction are shown in Table 6.
+
+The etched latte art created by ten participants out of twelve was more similar to the design template when the system was used compared to when the system was not used.
+
+In the etched latte art created by Participant A, the location of each heart was adjusted by our system. However, the participant used an excessive amount of syrup. As a result, there was considerable difference between the etched latte art and design template.
+
+In the etched latte art created by Participant D, the syrup was off to the right. As a result, there was a considerable difference between the etched latte art and the design template. However, the difference in similarity between the etched latte art created without and with the system was only 0.1 percent, which is negligible.
+
+We must indicate the appropriate amount of the syrup more clearly to obtain higher similarity between the design template and etched latte art.
+
+§ 4.4.3 DISCUSSION
+
+The results of the questionnaire survey of inexperienced people and background subtraction show that more than 80 percent of the participants created better-balanced etched latte art using our system. Two participants created better-balanced etched latte art without using the system. However, these participants were different in the cases of the questionnaire survey and background subtraction. Based on this result, we confirm there are instances where people consider that etched latte art is similar to the design template even though the result of the background subtraction shows that it is not, and vice versa. We must improve the system considering what kind of etched latte art people prefer.
+
+In addition, we examined whether the results for the participants who had experience in drawing pictures (participants G and H) differed from those of the others.
+
+As shown in Figure 4, the votes were divided for participants G and H. This may be because these participants created well-balanced etched latte art even without the system. However, the votes were also divided for participants I and L, who did not have experience in drawing pictures.
+
+As shown in Table 6, the similarities between the design template and the etched latte art made without the system were 82.8 and 78.7 percent for participants G and H, respectively, whereas the similarities were 80.3 and 74.1 percent for participants A and B, respectively. No large difference could be confirmed between them.
+
+Based on these results, we confirmed that the art backgrounds of the participants did not significantly affect the results.
+
+In the future, we will confirm if the users make progress in creating better-balanced etched latte art by repeatedly using our system and if they can create well-balanced latte art even without the system.
+
+§ 5 CONCLUSION
+
+We have developed a system that supports beginners in practicing and creating etched latte art and helps them understand syrup deformation by directly projecting the procedure and animations of syrup deformation onto a cappuccino. The participants' evaluations have verified the usefulness of our system.
+
+As mentioned in Subsection 3.2, the procedure of our system is considerably simple. To the best of our knowledge, there is no system that shows the procedure of etching latte art in the manner that our system does. We will confirm the effect of simplifying the process video by simply demonstrating the procedure on a laptop.
+
+The system has certain limitations, e.g., participants complained about the lack of information while using the system. Therefore, we will redesign the system by considering participants' views. Currently, our system is particularly effective for one-time use; however, the system might be used repeatedly by adding advice or correction functions and automatically creating animations of syrup deformation.
+
+Moreover, we will examine the generalizability and scalability of our system. At present, our system supports only three patterns; however, other patterns can be supported by preparing design templates and animations. In addition, we will consider the points of improvement obtained in the survey.
+
+Table 6: Results of background subtraction. Similarities are represented by a number in the range of 0.000 to 1.000 (1.000 indicates that latte art is the same as the design template).
+
+**Group 1**
+
+| Participants | A | B | C | D | E | F |
+| --- | --- | --- | --- | --- | --- | --- |
+| Similarity without system | 0.803 | 0.741 | 0.806 | 0.761 | 0.524 | 0.582 |
+| Similarity with system | 0.787 | 0.880 | 0.853 | 0.760 | 0.619 | 0.603 |
+
+**Group 2**
+
+| Participants | G | H | I | J | K | L |
+| --- | --- | --- | --- | --- | --- | --- |
+| Similarity without system | 0.828 | 0.787 | 0.799 | 0.797 | 0.556 | 0.551 |
+| Similarity with system | 0.846 | 0.850 | 0.824 | 0.830 | 0.652 | 0.598 |
+
+Our system only uses a small projector. Thus, it is easy to adjust the projector to point at a canvas. Our system can be applied for decorating cookies or cakes with icing, which use materials with different viscosities, by appropriately changing the viscosity in the animation of syrup deformation.
+
+§ ACKNOWLEDGMENTS
+
+The authors would like to thank Mr. Shigeaki Suzuki and ASTRODESIGN, Inc. for their kind support, the reviewers for their helpful comments, and Editage (www.editage.com) for English language editing.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e10c7d0455d7221a6981125115b54c225ddd5e6
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,337 @@
+# Lean-Interaction: passive image manipulation in concurrent multitasking
+
+Danny Schott*
+
+Otto von Guericke University
+
+Benjamin Hatscher†
+
+Otto von Guericke University
+
+Fabian Joeres‡
+
+Otto von Guericke University
+
+Mareike Gabele§
+
+Otto von Guericke University
+
+Steffi Hußlein¶
+
+Magdeburg-Stendal University of Applied Sciences
+
+Christian Hansen‖
+
+Otto von Guericke University
+
+## Abstract
+
+Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect the main task performance. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a hands-free image manipulation task during a primary manual task with an upright stance. Essential tasks were extracted from the example of clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the user's performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture and speech input. We found that leaning as an interaction technique facilitates significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.
+
+Keywords: Multimodal Interaction; Multitasking; Hands-free Interaction; Radiology; Medical Domain
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction techniques-; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms
+
+## 1 INTRODUCTION
+
+Multitasking is an essential part of today's working environments, whether intentionally or involuntarily through a plethora of devices and communication channels. In this context, working on a screen while interruptions appear occasionally is a widely investigated scenario [3, 16, 22].
+
+Even though it is important to understand how recovery from interruptions works, there are scenarios where the primary task is essential and cannot be interrupted completely, but still, additional information is required to reach the overall goal. Prominent examples of such demanding scenarios are flight coordination [17], piloting [15], train driving [13] and medicine [20]. The focus here shifts from recovery from distraction to how to maintain an acceptable level of performance for a specific task during concurrent multitasking.
+
+To find suitable input methods that fit these needs, we propose a 3-step approach: First, an exemplary scenario from the medical context is chosen and investigated in depth to understand the restrictions and requirements of a representative real-world worst case as well as the occurring combinations of tasks. Second, input methods that potentially require notably low cognitive resources are selected, informed by the multiple resource theory [25], the passive input paradigm [21] and observations. Third, the scenario is abstracted to minimize the influence of domain knowledge and evaluated in a user study using a between-subject design.
+
+
+
+Figure 1: We took the complex activities of a radiologist as a starting point to differentiate between primary and secondary tasks (left picture). We created a simulation of this scenario by abstracting these tasks and developed natural input modalities for hands-free control (right picture).
+
+The main contributions of this work are the development, abstraction, demonstration and evaluation of a free-hand input modality for Zoom and Pan based on natural behavior with minimal influence on a primary task. This may lead to safer, faster and cheaper use in demanding interaction scenarios.
+
+Analytical observation revealed which natural movements are suitable as complementary input modalities. When overviewing a picture, the participants mainly moved their upper bodies forward and backward, while simultaneously moving their eyebrows up and down. The evaluation of a prototypical setup showed that leaning is suitable as a secondary input modality since it has minimal influence on the primary task. A secondary input based on the movement of the eyebrows, in contrast, proved to be unsuitable as a manipulation technique: it is less accurate and physically demanding.
+
+---
+
+*e-mail: danny.schott@ovgu.de
+
+†e-mail: benjamin.hatscher@ovgu.de
+
+‡e-mail: fabian.joeres@ovgu.de
+
+§e-mail: mareike.gabele@ovgu.de
+
+¶e-mail: steffi.husslein@hs-magdeburg.de
+
+‖e-mail: christian.hansen@ovgu.de
+
+---
+
+## 2 RELATED WORK
+
+### 2.1 Concurrent Multitasking
+
+Performing two or more tasks in parallel or in short succession is called multitasking. Salvucci et al. proposed a continuum ranging from concurrent interaction to sequential multitasking [23]. Adler and Benbunan-Fich found that the degree of multitasking influences productivity and accuracy differently: medium multitaskers are more productive than low or high multitaskers, while accuracy decreases with the degree of multitasking [2]. Interruptions lead to additional time to resume the primary task [18, 24] and are more disruptive when occurring at points of higher workload than during low-workload phases [1]. It comes as no surprise that users tend to defer interrupting tasks to periods of lower workload [22]. A dual-task study by Janssen et al. concludes that people can strategically focus their attention on certain tasks in order to achieve a certain performance and that the choice of strategy has a major impact on task completion [10, 11]. Wickens' multiple resource theory assumes that multiple cognitive resources are tied to different modalities and that tasks only conflict if they tap into the same resource [25]. This implies that the brain is able to separate different cognitive processes from each other. These findings lead us to the idea that if the influence of a secondary task on a concurrent primary task is to be minimized, investigating multimodal interaction approaches seems plausible.
+
+### 2.2 Multimodal Input
+
+According to Oviatt, human-computer input modes can be separated into active and passive ones [21]. Active modes require intentional action by the user, while passive modes rely on unintentional action or behavior. Using passive modes as input lowers the cognitive effort as no explicit command has to be given by the user. This idea connects to non-command interfaces by Nielsen, which derive user intentions from observing the user [19]. With the goal of minimizing the influence on primary-task accuracy in mind, it can be hypothesized that passive input modes reduce the influence of the secondary task even further than active input modes.
+
+## 3 MULTITASKING IN INTERVENTIONAL RADIOLOGY
+
+To investigate the idea of multimodal input for concurrent multitasking scenarios, we picked interventional radiology as a concrete example. During an intervention, interaction with medical images is required simultaneously with the high-priority task of instrument handling. During radiological interventions in general, a needle or catheter is inserted through small incisions into the patient's body. In order to navigate the instrument to the targeted pathological structures to be treated, the radiologist relies on real-time image data gathered by imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT) or ultrasound (US), or accesses images recorded before the intervention [14]. A common workaround in clinical practice is installing an assistant as a proxy user to maintain asepsis (and due to workflow restrictions), which is time-consuming and error-prone compared to direct interaction [5, 9, 20] and is rated by clinicians as least practicable [26].
+
+To gain insights into challenges in clinical practice, we observed a radiological intervention and conducted a semi-structured interview with an experienced radiologist regarding the challenges of human-computer interaction tasks. In the following section, we report observations and expert comments and derive the difficult secondary tasks.
+
+During the observation, the radiologist wanted to take a closer look at the details of the latest x-ray image. A range of strategies to achieve this was observed during the intervention.
+
+- Leaning closer: The radiologist leans over the patient table towards the display showing the image.
+
+- Moving the display: An assistant is asked to move the display closer to the physician.
+
+- Pointing gesture: While examining the image, the radiologist points at a specific spot on the screen without touching it and instructs an assistant to enlarge this area of the image.
+
+- Verbal task delegation: When both hands are occupied with guiding the catheter, an assistant is instructed verbally to adjust the image section displayed on the screen.
+
+The expert interview, which aimed at understanding concrete action sequences concerning image selection and zoom, was conducted to investigate the potential for direct interaction methods. Overall, the radiologist assessed the operation of the current system very positively, but restrictions in the operation of the control panel were often perceived as disturbing. It is particularly interesting to note that when navigating through the images, there is no need to look at the joystick, which suggests that hand-eye coordination has been perfected by routine. It was pointed out that different layouts of the x-ray images are used according to the user's preference. The radiologist interviewed, for example, prefers a permanent full-screen view in order to be able to see details better on the screen. In this context, he emphasizes the necessity and current problems of zoom functions: "Definitely necessary [...] In principle, zooming is possible, but it is connected with a lot of manual actions and therefore not popular among us". In the angiography suite used for the observed intervention, this function is not available quickly enough. The desired image segment must first be selected, then the image settings must be navigated to with the help of a touch interface, whereby the view can then be enlarged by 150%. Afterward, the image section can be moved with a joystick. Zooming, in his case, takes place in fluoroscopic 2D as well as in 3D volume data. The respondent himself usually performs this task, because describing the target to the MTRA would be too inefficient. For an efficient operation of the system, "[...] Infinitely variable magnification and pan is indispensable for a useful zooming function".
+
+## 4 CONCEPT DEVELOPMENT: ABSTRACTING MEDICAL TASKS
+
+### 4.1 Catheter navigation as primary task
+
+The task referred to below as "primary task" simulates the medical task of a physician during a radiological intervention. In this context, the task has the highest priority for the physician, because his primary goal is to perform the intervention and to ensure optimal care for the patient. During a catheter intervention, instruments must be held and guided to the correct position in the body. The correct alignment by moving the tip back and forth requires a lot of concentration and is made more difficult by the patient's breathing or other movements.
+
+### 4.2 Abstraction
+
+Our implementation aims to create a situation in which the user performs a motor activity with their hands. The manual movements of a radiologist are imitated using the example of guiding a catheter through a vascular structure during an intervention. We simplified this task to forward and backward movements and created a prototypical input device (see Figure 2). An abstract visualization in the form of a horizontal bar displays the user input and a target marker. A vertical line represents the current position of the input device. A marker with a range of tolerance moves in a specific sequential movement along the x-axis and forces the user to pay attention to this task at all times (see Figure 3). To force the focus on the primary task even further, the marker jumps to a random position on the bar every 10 seconds.
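+
+A minimal sketch of this marker behavior (the normalized coordinates, movement speed and tolerance value are our assumptions; only the 10-second random jump is taken from the description):

```python
import random

class TargetMarker:
    """Sketch of the target marker on the horizontal bar (hypothetical
    parameters; only the 10 s random jump comes from the paper)."""

    def __init__(self, jump_interval: float = 10.0, tolerance: float = 0.05):
        self.position = 0.5        # normalized x position in [0, 1]
        self.direction = 1.0       # back-and-forth movement direction
        self.tolerance = tolerance
        self.jump_interval = jump_interval
        self._since_jump = 0.0

    def update(self, dt: float, speed: float = 0.1) -> None:
        """Advance the marker by `dt` seconds; jump randomly every 10 s."""
        self._since_jump += dt
        if self._since_jump >= self.jump_interval:
            self.position = random.random()   # jump to a random position
            self._since_jump = 0.0
            return
        self.position += self.direction * speed * dt
        if not 0.0 <= self.position <= 1.0:   # reverse at the bar's edges
            self.direction *= -1.0
            self.position = min(max(self.position, 0.0), 1.0)

    def in_tolerance(self, user_position: float) -> bool:
        """True if the input device position lies within the tolerance range."""
        return abs(user_position - self.position) <= self.tolerance
```

+The `in_tolerance` check corresponds to the green/red feedback of the slide control shown in Figure 3.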
+
+
+
+Figure 2: Left: Catheter Navigation Task. Right: Prototype for the simulation of that task.
+
+
+
+Figure 3: Slide control for transmission and feedback of the user input. The dark line element indicates the direction. If it is within the tolerance range (thin lines / contour), the display is green (bottom). Outside this range, the display turns red (top).
+
+### 4.3 Image Manipulation as a secondary task
+
+Current angiography systems offer the possibility to interact intra-operatively with image data on a monitor. Usually, one or more monitors are located directly in the radiologist's field of view, where various information such as live and reference images, patient data and system parameters is displayed. The displays are divided into different areas with an individually configurable layout. The selection of elements, the change of different views and modes, the calling of certain functions and the manipulation of images, such as scrolling in data sets, changing contrasts or adjusting the magnification factor, are common interactions in such systems. In our work, these actions are summarized under the term secondary task. This secondary task can be seen as a combination of two fundamental interactions:
+
+#### 4.3.1 Selection
+
+Within the system, medical image data sets or viewports on the screen can be selected, functions executed or modes changed. The desired object or view on the monitor interface is selected first and the selection is then confirmed. The modalities which are available in current systems are located on the control panel which enables one- or two-dimensional interaction by means of keys, joysticks and touch screens.
+
+#### 4.3.2 Manipulation
+
+The selected data can then be manipulated in one or two dimensions using the same control elements. Frequent tasks are scrolling through image series, magnification of specific structures (zoom), shifting the visible image section (pan) and changing the image contrast and brightness (windowing). Further, these input methods are used to rotate and enlarge 3D volume data sets.
+
+
+
+Figure 4: Process of visual abstraction of medical imagery.
+
+### 4.4 Abstraction
+
+During our clinical observation, the interaction steps in the treatment of an arteriovenous malformation (AVM) in the brain turned out to be the foundation for our secondary interaction task. An AVM is a congenital abnormal tangle of blood vessels in which arteries are directly connected to veins, disrupting normal blood flow and oxygen supply. It can be visualized with imaging techniques such as digital subtraction angiography (DSA). The radiologist's goal is to locate these malformations and treat them accordingly. Fluoroscopic X-ray images often have relatively low contrast and low resolution. It is difficult to precisely recognize details, such as the position of the catheter, within such images and to distinguish overlapping blood vessels. To get the most out of the images, the radiologist should be able to zoom and pan into the region of interest.
+
+We abstracted a typical image recording of an AVM, as can be seen in Figure 4. [A] shows an original recording of an AVM. In the next step [B], the size of the entire structure (red line) and the size of the AVM (blue circle) were marked. In [C], the region of interest is located between the viewfinder (blue line) and the whole structure (red line). Only within this radius were randomized target elements distributed, and during the task, the region of interest must be brought into the target position. The surrounding vessels might serve as orientation features when panning and zooming and are replaced by a pattern of geometric shapes in shades of grey. Four different patterns (ellipse, rectangle, triangle, hexagon) were designed to simulate different images. The abstracted task view can be seen in image [D].
+
+## 5 INTERACTION METHODS
+
+In the following, we describe direct interaction techniques that allow performing the secondary task while the hands are occupied. We focused on interaction methods that occupy neither the hands, which are used to perform the medical primary task, nor the feet, since foot pedals are a standard method to control angiography systems; body-worn equipment such as sensors or head-mounted eye-tracking devices is ruled out due to sterility concerns. Hatscher et al. found that contactless interactions via speech or gesture are suitable, but that interaction with the feet is most effective and also most accepted by the users [7]. Johnson et al. also describe that foot pedals in the OR are used to trigger image capture and that both these and the hands (holding and manipulating the wire and catheter) are extremely busy, which is why we decided not to strain these extremities any further [12].
+
+### 5.1 Selection
+
+Selecting a specific element on the graphical user interface can be divided into the subtasks pointing and confirmation. Head movements are used to allow pointing without using the hands or fingers. Since only a coarse selection of a dedicated range is necessary, changing the head direction serves as a pointing tool. The cursor's position is directly mapped to the direction the face is pointing. This corresponds to the natural way of turning the head towards an object of interest, as it is used, for example, for head-mounted displays [14]. Confirmation of the element pointed at is done via voice commands or head gestures.
+
+The voice command to confirm a selection is "select", which has to be uttered while the head-controlled cursor points at the desired object. To release the current selection, the user has to say "exit". For a completely hands-free workflow, the commands "start" for system activation and "stop" for deactivation act as a clutching mechanism to avoid unintentional input [4].
+
+Confirmation of a selection with head gestures is done with a nod. Similar to voice commands, the cursor has to stay over the object to be selected while performing the gesture. Shaking the head deselects the current object. Both head gestures are further used for activation and deactivation. An overview of all selection functions and corresponding input methods can be found in Table 1.
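The direct mapping from head direction to cursor position can be sketched as follows; the angular ranges that reach the screen edge and the screen resolution are illustrative assumptions.

```python
def head_to_cursor(yaw_deg, pitch_deg, screen_w=1920, screen_h=1080,
                   yaw_range=30.0, pitch_range=20.0):
    """Map head direction to a cursor position on screen.

    yaw_deg: negative = looking left, positive = looking right.
    pitch_deg: negative = looking down, positive = looking up.
    yaw_range/pitch_range: head angles (degrees) that reach the screen edge;
    these values and the resolution are assumptions for illustration.
    """
    nx = max(-1.0, min(1.0, yaw_deg / yaw_range))      # normalise to -1..1
    ny = max(-1.0, min(1.0, pitch_deg / pitch_range))
    x = int((nx + 1.0) / 2.0 * (screen_w - 1))
    y = int((1.0 - (ny + 1.0) / 2.0) * (screen_h - 1))  # screen y grows downward
    return (x, y)
```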
+
+Table 1: Hands-free selection techniques
+
+| Selection | Voice commands | Head gestures |
+| --- | --- | --- |
+| System activation | "Start" | Nod |
+| Selection (point + confirm) | Head pointing + "Select" | Head pointing + Nod |
+| Deselection | "Exit" | Shake |
+| System deactivation | "Stop" | Shake |
+
+### 5.2 Continuous Manipulation
+
+For image manipulation, we focused on panning and zooming, as they correspond to 1D and 2D input and can therefore be transferred to a wide range of HCI tasks. The choice of manipulation techniques is rooted in our observations in the medical context. In this case, for example, hands-free input can be achieved by leaning and with facial expressions. Other possible inputs, such as from the shoulders, may be less suitable for these purposes, as they can have a greater impact on the primary task.
+
+Based on our observations during a radiological intervention, leaning closer to a display indicated the need to see more details. Therefore, leaning to the front is used for zooming in while leaning back zooms out. The zoom level is mapped to the leaning angle of the user. The position of the upper body is set as 100% magnification (no zoom) when the desired view is selected using one of the selection methods described in Table 1.
+
+Lowering and lifting the eyebrows is used for zooming in a similar fashion. When trying to recognize something in the distance, one lowers the eyebrows to focus on the target and recognize it more clearly. Accordingly, lowering the eyebrows starts zooming in while raising them zooms out. Since the position of the eyebrows is hard to control and to measure, this method applied a fixed rate of change once the eyebrows were lifted over a certain upper threshold or lowered below a lower threshold. During both methods, panning can be performed simultaneously by head pointing.
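The fixed-rate thresholding for eyebrow input can be sketched as follows; the threshold values and the rate of change are illustrative assumptions, while the dead zone between the thresholds reflects the low resolution of the movement described above.

```python
def eyebrow_zoom_rate(brow_height, lower=-0.2, upper=0.2, rate=0.25):
    """Fixed-rate zoom from the eyebrow position.

    brow_height: normalised deviation from the calibrated neutral position
    (negative = lowered, positive = raised); thresholds and rate are assumed.
    Returns a zoom rate per second: positive zooms in, negative zooms out.
    """
    if brow_height < lower:      # eyebrows lowered -> zoom in
        return rate
    if brow_height > upper:      # eyebrows raised -> zoom out
        return -rate
    return 0.0                   # inside the dead zone: no change
```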
+
+Both manipulation techniques are based on passive actions which, according to Oviatt [21], are defined as unintentional behaviour. We have taken this as an opportunity to transform these passive actions into active interaction techniques so that the interaction corresponds to the natural behaviour.
+
+Table 2: Hands-free manipulation techniques
+
+| Manipulation | Full body movement | Facial expressions |
+| --- | --- | --- |
+| Zoom in | Lean to front | Lower eyebrows |
+| Zoom out | Lean to back | Raise eyebrows |
+| Pan | Head pointing | Head pointing |
+
+## 6 EVALUATION
+
+### 6.1 Study design
+
+We manipulated three independent variables. First, we varied whether participants performed the concurrent primary task (PT) while interacting with the images to investigate the influence of the task on the interaction methods and vice versa. The presence of a primary task was varied between subjects. The second variable was the frame selection method (speech-based or gesture-based). Finally, the image manipulation method varied between leaning manipulation and eyebrow manipulation. The latter two variables were within-subject variables. We investigated three dependent variables. Task completion time was recorded for each trial. We also measured the proportion of time spent outside the primary task target area (hereafter called the error time). We assessed this as an indicator of primary task performance. Finally, we analyzed the overall unweighted NASA-TLX rating (RAW TLX) as an indicator of the subjective perception of the interaction concepts.
+
+We divided the participants into two test groups. Test group A tested all conditions of the experiment with the primary task, while test group B went through the same conditions without the primary task. One input modality for selection (head gestures or voice) was combined with one input modality for manipulation (full body movement or facial expression). Each subject went through all four possible modality combinations to perform the secondary task. The assignment and order of the combinations was randomized so that, where possible, no identical modality followed another.
+
+### 6.2 Participants
+
+Our goal was to gain general insights into the developed hands-free interaction techniques and their interaction concerning cognitive and physical stress in a scenario similar to surgery. For this reason, we selected a heterogeneous group of participants with medical, technical and creative backgrounds.
+
+We recruited 16 participants (10 female, 6 male) from the environment of our university, aged between 22 and 38 (M = 26.9; SD = 4.3). The user study lasted between one and one and a half hours, and participants received between 15 and 30 Euro. (Due to recruitment problems, the remuneration had to be increased for half of the total participants, mainly medical students.)
+
+Seven subjects were students of human medicine; the others had a technical or creative background. Regarding their background, the participants were equally assigned to the test groups with and without the primary task. Seven out of eight subjects in group A indicated the right hand as dominant. This information was necessary because the system and the instrument had to be adjusted and aligned accordingly. It was also ascertained whether there was a speech disorder, in order to check for possible complications with speech input; all participants denied the question. Further, the participants were asked about visual disorders; only persons whose defective vision was not too pronounced were invited, because a spectacle frame can partially impede the interaction with the eyebrows. Through self-testing, it was established in advance that the system could be operated without restriction in the case of low short-sightedness. Six people reported a visual impairment and one person a color vision impairment. Finally, the participants rated their prior knowledge in the areas of human-computer interaction, gesture control, tracking, and voice control on a Likert scale from none (1) to very experienced (5), in order to examine whether and, if so, what influence the respective skills and knowledge have on operating the system.
+
+### 6.3 Participant task
+
+In the starting position, the participant sees four equally sized segments arranged in a grid, each accommodating a pattern of different geometric forms and a dashed contour of the respective form in its center. A dark, semi-transparent surface lies above the segments and signals a standby mode. In the lower area of the interface, a status light indicates whether a user has been tracked, and a text field shows which input has been made. Once a user has been tracked, a cursor is displayed that can be moved by changing the head direction. After entering the initial start command (head gesture or voice), the interface clears up, the input mode displays the current status "Start", and the slider is activated and can be moved. In addition, individual shapes from the background pattern appear dark due to a random selection by the system. The user can now select the segment at which he or she points with the cursor by a selection method (head gesture or voice). When the command is entered, the selected area is highlighted with a blue frame and the word "Select" appears in the area of the input mode. The participant is now directly in zoom mode and can manipulate the image. At the same time, the cursor disappears because it coincides with the geometric contour. The user enlarges the texture in the respective segment with the respective manipulation technique (leaning or eyebrows), intending to bring the dark geometric form to the size and position of the viewfinder (dashed contour). At the same time, the user moves the slider, which visualizes the correctness of the input by a color change between green and red. When position and size match, the contour disappears, and the dark element turns blue to indicate that the task is completed. The blue frame remains until the deselection command "Exit" is entered, which is also displayed in the input mode. After this input, the cursor appears again. As soon as all elements in the four segments have been enlarged and the user is in selection mode, the user can terminate the system with a final command (head gesture or voice). The input mode shows the word "Stop", the upper area is faded over again, and the slider becomes inactive. The essential system states are summarized in Figure 5.
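The essential system states and commands described above can be summarized as a small transition table. The command names match the study; the state names and the transition table itself are a reconstruction from the task description, and the sketch omits the per-segment completion logic.

```python
# Reconstructed state machine: standby -> selection mode -> manipulation mode.
# Commands invalid in the current state are ignored, as in the described UI
# (e.g. "stop" is only possible while in selection mode).
TRANSITIONS = {
    ("standby", "start"): "selection",        # interface clears, slider active
    ("selection", "select"): "manipulation",  # blue frame, zoom mode
    ("manipulation", "exit"): "selection",    # deselect, cursor reappears
    ("selection", "stop"): "standby",         # upper area faded, slider inactive
}

def step(state, command):
    """Return the next state; invalid commands leave the state unchanged."""
    return TRANSITIONS.get((state, command), state)
```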
+
+### 6.4 Setup
+
+#### 6.4.1 Software
+
+The software prototype was implemented in Unity 3D 2018.2 using C#. The voice and gesture control was realized with the included SDK (Software Development Kit) of Microsoft Kinect v2.
+
+#### 6.4.2 Body-Movements
+
+The Kinect provides camera coordinates to locate 3D points in space. In addition to color and depth information, it also provides integrated skeleton tracking for capturing the human body, which determines whether a user is present and in what position he or she is relative to the room. When an input for selection is made (head gesture or voice), a zero point is set. If the user leans forward from this point up to a maximum of 40 cm (about 20°), the image is enlarged (maximum magnification level 300%). This leaning angle is similar to the one we observed and is mapped to a rate of change, i.e. straight pose = no zoom; leaning forward a little = zooming in slowly; leaning forward a lot = zooming in fast. Hann et al. present a similar zoom technique in a virtual reality concept for GI endoscopy [6].
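The mapping from leaning distance to zoom rate can be sketched as follows. The 40 cm range and the 100-300% magnification limits come from the text; the maximum rate per second is an illustrative assumption.

```python
def lean_zoom_rate(lean_cm, max_lean_cm=40.0, max_rate=0.5):
    """Map leaning distance from the calibrated zero point to a zoom rate.

    lean_cm > 0 = leaning forward (zoom in), < 0 = leaning back (zoom out).
    The further the lean, the faster the zoom; max_rate (magnification
    change per second) is an assumption for illustration.
    """
    clamped = max(-max_lean_cm, min(max_lean_cm, lean_cm))
    return (clamped / max_lean_cm) * max_rate

def apply_zoom(magnification, rate, dt):
    """Integrate the rate, clamped to the 100%..300% range from the text."""
    return max(1.0, min(3.0, magnification + rate * dt))
```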
+
+#### 6.4.3 Facial Expressions
+
+For the interpretation of facial expressions, the face-tracking capabilities of the Kinect SDK were used to access the feature points needed to implement the interaction with the eyebrows. Three points on the face were recorded for this purpose: the centers of the left and right eyebrows and the nose tip. From these, the change in distance between the eyebrows and the nose tip was determined.
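The eyebrow feature can be sketched as follows, assuming the distance in question is the eyebrow-to-nose-tip distance averaged over both eyebrows and compared against a calibrated baseline; the exact definition is not specified in the text.

```python
import math

def brow_feature(left_brow, right_brow, nose_tip):
    """Average eyebrow-to-nose-tip distance from three tracked face points.
    Each point is an (x, y) tuple; raising the eyebrows increases the value."""
    d_left = math.dist(left_brow, nose_tip)
    d_right = math.dist(right_brow, nose_tip)
    return (d_left + d_right) / 2.0

def brow_deviation(current, baseline):
    """Deviation from the calibrated neutral value:
    positive = raised eyebrows, negative = lowered eyebrows."""
    return current - baseline
```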
+
+#### 6.4.4 Head Movement
+
+By rotating the head, the user can move a cursor and also pan images. To implement the head gestures, direction vectors were created from the orientation coordinates. Vector changes within an interval were determined, and different states were checked to decide whether a change in head position constituted a head gesture (nodding or shaking). Thus the physical strain was kept as low as possible, because even minimal movements of the head could be detected. The received data were smoothed by exponential smoothing to obtain smoother movements.
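A minimal sketch of the nod recognition, combining the exponential smoothing mentioned above with an interval-based check of the pitch swing; the smoothing factor and the swing threshold are illustrative assumptions (shake detection would apply the same scheme to yaw).

```python
class HeadGestureDetector:
    """Detect a nod from smoothed head pitch samples (degrees)."""

    def __init__(self, alpha=0.3, threshold=5.0):
        self.alpha = alpha          # exponential smoothing factor (assumed)
        self.threshold = threshold  # pitch swing counted as a nod (assumed)
        self.smoothed = None
        self.min_pitch = self.max_pitch = 0.0

    def update(self, pitch):
        """Feed one raw pitch sample; return True once a nod is recognised."""
        if self.smoothed is None:   # first sample initialises the filter
            self.smoothed = pitch
            self.min_pitch = self.max_pitch = pitch
            return False
        # Exponential smoothing of the raw orientation data.
        self.smoothed = self.alpha * pitch + (1 - self.alpha) * self.smoothed
        self.min_pitch = min(self.min_pitch, self.smoothed)
        self.max_pitch = max(self.max_pitch, self.smoothed)
        # A sufficiently large swing within the observed interval is a nod.
        if self.max_pitch - self.min_pitch > self.threshold:
            self.min_pitch = self.max_pitch = self.smoothed  # reset interval
            return True
        return False
```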
+
+#### 6.4.5 Voice Recognition
+
+A list of the defined signal words was created ("Start", "Select", "Exit", "Stop") and we used the Kinect SDK's language package for keyword detection. The confidence level for keyword recognition was set to low throughout, which meant that even speech input by non-native speakers was recognized consistently well.
+
+#### 6.4.6 Data Log
+
+A data logger was integrated, which records command inputs and outputs. The values are stored in a log file so that they can be converted into tables for later evaluation. When the user starts processing the secondary task by the initial execution of the respective interaction, a time counter runs until the user completes the task with a command (task completion time). Furthermore, the incorrect input (time out) of the user was recorded, i.e., how long the input was outside the tolerance range.
+
+#### 6.4.7 Hardware Prototype
+
+The physical prototype was implemented with the physical computing platform Arduino (model Uno R3), its associated development environment and a distance sensor. The sensor sits in a tube and measures the distance to the rod guided by the user in a range of 0-100 mm with a resolution of 1 mm (deviation 3%). The user input was transferred 1:1 via USB to Unity and displayed as a slider in the interface. The user moves a vertical line in horizontal direction with the aim to keep this line within a defined tolerance range. As long as the line is within the range, the slider bar remains green; if it is left, the bar turns red. The components were brought together into a stable housing using 3D printing.
+
+The experiment was carried out in a laboratory with one subject and one investigator. Similar dimensions were chosen to simulate the overall structure of an operating theatre. The subject was exactly 180 cm away from a 75" monitor, measured from the edge of the table to the display. Above the display, a Microsoft Kinect was installed, which was adjustable in angle. In between was a table on which the hardware prototype for the primary task was located. The monitor and table were height-adjustable and adapted individually to each test subject. The setup of the user test is illustrated in Figure 6.
+
+
+
+Figure 5: The upper part of the interface is divided into four segments with different geometric shapes. Further down, information on the input mode, a status light (user tracked) and the interactive slider are displayed as visual feedback on the primary task. [A] shows the manipulation mode in which a segment was selected (blue frame) and an image enlargement takes place. In [B] the user is in selection mode and has the possibility to move a cursor to the desired area and select it.
+
+
+
+Figure 6: Subject (a) stands in front of a monitor (b) on which a Microsoft Kinect (c) is installed. In between is the control element of the primary task (d) and behind it a computer (e) that processes the data.
+
+### 6.5 Study procedure
+
+After the participants agreed to the study and demographic data were collected, the presentation of the prototype and the instruction of the individual tasks began. It was explained that the situation of a radiological intervention was simulated, by briefly informing the participant about the activities of the radiologist. Then the interface was explained, as well as the various modalities for carrying out the tasks. It was pointed out that the secondary task should be performed as quickly as possible. If the subject was in test group A (with the primary task), the primary task on the physical prototype was explained as an adapted version of the catheter navigation task, which should be performed as accurately as possible. Test group B did not receive this explanation.
+
+The system was individually adapted to each person before the actual execution began. Monitor height, the height of the physical prototype, handedness (right or left) as well as the tracking system were adjusted to the participant's body size. The face also had to be calibrated to allow interaction with the eyebrows.
+
+In order to get a feel for the interaction with the physical prototype and to be able to determine deviations, it was measured for test group A how long participants could remain within the tolerance range. The upper part of the interface was hidden, so only the slider was visible. The measurement of the deviation (baseline) took place over a period of 90 seconds. For test group B, the slider was hidden entirely to avoid additional distraction.
+
+In the first step before the execution, the control of the cursor was practiced with the help of the head rotation. Information cards for the assigned combinations were placed in the field of view of the participant.
+
+Subsequently, training runs were carried out for each combination/interaction technique. When the participants stated they were ready, the first run started and time was measured. After three identical trials, the data (task completion time and time-out) were recorded, and the subject received a NASA-TLX questionnaire. The remaining three combinations were performed according to the same procedure. Finally, a semi-structured interview was conducted, in which the general feeling during the execution of the experiment was discussed and participants had the opportunity to give feedback.
+
+### 6.6 Data analysis
+
+All dependent variables were averaged for each of the three trials with identical conditions that each participant encountered. Three-way analyses of variance (ANOVAs) were conducted for the task completion time and the overall TLX rating. A two-way ANOVA was conducted for the error time. The investigator's observations and participants' comments were qualitatively reviewed and clustered by two investigators.
+
+### 6.7 Results
+
+The effects that we found in our inferential statistical analysis are reported in Table 3. We found main effects on the task completion time for the presence of a primary task (Fig. 8) and for the manipulation method (Fig. 7). The manipulation method also showed a main effect on the participants' overall TLX rating (Fig. 10). Within our sample, we observed a trend suggesting that the presence of a primary task may affect the TLX rating (Fig. 9). Although this main effect was not significant, the potential effect size is considerable (partial $\eta^2 = 0.049$). The manipulation method showed a significant main effect on the TLX rating. Neither the selection method nor the manipulation method showed significant effects on the error time (i.e. on primary task performance). However, the selection method had a potentially considerable effect size (partial $\eta^2 = 0.084$) that should be investigated further in future studies (Figure 11). We observed no significant interaction effects.
+
+Table 3: Overview of the significant effects found in the evaluation study.
+
+| Dependent variable / effect type | Factor | Degrees of freedom | F-value | p-value | Partial $\eta^2$ |
+| --- | --- | --- | --- | --- | --- |
+| Task completion time | | | | | |
+| Main effect | Primary task | 1 | 8.5 | 0.005* | 0.132 |
+| Main effect | Manipulation method | 1 | 7.97 | 0.007* | 0.125 |
+| TLX rating | | | | | |
+| Main effect | Manipulation method | 1 | 5.08 | 0.028* | 0.083 |
+
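As a plausibility check of the reported effect sizes, partial $\eta^2$ relates to the F statistic via partial $\eta^2$ = (F · df_effect) / (F · df_effect + df_error). The error degrees of freedom are not reported; assuming df_error = 56 (a reconstruction, not stated in the paper) reproduces all three reported values after rounding.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# df_error = 56 is an assumption (not reported in the table); with it,
# the three reported effects round to the published partial eta squared
# values of 0.132, 0.125 and 0.083.
for f, eta in [(8.5, 0.132), (7.97, 0.125), (5.08, 0.083)]:
    assert round(partial_eta_squared(f, 1, 56), 3) == eta
```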
+
+
+Figure 7: Influence of the manipulation method on task completion time. Error bars show standard error
+
+
+
+Figure 8: Influence of the presence of a primary task on task completion time. Error bars show standard error
+
+## 7 Discussion
+
+In comparing task completion time between the group with a primary task condition and the one without, we found that performing the secondary interaction takes longer when a primary task has to be fulfilled simultaneously. This is the expected result as additional time is required to switch the focus between both tasks.
+
+As a selection method, head gestures and voice commands were compared for system activation and selection. Pointing was done using head movements in both methods. Within our sample, we observed an effect of the selection method on the time spent outside the target range for the primary task, indicating that head gestures may influence the primary task less than voice commands. The effect was not found to be significant, but it showed a considerable effect size; the non-significance may thus be due to our limited sample size, and this possible effect should be investigated further. Comments during the study also support an advantage of head gestures over voice commands, as nodding and shaking the head are easy to distinguish and to remember compared to voice commands. On the other hand, using speech as an input channel was described as intuitive, while shaking the head was found uncomfortable and imprecise.
+
+
+
+Figure 9: Influence of the presence of a primary task on subjective workload. Error bars show standard error
+
+
+
+Figure 10: Influence of the manipulation method on subjective workload. Error bars show standard error
+
+Taking a closer look at the continuous manipulation methods for hands-free zooming reveals that full-body movement (leaning) performed significantly better than facial expressions (moving the eyebrows) in terms of task completion time. The subjective workload was significantly higher for facial expressions. A possible explanation can be found in the comments collected during the study: moving the eyebrows was deemed physically exhausting, and three participants feared that mood changes would trigger an unintended interaction. Overall, leaning seems to be a suitable, natural input method that allows faster zooming when the hands are occupied. In summary, gestures should connect to natural movements, as unusual movements such as moving the eyebrows are less precise and more exhausting.
+
+
+
+Figure 11: Influence of the selection method on the primary task error rate. Error bars show standard error
+
+Even though a medical scenario served as an exemplary scenario in this work, the tasks were abstracted in a way that no expert knowledge is required. Therefore, the findings presented in this work can be transferred to other domains with similar requirements such as flight coordination, piloting or train driving.
+
+### 7.1 Limitations
+
+The presented study took place in a lab setting, which ruled out external factors and allowed the participants to focus on the given tasks. On the downside, no account was taken of external sources of distraction which might appear in real-world scenarios. The abstracted tasks used in the presented study differed from the medical task. The focus for the primary task during a catheter intervention lies at the center of the screen instead of the lower edge. Therefore, switching between both tasks might be more demanding in our setup than in the real-world scenario. The primary task baseline was measured first during the study. Subsequent tasks might have been performed faster due to a learning effect. This effect should be avoided in the future by applying more extended training periods beforehand and assessing the baseline measures at different points during the study. The position of the Kinect sensor caused minor technical limitations. Eyebrow input was less accurate when the user was leaning towards the display, as the eyes and eyebrows were hard to detect at a steep angle. Further, raising and lowering the eyebrows was a discrete interaction compared to continuous leaning. Therefore, eyebrow input might be more suitable as a safety feature, clutching mechanism or manipulation method. Speech recognition was accurate in this setup, but in the OR it could be disturbed by conversations among users and sounds from devices present. Especially in the presented scenario, there is a danger that head and body movement can impair the extremely sensitive stability of the catheter. Foot input might be a better choice, but was disregarded as it might interfere with foot pedals as an established modality for controlling medical devices. The interaction between different foot-based input methods needs to be examined to leverage this input channel without affecting the main task.
+
+## 8 CONCLUSION AND FUTURE WORK
+
+In this work, we compared hands-free methods to support concurrent multitasking by using multimodal interaction channels. We found leaning to be a fast method for zooming tasks and gained insights into the suitability of head movements and voice commands as secondary input methods. In the future, the proposed methods need to be compared to domain-specific state-of-the-art input methods such as joysticks, touch screens or task delegation, similar to Hettig et al. or Wipfli et al. [8, 26]. Further, the performance at higher cognitive load due to environmental factors, multi-user scenarios or additional types of tasks has to be taken into account. In the long run, the proposed approach might support natural interaction for demanding scenarios, leading to fewer errors and faster task completion.
+
+## ACKNOWLEDGMENTS
+
+This work is funded by the Federal Ministry of Education and Research (BMBF) within the STIMULATE research campus (grant number 13GW0095A).
+
+## REFERENCES
+
+[1] P. D. Adamczyk and B. P. Bailey. If not now, when?: the effects of interruption at different moments within task execution. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 271-278, 2004.
+
+[2] R. F. Adler and R. Benbunan-Fich. Juggling on a high wire: Multitasking effects on performance. International Journal of Human-Computer Studies, 70(2):156-168, 2012.
+
+[3] R. Benbunan-Fich and G. E. Truman. Technical Opinion: Multitasking with Laptops During Meetings. Commun. ACM, 52(2):139-141, 2009. doi: 10.1145/1461928.1461963
+
+[4] S. Cronin and G. Doherty. Touchless computer interfaces in hospitals: A review. Health informatics journal, p. 1460458217748342, 2018.
+
+[5] S. Grange, T. Fong, and C. Baur. M/ORIS: a medical/operating room interaction system. In Proceedings of the 6th international conference on Multimodal interfaces, pp. 159-166, 2004.
+
+[6] A. Hann, B. M. Walter, N. Mehlhase, and A. Meining. Virtual reality in gi endoscopy: intuitive zoom for improving diagnostics and training. Gut, 68(6):957-959, 2019.
+
+[7] B. Hatscher, M. Luz, and C. Hansen. Foot Interaction Concepts to Support Radiological Interventions. i-com, 17(1):3-13, 2018.
+
+[8] J. Hettig, P. Saalfeld, M. Luz, M. Becker, M. Skalej, and C. Hansen. Comparison of gesture and conventional interaction techniques for interventional neuroradiology. International Journal of Computer Assisted Radiology and Surgery, pp. 1643-1653, 2017.
+
+[9] A. Hübler, C. Hansen, O. Beuing, M. Skalej, and B. Preim. Workflow Analysis for Interventional Neuroradiology using Frequent Pattern Mining. In Proceedings of the Annual Meeting of the German Society of Computer- and Robot-Assisted Surgery, pp. 165-168. Munich, 2014.
+
+[10] C. P. Janssen and D. P. Brumby. Strategic adaptation to performance objectives in a dual-task setting. Cognitive science, 34(8):1548-1560, 2010.
+
+[11] C. P. Janssen, D. P. Brumby, and R. Garnett. Natural break points: The influence of priorities and cognitive and motor cues on dual-task interleaving. Journal of Cognitive Engineering and Decision Making, 6(1):5-29, 2012.
+
+[12] R. Johnson, K. O'Hara, A. Sellen, C. Cousins, and A. Criminisi. Exploring the potential for touchless interaction in image-guided interventional radiology. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3323-3332. ACM, 2011.
+
+[13] H. Karvonen, I. Aaltonen, M. Wahlström, L. Salo, P. Savioja, and L. Norros. Hidden roles of the train driver: A challenge for metro automation. Interacting with Computers, 23(4):289-298, 2011.
+
+[14] E. Larsen, F. Umminger, X. Ye, N. Rimon, J. R. Stafford, and X. Lou. Methods and systems for user interaction within virtual reality scene using head mounted display, Sept. 11, 2018. US Patent App. 10/073,516.
+
+[15] W.-C. Li, G. Braithwaite, M. Greaves, C.-K. Hsu, and S.-C. Lin. The evaluation of military pilot's attention distributions on the flight deck. In Proceedings of the International Conference on Human-Computer Interaction in Aerospace, HCI-Aero '16, pp. 15:1-15:6. ACM, New York, NY, USA, 2016. doi: 10.1145/2950112.2964588
+
+[16] G. Mark, S. T. Iqbal, M. Czerwinski, P. Johns, and A. Sano. Neurotics can't focus: An in situ study of online multitasking in the workplace. In Proceedings of the 2016 CHI conference on human factors in computing systems, pp. 1739-1744, 2016.
+
+[17] U. Metzger and R. Parasuraman. The role of the air traffic controller in future air traffic management: An empirical study of active control versus passive monitoring. Human Factors, 43(4):519-528, 2001. PMID: 12002002. doi: 10.1518/001872001775870421
+
+[18] C. A. Monk, J. G. Trafton, and D. A. Boehm-Davis. The effect of interruption duration and demand on resuming suspended goals. Journal of Experimental Psychology: Applied, 14(4):299, 2008.
+
+[19] J. Nielsen. Noncommand user interfaces. Communications of the ACM, 36(4):82-100, 1993.
+
+[20] K. O'Hara, G. Gonzalez, A. Sellen, G. Penney, A. Varnavas, H. Mentis, A. Criminisi, R. Corish, M. Rouncefield, N. Dastur, et al. Touchless interaction in surgery. Communications of the ACM, 57(1):70-77, 2014.
+
+[21] S. Oviatt. Multimodal interfaces. In The human-computer interaction handbook, pp. 439-458. CRC press, 2007.
+
+[22] D. D. Salvucci and P. Bogunovich. Multitasking and monotasking: the effects of mental workload on deferred task interruptions. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 85-88, 2010.
+
+[23] D. D. Salvucci, N. A. Taatgen, and J. P. Borst. Toward a Unified Theory of the Multitasking Continuum: From Concurrent Performance to Task Switching, Interruption, and Resumption. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 1819-1828. ACM, New York, NY, USA, 2009. doi: 10.1145/1518701.1518981
+
+[24] J. G. Trafton, E. M. Altmann, D. P. Brock, and F. E. Mintz. Preparing to resume an interrupted task: Effects of prospective goal encoding and retrospective rehearsal. International Journal of Human-Computer Studies, 58(5):583-603, 2003.
+
+[25] C. D. Wickens. Multiple resources and performance prediction. Theoretical issues in ergonomics science, 3(2):159-177, 2002.
+
+[26] R. Wipfli, V. Dubois-Ferrière, S. Budry, P. Hoffmeyer, and C. Lovis. Gesture-controlled image management for operating room: a randomized crossover study to compare interaction using gestures, mouse, and third person relaying. PLoS ONE, 11(4):e0153596, 2016.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..410899b10732c68ab70f7fb2f1a8604c0b3d0743
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/VZs3gFWv7Y/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,328 @@
+§ LEAN-INTERACTION: PASSIVE IMAGE MANIPULATION IN CONCURRENT MULTITASKING
+
+Danny Schott*
+
+Otto von Guericke University
+
+Benjamin Hatscher†
+
+Otto von Guericke University
+
+Steffi Hußlein¶
+
+Magdeburg-Stendal University of Applied Sciences
+
+Fabian Joeres‡
+
+Otto von Guericke University
+
+Mareike Gabele§
+
+Otto von Guericke University
+
+Christian Hansen‖
+
+Otto von Guericke University
+
+§ ABSTRACT
+
+Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect the main task performance. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a hands-free image manipulation task during a primary manual task with an upright stance. Essential tasks were extracted from the example of clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the user's performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture and speech input. We found that leaning as an interaction technique facilitates significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.
+
+Keywords: Multimodal Interaction; Multitasking; Hands-free Interaction; Radiology; Medical Domain
+
+Index Terms: Human-centered computing - Human-computer interaction (HCI) - Interaction techniques; Human-centered computing - Human-computer interaction (HCI) - Interaction paradigms
+
+§ 1 INTRODUCTION
+
+Multitasking is an essential part of today's working environments, whether intentionally or involuntarily through a plethora of devices and communication channels. In this context, working on a screen while interruptions appear occasionally is a widely investigated scenario [3, 16, 22].
+
+Even though it is important to understand how recovery from interruptions works, there are scenarios where the primary task is essential and cannot be interrupted completely, but still, additional information is required to reach the overall goal. Prominent examples of such demanding scenarios are flight coordination [17], piloting [15], train driving [13] and medicine [20]. The focus here shifts from recovery from distraction to how to maintain an acceptable level of performance for a specific task during concurrent multitasking.
+
+To find suitable input methods that fit these needs, we propose a 3- step approach: First, an exemplary scenario from the medical context is chosen and investigated in-depth to understand the restrictions and requirements of a representative real-world worst case as well as the occurring combinations of tasks. Second, input methods that potentially require notably low cognitive resources are selected, informed by the multiple resource theory [25], the passive input paradigm [21] and observations. Third, the scenario is abstracted to minimize the influence of domain knowledge and evaluated in a user study using a between-subject design.
+
+Figure 1: We took the complex activities of a radiologist as a starting point to differentiate between primary and secondary tasks (left picture). We created a simulation of this scenario by abstracting these tasks and developed natural input modalities for hands-free control (right picture).
+
+The main contributions of this work are the development, abstraction, demonstration and evaluation of a free-hand input modality for Zoom and Pan based on natural behavior with minimal influence on a primary task. This may lead to safer, faster and cheaper use in demanding interaction scenarios.
+
+Analytical observation revealed which natural movements are suitable as complementary input modalities. When getting an overview of a picture, the participants mainly moved their upper bodies forward and backward, while simultaneously moving their eyebrows up and down. The evaluation of a prototypical setup showed that leaning is suitable as a secondary input modality since it has minimal influence on the primary task. A secondary input based on the movement of the eyebrows, in contrast, proved to be unsuitable as a manipulation technique: it is less accurate and more physically demanding.
+
+*e-mail: danny.schott@ovgu.de
+
+†e-mail: benjamin.hatscher@ovgu.de
+
+‡e-mail: fabian.joeres@ovgu.de
+
+§e-mail: mareike.gabele@ovgu.de
+
+¶e-mail: steffi.husslein@hs-magdeburg.de
+
+‖e-mail: christian.hansen@ovgu.de
+
+§ 2 RELATED WORK
+
+§ 2.1 CONCURRENT MULTITASKING
+
+Performing two or more tasks in parallel or in short succession is called multitasking. Salvucci et al. proposed a continuum ranging from concurrent interaction to sequential multitasking [23]. Adler and Benbunan-Fich found that the degree of multitasking influences productivity and accuracy differently: medium multitaskers are more productive than low or high multitaskers, while accuracy decreases with the degree of multitasking [2]. Interruptions lead to additional time to resume the primary task [18, 24] and are more disruptive when occurring at points of higher workload than during low-workload phases [1]. It comes as no surprise that users tend to defer interrupting tasks to periods of lower workload [22]. A dual-task study by Janssen et al. concludes that people can strategically focus their attention on certain tasks in order to achieve a certain performance, and that the choice of strategy has a major impact on task completion [10, 11]. Wickens' multiple resource theory assumes that multiple cognitive resources are tied to different modalities and that tasks only conflict if they tap into the same resource [25]. This implies that the brain is able to separate different cognitive processes from each other. These findings lead us to the idea that, if the influence of a secondary task on a concurrent primary task is to be minimized, investigating multimodal interaction approaches seems plausible.
+
+§ 2.2 MULTIMODAL INPUT
+
+According to Oviatt, human-computer input modes can be separated into active and passive ones [21]. Active modes require intentional action by the user, while passive modes rely on unintentional action or behavior. Using passive modes as input lowers the cognitive effort as no explicit command has to be given by the user. This idea connects to non-command interfaces by Nielsen, which derive user intentions from observing the user [19]. With the goal of minimized influence in primary task accuracy in mind, it could be hypothesized that passive input modes reduce the influence of the secondary task even further than active input modes.
+
+§ 3 MULTITASKING IN INTERVENTIONAL RADIOLOGY
+
+To investigate the idea of multimodal input for concurrent multitasking scenarios, we picked interventional radiology as a concrete example. During an intervention, interaction with medical images is required simultaneously with the high-priority task of instrument handling. During radiological interventions in general, a needle or catheter is inserted through small incisions into the patient's body. In order to navigate the instrument to the targeted pathological structures to be treated, the radiologist relies on real-time image data gathered by imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US), or accesses images recorded before the intervention [14]. A common workaround in clinical practice is installing an assistant as a proxy user to maintain asepsis (and due to workflow restrictions), which is time-consuming and error-prone compared to direct interaction [5, 9, 20] and is rated by clinicians as least practicable [26].
+
+To gain insights into challenges in clinical practice, we observed a radiological intervention and conducted a semi-structured interview with an experienced radiologist regarding the challenges of human-computer interaction tasks. In the following, we report observations and expert comments and derive the difficult secondary tasks.
+
+During the observation, the radiologist wanted to take a closer look at the details of the latest x-ray image. A range of strategies to achieve this was observed during the intervention.
+
+ * Leaning closer: The radiologist leans over the patient table towards the display showing the image.
+
+ * Moving the display: An assistant is asked to move the display closer to the physician.
+
+ * Pointing gesture: While examining the image, the radiologist points at a specific spot on the screen without touching it and instructs an assistant to enlarge this area of the image.
+
+ * Verbal task delegation: When both hands are occupied with guiding the catheter, an assistant is instructed verbally to adjust the image section displayed on the screen.
+
+The expert interview, which aimed at understanding concrete action sequences concerning image selection and zoom, was conducted to investigate the potential for direct interaction methods. Overall, the radiologist assessed the operation of the current system very positively, but restrictions in the operation of the control panel were often perceived as disturbing. It is particularly interesting to note that when navigating through the images, there is no need to look at the joystick, which suggests that hand-eye coordination has been perfected by routine. It was pointed out that different layouts of the x-ray images are used according to the user's preference. The radiologist interviewed, for example, prefers a permanent full-screen view in order to be able to see details better on the screen. In this context, he emphasized the necessity and current problems of zoom functions: "Definitely necessary [...] In principle, zooming is possible, but it is connected with a lot of manual actions and therefore not popular among us". In the angiography suite used for the observed intervention, this function is not available quickly enough. The desired image segment must first be selected, then the image settings must be reached with the help of a touch interface, whereby the view can then be enlarged by 150%. Afterward, the image section can be moved with a joystick. Zooming, in his case, takes place in fluoroscopic 2D as well as in 3D volume data. The respondent himself usually performs this task, because describing the target to the MTRA (radiology technician) would be too inefficient. For an efficient operation of the system, "[...] infinitely variable magnification and pan is indispensable for a useful zooming function".
+
+§ 4 CONCEPT DEVELOPMENT: ABSTRACTING MEDICAL TASKS
+
+§ 4.1 CATHETER NAVIGATION AS PRIMARY TASK
+
+The task referred to below as "primary task" simulates the medical task of a physician during a radiological intervention. In this context, the task has the highest priority for the physician, because his primary goal is to perform the intervention and to ensure optimal care for the patient. During a catheter intervention, instruments must be held and guided to the correct position in the body. The correct alignment by moving the tip back and forth requires a lot of concentration and is made more difficult by the patient's breathing and other movements.
+
+§ 4.2 ABSTRACTION
+
+Our implementation aims to create a situation in which the user performs a motor activity with their hands. The manual movements of a radiologist are imitated using the example of guiding a catheter through a vascular structure during an intervention. We simplified this task to forward and backward movements and created a prototypical input device (see Figure 2). An abstract visualization in the form of a horizontal bar displays the user input and a target marker. A vertical line represents the current position of the input device. A marker with a range of tolerance moves in a specific sequence along the x-axis and forces the user to pay attention to this task at all times (see Figure 3). To force the focus on the primary task even further, the marker jumps to a random position on the bar every 10 seconds.
+
+Figure 2: Left: Catheter Navigation Task. Right: Prototype for the simulation of that task.
+
+Figure 3: Slide control for transmission and feedback of the user input. The dark line element indicates the direction. If it is within the tolerance range (thin lines / contour), the display is green (bottom). Outside this range, the display turns red (top).
+
+§ 4.3 IMAGE MANIPULATION AS A SECONDARY TASK
+
+Current angiography systems offer the possibility to interact intra-operatively with image data on a monitor. Usually, one or more monitors are located directly in the radiologist's field of view, where various information such as live and reference images, patient data and system parameters are displayed. The displays are divided into different areas with an individually configurable layout. The selection of elements, the change of different views and modes, calling certain functions and the manipulation of images, such as scrolling in data sets, changing contrasts or adjusting the magnification factor are common interactions in such systems. In our work, these actions are summarized under the term secondary task. This secondary task can be seen as combinations of two fundamental interactions:
+
+§ 4.3.1 SELECTION
+
+Within the system, medical image data sets or viewports on the screen can be selected, functions executed or modes changed. The desired object or view on the monitor interface is selected first and the selection is then confirmed. The modalities which are available in current systems are located on the control panel which enables one- or two-dimensional interaction by means of keys, joysticks and touch screens.
+
+§ 4.3.2 MANIPULATION
+
+The selected data can then be manipulated in one or two dimensions using the same control elements. Frequent tasks are scrolling through image series, magnification of specific structures (zoom), shifting the visible image section (pan) and changing the image contrast and brightness (windowing). Further, these input methods are used to rotate and enlarge 3D volume data sets.
+
+Figure 4: Process of visual abstraction of medical imagery.
+
+§ 4.4 ABSTRACTION
+
+During our clinical observation, the interaction steps in the treatment of an arteriovenous malformation (AVM) in the brain turned out to be the foundation for our secondary interaction task. An AVM is a congenital abnormal tangle of blood vessels in which arteries are directly connected to veins, disrupting normal blood flow and oxygen supply. It can be visualized with imaging techniques such as digital subtraction angiography (DSA). The radiologist's goal is to locate these malformations and treat them accordingly. Fluoroscopic x-ray images often have relatively low contrast and low resolution, making it difficult to precisely recognize details such as the position of the catheter and to distinguish overlaying blood vessels. To get the most out of the images, the radiologist should be able to zoom and pan into the region of interest.
+
+We abstracted a typical image recording of an AVM, as shown in Figure 4. [A] shows an original recording of an AVM. In the next step [B], the size of the entire structure (red line) and the size of the AVM (blue circle) were marked. In [C], the region of interest lies between the viewfinder (blue line) and the whole structure (red line). Only within this radius were randomized target elements distributed, and during the task the region of interest must be brought into the target position. The surrounding vessels, which might serve as orientation features when panning and zooming, are replaced by a pattern of geometric shapes in shades of grey. Four different patterns (ellipse, rectangle, triangle, hexagon) were designed to simulate different images. The abstracted task view can be seen in image [D].
+
+§ 5 INTERACTION METHODS
+
+In the following, we describe direct interaction techniques that allow performing the secondary task while the hands are occupied. We ruled out interaction methods that occupy the hands, since they are used to perform the primary medical task, and the feet, since foot pedals are already a standard method to control angiography systems; body-worn equipment such as sensors or head-mounted eye-tracking devices is ruled out due to sterility concerns. Hatscher et al. found that contactless interaction via speech or gesture is suitable, but that interaction with the feet is most effective and also most accepted by users [7]. Johnson et al. also describe that foot pedals in the OR are used to trigger image capture and that both the feet and the hands (holding and manipulating the wire and catheter) are extremely busy, which is why we decided not to strain these extremities any further [12].
+
+§ 5.1 SELECTION
+
+Selecting a specific element on the graphical user interface can be divided into the subtasks pointing and confirmation. Head movements are used to allow pointing without using the hands or fingers. Since only a coarse selection of a dedicated range is necessary, changing the head direction serves as a pointing tool: a cursor's position is directly mapped to the direction the face is pointing. This corresponds to the natural way of turning the head towards an object of interest, as used, for example, with head-mounted displays [14]. Confirmation of the pointed-at element is done via voice commands or head gestures.
+
+The voice command to confirm a selection is "select", which has to be uttered while the head-controlled cursor points at the desired object. To release the current selection, the user has to say "exit". For a completely hands-free workflow, the commands "start" for system activation and "stop" for deactivation act as a clutching mechanism to avoid unintentional input [4].
+
+Confirmation of a selection with head gestures is done with a nod. Similar to voice commands, the cursor has to stay over the object to be selected while performing the gesture. Shaking the head deselects the current object. Both head gestures are further used for activation and deactivation. An overview of all selection functions and corresponding input methods can be found in Table 1.
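The direct mapping of head direction to a cursor position described above can be sketched as follows. This is an illustrative Python sketch rather than the Unity C# prototype; the mapped angular range (`FOV_DEG`) is an assumption, as the paper does not report it.

```python
FOV_DEG = 30.0   # assumed head-rotation range mapped onto the full screen

def cursor_position(yaw_deg: float, pitch_deg: float,
                    width: int, height: int) -> tuple:
    """Map head yaw/pitch directly onto screen coordinates,
    centered when the face points straight at the display."""
    x = (0.5 + yaw_deg / FOV_DEG) * width
    y = (0.5 - pitch_deg / FOV_DEG) * height   # looking up moves the cursor up
    # clamp the cursor to the screen
    return (min(max(x, 0), width), min(max(y, 0), height))
```

Only a coarse selection is needed, so the absolute mapping suffices; a full-range turn of the head reaches the screen edge.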
+
+Table 1: Hands-free selection techniques
+
+| Selection | Voice commands | Head gestures |
+| --- | --- | --- |
+| System activation | "Start" | Nod |
+| Selection (Point + Confirm) | Head pointing + "Select" | Head pointing + Nod |
+| Deselection | "Exit" | Shake |
+| System deactivation | "Stop" | Shake |
+
+§ 5.2 CONTINUOUS MANIPULATION
+
+For image manipulation, we focused on panning and zooming, as they correspond to 1D and 2D input and can therefore be transferred to a wide range of HCI tasks. The choice of manipulation techniques is rooted in our observations in the medical context. Hands-free input can, for example, be achieved by leaning and with facial expressions. Other possible inputs, such as shoulder movements, may be less suitable for these purposes, as they can have a greater impact on the primary task.
+
+Based on our observations during a radiological intervention, leaning closer to a display indicated the need to see more details. Therefore, leaning to the front is used for zooming in, while leaning back zooms out. The zoom level is mapped to the leaning angle of the user. The position of the upper body is set as 100% magnification (no zoom) when the desired view is selected using one of the selection methods described in Table 1.
+
+Lowering and lifting the eyebrows is used for zooming in a similar fashion: when trying to recognize something in the distance, one lowers the eyebrows to focus on the target and see it more clearly. Lowering the eyebrows therefore starts zooming in, while raising them zooms out. Because the position of the eyebrows is hard to control and to measure precisely, this method applied a fixed rate of change whenever the eyebrows were lifted above an upper threshold or lowered below a lower threshold. With both methods, panning can be performed simultaneously by head pointing.
+
+Both manipulation techniques are based on passive actions which, according to Oviatt [21], are defined as unintentional behaviour. We have taken this as an opportunity to transform these passive actions into active interaction techniques so that the interaction corresponds to the natural behaviour.
+
+Table 2: Hands-free manipulation techniques
+
+| Manipulation | Full-body movement | Facial expressions |
+| --- | --- | --- |
+| Zoom in | Lean to front | Lower eyebrows |
+| Zoom out | Lean to back | Raise eyebrows |
+| Pan | Head pointing | Head pointing |
+
+§ 6 EVALUATION
+
+§ 6.1 STUDY DESIGN
+
+We manipulated three independent variables. First, we varied whether participants performed the concurrent primary task (PT) while interacting with the images to investigate the influence of the task on the interaction methods and vice versa. The presence of a primary task was varied between subjects. The second variable was the frame selection method (speech-based or gesture-based). Finally, the image manipulation method varied between leaning manipulation and eyebrow manipulation. The latter two variables were within-subject variables. We investigated three dependent variables. Task completion time was recorded for each trial. We also measured the proportion of time spent outside the primary task target area (hereafter called the error time). We assessed this as an indicator of primary task performance. Finally, we analyzed the overall unweighted NASA-TLX rating (RAW TLX) as an indicator of the subjective perception of the interaction concepts.
+
+We divided the participants into two test groups. Test group A went through all conditions of the experiment with the primary task, while test group B went through the same conditions without the primary task. One input modality for selection (head gestures or voice) was combined with one input modality for manipulation (full-body movement or facial expression). Each subject went through the four possible modality combinations to perform the secondary task. The assignment and order of the combinations were randomized so that, where possible, no identical modality directly followed another.
+
+§ 6.2 PARTICIPANTS
+
+Our goal was to gain general insights into the developed hands-free interaction techniques and their interaction concerning cognitive and physical stress in a scenario similar to surgery. For this reason, we selected a heterogeneous group of participants with medical, technical and creative backgrounds.
+
+We recruited 16 participants (10 female, 6 male) in the environment of our university, aged between 22 and 38 (M = 26.9; SD = 4.3). The user study lasted between one and one and a half hours, and participants received between 15 and 30 Euro (due to recruitment problems, the remuneration had to be increased for half of the total participants, mainly medical students).
+
+Seven subjects were students of human medicine; the others had a technical or creative background. With regard to their background, the participants were assigned equally to the test groups with and without primary task. Seven out of eight subjects in group A indicated the right hand as dominant. This information was necessary because the system and the instrument had to be aligned accordingly. Participants were also asked whether they had a speech disorder, in order to check for possible complications with speech input; all of them denied this. Further, the participants were asked about visual disorders; only persons whose defective vision was not too pronounced were invited, because a spectacle frame can partially impede the interaction with the eyebrows. Through self-testing we had established in advance that the system could be operated without restriction in the case of low short-sightedness. Six people reported a visual impairment and one person a color vision impairment. Finally, the participants rated their prior knowledge in the areas of human-computer interaction, gesture control, tracking, and voice control on a Likert scale from none (1) to very experienced (5), in order to examine what influence these skills and this knowledge might have on the use of the system.
+
+§ 6.3 PARTICIPANT TASK
+
+At the start, the participant sees four equally sized segments arranged in a grid, each containing a pattern of geometric forms and a dashed contour of the respective form in its center. A dark, semi-transparent surface lies above the segments and signals standby mode. In the lower area of the interface, a status light indicates whether a user is being tracked, and a text field shows which input has been made. Once a user is tracked, a cursor is displayed that can be moved by changing the head direction. After the initial start command (head gesture or voice), the interface clears, the input-mode display shows the current status "Start", and the slider is activated and can be moved. In addition, individual shapes of the background pattern are darkened by a random selection of the system. The user can now point the cursor at a segment and select it with a selection method (head gesture or voice). On this command, the selected area is highlighted with a blue frame and the word "Select" appears in the input-mode display. The participant is now directly in zoom mode and can manipulate the image. At the same time, the cursor disappears, because it has the same shape as the geometric contour. Using the respective manipulation technique (leaning or eyebrows), the user enlarges the texture in the segment, aiming to bring the dark geometric form to the size and position of the viewfinder (dashed contour). At the same time, the user operates the slider, which visualizes the correctness of the input by a color change between green and red. When position and size match, the contour disappears and the dark element turns blue to indicate that the subtask is completed. The blue frame remains until the deselection command "Exit" is entered, which is also shown in the input-mode display; after this input, the cursor reappears. As soon as all elements in the four segments have been enlarged and the user is in selection mode, the system can be terminated with a final command (head gesture or voice). The input-mode display shows the word "Stop", the upper area is faded over again, and the slider becomes inactive. The essential system states are summarized in Figure 5.
+
+§ 6.4 SETUP
+
+§ 6.4.1 SOFTWARE
+
+The software prototype was implemented in Unity 3D 2018.2 using C#. The voice and gesture control was realized with the included SDK (Software Development Kit) of Microsoft Kinect v2.
+
+§ 6.4.2 BODY-MOVEMENTS
+
+The Kinect provides camera coordinates to locate 3D points in space. In addition to color and depth information, it also provides integrated skeleton tracking for capturing the human body, which generally determines whether a user is present and in what position he or she is relative to the room. When an input for selection is made (head gesture or voice), a zero point is set. If the user leans forward from this point by up to 40 cm (about 20°), the image is enlarged (maximum magnification level 300%). This leaning angle is similar to the one we observed and is mapped to a rate of change, i.e., straight pose = no zoom; leaning forward a little = zooming in slowly; leaning forward a lot = zooming in fast. Hann et al. present a similar zoom technique in a virtual reality concept for GI endoscopy [6].
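The rate-of-change mapping described above can be sketched as follows. This is an illustrative Python sketch, not the Unity C# implementation; the lean range and zoom limits mirror the values in the text, while the zoom speed at full lean (`MAX_RATE`) is an assumption.

```python
MAX_LEAN_CM = 40.0   # maximum forward lean from the zero point (per the text)
MIN_ZOOM = 100.0     # 100% magnification == no zoom
MAX_ZOOM = 300.0     # maximum magnification level (per the text)
MAX_RATE = 50.0      # assumed zoom speed in percent per second at full lean

def zoom_rate(lean_cm: float) -> float:
    """Straight pose = no zoom; a slight lean zooms slowly,
    a strong lean zooms fast; leaning back gives a negative rate."""
    frac = max(-1.0, min(1.0, lean_cm / MAX_LEAN_CM))
    return frac * MAX_RATE

def update_zoom(zoom: float, lean_cm: float, dt: float) -> float:
    """Integrate the zoom rate over one frame and clamp to [100%, 300%]."""
    zoom += zoom_rate(lean_cm) * dt
    return max(MIN_ZOOM, min(MAX_ZOOM, zoom))
```

Clamping at both ends keeps the zoom level inside the reported range regardless of how long the user holds a lean.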
+
+§ 6.4.3 FACIAL EXPRESSIONS
+
+For the interpretation of facial expressions, the face-tracking library of the Kinect SDK gives access to facial features, which we used to implement the eyebrow interaction. Three points on the face were recorded for this purpose: the centers of the left and right eyebrows and the tip of the nose, from which the change in distance between the eyebrows and the nose tip was determined.
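From this distance signal, the fixed-rate behaviour described in Sect. 5.2 can be sketched. The neutral distance, dead band, and step size below are hypothetical values for illustration; the paper only states that an upper and a lower threshold were used.

```python
# Eyebrow-to-nose-tip distance drives a fixed-rate zoom.
NEUTRAL_MM = 30.0   # eyebrow-nose distance at rest (assumed)
BAND_MM = 3.0       # dead band around the neutral distance (assumed)
STEP = 5.0          # fixed zoom change in percent per update (assumed)

def eyebrow_zoom_step(distance_mm: float) -> float:
    """Lowered eyebrows (small distance) zoom in at a fixed rate,
    raised eyebrows (large distance) zoom out; in between, no change."""
    if distance_mm < NEUTRAL_MM - BAND_MM:
        return +STEP    # zoom in
    if distance_mm > NEUTRAL_MM + BAND_MM:
        return -STEP    # zoom out
    return 0.0
```

The dead band absorbs the low resolution of the eyebrow signal, which is why a proportional mapping (as used for leaning) was not applied here.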
+
+§ 6.4.4 HEAD MOVEMENT
+
+By rotating the head, the user can move a cursor and also pan in images. Direction vectors were created from the orientation coordinates to implement the head gestures. Vector changes within an interval were determined and different states were checked for a change in head position in order to recognize a head gesture (nodding or shaking). Thus the physical strain was kept as low as possible, because even minimal movements of the head could be detected. The received data were smoothed by exponential smoothing to obtain smoother movements.
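The exponential smoothing mentioned above can be sketched as an exponential moving average over the head-direction vector; the smoothing factor `alpha` is an assumption, as the paper does not report it.

```python
def smooth(prev, new, alpha=0.3):
    """Exponential smoothing of a head-direction vector:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    return tuple(alpha * n + (1.0 - alpha) * p for p, n in zip(prev, new))

# A constant head direction: the smoothed state converges towards it,
# while a single noisy frame would only shift the state by a factor of alpha.
state = (0.0, 0.0, 1.0)
for _ in range(20):
    state = smooth(state, (0.5, 0.0, 1.0))
```

Smoothing of this kind damps tracker jitter without adding the latency of a long averaging window, which matters for detecting small nods and shakes.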
+
+§ 6.4.5 VOICE RECOGNITION
+
+A list of the defined signal words was created ("Start", "Select", "Exit", "Stop") and we used the Kinect SDK's language package for keyword detection. The confidence level for keyword recognition was set to low throughout, which meant that even speech input by non-native speakers was recognized consistently well.
+
+§ 6.4.6 DATA LOG
+
+A data logger was integrated to record command inputs and outputs. The values are stored in a log file so that they can later be converted into tables for evaluation. When the user starts the secondary task by initially executing the respective interaction, a timer runs until the user completes the task with a command (task completion time). Furthermore, erroneous input (time-out) was recorded, i.e., how long the user's input remained outside the tolerance range.
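The two logged measures (task completion time and time-out) can be sketched as a small logger; the method names and seconds-based timestamps are our assumptions:

```python
class TaskLogger:
    """Records when the secondary task starts and ends (task
    completion time) and accumulates the time the primary-task input
    spent outside the tolerance range (time-out)."""

    def __init__(self):
        self.completion_time = None
        self.timeout = 0.0
        self._start = None
        self._left_at = None

    def task_started(self, t):
        self._start = t

    def task_completed(self, t):
        self.completion_time = t - self._start

    def left_tolerance(self, t):
        self._left_at = t

    def entered_tolerance(self, t):
        if self._left_at is not None:
            self.timeout += t - self._left_at
            self._left_at = None
```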
+
+§ 6.4.7 HARDWARE PROTOTYPE
+
+The physical prototype was implemented with the Arduino physical computing platform (model Uno R3), its associated development environment, and a distance sensor. The sensor sits in a tube and measures the distance to a rod guided by the user in a range of 0–100 mm with a resolution of 1 mm (deviation 3%). The user input was transferred 1:1 via USB to Unity and displayed as a slider in the interface. The user moves a vertical line horizontally with the aim of keeping it within a defined tolerance range. As long as the line is within the range, the slider bar remains green; if it leaves the range, the bar turns red. The components were assembled into a stable 3D-printed housing.
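The 1:1 mapping from sensor distance to slider position with the green/red feedback could be sketched as follows; the tolerance bounds are hypothetical, as their exact values are not stated:

```python
def slider_state(distance_mm, tol_low_mm=40, tol_high_mm=60):
    """Map the measured rod distance (0-100 mm, 1 mm resolution) 1:1
    to a slider position in [0, 1] and report the bar colour: green
    while inside the tolerance range, red otherwise."""
    distance_mm = max(0, min(100, distance_mm))
    position = distance_mm / 100.0
    colour = "green" if tol_low_mm <= distance_mm <= tol_high_mm else "red"
    return position, colour
```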
+
+The experiment was carried out in a laboratory with one subject and one investigator. Similar dimensions were chosen to simulate the overall structure of an operating theatre. The subject stood exactly 180 cm away from a 75" monitor, measured from the edge of the table to the display. Above the display, a Microsoft Kinect was installed, adjustable in angle. In between was a table on which the hardware prototype for the primary task was located. The monitor and table were height-adjustable and adapted individually to each test subject. The setup of the user test is illustrated in Figure 6.
+
+
+Figure 5: The upper part of the interface is divided into four segments with different geometric shapes. Further down, information on the input mode, a status light (user tracked) and the interactive slider are displayed as visual feedback on the primary task. [A] shows the manipulation mode in which a segment was selected (blue frame) and an image enlargement takes place. In [B] the user is in selection mode and has the possibility to move a cursor to the desired area and select it.
+
+
+Figure 6: Subject (a) stands in front of a monitor (b) on which a Microsoft Kinect (c) is installed. In between is the control element of the primary task (d) and behind it a computer (e) that processes the data.
+
+§ 6.5 STUDY PROCEDURE
+
+After participants consented to the study and demographic data were collected, the prototype was presented and the individual tasks were introduced. It was explained that the situation of a radiological intervention was being simulated, and the participant was briefly informed about the activities of the radiologist. Then the interface was explained, as well as the various modalities for carrying out the tasks. It was pointed out that the secondary task should be performed as quickly as possible. If the subject was in test group A (with the primary task), the primary task on the physical prototype was explained as an adapted interaction of the catheter navigation task, which should be performed as accurately as possible. Test group B did not receive this explanation.
+
+The system was individually adapted to each person before the actual execution began. The monitor height, the height of the physical prototype, handedness (right or left), and the tracking system were adjusted to the participant's body size. The face also had to be calibrated to allow interaction with the eyebrows.
+
+In order to get a feel for the interaction with the physical prototype and to be able to determine deviations, participants in test group A first performed a baseline measurement of how long they could remain within the tolerance range. The upper part of the interface was hidden, so only the slider was visible. The measurement of the deviation (baseline) took place over 90 seconds. In test group B, the slider was hidden entirely to avoid additional distractions.
+
+Before the actual runs, the control of the cursor was first practiced using head rotation. Information cards for the assigned combinations were placed in the participant's field of view.
+
+Subsequently, training runs were carried out for each combination/interaction technique. When participants stated that they were ready, the first run started and time was measured. After three identical trials, the data (task completion time and time-out) were recorded, and the subject filled in a NASA TLX questionnaire. The remaining three combinations and runs were performed according to the same procedure. Finally, a semi-structured interview was conducted, in which the general feeling during the experiment was queried and participants had the opportunity to give feedback.
+
+§ 6.6 DATA ANALYSIS
+
+All dependent variables were averaged over the three trials with identical conditions that each participant encountered. Three-way analyses of variance (ANOVAs) were conducted for task completion time and the overall TLX rating, and a two-way ANOVA was conducted for the error time. The investigator's observations and participants' comments were reviewed qualitatively and clustered by two investigators.
+
+§ 6.7 RESULTS
+
+The effects found in our inferential statistical analysis are reported in Table 3. We found main effects on task completion time for the presence of a primary task (Fig. 8) and for the manipulation method (Fig. 7). The manipulation method also showed a significant main effect on the participants' overall TLX rating (Fig. 10). Within our sample, we observed a trend suggesting that the presence of a primary task may affect the TLX rating (Fig. 9); although this main effect was not significant, the potential effect size is considerable (partial $\eta^2 = 0.049$). Neither the selection method nor the manipulation method showed significant effects on the error time (i.e., on primary task performance). However, the selection method had a potentially considerable effect size (partial $\eta^2 = 0.084$) that should be investigated further in future studies (Figure 11). We observed no significant interaction effects.
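For readers reproducing the effect sizes: partial $\eta^2$ follows from an F-value and its degrees of freedom via $\eta_p^2 = F \cdot df_1 / (F \cdot df_1 + df_2)$. The error degrees of freedom are not reported here, but an assumed value of 56 is consistent with all three effect sizes in Table 3:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an ANOVA F-statistic:
    eta_p^2 = F * df1 / (F * df1 + df2)."""
    return f_value * df_effect / (f_value * df_effect + df_error)

# df_error = 56 is an assumption that reproduces the reported values.
for f, reported in [(8.5, 0.132), (7.97, 0.125), (5.08, 0.083)]:
    assert round(partial_eta_squared(f, 1, 56), 3) == reported
```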
+
+Table 3: Overview of the significant effects found in the evaluation study.
+
+| Dependent variable | Effect type | Factor | Degrees of freedom | F-value | p-value | Partial $\eta^2$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Task completion time | Main effect | Primary task | 1 | 8.5 | 0.005* | 0.132 |
+| Task completion time | Main effect | Manipulation method | 1 | 7.97 | 0.007* | 0.125 |
+| TLX rating | Main effect | Manipulation method | 1 | 5.08 | 0.028* | 0.083 |
+
+
+Figure 7: Influence of the manipulation method on task completion time. Error bars show standard error.
+
+
+Figure 8: Influence of the presence of a primary task on task completion time. Error bars show standard error.
+
+§ 7 DISCUSSION
+
+Comparing task completion times between the condition with a primary task and the condition without, we found that performing the secondary interaction takes longer when a primary task must be fulfilled simultaneously. This is the expected result, as additional time is required to switch focus between the two tasks.
+
+As selection methods, head gestures and voice commands were compared for system activation and selection; pointing was done using head movements in both cases. Within our sample, we observed an effect of the selection method on the time spent outside the target range of the primary task, suggesting that head gestures may disturb the primary task less than voice commands. This effect was not significant, but it showed a considerable effect size, so the non-significance may be due to our limited sample size, and this possible effect should be investigated further. Comments during the study also support an advantage of head gestures over voice commands, as nodding and shaking the head are easy to distinguish and remember. On the other hand, some participants described speech as an intuitive input channel, while others found shaking the head uncomfortable and imprecise.
+
+
+Figure 9: Influence of the presence of a primary task on subjective workload. Error bars show standard error.
+
+
+Figure 10: Influence of the manipulation method on subjective workload. Error bars show standard error.
+
+A closer look at the continuous manipulation methods for hands-free zooming reveals that full-body movement (leaning) performed significantly better than facial expressions (moving the eyebrows) in terms of task completion time. The subjective workload was also significantly higher for facial expressions. A possible explanation can be found in the comments collected during the study: moving the eyebrows was deemed physically exhausting, and three participants feared that mood changes would trigger unintended interactions. Overall, leaning seems to be a suitable, natural input method that allows faster zooming when the hands are occupied. In summary, gestures should connect to natural movements, as unusual movements such as moving the eyebrows are less precise and more exhausting.
+
+
+Figure 11: Influence of the selection method on the primary task error rate. Error bars show standard error.
+
+Even though a medical scenario served as the exemplary use case in this work, the tasks were abstracted so that no expert knowledge was required. The findings presented here can therefore be transferred to other domains with similar requirements, such as flight coordination, piloting, or train driving.
+
+§ 7.1 LIMITATIONS
+
+The presented study took place in a lab setting, which ruled out external factors and allowed the participants to focus on the given tasks. On the downside, no account was taken of external sources of distraction that might appear in real-world scenarios. The abstracted tasks used in the presented study also differed from the medical task: the focus of the primary task during a catheter intervention lies at the center of the screen instead of the lower edge, so switching between both tasks might be more demanding in our setup than in the real-world scenario. The primary task baseline was measured at the start of the study, and subsequent tasks might have been performed faster due to a learning effect; in the future, this should be avoided by applying longer training periods beforehand and assessing the baseline measures at different points during the study. The position of the Kinect sensor caused minor technical limitations: eyebrow input was less accurate when the user was leaning towards the display, as the eyes and eyebrows were hard to detect at a steep angle. Further, raising and lowering the eyebrows was a discrete interaction compared to continuous leaning; eyebrow input might therefore be more suitable as a safety feature, clutching mechanism, or manipulation method. Speech recognition was accurate in this setup, but in the OR it could be disturbed by conversations among users and sounds from devices. Especially in the presented scenario, there is a danger that head and body movements could impair the extremely sensitive stability of the catheter. Foot input might be a better choice, but was disregarded as it might interfere with foot pedals, an established modality for controlling medical devices. The interplay between different foot-based input methods needs to be examined to leverage this input channel without affecting the main task.
+
+§ 8 CONCLUSION AND FUTURE WORK
+
+In this work, we compared hands-free methods to support concurrent multitasking by using multimodal interaction channels. We found leaning to be a fast method for zooming tasks and gained insights into the suitability of head movements and voice commands as secondary input methods. In the future, the proposed methods need to be compared to domain-specific state-of-the-art input methods such as joysticks, touch screens, or task delegation, similar to Hettig et al. or Wipfli et al. [8, 26]. Further, performance under higher cognitive load due to environmental factors, multi-user scenarios, or additional types of tasks has to be taken into account. In the long run, the proposed approach might support natural interaction in demanding scenarios, leading to fewer errors and faster task completion.
+
+§ ACKNOWLEDGMENTS
+
+This work is funded by the Federal Ministry of Education and Research (BMBF) within the STIMULATE research campus (grant number 13GW0095A).
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac3bb344acdb7c369f6f497000172ab72e19cf9e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,353 @@
+# Presenting Information Closer to Mobile Crane Operators' Line of Sight: Designing and Evaluating Visualization Concepts Based on Transparent Displays
+
+Taufik Akbar Sitompul*
+
+Mälardalen University
+
+CrossControl
+
+Rikard Lindell†
+
+Mälardalen University
+
+Markus Wallmyr‡
+
+Mälardalen University
+
+CrossControl
+
+Antti Siren§
+
+Forum for Intelligent Machines
+
+
+
+Figure 1: The left image shows an example of a supportive system inside a mobile crane [6]. The right image illustrates that mobile crane operators often look at areas that are far away from where the supportive system is placed [2].
+
+## Abstract
+
+We have investigated the visualization of safety information for mobile crane operations using transparent displays, where the information can be presented closer to operators' line of sight with minimal obstruction of their view. The intention of the design is to help operators acquire supportive information provided by the machine without requiring them to divert their attention far from operational areas. We started the design process by reviewing mobile crane safety guidelines to determine which information operators need to know in order to perform safe operations. Using the findings from the safety guidelines review, we then conducted a design workshop to generate design ideas and visualization concepts, as well as to delineate their appearance and behaviour based on the capability of transparent displays. We transformed the results of the workshop into a low-fidelity paper prototype and interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, as information will be presented closer to operators' line of sight, we need to be selective about what kind of information and how much of it should be presented. However, all the operators appreciated having information presented closer to their line of sight, seeing it as an approach with the potential to improve safety in their operations.
+
+Index Terms: Human-centered computing-Visualization-Visualization application domains-Information visualization; Visualization design and evaluation methods
+
+## 1 INTRODUCTION
+
+The mobile crane is a type of heavy machinery commonly found on construction sites due to its vital role in lifting and distributing materials. Unlike tower cranes, which require some preparation before they can be used, mobile cranes can be mobilized and utilized more quickly. However, mobile cranes are complex machines, and operating them requires extensive training and full concentration [10, 11]. When lifting a load, mobile cranes require a wide working space in three dimensions. Operators must be cautious to prevent both the boom and the load from hitting other objects, such as structures, machines, or people. At the same time, operators must prevent the machine from tipping over, since the machine's centre of balance constantly changes depending on many factors, such as the height and weight of the lifted load, the ground surface, and the wind [19].
+
+The complexity of mobile crane operation keeps operators' cognitive workload continuously high [11]. Repetitive tasks and long working hours also make operators vulnerable to fatigue and distraction, which could lower their ability to mitigate upcoming hazards. 43% of crane-related accidents between 2004 and 2010 were caused by operators [12]. In addition, mobile cranes are considered the most dangerous machines in the construction sector, as they contributed to about 70% of all crane-related accidents [18]. Crane-related accidents can cause tremendous losses of property and life among both workers and non-workers [12, 19]. The most common crane-related accidents are electrocution due to contact with power lines, being struck by the lifted load, being struck by crane parts, or a collapsing crane [18].
+
+To assist operators, modern mobile cranes are equipped with head-down supportive systems (see the left image in Figure 1), for example, Load Moment Indicator (LMI) systems that indicate whether the maximum load capacity is being approached or exceeded [19]. However, since head-down displays could obstruct operators' view, the information is displayed away from operators' line of sight, as shown in the right image in Figure 1. Furthermore, many LMI systems present only numerical information that does not support operators' contextual awareness and consequently requires extra cognitive effort to interpret [9]. In this case, the benefit of having supportive information is nullified by both information placement and information visualization.
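The core check behind an LMI can be sketched as follows; the parameter names and the 90% warning threshold are illustrative, not taken from any specific LMI product:

```python
def load_moment_utilization(load_tonnes, radius_m, rated_moment_tm):
    """Lifting moment (load weight x working radius) relative to the
    crane's rated moment; values >= 1.0 mean capacity is exceeded."""
    return (load_tonnes * radius_m) / rated_moment_tm

def lmi_status(utilization, warn_at=0.9):
    """Typical LMI behaviour: warn as the limit is approached and
    alarm once it is exceeded."""
    if utilization >= 1.0:
        return "overload"
    if utilization >= warn_at:
        return "warning"
    return "ok"
```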
+
+---
+
+*e-mail: taufik.akbar.sitompul@mdh.se
+
+†e-mail: rikard.lindell@mdh.se
+
+‡e-mail: markus.wallmyr@crosscontrol.com
+
+§e-mail: antti.siren@fima.fi
+
+Graphics Interface Conference 2020
+
+28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+---
+
+
+
+Figure 2: a. The transparent display that could be installed on the crane’s windshield [14]. b. Using multiple displays for operating remote tower cranes [4]. c. Operating a mobile crane through a tablet-like display [11].
+
+We hypothesize that information presented near the line of sight would benefit mobile crane operators. For example, information displayed on the windshield would allow operators to acquire supportive information without diverting their attention from operational areas. However, this approach has its own challenges: with information presented near operators' line of sight, there is a risk of distracting operators from their work. Therefore, the information needs to be presented cautiously, so that the right information is presented at the right time, in the right place, and at the right intensity [8, 10]. This approach enables operators to perform their work while maintaining awareness of both the machine and its surroundings. In this paper, we address the following research questions:
+
+1. What kind of information do mobile crane operators need to know in order to perform safe operations?
+
+2. How should the supportive information look and behave with respect to the performed operation?
+
+3. How do mobile crane operators perceive the proposed visualization approach?
+
+The rest of this paper is divided into seven sections. Section 2 reviews prior work that investigated new ways of presenting information in cranes. Section 3 explains the three activities that were carried out to address the research questions above. Section 4 presents the visualization concepts that we designed, while Section 5 describes the feedback we obtained from the operators. Section 6 discusses further suggestions from the operators and reflects on the evaluation method used in this study. Section 7 acknowledges the limitations of this study and outlines future work. Section 8 concludes the paper.
+
+## 2 RELATED WORK
+
+Considering the current setting, in which mobile crane operators receive supportive information via head-down displays, one might suggest alternative approaches using auditory or tactile modalities. However, mobile cranes are noisy and generate internal vibration due to the working engine [3] and the swinging lifted load [5], and auditory information is already used in mobile cranes to some extent [10]. Adding more information via haptic and auditory channels could therefore be counterproductive, as these channels convey information less clearly than visual information [8].
+
+Prior research indicates that visual information is still the primary modality for presenting supportive information in heavy machinery, including both mobile cranes and fixed-position cranes, such as tower and off-shore cranes (see Sitompul and Wallmyr [24] for a complete review). Proposed approaches for improving safety in crane operations also vary. For off-shore cranes, Kvalberg [14] proposed transparent displays that could be installed on the crane's windshield to present the relative load capacity (see Figure 2a), which indicates how much weight a crane can lift depending on how far and how high the load will be lifted [21]. For remotely controlled tower cranes, Chi et al. [7] proposed multiple displays, each presenting certain information, such as machine status, lifting path, potential collisions, and multiple views of the working environment (see Figure 2b). Fang et al. [11] proposed a tablet-like display in mobile cranes to show multiple views of the working environment, including supportive information indicating the recommended lifting path, potential collisions, and excessive load (see both images in Figure 2c).
+
+The study of Kvalberg [14] was limited to a technical evaluation of the proposed transparent display (see Figure 2a). Chi et al. [7] involved five crane operators and 30 graduate students in their experiment, which was carried out in a controlled environment (see Figure 2b). They compared the participants' performance with and without the multi-view system. The results indicated that the participants took a shorter time to complete the given task, and the operator participants also perceived the multi-view system as something that could enhance safety in lifting operations. Fang et al. [11] involved five mobile crane operators in their experiment, which was carried out in a real mobile crane. To evaluate the proposed system, they compared the operators' performance with and without it. The results showed that the operators had shorter response times and a higher rate of correct responses when using the proposed system. In addition, the results from the Situation Present Assessment Method (SPAM) indicated that the operators were able to maintain a higher level of situation awareness with the proposed system. Despite these positive results, the operators commented that the display was too small and could also obstruct their view.
+
+
+
+Figure 3: Some sketches made in our design workshop. The left sketches show how we explored the visualization of relative load capacity. The middle sketches illustrate various ways of visualizing proximity warnings in different directions, distances, and heights. In the top right, we investigated the visualization of changes in the machine's balance. In the bottom right, we sketched how wind speed and direction should be visualized.
+
+## 3 Methods
+
+Aligned with prior research, we hypothesize that information presented near the line of sight would benefit mobile crane operators, since they could acquire the supportive information without diverting their attention from operational areas. To address the research questions written in Section 1, three different activities were conducted, as described in the following subsections.
+
+### 3.1 Utilizing Safety Guidelines as a Source of Information
+
+To address the first research question and determine which information is important for operators to perform safe operations, we reviewed four different mobile crane operation safety guidelines [15, 17, 20, 21]. This could also have been done by asking operators or domain experts. However, that alternative may be less efficient, because operators may have different operational styles or preferences, and thus different requirements. Furthermore, we would have missed the international aspect covered by guidelines from different parts of the world. Therefore, we used the safety guidelines as the starting point, as they are applicable to all operators regardless of operational style or preference. From the review, we found that the guidelines aim to prevent the following events:
+
+1. Collisions that may occur between the mobile crane, its parts, or the lifted load and nearby people or structures in the working area. To prevent this from happening, operators should know what is around the machine and what the machine is about to do.
+
+2. Loss of balance that could occur due to many factors, such as excessive load capacity, strong wind, or unstable ground. To avoid this event, operators should know the current state of the machine and never operate the machine beyond permitted conditions.
+
+### 3.2 Generating Ideas Through a Design Workshop
+
+To address the second research question, we used the findings from the safety guidelines in a design workshop to generate ideas for the appearance and behaviour of the visualization. The workshop involved three human-computer interaction (HCI) researchers, who are also authors of this paper. Two researchers have research expertise in human-machine interfaces for heavy machinery, while one researcher is a generalist.
+
+Since the type of display influences the form of information and how it can be presented, we first discussed which display is appropriate for mobile cranes. As mobile cranes have a large front windshield and operators look through it most of the time [5], the wide windshield could be used as a space for presenting supportive information, provided that the information does not obstruct operators' view. There are various commercially available displays that can be used for this purpose, such as head-mounted displays (HMDs), projection-based head-up displays (HUDs), and transparent displays. However, each of these displays has benefits and drawbacks in this context [24]. Head-mounted displays, for example, Microsoft HoloLens, enable operators to see the presented information exactly within their sight. Although newer HMDs come with better ergonomics, they are still not comfortable to wear for long hours [22]. In addition, operators are already required to wear protective gear when working, and thus may be reluctant to wear additional equipment. Projection-based HUDs, like the ones available for cars, are another alternative, and operators would not need to wear any extra equipment. However, the quality of information presented by projection-based HUDs may degrade in bright environments [25]. The third option is transparent displays, like the one Kvalberg [14] used. This option has two disadvantages [1]: firstly, such displays are limited in terms of colors, since only yellow and green are currently available; secondly, they support only static visualization, as the display can only present information that was specified before the display was manufactured. On the positive side, transparent displays are durable against extreme temperature, moisture, and vibration. See Figure 4 for a commercial example of transparent displays. After weighing the benefits and drawbacks of each display, we reasoned that transparent displays are the most suitable for mobile cranes.
+
+Table 1: The profiles of mobile crane operators that we have interviewed
+
+| No | Gender | Age | Experience | Mobile crane sizes | Knowledge about head-up displays |
+| --- | --- | --- | --- | --- | --- |
+| 1 | Male | 38 years old | 12 years | 30 tons - 800 tons | Knows about it, but never tried it |
+| 2 | Male | 39 years old | 20 years | 30 tons - 150 tons | Knows about it, but never tried it |
+| 3 | Male | 61 years old | 38 years | 8 tons - 130 tons | Has no knowledge about it |
+| 4 | Male | 53 years old | 21 years | 2.5 tons - 220 tons | Knows about it, but never tried it |
+| 5 | Male | 37 years old | 7 years | 2.5 tons - 95 tons | Knows about it, but never tried it |
+| 6 | Male | 45 years old | 20 years | 8 tons - 500 tons | Has tried it in a car |
+
+
+
+Figure 4: An example of stand-alone transparent display [1]. Each fixed element can be individually lit, whereas the widget's structure is statically rendered in the material.
+
+We generated ideas for visualizations that could help operators prevent the hazardous situations mentioned in Subsection 3.1. We then selected some of the generated concepts based on their suitability for transparent displays (see Figure 3). Eventually, we produced eight visualization concepts that suit the appearance, characteristics, and capability of transparent displays:
+
+1. Two concepts for proximity warning that indicate position, distance, and height of obstacles.
+
+2. Two concepts that indicate the balance of the machine.
+
+3. One concept for showing wind speed and direction.
+
+4. One concept for illustrating how much the lifted load swings.
+
+5. One concept for presenting the relative load capacity, including the angle of the boom, the height of the hook above the ground, and the distance from the lifted load to the center of the machine.
+
+6. A generic warning sign that tells operators to stop their current action.
+
+The description for each visualization concept is presented in Section 4.
+
+### 3.3 Obtaining Feedback from Mobile Crane Operators
+
+To answer the third research question, we interviewed six mobile crane operators to validate the ease of use and possible benefits of the proposed visualizations for performing safe operations. The interviews were carried out by two people, who are also authors of this paper. After explaining the motivation and procedure of the interviews and obtaining informed consent, we collected background information from the operators, such as age, experience as an operator, and the different mobile cranes they have used. We also asked whether the operators had prior knowledge of or experience with head-up displays. See Table 1 for information about the operators we interviewed. Of the six operators, one had no knowledge of head-up displays; the remaining operators knew about them, either through seeing commercials or driving a car equipped with one.
+
+
+
+Figure 5: The tools that were used to test the operators' understanding on the proposed visualization concepts. The human toys were used to represent nearby obstacles, the coin was used to indicate where the machine's center of gravity is, and the pens were used to indicate both wind speed and direction.
+
+After that, we presented the visualization concepts printed on paper, illustrating how the visualizations would look in certain situations. We first explained the meaning of each component within the visualization concepts. We also used some tools (see Figure 5) to demonstrate the meaning of the concepts. Once the operators confirmed that they understood the logic behind each visualization concept, we continued with five tests that evaluated the operators' understanding of the concepts for proximity warning, balance, wind speed, swinging load, and relative load capacity. We then presented different examples of the visualization on paper to the operators. There were ten examples for each concept of proximity warning, eight examples for each concept showing the machine's balance, eight examples for wind speed, four examples for load swinging, and eight examples for relative load capacity. Some of the examples are presented in Section 4. The test for proximity warning had increasing complexity, for example, starting from one obstacle and progressing to multiple obstacles with different heights. The operators were asked to use the provided tools, such as toys, a coin, and pens (see Figure 5), and to move them according to the shown visualization (see Figure 6). This method was useful for both us and the operators, since we could understand the operators' way of thinking through their actions, and the operators could show what they were thinking without having to explain everything verbally. Surprisingly, four operators had their own mobile crane replicas, and we encouraged them to use those instead. This process was repeated until all visualization concepts were described and evaluated. The generic warning sign was not evaluated, since its meaning was already obvious to the operators.
+
+
+
+Figure 6: Pictures that depict how we tested the operators' understanding, where the operators had to move the provided tools according to the visualization shown on paper. a. The operators had to move the human toy(s) to the position of the obstacle. b. The operators had to move the coin to the position of the machine's center of balance. c. The operators had to arrange the tips of the pens to indicate the wind direction, while the number of pens represents the wind intensity: 1 pen = weak wind, 2 pens = medium wind, and 3 pens = strong wind. d. The operators were asked to move the hook on the replica to show how much the lifted load is swinging.
+
+
+
+Figure 7: a. The distance between the machine and the obstacle is divided into three levels: near (1 radius), medium (2 radii), and far (3 radii). b. The meaning of each segment in the first concept of proximity warning. c. The meaning of each segment in the second concept of proximity warning.
+
+Lastly, we provided a sheet of paper with an image of the interior view of a mobile crane's cabin. The operators were asked to place the visualization concepts, which were printed on a transparent film and then cut into pieces, on the windshield according to their preferences (see Figure 8). They were also encouraged to exclude concepts that they considered less important. Finally, the operators were asked to describe the reasons for their decisions.
+
+## 4 THE PROPOSED VISUALIZATION CONCEPTS
+
+This section describes each proposed visualization concept generated in our design workshop.
+
+
+
+Figure 8: The operators were asked to place the proposed visualization concepts on the windshield according to their preferences. The image of the crane's cabin was downloaded from [16].
+
+### 4.1 Proximity Warning
+
+Both concepts for the proximity warning were made based on the top view of the mobile crane, with three levels of distance: near, medium, and far (see Figure 7a). In this study, we used humans as the obstacles for simplicity, and also because humans are moving objects. In practice, the obstacle can also be other things, such as buildings, trees, or overhead power lines. The visualization is always shown relative to the direction the cabin is facing.
+
+In the first concept, there are two groups of segments, and each group represents the presence of obstacle(s) on the left or right side of the cabin (see Figure 7b). The left segments are turned on when there is an obstacle on the left side of the cabin, and vice versa. As the visualization is split into two half circles, for obstacles that are exactly in front of or behind the machine, the same segments on both sides are turned on (see the bottom left image in Figure 9). The vertical segments show the position of the obstacle and its distance to the machine. The horizontal segments indicate three levels of altitude of the obstacle: lower than, on the same level as, or higher than the machine. The second concept is similar to the first, except that the visualization forms a complete circle and the center parts indicate the altitude of the obstacle (see Figure 7c). See the images in Figure 9 for examples of how the visualization works in certain scenarios.
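The segment logic above amounts to a simple discretization of an obstacle's bearing, distance, and relative altitude. The sketch below is one way such a mapping could be implemented for the second (full-circle) concept; the eight-sector split, the thresholds, and the function name are our assumptions, not part of the paper:

```python
def proximity_segments(bearing_deg, distance, altitude_delta, radius):
    """Map one obstacle to (sector, distance_level, altitude_level).

    bearing_deg:    obstacle bearing relative to the cabin facing (0 = ahead)
    distance:       horizontal distance from the machine to the obstacle
    altitude_delta: obstacle height minus machine height
    radius:         unit radius defining the three distance rings (Figure 7a)
    """
    # Divide the full circle into eight 45-degree sectors (an assumption).
    sector = int(((bearing_deg % 360) + 22.5) // 45) % 8
    # Three distance rings: near (<= 1 radius), medium (<= 2), far (<= 3).
    if distance <= radius:
        distance_level = "near"
    elif distance <= 2 * radius:
        distance_level = "medium"
    else:
        distance_level = "far"
    # Three altitude levels relative to the machine.
    if altitude_delta > 0:
        altitude_level = "higher"
    elif altitude_delta < 0:
        altitude_level = "lower"
    else:
        altitude_level = "same"
    return sector, distance_level, altitude_level
```

For example, an obstacle straight ahead within one radius at the machine's level would map to sector 0, 'near', and 'same'.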
+
+
+
+Figure 9: Some scenarios that illustrate how both concepts of proximity warning are used. The visualization is always shown based on the direction where the cabin is facing.
+
+### 4.2 Balance-related Information
+
+We have created two concepts that indicate the balance of the machine. The first concept is called 'center of gravity' and the second is called 'loads on outriggers'. These names also suggest what kind of information is being visualized.
+
+The concept of center of gravity was also made based on the top view of the machine, and it shows the current position of the center of gravity with respect to the center of the machine (see the left side images in Figure 10). When the center of gravity is near the center of the machine (the circle in the center), the machine is in a very stable position. To maintain the machine's balance, operators should ensure that the center of gravity does not go beyond the outermost segments, where the risk of tipping over is higher. Each segment in this concept indicates the position of the center of gravity.
+
+The concept of loads on outriggers depicts the load on each of the four outriggers. Depending on the direction of the cabin and how far the boom is extended, each outrigger may bear a different load. In this concept, there are three rectangles next to each outrigger, which indicate three levels of load on that outrigger: low, medium, and high. The right side images in Figure 10 illustrate how this concept works in specific scenarios.
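The three-rectangle indicator per outrigger is effectively a three-level quantization of the measured outrigger load. A minimal sketch, assuming thresholds at thirds of the rated capacity (the paper does not specify them):

```python
def outrigger_level(load, rated_capacity):
    """Return how many of the three rectangles are lit for one outrigger.

    The thresholds (thirds of the rated capacity) are illustrative
    assumptions; a real system would use manufacturer-defined limits.
    """
    ratio = load / rated_capacity
    if ratio <= 1 / 3:
        return 1  # low
    if ratio <= 2 / 3:
        return 2  # medium
    return 3      # high
```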
+
+### 4.3 Wind Speed and Direction
+
+In this concept, the arrows indicate the direction in which the wind blows. In each direction, there are three arrows that indicate the force of the wind: weak, medium, and strong. In the center, the estimated wind speed is shown in kilometers per hour. See the images in Figure 11 for scenarios that illustrate the use of this concept.
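Both the arrow levels and the numeric readout derive from one measured wind speed. A small sketch of this discretization; the km/h thresholds are assumptions, since the paper does not define the boundaries between weak, medium, and strong wind:

```python
def wind_indicator(speed_kmh):
    """Return (force_level, displayed_speed) for the wind concept.

    The 20 and 40 km/h thresholds are illustrative assumptions,
    not taken from the paper.
    """
    if speed_kmh < 20:
        level = "weak"
    elif speed_kmh < 40:
        level = "medium"
    else:
        level = "strong"
    # The center of the visualization shows the rounded speed in km/h.
    return level, round(speed_kmh)
```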
+
+
+
+Figure 10: Some scenarios that depict the use of balance-related information based on the center of gravity (left) and loads on outriggers (right). The red circle represents where the center of gravity is, with respect to the machine. For both concepts, the visualizations are shown based on the direction of the front part of the machine, and not according to the cabin's direction.
+
+
+
+Figure 11: The arrows in the left side images and the wind icon in the right side images represent both wind direction and wind force. The wind direction is always shown depending on where the cabin is facing. The numbers in the center indicate the estimated wind speed.
+
+### 4.4 Swinging of the Lifted Load
+
+As the name implies, this concept indicates the swinging intensity of the lifted load. From the safety guidelines, we learned that swinging can occur due to the wind, as well as the movement of the boom, and that swinging can affect the machine's balance. However, the visualization indicates only the intensity of the swinging, without telling its direction. The reason behind this choice is that the swinging can happen in any direction, which would complicate the visualization. The concept resembles a pendulum. The center segment is turned on when the lifted load is not swinging. The next two segments are turned on when the lifted load is swinging slightly, while the outermost segments are turned on when the swinging is stronger. Figure 12 shows how this concept works.
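The pendulum-style indicator maps a measured swing amplitude to one of three segment pairs. A minimal sketch, with an assumed amplitude threshold (the paper specifies only the qualitative levels):

```python
def swing_segment(amplitude_deg):
    """Return which pendulum segments are lit for a given swing amplitude.

    'center' = no swinging, 'inner' = slight swinging, 'outer' = strong
    swinging. The 5-degree boundary is an illustrative assumption.
    """
    if amplitude_deg <= 0:
        return "center"
    if amplitude_deg <= 5:
        return "inner"
    return "outer"
```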
+
+
+
+Figure 12: The images that illustrate how the concept works. If there is no swinging, the center segment will be turned on, while other segments will be turned off. Farther segments indicate stronger swinging.
+
+### 4.5 Relative Load Capacity
+
+Since the relative load capacity constantly changes depending on various factors [21], this concept shows four types of information: (1) the angle of the boom, (2) the height between the hook and the ground, (3) the distance between the lifted load and the center of the machine, and (4) ten rectangles that each represent ${10}\%$ of the relative load capacity (see Figure 13). The relative load capacity of each mobile crane, including the maximum limit for each influencing factor, is usually documented, and operators are advised to refer to it before performing lifting operations [15]. Exceeding the limit will cause the machine to tip over. In this case, the operators should prevent all rectangles from being turned on. See the images in Figure 14 for examples that illustrate how this concept works.
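Since each rectangle stands for 10% of the relative load capacity, the number of lit rectangles follows directly from the current utilization. A minimal sketch; the rounding choice (lighting a rectangle as soon as its 10% band is entered) is our assumption:

```python
import math

def lit_rectangles(utilization_pct):
    """Number of lit rectangles (out of ten) for the current utilization.

    utilization_pct: current use of the relative load capacity in percent,
    clamped to [0, 100]. Each rectangle represents 10%; a rectangle is lit
    as soon as its 10% band is entered (an illustrative assumption).
    """
    pct = max(0.0, min(100.0, utilization_pct))
    return math.ceil(pct / 10)
```

With this rule, all ten rectangles are lit once utilization exceeds 90%, which matches the guidance that operators should prevent all rectangles from being turned on.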
+
+
+
+Figure 13: The meaning for each component in the concept for showing the relative load capacity.
+
+### 4.6 Generic Warning Sign
+
+The last concept was a generic warning sign that appears only when a collision or loss of balance is imminent (see Figure 15). When this warning appears, operators should stop their current action.
+
+## 5 RESULTS
+
+This section presents the feedback on each visualization concept, as well as where the information should be placed on the windshield.
+
+
+
+Figure 14: Images that illustrate how the concept works. The top and center images show that, even though the machine is lifting the same object, the relative load capacity varies depending on the height between the hook and the ground, as well as the distance between the lifted load and the center of the machine. The bottom image shows that the relative load capacity naturally increases if the load is heavier. Note that the numbers in these images are used for demonstration purposes only.
+
+
+
+Figure 15: A generic warning that only appears when a hazardous situation is imminent.
+
+### 5.1 Feedback on Proximity Warning
+
+When there was only one obstacle, it was quite easy for the operators to understand the meaning of both concepts and to pinpoint the location of the obstacle. However, for the first concept, the idea of turning on the same segments on both sides, when something is exactly in front of or behind the cabin, was interpreted differently by the operators (see the images in Figure 16). The first concept was considered insufficient for all scenarios: if there are two different obstacles with similar proximity on opposite sides, the visualization is the same as that used to show an obstacle exactly in front of or behind the cabin. The second concept, as the visualization forms a complete circle, does not have this drawback (see Figure 16). The operators could easily pinpoint multiple obstacles using the second concept regardless of their proximity, and thus preferred the second concept over the first.
+
+Furthermore, we also discovered that both concepts have another drawback when indicating multiple obstacles at different altitudes (see the images in Figure 17). In this case, it was not clear which obstacle is higher than, on the same level as, or lower than the machine.
+
+
+
+Figure 16: The first concept of proximity warning was understood differently by the operators when the obstacle is exactly in front of or behind the cabin. However, this way of thinking was not wrong either, since if there are two obstacles, one on the left side and the other on the right side of the cabin, the visualization will look the same.
+
+
+
+Figure 17: Both concepts are insufficient to visualize all different scenarios. In this case, although the operators were able to pinpoint both distance and position of the obstacles, the altitude of multiple obstacles could not be determined.
+
+The operators also commented that it would be good if the indication of altitude could be shown directly in the segments that show the proximity of the obstacle. Despite this drawback, all the operators would like to have this kind of information on the windshield.
+
+### 5.2 Feedback on Balance-related Information
+
+Both concepts were easily understood by the operators, and there were no issues with them. Only one operator preferred the concept of center of gravity, while five operators rated loads on outriggers as the better concept, mainly because modern cranes already have a similar visualization and they felt more familiar with it. However, only four out of six operators would like to have either concept presented in the cabin. The remaining two operators commented that this kind of information already exists on the head-down display, and thus felt it unnecessary to have it on the windshield as well.
+
+### 5.3 Feedback on Wind Speed and Direction
+
+The operators could easily comprehend the meaning of the concept, since modern mobile cranes already display something similar. Regarding the importance of having such information, the operators said that it highly depends on the weather. In clear weather, this information is not needed, as the operators already know that the wind speed will be within acceptable limits. On the contrary, when operating the machine in other weather conditions, this information becomes critical for performing safe operations. Nonetheless, only three operators would like to have this information shown all the time.
+
+### 5.4 Feedback on Swinging of the Lifted Load
+
+The meaning of this concept was obvious to the operators. However, only two operators would like to have this information on the windshield. The remaining operators commented that this information is not needed, as they can see the swinging and estimate how it will affect the machine's balance.
+
+### 5.5 Feedback on Relative Load Capacity
+
+This concept was also well understood by the operators, since modern mobile cranes are already equipped with LMI systems, which indicate similar information. According to all the operators, this is the most important information for performing safe operations and they would like to have it on the windshield as well. However, they commented that the information about the angle of the boom could be removed, as it is not important.
+
+### 5.6 Feedback on Generic Warning Sign
+
+Although the meaning of this concept was very obvious to the operators, only four out of six operators would like to have this warning shown on the windshield. The remaining two operators said that modern mobile cranes already have distracting auditory warning for imminent danger, thus the visual warning is no longer needed.
+
+### 5.7 Information Placement
+
+As mentioned in Subsection 3.3, we also asked the operators to indicate where the information should be visualized on the windshield. In this activity, they were also allowed to include or exclude some of the visualization concepts according to their preferences. Based on the placements made by the operators, we can observe a pattern in where the information should be presented (see the images in Figure 18). The operators would like the information to be visualized peripherally. They commented that the central area has to be clear of any obstruction, otherwise it would hinder their operations. However, an exception was made by two operators, who put the generic warning sign in the center, since this position could attract their attention immediately. Regarding the placement of the other visualization concepts, we unfortunately could not get a firm indication from this study, as the operators' preferences were quite diverse.
+
+## 6 Discussion
+
+This section presents our reflection on the evaluation method used in this study, as well as further suggestions given by the operators at the end of the interviews.
+
+### 6.1 Reflection on the Evaluation Method Used in This Study
+
+Since the visualization concepts are proposals, and thus do not exist in their intended forms, we need to reason about their validity on the basis of the evaluation method that we used. Krippendorff [13] presents different levels of validity, in order of increasing strength: demonstrative validity, experimental validity, interpretative validity, methodological validity, and pragmatic validity. Due to the way this study was conducted, we specifically discuss demonstrative validity and methodological validity.
+
+Regarding demonstrative validity, we were able to show the meaning of the proposed visualization concepts and how they could work in different situations through the concepts printed on paper, along with the tools that the operators could interact with. This arrangement enabled the operators not only to understand the meaning of the proposed visualization concepts more easily, but also to form ideas about how the proposed concepts would work in various scenarios. In addition, we were able to discover what would or would not make sense according to the operators' way of thinking. For example, as presented in Subsection 5.1, we discovered that both proximity warning concepts are inadequate for all situations.
+
+
+
+Figure 18: Images that illustrate which visualization concepts the operators preferred to have and where the information should be placed on the windshield. Note that 'HDD' refers to the head-down display that already exists inside the cabin.
+
+With respect to methodological validity, we decided to evaluate the proposed visualization concepts in paper form, since modifications can be incorporated easily at early stages. Despite using a low-fidelity prototype and some other tools, we were able to discover to what extent the proposed visualization could suit the operators' needs and way of thinking in order to perform safe operations. Although the number of operators involved in this study is rather small, research on heavy machinery often involves small numbers of operators as participants, either in observational studies [23] or experimental studies [24]. Our method was in contrast to that of Kvalberg [14], where a functional prototype was developed, but no feedback was obtained from mobile crane operators. Needless to say, a prototype with higher fidelity that can be used in realistic scenarios, like those of Chi et al. [7] and Fang et al. [11], is required to determine to what extent the proposed visualization will benefit or hinder the operators.
+
+### 6.2 Suggestions for the Proposed Visualization Concepts
+
+All the operators appreciated the effort of bringing the information closer to their line of sight. All of them agreed that this approach has the potential to improve safety in their operations, as they could acquire the information without diverting their attention from operational areas. Moreover, they also provided additional comments on how the transparent display could be made to better suit their needs.
+
+Firstly, the operators raised concerns about how much the transparent display would obstruct their view in practice. As mobile cranes can be used at any time of the day, they were concerned that the brightness of the transparent display may be too much for their eyes when operating in dark environments. On the contrary, a bright display is good in bright environments, so the information remains visible even in direct sunlight. Therefore, besides automatic adaptation to the ambient light intensity, it should also be possible to manually adjust the transparent display's brightness.
+
+Secondly, based on what is presented in Subsection 5.3, there were different opinions on whether the information should always be presented on the windshield. Although the information is important, it may not need to be visualized all the time. The operators also commented that it would be beneficial if they could choose what kind of information appears on the windshield, depending on their work environments. However, this kind of modification is not yet possible with the current transparent display, as the visualization is fixed when the display is manufactured. Nevertheless, if there are multiple transparent displays showing different kinds of information, it should be possible to manually decide which transparent displays are turned on or off.
+
+## 7 LIMITATIONS AND FUTURE WORK
+
+The proposed visualization concepts presented in this paper were generated based on the findings from the safety guidelines review. According to the feedback in Subsections 5.2, 5.4, and 5.6, the operators could obtain similar information by looking directly at the environment or at safety features that already exist on the head-down display, and thus having similar information presented on the windshield may not be so beneficial for them. Therefore, it is also important to take into account the availability of existing information inside mobile cranes and how that information is delivered, in order to ensure that only essential information is presented on the windshield. However, we did not take this approach in this study, since different manufacturers may install diverse supportive information systems in their mobile cranes. As a result, one kind of information may be available in one mobile crane, but unavailable in another.
+
+In this study, we used a low-fidelity prototype, which was printed on paper, and some other tools to show the proposed visualization concepts and to test the operators' understanding of the shown visualization. With this arrangement, we were still able to convey the meaning of the proposed visualization concepts to the operators, as well as to test their understanding with the help of the provided tools. Having said that, we are still limited in terms of fidelity, and thus the results of this study are best considered indicative.
+
+In the future, we plan to revise the proposed visualization concepts according to the feedback we have obtained from the operators. We also plan to build a higher-fidelity prototype by building an actual transparent display that visualizes the proposed concepts. The prototype could then be used in future evaluations within controlled environments or real-world settings, in order to investigate the impact of such visualization on operators' performance in certain scenarios. For example, the number of things in the working environment that operators need to observe when operating the machine may influence the level of attention on the presented information. Furthermore, future evaluations could also be carried out to discover which information placement provides the optimal result for the operators.
+
+## 8 CONCLUSION
+
+In this paper, we have proposed and evaluated visualization concepts based on transparent displays that could assist mobile crane operators in performing safe operations. We started the design process by gathering information from several safety guidelines, generating ideas through a design workshop, and then obtaining feedback from operators through interviews. The operators we interviewed appreciated this approach, and the results of this study indicate what information operators need in order to perform safe operations, how the information should be visualized, and where the information should be placed on the windshield. Nonetheless, more studies, such as scenario-based evaluations using high-fidelity prototypes, need to be conducted to further determine both the applicability and the usefulness of this approach.
+
+## ACKNOWLEDGMENTS
+
+This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement number 764951.
+
+## REFERENCES
+
+[1] A. Abileah, K. Harkonen, A. Pakkala, and G. Smid. Transparent electroluminescent (EL) displays. Technical report, Planar Systems, 1 2008.
+
+[2] Associated Training Services. Crane operators and their team, November 2016.
+
+[3] A. P. Cann, A. W. Salmoni, P. Vi, and T. R. Eger. An exploratory study of whole-body vibration exposure and dose while operating heavy equipment in the construction industry. Applied Occupational and Environmental Hygiene, 18(12):999-1005, June 2003. doi: 10.1080/715717338
+
+[4] Y.-C. Chen, H.-L. Chi, S.-C. Kang, and S.-H. Hsieh. A smart crane operations assistance system using augmented reality technology. In the 28th International Symposium on Automation and Robotics in Construction, ISARC 2011, pp. 643-649. IAARC, Seoul, South Korea, 2011. doi: 10.22260/ISARC2011/0120
+
+[5] J. Y. Chew, K. Ohtomi, and H. Suzuki. Skill metrics for mobile crane operators based on gaze fixation pattern. In Advances in Human Aspects of Transportation, pp. 1139-1149. Springer, Cham, Switzerland, 2017. doi: 10.1007/978-3-319-41682-3_93
+
+[6] J. Y. Chew, K. Ohtomi, and H. Suzuki. Glance behavior as design indices of in-vehicle visual support system: A study using crane simulators. Applied Ergonomics, 73:183-193, November 2018. doi: 10.1016/j.apergo.2018.07.005
+
+[7] H.-L. Chi, Y.-C. Chen, S.-C. Kang, and S.-H. Hsieh. Development of user interface for tele-operated cranes. Advanced Engineering Informatics, 26(3):641-652, 2012. doi: 10.1016/j.aei.2012.05.001
+
+[8] M. R. Endsley and D. G. Jones. Designing for Situation Awareness: An Approach to User-Centered Design. CRC Press, Boca Raton, USA, second ed., 2016. doi: 10.1201/b11371
+
+[9] Y. Fang and Y. K. Cho. Effectiveness analysis from a cognitive perspective for a real-time safety assistance system for mobile crane lifting operations. Journal of Construction Engineering and Management, 143(4), April 2017. doi: 10.1061/(ASCE)CO.1943-7862.0001258
+
+[10] Y. Fang, Y. K. Cho, and J. Chen. A framework for real-time pro-active safety assistance for mobile crane lifting operations. Automation in Construction, 72:367-379, December 2016. doi: 10.1016/j.autcon.2016.08.025
+
+[11] Y. Fang, Y. K. Cho, F. Durso, and J. Seo. Assessment of operator's situation awareness for smart operation of mobile cranes. Automation in Construction, 85:65-75, January 2018. doi: 10.1016/j.autcon.2017.10.007
+
+[12] R. A. King. Analysis of crane and lifting accidents in North America from 2004 to 2010. Master's thesis, Massachusetts Institute of Technology, June 2012.
+
+[13] K. Krippendorff. The Semantic Turn: A New Foundation for Design. CRC Press, Boca Raton, USA, first ed., 2005.
+
+[14] J. L. Kvalberg. Head-up display in driller and crane cabin. Master's thesis, Norwegian University of Science and Technology, June 2010.
+
+[15] Labour Department. Code of practice for safe use of mobile cranes, September 2017.
+
+[16] Liebherr. LiSIM - simulators for Liebherr construction machines: Training under even more realistic conditions, November 2016.
+
+[17] Liebherr. Influence of wind on crane operation, 2017.
+
+[18] M. McCann, J. Gittleman, and M. Watters. Crane-related deaths in construction and recommendations for their prevention. Technical report, The Center for Construction Research and Training, November 2009.
+
+[19] R. L. Neitzel, N. S. Seixas, and K. K. Ren. A review of crane safety in the construction industry. Applied Occupational and Environmental Hygiene, 16(12):1106-1117, November 2001. doi: 10.1080/10473220127411
+
+[20] Occupational Safety & Health Council. Safe lifting, 2002.
+
+[21] Safe Work Australia. General guide for cranes, December 2015.
+
+[22] Á. Segura, A. Moreno, G. Brunetti, and T. Henn. Interaction and ergonomics issues in the development of a mixed reality construction machinery simulator for safety training. In International Conference on Ergonomics and Health Aspects of Work with Computers, pp. 290-299. Springer, Berlin, Germany, 2007. doi: 10.1007/978-3-540-73333-1_36
+
+[23] T. A. Sitompul and M. Wallmyr. Analyzing online videos: A complement to field studies in remote locations. In the 17th IFIP TC.13 International Conference on Human-Computer Interaction, INTERACT 2019, pp. 371-389. Springer, Cham, Switzerland, 2019. doi: 10.1007/978-3-030-29387-1_21
+
+[24] T. A. Sitompul and M. Wallmyr. Using augmented reality to improve productivity and safety for heavy machinery operators: State of the art. In the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI '19, pp. 8:1-8:9. ACM, New York, USA, 2019. doi: 10.1145/3359997.3365689
+
+[25] P. Tretten, A. Gärling, R. Nilsson, and T. C. Larsson. An on-road study of head-up display: Preferred location and acceptance levels. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1):1914-1918, 2011. doi: 10.1177/1071181311551398
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9b1f11c3f7216e89cd428b3b0967a227b1e1ebe5
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/WWxviNxYwoc/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,317 @@
+§ PRESENTING INFORMATION CLOSER TO MOBILE CRANE OPERATORS' LINE OF SIGHT: DESIGNING AND EVALUATING VISUALIZATION CONCEPTS BASED ON TRANSPARENT DISPLAYS
+
+Taufik Akbar Sitompul*
+
+Mälardalen University
+
+CrossControl
+
+Rikard Lindell ${}^{ \dagger }$
+
+Mälardalen University
+
+Markus Wallmyr ${}^{ \ddagger }$
+
+Mälardalen University
+
+CrossControl
+
+Antti Siren ${}^{§}$
+
+Forum for Intelligent Machines
+
+
+Figure 1: The left image shows an example of supportive system inside a mobile crane [6]. The right image illustrates that mobile crane operators often look at areas that are far away from the location where the supportive system is placed [2].
+
+§ ABSTRACT
+
+We have investigated the visualization of safety information for mobile crane operations using transparent displays, where the information can be presented closer to operators' line of sight with minimal obstruction of their view. The intention of the design is to help operators acquire supportive information provided by the machine, without requiring them to divert their attention far from operational areas. We started the design process by reviewing mobile crane safety guidelines to determine what information operators need to know in order to perform safe operations. Using the findings from the safety guidelines review, we then conducted a design workshop to generate design ideas and visualization concepts, as well as to delineate their appearance and behaviour based on the capability of transparent displays. We transformed the results of the workshop into a low-fidelity paper prototype, and then interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, as information will be presented closer to operators' line of sight, we need to be selective about what kind of information, and how much of it, should be presented to operators. However, all the operators appreciated having information presented closer to their line of sight, as an approach that has the potential to improve safety in their operations.
+
+Index Terms: Human-centered computing-Visualization-Visualization application domains-Information visualization; Visualization design and evaluation methods
+
+§ 1 INTRODUCTION
+
+The mobile crane is a type of heavy machinery commonly found on construction sites due to its vital role of lifting and distributing materials. Unlike tower cranes, which require some preparation before they can be used, mobile cranes can be mobilized and utilized more quickly. However, mobile cranes are complex machines, and operating them requires extensive training and full concentration [10, 11]. When lifting a load, mobile cranes require a wide workspace in three dimensions. Operators must be cautious to prevent both the boom and the load from hitting other objects, such as structures, machines, or people. At the same time, operators must also prevent the machine from tipping over, since the machine's centre of balance is constantly changing depending on many factors, such as the height and weight of the lifted load, the ground surface, and wind [19].
+
+The complexity of mobile crane operation keeps operators' cognitive workload continuously high [11]. Repetitive tasks and long working hours also make operators vulnerable to fatigue and distraction, which could lower their ability to mitigate upcoming hazards. 43% of crane-related accidents between 2004 and 2010 were caused by operators [12]. In addition, mobile cranes are also considered the most dangerous machine in the construction sector, as they contributed to about 70% of all crane-related accidents [18]. Crane-related accidents can cause tremendous losses in property and life for both workers and non-workers [12, 19]. The most common crane-related accidents are electrocution due to contact with power lines, being struck by the lifted load, being struck by crane parts, or a collapsing crane [18].
+
+To assist operators, modern mobile cranes are equipped with head-down supportive systems (see the left image in Figure 1), for example, Load Moment Indicator (LMI) systems that indicate if the maximum load capacity is approached or exceeded [19]. However, since head-down displays could obstruct operators' view, the information is displayed away from operators' line of sight, as shown in the right image in Figure 1. Furthermore, many LMI systems only present numerical information that does not support operators' contextual awareness, and consequently requires extra cognitive effort to interpret the meaning of the information [9]. In this case, the benefit of having supportive information is nullified by both information placement and information visualization.
+
+*e-mail: taufik.akbar.sitompul@mdh.se
+
+†e-mail: rikard.lindell@mdh.se
+
+‡e-mail: markus.wallmyr@crosscontrol.com
+
+§e-mail: antti.siren@fima.fi
+
+Graphics Interface Conference 2020, 28-29 May
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+
+Figure 2: a. The transparent display that could be installed on the crane’s windshield [14]. b. Using multiple displays for operating remote tower cranes [4]. c. Operating a mobile crane through a tablet-like display [11].
+
+We hypothesize that information presented near the line of sight would benefit mobile crane operators. For example, information displayed on the windshield would allow operators to acquire the supportive information without diverting their attention from operational areas. However, this approach has its own challenges. With information presented near operators' line of sight, there is a potential risk of distracting operators from their work. Therefore, the information needs to be presented cautiously, where the right information is presented at the right time, the right place, and the right intensity [8, 10]. This approach should enable operators to perform their work while maintaining awareness of both the machine and its surroundings. This paper addresses the following research questions:
+
+1. What kind of information do mobile crane operators need to know in order to perform safe operations?
+
+2. How should the supportive information look and behave with respect to the performed operation?
+
+3. How do mobile crane operators perceive the proposed visualization approach?
+
+The rest of this paper is divided into seven sections. Section 2 reviews prior work that investigated new ways of presenting information in cranes. Section 3 explains the three activities that were carried out to address the research questions above. Section 4 presents the visualization concepts that we have designed, while Section 5 describes the feedback that we obtained from the operators. Section 6 discusses further suggestions from the operators and reflects on the evaluation method used in this study. Section 7 acknowledges the limitations of this study and outlines what could be done in future work. Section 8 concludes the paper.
+
+§ 2 RELATED WORK
+
+Considering the current setting, where mobile crane operators receive supportive information via head-down displays, one may suggest alternative approaches using auditory or tactile modalities. However, mobile cranes are noisy and generate internal vibration due to the working engine [3] and the swinging lifted load [5], and auditory information is already used in mobile cranes to some extent [10]. Adding more information via haptic and auditory channels could therefore be counterproductive, as these channels convey information less clearly than the visual channel [8].
+
+Prior research indicated that visual information is still the primary modality for presenting supportive information in heavy machinery, including both mobile cranes and fixed-position cranes, such as tower and off-shore cranes (see Sitompul and Wallmyr [24] for a complete review). Proposed approaches for improving safety in crane operations also vary. For off-shore cranes, Kvalberg [14] proposed transparent displays that could be installed on the crane's windshield for presenting the relative load capacity (see Figure 2a), which indicates how much weight a crane can lift depending on how far and how high the load will be lifted [21]. For remotely controlled tower cranes, Chi et al. [7] proposed multiple displays, each presenting certain information, such as machine status, lifting path, potential collision, and multiple views of the working environment (see Figure 2b). Fang et al. [11] proposed a tablet-like display in mobile cranes to show multiple views of the working environment, including supportive information that indicates the recommended lifting path, potential collisions, and excessive load (see both images in Figure 2c).
+
+The study of Kvalberg [14] was limited to a technical evaluation of the proposed transparent display (see Figure 2a). Chi et al. [7] involved five crane operators and 30 graduate students in their experiment, although it was carried out in a controlled environment (see Figure 2b). They compared the participants' performance with and without the multi-view system. The results indicated that the participants took less time to complete the given task, and the operator participants also perceived the multi-view system as something that could enhance safety in lifting operations. Fang et al. [11] involved five mobile crane operators in their experiment, which was carried out in a real mobile crane. To evaluate the proposed system, they compared the operators' performance with and without it. The results showed that the operators had shorter response times and a higher rate of correct responses when using the proposed system. In addition, the results from the Situation Present Assessment Method (SPAM) also indicated that the operators were able to maintain a higher level of situation awareness when using the proposed system. Despite the positive results, the operators commented that the display was too small and could also obstruct their view.
+
+
+Figure 3: Some sketches that were made in our design workshop. The left sketches show how we explored the visualization of relative load capacity. The middle sketches illustrate various ways of visualizing proximity warning in different directions, distances, and height. In top right, we investigated the visualization of changes on the machine's balance. In bottom right, we sketched how the wind speed and direction should be visualized.
+
+§ 3 METHODS
+
+Aligned with prior research, we hypothesize that information presented near the line of sight would benefit mobile crane operators, since they could acquire the supportive information without diverting their attention from operational areas. To address the research questions written in Section 1, three different activities were conducted, as described in the following subsections.
+
+§ 3.1 UTILIZING SAFETY GUIDELINES AS A SOURCE OF INFORMATION
+
+To address the first research question and figure out which information is important for operators to perform safe operations, we reviewed four different mobile crane operation safety guidelines [15, 17, 20, 21]. This could also have been done by asking operators or domain experts. However, this alternative may be less efficient, because operators may have different operational styles or preferences, and thus different requirements. Furthermore, we would have missed the international aspect covered by guidelines from different parts of the world. Therefore, we used the safety guidelines as the starting point, as they are applicable to all operators regardless of their operational styles or preferences. From the safety guidelines review, we found that the guidelines are provided to prevent the following events:
+
+1. Collisions between the mobile crane, its parts, or the lifted load and nearby people or structures at the working area. To prevent this from happening, operators should know what is around the machine and what the machine is about to do.
+
+2. Loss of balance that could occur due to many factors, such as excessive load capacity, strong wind, or unstable ground. To avoid this event, operators should know the current state of the machine and never operate the machine beyond permitted conditions.
+
+§ 3.2 GENERATING IDEAS THROUGH A DESIGN WORKSHOP
+
+To address the second research question, we used the findings from the safety guidelines in a design workshop to generate ideas for the appearance and behaviour of the visualization. The workshop involved three human-computer interaction (HCI) researchers, who are also authors of this paper. Two researchers have research expertise in human-machine interfaces for heavy machinery, while one researcher is a generalist.
+
+Since the type of display influences the form of information and how it can be presented, we first discussed which display is appropriate in mobile cranes. As mobile cranes have a large front windshield and operators look through it most of the time [5], the wide windshield could be used as a space for presenting the supportive information, given that the information does not obstruct operators' view. There are various commercially available displays that can be used for this purpose, such as head-mounted displays (HMDs), projection-based head-up displays (HUDs), and transparent displays. However, each of these displays has its benefits and drawbacks in this context [24]. Head-mounted displays, for example, Microsoft HoloLens, enable operators to see the presented information exactly within their sight. Although newer HMDs come with better ergonomics, they are still not comfortable enough to be used for long hours [22]. In addition, operators are already required to wear protective gear when working, and thus they may be reluctant to wear additional equipment. Projection-based HUDs, like the ones available for cars, are another alternative, and operators do not need to wear any extra equipment. However, the quality of information presented using projection-based HUDs may be degraded in bright environments [25]. The third option is transparent displays, like the one Kvalberg [14] used. This option has two disadvantages [1]. Firstly, such displays are limited in terms of colors, since only yellow and green are currently available. Secondly, they support static visualization only, as the display can only present information that has been specified before the display is manufactured. On the positive side, transparent displays are durable against extreme temperature, moisture, and vibration. See Figure 4 for a commercial example of transparent displays. After weighing the benefits and drawbacks of each display type, we reasoned that transparent displays are the most suitable for mobile cranes.
+
+Table 1: The profiles of mobile crane operators that we have interviewed
+
+| No | Gender | Age | Experience | Mobile crane sizes | Knowledge about head-up displays |
+| --- | --- | --- | --- | --- | --- |
+| 1 | Male | 38 years old | 12 years | 30 tons - 800 tons | Knows about it, but never tried it |
+| 2 | Male | 39 years old | 20 years | 30 tons - 150 tons | Knows about it, but never tried it |
+| 3 | Male | 61 years old | 38 years | 8 tons - 130 tons | Has no knowledge about it |
+| 4 | Male | 53 years old | 21 years | 2.5 tons - 220 tons | Knows about it, but never tried it |
+| 5 | Male | 37 years old | 7 years | 2.5 tons - 95 tons | Knows about it, but never tried it |
+| 6 | Male | 45 years old | 20 years | 8 tons - 500 tons | Has tried it in a car |
+
+
+Figure 4: An example of stand-alone transparent display [1]. Each fixed element can be individually lit, whereas the widget's structure is statically rendered in the material.
+
+We generated ideas for visualizations that could help operators prevent the hazardous situations mentioned in Subsection 3.1. We then selected some of the generated concepts based on their suitability for transparent displays (see Figure 3). Eventually, we produced eight visualization concepts that suit the appearance, characteristics, and capability of transparent displays:
+
+1. Two concepts for proximity warning that indicate position, distance, and height of obstacles.
+
+2. Two concepts that indicate the balance of the machine.
+
+3. One concept for showing wind speed and direction.
+
+4. One concept for illustrating how much the lifted load swings.
+
+5. One concept for presenting the relative load capacity, including the angle of the boom, the height of the hook to the ground, and the distance between the lifted load to the center of the machine.
+
+6. A generic warning sign that tells operators to stop their current action.
+
+The description for each visualization concept is presented in Section 4.
+
+§ 3.3 OBTAINING FEEDBACK FROM MOBILE CRANE OPERATORS
+
+To answer the third research question, we interviewed six mobile crane operators to validate the ease of use and possible benefits of the proposed visualizations for performing safe operations. The interviews were carried out by two people, who are also authors of this paper. After explaining both the motivation and procedure of the interviews, and obtaining informed consent from the operators, we collected some background information, such as age, experience as an operator, and the different mobile cranes that they have used. In addition, we also asked whether the operators had prior knowledge of or experience with head-up displays. See Table 1 for some information about the operators that we interviewed. Of the six operators, one had no knowledge about head-up displays. The remaining operators knew about head-up displays, either through seeing commercials or driving a car that has a head-up display in it.
+
+
+Figure 5: The tools that were used to test the operators' understanding of the proposed visualization concepts. The human toys were used to represent nearby obstacles, the coin was used to indicate where the machine's center of gravity is, and the pens were used to indicate both wind speed and direction.
+
+After that, we presented the visualization concepts printed on paper, which illustrated what the visualizations look like in certain situations. We first explained the meaning of each component within the visualization concepts. We also used some tools (see Figure 5) to demonstrate the meaning of the visualization concepts. Once the operators confirmed that they understood the logic behind each visualization concept, we continued with five tests that evaluated the operators' understanding of the concepts for proximity warning, balance, wind speed, swinging load, and relative load capacity. We then presented different examples of the visualizations on paper to the operators. There were ten examples for each concept of proximity warning, eight examples for each concept showing the machine's balance, eight examples for wind speed, four examples for load swinging, and eight examples for relative load capacity. Some of the examples are presented in Section 4. The test for proximity warning had increasing complexity, for example, starting from one obstacle to multiple obstacles with different heights. The operators were asked to use the provided tools, such as toys, a coin, and pens (see Figure 5), and move them according to the shown visualization (see Figure 6). This method was useful for both us and the operators, since we could understand the operators' way of thinking through their actions, and the operators could show what they were thinking without having to explain everything verbally. Surprisingly, four operators had their own mobile crane replicas, and we encouraged them to use those instead. This process was repeated until all visualization concepts were described and evaluated. The generic warning sign was not evaluated, since its meaning was too obvious for the operators.
+
+
+Figure 6: Some pictures that depict how we tested the operators' understanding, where the operators had to move around the provided tools according to the visualization shown on paper. a. The operators had to move the human toy(s) to the position where the obstacle is. b. The operators had to move the coin to the position where the machine's center of balance is. c. The operators had to arrange the tips of the pens to indicate the wind direction, while the number of pens represents the wind intensity; 1 pen = weak wind, 2 pens = medium wind, and 3 pens = strong wind. d. The operators were asked to move the hook on the replica to show how much the lifted load is swinging.
+
+
+Figure 7: a. The distance between the machine and the obstacle is divided into three levels: near (within 1 radius), medium (within 2 radii), and far (within 3 radii). b. The meaning of each segment in the first concept of proximity warning. c. The meaning of each segment in the second concept of proximity warning.
+
+Lastly, we provided a paper with an image of the interior view of a mobile crane's cabin. The operators were asked to place the visualization concepts, which were printed on a transparent film and cut into pieces, on the windshield according to their preferences (see Figure 8). They were also encouraged to exclude concepts that they considered less important. Finally, the operators were asked to describe the reasons for their decisions.
+
+§ 4 THE PROPOSED VISUALIZATION CONCEPTS
+
+This section describes each proposed visualization concept generated in our design workshop.
+
+
+Figure 8: The operators were asked to place the proposed visualization concepts on the windshield according to their preferences. The image of the crane's cabin was downloaded from [16].
+
+§ 4.1 PROXIMITY WARNING
+
+Both concepts for the proximity warning were made based on the top view of the mobile crane, with three levels of distance: near, medium, and far (see Figure 7a). In this study, we used humans as obstacles, both for simplification purposes and because humans are moving objects. In practice, the obstacles can also be other things, such as buildings, trees, or overhead power lines. The visualization is always shown based on the direction the cabin is facing.
+
+In the first concept, there are two groups of segments, and each group represents the presence of obstacle(s) on the left or right side of the cabin (see Figure 7b). The left segments are turned on when there is an obstacle on the left side of the cabin, and vice versa. As the visualization is split into two half circles, for obstacles that are exactly in front of or behind the machine, the same segments on both sides are turned on (see the bottom left image in Figure 9). The vertical segments show the position of the obstacle and its distance to the machine. The horizontal segments indicate three levels of altitude of the obstacle: lower than, on the same level as, or higher than the machine. The second concept is similar to the first, except that the visualization is in the form of a complete circle and the center parts indicate the altitude of the obstacle (see Figure 7c). See the images in Figure 9 for some examples of how the visualization works in certain scenarios.
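To make the discretization concrete, the following sketch (ours, not from the paper; the function name, the angle convention, and the ring size are assumptions) quantizes an obstacle's relative position into the discrete states that the proximity-warning concepts display:

```python
# Illustrative sketch only: mapping an obstacle's relative position to the
# discrete states used by the proximity-warning concepts. All names and
# thresholds here are hypothetical.

def proximity_state(angle_deg, distance_m, altitude_m, radius_m=5.0):
    """Quantize an obstacle's position relative to the cabin.

    angle_deg: bearing of the obstacle, 0 = straight ahead, clockwise.
    distance_m: horizontal distance from the machine.
    altitude_m: obstacle height relative to the machine (0 = same level).
    radius_m: hypothetical size of one distance ring (near = 1 radius).
    """
    # Side of the cabin; obstacles exactly ahead or behind light the same
    # segments on both sides in the first concept (the ambiguity the
    # operators pointed out in the study).
    a = angle_deg % 360
    if a in (0, 180):
        side = "both"
    elif a < 180:
        side = "right"
    else:
        side = "left"

    # Three distance levels: near (<= 1 radius), medium (<= 2), far (> 2).
    rings = distance_m / radius_m
    if rings <= 1:
        distance = "near"
    elif rings <= 2:
        distance = "medium"
    else:
        distance = "far"

    # Three altitude levels relative to the machine.
    if altitude_m > 0:
        altitude = "higher"
    elif altitude_m < 0:
        altitude = "lower"
    else:
        altitude = "same"

    return side, distance, altitude
```

For instance, an obstacle 4 m away on the operator's right at ground level would map to `("right", "near", "same")` under these assumed thresholds.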
+
+
+Figure 9: Some scenarios that illustrate how both concepts of proximity warning are used. The visualization is always shown based on the direction where the cabin is facing.
+
+§ 4.2 BALANCE-RELATED INFORMATION
+
+We have created two concepts that indicate the balance of the machine. The first concept is called 'center of gravity' and the second one is called 'loads on outriggers'. These names also suggest what kind of information is being visualized.
+
+The concept of center of gravity was also made based on the top view of the machine and it shows the current position of the center of gravity with respect to the center of the machine (see the left side images in Figure 10). When the center of gravity is near the center of the machine (the circle in the center), it shows that the machine is in a very stable position. To maintain the machine's balance, operators should ensure that the center of gravity does not go beyond the outermost segments, as the risk of tipping over is higher. Each segment in this concept indicates the position of the center of gravity.
+
+The concept of loads on outriggers depicts the load on each of the four outriggers. Depending on the direction of the cabin and how far the boom is extended, each outrigger may carry a different load. In this concept, there are three rectangles next to each outrigger, which indicate three levels of load on that outrigger: low, medium, and high. The right side images in Figure 10 illustrate how this concept works in specific scenarios.
+
+§ 4.3 WIND SPEED AND DIRECTION
+
+In this concept, the arrows indicate the direction in which the wind blows. In each direction, there are three arrows that indicate the force of the wind: weak, medium, and strong. In the center, the segments indicate the estimated wind speed in kilometers per hour. See the images in Figure 11 for some scenarios that illustrate the use of this concept.
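As a rough illustration of this logic (our sketch; the speed thresholds and the eight-sector compass quantization are invented, since the paper does not specify them), the widget's state could be derived as:

```python
# Hypothetical sketch of the wind concept's logic: the numeric speed is
# shown in the center, while 1-3 arrows in the wind's direction encode
# its force. Thresholds below are invented for illustration.

def wind_widget(speed_kmh, direction_deg):
    """Return (arrow_count, direction_sector, speed) for the wind concept."""
    if speed_kmh < 20:
        arrows = 1   # weak wind
    elif speed_kmh < 40:
        arrows = 2   # medium wind
    else:
        arrows = 3   # strong wind

    # Eight compass sectors, each 45 degrees wide, centered on N/NE/E/...
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    sector = sectors[int(((direction_deg % 360) + 22.5) // 45) % 8]
    return arrows, sector, round(speed_kmh)
```

A display driver could then light the corresponding fixed arrow segments, which matches the static-visualization constraint of transparent displays: only predefined elements can be lit.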
+
+
+Figure 10: Some scenarios that depict the use of balance-related information based on the center of gravity (left) and loads on outriggers (right). The red circle represents where the center of gravity is, with respect to the machine. For both concepts, the visualizations are shown based on the direction of the front part of the machine, and not according to the cabin's direction.
+
+
+Figure 11: The arrows in the left side images and the wind icon in the right side images represent both wind direction and wind force. The wind direction is always shown depending on where the cabin is facing. The numbers in the center indicate the estimated wind speed.
+
+§ 4.4 SWINGING OF THE LIFTED LOAD
+
+As the name implies, this concept indicates the swinging intensity of the lifted load. From the safety guidelines, we learned that swinging could occur due to the wind as well as the movement of the boom, and that the swinging could affect the machine's balance. However, the visualization indicates the intensity of the swinging only, without showing its direction. The reason behind this choice is that swinging can happen in any direction, which would complicate the visualization. The concept resembles a pendulum. The center segment is turned on when the lifted load is not swinging. The next two segments are turned on when the lifted load is swinging a bit, while the outermost segments are turned on when the swinging is stronger. See the images in Figure 12 for how this concept works.
+
+
+Figure 12: The images that illustrate how the concept works. If there is no swinging, the center segment will be turned on, while other segments will be turned off. Farther segments indicate stronger swinging.
+
+§ 4.5 RELATIVE LOAD CAPACITY
+
+Since the relative load capacity constantly changes depending on various factors [21], this concept shows four types of information: (1) the angle of the boom, (2) the height between the hook and the ground, (3) the distance between the lifted load and the center of the machine, and (4) ten rectangles that each represent 10% of the relative load capacity (see Figure 13). The relative load capacity for each mobile crane, including the maximum limit for each influencing factor, is usually documented, and operators are advised to refer to it before performing lifting operations [15]. Exceeding the limit will cause the machine to tip over. In this case, operators should prevent all rectangles from being turned on. See the images in Figure 14 for some examples that illustrate how this concept works.
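A minimal sketch of the bar's logic (ours; the function names and the conservative rounding rule are assumptions, not taken from the paper): one rectangle lights per 10% of capacity used, and lighting all ten means the permitted limit has been reached:

```python
# Illustrative sketch: the relative load capacity bar lights one rectangle
# per 10% of capacity used. Names and rounding rule are invented here.

import math

def capacity_bar(used_fraction, n_rects=10):
    """Return how many of the n_rects rectangles should be lit.

    used_fraction: relative load capacity in [0, 1], e.g. as an LMI
    system would report it. Rounding up is a deliberately conservative
    choice, so the bar never understates the load.
    """
    lit = math.ceil(max(0.0, used_fraction) * n_rects)
    return min(lit, n_rects)

def at_limit(used_fraction):
    # Operators should act before all rectangles are lit.
    return capacity_bar(used_fraction) == 10
```

For example, a crane at 45% of its permitted capacity would light five rectangles under this rule.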
+
+
+Figure 13: The meaning for each component in the concept for showing the relative load capacity.
+
+§ 4.6 GENERIC WARNING SIGN
+
+The last concept is a generic warning sign that appears only when a collision or loss of balance is imminent (see Figure 15). When this warning appears, operators should stop their current action.
+
+§ 5 RESULTS
+
+This section presents the feedback on each visualization concept, as well as where the information should be placed on the windshield.
+
+
+Figure 14: The images that illustrate how the concept works. Both the top and center images illustrate that, even though the machine is lifting the same object, the relative load capacity varies depending on the height between the hook and the ground, as well as the distance between the lifted load and the center of the machine. The bottom image depicts that the relative load capacity naturally increases when the load is heavier. Note that the numbers shown here are used for simplification purposes only, in order to demonstrate the concept.
+
+
+Figure 15: A generic warning that only appears when a hazardous situation is imminent.
+
+§ 5.1 FEEDBACK ON PROXIMITY WARNING
+
+When there was only one obstacle, it was quite easy for the operators to understand the meaning of both concepts and pinpoint the location of the obstacle. However, for the first concept, the idea of turning on the same segments on both sides when something is exactly in front of or behind the cabin was interpreted differently by the operators (see the images in Figure 16). The first concept was considered insufficient for all scenarios, since if there are two different obstacles with similar proximity on both sides, the visualization will be the same as the one used for showing an obstacle that is exactly in front of or behind the cabin. The second concept, as the visualization forms a complete circle, does not have this drawback (see Figure 16). The operators could easily pinpoint multiple obstacles using the second concept regardless of their proximity, and thus they preferred the second concept over the first one.
+
+Furthermore, we also discovered that both concepts have another drawback when indicating multiple obstacles at different altitudes (see the images in Figure 17). In this case, it was not clear which obstacle is higher than, on the same level as, or lower than the machine.
+
+
+Figure 16: The first concept of proximity warning was understood differently by the operators when the obstacle is exactly in front of or behind the cabin. However, this way of thinking was not wrong either, since if there are two obstacles, one on the left side and another on the right side of the cabin, the visualization will look the same.
+
+
+Figure 17: Both concepts are insufficient to visualize all different scenarios. In this case, although the operators were able to pinpoint both distance and position of the obstacles, the altitude of multiple obstacles could not be determined.
+
+The operators also commented that it would be good if the altitude could also be indicated directly in the segments that show the proximity of the obstacle. Despite this drawback, all the operators would like to have this kind of information on the windshield.
+
+§ 5.2 FEEDBACK ON BALANCE-RELATED INFORMATION
+
+Both concepts were easily understood by the operators, and there were no issues with them. Only one operator preferred the concept of center of gravity, while five operators rated loads on outriggers as the better concept. The main reason was that modern cranes already have a similar visualization, and thus they felt more familiar with it. However, only four out of six operators would like to have either concept presented in the cabin. The remaining two operators commented that this kind of information already exists on the head-down display, and thus they felt it is unnecessary to have it on the windshield as well.
+
+§ 5.3 FEEDBACK ON WIND SPEED AND DIRECTION
+
+The operators could easily comprehend the meaning of the concept, since modern mobile cranes already display something similar. Regarding the importance of having such information, the operators said that it highly depends on the weather. In clear weather, this information is not needed, as the operators already know that the wind speed will be within acceptable limits. On the contrary, when operating the machine in other weather conditions, this information becomes critical for performing safe operations. Nonetheless, only three operators would like to have this information shown all the time.
+
+§ 5.4 FEEDBACK ON SWINGING OF THE LIFTED LOAD
+
+The meaning of this concept was obvious to the operators. However, only two operators would like to have this information on the windshield. The remaining operators commented that this information is not needed, as they can see the swinging and estimate how it will affect the machine's balance.
+
+§ 5.5 FEEDBACK ON RELATIVE LOAD CAPACITY
+
+This concept was also well understood by the operators, since modern mobile cranes are already equipped with LMI systems, which indicate similar information. According to all the operators, this is the most important information for performing safe operations and they would like to have it on the windshield as well. However, they commented that the information about the angle of the boom could be removed, as it is not important.
+
+§ 5.6 FEEDBACK ON GENERIC WARNING SIGN
+
+Although the meaning of this concept was very obvious to the operators, only four out of six operators would like to have this warning shown on the windshield. The remaining two operators said that modern mobile cranes already have a distracting auditory warning for imminent danger, thus the visual warning is no longer needed.
+
+§ 5.7 INFORMATION PLACEMENT
+
+As mentioned in Subsection 3.3, we also asked the operators to indicate where the information should be visualized on the windshield. In this activity, they were also allowed to include or exclude some of the visualization concepts according to their preferences. Based on the placements made by the operators, we can observe a pattern in where the information should be presented (see the images in Figure 18). We can see that the operators would like the information to be visualized peripherally. They commented that the central area has to be clear from any obstruction, otherwise it is going to harm their operations. However, an exception was made by two operators who put the generic warning sign in the centre, since this position could attract their attention immediately. Regarding the placement of the other visualization concepts, we unfortunately could not get a firm indication from this study, as the operators' preferences are quite diverse.
+
+§ 6 DISCUSSION
+
+This section describes our reflection on the evaluation method that we used in this study, as well as further suggestions that were given by the operators at the end of the interviews.
+
+§ 6.1 REFLECTION ON THE EVALUATION METHOD USED IN THIS STUDY
+
+Since the visualisation concepts are proposals, and thus do not exist in their intended forms, we need to reason about their validity on the basis of the evaluation method that we used. Krippendorf [13] presents different levels of validity, in order of increasing strength: demonstrative validity, experimental validity, interpretative validity, methodological validity, and pragmatic validity. Due to the way this study was conducted, we specifically discuss demonstrative validity and methodological validity.
+
+Regarding demonstrative validity, we were able to show the meaning of the proposed visualization concepts and how they could possibly work in different situations through the concepts printed on paper, along with the tools that the operators could interact with. This arrangement enabled the operators not only to understand the meaning of the proposed visualisation concepts more easily, but also to form ideas on how the proposed concepts would work in various scenarios. In addition, we were also able to discover what would or would not make sense according to the operators' way of thinking. For example, as presented in Subsection 5.1, we could discover that both concepts of proximity are inadequate for all different situations.
+
+
+Figure 18: Images illustrating which visualization concepts the operators preferred to have and where the information should be placed on the windshield. Note that 'HDD' refers to the head-down display that already exists inside the cabin.
+
+With respect to methodological validity, we decided to evaluate the proposed visualisation concepts in paper form, since modifications could be incorporated easily in early stages. Despite using a low-fidelity prototype and some other tools, we were able to discover to what extent the proposed visualisation could suit the operators' needs and way of thinking in order to perform safe operations. Although the number of operators involved in this study is rather small, research on heavy machinery often involves small numbers of operators as participants, either in observational studies [23] or experimental studies [24]. Our method was in contrast to what Kvalberg [14] has done, where a functional prototype was developed but there was no feedback from mobile crane operators. Needless to say, a prototype with higher fidelity that could be used in some scenarios, like what Chi et al. [7] and Fang et al. [11] have done, is required to determine to what extent the proposed visualization will benefit or hinder the operators.
+
+§ 6.2 SUGGESTIONS FOR THE PROPOSED VISUALISATION CONCEPTS
+
+All the operators appreciated the effort of bringing the information closer to their line of sight. All of them agreed that this approach has the potential to improve safety in their operations, as they could acquire the information without diverting their attention from operational areas. Moreover, they also provided additional comments on how the transparent display could be made to better suit their needs.
+
+Firstly, the operators raised concerns about how much the transparent display will obstruct their view in practice. As mobile cranes could be used at any time of the day, they were concerned that the brightness of the transparent display may be too much for their eyes when the operation is done in dark environments. On the contrary, a bright display is good in bright environments, so the information can still be visible even under direct sunlight. Therefore, besides automatic adaptation to the ambient light intensity, it should be possible to manually adjust the transparent display's brightness.
+
+Secondly, based on what is presented in Subsection 5.3, there were different opinions on whether the information should always be presented on the windshield or not. Although the information is important, it may not need to be visualised all the time. The operators also commented that it would be beneficial if they could choose what information will appear on the windshield, depending on their work environments. However, this kind of modification is not possible yet with the current transparent display, as the visualization is fixed when the display is manufactured. If there are multiple transparent displays showing different kinds of information, it should instead be possible to manually decide which transparent displays should be turned on or off.
+
+§ 7 LIMITATIONS AND FUTURE WORK
+
+The proposed visualization concepts presented in this paper were generated based on the findings from the safety guidelines review. According to the feedback in Subsections 5.2, 5.4, and 5.6, the operators could obtain similar information by looking directly at the environment or at safety features that already exist on the head-down display, and thus having similar information presented on the windshield may not be so beneficial for them. Therefore, it is also important to take into account the availability of existing information inside mobile cranes and how the information is delivered, in order to ensure that only essential information is presented on the windshield. However, we did not take this approach in this study, since different manufacturers may install diverse supportive information systems in their mobile cranes. As a result, one kind of information may be available in one mobile crane but unavailable in another.
+
+In this study, we used a low-fidelity prototype, which was printed on paper, and some other tools to show the proposed visualisation concepts and to test the operators' understanding of the shown visualisation. With this arrangement, we were still able to convey the meaning of the proposed visualisation concepts to the operators, as well as to test their understanding with the help of the provided tools. Having said that, we are still limited in terms of fidelity, and thus the results of this study are best considered as an indication.
+
+In the future, we are planning to revise the proposed visualisation concepts according to the feedback that we have obtained from the operators. We are also planning to build a higher-fidelity prototype by building the actual transparent display that visualizes the proposed concepts. The prototype could then be used in future evaluations within controlled environments or real-world settings in order to investigate the impact of having such visualization on operators' performance in certain scenarios. For example, the number of things in the working environment that operators need to observe when operating the machine may influence the level of attention on the presented information. Furthermore, future evaluations could also be carried out to discover which placement of information provides the optimum result for the operators.
+
+§ 8 CONCLUSION
+
+In this paper, we have proposed and evaluated visualization concepts using transparent displays that could be used to assist mobile crane operators in performing safe operations. We started the design process by gathering information from a few safety guidelines, generating ideas by conducting a design workshop, and then obtaining feedback from the operators through interviews. The operators that we interviewed appreciated this approach, and the results from this study indicate what information operators need in order to perform safe operations, how we should visualize the information, and where to place the information on the windshield. Nonetheless, more studies, such as evaluations with some scenarios using high-fidelity prototypes, will need to be conducted to further determine both the applicability and usefulness of this approach.
+
+§ ACKNOWLEDGMENTS
+
+This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement number 764951.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d59af92e31f7e87573171d4f9688e9993b6154f9
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,357 @@
+# Determination and quantitative description of hollow body in point cloud
+
+## Abstract
+
+When a volumetric 3D display system deals with point cloud data containing hollow bodies, the hollow body areas cannot be determined correctly, causing a lack of color information in the corresponding areas. Existing research lacks a solution to determine hollow bodies. This paper first gives a quantitative description of the hollow body and defines a set of parameters to describe the size, shape, and position of hollow bodies. Then this paper proposes a voxel connectivity regional-growth hollow body determination algorithm (VCRHD) to determine the hollow bodies in a 3D point cloud. The algorithm has two phases. The first phase uses a small number of voxels to voxelize the point cloud and calculates the approximate volume ratio of hollow bodies based on the voxel connectivity regional-growth principle. This paper then uses the approximate volume ratio to determine the optimal number of voxels based on the experimental results. The second phase uses the optimal number of voxels to determine the hollow bodies and calculate the hollow body parameters, which is shown to be efficient and accurate. In addition, this paper establishes a data set containing 287 different point cloud files in 7 different categories to test the algorithm. The experimental results prove the feasibility of the algorithm. Finally, this paper analyzes the limitations of the algorithm and looks forward to its application prospects in the future.
+
+Index Terms: Computing methodologies-Volumetric models
+
+## 1 Introduction
+
+With the development of computer graphics, digital image processing and other technologies, people demand more realistic display effects. In the real world, objects have 3D physical dimensions comprising length, width and height. However, the majority of display technologies remain 2D display technologies that satisfy some psychological depth cues but lack depth information, which leads to degeneration of the functionalities of the human eyes' depth, dynamics, and parallax [1]. The human brain needs to acquire information completely consistent with the real 3D object, which is called 3D information.
+
+Volumetric 3D display enables a range of depth cues to be inherently associated with image content and permits images to be viewed directly by multiple simultaneous observers in a natural way [2]. Volumetric 3D display is therefore a significant research direction of true 3D display. In recent years there have been many studies on volumetric 3D display. In order to solve the problem of large-scale 3D display, Xiaoxing Yan [3] and Yuan Liang [4] propose to construct a 3D entity sandbox display system. They first extract DEM data from a point cloud and then build the sandbox model by means of a mechanical structure. They then install LEDs on the mechanical structure to visually display various 3D terrain, architecture and other scenes. Jin su Lee, Mu young Lee, et al. [5] design a stereoscopic 3D monitor based on a periodic point light source (PLS) structure using a multifocal lens array (MFLA). They construct point lights emitted by the multifocal lens array to form voxels of discrete 3D images in the air. Jun Sun [6] studies a true three-dimensional dynamic display algorithm for the array LED three-dimensional display system and builds an array LED three-dimensional display system.
+
+However, such 3D display systems do not perform well when processing data with hollow bodies such as building gates and bridges. When the data is transmitted to the system for display, the voxels used to display hollow bodies will not receive any color information because these areas contain no points. So the system cannot reproduce the original 3D scene, which seriously affects the effect of the 3D display. Therefore, if we can accurately find each hollow body in the point cloud and use the voxels of hollow bodies to simulate the color that the human eye should observe through them, we can achieve a more realistic three-dimensional display. Besides, the determination of hollow bodies can help in CAD and surface construction.
+
+This paper proposes the concept of hollow body and studies how to solve the problem of hollow bodies in point cloud. The contributions of the paper are as follows:
+
+-This paper proposes the concept of hollow body. Then this paper gives a quantitative description of hollow body and defines a set of parameters to describe its characteristics.
+
+-This paper proposes a novel voxel connectivity regional-growth hollow body determination algorithm to determine the hollow bodies in a point cloud. The algorithm has two phases. The first phase uses a small number of voxels to calculate approximate values of the volume ratio of hollow bodies. This paper then uses the approximate value to determine the optimal number of voxels based on our experimental results. The second phase uses the optimal number of voxels that divide the point cloud to determine hollow bodies and hollow body parameters accurately and efficiently.
+
+-Since the concept of the hollow body is proposed for the first time, there are no related experimental results to compare against. Therefore, this paper establishes a dataset to verify the effect of the algorithm and provides a reference for subsequent work.
+
+## 2 Related works
+
+There are similar studies that determine the boundary of 3D point clouds and holes. The extraction methods of boundaries in 3D point clouds can be divided into two categories: methods based on grids and methods based directly on points. The methods based on grids identify the boundary through the topological relationship between the grids [7-12]. Yongtae Jun [8] first determines the seed boundary point by the principle that one of the adjacent three triangles of a boundary triangle mesh is empty and then obtains the closed-loop boundary edge by tracking. Xian-feng Huang, Xiaoguang Cheng et al. [10] construct TIN grids for the point cloud. Then they initialize a maximum boundary and gradually narrow the boundary edge by setting a threshold on the length of edges. The methods based directly on points first find the neighbors of each point and then calculate geometric properties of the points, such as the normal vector and density, to determine the boundary [13-18]. Van Sinh Nguyen, Trong Hai Trinh, Manh Ha Tran [14] project the point cloud into a two-dimensional grid on the xy plane and obtain the point cloud boundary based on the number of points in each grid and their adjacent grids. Fan Lu, Song Li et al. [18] project the point cloud on a plane and use the inverse distance weight and the point cloud density to determine boundary feature points. Shaoyan Gai, Feipeng Da, Lulu Zeng, Yuan Huang [17] use a 2D phase map and the adjacent area of each point to determine the boundary. Then they use the row and column coordinates of the boundary points to remove the contour points.
+
+However, existing research lacks a solution for the determination of hollow bodies; it can only determine the boundary of a point cloud or holes. So this paper gives a quantitative description of the hollow body and proposes the VCRHD algorithm to determine hollow bodies.
+
+## 3 Hollow body
+
+The hollow body is an empty area which is surrounded by points and has points above it. Besides, it must connect with the outer area of the model. We set the y-axis as the height axis, so the hollow body is an independent area that is closed to the upper area along the y-axis and connects with the outer area in the x-axis or z-axis direction. Regions that have no points or a small number of points may belong to hollow bodies. As shown in figure 1, the areas beyond the green boundaries are six hollow bodies. In this paper, we propose the VCRHD algorithm to find each hollow body in the point cloud and define the volume ratio, normal line, and depth ratio to describe their size, position and shape.
+
+### 3.1 Determination of the hollow body
+
+#### 3.1.1 Overview
+
+The algorithm has two phases, whose processes are similar. The purpose of the first phase is to use a small number of voxels to divide the point cloud and calculate approximate values of the volume ratios of hollow bodies. The approximate value of the volume ratio is used to calculate the optimal number of voxels dividing the point cloud, which is more accurate and efficient based on the experimental results. In the second phase, the algorithm uses the optimal number of voxels to determine the hollow bodies and calculate the hollow body parameters accurately. The process of the algorithm is shown in figure 2.
+
+#### 3.1.2 Voxelization of point cloud
+
+After obtaining the preprocessed point cloud data, we first rotate it through MeshLab to set the y-axis as its height axis and then implement voxelization of the point cloud according to the following rules.
+
+Step 1: Determine the segmentation parameter of voxelization
+
+Calculate the ranges of the point cloud along the x, y, z axes as range_X, range_Y, range_Z and define min as the minimum of the three. Meanwhile, define k as the division parameter, with ${d}_{t} = \min / k$. We find that the number of voxels needed to achieve the same accuracy differs for point cloud data containing different volume ratios of hollow bodies; we summarize this rule in the experimental part. Therefore, k in the first phase is chosen so that the model area of the point cloud is divided into 24000 voxels, following the experimental results. In the second phase we adjust k
+
+
+
+Figure 2: Process of the VCRHD algorithm
+
+to divide the point cloud based on the optimal number of voxels.
+
+Step 2: Define plane matrix and structure
+
+Arrange the voxels into a 2D matrix on the xz plane. The x-axis represents the column of the matrix and the z-axis represents the row. The size of the matrix is given by formula (1).
+
+$$
+\operatorname{round}\left( \text{range\_Z} \cdot k/\min + 2 \right) \cdot \operatorname{round}\left( \text{range\_X} \cdot k/\min + 2 \right) \tag{1}
+$$
+
+The reason for adding 2 is that points are only distributed around the boundary of the entity and there are no points inside it. Therefore, false hollows inside the model would also be determined as hollow bodies. So this paper establishes a layer of voxels outside the minimum cube bounding box and uses the number of voxels in the non-model area adjacent to each hollow body to filter out false hollow bodies inside the model.
+
+This paper defines a structure grid for each region of the matrix. Each structure contains an array of round(range_Y/${d}_{t}$) voxels. The coordinates of the array represent the height information. The schematic diagram is shown in Figure 3.
+
+Step 3: Calculate the positions of points
+
+After defining the voxels and the matrix structure, we calculate the location of each point. Assume that the coordinates of point i are $\left( {{x}_{i},{y}_{i},{z}_{i}}\right)$ and the minimum values along the x, y, z axes are x_min, y_min, z_min. Then the row, column, and height of the voxel in which point i is located are given by formulas (2), (3), (4).
+
+$$
+\operatorname{round}\left( \left( z_i - z_{\min} \right) / \left( \text{range\_Z} / \operatorname{round}\left( \text{range\_Z}/d_t \right) \right) \right) + 1 \tag{2}
+$$
+
+$$
+\operatorname{round}\left( \left( x_i - x_{\min} \right) / \left( \text{range\_X} / \operatorname{round}\left( \text{range\_X}/d_t \right) \right) \right) + 1 \tag{3}
+$$
+
+
+
+Figure 1: Bridge point cloud data with six hollow bodies
+
+
+
+Figure 3: Voxel segmentation of the minimum cube bounding box
+
+
+
+Figure 4: An example of classification of voxels
+
+$$
+\operatorname{round}\left( \left( y_i - y_{\min} \right) / \left( \text{range\_Y} / \operatorname{round}\left( \text{range\_Y}/d_t \right) \right) \right) \tag{4}
+$$
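As a sketch (not the paper's code), the index computation of formulas (2)-(4) can be written as follows; the function name and argument layout are our own.

```python
def voxel_index(p, mins, ranges, d_t):
    """Map a point to the (row, col, height) of the voxel containing it,
    following formulas (2)-(4): row/col are 1-based, height is 0-based.
    `p`, `mins`, `ranges` are (x, y, z) tuples and `d_t = min(ranges) / k`
    is the nominal voxel edge length. Names here are illustrative."""
    x, y, z = p
    x_min, y_min, z_min = mins
    range_x, range_y, range_z = ranges
    # Each axis range is split into round(range / d_t) equal slots.
    row = round((z - z_min) / (range_z / round(range_z / d_t))) + 1
    col = round((x - x_min) / (range_x / round(range_x / d_t))) + 1
    height = round((y - y_min) / (range_y / round(range_y / d_t)))
    return row, col, height
```

Note that Python's `round()` uses banker's rounding at exact .5 ties, which may differ from the rounding convention the authors intended.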
+
+#### 3.1.3 Determination of the group of hollow body voxels
+
+The rules for judging whether a voxel may belong to a hollow body are as follows:
+
+- The number of points it contains is less than the threshold.
+
+- There exists a voxel that doesn't belong to any hollow body above it.
+
+So this paper first calculates the maximum height of each grid cell of the xz-plane matrix. The voxels whose height is lower than the maximum of their cell belong to the model area and the others belong to the area outside the model. If a voxel in the model area contains fewer points than the threshold, we put it into a set Q for further processing because it may belong to a hollow body. Figure 4-b is a schematic diagram of the xy plane. The red voxels belong to the area outside the model. The green and blue voxels belong to the model area, and the blue voxels are put into the set for segmentation.
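The classification above can be sketched as follows (an illustrative reading, not the paper's implementation): `point_counts` maps (row, col, height) to the number of points in each voxel, with missing keys meaning empty voxels, and `threshold` is the sparsity threshold.

```python
def candidate_voxels(point_counts, threshold):
    """Collect the set Q of hollow-body candidate voxels: voxels in the
    model area (strictly below the highest occupied voxel of their
    xz-grid cell) that hold fewer than `threshold` points."""
    # Maximum occupied height per (row, col) cell of the xz-plane matrix.
    top = {}
    for (r, c, h), n in point_counts.items():
        if n > 0:
            top[(r, c)] = max(top.get((r, c), -1), h)
    Q = set()
    for (r, c), h_max in top.items():
        for h in range(h_max):  # below the top -> model area
            if point_counts.get((r, c, h), 0) < threshold:
                Q.add((r, c, h))
    return Q
```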
+
+#### 3.1.4 Segmentation of the group of hollow body voxels
+
+This paper separates the group of voxels based on connectivity. Define the initial group number s as 1 and define a group LK to store the voxels of each group. Then initialize the labels of the voxels in the group as 0.
+
+Step 1: If the label of a voxel in the set Q is 0, change its label to s and put it into LK.
+
+Step 2: Let the row, column, and height of the voxel be r, c, h. If the voxels at positions (r+1, c, h), (r-1, c, h), (r, c+1, h), (r, c-1, h), (r, c, h+1), (r, c, h-1) are in Q, change their labels to s and put them into LK.
+
+Step 3: Repeat step 2 as long as new voxels are put into LK, until no adjacent voxels can be found. Then increase s by 1, clear LK to store the next group of voxels, and repeat step 1 until all the voxels in Q have been processed.
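The three steps above amount to a 6-connected flood fill over Q; a minimal sketch (function and variable names are ours, not the paper's):

```python
from collections import deque

def group_by_connectivity(Q):
    """Split the candidate voxel set Q into 6-connected groups by region
    growing, as in steps 1-3. Returns a list of groups of (r, c, h) voxels;
    the implicit group index plays the role of the label s."""
    unlabeled = set(Q)
    groups = []
    while unlabeled:
        seed = unlabeled.pop()            # step 1: pick an unlabeled voxel
        group, frontier = [seed], deque([seed])
        while frontier:                   # steps 2-3: grow until no neighbor remains
            r, c, h = frontier.popleft()
            for nb in ((r + 1, c, h), (r - 1, c, h), (r, c + 1, h),
                       (r, c - 1, h), (r, c, h + 1), (r, c, h - 1)):
                if nb in unlabeled:
                    unlabeled.remove(nb)
                    group.append(nb)
                    frontier.append(nb)
        groups.append(group)
    return groups
```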
+
+#### 3.1.5 Determination of correctness of hollow bodies
+
+For all groups separated, we use following rules to determine whether these groups are correct.
+
+- Size of hollow body
+
+Some groups contain a small number of voxels, indicating that the proportion of their area is small. So this paper filters out these groups by using the volume ratio between hollow bodies and the model as a threshold.
+
+- Connectivity of hollow body
+
+The results may contain false hollow bodies inside the model, which is caused by the nature of point cloud data. This paper uses the number of non-model voxels adjacent to each hollow body as the other threshold.
+
+The process of steps 2 and 3 is shown in figure 5.
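The two filtering rules can be sketched as follows (an illustrative version; the threshold names and the exact adjacency count are our assumptions):

```python
def filter_hollow_bodies(groups, n_model_voxels, outside, min_ratio, min_border):
    """Keep a voxel group only if (a) its approximate volume ratio against
    the model exceeds `min_ratio` and (b) it touches at least `min_border`
    voxels of the non-model area `outside` (a set of (r, c, h) triples),
    rejecting false hollows fully enclosed inside the model."""
    kept = []
    for group in groups:
        ratio = len(group) / n_model_voxels
        contact = sum(
            1
            for (r, c, h) in group
            for nb in ((r + 1, c, h), (r - 1, c, h), (r, c + 1, h),
                       (r, c - 1, h), (r, c, h + 1), (r, c, h - 1))
            if nb in outside
        )
        if ratio >= min_ratio and contact >= min_border:
            kept.append(group)
    return kept
```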
+
+Finally, this paper traverses all the points in the point cloud data to determine the boundaries of each hollow body by their spatial positions.
+
+### 3.2 Hollow body parameter
+
+This paper defines a set of parameters comprising the volume ratio, normal equation, and depth ratio of hollow bodies in order to accurately describe their characteristics for further research. The first phase of the algorithm only needs to calculate the volume ratio, while the second phase calculates all three parameters.
+
+#### 3.2.1 Volume ratio
+
+This paper defines the volume of a hollow body divided by the volume of the model as the volume ratio, in order to describe the size of each hollow body.
+
+We assume the volume of the model is V and the volume of hollow body i is ${S}_{i}$, so the volume ratio of hollow body i is $\frac{{S}_{i}}{V}$. The parameter ranges from 0 to 1. However, we cannot calculate the accurate volumes of the model and hollow bodies in a point cloud because of the variability and irregularity of point cloud data. Therefore, we obtain the approximation of each volume ratio by dividing the number of voxels in each hollow body by the number of voxels in the model area.
+
+
+
+Figure 5: Process of hollow body determination
+
+#### 3.2.2 Normal equation
+
+This paper defines the normal equation of the hollow body in order to describe the position of each hollow body.
+
+We calculate the normal equation of the hollow body by a slice method, and it represents the direction of the hollow body's extension. For hollow body i, we first calculate the mean coordinates of the voxels that make up the hollow body and call it the gravity point ${G}_{i}$. We assume the voxel set belonging to hollow body i is ${T}_{i} = \left\{ {{a}_{1},{a}_{2}\ldots {a}_{n - 1},{a}_{n}}\right\}$.
+
+In the voxel structure, i represents the row of the matrix, j represents the column, and h represents the height; ${d}_{i}$ represents the spacing of rows, ${d}_{j}$ the spacing of columns, and ${d}_{h}$ the spacing of heights. The coordinate of ${G}_{i}$ is given by formula (5).
+
+$$
+{G}_{i} = \left( X_{\min} + \left( \frac{\sum_{t=1}^{n} {a}_{t}.j}{n} - 0.5 \right) {d}_{j},\; Y_{\min} + \left( \frac{\sum_{t=1}^{n} {a}_{t}.h}{n} + 0.5 \right) {d}_{h},\; Z_{\min} + \left( \frac{\sum_{t=1}^{n} {a}_{t}.i}{n} - 0.5 \right) {d}_{i} \right) \tag{5}
+$$
+
+Then, we use the slice method to slice the yz plane and the xz plane and calculate the rows and columns of voxels belonging to hollow body i that appear for the first time or the last time. We assume the voxel set whose row equals the first row of the model area in hollow body i is ${R}_{i} = \left\{ {{r}_{1},{r}_{2}\ldots {r}_{m - 1},{r}_{m}}\right\}$, and the coordinate of ${H}_{i}$ is given by formula (6).
+
+$$
+{H}_{i} = \left( X_{\min} + \left( \frac{\sum_{q=1}^{m} {r}_{q}.j}{m} - 0.5 \right) {d}_{j},\; Y_{\min} + \left( \frac{\sum_{q=1}^{m} {r}_{q}.h}{m} + 0.5 \right) {d}_{h},\; Z_{\min} + 0.5\,{d}_{i} \right) \tag{6}
+$$
+
+
+
+Figure 6: An example of normal line of hollow body
+
+Otherwise, we assume the voxel set whose row equals the last row of the model area in hollow body i is ${N}_{i} = \left\{ {{b}_{1},{b}_{2}\ldots {b}_{f - 1},{b}_{f}}\right\}$, and the coordinate of ${H}_{i}$ is given by formula (7).
+
+$$
+{H}_{i} = \left( X_{\min} + \left( \frac{\sum_{q=1}^{f} {b}_{q}.j}{f} - 0.5 \right) {d}_{j},\; Y_{\min} + \left( \frac{\sum_{q=1}^{f} {b}_{q}.h}{f} + 0.5 \right) {d}_{h},\; Z_{\min} + \left( w - 0.5 \right) {d}_{i} \right) \tag{7}
+$$
+
+Similarly, we can also calculate the midpoint of the first column and the last column of the hollow body i. If the hollow body has midpoints in both boundary surfaces, we use the gravity point and the midpoint in the first row or first column of hollow body to calculate the normal equation.
+
+For hollow body i, we use the gravity point ${G}_{i}$ and the surface midpoint ${H}_{i}$ to calculate the normal line. The equation of the normal line is given by formula (8) and the direction vector by formula (9).
+
+$$
+\frac{x - {H}_{i}.x}{{G}_{i}.x - {H}_{i}.x} = \frac{y - {H}_{i}.y}{{G}_{i}.y - {H}_{i}.y} = \frac{z - {H}_{i}.z}{{G}_{i}.z - {H}_{i}.z} \tag{8}
+$$
+
+$$
+\left( {{G}_{i}.x - {H}_{i}.x,{G}_{i}.y - {H}_{i}.y,{G}_{i}.z - {H}_{i}.z}\right) \tag{9}
+$$
+
+If the hollow body connects with the space outside the model in both directions, we define the direction that has the bigger depth ratio (the next parameter) as the main direction of the hollow body. The normal equation can represent the spatial position of the hollow body. An example is shown in Figure 6. The line through H and G is the normal line, and its direction is from H to G.
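Formulas (5) and (9) translate directly into code; the sketch below (our names, not the paper's) computes the gravity point of a hollow body and the direction vector from a surface midpoint H toward G:

```python
def gravity_point(voxels, mins, spacings):
    """Mean voxel position of a hollow body mapped to world coordinates,
    as in formula (5). `voxels` is a list of (row, col, height) triples,
    `mins` = (X_min, Y_min, Z_min), `spacings` = (d_i, d_j, d_h)."""
    n = len(voxels)
    x_min, y_min, z_min = mins
    d_i, d_j, d_h = spacings
    mean_i = sum(v[0] for v in voxels) / n   # rows (z direction)
    mean_j = sum(v[1] for v in voxels) / n   # columns (x direction)
    mean_h = sum(v[2] for v in voxels) / n   # heights (y direction)
    return (x_min + (mean_j - 0.5) * d_j,
            y_min + (mean_h + 0.5) * d_h,
            z_min + (mean_i - 0.5) * d_i)

def normal_direction(G, H):
    """Direction vector of the normal line, formula (9): from H toward G."""
    return tuple(g - h for g, h in zip(G, H))
```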
+
+#### 3.2.3 Depth ratio
+
+This paper defines the depth ratio of each hollow body in order to describe the shape of the hollow body. The parameter ranges from 0 to 1. When the parameter approaches 0, the hollow body is very shallow. When the parameter equals 1, the hollow body fully penetrates the model. The calculation of the depth ratio depends on the normal of the hollow body. We assume the maximum of the point cloud along the x-axis is x_max, the minimum along the x-axis is x_min, the maximum along the z-axis is z_max, and the minimum along the z-axis is z_min. For each hollow normal equation, if it extends along the z-axis, we set z = z_min and z = z_max and calculate the distance ${S}_{\text{maxi}}$ between the two points. Then we set formula (10) and calculate the distance ${S}_{\text{row}}$ between the two points.
+
+$$
+z = {z}_{ - }\min + \left( {\text{ first_row } - 1}\right) * {d}_{i}
+$$
+
+$$
+z = {z}_{ - }\min + \text{ last_row } * {d}_{i} \tag{10}
+$$
+
+
+
+Figure 7: Test data
+
+If the normal extends along the x-axis, we set x = x_min and x = x_max and calculate the distance ${S}_{maxj}$ between the two points. Then we set formula (11) and calculate the distance ${S}_{\text{col}}$ between the two points.
+
+$$
+x = x_{\min} + \left( \text{first\_col} - 1 \right) {d}_{j}
+$$
+
+$$
+x = x_{\min} + \text{last\_col} \cdot {d}_{j} \tag{11}
+$$
+
+If both directions have a normal line, we take $\max \left\{ \frac{{S}_{\text{row}}}{{S}_{\text{maxi}}},\frac{{S}_{\text{col}}}{{S}_{\text{maxj}}}\right\}$ as the depth ratio of the hollow body, and the normal line in that direction becomes the normal of the hollow body. If a normal line exists in only one direction, the corresponding distance ratio $\frac{{S}_{\text{row}}}{{S}_{\text{maxi}}}$ or $\frac{{S}_{\text{col}}}{{S}_{\text{maxj}}}$ is the depth ratio of the hollow body.
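The rule above can be sketched as a small helper (names are ours); passing None for the pair that has no normal line covers the one-direction case:

```python
def depth_ratio(s_row, s_maxi, s_col=None, s_maxj=None):
    """Depth ratio of a hollow body: the larger of S_row/S_maxi and
    S_col/S_maxj when normal lines exist in both the z (row) and
    x (column) directions, otherwise the single available ratio."""
    ratios = []
    if s_row is not None:
        ratios.append(s_row / s_maxi)
    if s_col is not None:
        ratios.append(s_col / s_maxj)
    return max(ratios)
```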
+
+## 4 Experiment
+
+We cannot calculate the accurate volumes of the model and hollow bodies in a point cloud because of the variability and irregularity of point cloud data. Therefore, this paper chooses to establish several sets of point cloud data with different sizes and shapes of regular hollow bodies. We calculate their accurate volumes to test the optimal number of voxels for processing point cloud data with different sizes of hollow bodies.
+
+We divide the test data into three groups according to the shape of the hollow body. The schematic diagram of the test data is shown in Figure 7. After computing the results for different values of k, we find that when the number of voxels dividing the model is close to 24000, 60000, 100000, 150000 and 200000, the results are most representative. The experimental results are shown in Figure 8.
+
+After analyzing the experimental results, we find that, to achieve a 95% correct rate of the volume ratio, the relationship between the number of voxels S and the volume ratio V can be defined as the discrete function in formula (12).
+
+$$
+S = \left\{ \begin{array}{ll} {24000} & V \geq {0.5} \\ {60000} & {0.35} \leq V < {0.5} \\ {100000} & {0.25} \leq V < {0.35} \\ {150000} & {0.1} \leq V < {0.25} \\ {200000} & V < {0.1} \end{array}\right. \tag{12}
+$$
+
+These choices for each interval let the algorithm achieve a correct rate of more than 95%. If we increase the number of voxels further, the correct rate does not improve significantly but the processing time increases greatly. So we use this discrete function as the basis for the subsequent determination of the hollow bodies.
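The discrete function (12) is straightforward to use directly in code; a minimal sketch (the function name is ours):

```python
def optimal_voxel_count(v):
    """Number of voxels S from the discrete function (12), given the
    approximate volume ratio V of the smallest hollow body."""
    if v >= 0.5:
        return 24000
    if v >= 0.35:
        return 60000
    if v >= 0.25:
        return 100000
    if v >= 0.1:
        return 150000
    return 200000
```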
+
+
+
+Figure 9: First phase result of the European building point cloud. With 24000 voxels (rows: 15, columns: 52, height: 56), the model area contains 22416 voxels and the non-model area 21264 voxels. Hollow body 1: 575 voxels, volume ratio 0.0256513; hollow body 2: 4970 voxels, volume ratio 0.221717; hollow body 3: 575 voxels, volume ratio 0.0256513.
+
+For the data to be processed, we first set the initial number of voxels to 24000 and calculate the approximate volume ratio of each hollow body. Then we use the smallest ratio, together with the point cloud density (the distance between adjacent points must be larger than the side length of a voxel), to calculate the optimal number of voxels dividing the point cloud. Figure 9 shows the first phase result for the European building point cloud data and Figure 10 shows the second phase result.
+
+Since the concept of hollow body is proposed in this paper for the first time, we cannot make comparisons with previous research. Therefore, this paper establishes a data set of 287 point cloud files from various 3D model websites and 3D reconstruction to verify the effect of the algorithm. The data set can be divided into seven types: natural landscape, metal component, indoor furniture, jewelry sculpture, bridge, building and test data. Because a surface in a 3D model usually contains only a few key points, this paper uses Loop subdivision surfaces [19] and Catmull-Clark subdivision surfaces [20] through meshlab to up-sample the 3D models into point cloud files. We will add more point cloud data with hollow bodies in further work, and the website is under construction.
+
+After experimenting with the 287 data, this paper compares the results with the calibrated hollow bodies. The results show that 236 data are judged successfully and 51 data fail; the correct rate is 82.2%. Some results are shown in figures 11 and 12.
+
+| shape of surface of hollow body | accurate volume ratio | 24000 voxels | 60000 voxels | 100000 voxels | 150000 voxels | 200000 voxels |
+| --- | --- | --- | --- | --- | --- | --- |
+| rectangle | 0.72 | 95.00% | 96.88% | 97.26% | 98.01% | 98.35% |
+| rectangle | 0.48 | 94.44% | 95.83% | 97.14% | 96.06% | 96.77% |
+| rectangle | 0.3 | 93.47% | 95.13% | 96.67% | 95.88% | 96.00% |
+| rectangle | 0.2 | 91.67% | 93.75% | 95.31% | 97.33% | 96.87% |
+| rectangle | 0.12 | 91.16% | 93.47% | 95.17% | 97.02% | 96.77% |
+| rectangle | 0.05 | 90.34% | 91.93% | 95.02% | 95.12% | 95.04% |
+| circular | 0.53305 | 96.59% | 97.56% | 98.64% | 99.23% | 99.12% |
+| circular | 0.32725 | 92.24% | 93.90% | 96.18% | 96.69% | 97.40% |
+| circular | 0.20944 | 91.07% | 93.25% | 94.61% | 95.35% | 96.46% |
+| circular | 0.11781 | 88.03% | 92.40% | 93.79% | 95.53% | 95.89% |
+| circular | 0.05236 | 81.17% | 90.02% | 91.13% | 93.09% | 95.11% |
+| curve surface | 0.5333 | 96.51% | 97.11% | 97.48% | 98.02% | 98.53% |
+| curve surface | 0.36 | 92.59% | 95.68% | 96.22% | 96.28% | 97.64% |
+| curve surface | 0.25 | 91.74% | 93.77% | 95.08% | 95.64% | 96.26% |
+| curve surface | 0.19753 | 91.18% | 93.15% | 94.11% | 95.21% | 95.43% |
+| curve surface | 0.08 | 82.43% | 87.16% | 90.73% | 92.86% | 95.01% |
+
+Figure 8: Experimental results for different numbers of voxels
+
+
+
+Figure 10: Second phase result of European building point cloud
+
+There are several reasons why the algorithm fails:
+
+- For some data with complex components such as patterns, the density of points in the pattern part is high, which increases the density of the whole point cloud. This may cause errors when the algorithm estimates the optimal number of voxels dividing the minimum cube bounding box.
+
+- For point cloud data obtained through 3D reconstruction, if the reconstruction quality is bad and large holes appear in the point cloud, the algorithm fails to determine the correct hollow bodies.
+
+## 5 Conclusion
+
+Considering the display limitation of volumetric 3D displays, this paper proposes the concept of hollow body. It gives a quantitative description of the hollow body and defines a set of parameters, containing volume ratio, normal line and depth ratio, to describe its size, position and shape. Then it proposes a novel VCRHD algorithm to accurately determine the hollow bodies in a point cloud. Because there is no related work dealing with hollow bodies, this paper establishes a data set containing 287 different point cloud files to test the algorithm and to serve as a reference for follow-up studies. The algorithm may be affected by noisy points and holes in the point cloud, which needs to be improved in further research.
+
+Accurately defining the hollow body can help subsequent work. Further research can realize a more realistic three-dimensional display by simulating the visual experience observed by the human eye through the hollow body, and can help in CAD or surface construction. We hope researchers in these areas can benefit from the application of the hollow body.
+
+## References
+
+[1] Lin Yang, Haiwei Dong, Alelaiwi Abdulhameed, El Saddik Abdulmotaled: See in 3D: state of the art of 3D display technologies. MULTIMEDIA TOOLS AND APPLICATIONS 75(24), 17121-17155, Dec. 2016. doi: 10.1007/s11042-015-2981-y
+
+[2] Blundell, Barry G.: On the Uncertain Future of the Volumetric 3D Display Paradigm. 3D RESEARCH 8(2), Jun. 2017. doi: 10.1007/s13319-017-0122-2
+
+[3] Xiaoxing Yan:Display research based on 3D entity sandbox.In:Tianjin University,2016.
+
+
+
+Figure 11: Bridge data. Hollow body 1: normal equation $\frac{x + {79.4118}}{-{79.4084}} = \frac{y - {7.16}}{-{0.0750799}} = \frac{z + {134.468}}{-{0.0985718}}$ , depth ratio in x direction: 1, volume ratio: 0.0577731. Hollow body 2: normal equation $\frac{x + {79.4118}}{-{79.4047}} = \frac{y - {13.4512}}{-{0.13721}} = \frac{z + {44.4676}}{0.01091}$ , depth ratio in x direction: 1, volume ratio: 0.0705077. Hollow body 3: normal equation $\frac{x + {79.4118}}{-{79.4047}} = \frac{y - {13.4512}}{-{0.13721}} = \frac{z - {44.4677}}{0.0149612}$ , depth ratio in x direction: 1, volume ratio: 0.0705077. Hollow body 4: normal equation $\frac{x + {79.4118}}{-{79.4084}} = \frac{y - {7.16}}{-{0.0750799}} = \frac{z - {134.468}}{0.130142}$ , depth ratio in x direction: 1, volume ratio: 0.0577731.
+
+
+
+Figure 12: Building data
+
+[4] Yuan Liang:Construction of 3D Entity Display System for data Processing And Virtual Display.In:Tianjin University,2017.
+
+[5] Lee JS, Lee MY, Kim JO, Kim CJ, Won YH:Novel volumetric 3D display based on point light source optical reconstruction using multi focal lens array.In:Conference on Advances in Display Technologies V, FEB,2015.doi: 10.1117/12.2078101
+
+[6] Jun Sun:Voxel Analysis and Software Design Based on Array LED 3D Display System.In:Huazhong University of Science and Technology,2017.
+
+[7] Orriols X, Binefa X:Finding breaking curves in 3D surfaces.LECTURE NOTES IN COMPUTER SCIENCE 2652(2003),681-688.
+
+[8] Yongtae Jun: A piecewise hole filling algorithm in reverse engineering. Computer-Aided Design 37(2005), 263-270. doi: 10.1016/j.cad.2004.06.012
+
+[9] Jia Li, Huan Lin, Duo Qiang Zhang, Xiao Lu Xue.Extracting geometric edges from 3D point clouds based on normal vector change.Applied Mechanics and Materials.2014(571):729- 734.doi: 10.4028/www.scientific.net/AMM.571-572.729
+
+[10] Xianfeng Huang, Xiaoguang Cheng, Fan Zhang, GONG Jianya.Side ratio constrain based precise boundary tracing algorithm for distance point clouds[C].The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing 2008.
+
+[11] Emelyanov Alexander, Skala Václav: Surface reconstruction from problem point clouds. Virtual Environment on PC Cluster 2002, workshop proceedings, Russia, p. 68-75. doi: 11025/11694
+
+[12] Soo-Kyun Kim:Extraction of ridge and valley lines from unorganized points.Multimedia Tools and Applications,2013,63(1),265-279.doi: 10.1007/s11042- 012-0999-y
+
+[13] Kurlin Vitaliy: A fast and robust algorithm to count topologically persistent holes in noisy clouds. 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 23-28, 2014. doi: 10.1109/CVPR.2014.189
+
+[14] Van Sinh Nguyen, Trong Hai Trinh, Manh Ha Tran: Hole Boundary Detection of a Surface of 3D point clouds. 2015 International Conference on Advanced Computing and Applications. doi: 10.1109/ACOMP.2015.12
+
+[15] Tengfei Bao, Jinlei Zhao, Miao Xu: Step edge detection method for 3D point clouds based on 2D range images. Optik 126 (2015), 2706-2710.
+
+[16] Zhenqing Yang, Yonglei Yong.Boundary extraction based on point cloud slices[J].Computer Applications and Software,2014,31(1):222-224+245.
+
+[17] Shaoyan Gai, Feipeng Da, Lulu Zeng, Yuan Huang: Research on a hole filling algorithm of a point cloud based on structure from motion. Journal of the Optical Society of America A 36(2), Feb. 2019. doi: 10.1364/JOSAA.36.000A39
+
+[18] Fan LU, Song Li, Jingjing Cao, Xiaozheng E, Yong Zhou:Algorithm for extraction of point cloud boundary point based on inverse distance weight and density.COMPUTER ENGINEERING AND DESIGN 40(2),364-369(2019).doi: 10.16208/j.issn1000-7024.2019.02.012
+
+[19] Li Zhang, Xiangrong She, Xianyu Ge, Jieqing Tan:Adaptive fitting algorithm of progressive interpolation for Loop subdivision surface.International Journal of distributed sensor networks 14(11), Nov.2018.doi: 10.1177/1550147718812355
+
+[20] Shuqun Liu, Bei Zhang:Mathematical Model of Catmull-Clark Subdivision Scheme on Regular Mesh.In:Proceedings of 2017 2nd International Conference on Automation, Mechanical Control and Computational Engineering, March,2017.doi: 10.2991/amcce-17.2017.31
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b4f7df9baabb98bfcf4c9ca8e2ac124483faaa95
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/X5RyIdI4Mn0/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,337 @@
+§ DETERMINATION AND QUANTITATIVE DESCRIPTION OF HOLLOW BODY IN POINT CLOUD
+
+§ ABSTRACT
+
+When a volumetric 3D display system deals with point cloud data containing hollow bodies, the hollow body areas cannot be determined correctly, causing a lack of color information in the corresponding areas. Existing research lacks a solution to determine hollow bodies. This paper first gives a quantitative description of the hollow body and defines a set of parameters to describe the size, shape and position of hollow bodies. Then it proposes a voxel connectivity regional-growth hollow body determination (VCRHD) algorithm to determine the hollow bodies in a 3D point cloud. The algorithm has two phases. The first phase uses a small number of voxels to voxelize the point cloud and calculates the approximate volume ratio of the hollow bodies based on the voxel connectivity regional-growth principle. The approximate volume ratio is then used to determine the optimal number of voxels based on experimental results. The second phase uses the optimal number of voxels to determine the hollow bodies and calculate the hollow body parameters, which proves to be efficient and accurate. In addition, this paper establishes a data set containing 287 point cloud files in 7 different categories to test the algorithm. The experimental results prove the feasibility of the algorithm. Finally, this paper analyzes the limitations of the algorithm and looks forward to future applications.
+
+Index Terms: Computing methodologies-Volumetric models
+
+§ 1 INTRODUCTION
+
+With the development of computer graphics, digital image processing and other technologies, people demand more realistic display effects. In the real world, objects have 3D physical dimensions: length, width and height. However, the majority of display technologies remain 2D technologies that satisfy some psychological depth cues but lack depth information, which leads to degeneration of the functionalities of the human eyes' depth, dynamics and parallax [1]. The human brain needs to acquire information completely consistent with the 3D real object, which is called 3D information.
+
+Volumetric 3D display enables a range of depth cues to be inherently associated with image content and permits images to be viewed directly by multiple simultaneous observers in a natural way [2]. So volumetric 3D display is a significant research direction of true 3D display. In recent years there has been much research on volumetric 3D display. In order to solve the problem of large-scale 3D display, Xiaoxing Yan [3] and Yuan Liang [4] propose to construct a 3D entity sandbox display system. They first extract DEM data from a point cloud, build the sandbox model by means of a mechanical structure, and install LEDs on the structure to visually display various 3D terrain, architecture and other scenes. Jin su Lee, Mu young Lee, et al. [5] design a stereoscopic 3D monitor based on a point light source (PLS) structure using a multifocal lens array (MFLA). They construct point lights emitted by the multifocal lens array to form voxels of discrete 3D images in the air. Jun Sun [6] studies a true three-dimensional dynamic display algorithm for an array LED three-dimensional display system and builds such a system.
+
+However, such 3D display systems do not perform well when processing data with hollow bodies such as building gates and bridges. When the data is transmitted to the system for display, the voxels used to display the hollow bodies do not receive any color information because these areas contain no points. So the system cannot reproduce the original 3D scene, which seriously affects the effect of the 3D display. Therefore, if we can accurately find each hollow body in the point cloud and use the voxels of the hollow bodies to simulate the color that the human eye should observe through them, we can achieve a more realistic three-dimensional display. Besides, the determination of hollow bodies can help in CAD and surface construction.
+
+This paper proposes the concept of the hollow body and studies how to solve the problem of hollow bodies in point clouds. The contributions of this paper are as follows:
+
+- This paper proposes the concept of the hollow body. It gives a quantitative description of the hollow body and defines a set of parameters to describe its characteristics.
+
+- This paper proposes a novel voxel connectivity regional-growth hollow body determination algorithm to determine the hollow bodies in a point cloud. The algorithm has two phases. The first phase uses a small number of voxels to calculate approximate volume ratios of the hollow bodies. The approximate values then determine the optimal number of voxels based on our experimental results. The second phase uses the optimal number of voxels dividing the point cloud to determine the hollow bodies and hollow body parameters accurately and efficiently.
+
+- Considering the concept of the hollow body is proposed for the first time, there are no related experimental results to compare against. Therefore, this paper establishes a data set to verify the effect of the algorithm and provides a reference for subsequent work.
+
+§ 2 RELATED WORKS
+
+There is similar research on determining the boundaries of 3D point clouds and holes. The extraction methods for boundaries in 3D point clouds can be divided into two categories: methods based on grids and methods working directly on points. The methods based on grids identify the boundary through the topological relationship between the grids [7-12]. Yongtae Jun [8] first determines the seed boundary points by the principle that one of the three triangles adjacent to a boundary triangle mesh is empty, and then obtains the closed-loop boundary edge by tracking. Xianfeng Huang, Xiaoguang Cheng et al. [10] construct TIN grids for the point cloud; they initialize a maximum boundary and gradually narrow the boundary edge by setting a threshold on edge length. The methods directly based on points first find the neighbors of each point and then calculate geometric properties of points, such as normal vector and density, to determine the boundary [13-18]. Van Sinh Nguyen, Trong Hai Trinh and Manh Ha Tran [14] project the point cloud into a two-dimensional grid on the xy plane and obtain the point cloud boundary based on the number of points in each grid and its adjacent grids. Fan LU, Song Li et al. [18] project the point cloud on a plane and use the inverse distance weight and the point cloud density to determine boundary feature points. Shaoyan Gai, Feipeng Da, Lulu Zeng and Yuan Huang [17] use a 2D phase map and the adjacent area of each point to determine the boundary, then use the row and column coordinates of the boundary points to remove contour points.
+
+However, existing research lacks a solution for the determination of hollow bodies; it can only determine the boundary of a point cloud or its holes. So this paper gives a quantitative description of the hollow body and proposes the VCRHD algorithm to determine hollow bodies.
+
+§ 3 HOLLOW BODY
+
+A hollow body is an empty area that is surrounded by points and has points above it. Besides, it must connect with the area outside the model. We set the y-axis as the height axis, so a hollow body is an independent area that is closed upward along the y-axis and connects with the outer area in the x-axis or z-axis direction. Regions that contain no points or only a small number of points may belong to hollow bodies. As shown in figure 1, the areas beyond the green boundaries are six hollow bodies. In this paper, we propose the VCRHD algorithm to find each hollow body in the point cloud and define volume ratio, normal line and depth ratio to describe their size, position and shape.
+
+§ 3.1 DETERMINATION OF THE HOLLOW BODY
+
+§ 3.1.1 OVERVIEW
+
+The algorithm has two phases whose processes are similar. The purpose of the first phase is to divide the point cloud with a small number of voxels and calculate approximate volume ratios of the hollow bodies. The approximate volume ratio is used to calculate the optimal number of voxels dividing the point cloud, which is more accurate and efficient based on the experimental results. In the second phase, the algorithm uses the optimal number of voxels to determine the hollow bodies and calculate the hollow body parameters accurately. The process of the algorithm is shown in figure 2.
+
+§ 3.1.2 VOXELIZATION OF POINT CLOUD
+
+After obtaining the preprocessed point cloud data, we first rotate it through meshlab to make the y-axis its height axis and then implement voxelization of the point cloud according to the following rules.
+
+Step 1: Determine the segmentation parameter of voxelization
+
+Calculate the range of the point cloud along the x, y, z axes as range_X, range_Y, range_Z and define min as the minimum of the three. Meanwhile, define k as the division parameter, so that ${d}_{t} = \min /k$ . We find that the number of voxels needed to achieve the same accuracy differs for point cloud data containing different volume ratios of hollow bodies; we summarize this rule in the experimental section. Therefore, k in the first phase is chosen so that the model area of the point cloud is divided into 24000 voxels, which follows from the experimental results. And in the second phase we adjust k
+
+
+Figure 2: Process of the VCRHD algorithm
+
+to divide the point cloud based on the optimal number of voxels.
+
+Step 2: Define plane matrix and structure
+
+Arrange the voxels into a 2D matrix on the xz plane. The x-axis represents the column of the matrix and the z-axis represents the row. The size of the matrix is given by formula (1).
+
+$$
+\operatorname{round}\left( \text{range\_Z} * k/\min + 2\right) * \operatorname{round}\left( \text{range\_X} * k/\min + 2\right) \tag{1}
+$$
+
+The reason for adding 2 is that points are only distributed around the boundary of the entity and there are no points inside it, so empty regions inside the model would also be wrongly determined as hollow bodies. This paper therefore establishes a layer of voxels outside the minimum cube bounding box and uses the number of voxels in the non-model area adjacent to each hollow body to filter out wrong hollow bodies inside the model.
+
+This paper defines a structure grid for each region of the matrix. Each structure contains an array of round(range_Y/${d}_{t}$) voxels, whose indices represent the height information. The schematic diagram is shown in Figure 3.
+
+Step 3: Calculate the positions of points
+
+After defining the voxels and the matrix structure, we calculate the location of each point. Assuming the coordinates of point i are $\left( {{x}_{i},{y}_{i},{z}_{i}}\right)$ and the minimum values along the x, y, z axes are x_min, y_min, z_min, the row, column and height of the voxel in which point i is located are given by formulas (2), (3) and (4).
+
+$$
+\operatorname{round}\left( {\left( {{z}_{i} - {z}_{ - }\min }\right) /\left( {\text{ range }\_ Z/\text{ round }\left( {\text{ range }\_ Z/{d}_{t}}\right) }\right) }\right) + 1 \tag{2}
+$$
+
+$$
+\operatorname{round}\left( \left( {x}_{i} - x\_\min \right) /\left( \text{range\_X}/\operatorname{round}\left( \text{range\_X}/{d}_{t}\right) \right) \right) + 1 \tag{3}
+$$
+
+
+Figure 1: Bridge point cloud data with six hollow bodies
+
+
+Figure 3: Voxel segmentation of the minimum cube bounding box
+
+
+Figure 4: An example of classification of voxels
+
+$$
+\operatorname{round}\left( \left( {y}_{i} - y\_\min \right) /\left( \text{range\_Y}/\operatorname{round}\left( \text{range\_Y}/{d}_{t}\right) \right) \right) \tag{4}
+$$
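Formulas (2)-(4) simply snap each point to a (row, column, height) voxel index; a minimal Python sketch (the function name and argument layout are ours, not from the paper):

```python
def voxel_index(p, mins, ranges, d_t):
    """Row, column and height of the voxel containing p = (x, y, z)."""
    x_min, y_min, z_min = mins
    range_x, range_y, range_z = ranges

    def cell(value, lo, rng):
        # Each axis range is split into round(range / d_t) equal cells.
        n = round(rng / d_t)
        return round((value - lo) / (rng / n))

    row = cell(p[2], z_min, range_z) + 1   # z-axis -> row, formula (2)
    col = cell(p[0], x_min, range_x) + 1   # x-axis -> column, formula (3)
    height = cell(p[1], y_min, range_y)    # y-axis -> height, formula (4)
    return row, col, height
```

The +1 on row and column accounts for the extra layer of voxels placed outside the minimum cube bounding box.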
+
+§ 3.1.3 DETERMINATION OF THE GROUP OF HOLLOW BODY VOXELS
+
+The rules for judging whether a voxel may belong to a hollow body are as follows:
+
+ * The number of points it contains is less than the threshold.
+
+ * There exists a voxel that doesn't belong to any hollow body above it.
+
+So this paper first calculates the maximum height of each grid of the xz plane matrix. Voxels whose height is lower than the maximum of their region belong to the model area; the others belong to the area outside the model. If a voxel in the model area contains fewer points than the threshold, we put it into a set Q for further processing because it may belong to a hollow body. Figure 4-b is a schematic diagram of the xy plane: the red voxels belong to the area outside the model, the green and blue voxels belong to the model area, and the blue voxels are put into the set for segmentation.
+
+§ 3.1.4 SEGMENTATION OF THE GROUP OF HOLLOW BODY VOXELS
+
+This paper separates the group of voxels based on connectivity. Define the initial group number s as 1 and define a group LK to store the voxels of each group. Then initialize the label of every voxel in the group as 0.
+
+Step 1: If the label of a voxel in group Q is 0, change its label to s and put it into LK.
+
+Step 2: Let the row, column and height of the voxel be r, c, h. If the voxels at positions (r+1, c, h), (r-1, c, h), $\left( {r,c + 1,h}\right) ,\left( {r,c - 1,h}\right) ,\left( {r,c,h + 1}\right) ,\left( {r,c,h - 1}\right)$ are in Q, change their labels to s and put them into LK.
+
+Step 3: Repeat step 2 while new voxels are put into LK, until no adjacent voxels can be found. Then increase s by 1, clear LK to store the next group of voxels, and repeat step 1 until all the voxels in Q have been processed.
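The three steps above amount to a breadth-first region growth over face-adjacent (6-connected) voxels. A minimal sketch, representing Q as a set of (row, col, height) tuples (our representation, not the paper's):

```python
from collections import deque

def segment_hollow_groups(q):
    """Split candidate voxels into 6-connected groups (steps 1-3)."""
    labels = {v: 0 for v in q}   # 0 = not yet assigned to a group
    groups = []
    s = 1                        # current group number
    for seed in q:
        if labels[seed]:
            continue
        labels[seed] = s
        lk, frontier = [], deque([seed])
        while frontier:          # grow the group through adjacent voxels
            r, c, h = frontier.popleft()
            lk.append((r, c, h))
            for nb in ((r + 1, c, h), (r - 1, c, h), (r, c + 1, h),
                       (r, c - 1, h), (r, c, h + 1), (r, c, h - 1)):
                if labels.get(nb) == 0:
                    labels[nb] = s
                    frontier.append(nb)
        groups.append(lk)
        s += 1
    return groups
```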
+
+§ 3.1.5 DETERMINATION OF CORRECTNESS OF HOLLOW BODIES
+
+For all separated groups, we use the following rules to determine whether they are correct.
+
+ * Size of hollow body
+
+Some groups contain only a small number of voxels, indicating that the proportion of the area is small. So this paper filters out these groups by setting a threshold on the volume ratio between hollow bodies and the model.
+
+ * Connectivity of hollow body
+
+The results contain wrong hollow bodies inside the model, which is caused by the nature of point clouds. This paper uses the number of non-model voxels adjacent to each hollow body as the other threshold.
+
+The process of steps 2 and 3 is shown in figure 5.
+
+Finally, this paper traverses all the points in the point cloud data to determine the boundary of each hollow body by spatial position.
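The two checks can be sketched as a simple filter over the segmented groups; the threshold values below are illustrative placeholders, not values from the paper:

```python
def filter_hollow_bodies(groups, n_model_voxels, outside_adjacency,
                         min_volume_ratio=0.01, min_outside_neighbours=1):
    """Keep groups that are large enough and connect to the outside.

    outside_adjacency[i] is the number of non-model voxels adjacent to
    group i (computed during segmentation)."""
    kept = []
    for i, group in enumerate(groups):
        ratio = len(group) / n_model_voxels   # approximate volume ratio
        if (ratio >= min_volume_ratio
                and outside_adjacency[i] >= min_outside_neighbours):
            kept.append(group)
    return kept
```

A group touching no non-model voxels is an interior cavity and is discarded, which is exactly why the outer layer of voxels is added around the bounding box.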
+
+§ 3.2 HOLLOW BODY PARAMETER
+
+This paper defines a set of parameters, containing volume ratio, normal equation and depth ratio of hollow bodies, in order to accurately describe their characteristics for further research. The first phase of the algorithm only needs to calculate the volume ratio, while the second phase calculates all three parameters.
+
+§ 3.2.1 VOLUME RATIO
+
+This paper defines the volume of a hollow body divided by the volume of the model as the volume ratio, in order to describe the size of each hollow body.
+
+We assume the volume of the model is V and the volume of hollow body i is ${S}_{i}$ , so the volume ratio of hollow body i is $\frac{{S}_{i}}{V}$ . The parameter ranges from 0 to 1. However, we cannot calculate the accurate volumes of the model and hollow bodies in a point cloud because of the variability and irregularity of point cloud data. Therefore, we approximate each volume ratio by dividing the number of voxels in each hollow body by the number of voxels in the model area.
+
+
+Figure 5: Process of hollow body determination
+
+§ 3.2.2 NORMAL EQUATION
+
+This paper defines the normal equation of the hollow body in order to describe the position of each hollow body.
+
+We calculate the normal equation of each hollow body by the slice method; it represents the direction in which the hollow body extends. For hollow body i, we first calculate the mean coordinates of the voxels that make up the hollow body and name it the gravity point ${G}_{i}$ . We assume the voxel set belonging to hollow body i is ${T}_{i} = \left\{ {{a}_{1},{a}_{2}\ldots {a}_{n - 1},{a}_{n}}\right\}$ .
+
+In the voxel structure, i represents the row of the matrix, j the column and h the height; ${d}_{i}$ represents the row spacing, ${d}_{j}$ the column spacing and ${d}_{h}$ the height spacing. The coordinates of ${G}_{i}$ are given by formula (5).
+
+$$
+\left( X\_\min + \left( \frac{\sum_{t=1}^{n} a_t.j}{n} - 0.5\right) * d_j,\; Y\_\min + \left( \frac{\sum_{t=1}^{n} a_t.h}{n} + 0.5\right) * d_h,\; Z\_\min + \left( \frac{\sum_{t=1}^{n} a_t.i}{n} - 0.5\right) * d_i \right) \tag{5}
+$$
+
+Then, we use the slice method to slice the yz plane and the xz plane and calculate the rows and columns of the voxels belonging to hollow body i that appear for the first time or the last time. We assume the voxel set whose row equals the first row of the model area in hollow body i is ${R}_{i} = \left\{ {{r}_{1},{r}_{2}\ldots {r}_{m - 1},{r}_{m}}\right\}$ ; the coordinates of the surface midpoint ${H}_{i}$ are then given by formula (6).
+
+$$
+\left( X\_\min + \left( \frac{\sum_{q=1}^{m} r_q.j}{m} - 0.5\right) * d_j,\; Y\_\min + \left( \frac{\sum_{q=1}^{m} r_q.h}{m} + 0.5\right) * d_h,\; Z\_\min + 0.5 * d_i \right) \tag{6}
+$$
+
+
+Figure 6: An example of normal line of hollow body
+
+Otherwise, we assume the voxel set whose row equals the last row of the model area in hollow body i is ${N}_{i} = \left\{ {{b}_{1},{b}_{2}\ldots {b}_{f - 1},{b}_{f}}\right\}$ and the coordinates of ${H}_{i}$ are given by formula (7).
+
+$$
+\left( X\_\min + \left( \frac{\sum_{q=1}^{f} b_q.j}{f} - 0.5\right) * d_j,\; Y\_\min + \left( \frac{\sum_{q=1}^{f} b_q.h}{f} + 0.5\right) * d_h,\; Z\_\min + \left( w - 0.5\right) * d_i \right) \tag{7}
+$$
+
+Similarly, we can calculate the midpoints of the first column and the last column of hollow body i. If the hollow body has midpoints in both boundary surfaces, we use the gravity point and the midpoint in the first row or first column of the hollow body to calculate the normal equation.
+
+For hollow body i, we use the gravity point ${G}_{i}$ and the surface midpoint ${H}_{i}$ to calculate the normal line. The equation of the normal line is given by formula (8) and its direction vector by formula (9).
+
+$$
+\frac{x - {H}_{i}.x}{{G}_{i}.x - {H}_{i}.x} = \frac{y - {H}_{i}.y}{{G}_{i}.y - {H}_{i}.y} = \frac{z - {H}_{i}.z}{{G}_{i}.z - {H}_{i}.z} \tag{8}
+$$
+
+$$
+\left( {{G}_{i}.x - {H}_{i}.x,{G}_{i}.y - {H}_{i}.y,{G}_{i}.z - {H}_{i}.z}\right) \tag{9}
+$$
+
+If the hollow body connects with the space outside the model in both directions, we define the direction with the bigger depth ratio (the next parameter) as the main direction of the hollow body. The hollow normal equation represents the spatial position of the hollow body. An example is shown in Figure 6: the line through H and G is the normal line and its direction is from H to G.
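Formulas (5), (6) and (9) together give the gravity point, the surface midpoint and the normal direction. A minimal sketch, where the list-of-index-tuples representation and the names are ours, not the paper's:

```python
def hollow_body_normal(voxels, mins, spacings, first_row):
    """Gravity point G, first-row midpoint H and direction G - H.

    voxels is a list of (i, j, h) = (row, column, height) indices of
    one hollow body."""
    x_min, y_min, z_min = mins
    d_i, d_j, d_h = spacings     # row, column and height spacings

    def centre(vs):
        # Mean voxel indices mapped to voxel-centre coordinates, as in
        # formula (5); the 0.5 offsets select the centre of a voxel.
        n = len(vs)
        mi = sum(v[0] for v in vs) / n
        mj = sum(v[1] for v in vs) / n
        mh = sum(v[2] for v in vs) / n
        return (x_min + (mj - 0.5) * d_j,
                y_min + (mh + 0.5) * d_h,
                z_min + (mi - 0.5) * d_i)

    g = centre(voxels)                                    # gravity point
    h = centre([v for v in voxels if v[0] == first_row])  # cf. formula (6)
    direction = tuple(a - b for a, b in zip(g, h))        # formula (9)
    return g, h, direction
```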
+
+§ 3.2.3 DEPTH RATIO
+
+This paper defines the depth ratio of each hollow body in order to describe its shape. The parameter ranges from 0 to 1. When it approaches 0, the hollow body is very shallow; when it equals 1, the hollow body is fully penetrating. The calculation of the depth ratio depends on the normal of the hollow body. We assume the maximum of the point cloud on the x-axis is x_max, the minimum on the x-axis is x_min, the maximum on the z-axis is z_max and the minimum on the z-axis is z_min. For each hollow normal equation, if it extends along the z-axis, we set z = z_min and z = z_max and calculate the distance $S_{\text{maxi}}$ between the two resulting points. Then we set z as in formula (10) and calculate the distance $S_{\text{row}}$ between the two points.
+
+$$
+z = {z}_{ - }\min + \left( {\text{ first\_row } - 1}\right) * {d}_{i}
+$$
+
+$$
+z = {z}_{ - }\min + \text{ last\_row } * {d}_{i} \tag{10}
+$$
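+Concretely, ${S}_{\text{ maxi }}$ can be obtained by intersecting the normal line with the planes $z = z_{\min}$ and $z = z_{\max}$; a sketch under our own naming assumptions (the paper performs this on the voxelized grid):
+
+```python
+import numpy as np
+
+def span_between_z_planes(H, direction, z_min, z_max):
+    """Distance S_maxi between the two points where the normal line
+    (anchor point H, direction vector from formula (9)) meets the
+    planes z = z_min and z = z_max.  The analogous computation with
+    the planes of formula (10) yields S_row."""
+    H = np.asarray(H, dtype=float)
+    d = np.asarray(direction, dtype=float)
+    if d[2] == 0:
+        raise ValueError("normal line does not extend along the z-axis")
+    p_min = H + (z_min - H[2]) / d[2] * d  # intersection with z = z_min
+    p_max = H + (z_max - H[2]) / d[2] * d  # intersection with z = z_max
+    return float(np.linalg.norm(p_max - p_min))
+```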
+
+
+Figure 7: Test data
+
+If the normal extends along the $\mathrm{x}$-axis, we set $x = x_{\min}$ and $x = x_{\max}$ and calculate the distance ${S}_{maxj}$ between the two points. We then set the two planes of formula (11) and calculate the distance ${S}_{\text{ col }}$ between the two points.
+
+$$
+x = {x}_{ - }\min + \left( {\text{ first\_col } - 1}\right) * {d}_{j}
+$$
+
+$$
+x = {x}_{ - }\min + \text{ last\_col } * {d}_{j} \tag{11}
+$$
+
+If both directions have a normal line, we take $\max \{ \frac{{S}_{\text{ row }}}{{S}_{\text{ maxi }}},\frac{{S}_{\text{ col }}}{{S}_{\text{ maxj }}}\}$ as the depth ratio of the hollow body, and the normal line in that direction becomes the normal of the hollow body. If a normal line exists in only one direction, the corresponding distance ratio $\frac{{S}_{\text{ row }}}{{S}_{\text{ maxi }}}$ or $\frac{{S}_{\text{ col }}}{{S}_{\text{ maxj }}}$ is the depth ratio of the hollow body.
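+A sketch of the depth-ratio rule above (function and argument names are our assumptions):
+
+```python
+def depth_ratio(S_row=None, S_maxi=None, S_col=None, S_maxj=None):
+    """Depth ratio of a hollow body.  Pass (S_row, S_maxi) if a normal
+    line exists in the z-direction, (S_col, S_maxj) if one exists in the
+    x-direction, or both; the larger ratio defines the main direction."""
+    candidates = []
+    if S_row is not None and S_maxi:
+        candidates.append(S_row / S_maxi)  # z-direction candidate
+    if S_col is not None and S_maxj:
+        candidates.append(S_col / S_maxj)  # x-direction candidate
+    if not candidates:
+        raise ValueError("hollow body has no normal line in either direction")
+    return max(candidates)
+```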
+
+§ 4 EXPERIMENT
+
+We cannot calculate the accurate volumes of the model and its hollow bodies in a point cloud because of the variability and irregularity of point cloud data. Therefore, this paper establishes several sets of point cloud data containing regular hollow bodies of different sizes and shapes. We calculate their accurate volumes to determine the optimal number of voxels for processing point cloud data with different sizes of hollow bodies.
+
+We divide the test data into three groups according to the shape of the hollow body. A schematic diagram of the test data is shown in Figure 7. After computing the results for different values of $\mathrm{k}$, we find that the results are most representative when the number of voxels dividing the model is close to 24000, 60000, 100000, 150000, and 200000. The experimental results are shown in Figure 8.
+
+After analyzing the experimental results, we find that, to achieve a ${95}\%$ correct rate of the volume ratio, the relationship between the number of voxels and the volume ratio $\mathrm{V}$ can be defined as the discrete function in formula (12).
+
+$$
+S = \left\{ \begin{array}{ll} {24000} & V \geq {0.5} \\ {60000} & {0.35} \leq V < {0.5} \\ {100000} & {0.25} \leq V < {0.35} \\ {150000} & {0.1} \leq V < {0.25} \\ {200000} & V < {0.1} \end{array}\right. \tag{12}
+$$
+
+The choice for each interval lets the algorithm achieve a correct rate of more than 95%. If we increase the number of voxels further, the correct rate does not improve significantly, but the processing time increases greatly. We therefore use this discrete function as the basis for the subsequent determination of the hollow bodies.
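+The discrete function of formula (12) can be implemented directly; a sketch:
+
+```python
+def optimal_voxel_count(V):
+    """Number of voxels S giving at least a 95% correct rate for a hollow
+    body with (approximate) volume ratio V, per formula (12)."""
+    if V >= 0.5:
+        return 24000
+    if V >= 0.35:
+        return 60000
+    if V >= 0.25:
+        return 100000
+    if V >= 0.1:
+        return 150000
+    return 200000
+```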
+
+
+Figure 9: First phase result of European building point cloud (24000 voxels; rows: 15, columns: 52, H: 56; voxels in model area: 22416; voxels in non-model area: 21264; hollow body 1: 575 voxels, volume ratio 0.0256513; hollow body 2: 4970 voxels, volume ratio 0.221717; hollow body 3: 575 voxels, volume ratio 0.0256513)
+
+For the data to be processed, we first set the initial number of voxels to 24000 and calculate approximate volume ratios for the hollow bodies. We then use the smallest ratio, together with the point cloud density (which must ensure that the distance between adjacent points is larger than the side length of the voxels), to calculate the optimal number of voxels dividing the point cloud. Figure 9 shows the first-phase result for the European building point cloud, and Figure 10 the second-phase result.
+
+Since the concept of the hollow body is proposed in this paper for the first time, we cannot compare against previous research. Therefore, this paper establishes a data set of 287 point cloud files, drawn from various 3D model websites and from 3D reconstruction, to verify the effect of the algorithm. The data set covers seven types: natural landscape, metal component, indoor furniture, jewelry sculpture, bridge, building, and test data. Because 3D models usually contain only a few key points per surface, this paper uses Loop subdivision surfaces [19] and Catmull-Clark subdivision surfaces [20] in MeshLab to up-sample the 3D models into point cloud files. We will add more point cloud data with hollow bodies in future work; the website is under construction.
+
+After experimenting with the 287 data sets, this paper compares the results with the calibrated hollow bodies. 236 data sets are judged successfully and 51 fail, for a correct rate of ${82.2}\%$. Some results are shown in Figures 11 and 12.
+
+| Shape of hollow body surface | Accurate volume ratio | 24000 voxels | 60000 voxels | 100000 voxels | 150000 voxels | 200000 voxels |
+| --- | --- | --- | --- | --- | --- | --- |
+| rectangle | 0.72 | 95.00% | 96.88% | 97.26% | 98.01% | 98.35% |
+| rectangle | 0.48 | 94.44% | 95.83% | 97.14% | 96.06% | 96.77% |
+| rectangle | 0.3 | 93.47% | 95.13% | 96.67% | 95.88% | 96.00% |
+| rectangle | 0.2 | 91.67% | 93.75% | 95.31% | 97.33% | 96.87% |
+| rectangle | 0.12 | 91.16% | 93.47% | 95.17% | 97.02% | 96.77% |
+| rectangle | 0.05 | 90.34% | 91.93% | 95.02% | 95.12% | 95.04% |
+| circular | 0.53305 | 96.59% | 97.56% | 98.64% | 99.23% | 99.12% |
+| circular | 0.32725 | 92.24% | 93.90% | 96.18% | 96.69% | 97.40% |
+| circular | 0.20944 | 91.07% | 93.25% | 94.61% | 95.35% | 96.46% |
+| circular | 0.11781 | 88.03% | 92.40% | 93.79% | 95.53% | 95.89% |
+| circular | 0.05236 | 81.17% | 90.02% | 91.13% | 93.09% | 95.11% |
+| curved surface | 0.5333 | 96.51% | 97.11% | 97.48% | 98.02% | 98.53% |
+| curved surface | 0.36 | 92.59% | 95.68% | 96.22% | 96.28% | 97.64% |
+| curved surface | 0.25 | 91.74% | 93.77% | 95.08% | 95.64% | 96.26% |
+| curved surface | 0.19753 | 91.18% | 93.15% | 94.11% | 95.21% | 95.43% |
+| curved surface | 0.08 | 82.43% | 87.16% | 90.73% | 92.86% | 95.01% |
+
+Figure 8: Experimental results of different numbers of voxels
+
+
+Figure 10: Second phase result of European building point cloud
+
+There are several reasons why the algorithm fails to process:
+
+ * For some data with complex components such as patterns, the density of points in the pattern part is high, which increases the density of the whole point cloud. This may cause errors when the algorithm estimates the optimal number of voxels dividing the minimum cube bounding box.
+
+ * For point cloud data obtained through 3D reconstruction, if the reconstruction quality is poor and large holes appear in the point cloud, the algorithm fails to determine the correct hollow bodies.
+
+§ 5 CONCLUSION
+
+Considering the display limitation of volumetric 3D displays, this paper proposes the concept of the hollow body. We give a quantitative description of the hollow body and define a set of parameters (volume ratio, normal line, and depth ratio) to describe its size, position, and shape. We then propose a novel VCRHD algorithm to accurately determine the hollow bodies in a point cloud. Because there is no prior work dealing with hollow bodies, this paper establishes a data set of 287 different point cloud files to test the algorithm and to serve as a reference for follow-up studies. The algorithm may be affected by noisy points and holes in the point cloud, which needs to be improved in further research.
+
+Accurately defining the hollow body can help subsequent work. Further research can realize a more realistic three-dimensional display by simulating the visual experience of the human eye observing through hollow bodies, and can help in CAD or surface construction. We hope researchers in these areas can benefit from the application of the hollow body.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f96554c7b0a1a062a1152c3b534a6dc5089152a6
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,199 @@
+# Target Acquisition for Handheld Virtual Panels in VR
+
+## ABSTRACT
+
+The Handheld Virtual Panel (HVP) is the virtual panel attached to the non-dominant hand's controller in virtual reality (VR). The HVP is the go-to technique for enabling menus and toolboxes in VR devices. In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach. Our results show that all four factors have significant effects on user performance. Based on the results, we propose guidelines towards the ergonomic and performant design of the HVP interfaces.
+
+## CCS CONCEPTS
+
+- Human-centered computing $\rightarrow$ Empirical studies in HCI.
+
+## ACM Reference Format:
+
+. 2019. Target Acquisition for Handheld Virtual Panels in VR. In Proceedings of ACM Conference (Conference'19). ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/1122445.1122456
+
+## 1 INTRODUCTION
+
+With the increasing popularity of consumer virtual reality (VR), we see more and more VR apps for creativity and productivity. These apps fundamentally require menus and toolboxes for the assortment of options and controls they offer, and the interaction artifact that is quickly becoming the go-to technique for this is the handheld virtual panel (HVP). The HVP provides the primary toolbox in Google's TiltBrush [15] (Figure 1 (left)) and Blocks [14], Oculus's Quill [11] and Medium [10] (Figure 1 (right)), and HTC Vive's MakeVR [18]. Szalvari et al. in 1997 [30, 31] proposed the personal interaction panel, where the user holds a tracked tablet in the non-dominant hand while doing their primary interaction with a stylus in the dominant hand. HVPs extend that concept with virtual panels anchored to the controller in the non-dominant hand, using ray tracing instead of a stylus. There are multiple advantages to such an interaction [20]. First, handheld windows move along with the user, so they are always within reach. Second, they do not overly clutter the user's view, unless explicitly moved by the user. Third, handheld windows take advantage of the proprioceptive sense because they are attached to the non-dominant hand.
+
+However, even with the ubiquity of the HVP in products and the research literature, we do not have a sense of what factors govern the performance of target selection in HVPs. Consequently, there is a need to understand and quantify HVP target selection performance while considering two factors: (1) hand motion here is governed by the direction of motion in relation to the ground due to the effects of gravity, and (2) since both the target and the pointer can be moved and controlled by the user during acquisition, the user's approach will vary depending on the angle of movement in addition to distance and width.
+
+
+
+Figure 1: (left) A handheld virtual panel (HVP) in Google Tilt Brush. (right) A HVP in Oculus Medium. These are used with a controller.
+
+We conduct a study to measure HVP target acquisition performance in relation to four factors that relate to the direction of movement with respect to gravity, the angle of movement with respect to the body, distance, and width. The results show that the performance depends significantly on all four factors. Based on the results, we propose guidelines towards the ergonomic design of the HVP interfaces.
+
+## 2 RELATED WORK
+
+### 2.1 Handheld Virtual Panels
+
+In 1993, Feiner et al. [12] described three types of 2D windows in a virtual or augmented environment: Surround-fixed windows that are displayed at a fixed position within the surroundings, Display-fixed windows that are fixed at a position relative to the display itself, and World-fixed (or Object-fixed) windows that are fixed to objects in the 3D world. The HVP is an instance of the object-fixed window, with the object being the non-dominant hand. Before Szalvari et al.'s proposal of the personal interaction panel [31], other works proposed handheld panels for specific VR scenarios using pen-and-tablet interfaces, where the non-dominant hand held a tablet and the dominant hand held a pen to tap or draw on the tablet [3, 5, 13, 29].
+
+For instance, Stoakley et al.'s Windows-in-Miniature (WIM) [29] proposes a miniature copy of the virtual world in the non-dominant hand for navigation and manipulation. Other works study the effects of visual [3, 17] and haptic [20] feedback for bimanual input in VR with a panel in the non-dominant hand. Lindeman et al. [20] found that users are 21% faster in shape selection tasks when using handheld 2D panels similar to the HVP compared to surround-fixed panels that float in front of the user. Similarly, Mine et al. [22] found that pointing to a target on a handheld panel was twice as fast as pointing to a panel fixed in space. However, none of the earlier works examine target acquisition in the HVP with respect to multiple target sizes and distances. Further, no existing work has examined the performance of the current HVP incarnation with current hardware and interfaces. Consequently, we study the effect of distance and width on movement time for the HVP.
+
+Figure 2: A participant doing the study with the Oculus Rift and the two controllers.
+
+### 2.2 Whole-Handed 3D Movements in Air
+
+While most works on handheld panels focus on direct pen or finger input, today's commercial VR systems rely on a controller in each hand, with a ray tracing approach being used from the controller in the dominant hand to the targets on the panel. As hand tracking matures and becomes closer to commercial use in VR systems, we also hope to see explorations of hand-gesture based HVP interfaces. A related thread of work is ray tracing while using whole-handed 3D movements. Whole-handed 3D movements involve multiple limb movements, requiring higher muscular force and leading to variable movement trajectories, and hence variable pointing times [33]. Murata et al. [23] show that the direction of hand movement significantly affects movement time for a 3D pointing task. Subsequent works [7, 34] found directional differences relating to shoulder and forearm motion. Zeng et al. [34] found that adduction movements are slower than abduction for 2D targets using hand motion in 3D space (detected by a Kinect).
+
+In our case, when using the HVP in current VR apps, a right-handed user holding the controller in the right hand usually approaches a tool on the panel in the left hand from the right-to-left direction. We investigate varying origins and angles in our study. There are other techniques and studies on target acquisition in 3D and in VR [1, 4, 8, 9, 21, 24, 25, 28, 32], but they address non-handheld, non-2D-panel scenarios such as 3D object selection in the scene.
+
+
+
+Figure 3: The 12x12 (largest width) HVP schematic that the user sees in VR. The red dot denotes the pointer at one of its starting positions (the other is the bottom-right corner). The two angles, $22.5^{\circ}$ and $67.5^{\circ}$, are denoted relative to the right edge. The yellow square shows a target to be selected; it is currently at $67.5^{\circ}$ at maximum distance.
+
+## 3 TARGET ACQUISITION STUDY
+
+Aside from the traditional factors of distance and width, we need to take into account the effect of gravity for multiple starting positions and angles of movement.
+
+### 3.1 Experiment Design
+
+Figure 2 shows a participant doing the study. Similar to current HVPs, the dominant-hand controller raycasts a pointer into the scene. Figure 3 shows the HVP schematic that the user sees in VR. For selection, the user navigates the pointer onto the desired target and presses a button on the controller. The user can also move the non-dominant hand to move the target on the panel closer to the pointer. We investigated four independent variables: (1) STARTPos: starting position of the pointer, which determines the direction of movement with respect to gravity. STARTPos has two levels, top: top-right and bottom: bottom-right position of the panel. (2) ANGLE: angle of movement relative to the right edge of the panel at STARTPos, which offers an additional level of nuance into the effect of gravity based on the angle of motion with respect to the gravity vector. It has two levels: $22.5^{\circ}$ and $67.5^{\circ}$. Figure 3 shows the angles for the top STARTPos. (3) DISTANCE: target distance from STARTPos along the line of one of the two angles. It has three exponentially increasing levels: 2 cm, 6 cm, 18 cm. (4) WIDTH: target width. We keep the panel size constant and vary width by changing the number of (square) targets. WIDTH had three levels: 0.63 cm (48x48 layout), 1.25 cm (24x24), 2.5 cm (12x12). Figure 3 shows the 12x12 layout. The panel size was kept slightly larger than existing panels in commercial applications to allow testing the distance factor over a larger range.
+
+In total, there were $2 \times 2 \times 3 \times 3 = 36$ conditions and a within-subjects design was used. For each condition, participants performed 6 repetitions, resulting in $36 \times 6 = 216$ trials per participant. Owing to the large number of conditions, complete Latin square counterbalancing across participants is not possible. WIDTH was completely counterbalanced across participants. For each width, STARTPos was completely counterbalanced across participants. For each width and STARTPos, the order of trials (consisting of DISTANCE-ANGLE combinations) was randomized.
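+A sketch of this trial-generation scheme (all names are ours, and the full cross-participant counterbalancing is simplified to a per-participant rotation of the width order):
+
+```python
+import itertools
+import random
+
+WIDTHS = [0.63, 1.25, 2.5]      # cm; 48x48, 24x24, 12x12 layouts
+START_POS = ["top", "bottom"]
+DISTANCES = [2, 6, 18]          # cm
+ANGLES = [22.5, 67.5]           # degrees
+REPS = 6
+
+def trials_for_participant(pid):
+    """Trial list for one participant: width order rotated by participant id,
+    distance-angle order randomized within each WIDTH x STARTPos block."""
+    rng = random.Random(pid)
+    widths = WIDTHS[pid % 3:] + WIDTHS[:pid % 3]     # rotate width order
+    trials = []
+    for w in widths:
+        for sp in START_POS:
+            block = [(w, sp, d, a)
+                     for d, a in itertools.product(DISTANCES, ANGLES)] * REPS
+            rng.shuffle(block)                       # randomize trial order
+            trials.extend(block)
+    return trials
+```
+
+For each participant this yields 216 trials covering the 36 conditions 6 times each.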
+
+### 3.2 Participants
+
+Twelve participants (7 female, 5 male) took part in the study (age range: 18-29, M = 22, SD = 3.004). All participants were right-handed and did not have any experience with VR. We believe the results would be similar for a mirrored study with left-handed users.
+
+### 3.3 Apparatus and Task
+
+The experimental application was developed in Unity3D. Participants wore an Oculus Rift head-mounted display and held Oculus Rift Touch Controllers, one on each hand, to interact with the VR environment. The task involved participants selecting targets on a HVP that is attached to the non-dominant hand, using the controller on the dominant hand that controls the raycast pointer. The user selects a target by clicking a button on the controller. For each trial, we measured the target acquisition time (time taken from the highlighting of the desired target until the button click), and errors (number of incorrect selections).
+
+### 3.4 Procedure
+
+After getting familiar with the apparatus and interface, participants performed 6 practice trials followed by the study. Before every trial, participants were required to bring the pointer back to the STARTPos. The next target to be selected was highlighted 0.5 s after the pointer was back at STARTPos. Participants selected targets by bringing the raycasted pointer within the target's area (upon which a dark border provided visual feedback) and pushing down on the trigger located at the back of their controller. We purposely minimized fatigue by mandating a 30 s break after every 18 trials, which participants could extend if they wanted to. Upon incorrect selection, participants were not asked to redo the trial, but were given visual feedback that the selection was incorrect. Only the correct trials were part of the time analysis. Participants were instructed to perform the task as quickly and accurately as possible. At the end, a semi-structured interview was conducted.
+
+## 4 RESULTS
+
+### 4.1 Target Acquisition Time
+
+We conducted a 4-way ANOVA and found main effects of all four variables on target acquisition time. However, there were interaction effects of STARTPos*DISTANCE ($F(1.224, 13.463) = 6.028$, $p < .05$, $\eta^2 = .354$ with Greenhouse-Geisser correction (GG)) and of STARTPos*ANGLE ($F(1, 11) = 21.776$, $p < .005$, $\eta^2 = .664$). Therefore, we ignore the main effects of STARTPos, ANGLE, and DISTANCE, and analyze the interaction effects. Since there were no interaction effects involving WIDTH, we consider the main effect of WIDTH ($F(2, 22) = 104.241$, $p < .001$, $\eta^2 = .905$). All posthoc tests described below were conducted using Bonferroni correction.
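+For reference, a main effect such as WIDTH's corresponds to a one-way repeated-measures ANOVA; a minimal sketch (sphericity assumed, no Greenhouse-Geisser correction; not the authors' actual analysis code):
+
+```python
+import numpy as np
+
+def rm_anova_1way(X):
+    """One-way repeated-measures ANOVA on X (subjects x conditions).
+    Returns (F, (df1, df2), partial eta squared)."""
+    n, k = X.shape
+    grand = X.mean()
+    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()    # between conditions
+    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()    # between subjects
+    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj  # residual
+    df1, df2 = k - 1, (k - 1) * (n - 1)
+    F = (ss_cond / df1) / (ss_err / df2)
+    return F, (df1, df2), ss_cond / (ss_cond + ss_err)
+```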
+
+
+4.1.1 Effect of Width. We conduct posthoc tests for WIDTH, which show that the target acquisition times for all three widths are significantly different from each other ($p < .001$ for all). Figure 4 (left) shows the effect of width on target acquisition time. Thus, the effect of WIDTH is not affected by the other variables, even though those variables also have significant effects on time.
+
+4.1.2 Effect of StartPos and Distance. The effects of DISTANCE and STARTPos interact. We conducted 1-way ANOVAs for each of the two STARTPos levels, top and bottom, separately to see how distance affects time in each. Figure 4 (middle) shows the interaction plot. The effect of DISTANCE is significant for both top ($F(2, 22) = 6.856$, $p < .01$, $\eta^2 = .384$) and bottom ($F(1.142, 12.558) = 23.728$, $p < .001$, $\eta^2 = .683$ with GG). For top, posthoc tests show that the medium distance targets take significantly less time than the small ($p < .05$) and large distance targets ($p < .01$). However, for bottom, both small and medium distance targets take significantly less time than the large distance targets ($p < .01$ and $p < .001$, respectively).
+
+While the large distance targets expectedly perform worst, for top the medium distance targets are notably the fastest to acquire. This is an interesting result and is possibly due to selective tilting of the controller by participants depending on the target location. Participants use a combination of hand movement and orienting the controller in the hand to bring the raycast pointer to the target. Since the medium distance targets are in the middle of the panel, users can reach them with a combination of orientation change and hand motion. However, since even a slight orientation change can result in a large displacement of the raycast pointer, small distance targets would be overshot with a flick of the wrist. With bottom, since the user is moving against gravity, the small and medium distances are comparable, and both are much faster than the large distance targets.
+
+4.1.3 Effect of StartPos and Angle. The effect of ANGLE also depends on STARTPos. Figure 4 (right) shows the interaction plot. For top, $22.5^{\circ}$ takes significantly less time than $67.5^{\circ}$ ($F(1, 11) = 11.793$, $p < .01$, $\eta^2 = .517$). For bottom, the inverse is true, with $67.5^{\circ}$ taking significantly less time than $22.5^{\circ}$ ($F(1, 11) = 16.201$, $p < .005$, $\eta^2 = .596$). Again owing to gravity, for bottom the $22.5^{\circ}$ angle requires the user to work more against gravity than $67.5^{\circ}$; it is vice versa for top for the same reason.
+
+### 4.2 Errors
+
+No variable had a significant effect on error rate. While error rates decreased with increasing width (6.5%, 3.6%, 1.8%), the differences were not significant.
+
+### 4.3 Qualitative Feedback
+
+Unsurprisingly, a majority of the participants reported the bottom starting position to be much more fatiguing. Some participants also mentioned that they thought distance and angle had only a small effect on the difficulty of the task.
+
+
+
+Figure 4: Target acquisition time results from the evaluation. We show only the main effects and the interaction effects. (left) Mean time vs. width. (middle) Mean time vs. distance for both StartPos. (right) Mean time vs. angle for both StartPos. Error bars are 95% C.I.
+
+## 5 DISCUSSION
+
+### 5.1 Design Takeaways
+
+The results suggest that gravity played a major part even when our experiment design minimized fatigue between conditions. The effect would be much more pronounced with longer, fatigue-inducing tasks. Most current HVPs use a cube-style panel with equal vertical and horizontal sizes. One simple solution to minimize the effect of gravity would be to have HVPs that have larger horizontal widths than vertical.
+
+Our distance-based results suggest that minimizing hand motion and instead relying on wrist flicks to move the raycast pointer could help performance (see [26, 27]). Therefore, as opposed to having smaller panels, panel sizes can be increased (at least horizontally) to encourage the use of coarse wrist flicking.
+
+Further, the design needs to minimize motion when the user is performing tasks below the panel (for instance, creating a ground texture) and will need to go against gravity to reach the HVP. One solution here would be to arrange targets on the panel such that high-frequency targets are placed at the bottom of the panel, making them easier to reach from the bottom while not overly affecting performance from the top. Another possibility is to retarget the HVP [2] to a lower position while the non-dominant hand remains at the same position, so that the user has to move less against gravity to reach the HVP. Retargeting has not been explored in the context of HVPs and could be a very useful technique to counter such effects. However, the tradeoff of increasing the visuo-haptic disconnect in this case would need to be explored.
+
+Overall, we suggest three takeaways that should be considered by designers of HVPs depending on the context: (1) panels with large horizontal widths, as opposed to square-shaped ones, should be considered to counter the effects of gravity and encourage wrist flicking; (2) high-frequency targets should be placed at the bottom of the panel; and (3) retargeting of the HVP given the same non-dominant hand position should be investigated to minimize user motion against gravity.
+
+### 5.2 Bimanual Parallel Input
+
+While our work indicates some concrete directions to improve the design of HVPs, one aspect that we did not explore in detail is the potential for HVPs to support bimanual parallel input. The HVP is based on Guiard's kinematic chain model [16] for bimanual input, which proposes principles of asymmetric two-handed interface design. However, bimanual input may not always be useful. Buxton et al. [6] investigated parallelism, i.e., the degree to which the two hands work in parallel, and concluded that participants are capable of parallelism and that it improves task performance, but that its use and efficiency depend on the mechanics of the task. Kabbash et al. [19] further showed that if a 2D task follows Guiard's model, performance improves, while not following the model can worsen bimanual performance. With the HVP, users can potentially move both hands in parallel according to Guiard's kinematic chain model and improve their speed and performance. In addition to retargeting, bimanual parallel input is a promising direction for future exploration.
+
+## 6 CONCLUSION
+
+The handheld virtual panel is the most popular technique for accessing tools and menus in commercial VR creativity and productivity applications. In this paper, we conduct an evaluation of target acquisition performance in the HVP as a function of four variables. Our results show that all four have an effect on user performance. While there are expected effects, such as acquisition time decreasing with increasing width, the evaluation also suggests that gravity may be a crucial issue even when fatigue is minimized. Based on the results, we list takeaways to help improve the design of HVPs and indicate paths for future exploration. We believe addressing the limitations of HVPs uncovered in our study will go a long way toward improving the user experience of HVP-based VR applications.
+
+## REFERENCES
+
+[1] Carlos Andujar and Ferran Argelaguet. 2007. Anisomorphic ray-casting manipulation for interacting with 2D GUIs. Comput. Graph. 31, 1 (jan 2007), 15-25. https://doi.org/10.1016/j.cag.2006.09.003
+
+[2] Mahdi Azmandian, Mark Hancock, Hrvoje Benko, Eyal Ofek, and Andrew D. Wilson. 2016. Haptic Retargeting. In Proc. 2016 CHI Conf. Hum. Factors Comput. Syst. - CHI '16. ACM Press, New York, New York, USA, 1968-1979. https://doi.org/10.1145/2858036.2858226
+
+[3] Ravin Balakrishnan and Ken Hinckley. 1999. The role of kinesthetic reference frames in two-handed input performance. In Proc. 12th Annu. ACM Symp. User interface Softw. Technol. - UIST '99. ACM Press, New York, New York, USA, 171- 178. https://doi.org/10.1145/320719.322599
+
+[4] Hrvoje Benko and Steven Feiner. 2007. Balloon Selection: A Multi-Finger Technique for Accurate Low-Fatigue 3D Selection. In 2007 IEEE Symp. 3D User Interfaces. IEEE. https://doi.org/10.1109/3DUI.2007.340778
+
+
+[5] Mark Billinghurst, Sisinio Baldis, Lydia Matheson, and Mark Philips. 1997. 3D palette. In Proc. ACM Symp. Virtual Real. Softw. Technol. - VRST '97. ACM Press, New York, New York, USA, 155-156. https://doi.org/10.1145/261135.261163
+
+[6] W. Buxton and B. Myers. 1986. A study in two-handed input. In Proc. SIGCHI Conf. Hum. factors Comput. Syst. - CHI '86, Vol. 17. ACM Press, New York, New York, USA, 321-326. https://doi.org/10.1145/22627.22390
+
+[7] Yeonjoo Cha and Rohae Myung. 2013. Extended Fitts' law for 3D pointing tasks using 3D target arrangements. Int. J. Ind. Ergon. 43, 4 (jul 2013), 350-355. https://doi.org/10.1016/j.ergon.2013.05.005
+
+[8] Barrett M. Ens, Rory Finnegan, and Pourang P. Irani. 2014. The personal cockpit. In Proc. 32nd Annu. ACM Conf. Hum. factors Comput. Syst. - CHI '14. ACM Press, New York, New York, USA, 3171-3180. https://doi.org/10.1145/2556288.2557058
+
+[9] Mikael Eriksson. 2016. Reaching out to grasp in Virtual Reality : A qualitative usability evaluation of interaction techniques for selection and manipulation in a VR game. (2016).
+
+[10] Facebook. 2017. Oculus Medium. https://www.oculus.com/medium/
+
+[11] Facebook. 2017. Oculus Quill. https://www.oculus.com/story-studio/quill/
+
+[12] Steven Feiner, Blair MacIntyre, Marcus Haupt, and Eliot Solomon. 1993. Windows on the world. In Proc. 6th Annu. ACM Symp. User interface Softw. Technol. - UIST '93. ACM Press, New York, New York, USA, 145-155. https://doi.org/10.1145/168642.168657
+
+[13] A. Fuhrmann, H. Loffelmann, D. Schmalstieg, and M. Gervautz. 1998. Collaborative visualization in augmented reality. IEEE Comput. Graph. Appl. 18, 4 (1998), 54-59. https://doi.org/10.1109/38.689665
+
+[14] Google. 2017. Blocks - Create 3D models in VR - Google VR. https://vr.google.com/blocks/
+
+[15] Google. 2017. Tilt Brush by Google. https://www.tiltbrush.com/
+
+[16] Y. Guiard. 1987. Asymmetric division of labor in human skilled bimanual action: the kinematic chain as a model. J. Mot. Behav. 19, 4 (dec 1987), 486-517. http://www.ncbi.nlm.nih.gov/pubmed/15136274
+
+[17] Ken Hinckley, Randy Pausch, Dennis Proffitt, James Patten, and Neal Kassell. 1997. Cooperative bimanual action. In Proc. SIGCHI Conf. Hum. factors Comput. Syst. - CHI '97. ACM Press, New York, New York, USA, 27-34. https://doi.org/10.1145/258549.258571
+
+[18] HTC. 2017. MakeVR. http://www.viveformakers.com/
+
+[19] Paul Kabbash, William Buxton, and Abigail Sellen. 1994. Two-handed input in a compound task. In Proc. SIGCHI Conf. Hum. factors Comput. Syst. Celebr. Interdepend. - CHI '94. ACM Press, New York, New York, USA, 417-423. https://doi.org/10.1145/191666.191808
+
+[20] Robert W. Lindeman, John L. Sibert, and James K. Hahn. 1999. Towards usable VR. In Proc. SIGCHI Conf. Hum. factors Comput. Syst.: the CHI is the limit - CHI '99. ACM Press, New York, New York, USA, 64-71. https://doi.org/10.1145/302979.302995
+
+[21] Mike McGee, Brian Amento, Patrick Brooks, and Hope Harley. 1997. Fitts and VR: Evaluating Display and Input Devices with Fitts' Law. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 41, 2 (oct 1997), 1259-1262. https://doi.org/10.1177/1071181397041002119
+
+[22] Mark R. Mine, Frederick P. Brooks, and Carlo H. Sequin. 1997. Moving objects in space. In Proc. 24th Annu. Conf. Comput. Graph. Interact. Tech. - SIGGRAPH '97. ACM Press, New York, New York, USA, 19-26. https://doi.org/10.1145/258734.258747
+
+[23] Atsuo Murata and Hirokazu Iwase. 2001. Extending Fitts' law to a three-dimensional pointing task. Hum. Mov. Sci. 20, 6 (dec 2001), 791-805. https://doi.org/10.1016/S0167-9457(01)00058-6
+
+[24] Ivan Poupyrev, Mark Billinghurst, Suzanne Weghorst, and Tadao Ichikawa. 1996. The go-go interaction technique. In Proc. 9th Annu. ACM Symp. User interface Softw. Technol. - UIST '96. ACM Press, New York, New York, USA, 79-80. https://doi.org/10.1145/237091.237102
+
+[25] Mathieu Raynal, Emmanuel Dubois, and Bénédicte Schmitt. 2013. Towards Unification for Pointing Task Evaluation in 3D Desktop Virtual Environment. In Human Factors in Computing and Informatics, Andreas Holzinger, Martina Ziefle, Martin Hitz, and Matjaž Debevc (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 562-580.
+
+[26] Chris Shaw and Mark Green. 1994. Two-handed polygonal surface design. In ACM symposium on User interface software and technology. ACM Press, 205-212.
+
+[27] Christopher D. Shaw. 1998. Pain and Fatigue in Desktop VR. In Graphics Interface.
+
+[28] Chang Geun Song, No Jun Kwak, and Dong Hyun Jeong. 2000. Developing an efficient technique of selection and manipulation in immersive V.E. In Proc. ACM Symp. Virtual Real. Softw. Technol. - VRST '00. ACM Press, New York, New York, USA, 142. https://doi.org/10.1145/502390.502417
+
+[29] Richard Stoakley, Matthew J. Conway, and Randy Pausch. 1995. Virtual reality on a WIM. In Proc. SIGCHI Conf. Hum. factors Comput. Syst. - CHI '95. ACM Press, New York, New York, USA, 265-272. https://doi.org/10.1145/223904.223938
+
+[30] Z. Szalavári. 1999. The Personal Interaction Panel - a two-handed Interface for Augmented Reality. (1999). http://diglib.eg.org/handle/10.2312/8134
+
+[31] Z. Szalavári and M. Gervautz. 1997. The Personal Interaction Panel: A Two-handed Interface for Augmented Reality. Comput. Graph. Forum (1997). http://onlinelibrary.wiley.com/doi/10.1111/1467-8659.00137/full
+
+
+[32] R. J. Teather and W. Stuerzlinger. 2011. Pointing at 3D targets in a stereo head-tracked virtual environment. In 2011 IEEE Symposium on 3D User Interfaces (3DUI). 87-94. https://doi.org/10.1109/3DUI.2011.5759222
+
+[33] Gerard P. van Galen and Willem P. de Jong. 1995. Fitts' law as the outcome of a dynamic noise filtering model of motor control. Hum. Mov. Sci. 14, 4-5 (nov 1995), 539-571. https://doi.org/10.1016/0167-9457(95)00027-3
+
+[34] Xiaolu Zeng, Alan Hedge, and Francois Guimbretiere. 2012. Fitts' Law in 3D Space with Coordinated Hand Movements. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 56, 1 (sep 2012), 990-994. https://doi.org/10.1177/1071181312561207
+
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..43f60235dc4ccf4156b617c2749c256fa21fb6b8
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YEhL2zUxfO/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,119 @@
+§ TARGET ACQUISITION FOR HANDHELD VIRTUAL PANELS IN VR
+
+§ ABSTRACT
+
+The Handheld Virtual Panel (HVP) is a virtual panel attached to the non-dominant hand's controller in virtual reality (VR). The HVP is the go-to technique for enabling menus and toolboxes in VR devices. In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach. Our results show that all four factors have significant effects on user performance. Based on the results, we propose guidelines toward the ergonomic and performant design of HVP interfaces.
+
+§ CCS CONCEPTS
+
+ * Human-centered computing $\rightarrow$ Empirical studies in HCI.
+
+§ ACM REFERENCE FORMAT:
+
+. 2019. Target Acquisition for Handheld Virtual Panels in VR. In Proceedings of ACM Conference (Conference'19). ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/1122445.1122456
+
+§ 1 INTRODUCTION
+
+With the increasing popularity of consumer virtual reality (VR), we see more and more VR apps for creativity and productivity. These apps fundamentally require menus and toolboxes for the assortment of options and controls they offer, and the interaction artifact that is quickly becoming the go-to technique for this is the handheld virtual panel (HVP). The HVP provides the primary toolbox in Google's TiltBrush [15] (Figure 1 (left)) and Blocks [14], Oculus's Quill [11] and Medium [10] (Figure 1 (right)), and HTC Vive's MakeVR [18]. Szalavári et al. in 1997 [30, 31] proposed the personal interaction panel, where the user holds a tracked tablet in the non-dominant hand while performing the primary interaction with the dominant hand using a stylus. HVPs extend that concept with virtual panels anchored to the controller in the non-dominant hand, using ray-tracing instead of a stylus. There are multiple advantages to such an interaction [20]. First, handheld windows move along with the user, so they are always within reach. Second, they do not overly clutter the user's view, unless explicitly moved by the user. Third, handheld windows take advantage of the proprioceptive sense because they are attached to the non-dominant hand.
+
+However, even with the ubiquity of the HVP in products and in the research literature, we do not have a sense of what factors govern the performance of target selection in HVPs. Consequently, there is a need to understand and quantify HVP target selection performance while considering two factors: (1) hand motion is governed by the direction of movement relative to the ground owing to the effects of gravity, and (2) since both the target and the pointer can be moved and controlled by the user during acquisition, the user's approach will vary with the angle of movement in addition to distance and width.
+
+
+Figure 1: (left) A handheld virtual panel (HVP) in Google Tilt Brush. (right) An HVP in Oculus Medium. Both are used with a controller.
+
+We conduct a study to measure HVP target acquisition performance in relation to four factors: the direction of movement with respect to gravity, the angle of movement with respect to the body, target distance, and target width. The results show that performance depends significantly on all four factors. Based on the results, we propose guidelines toward the ergonomic design of HVP interfaces.
+
+§ 2 RELATED WORK
+
+§ 2.1 HANDHELD VIRTUAL PANELS
+
+In 1993, Feiner et al. [12] described three types of 2D windows in a virtual or augmented environment: Surround-fixed windows, displayed at a fixed position within the surroundings; Display-fixed windows, fixed at a position relative to the display itself; and World-fixed (or Object-fixed) windows, fixed to objects in the 3D world. The HVP is an instance of the object-fixed window, with the object being the non-dominant hand. Before Szalavári et al.'s proposal of the personal interaction panel [31], other works proposed handheld panels for specific VR scenarios using pen-and-tablet interfaces, where the non-dominant hand held a tablet and the dominant hand held a pen to tap or draw on the tablet [3, 5, 13, 29].
+
+For instance, Stoakley et al.'s Worlds-in-Miniature (WIM) [29] places a miniature copy of the virtual world in the non-dominant hand for navigation and manipulation. Other works study the effects of visual [3, 17] and haptic [20] feedback for bimanual input in VR with a panel in the non-dominant hand. Lindeman et al. [20] found that users are 21% faster in shape selection tasks when using handheld 2D panels similar to the HVP compared to surround-fixed panels that
+
+
+Figure 2: A participant doing the study with the Oculus Rift and the two controllers.
+
+float in front of the user. Similarly, Mine et al. [22] found that pointing to a target on a handheld panel was twice as fast as pointing to a fixed panel floating in space. However, none of the earlier works examines target acquisition in the HVP with respect to multiple target sizes and distances. Further, no existing work has examined the performance of the current HVP incarnation with current hardware and interfaces. Consequently, we study the effect of distance and width on movement time for the HVP.
+
+§ 2.2 WHOLE-HANDED 3D MOVEMENTS IN AIR
+
+While most works on handheld panels focus on direct pen or finger input, today's commercial VR systems rely on a controller in each hand, with a ray tracing approach used from the dominant-hand controller to the targets on the panel. As hand tracking matures and comes closer to commercial use in VR systems, we also hope to see explorations of hand-gesture-based HVP interfaces. A related thread of work is ray tracing using whole-handed 3D movements. Whole-handed 3D movements involve multiple limb movements, requiring higher muscular force and leading to variable movement trajectories, and hence variable pointing times [33]. Murata et al. [23] show that the direction of hand movement significantly affects movement time for a 3D pointing task. Follow-up works [7, 34] found directional differences relating to shoulder and forearm motion. Zeng et al. [34] found that adduction movements are slower than abduction movements for 2D targets using hand motion in 3D space (tracked by a Kinect).
+
+In our case, when using the HVP in current VR apps, a right-handed user holding the controller in the right hand usually approaches a tool on the panel in the left hand in a right-to-left direction. We investigate varying origins and angles in our study. There are other techniques and studies on target acquisition in 3D and in VR [1, 4, 8, 9, 21, 24, 25, 28, 32], but they address non-handheld, non-2D-panel scenarios such as 3D object selection in the scene.
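The pointing studies cited above build on Fitts' law, which models movement time as MT = a + b·ID with index of difficulty ID = log2(D/W + 1) (the Shannon formulation). As a point of reference only (the present paper does not fit a Fitts model), a minimal sketch of the index of difficulty, using illustrative distance/width values:

```python
import math

def index_of_difficulty(distance_cm: float, width_cm: float) -> float:
    """Shannon formulation of the Fitts' law index of difficulty, in bits."""
    return math.log2(distance_cm / width_cm + 1)

# Illustrative values only: a near, large target vs. a far, small one.
easy = index_of_difficulty(2.0, 2.5)    # ~0.85 bits
hard = index_of_difficulty(18.0, 0.63)  # ~4.89 bits
```

The difficulty gap between these extremes is what makes separating the distance, width, and direction factors worthwhile in the study below.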
+
+
+Figure 3: The 12 × 12 (largest width) HVP schematic that the user sees in VR. The red dot denotes the pointer at one of its starting positions (the other is the bottom-right corner). The two angles, 22.5° and 67.5°, are measured relative to the right edge. The yellow square shows a target to be selected, here at 67.5° at the maximum distance.
+
+§ 3 TARGET ACQUISITION STUDY
+
+Aside from the traditional factors of distance and width, we need to take into account the effect of gravity for multiple starting positions and angles of movement.
+
+§ 3.1 EXPERIMENT DESIGN
+
+Figure 2 shows a participant doing the study. As in current HVPs, the dominant-hand controller raycasts a pointer into the scene. Figure 3 shows the HVP schematic that the user sees in VR. To select, the user moves the pointer onto the desired target and presses a button on the controller. The user can also move the non-dominant hand to bring the target on the panel closer to the pointer. We investigated four independent variables: (1) StartPos: the starting position of the pointer, which determines the direction of movement with respect to gravity. It has two levels, top (top-right of the panel) and bottom (bottom-right of the panel). (2) Angle: the angle of movement relative to the right edge of the panel at the StartPos, which adds a level of nuance to the effect of gravity based on the direction of motion with respect to the gravity vector. It has two levels: 22.5° and 67.5°. Figure 3 shows the angles for the top StartPos. (3) Distance: the target distance from the StartPos along the line of one of the two angles. It has three exponentially increasing levels: 2 cm, 6 cm, and 18 cm. (4) Width: the target width. We kept the panel size constant and varied width by changing the number of (square) targets. Width had three levels: 0.63 cm (48×48 layout), 1.25 cm (24×24), and 2.5 cm (12×12). Figure 3 shows the 12×12 layout. The panel was kept slightly larger than existing panels in commercial applications to allow testing the distance factor over a larger range.
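Since the panel size is fixed, each target width follows from dividing the panel edge by the grid count. A minimal sketch; the 30 cm panel edge is an inference from the 12×12 layout of 2.5 cm targets (the paper reports only the resulting widths):

```python
# Panel edge length is an assumption inferred from 12 targets x 2.5 cm.
PANEL_EDGE_CM = 30.0

def target_width_cm(grid_n: int) -> float:
    """Width of one square target in an n x n layout on the fixed panel."""
    return PANEL_EDGE_CM / grid_n

widths = {n: target_width_cm(n) for n in (48, 24, 12)}
# 48x48 -> 0.625 cm (reported as 0.63), 24x24 -> 1.25 cm, 12x12 -> 2.5 cm
```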
+
+In total, there were 2 × 2 × 3 × 3 = 36 conditions, and a within-subjects design was used. For each condition, participants performed 6 repetitions, resulting in 36 × 6 = 216 trials per participant. Owing to the large number of conditions, complete Latin square counterbalancing across participants was not possible. Width was completely counterbalanced across participants. For each width, StartPos was completely counterbalanced across participants. For each width and StartPos, the order of trials (Distance-Angle combinations) was randomized.
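The condition and trial counts above can be sketched as follows. This is a minimal illustration, not the authors' experiment code; the particular counterbalanced orders handed to each participant are assumptions:

```python
import itertools
import random

# Factor levels as described in Section 3.1; repetitions from the text.
START_POS = ("top", "bottom")
ANGLES_DEG = (22.5, 67.5)
DISTANCES_CM = (2, 6, 18)
WIDTHS_CM = (0.63, 1.25, 2.5)
REPS = 6

# 2 x 2 x 3 x 3 = 36 conditions.
conditions = list(itertools.product(START_POS, ANGLES_DEG, DISTANCES_CM, WIDTHS_CM))

def participant_schedule(width_order, startpos_order, seed=0):
    """Widths (and StartPos within each width) taken in counterbalanced order;
    Distance-Angle trials randomized within each width/StartPos block."""
    rng = random.Random(seed)
    trials = []
    for w in width_order:
        for sp in startpos_order:
            block = [(sp, a, d, w)
                     for a, d in itertools.product(ANGLES_DEG, DISTANCES_CM)] * REPS
            rng.shuffle(block)
            trials.extend(block)
    return trials

schedule = participant_schedule(WIDTHS_CM, START_POS)
# 36 conditions x 6 repetitions = 216 trials per participant
```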
+
+§ 3.2 PARTICIPANTS
+
+Twelve participants (7 female, 5 male) took part in the study (ages 18-29, M = 22, SD = 3.004). All participants were right-handed and had no prior experience with VR. We believe the results would be similar in a mirrored study with left-handed users.
+
+§ 3.3 APPARATUS AND TASK
+
+The experimental application was developed in Unity3D. Participants wore an Oculus Rift head-mounted display and held an Oculus Rift Touch Controller in each hand to interact with the VR environment. The task involved selecting targets on an HVP attached to the non-dominant hand, using the dominant-hand controller that controls the raycast pointer. The user selects a target by clicking a button on the controller. For each trial, we measured the target acquisition time (the time from the highlighting of the desired target until the button click) and errors (the number of incorrect selections).
+
+§ 3.4 PROCEDURE
+
+After getting familiar with the apparatus and interface, participants performed 6 practice trials followed by the study. Before every trial, participants were required to bring the pointer back to the StartPos. The next target to be selected was highlighted 0.5 s after the pointer was back at the StartPos. Participants selected targets by bringing the raycast pointer within the target's area (upon which a dark border gave visual feedback) and pushing down on the trigger at the back of their controller. To prevent fatigue, we mandated a 30 s break after every 18 trials, which participants could extend if they wished. Upon an incorrect selection, participants were not asked to redo the trial but were given visual feedback that the selection was incorrect. Only the correct trials were included in the time analysis. Participants were instructed to perform the task as quickly and accurately as possible. At the end, a semi-structured interview was conducted.
+
+§ 4 RESULTS
+
+§ 4.1 TARGET ACQUISITION TIME
+
+We conducted a 4-way repeated-measures ANOVA and found main effects of all four variables on target acquisition time. However, there were interaction effects of StartPos × Distance (F(1.224, 13.463) = 6.028, p < .05, η² = .354, with Greenhouse-Geisser (GG) correction) and of StartPos × Angle (F(1, 11) = 21.776, p < .005, η² = .664). Therefore, we set aside the main effects of StartPos, Angle, and Distance and analyze the interaction effects instead. Since there were no interaction effects involving Width, we consider its main effect (F(2, 22) = 104.241, p < .001, η² = .905). All posthoc tests described below used Bonferroni correction.
+
+
+4.1.1 Effect of Width. Posthoc tests for Width show that the target acquisition times for all three widths differ significantly from each other (p < .001 for all). Figure 4 (left) shows the effect of width on target acquisition time. Thus, the effect of Width is not modulated by the other variables, even though those variables also have significant effects on time.
+
+4.1.2 Effect of StartPos and Distance. Distance and StartPos interact. We conducted 1-way ANOVAs for each of the two StartPos levels, top and bottom, separately to see how distance affects time in each. Figure 4 (middle) shows the interaction plot. The effect of Distance is significant for both top (F(2, 22) = 6.856, p < .01, η² = .384) and bottom (F(1.142, 12.558) = 23.728, p < .001, η² = .683, with GG correction). For top, posthoc tests show that medium-distance targets take significantly less time than small-distance (p < .05) and large-distance targets (p < .01). For bottom, however, both small- and medium-distance targets take significantly less time than large-distance targets (p < .01 and p < .001, respectively).
+
+While the large-distance targets expectedly perform worst, for top, the medium-distance targets are, interestingly, the fastest. This is possibly due to participants selectively tilting the controller depending on the target location. Participants use a combination of hand movement and reorienting the controller in the hand to bring the raycast pointer to the target. Since the medium-distance targets are in the middle of the panel, users can reach them with a combination of orientation change and hand motion. However, since even a slight reorientation can cause a large displacement of the raycast pointer, the nearer targets are easily overshot with a flick of the wrist. With bottom, since the user is moving against gravity, the small and medium distances are comparable, but both are much faster than the large-distance targets.
+
+4.1.3 Effect of StartPos and Angle. The effect of Angle also depends on StartPos. Figure 4 (right) shows the interaction plot. For top, 22.5° takes significantly less time than 67.5° (F(1, 11) = 11.793, p < .01, η² = .517). For bottom, the inverse holds, with 67.5° taking significantly less time than 22.5° (F(1, 11) = 16.201, p < .005, η² = .596). Again owing to gravity, for bottom, the 22.5° angle requires the user to work against gravity more than 67.5°; the reverse holds for top for the same reason.
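This gravity account can be made concrete with a small geometric sketch. Reading the angles of Figure 3 as measured from the panel's (vertical) right edge, the against-gravity travel of a movement of length d from the bottom start is d·cos(θ); this geometric reading is our interpretation, not a model stated in the paper:

```python
import math

def vertical_component(d_cm: float, theta_deg: float) -> float:
    """Vertical travel of a movement of length d at angle theta,
    with theta measured from the panel's vertical right edge."""
    return d_cm * math.cos(math.radians(theta_deg))

# From the bottom start, a 22.5 deg movement fights gravity more than 67.5 deg.
up_steep = vertical_component(18, 22.5)    # ~16.6 cm upward travel
up_shallow = vertical_component(18, 67.5)  # ~6.9 cm upward travel
```

The same asymmetry, flipped in sign, explains why 22.5° is the faster (gravity-assisted) angle from the top start.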
+
+§ 4.2 ERRORS
+
+No variable had a significant effect on error rate. While error rates decreased with increasing width (6.5%, 3.6%, 1.8%), the differences were not significant.
+
+§ 4.3 QUALITATIVE FEEDBACK
+
+Unsurprisingly, the majority of participants reported the bottom starting position to be much more fatiguing. Some participants also mentioned that they felt distance and angle had only a small effect on the difficulty of the task.
+
+
+Figure 4: Target acquisition time results from the evaluation. We show only the main effects and the interaction effects. (left) Mean time vs. Width. (middle) Mean time vs. Distance for both StartPos. (right) Mean time vs. Angle for both StartPos. Error bars are 95% C.I.
+
+§ 5 DISCUSSION
+
+§ 5.1 DESIGN TAKEAWAYS
+
+The results suggest that gravity played a major part even though our experiment design minimized fatigue between conditions. The effect would be more pronounced in longer, fatigue-inducing tasks. Most current HVPs use a cube-style panel with equal vertical and horizontal sizes. One simple way to minimize the effect of gravity would be to design HVPs with larger horizontal widths than vertical heights.
+
+Our distance-based results suggest that minimizing hand motion and instead relying on wrist flicks to move the raycast pointer could help performance (see [26, 27]). Therefore, as opposed to having smaller panels, panel sizes can be increased (at least horizontally) to encourage the use of coarse wrist flicking.
+
+Further, the design needs to minimize motion when the user is performing tasks below the panel (for instance, creating a ground texture) and must move against gravity to reach the HVP. One solution would be to arrange targets on the panel so that high-frequency targets are placed at the bottom, making them easier to reach from below while not overly affecting performance from the top. Another possibility is to retarget the HVP [2] to a lower position while the non-dominant hand remains in the same position, so that the user moves less against gravity to reach the HVP. Retargeting has not been explored in the context of HVPs and could be a useful technique to counter such effects. However, the tradeoff of an increased visuo-haptic disconnect would need to be explored.
+
+Overall, we suggest three takeaways for designers of HVPs, depending on the context: 1) consider panels with large horizontal widths, as opposed to square ones, to counter the effects of gravity and encourage wrist flicking; 2) place high-frequency targets at the bottom of the panel; and 3) investigate retargeting of the HVP, given the same non-dominant hand position, to minimize user motion against gravity.
+
+§ 5.2 BIMANUAL PARALLEL INPUT
+
+While our work indicates some concrete directions to improve the design of HVPs, one aspect that we did not explore in detail is the potential for HVPs to support bimanual parallel input. The HVP is based on Guiard's kinematic chain model [16] for bimanual input, which proposes principles of asymmetric two-handed interface design. However, bimanual input may not always be useful. Buxton et al. [6] investigated parallelism, i.e., the degree to which the two hands work in parallel, and concluded that participants are capable of parallelism and that it improves task performance, but that its use and efficiency depend on the mechanics of the task. Kabbash et al. [19] further showed that a 2D task that follows Guiard's model improves performance, while one that violates the model can worsen bimanual performance. With the HVP, users can potentially move both hands in parallel according to Guiard's kinematic chain model and improve their speed and performance. In addition to retargeting, bimanual parallel input is a promising direction for future exploration.
+
+§ 6 CONCLUSION
+
+The handheld virtual panel is the most popular technique for accessing tools and menus in commercial VR creativity and productivity applications. In this paper, we evaluated target acquisition performance in the HVP as a function of four variables. Our results show that all four have an effect on user performance. While some effects are expected, such as acquisition time decreasing with increasing width, the evaluation also suggests that gravity may be a crucial factor even when fatigue is minimized. Based on the results, we list takeaways to help improve the design of HVPs and indicate paths for future exploration. We believe addressing the limitations of HVPs uncovered in our study will go a long way toward improving the user experience of HVP-based VR applications.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4c98a02556f186496fd2c8a9c7d9e4d891e0d76
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,503 @@
+# Interactive Design of Gallery Walls via Mixed Reality
+
+## ABSTRACT
+
+We present a novel interactive design tool that allows users to create and visualize gallery walls via a mixed reality device. To use our tool, a user selects a wall to decorate and chooses a focal art item. Our tool then helps the user complete their design by optionally recommending additional art items or automatically completing both the selection and placement of additional art items. Our tool holistically considers common design criteria such as alignment, color, and style compatibility in the synthesis of a gallery wall. Through a mixed reality device, such as a Magic Leap One headset, the user can instantly visualize the gallery wall design in situ and can interactively modify the design in collaboration with our tool's suggestion engine. We describe the suggestion engine and its adaptability to users with different design goals. We also evaluate our mixed-reality-based tool for creating gallery wall designs and compare it with a 2D interface, providing insights for devising mixed reality interior design applications.
+
+## ACM Classification Keywords
+
+H.5.m. Information interfaces and presentation (e.g., HCI)
+
+## Author Keywords
+
+Design interfaces; mixed reality; spatial computing
+
+## INTRODUCTION
+
+The advent of mixed reality devices (e.g., Microsoft Hololens, Magic Leap One) gives rise to new and exciting opportunities for spatial computing. The superior immersive visualization and interaction experience provided by these devices promises to change the way interior design is performed as they allow users to instantly preview and modify designs in real spaces.
+
+Interior design has historically been a costly and time-intensive process. The conventional design process involves contemplating fabric swatches and inspirational photos, as well as talking to a designer. A professional designer may use 3D modeling software to preview a design on screen through sophisticated manual operations. The designer's client must then mentally translate what they see on the 2D screen to how the design may look in a real living space. Without convenient means for visualizing and modifying designs, the design process can be tedious and nonintuitive. Such limitations restrict the ability of general users to engage in interior design even though they may have creative ideas.
+
+
+
+Figure 1: (a) A user wearing a Magic Leap One headset designs a gallery wall using our tool. The figure shows what the user sees in mixed reality while designing: the control panels and his gallery wall design overlaid on the real wall. (b) Some gallery walls designed by users with our tool.
+
+In our work, we attempt to address these challenges by devising an interior design tool leveraging the visualization and interaction capabilities of the latest consumer-grade mixed reality devices. Since interior design is a broad area, as an early attempt to investigate such design applications based on mixed reality, we particularly focus on the design of gallery walls, which are common for decorating interior spaces such as living rooms, hotel lobbies, galleries, and other interior spaces. Figure 1 shows a user designing a gallery wall using our tool.
+
+A gallery wall refers to a cluster of wall art items artistically arranged on a wall. A gallery wall commonly contains a focal item near the center of the arrangement that sets the tone for the overall design. The other items are called auxiliary items and are placed around and with reference to the focal item. The auxiliary items are generally compatible with the focal item in terms of color and style. These definitions follow the conventions used by designers in creating a gallery wall design using a traditional workflow.
+
+Our tool is suitable for novice users. Users can directly visualize how the gallery wall design will look on the real wall while interactively creating the design. The visualization and user-interaction components of our tool keep the user in the loop of the design process, allowing quick exploration of desirable designs through trial and error.
+
+Furthermore, by comparing the color and style compatibility of different art items, our suggestion engine lets the user quickly browse many desirable design suggestions generated automatically by our tool, saving the manual and mental effort of browsing a large database of wall art items.
+
+---
+
+Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
+
+Graphics Interface 2020, 21-22 May 2020, Toronto, Ontario, Canada
+
+(C) 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 123-4567-24-567/08/06.
+
+DOI: http://dx.doi.org/10.475/123_4
+
+---
+
+We make the following major contributions in this work:
+
+- Based on interviews with professional designers, we devise a computational approach for facilitating and automating the design of gallery walls, which enables a novel mixed reality interactive design tool.
+
+- We demonstrate how mixed reality technology, which bridges the gap between real-world scene knowledge and design suggestions computed in a virtual setting, can be adopted for interior design. In our case, we particularly demonstrate how such an approach can be applied for designing gallery walls.
+
+- We conduct experiments to evaluate the user experience and performance of using our novel tool for gallery wall design. We also conduct a perceptual study to evaluate the quality of the gallery wall designs created by users with our tool.
+
+We believe these contributions will inspire future research in creating mixed reality interfaces for interior design.
+
+## RELATED WORK
+
+To the best of our knowledge, there is no existing work on using mixed or augmented reality for gallery wall design. Regardless, we briefly review the existing research work and commercial tools relevant to our problem domain.
+
+## Extended Reality for Interior Design
+
+Companies have been exploring the use of virtual, augmented, or mixed reality technologies for creating and visualizing interior design. Matterport uses a 3D camera to capture the color and depth of real-world living spaces, which can be visualized in 3D by users wearing a virtual reality headset. Such an approach finds promising applications for virtual real estate tours. On the other hand, roOomy provides virtual staging services, enabling users to see previews of interior designs via augmented reality devices showing virtual furniture objects overlaid on a real scene. Furniture retailers such as Wayfair also develop virtual and augmented reality experiences with capabilities such as customizing the design of outdoor spaces with furnishings and décor.
+
+Several companies provide web interfaces or mobile applications for designing gallery walls. For example, Shutterfly allows users to upload their photos and arrange them using preset layouts provided by the company. Art.com provides a mobile application that allows users to select individual art items or preconfigured gallery wall layouts from their large wall art collection and visualize them via augmented reality on a mobile device.
+
Recently, several researchers have proposed using extended reality for interior design [2, 10, 14, 22, 31, 30]. Zhang et al. [31] proposed an approach to add furniture items and relight scenes on an RGBD-enabled tablet. Yue et al. [30] developed a mixed reality system that allows users to efficiently edit a scene for applications such as room redecoration. Virtual content also needs to be adapted to fit the current scene: Nuernberger et al. [14] devised a technique to align virtual content with the real world. Chae et al. [2] proposed a space manipulation technique for placing distant objects by dynamically squeezing surrounding spaces in augmented reality, and Lindlbauer and Wilson [10] showed how to manipulate space and time in augmented reality.
+
+Compared to the existing approaches, our tool not only uses extended reality technologies for visualizing gallery wall designs, but also reasons about the spatial and color compatibility. Our tool simplifies the design process by suggesting desirable combinations and placements of art items, taking the color of the wall into account.
+
+## Automated Layout Design
+
Recently, layout design automation has received much research attention. There are previous efforts on automatic 2D graphics layout design [16, 15], poster design [18], website design [17], magazine covers [8], and photo collages [19, 24]. We focus on reviewing automatic scene layout design works.
+
+Merrell et al. [12] proposed an interactive tool for furniture layout design based on interior design guidelines, while Yu et al. [28] devised an optimization framework for automatic furniture layout design. Fisher et al. [6] characterized structural relationships in scenes based on graph kernels and later proposed an example-based approach [5] for synthesizing 3D object arrangements for modeling partial scenes. More recently, Wang et al. [21] applied deep convolutional priors for indoor scene synthesis, while Weiss et al. [23] proposed a physics-based approach for fast and scalable furniture layout synthesis. Such works are mostly focused on the geometrical aspects of populating spaces with furniture. Complementary to this line of work, Chen et al. [3] created a tool called Magic Decorator for automatically assigning materials for objects in indoor scenes. Xu [26, 27] proposed a tool for layout beautification and arrangement. However, none of these approaches considers the stylistic arrangement of wall art items in a scene. We propose a novel approach for modeling common factors such as colors, spatial relationships, and semantic compatibility among art items to generate desirable gallery walls, which could complement the existing automated interior design approaches.
+
+We also note that recently CAD software companies such as Autodesk are applying generative design for automating layout synthesis [13]. Along with this line of work, we believe our generative design tool for semi-automating gallery wall design will also find good practical uses.
+
+## Interactive Scene Modeling
+
+Typically, users want the ability to control, modify, and visualize the design during the design process so that they can infuse their personal stylistic preferences in their designs. As such, interactive modeling tools play an important role in the design process. There is a large body of work on interactive modeling tools. We review recently proposed scene modeling interfaces.
+
Along the direction of suggestive interfaces for scene modeling, Yu et al. proposed a suggestive interface called ClutterPalette [29] that uses object distribution statistics learned from real-world scenes to suggest appropriate furniture items to add to a scene. Matthew et al. [4] devised a context-based search engine for 3D furniture models to add to a scene.
+
+
+
+Figure 2: A gallery wall created by a designer using a conventional workflow. The focal item (in yellow), as well as a diagonal pair (in orange) and a triangular group (in cyan) of auxiliary items are highlighted.
+
+Another line of work focuses on providing users with easy controls for creating objects while modeling a scene. As humans are accustomed to drawing sketches, a promising approach is to devise sketch-based interfaces for modeling scenes. For example, Xu et al. [25] proposed a sketch-based interface for retrieving and placing 3D models in scenes. Recently, Li et al. [9] devised an interface called SweepCanvas that allows users to perform 3D prototyping on an RGB-D image of a partial scene by sketching. Users can conveniently create virtual objects overlaid on top of the point cloud of a real scene.
+
+Compared to the existing interfaces, we propose a novel mixed reality-based interface for designing gallery wall layouts in situ, with the user seeing the design overlaid on top of the target wall. Seeing how the generated design fits into the real world, the user can easily modify the design by a few intuitive operations. We demonstrate in our evaluation experiments that our tool can allow not only designers but also novice users to quickly generate desirable gallery wall designs.
+
+## INTERVIEW WITH DESIGNERS ON WORKFLOW
+
+To devise a computational approach and a practical tool for designing gallery walls, we interviewed 4 professional designers from a large furnishings and décor company to better understand the way professional designers create gallery walls under current practice. Each of the designers has at least 5 years of interior design or staging experience and has designed dozens of gallery walls in their professional capacity.
+
+Figure 2 shows a gallery wall created by a designer. The general process of designing a gallery wall is as follows:
+
+
+
+Figure 3: System overview. Taking a database of art items and a wall as input, our tool suggests a focal art item as the starting point for designing a gallery wall. The user may apply a template to generate an initial design, and then interactively refines the design with the help of our suggestion engine.
+
+1. The designer first observes the style of the room, particularly paying attention to the colors of the wall and the furniture objects that the designer would like to decorate around.
+
+2. The designer explores a database containing many art items and chooses a focal art item, which is to be placed near the center of the gallery wall and serves as the reference for placing other auxiliary art items. The color of the focal art item should be compatible with the wall color.
+
+3. The designer selects and adds the auxiliary art items to the gallery wall. The colors and styles of these art items should contain some variety, yet they should all be compatible with those of the focal art item.
+
4. The designer arranges the auxiliary art items around the focal art item. One common consideration is balance: pairs of similar art items are placed at opposite sides of the focal art item.
+
+5. The designer refines the locations of the art items to achieve proper spacing and alignment between items.
+
+We devise our computational approach based on the above observations to automate the conventional design process. Typically, in creating a gallery wall for a common scene like a living room, it takes about 20 minutes for the designer to select compatible wall art items and another 20 minutes to arrange the items into a good layout.
+
+## SYSTEM OVERVIEW
+
+Our tool is realized as an application that runs on the Magic Leap One mixed reality headset. Figure 1 shows a user designing a gallery wall using our tool. The user wears the headset when using our application to create a gallery wall.
+
The display of the mixed reality headset shows a user interface as well as the virtual gallery wall design overlaid on the real wall, such that the user can instantly preview how the gallery wall design will look on the real wall while designing. The user interacts with the user interface via a handheld controller (e.g., the Magic Leap One's Control) to perform operations such as selecting and dragging. Figure 3 shows an overview of our tool. It consists of two major components: a suggestion engine for generating design suggestions and a user interface for modifying the gallery wall design. The tool is connected to a large database of wall art items (pictures), from which suitable art items are automatically retrieved and suggested to the user.
+
+
+
+Figure 4: Designing a gallery wall with our tool. (a) The input empty wall. (b) The user selects a focal item. (c) The user applies a template for initializing the design. (d) The user modifies the design interactively. (e) The finished gallery wall design.
+
+## Workflow
+
+Figure 4 illustrates the typical user workflow while using our tool for designing a gallery wall. The input is a wall color. The output, which can be reached with as few as three user decisions, is a gallery wall design that goes well with the wall color. Our tool achieves color compatibility by using the wall color to retrieve candidate focal art items whose colors are compatible with the wall color. A user selects a focal art item from these suggestions, picks a focal item size from among offered art sizes, and either chooses a gallery wall template to launch the synthesis of a gallery wall design or browses recommended auxiliary items.
+
+The selection of auxiliary items, whether human-selected or template-selected, begins with the generation of a color palette based on the colors of the wall and the selected focal item. Based on this color palette and the style of the focal item, the system then retrieves from the database a selection of auxiliary art items that are either presented to the user as options or automatically incorporated into a layout.
+
After the initialization, the user can interactively modify the gallery wall design via operations such as dragging and dropping and selecting in the 3D space using a handheld controller. For example, the user can move, resize, replace, add, or remove any art item. Our tool also provides semi-automatic operations to help the user refine the design, for example, by performing automatic snapping of the art items. The design session ends when the user is satisfied with the gallery wall.
+
+## Art Items Data
+
+Database. Our tool is built upon a suggestion engine which helps users browse and make selections from a large database of wall art items while designing their gallery walls. The database, which contains 12,000 wall art items created by
+
+
+
+Figure 5: Samples of wall art items in our database, which contains more than 12,000 art items.
+
professional artists, belongs to a company specializing in the furniture and interior design business. Figure 5 depicts some example art items. The art items are mostly paintings and stylized photographs. Each art item comes in 1 to 5 real-world sizes (from small to large), a physical replica of which can be ordered and hung on a real wall.
+
+Annotations. To compute the compatibility between different art items for making suggestions, each item is annotated with:
+
+1. Colors. The 5 most dominant colors of the art item are extracted by the k-means clustering algorithm and stored.
+
2. Visual Features. 256 visual feature values, which encode the visual style of the art item, are computed by a convolutional neural network. We trained a Siamese network to perform this feature extraction using the aforementioned database. The network takes image pairs as input, where a positive pair consists of images of items from the same category and a negative pair consists of images of items from different categories. Each input image is processed by a modified Inception-ResNet [20], whose last layer is changed to a fully connected layer, resulting in a 256-dimensional output vector. The network was trained to minimize the contrastive loss [7] between the vectors (embeddings) of the input image pairs.
+
| Category | Tags |
| --- | --- |
| Style | American Traditional, Asian Inspired, Beachy, Bohemian & Bold, Cabin / Lodge, Coastal, Cottage / Country, Cottage Americana, Eclectic, Eclectic Modern, French Country, Glam, Global Inspired, Industrial, Mid-Century Modern, Modern & Contemporary, Modern Farmhouse, Modern Rustic, Nautical, Ornate Glam, Ornate Traditional, Posh & Luxe, Rustic, Scandinavian, Sleek & Chic Modern, Traditional, Tropical |
| Subject | Abstract, Bath & Laundry, Buildings & Cityscapes, Cities & Countries, Entertainment, Fantasy & Sci-Fi, Fashion, Floral & Botanical, Food & Beverage, Geometric, Humor, Inspirational, Quotes & Sayings, Landscape & Nature, Maps, Nautical & Beach, People, Spiritual & Religious, Sports & Sports Teams, Transportation |
| Color | Beige, Black, Blue, Brown, Chrome, Clear, Gold, Gray, Green, Orange, Pink, Purple, Red, Silver, Tan, White, Yellow |
+
Table 1: The tags in our database, which are manually assigned to the art items by professional designers. The tags fall into 3 categories, namely, style (27 tags), subject (20 tags), and color (17 tags).
+
+
+
+Figure 6: Examples of focal items retrieved by the suggestion engine based on different wall color schemes.
+
3. Tags. Each item also carries tags manually specified by designers of the company (see Table 1).
+
The similarity between two art items is computed based on the L2 distance between their annotation vectors: the smaller the distance, the more similar the two items are. We find these annotations useful in devising our suggestion engine, as they characterize the art items and are also common criteria used by designers for comparing art items. By formulating our scoring functions using these three types of annotations, we devise an interface that allows the user to flexibly apply filters using a subset of, or all, the three annotation types to retrieve relevant art item suggestions, making it easy to browse the large database of art items.
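The dominant-color annotation above can be sketched with a minimal pure-Python k-means. The function below is our own illustration (the paper does not specify an implementation), and a production pipeline would use a library routine instead:

```python
import random

def kmeans_dominant_colors(pixels, k=5, iters=20, seed=0):
    """Extract the k most dominant colors of an image (list of RGB tuples)
    with a minimal k-means; illustrative only, not the paper's implementation."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (squared Euclidean distance)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        for j, c in enumerate(clusters):
            if c:  # recompute each center as the mean of its cluster
                centers[j] = tuple(sum(ch) / len(c) for ch in zip(*c))
    # order centers by cluster size, most dominant first
    sizes = [sum(1 for p in pixels
                 if min(range(k),
                        key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j]))) == i)
             for i in range(k)]
    return [c for _, c in sorted(zip(sizes, centers), reverse=True)]
```

The returned colors, converted to HSV, would then feed the color-compatibility scores described in the next section.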
+
+## TECHNICAL APPROACH
+
We now provide details of our design suggestion engine. It has two major components: art item suggestions and templates. Using these components, the user can quickly browse the database of art items and select items that fit the wall, as well as obtain a decent spatial arrangement of the items as an initialization of their design.
+
+## Wall Plane and Color
+
+Akin to the conventional workflow for designing a gallery wall, our approach starts with considering the wall color. The user wears the Magic Leap One headset and faces the target wall to be decorated. The wall plane is detected and extracted based on the headset's built-in functionality. The user can manually specify the wall color via a color picker in the user interface, or by using the headset's camera to take a picture of the wall whose average color is taken as the wall color.
+
Based on the wall color, a neighbor color whose hue is offset by $\pm(60^{\circ}$ to $90^{\circ})$ from the wall color's hue is randomly selected from the HSV circular color space. The complementary color of the neighbor color ($180^{\circ}$ from the neighbor color in the HSV circular color space) is also selected. Figure 7 shows an example. The wall color is used as the basis for retrieving other colors for suggesting relevant art items.
+
+
+
+Figure 7: Wall's color palette example.
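The palette construction above can be sketched as follows. How the neighbor and complementary colors inherit saturation and value is not specified in the paper, so this sketch simply keeps the wall's S and V; hues are in degrees, and the function name is our own:

```python
import random

def wall_palette(wall_hsv, rng=random):
    """Build the wall's 3-color palette: the wall color, a neighbor color whose
    hue is offset by +/-(60 to 90) degrees, and the complementary color of that
    neighbor (180 degrees away). Hue in degrees; S and V in [0, 1]."""
    h, s, v = wall_hsv
    offset = rng.uniform(60, 90) * rng.choice([-1, 1])  # random sign and magnitude
    neighbor = ((h + offset) % 360, s, v)
    complementary = ((neighbor[0] + 180) % 360, s, v)
    return [wall_hsv, neighbor, complementary]
```

This 3-color palette is exactly the set $C_{\mathrm{w}}$ used for scoring focal items below.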
+
+## Suggested Focal Items
+
+Our goal in this step is to retrieve and suggest a list of art items from the database as candidate focal art items for the user. Figure 6 shows some suggested focal art items based on different wall colors.
+
To achieve this, the wall color, neighbor color, and complementary color form a 3-color palette, based on which compatible focal art items are selected. The wall compatibility score ${S}_{\text{foc}}$ of a candidate focal art item $\phi$ is defined as:
+
+$$
S_{\text{foc}}(\phi) = 1 - \frac{1}{9}\sum_{\mathbf{c}_{\mathrm{w}} \in C_{\mathrm{w}}} \min\left\{ d(\mathbf{c}_{\mathrm{w}}, \mathbf{c}_{\phi}) \mid \mathbf{c}_{\phi} \in C_{\phi} \right\}, \tag{1}
+$$
+
where $C_{\mathrm{w}}$ is a set containing the wall color, the neighbor color, and the complementary color in the HSV space; $C_{\phi}$ is a set containing the 5 dominant colors of the candidate focal art item $\phi$ in the HSV space. This scoring function evaluates how closely the candidate focal art item $\phi$'s color palette matches the wall's color palette. The closer they are, the higher the wall compatibility score.
+
Note that $\mathbf{c}_{\mathrm{w}}, \mathbf{c}_{\phi} \in \mathbb{R}^{3}$ are colors in the HSV space. $d(\cdot)$ is a distance metric that projects the two colors $\mathbf{c}_{\mathrm{w}}$ and $\mathbf{c}_{\phi}$ into the HSV cone and computes the squared distance between them in that cone [1]:
+
+$$
+d\left( {{\mathbf{c}}_{\mathrm{w}},{\mathbf{c}}_{\phi }}\right) = {\left( \sin \left( {H}_{\mathrm{w}}\right) {S}_{\mathrm{w}}{V}_{\mathrm{w}} - \sin \left( {H}_{\phi }\right) {S}_{\phi }{V}_{\phi }\right) }^{2}
+$$
+
+$$
++ {\left( \cos \left( {H}_{\mathrm{w}}\right) {S}_{\mathrm{w}}{V}_{\mathrm{w}} - \cos \left( {H}_{\phi }\right) {S}_{\phi }{V}_{\phi }\right) }^{2}
+$$
+
+$$
++ {\left( {V}_{\mathrm{w}} - {V}_{\phi }\right) }^{2}\text{,}
+$$
+
where $H \in [0, 2\pi)$, $S \in [0, 1]$, and $V \in [0, 1]$ are the HSV channel values. The range of $d(\mathbf{c}_{\mathrm{w}}, \mathbf{c}_{\phi})$ is $[0, 3]$. Since Equation (1) sums the differences of 3 pairs of colors, a normalization factor of 9 is used.
+
Our approach computes the compatibility scores for all the art items in the database. The top-20 art items are retrieved and displayed in descending order of compatibility score, with the highest-scoring item shown first. The user typically selects a focal art item from this list of suggestions; however, if needed, the user can also explore the database and select any other art item as the focal art item using the Item Panel, which we describe in a later section.
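The focal-item retrieval can be sketched as follows. The HSV cone distance and Equation (1) are implemented as written; the `database` schema (a list of dicts with a `"colors"` key) and the function names are our own illustrative assumptions. Hues are in radians, S and V in [0, 1]:

```python
import math

def cone_dist(c1, c2):
    """Squared distance between two HSV colors after projection into the HSV cone."""
    (h1, s1, v1), (h2, s2, v2) = c1, c2
    return ((math.sin(h1) * s1 * v1 - math.sin(h2) * s2 * v2) ** 2
            + (math.cos(h1) * s1 * v1 - math.cos(h2) * s2 * v2) ** 2
            + (v1 - v2) ** 2)

def focal_score(wall_palette, item_colors):
    """Equation (1): wall compatibility of a candidate focal item.
    wall_palette: 3 HSV colors; item_colors: the item's 5 dominant HSV colors."""
    return 1 - sum(min(cone_dist(cw, cp) for cp in item_colors)
                   for cw in wall_palette) / 9

def suggest_focal(wall_palette, database, top=20):
    """Return the top-scoring items, highest compatibility first."""
    return sorted(database,
                  key=lambda item: focal_score(wall_palette, item["colors"]),
                  reverse=True)[:top]
```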
+
+## Suggested Auxiliary Art Items
+
Akin to the conventional gallery wall design approach, the selected focal art item serves as a reference for the suggestion engine to suggest other compatible auxiliary art items to add to the gallery wall design. To retrieve auxiliary art items from the database as suggestions, an overall compatibility score ${S}_{\text{aux}}$ is computed for each candidate auxiliary art item, which evaluates the style and color compatibility between the auxiliary art item and the selected focal art item:
+
+$$
S_{\text{aux}}(\phi) = w_{\mathrm{c}}\, S_{\text{aux}}^{\mathrm{c}}(\phi) + w_{\mathrm{s}}\, S_{\text{aux}}^{\mathrm{s}}(\phi), \tag{2}
+$$
+
+where ${w}_{\mathrm{c}}$ is the weight of the color compatibility score ${S}_{\text{aux }}^{\mathrm{c}}$ and ${w}_{\mathrm{s}}$ is the weight of the style compatibility score ${S}_{\text{aux }}^{\mathrm{s}}$ .
+
+Color Compatibility Score: A candidate auxiliary art item $\phi$ has a high color compatibility score ${S}_{\text{aux }}^{\mathrm{c}}$ if its colors are close to the dominant colors of the selected focal art item. Specifically, the color compatibility score of auxiliary art item $\phi$ is defined as follows:
+
+$$
S_{\text{aux}}^{\mathrm{c}}(\phi) = 1 - \frac{1}{15}\sum_{\mathbf{c}_{\mathrm{f}} \in C_{\mathrm{f}}} \min\left\{ d(\mathbf{c}_{\mathrm{f}}, \mathbf{c}_{\phi}) \mid \mathbf{c}_{\phi} \in C_{\phi} \right\}, \tag{3}
+$$
+
where $C_{\mathrm{f}}$ and $C_{\phi}$ are sets containing the 5 dominant colors of the selected focal art item and of the auxiliary art item $\phi$, respectively; $\mathbf{c}_{\mathrm{f}}, \mathbf{c}_{\phi} \in \mathbb{R}^{3}$ are colors in the HSV space. As Equation (3) sums the differences of 5 pairs of colors, each with range $[0, 3]$, a normalization factor of 15 is used.
+
+
+
+Figure 8: Art item suggestions. Based on the wall's color palette shown on the left, (a) several color compatible focal art items are suggested. Based on a selected focal art item (highlighted in red), (b) suggested auxiliary art items compatible in color and style. Auxiliary art items suggested by considering (c) only color or (d) only style.
+
This scoring function evaluates how closely the auxiliary art item $\phi$'s color palette matches the selected focal art item's color palette. The closer they are, the higher the score.
+
+Style Compatibility Score: A candidate auxiliary art item $\phi$ has a high style compatibility score ${S}_{\text{aux }}^{\mathrm{s}}$ if its visual feature vector is close to the selected focal art item's visual feature vector. The style compatibility score of auxiliary art item $\phi$ is defined as follows:
+
+$$
S_{\text{aux}}^{\mathrm{s}}(\phi) = 1 - \frac{1}{\sqrt{n}} \left\| \mathbf{v}_{\mathrm{f}} - \mathbf{v}_{\phi} \right\|, \tag{4}
+$$
+
where $\mathbf{v}_{\mathrm{f}}, \mathbf{v}_{\phi} \in \mathbb{R}^{n}$ are the $n$-dimensional visual feature vectors of the focal item and the auxiliary art item $\phi$, computed by the convolutional neural network. In our database, $n = 256$.
+
The overall compatibility score is computed for each art item in the database. The user interface displays the top-20 art items, sorted in descending order of their compatibility scores, as auxiliary art item suggestions. To provide flexibility in retrieving suggestions, our user interface allows the user to turn the consideration of the color or the style compatibility score on and off, which corresponds to setting $w_{\mathrm{c}}$ or $w_{\mathrm{s}}$ to 1 or 0. Figure 8 shows an illustration. The user can also select which of the 5 dominant colors $C_{\mathrm{f}}$ of the selected focal art item to consider in computing the color compatibility score.
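Equations (2)-(4) can be sketched together. The item schema (dicts with `"colors"` and `"features"` keys) is our own assumption, and, as described above, the UI toggles amount to setting the weights to 1 or 0:

```python
import math

def color_score(focal_colors, item_colors):
    """Equation (3): nearest-color distances over the focal item's 5 dominant
    colors, normalized by 15 (5 pairs, each distance in [0, 3])."""
    def cone_dist(c1, c2):
        # squared distance in the HSV cone (H in radians, S and V in [0, 1])
        (h1, s1, v1), (h2, s2, v2) = c1, c2
        return ((math.sin(h1) * s1 * v1 - math.sin(h2) * s2 * v2) ** 2
                + (math.cos(h1) * s1 * v1 - math.cos(h2) * s2 * v2) ** 2
                + (v1 - v2) ** 2)
    return 1 - sum(min(cone_dist(cf, cp) for cp in item_colors)
                   for cf in focal_colors) / 15

def style_score(v_focal, v_item):
    """Equation (4): 1 minus the L2 distance of the embeddings, scaled by sqrt(n)."""
    n = len(v_focal)
    return 1 - math.sqrt(sum((a - b) ** 2 for a, b in zip(v_focal, v_item))) / math.sqrt(n)

def aux_score(focal, item, w_c=1.0, w_s=1.0):
    """Equation (2): weighted overall compatibility of a candidate auxiliary item."""
    return (w_c * color_score(focal["colors"], item["colors"])
            + w_s * style_score(focal["features"], item["features"]))
```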
+
+## Templates
+
To allow the user to quickly generate an initial gallery wall design based on a focal item, our tool provides preset templates that the user can choose from and apply. Figure 12 shows some example templates that our tool provides. These templates encode spatial relationships of art items that are commonly applied by gallery wall designers. A template arranges groups of auxiliary art items symmetrically about and around the focal art item, akin to the gallery wall designs created by the conventional design workflow. Figure 9(b) shows the 4 templates (with 5, 7, 9, and 11 items) that we provide with our tool in our experiments. These templates resemble the common patterns used by designers, as we learned from the interviews. Auxiliary items within a group have compatible color and style by default (Equation (2)).
+
+
+
+Figure 9: User interface of our tool via which a user wearing a mixed reality headset visualizes and designs a gallery wall. It consists of three components: (a) Design Canvas; (b) Template Panel; and (c) Item Panel.
+
Initializing a Gallery Wall Design. Figure 10 illustrates how a template is applied to generate a gallery wall design. According to the layout of the chosen template, starting from the root (the focal art item), our approach inserts auxiliary art items that are compatible with the focal art item, group by group. Specifically, the items are added on the circumference of a circle centered at the focal item with a random radius $r \in [0.5d, 3d]$, where $d$ is the diagonal length of the focal item. A pair of two auxiliary items is added at opposite sides of the circle. A group of three items is added at the vertices of a randomly oriented equilateral triangle inscribed in the circle. The gallery wall design generation finishes when all groups of auxiliary art items have been placed. The generated design is taken as an initial design, which the user can then modify interactively.
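The geometric placement rule above can be sketched directly. `place_group` is a hypothetical helper; the radius sampling within $[0.5d, 3d]$ and the pair/triangle arrangements follow the text, while collision handling between groups is omitted:

```python
import math
import random

def place_group(focal_center, focal_diag, group_size, rng=random):
    """Place one template group of auxiliary items on a circle around the focal
    item: a pair at opposite sides of the circle, a triple at the vertices of a
    randomly oriented equilateral triangle inscribed in the circle."""
    cx, cy = focal_center
    r = rng.uniform(0.5 * focal_diag, 3 * focal_diag)  # radius r in [0.5d, 3d]
    start = rng.uniform(0, 2 * math.pi)                # random orientation
    step = 2 * math.pi / group_size                    # pi for pairs, 2*pi/3 for triples
    return [(cx + r * math.cos(start + i * step),
             cy + r * math.sin(start + i * step)) for i in range(group_size)]
```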
+
+Spatial Refinement. Our approach refines the spatial relationships between the items after the initialization and after every user interaction with the gallery wall design such as adding an item, removing an item, and moving an item.
+
+Snapping: To keep the gallery wall design compact, by default, all auxiliary items steer toward the focal item at the center while they maintain a certain minimum space between each other to avoid overlapping.
+
+Alignment: To keep the gallery wall design neat and uncluttered, by default, our approach aligns neighboring art items either horizontally or vertically by their edges so long as the alignment does not cause overlapping. (Figure 11(b))
+
We include more examples of interactive modification operations in the supplementary video.
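The snapping refinement can be sketched as follows, assuming axis-aligned rectangular items given as (center x, center y, width, height); `min_gap` and `step` are illustrative values, not taken from the paper:

```python
import math

def rect_gap(a, b):
    """Axis-aligned separation between two rectangles (cx, cy, w, h);
    0 means touching or overlapping."""
    dx = max(0.0, abs(a[0] - b[0]) - (a[2] + b[2]) / 2)
    dy = max(0.0, abs(a[1] - b[1]) - (a[3] + b[3]) / 2)
    return math.hypot(dx, dy)

def snap_toward(item, neighbor, min_gap=0.05, step=0.01):
    """Steer `item` toward `neighbor`'s center until the minimum spacing between
    the two rectangles is reached (a simplified sketch of the snapping behavior)."""
    x, y, w, h = item
    while rect_gap((x, y, w, h), neighbor) > min_gap:
        d = math.hypot(neighbor[0] - x, neighbor[1] - y)
        if d < step:  # already at the neighbor's center; stop
            break
        # move one step along the direction toward the neighbor's center
        x += step * (neighbor[0] - x) / d
        y += step * (neighbor[1] - y) / d
    return (x, y, w, h)
```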
+
+## USER INTERACTION
+
Figure 9 shows the user interface of our tool, which is displayed in mixed reality. It consists of three components: a) the Design Canvas, where the user can interactively modify the current gallery wall design visualized on the real wall; b) the Template Panel, where the user can select and apply a preset template for synthesizing an initial gallery wall design; and c) the Item Panel, where the user can retrieve art items from the database by specifying different criteria. Each component allows users to refine a gallery wall design conveniently. We describe them in the following.

Figure 10: Applying a template to generate a gallery wall design. (a) A template and its tree structure. The auxiliary items are placed symmetrically about the focal item at the center. (b) A gallery wall created by applying this template.

Figure 11: The user can interactively modify a design in mixed reality using the functionalities of our user interface.
+
+## Design Canvas
+
+The design canvas visualizes and overlays the current gallery wall design on the real wall via the mixed reality headset's display. It also provides support for interactively adjusting both art items and the gallery wall layout:
+
- Add. The user selects an art item in the current design and retrieves several art items from the database that are compatible in terms of dominant colors, visual features, and tags, which they can add to the current design.
+
+- Replace. The user replaces an art item with another compatible art item from the database. (Figure 11(a))
+
+- Move. The user moves an art item by dragging it.
+
+- Resize. The user chooses another size for an art item.
+
+- Remove. The user removes an art item.
+
+## Template Panel
+
The Template Panel allows the user to quickly generate an initial gallery wall design with items decently placed. It provides a number of preset templates that the user can apply to synthesize a gallery wall design based on a selected focal art item. It also provides other functionalities that enable automatic refinement of the spatial layout of the current design. The supported functionalities are:
+
+- Apply a Template. Based on a placed focal art item, the user applies a template to synthesize a gallery wall design.
+
- Add Random Group. Our tool automatically adds a group of 2 or 3 auxiliary art items, which are compatible with the focal art item, to the current design. The group of auxiliary items is symmetric about the focal item (Figure 12).

Figure 12: The templates we use in our system. The blue item represents the focal item. The colored items represent the groups of 2 or 3 auxiliary items.
+
- Align All. The user triggers our tool to align all the art items with respect to each other. The alignment is done along the horizontal (left, right, or center) or vertical (top, bottom, or center) direction. This functionality comes in handy because it can be tiring and difficult for users to perform multiple precise adjustments in the 3D space [11] using a handheld controller.
+
+- Snap. If enabled, an art item is snapped to its neighbor item as the user drags the art item around, i.e., it will steer toward the center of its neighbor item until a minimum spacing between the two items is reached (Figure 13).
+
+- Clear Wall. All the art items are removed from the design.
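One slice of the "Align All" behavior above can be sketched as a 1-D grouping of edge coordinates. This is our own simplified illustration (the tool also aligns right/center edges and vertical edges), and `tol` is an assumed threshold:

```python
def align_left_edges(xs, tol=0.1):
    """Greedy 1-D alignment: group left-edge coordinates that lie within `tol`
    of their nearest neighbor and snap each group to its mean."""
    if not xs:
        return []
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = list(xs)
    group = [order[0]]
    for i in order[1:]:
        if xs[i] - xs[group[-1]] <= tol:
            group.append(i)  # close enough: extend the current group
        else:
            m = sum(xs[j] for j in group) / len(group)
            for j in group:
                out[j] = m   # snap the finished group to its mean
            group = [i]
    m = sum(xs[j] for j in group) / len(group)
    for j in group:
        out[j] = m
    return out
```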
+
+## Item Panel
+
The Item Panel is connected to the suggestion engine and the database of art items. Its primary function is to display a relevant list of art items that the user can add to the gallery wall design as the focal art item or as auxiliary art items. The panel contains buttons that the user can click to set the criteria for retrieving relevant art items from the database. For example, the user can select whether to use color, style, or both as criteria for determining compatibility between items. The user can also select which color(s) out of the 5 dominant colors of the focal art item to use for determining color compatibility. The supported functionalities are:
+
+- Update Wall's Color Palette. Our tool re-generates the neighbor and complementary colors of the wall's color palette.
+
- Find Focal. Based on the wall's color palette, our tool retrieves several compatible art items as focal art item suggestions (according to Equation (1)).
+
+- Find Auxiliary. The user selects criteria (e.g., colors, visual features, tags) based on which our tool retrieves 20 compatible art items as auxiliary art item suggestions. By default, the art items are sorted by their compatibility scores.
+
+## USER EVALUATION
+
+We developed our tool using C# on the Unity Game Engine installed with the Magic Leap Lumin SDK. We deployed our tool onto a Magic Leap One headset which we used for our user evaluation experiments.
+
+User Groups. We recruited two different groups of users to evaluate our tool.
+
+
+
+Figure 13: Snapping example. (a) The user drags an art item, which is (b) snapped toward the center of its neighbor item until a minimum spacing between the two is reached.
+
Group 1: The first group was recruited to evaluate the user experience of designing a gallery wall using our mixed reality interface on the Magic Leap One versus using a 2D interface that mimics a traditional design tool on a laptop. We recruited 17 participants, all employees of a company, consisting of 12 males and 5 females, aged 20 to 45 (average 32). None of the participants had prior experience with the Magic Leap One headset. Each participant designed 2 gallery walls, under Condition MR and Condition 2D in a random order.
+
Group 2: The second group was recruited to evaluate the user experience of designing a gallery wall with and without the template functionality. We recruited 24 participants, all college students, consisting of 16 males and 8 females, aged 19 to 24 (average 22). Each participant designed 2 gallery walls, under Condition 2D and Condition 2DNT in a random order.
+
+Conditions. We asked the participants to create gallery wall designs. The goal of each task was to design a gallery wall that fits with a living room with a pale gray wall and a blue sofa as shown in Figure 9.
+
+- Condition MR: The participant used our tool delivered through a mixed reality interface to create a gallery wall.
+
- Condition 2D: The participant used our tool delivered through a 2D interface to create a gallery wall.
+
+- Condition 2DNT: The participant used our tool delivered through a 2D interface to create a gallery wall with no template functionality. That is, the "Apply a Template" and "Add Random Group" buttons under the Template Panel were disabled.
+
+Note that both Condition 2D and Condition 2DNT used a background image showing a pale gray wall and a blue sofa. We include the 2D interface we used for the user evaluation in the supplementary material.
+
+Procedure. Before each task, we briefed and trained the participant in creating a gallery wall design using our tool. We asked the participant to follow a 5-minute tutorial which guided them through using our tool to create a gallery wall design step by step. Note that we showed the participant the tutorial whether they were tasked with creating a gallery wall using the mixed reality interface (Condition MR) or the 2D interface (Condition 2D). The participant could ask any question about the user interface to make sure they were familiar with using it.
+
+
+
+Figure 14: Example gallery wall designs created by participants in our user evaluation pool.
+
+The participant was then asked to create a gallery wall design that fits with the living room under a given condition. For Condition 2D and Condition 2DNT, the participant was presented with a photo of the living room when designing a gallery wall using the 2D interface (please refer to the supplementary material for a screenshot of the interface). Our tool tracked the interaction metrics for later analysis.
+
+Figure 14 shows some gallery walls designed by the participants. Our supplementary material contains the results created by all participants.
+
+## EXPERIMENT RESULTS
+
+We discuss the user evaluation results with regard to performance, usage, and user feedback. We use t-tests to evaluate whether there are significant differences between the results obtained under different pairs of conditions, reporting the p-values. We show our results in box plots for easy interpretation. Our supplementary material contains the numeric results and all designs created by participants under different conditions.
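As a concrete sketch of this analysis, the t statistic and degrees of freedom can be computed with Welch's unequal-variance formulation; the paper does not state which t-test variant was used, so this choice is an assumption:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples (e.g., click counts under two conditions)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df
```

The returned t and df would then be looked up against the t distribution to obtain a p-value.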
+
+## Performance
+
+We tracked the performance of retrieving items from the large database we used in our experiments on a desktop computer equipped with an i7-7700K 4.2GHz CPU, 16GB RAM, and an NVIDIA GeForce GTX 1080 8GB graphics card. Retrieving focal items took 3.12 seconds and retrieving auxiliary items took 1.21 seconds. All other user interface operations ran at an interactive rate.
+
+Our tool tracks the participant's performance under each given condition. Here are the performance metrics tracked:
+
+
+
+Figure 15: Performance results in different settings. Colored dots and bars show the means and medians. The p-value of the t-test computed between the results of the two conditions in each group is shown. P-values smaller than 0.05, which reject the null hypothesis, are bolded.
+
+- Number of Clicks: The total number of times the participant clicked on a user interface component.
+
+- Number of Movements: The total number of times the participant adjusted the position of an art item.
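A minimal sketch of this per-session metric tracking (the class and event names are illustrative assumptions, not the tool's actual API):

```python
from collections import Counter

class InteractionLog:
    """Counts the two performance metrics above for one design session."""
    def __init__(self):
        self.counts = Counter()

    def record(self, event):
        # event is "click" (UI component clicked) or "move" (item repositioned)
        self.counts[event] += 1

    def summary(self):
        return {"clicks": self.counts["click"],
                "movements": self.counts["move"]}
```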
+
+Mixed Reality versus 2D Interface. As the Group 1 results in Figure 15 show, under the MR condition, where the participants designed via a Magic Leap One headset, they made fewer clicks (p < 0.01) and movements (p = 0.04) compared to the 2D condition, where they designed via a 2D interface.
+
+Under the MR condition, the participants could see their design directly visualized on the real wall, which might result in fewer adjustments. In our perceptual study, we found no significant difference between the visual quality of the designs created under the MR and 2D conditions. Overall, the direct visualization brought about by mixed reality allowed the participants to create gallery wall designs of similar visual quality (compared to designs created on a 2D screen) with fewer manual adjustments.
+
+
+
+Figure 16: Usage statistics of the items placed by a template (Section 5.4) under different conditions.
+
+Templates versus No Templates. As the Group 2 results in Figure 15(a) show, under the 2D condition, where the participants created designs with the aid of templates, they made fewer clicks (p < 0.01) compared to the 2DNT condition, where they created designs without the aid of templates. The template functionality helped them create designs more efficiently.
+
+## Usage
+
+Our tool also tracks the usage of the design suggestions generated by the Template Panel or Item Panel. Here are the metrics used:
+
+- Template Item Removed: If a participant applied a template for initializing a gallery wall design, the percentage of items generated by the template that they removed.
+
+- Template Item Modified: If a participant applied a template for initializing a gallery wall design, the percentage of items generated by the template that they modified.
+
+- Suggested Item Usage: The percentage of items in the final gallery wall design that were chosen among the top-20 suggestions from the Item Panel retrieved according to the user's specified criteria.
+
+- Selected Item's Rank: The average rank of the items the participants selected from the suggestions of the Item Panel.
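The four usage metrics above can be computed from simple session logs. The sketch below assumes hypothetical inputs (sets of item ids and a rank lookup), not the tool's actual data format:

```python
def usage_metrics(template_ids, removed_ids, modified_ids,
                  final_items, suggestion_rank, top_k=20):
    """template_ids: items placed by the applied template (a set).
    removed_ids / modified_ids: template items the user removed / modified.
    final_items: item ids present in the final design.
    suggestion_rank: item id -> 1-based rank in the Item Panel suggestions,
    for items that were taken from the suggestions."""
    ranks = [suggestion_rank[i] for i in final_items
             if suggestion_rank.get(i) is not None and suggestion_rank[i] <= top_k]
    return {
        "template_removed_pct": 100 * len(removed_ids & template_ids) / len(template_ids),
        "template_modified_pct": 100 * len(modified_ids & template_ids) / len(template_ids),
        "suggested_item_usage_pct": 100 * len(ranks) / len(final_items),
        "selected_item_avg_rank": sum(ranks) / len(ranks) if ranks else None,
    }
```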
+
+Template Items. Figure 16 shows the usage statistics of the items placed by a template under different conditions. As Figure 16(a) shows, the participants removed about 20% and 28% of the items placed by the templates under the MR and 2D conditions respectively. As a template places highly similar items in a design, it seems that the participants tended to replace a few items with other, less similar items to introduce some variation or contrast into the overall design. On the other hand, as Figure 16(b) shows, on average only about 13% and 8% of the items were modified under the MR and 2D conditions respectively. Overall, the participants kept a majority of the items placed by a template in their final gallery wall design.
+
+
+
+Figure 17: Usage statistics of the items suggested by the Item Panel (Section 6.3) under different conditions.
+
+Suggested Items. Figure 17 shows usage statistics of items suggested by the Item Panel. As Figure 17(a) shows, under the MR and 2D conditions, about 82% and 76% of the items used in the final design were chosen from the top-20 suggestions in the Item Panel retrieved according to the participants' specified criteria. The Group 2 results show that there was a significant difference (p < 0.01) between suggested item usage under the 2D and 2DNT conditions. Under the 2DNT condition, when the participants could not use a template to initialize a gallery wall design, they tended to use items from the database more randomly.
+
+On the other hand, as Figure 17(b) shows, the average rank of the selected items ranges from about 8 to 10 under different conditions. It seems that, in using the Item Panel, the participants tended to choose items that matched their specified criteria, but not in a very strict sense.
+
+## User Feedback
+
+We talked to the participants after the experiments. They generally praised the visualization brought about by our mixed reality tool, which shows the design on the real wall and made the interactive design process more intuitive compared to a typical computer screen, as they could directly see how their design would fit with the real space. On the downside, some participants reported initial user experience challenges while acclimating themselves to the device's field-of-view and headset fit. We include the participants' comments in our supplementary material.
+
+We also conducted a perceptual study using a two-alternative forced-choice protocol to evaluate the quality of the gallery wall designs created by the participants under two different conditions (MR & 2D, or 2D & 2DNT). We include the details of the perceptual study in the supplementary material.
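Preference in such a two-alternative forced-choice tally is commonly assessed against chance with an exact binomial test; the sketch below is one plausible formulation (the paper does not state which test the perceptual study used):

```python
from math import comb

def two_afc_p_value(prefer_a, total):
    """Two-sided exact binomial test of a 2AFC tally against chance (p = 0.5):
    prefer_a of total judges preferred design A over design B."""
    k = max(prefer_a, total - prefer_a)  # count in the more extreme direction
    tail = sum(comb(total, i) for i in range(k, total + 1)) / 2 ** total
    return min(1.0, 2 * tail)
```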
+
+## SUMMARY
+
+We proposed a novel mixed reality-based interactive design tool for gallery walls. By overlaying a virtual gallery wall on a real wall, our tool allows users to directly visualize their design in the real world as an integrated part of the creative process. Our suggestive design interface allows users to retrieve stylistically compatible items for creating their desired gallery walls.
+
+## Limitations
+
+We used a Magic Leap One headset for experimenting with our tool. The hardware still has limitations for consumer use. For example, the field-of-view is still a bit narrow for the user to see the overall design and the user interface at once; the user has to look around to see different things, which could be inconvenient.
+
+It could be tiring to use the handheld controller for 3D interaction and manipulation in 3D space for a prolonged period of time. Because of this, it would be challenging to use a sophisticated design tool that requires tedious and precise user input in mixed reality.
+
+We believe that the visualization of the design in the real space by mixed reality may benefit users in envisioning and communicating their designs, allowing them to design in the real space directly and intuitively. In the future, it would be worthwhile to extend our mixed reality design tool to consider richer context of the 3D scene to simplify other interior design tasks, such as suggesting furniture placement. Performing advanced scene analysis in real time for enabling interior design applications in mixed reality presents a practical challenge, which could be resolved as the computational power of mixed reality devices continues to increase.
+
+Due to the scope of this project, we focus on a subset of interior design, namely gallery wall designs. We show that a mixed reality approach to gallery wall design is feasible. Our approach could be extended to consider 3D decorations (Figure 18), though they are less common and we did not consider them in our current approach.
+
+It would be helpful to have a large database of gallery wall designs from which our tool could learn the spatial relationships and compatibility between different art items in a gallery wall design, and use such knowledge for synthesizing new designs.
+
+## Future Work
+
+With the advances of artificial intelligence and natural language processing techniques, we hope that such techniques can be adopted in mixed reality for enabling natural and convenient user interaction experiences in the interior design process. For example, it would be helpful if a mixed reality interior design software application could understand voice commands from the user to decorate a space, minimizing the need for manual user input. With such advances, using a mixed reality software application for interior design will be as natural as talking to an interior designer who can instantly visualize the design for the user through mixed reality. We believe this is an exciting goal for enabling human-AI collaboration in design.
+
+For commercial purposes, art items could be further annotated with non-aesthetic tags, such as their prices, frequencies of being viewed or selected, and the semantic meanings they carry. Incorporating such additional tags could provide more desirable item recommendations while a user is creating their gallery wall designs.
+
+
+
+Figure 18: Example of a gallery wall design containing a 3D decoration object (a lion head) that can be visualized in mixed reality using our tool.
+
+## REFERENCES
+
+1. Dimitrios Androutsos, K. N. Plataniotis, and Anastasios N. Venetsanopoulos. 1998. Distance measures for color image retrieval. In Proceedings of the International Conference on Image Processing (ICIP), Vol. 2. IEEE, 770-774.
+
+2. Han Joo Chae, Jeong-in Hwang, and Jinwook Seo. 2018. Wall-based Space Manipulation Technique for Efficient Placement of Distant Objects in Augmented Reality. In The 31st Annual ACM Symposium on User Interface Software and Technology. ACM, 45-52.
+
+3. Kang Chen, Kun Xu, Yizhou Yu, Tian-Yi Wang, and Shi-Min Hu. 2015. Magic decorator: automatic material suggestion for indoor digital scenes. ACM Transactions on Graphics (TOG) 34, 6 (2015), 232.
+
+4. Matthew Fisher and Pat Hanrahan. 2010. Context-based search for 3D models. In ACM Transactions on Graphics (TOG), Vol. 29. ACM, 182.
+
+5. Matthew Fisher, Daniel Ritchie, Manolis Savva, Thomas Funkhouser, and Pat Hanrahan. 2012. Example-based synthesis of 3D object arrangements. ACM Transactions on Graphics (TOG) 31, 6 (2012), 1-11.
+
+6. Matthew Fisher, Manolis Savva, and Pat Hanrahan. 2011. Characterizing structural relationships in scenes using graph kernels. In ACM Transactions on Graphics (TOG), Vol. 30. ACM, 34.
+
+7. Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2. IEEE, 1735-1742.
+
+8. Ali Jahanian, Jerry Liu, Qian Lin, Daniel Tretter, Eamonn O'Brien-Strain, Seungyon Claire Lee, Nic Lyons, and Jan Allebach. 2013. Recommendation system for automatic design of magazine covers. In Proceedings of the International Conference on Intelligent User Interfaces. ACM, 95-106.
+
+9. Yuwei Li, Xi Luo, Youyi Zheng, Pengfei Xu, and Hongbo Fu. 2017. SweepCanvas: Sketch-based 3D Prototyping on an RGB-D Image. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 387-399.
+
+10. David Lindlbauer and Andy D Wilson. 2018. Remixed reality: Manipulating space and time in augmented reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 129.
+
+11. Ian Scott Mackenzie. 1992. Fitts' law as a performance model in human-computer interaction. Ph.D. Dissertation. University of Toronto.
+
+12. Paul Merrell, Eric Schkufza, Zeyang Li, Maneesh Agrawala, and Vladlen Koltun. 2011. Interactive furniture layout using interior design guidelines. In ACM Transactions on Graphics (TOG), Vol. 30. ACM, 87.
+
+13. Danil Nagy, Damon Lau, John Locke, Jim Stoddart, Lorenzo Villaggi, Ray Wang, Dale Zhao, and David Benjamin. 2017. Project Discover: An application of generative design for architectural space planning. In SimAUD 2017 Conference proceedings: Symposium on Simulation for Architecture and Urban Design.
+
+14. Benjamin Nuernberger, Eyal Ofek, Hrvoje Benko, and Andrew D Wilson. 2016. SnapToReality: Aligning augmented reality to the real world. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 1233-1244.
+
+15. Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann. 2014. Learning layouts for single-page graphic designs. IEEE Transactions on Visualization and Computer Graphics 20, 8 (2014), 1200-1213.
+
+16. Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann. 2015. DesignScape: Design with interactive layout suggestions. In Proceedings of the 33rd annual ACM Conference on Human Factors in Computing Systems. ACM, 1221-1224.
+
+17. Xufang Pang, Ying Cao, Rynson WH Lau, and Antoni B Chan. 2016. Directing user attention via visual flow on web designs. ACM Transactions on Graphics (TOG) 35, 6 (2016), 240.
+
+18. Yuting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2016. Learning to Generate Posters of Scientific Papers. In AAAI. 51-57.
+
+19. Carsten Rother, Lucas Bordeaux, Youssef Hamadi, and Andrew Blake. 2006. Autocollage. In ACM Transactions on Graphics (TOG), Vol. 25. ACM, 847-852.
+
+20. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence.
+
+21. Kai Wang, Manolis Savva, Angel X Chang, and Daniel Ritchie. 2018. Deep convolutional priors for indoor scene synthesis. ACM Transactions on Graphics (TOG) 37, 4 (2018), 70.
+
+22. Ariel Weingarten, Ben Lafreniere, George Fitzmaurice, and Tovi Grossman. 2019. DreamRooms: Prototyping Rooms in Collaboration with a Generative Process. In Proceedings of the 45th Graphics Interface Conference on Proceedings of Graphics Interface 2019. Canadian Human-Computer Communications Society, 1-9.
+
+23. Tomer Weiss, Alan Litteneker, Noah Duncan, Masaki Nakada, Chenfanfu Jiang, Lap-Fai Yu, and Demetri Terzopoulos. 2018. Fast and Scalable Position-Based Layout Synthesis. IEEE Transactions on Visualization and Computer Graphics (2018).
+
+24. Jun Xiao, Xuemei Zhang, Phil Cheatle, Yuli Gao, and C Brian Atkins. 2008. Mixed-initiative photo collage authoring. In Proceedings of the 16th ACM International Conference on Multimedia. ACM, 509-518.
+
+25. Kun Xu, Kang Chen, Hongbo Fu, Wei-Lun Sun, and Shi-Min Hu. 2013. Sketch2Scene: sketch-based co-retrieval and co-placement of 3D models. ACM Transactions on Graphics (TOG) 32, 4 (2013), 123.
+
+26. Pengfei Xu, Hongbo Fu, Takeo Igarashi, and Chiew-Lan Tai. 2014. Global beautification of layouts with interactive ambiguity resolution. In Proceedings of the 27th annual ACM Symposium on User interface software and technology. ACM, 243-252.
+
+27. Pengfei Xu, Hongbo Fu, Chiew-Lan Tai, and Takeo Igarashi. 2015. GACA: Group-aware command-based arrangement of graphic elements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2787-2795.
+
+28. Lap-Fai Yu, Sai Kit Yeung, Chi-Keung Tang, Demetri Terzopoulos, Tony F. Chan, and Stanley Osher. 2011. Make it home: automatic optimization of furniture arrangement. ACM Transactions on Graphics (TOG) 30, 4 (2011), 86.
+
+29. Lap-Fai Yu, Sai-Kit Yeung, and Demetri Terzopoulos. 2016. The clutterpalette: An interactive tool for detailing indoor scenes. IEEE Transactions on Visualization and Computer Graphics 22, 2 (2016), 1138-1148.
+
+30. Ya-Ting Yue, Yong-Liang Yang, Gang Ren, and Wenping Wang. 2017. SceneCtrl: Mixed Reality Enhancement via Efficient Scene Editing. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 427-436.
+
+31. Edward Zhang, Michael F Cohen, and Brian Curless. 2016. Emptying, refurnishing, and relighting indoor spaces. ACM Transactions on Graphics (TOG) 35, 6 (2016), 174.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..32da7cd06edcde0b851bdea281ab14d507bf8f7a
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/YnoenwVEiWS/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,450 @@
+§ INTERACTIVE DESIGN OF GALLERY WALLS VIA MIXED REALITY
+
+§ ABSTRACT
+
+We present a novel interactive design tool that allows users to create and visualize gallery walls via a mixed reality device. To use our tool, a user selects a wall to decorate and chooses a focal art item. Our tool then helps the user complete their design by optionally recommending additional art items or automatically completing both the selection and placement of additional art items. Our tool holistically considers common design criteria such as alignment, color, and style compatibility in the synthesis of a gallery wall. Through a mixed reality device, such as a Magic Leap One headset, the user can instantly visualize the gallery wall design in situ and can interactively modify the design in collaboration with our tool's suggestion engine. We describe the suggestion engine and its adaptability to users with different design goals. We also evaluate our mixed-reality-based tool for creating gallery wall designs and compare it with a 2D interface, providing insights for devising mixed reality interior design applications.
+
+§ ACM CLASSIFICATION KEYWORDS
+
+H.5.m. Information interfaces and presentation (e.g., HCI)
+
+§ AUTHOR KEYWORDS
+
+Design interfaces; mixed reality; spatial computing
+
+§ INTRODUCTION
+
+The advent of mixed reality devices (e.g., Microsoft Hololens, Magic Leap One) gives rise to new and exciting opportunities for spatial computing. The superior immersive visualization and interaction experience provided by these devices promises to change the way interior design is performed as they allow users to instantly preview and modify designs in real spaces.
+
+Interior design has historically been a costly and time-intensive process. The conventional design process involves contemplating fabric swatches and inspirational photos, as well as talking to a designer. A professional designer may make use of 3D modeling software to preview a design on screen through sophisticated manual operations. The designer's client must then mentally translate what they see on the 2D screen to how the design may look in a real living space. Without convenient means for visualizing and modifying designs, the design process can be tedious and nonintuitive. Such limitations restrict the ability of general users to engage in interior design even though they may have creative ideas.
+
+
+Figure 1: (a) A user wearing a Magic Leap One headset designs a gallery wall using our tool. The figure shows what the user sees in mixed reality while designing: the control panels and his gallery wall design overlaid on the real wall. (b) Some gallery walls designed by users with our tool.
+
+In our work, we attempt to address these challenges by devising an interior design tool leveraging the visualization and interaction capabilities of the latest consumer-grade mixed reality devices. Since interior design is a broad area, as an early attempt to investigate such design applications based on mixed reality, we particularly focus on the design of gallery walls, which are commonly used to decorate interior spaces such as living rooms, hotel lobbies, and galleries. Figure 1 shows a user designing a gallery wall using our tool.
+
+A gallery wall refers to a cluster of wall art items artistically arranged on a wall. A gallery wall commonly contains a focal item near the center of the arrangement that sets the tone for the overall design. The other items are called auxiliary items and are placed around and with reference to the focal item. The auxiliary items are generally compatible with the focal item in terms of color and style. These definitions follow the conventions used by designers in creating a gallery wall design using a traditional workflow.
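The focal/auxiliary structure described above suggests a simple data model. The fields below (physical dimensions, dominant color, style tag) are illustrative assumptions rather than our tool's actual internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class ArtItem:
    item_id: int
    width: float              # physical size in metres
    height: float
    dominant_color: tuple     # (r, g, b), used for compatibility checks
    style: str                # e.g. "abstract", "botanical"

@dataclass
class GalleryWall:
    focal: ArtItem                                  # sets the tone, near the center
    auxiliary: list = field(default_factory=list)   # placed around the focal item
```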
+
+Our tool is suitable for novice users. Users are able to directly visualize how the gallery wall design will look on the real wall while interactively creating the design. The visualization and user-interaction components of our tool keep the user in the loop of the design process, allowing quick exploration of desirable designs through trial and error.
+
+Furthermore, by comparing the color and style compatibility between different art items, our suggestion engine can let the user quickly browse through many desirable design suggestions automatically generated by our tool, hence saving the manual and mental efforts involved in browsing through a large database of wall art items.
+
+Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
+
+Graphics Interface 2020, 21-22 May 2020, Toronto, Ontario, Canada
+
+(C) 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 123-4567-24-567/08/06.
+
+DOI: http://dx.doi.org/10.475/123_4
+
+We make the following major contributions in this work:
+
+ * Based on interviews with professional designers, we devise a computational approach for facilitating and automating the design of gallery walls, which enables a novel mixed reality interactive design tool.
+
+ * We demonstrate how mixed reality technology, which bridges the gap between real-world scene knowledge and design suggestions computed in a virtual setting, can be adopted for interior design. In our case, we particularly demonstrate how such an approach can be applied for designing gallery walls.
+
+ * We conduct experiments to evaluate the user experience and performance of using our novel tool for gallery wall design. We also conduct a perceptual study to evaluate the quality of the gallery wall designs created by users with our tool.
+
+We believe these contributions will inspire future research in creating mixed reality interfaces for interior design.
+
+§ RELATED WORK
+
+To the best of our knowledge, there is no existing work on using mixed or augmented reality for gallery wall design. Nevertheless, we briefly review the existing research and commercial tools relevant to our problem domain.
+
+§ EXTENDED REALITY FOR INTERIOR DESIGN
+
+Companies have been exploring the use of virtual, augmented, or mixed reality technologies for creating and visualizing interior design. Matterport uses a 3D camera to capture the color and depth of real-world living spaces, which can be visualized in 3D by users wearing a virtual reality headset. Such an approach finds promising applications for virtual real estate tours. On the other hand, roOomy provides virtual staging services, enabling users to see previews of interior designs via augmented reality devices showing virtual furniture objects overlaid on a real scene. Furniture retailers such as Wayfair also develop virtual and augmented reality experiences with capabilities such as customizing the design of outdoor spaces with furnishings and décor.
+
+Several companies provide web interfaces or mobile applications for designing gallery walls. For example, Shutterfly allows users to upload their photos and arrange them using preset layouts provided by the company. Art.com provides a mobile application that allows users to select individual art items or preconfigured gallery wall layouts from their large wall art collection and visualize them via augmented reality on a mobile device.
+
+Recently, several researchers proposed using extended reality for interior design [2, 10, 14, 22, 31, 30]. Zhang et al. [31] proposed an approach to add furniture items and relight scenes on an RGBD-enabled tablet. Yue et al. [30] developed a mixed reality system which allows users to efficiently edit a scene for applications such as room redecoration. Virtual content needs to be adapted to fit the current scene. Nuernberger et al. [14] devised a technique to align virtual content with the real world. Chae et al. [2] proposed a space manipulation technique for placing distant objects by dynamically squeezing surrounding spaces in augmented reality. Lindlbauer and Wilson [10] showed how to manipulate space and time in augmented reality.
+
+Compared to the existing approaches, our tool not only uses extended reality technologies for visualizing gallery wall designs, but also reasons about the spatial and color compatibility. Our tool simplifies the design process by suggesting desirable combinations and placements of art items, taking the color of the wall into account.
+
+§ AUTOMATED LAYOUT DESIGN
+
+Recently, layout design automation has received much research attention. There are previous efforts on automatic 2D graphics layout design [16, 15], poster design [18], website design [17], magazine covers [8], and photo collages [19, 24]. We focus on reviewing automatic scene layout design works.
+
+Merrell et al. [12] proposed an interactive tool for furniture layout design based on interior design guidelines, while Yu et al. [28] devised an optimization framework for automatic furniture layout design. Fisher et al. [6] characterized structural relationships in scenes based on graph kernels and later proposed an example-based approach [5] for synthesizing 3D object arrangements for modeling partial scenes. More recently, Wang et al. [21] applied deep convolutional priors for indoor scene synthesis, while Weiss et al. [23] proposed a physics-based approach for fast and scalable furniture layout synthesis. Such works are mostly focused on the geometrical aspects of populating spaces with furniture. Complementary to this line of work, Chen et al. [3] created a tool called Magic Decorator for automatically assigning materials to objects in indoor scenes. Xu et al. [26, 27] proposed tools for layout beautification and group-aware arrangement. However, none of these approaches considers the stylistic arrangement of wall art items in a scene. We propose a novel approach for modeling common factors such as colors, spatial relationships, and semantic compatibility among art items to generate desirable gallery walls, which could complement the existing automated interior design approaches.
+
+We also note that recently CAD software companies such as Autodesk are applying generative design for automating layout synthesis [13]. Along with this line of work, we believe our generative design tool for semi-automating gallery wall design will also find good practical uses.
+
+§ INTERACTIVE SCENE MODELING
+
+Typically, users want the ability to control, modify, and visualize the design during the design process so that they can infuse their personal stylistic preferences in their designs. As such, interactive modeling tools play an important role in the design process. There is a large body of work on interactive modeling tools. We review recently proposed scene modeling interfaces.
+
+Along the direction of suggestive interfaces for scene modeling, Yu et al. proposed a suggestive interface called ClutterPalette [29] that uses object distribution statistics learned from real-world scenes to suggest appropriate furniture items to add to a scene. Fisher and Hanrahan [4] devised a context-based search engine for 3D furniture models to add to a scene.
+
+
+Figure 2: A gallery wall created by a designer using a conventional workflow. The focal item (in yellow), as well as a diagonal pair (in orange) and a triangular group (in cyan) of auxiliary items are highlighted.
+
+Another line of work focuses on providing users with easy controls for creating objects while modeling a scene. As humans are accustomed to drawing sketches, a promising approach is to devise sketch-based interfaces for modeling scenes. For example, Xu et al. [25] proposed a sketch-based interface for retrieving and placing 3D models in scenes. Recently, Li et al. [9] devised an interface called SweepCanvas that allows users to perform 3D prototyping on an RGB-D image of a partial scene by sketching. Users can conveniently create virtual objects overlaid on top of the point cloud of a real scene.
+
+Compared to the existing interfaces, we propose a novel mixed reality-based interface for designing gallery wall layouts in situ, with the user seeing the design overlaid on top of the target wall. Seeing how the generated design fits into the real world, the user can easily modify the design by a few intuitive operations. We demonstrate in our evaluation experiments that our tool can allow not only designers but also novice users to quickly generate desirable gallery wall designs.
+
+§ INTERVIEW WITH DESIGNERS ON WORKFLOW
+
+To devise a computational approach and a practical tool for designing gallery walls, we interviewed 4 professional designers from a large furnishings and décor company to better understand the way professional designers create gallery walls under current practice. Each of the designers has at least 5 years of interior design or staging experience and has designed dozens of gallery walls in their professional capacity.
+
+Figure 2 shows a gallery wall created by a designer. The general process of designing a gallery wall is as follows:
+
+Figure 3: System overview. Taking a database of art items and a wall as input, our tool suggests a focal art item as the starting point for designing a gallery wall. The user may apply a template to generate an initial design, and then interactively refines the design with the help of our suggestion engine.
+
+1. The designer first observes the style of the room, particularly paying attention to the colors of the wall and the furniture objects that the designer would like to decorate around.
+
+2. The designer explores a database containing many art items and chooses a focal art item, which is to be placed near the center of the gallery wall and serves as the reference for placing other auxiliary art items. The color of the focal art item should be compatible with the wall color.
+
+3. The designer selects and adds the auxiliary art items to the gallery wall. The colors and styles of these art items should contain some variety, yet they should all be compatible with those of the focal art item.
+
+4. The designer lays out the auxiliary art items around the focal art item nicely. One common consideration is balance: pairs of similar art items are placed at opposite sides of the focal art item.
+
+5. The designer refines the locations of the art items to achieve proper spacing and alignment between items.
+
+We devise our computational approach based on the above observations to automate the conventional design process. Typically, in creating a gallery wall for a common scene like a living room, it takes about 20 minutes for the designer to select compatible wall art items and another 20 minutes to arrange the items into a good layout.
+
+§ SYSTEM OVERVIEW
+
+Our tool is realized as an application that runs on the Magic Leap One mixed reality headset. Figure 1 shows a user designing a gallery wall using our tool. The user wears the headset when using our application to create a gallery wall.
+
+The display on the mixed reality headset shows a user interface as well as the virtual gallery wall design overlaid on the real wall, such that the user can instantly preview how the gallery wall design will look on the real wall while designing. The user interacts with the user interface via a handheld controller (e.g., the Magic Leap One's Control) to perform operations such as selecting and dragging. Figure 3 shows an overview of our tool. It consists of two major components: a suggestion engine for generating design suggestions and a user interface for modifying the gallery wall design. The tool is connected to a large database of wall art items (pictures) from which suitable art items are automatically retrieved and suggested to the user.
+
+Figure 4: Designing a gallery wall with our tool. (a) The input empty wall. (b) The user selects a focal item. (c) The user applies a template for initializing the design. (d) The user modifies the design interactively. (e) The finished gallery wall design.
+
+§ WORKFLOW
+
+Figure 4 illustrates the typical user workflow while using our tool for designing a gallery wall. The input is a wall color. The output, which can be reached with as few as three user decisions, is a gallery wall design that goes well with the wall color. Our tool achieves color compatibility by using the wall color to retrieve candidate focal art items whose colors are compatible with the wall color. A user selects a focal art item from these suggestions, picks a focal item size from among offered art sizes, and either chooses a gallery wall template to launch the synthesis of a gallery wall design or browses recommended auxiliary items.
+
+The selection of auxiliary items, whether human-selected or template-selected, begins with the generation of a color palette based on the colors of the wall and the selected focal item. Based on this color palette and the style of the focal item, the system then retrieves from the database a selection of auxiliary art items that are either presented to the user as options or automatically incorporated into a layout.
+
+After the initialization, the user can interactively modify the gallery wall design via operations such as dragging-and-dropping and selecting in the $3\mathrm{D}$ space using a handheld controller. For example, the user can move, resize, replace, add, or remove any art items. Our tool also provides semiautomatic operations to help the user refine the design, for example, by performing automatic snapping of the art items. The design session ends when the user is satisfied with the gallery wall.
+
+§ ART ITEMS DATA
+
+Database. Our tool is built upon a suggestion engine which helps users browse and make selections from a large database of wall art items while designing their gallery walls. The database, which contains 12,000 wall art items created by
+
+Figure 5: Samples of wall art items in our database, which contains more than 12,000 art items.
+
+professional artists, belongs to a company specializing in the furniture and interior design business. Figure 5 depicts some example art items. The art items are mostly paintings and stylized photographs. Each art item comes in 1 to 5 realistic sizes (from small to large), a physical replica of which can be ordered and put on a real wall.
+
+Annotations. To compute the compatibility between different art items for making suggestions, each item is annotated with:
+
+1. Colors. The 5 most dominant colors of the art item are extracted by the k-means clustering algorithm and stored.
+
+2. Visual Features. 256 visual feature values are computed by a convolutional neural network, which encode the visual style of the art item. We trained a Siamese Network to perform such feature extraction using the aforementioned database containing many art items. The network takes image pairs as input, where a positive pair consists of images of items from the same category and a negative pair consists of images of items from different categories. Each input image is processed by a modified Inception-ResNet [20], whose last layer is changed to a fully connected layer, resulting in a 256-dimensional vector as the output. The network was trained to minimize the contrastive loss [7] of the vectors (embeddings) of the input image pairs.
+
+**Tags: Style.** American Traditional, Asian Inspired, Beachy, Bohemian & Bold, Eclectic Modern, Cabin / Lodge, Coastal, Cottage / Country, Cottage Americana, Eclectic, French Country, Modern & Contemporary, Ornate Traditional, Traditional Glam, Modern Farmhouse, Posh & Luxe, Tropical, Global Inspired, Modern Rustic, Rustic Industrial, Nautical, Scandinavian, Mid-Century Modern, Ornate Glam, Sleek & Chic Modern.
+
+**Tags: Subject.** Abstract, Bath & Laundry, Buildings & Cityscapes, Cities & Countries, Entertainment, Fantasy & Sci-Fi, Fashion, Floral & Botanical, Food & Beverage, Geometric, Humor, Inspirational, Quotes & Sayings, Landscape & Nature, Maps, Nautical & Beach, People, Spiritual & Religious, Sports & Sports Teams, Transportation.
+
+**Tags: Color.** Beige, Clear, Pink, White, Black, Gold, Purple, Yellow, Blue, Gray, Red, Brown, Green, Silver, Chrome, Orange, Tan.
+
+Table 1: The tags in our database, which are manually assigned to the art items by professional designers. The tags fall into 3 categories, namely style (27 tags), subject (20 tags), and color (17 tags).
+
+Figure 6: Examples of focal items retrieved by the suggestion engine based on different wall color schemes.
+
+3. Tags. Each item also carries tags manually specified by designers of the company (see Table 1).
+
+The similarity between two art items is computed based on the L2 distance between their annotation vectors: the smaller the distance, the more similar the two items are. We find these annotations useful in devising our suggestion engine as they characterize the art items and are also common criteria used by designers for comparing art items. By formulating our scoring functions using these three types of annotations, we devise an interface that allows the user to flexibly apply filters using any subset of the three annotation types to retrieve relevant art item suggestions, making it easy for the user to browse the large database of art items.
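+
+As a concrete illustration of the color annotation step, the sketch below extracts an item's dominant colors with a minimal Lloyd's k-means. The paper does not name a library, so this NumPy implementation and its deterministic initialization are our own assumptions:
+
+```python
+import numpy as np
+
+def dominant_colors(pixels, k=5, iters=20):
+    """pixels: (N, 3) float array of RGB values; returns (k, 3) centers."""
+    # Deterministic initialization: k evenly spaced pixels.
+    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
+    for _ in range(iters):
+        # Assign each pixel to its nearest center.
+        labels = np.argmin(
+            ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
+        # Move each center to the mean of its assigned pixels.
+        for j in range(k):
+            if np.any(labels == j):
+                centers[j] = pixels[labels == j].mean(axis=0)
+    return centers
+
+# Toy "image" of pure red and pure blue pixels; k=2 recovers both colors.
+pixels = np.array([[1.0, 0.0, 0.0]] * 50 + [[0.0, 0.0, 1.0]] * 50)
+centers = dominant_colors(pixels, k=2)
+```
+
+The resulting centers would then be stored as the item's 5-entry color annotation.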
+
+§ TECHNICAL APPROACH
+
+We now provide details of our design suggestion engine. It has two major components: art item suggestion and templates. Using these components, the user can quickly browse the database of art items and select items that fit the wall, as well as obtain a decent spatial arrangement of the items as an initialization of their design.
+
+§ WALL PLANE AND COLOR
+
+Akin to the conventional workflow for designing a gallery wall, our approach starts by considering the wall color. The user wears the Magic Leap One headset and faces the target wall to be decorated. The wall plane is detected and extracted using the headset's built-in functionality. The user can manually specify the wall color via a color picker in the user interface, or use the headset's camera to take a picture of the wall, whose average color is taken as the wall color.
+
+Based on the wall color, a neighbor color whose hue is ${60}^{\circ}$ to ${90}^{\circ}$ away (in either direction) from the wall color's hue is randomly selected from the HSV circular color space. The complementary color of the neighbor color (${180}^{\circ}$ from the neighbor color in the HSV circular color space) is also selected. Figure 7 shows an example. The wall color is used as the basis for retrieving other colors for suggesting relevant art items.
+
+Figure 7: Wall's color palette example.
+
+§ SUGGESTED FOCAL ITEMS
+
+Our goal in this step is to retrieve and suggest a list of art items from the database as candidate focal art items for the user. Figure 6 shows some suggested focal art items based on different wall colors.
+
+To achieve this, the wall color, neighbor color, and complementary color form a 3-color palette based on which compatible focal art items are selected. The wall compatibility score ${S}_{\text{ foc }}$ of a candidate focal art item $\phi$ is defined as:
+
+$$
+{S}_{\text{ foc }}\left( \phi \right) = 1 - \frac{1}{9}\mathop{\sum }\limits_{{{\mathbf{c}}_{\mathrm{w}} \in {C}_{\mathrm{w}}}}\min \left\{ {d\left( {{\mathbf{c}}_{\mathrm{w}},{\mathbf{c}}_{\phi }}\right) \mid {\mathbf{c}}_{\phi } \in {C}_{\phi }}\right\} , \tag{1}
+$$
+
+where ${C}_{\mathrm{w}}$ is a set containing the wall color, neighbor color and the complementary color in the HSV space; ${C}_{\phi }$ is a set containing the 5 dominant colors of the candidate focal art item $\phi$ in the HSV space. This scoring function evaluates how close the candidate focal art item $\phi$ ’s color palette is with respect to the wall's color palette. The closer they are, the higher the wall compatibility score.
+
+Note that ${\mathbf{c}}_{\mathrm{w}},{\mathbf{c}}_{\phi } \in {\mathbb{R}}^{3}$ are colors in the HSV space. $d(\cdot)$ is a distance metric that projects the two colors ${\mathbf{c}}_{\mathrm{w}}$ and ${\mathbf{c}}_{\phi }$ into the HSV cone and computes the squared distance between them in that cone [1]:
+
+$$
+d\left( {{\mathbf{c}}_{\mathrm{w}},{\mathbf{c}}_{\phi }}\right) = {\left( \sin \left( {H}_{\mathrm{w}}\right) {S}_{\mathrm{w}}{V}_{\mathrm{w}} - \sin \left( {H}_{\phi }\right) {S}_{\phi }{V}_{\phi }\right) }^{2} + {\left( \cos \left( {H}_{\mathrm{w}}\right) {S}_{\mathrm{w}}{V}_{\mathrm{w}} - \cos \left( {H}_{\phi }\right) {S}_{\phi }{V}_{\phi }\right) }^{2} + {\left( {V}_{\mathrm{w}} - {V}_{\phi }\right) }^{2},
+$$
+
+where $H \in \lbrack 0,{2\pi })$, $S \in \left\lbrack {0,1}\right\rbrack$, and $V \in \left\lbrack {0,1}\right\rbrack$ are the HSV channel values. The range of $d\left( {{\mathbf{c}}_{\mathrm{w}},{\mathbf{c}}_{\phi }}\right)$ is $\left\lbrack {0,3}\right\rbrack$. Equation (1) sums the differences of 3 pairs of colors, hence a normalization factor of 9 is used.
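+
+The score and distance above can be sketched as follows (hues in radians; the function names are ours):
+
+```python
+import math
+
+def hsv_cone_dist(cw, cp):
+    """Squared distance between HSV colors (H in radians; S, V in [0, 1])
+    after projecting both into the HSV cone."""
+    (Hw, Sw, Vw), (Hp, Sp, Vp) = cw, cp
+    return ((math.sin(Hw) * Sw * Vw - math.sin(Hp) * Sp * Vp) ** 2
+            + (math.cos(Hw) * Sw * Vw - math.cos(Hp) * Sp * Vp) ** 2
+            + (Vw - Vp) ** 2)
+
+def s_foc(wall_colors, item_colors):
+    """Equation (1): the 3 wall palette colors C_w vs. the item's dominant
+    colors C_phi; each summand has range [0, 3], hence the 1/9 factor."""
+    return 1 - sum(min(hsv_cone_dist(cw, cp) for cp in item_colors)
+                   for cw in wall_colors) / 9.0
+
+# An item whose dominant colors include the whole wall palette scores 1.
+palette = [(0.0, 0.5, 0.8), (1.2, 0.4, 0.7), (4.3, 0.6, 0.9)]
+score = s_foc(palette, palette + [(2.0, 1.0, 1.0), (3.0, 1.0, 1.0)])
+```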
+
+Our approach computes the compatibility scores for all the art items in the database. The top-20 art items are retrieved and displayed in order of their compatibility scores, with the highest-scoring item shown first. The user typically selects a focal art item from this list of suggestions. However, if needed, the user can also explore the database and select any other art item as the focal art item using the Item Panel, which we describe in a later section.
+
+§ SUGGESTED AUXILIARY ART ITEMS
+
+Akin to the conventional gallery wall design approach, the selected focal art item serves as a reference for the suggestion engine to suggest other compatible, auxiliary art items to add to the gallery wall design. To retrieve auxiliary art items from the database as suggestions, an overall compatibility score ${S}_{\text{ aux }}$ is computed for each candidate auxiliary art item, which evaluates the style and color compatibility between the auxiliary art item and the selected focal art item:
+
+$$
+{S}_{\text{ aux }}\left( \phi \right) = {w}_{\mathrm{c}}{S}_{\text{ aux }}^{\mathrm{c}}\left( \phi \right) + {w}_{\mathrm{s}}{S}_{\text{ aux }}^{\mathrm{s}}\left( \phi \right) , \tag{2}
+$$
+
+where ${w}_{\mathrm{c}}$ is the weight of the color compatibility score ${S}_{\text{ aux }}^{\mathrm{c}}$ and ${w}_{\mathrm{s}}$ is the weight of the style compatibility score ${S}_{\text{ aux }}^{\mathrm{s}}$ .
+
+Color Compatibility Score: A candidate auxiliary art item $\phi$ has a high color compatibility score ${S}_{\text{ aux }}^{\mathrm{c}}$ if its colors are close to the dominant colors of the selected focal art item. Specifically, the color compatibility score of auxiliary art item $\phi$ is defined as follows:
+
+$$
+{S}_{\text{ aux }}^{\mathrm{c}}\left( \phi \right) = 1 - \frac{1}{15}\mathop{\sum }\limits_{{{\mathbf{c}}_{\mathrm{f}} \in {C}_{\mathrm{f}}}}\min \left\{ {d\left( {{\mathbf{c}}_{\mathrm{f}},{\mathbf{c}}_{\phi }}\right) \mid {\mathbf{c}}_{\phi } \in {C}_{\phi }}\right\} , \tag{3}
+$$
+
+where ${C}_{\mathrm{f}}$ and ${C}_{\phi }$ are sets containing the 5 dominant colors of the selected focal art item and of the auxiliary art item $\phi$, respectively, and ${\mathbf{c}}_{\mathrm{f}},{\mathbf{c}}_{\phi } \in {\mathbb{R}}^{3}$ are colors in the HSV space. As Equation (3) sums the differences of 5 pairs of colors, each with range $\left\lbrack {0,3}\right\rbrack$, a normalization factor of 15 is used.
+
+Figure 8: Art item suggestions. Based on the wall's color palette shown on the left, (a) several color compatible focal art items are suggested. Based on a selected focal art item (highlighted in red), (b) suggested auxiliary art items compatible in color and style. Auxiliary art items suggested by considering (c) only color or (d) only style.
+
+This scoring function evaluates how close the auxiliary art item $\phi$ ’s color palette is with respect to the selected focal art item's color palette. The closer they are, the higher the score is.
+
+Style Compatibility Score: A candidate auxiliary art item $\phi$ has a high style compatibility score ${S}_{\text{ aux }}^{\mathrm{s}}$ if its visual feature vector is close to the selected focal art item's visual feature vector. The style compatibility score of auxiliary art item $\phi$ is defined as follows:
+
+$$
+{S}_{\text{ aux }}^{\mathrm{s}}\left( \phi \right) = 1 - \frac{1}{\sqrt{n}}\begin{Vmatrix}{{\mathbf{v}}_{\mathrm{f}} - {\mathbf{v}}_{\phi }}\end{Vmatrix}, \tag{4}
+$$
+
+where ${\mathbf{v}}_{\mathrm{f}},{\mathbf{v}}_{\phi } \in {\mathbb{R}}^{n}$ are the $n$-dimensional visual feature vectors of the focal item and the auxiliary art item $\phi$ computed by the convolutional neural network. In our database, $n = 256$.
+
+Overall, the compatibility score is computed for each art item in the database. The user interface displays the top-20 art items sorted in descending order of their compatibility scores as auxiliary art item suggestions. To provide flexibility in retrieving suggestions, our user interface allows the user to turn the consideration of the color or the style compatibility score on and off, which corresponds to setting ${w}_{\mathrm{c}}$ or ${w}_{\mathrm{s}}$ to 1 or 0. Figure 8 shows an illustration. The user can also select which of the 5 dominant colors ${C}_{\mathrm{f}}$ of the selected focal art item to consider in computing the color compatibility score.
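+
+The weighted combination and the top-20 retrieval described above can be sketched as follows, assuming precomputed per-item color scores; Equation (4) is implemented directly, and all function names are ours:
+
+```python
+import math
+
+def style_score(v_f, v_phi):
+    """Equation (4): 1 - ||v_f - v_phi|| / sqrt(n)."""
+    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_f, v_phi)))
+    return 1 - dist / math.sqrt(len(v_f))
+
+def s_aux(color_sc, style_sc, w_c=1.0, w_s=1.0):
+    """Equation (2); the UI's color/style toggles set w_c, w_s to 1 or 0."""
+    return w_c * color_sc + w_s * style_sc
+
+def top_k(scored_items, k=20):
+    """scored_items: (item_id, score) pairs; highest score first."""
+    return sorted(scored_items, key=lambda it: it[1], reverse=True)[:k]
+
+# Identical feature vectors give the maximum style score of 1.
+best = top_k([("a", 0.3), ("b", 0.9), ("c", 0.6)], k=2)
+```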
+
+§ TEMPLATES
+
+To allow the user to quickly generate an initial gallery wall design based on a focal item, our tool provides preset templates that the user can choose from and apply. Figure 12 shows some example templates that our tool provides. These templates encode spatial relationships of art items that are commonly applied by gallery wall designers. A template arranges groups of auxiliary art items symmetrically around the focal art item, akin to the gallery wall designs created by the conventional design workflow. Figure 9(b) shows the 4 templates (with 5, 7, 9, and 11 items) that we provide with our tool in our experiments. These templates resemble the common patterns used by designers, as we learned from the interview. Auxiliary items within a group have compatible colors and styles by default (Equation (2)).
+
+Figure 9: User interface of our tool via which a user wearing a mixed reality headset visualizes and designs a gallery wall. It consists of three components: (a) Design Canvas; (b) Template Panel; and (c) Item Panel.
+
+Initializing a Gallery Wall Design. Figure 10 illustrates how to apply a template to generate a gallery wall design. According to the layout of the chosen template, starting from the root (the focal art item), our approach inserts auxiliary art items that are compatible with the focal art item, group by group. Specifically, the items are added on the circumference of a circle centered at the focal item with a random radius $r \in \left\lbrack {{0.5d},{3d}}\right\rbrack$ , where $d$ is the diagonal length of the focal item. A pair of auxiliary items is added at opposite sides of the circle. A group of three items is added at the vertices of a randomly oriented equilateral triangle inscribed in the circle. The generation finishes once all groups of auxiliary art items have been placed. The generated design serves as an initial design that the user can then modify interactively.
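+
+The geometric placement above can be sketched as follows, assuming a 2D wall coordinate system and our own helper names:
+
+```python
+import math
+import random
+
+def group_positions(center, r, size, theta):
+    """Evenly spaced positions on a circle of radius r around `center`:
+    a pair sits at opposite points, a triple at the vertices of an
+    equilateral triangle inscribed in the circle."""
+    step = 2 * math.pi / size
+    return [(center[0] + r * math.cos(theta + i * step),
+             center[1] + r * math.sin(theta + i * step))
+            for i in range(size)]
+
+def place_group(center, d, size, seed=0):
+    """Random radius in [0.5d, 3d] and random orientation, as in the text."""
+    rng = random.Random(seed)
+    r = rng.uniform(0.5 * d, 3.0 * d)
+    return group_positions(center, r, size, rng.uniform(0, 2 * math.pi))
+
+pair = group_positions((0.0, 0.0), 1.0, 2, 0.0)  # opposite points on the circle
+```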
+
+Spatial Refinement. Our approach refines the spatial relationships between the items after the initialization and after every user interaction with the gallery wall design such as adding an item, removing an item, and moving an item.
+
+Snapping: To keep the gallery wall design compact, by default, all auxiliary items steer toward the focal item at the center while they maintain a certain minimum space between each other to avoid overlapping.
+
+Alignment: To keep the gallery wall design neat and uncluttered, by default, our approach aligns neighboring art items either horizontally or vertically by their edges so long as the alignment does not cause overlapping. (Figure 11(b))
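+
+The alignment refinement can be sketched, for the case of left-edge alignment, as follows; this simple rule is our reading of the described behavior, not the paper's exact algorithm:
+
+```python
+def overlaps(a, b):
+    """Whether two axis-aligned rectangles (x, y, w, h) overlap."""
+    ax, ay, aw, ah = a
+    bx, by, bw, bh = b
+    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
+
+def align_left_edges(item, neighbor, tol=0.05):
+    """Snap item's left edge flush with neighbor's left edge when they are
+    within `tol`, unless the snap would make the rectangles overlap."""
+    x, y, w, h = item
+    nx = neighbor[0]
+    if abs(x - nx) <= tol:
+        candidate = (nx, y, w, h)
+        if not overlaps(candidate, neighbor):
+            return candidate
+    return item
+
+# An item 0.03 off the neighbor's left edge, hanging below it, snaps flush.
+snapped = align_left_edges((1.03, 0.0, 0.5, 0.5), (1.0, 1.0, 0.5, 0.5))
+```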
+
+We include more examples of interactive modification operations in the supplementary video.
+
+§ USER INTERACTION
+
+Figure 9 shows the user interface of our tool, which is displayed in mixed reality. It consists of three components: a) the Design Canvas, where the user can interactively modify the current gallery wall design visualized on the real wall; b) the Template Panel, where the user can select and apply a preset template for synthesizing an initial gallery wall design; and c) the Item Panel, where the user can retrieve art items from the database by specifying different criteria. Each component allows users to refine a gallery wall design conveniently. We describe them in the following.
+
+Figure 10: Applying a template to generate a gallery wall design. (a) A template and its tree structure. The auxiliary items are placed symmetrically about the focal item at the center. (b) A gallery wall created by applying this template.
+
+Figure 11: The user can interactively modify a design in mixed reality using the functionalities of our user interface.
+
+§ DESIGN CANVAS
+
+The design canvas visualizes and overlays the current gallery wall design on the real wall via the mixed reality headset's display. It also provides support for interactively adjusting both art items and the gallery wall layout:
+
+ * Add. The user selects an art item in the current design and retrieves several art items from the database that are compatible in terms of dominant colors, visual features, and tags, which they can add to the current design.
+
+ * Replace. The user replaces an art item with another compatible art item from the database. (Figure 11(a))
+
+ * Move. The user moves an art item by dragging it.
+
+ * Resize. The user chooses another size for an art item.
+
+ * Remove. The user removes an art item.
+
+§ TEMPLATE PANEL
+
+The Template Panel allows the user to quickly generate an initial gallery wall design with items decently placed. It provides a number of preset templates that the user can apply to synthesize a gallery wall design based on a selected focal art item. It also provides functionalities for automatic refinement of the spatial layout of the current design. The supported functionalities are:
+
+ * Apply a Template. Based on a placed focal art item, the user applies a template to synthesize a gallery wall design.
+
+ * Add Random Group. Our tool automatically adds a group of 2 or 3 auxiliary art items, which are compatible with the focal art item, to the current design. The group of auxiliary items is symmetric about the focal item (Figure 12).
+
+Figure 12: The templates we use in our system. The blue item represents the focal item. The colored items represent the groups of 2 or 3 auxiliary items.
+
+ * Align All. The user triggers our tool to align all the art items with respect to each other. The alignment is done along the horizontal (left, right, or center) or vertical (top, bottom, or center) direction. This functionality comes in handy because it can be tiring and difficult for users to perform multiple precise adjustments in the 3D space [11] using a handheld controller.
+
+ * Snap. If enabled, an art item is snapped to its neighbor item as the user drags the art item around, i.e., it will steer toward the center of its neighbor item until a minimum spacing between the two items is reached (Figure 13).
+
+ * Clear Wall. All the art items are removed from the design.
+
+§ ITEM PANEL
+
+The Item Panel is connected to the suggestion engine and the database of art items. Its primary function is to display a relevant list of art items that the user can add to the gallery wall design as a focal art item or auxiliary art items. The panel contains buttons that the user can click to set the criteria for retrieving relevant art items from the database. For example, the user can select whether to use color, style, or both as criteria for determining compatibility between items. The user can also select which color(s) out of the 5 dominant colors of the focal art item to use for determining color compatibility. The supported functionalities are:
+
+ * Update Wall's Color Palette. Our tool regenerates the neighbor and complementary colors of the wall's color palette.
+
+ * Find Focal. Based on the wall's color palette, our tool retrieves several compatible art items as focal art item suggestions (according to Equation (1)).
+
+ * Find Auxiliary. The user selects criteria (e.g., colors, visual features, tags) based on which our tool retrieves 20 compatible art items as auxiliary art item suggestions. By default, the art items are sorted by their compatibility scores.
+
+§ USER EVALUATION
+
+We developed our tool using C# on the Unity game engine with the Magic Leap Lumin SDK installed. We deployed our tool onto a Magic Leap One headset, which we used for our user evaluation experiments.
+
+User Groups. We recruited two different groups of users to evaluate our tool.
+
+Figure 13: Snapping example. (a) The user drags an art item, which is (b) snapped toward the center of its neighbor item until a minimum spacing between the two is reached.
+
+Group 1: The first group was recruited to compare the user experience of designing a gallery wall using our mixed reality interface based on the Magic Leap One versus using a 2D interface that mimics a traditional design tool on a laptop. We recruited 17 participants, all employees of a company, consisting of 12 males and 5 females, aged from 20 to 45, with an average age of 32. None of the participants had prior experience with the Magic Leap One headset. Each participant designed 2 gallery walls, under Condition ${MR}$ and Condition ${2D}$ in a random order.
+
+Group 2: The second group was recruited to compare the user experience of designing a gallery wall with and without the template functionality. We recruited 24 participants, all college students, consisting of 16 males and 8 females, aged from 19 to 24, with an average age of 22. Each participant designed 2 gallery walls, under Condition ${2D}$ and Condition ${2DNT}$ in a random order.
+
+Conditions. We asked the participants to create gallery wall designs. The goal of each task was to design a gallery wall that fits with a living room with a pale gray wall and a blue sofa as shown in Figure 9.
+
+ * Condition MR: The participant used our tool delivered through a mixed reality interface to create a gallery wall.
+
+ * Condition ${2D}$ : The participant used our tool delivered through a 2D interface to create a gallery wall.
+
+ * Condition 2DNT: The participant used our tool delivered through a 2D interface to create a gallery wall with no template functionality. That is, the "Apply a Template" and "Add Random Group" buttons under the Template Panel were disabled.
+
+Note that both Condition ${2D}$ and Condition ${2DNT}$ used a background image showing a pale gray wall and a blue sofa. We include the 2D interface used for the user evaluation in the supplementary material.
+
+Procedure. Before each task, we briefed and trained the participant in creating a gallery wall design using our tool. We asked the participant to follow a 5-minute tutorial which guided them, step by step, through using our tool to create a gallery wall design. Note that we showed the participant the tutorial whether they were tasked with creating a gallery wall using the mixed reality interface (Condition MR) or the 2D interface (Condition 2D). The participant could ask any questions about the user interface to make sure they were familiar with it.
+
+Figure 14: Example gallery wall designs created by participants in our user evaluation pool.
+
+The participant was then asked to create a gallery wall design that fits with the living room under a given condition. For Condition ${2D}$ and Condition ${2DNT}$ , the participant was presented with a photo of the living room when designing a gallery wall using the $2\mathrm{D}$ interface (please refer to the supplementary material for a screenshot of the interface). Our tool tracked the interaction metrics for later analysis.
+
+Figure 14 shows some gallery walls designed by the participants. Our supplementary material contains the results created by all participants.
+
+§ EXPERIMENT RESULTS
+
+We discuss the user evaluation results with regard to performance, usage and user feedback. We use t-tests to evaluate if there is any significant difference between the results obtained under different pairs of conditions by reporting the p-values. We show our results in box plots for easy interpretation. Our supplementary material shows the numeric results and all designs created by participants under different conditions.
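+
+As a reminder of the statistic behind these comparisons, a minimal pooled-variance two-sample t statistic is sketched below; in practice one would call scipy.stats.ttest_ind, which also returns the p-value:
+
+```python
+import math
+import statistics
+
+def t_statistic(xs, ys):
+    """Pooled-variance (equal-variance) two-sample t statistic."""
+    nx, ny = len(xs), len(ys)
+    pooled = (((nx - 1) * statistics.variance(xs)
+               + (ny - 1) * statistics.variance(ys)) / (nx + ny - 2))
+    se = math.sqrt(pooled * (1 / nx + 1 / ny))
+    return (statistics.mean(xs) - statistics.mean(ys)) / se
+
+t_same = t_statistic([5, 6, 7], [5, 6, 7])     # identical samples give t = 0
+t_diff = t_statistic([1, 2, 3], [10, 11, 12])  # well-separated samples give large |t|
+```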
+
+§ PERFORMANCE
+
+We tracked the performance of retrieving items from the large database we used in our experiments on a desktop computer equipped with an Intel Core i7-7700K 4.2 GHz CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1080 8 GB graphics card. Retrieving focal items took 3.12 seconds and retrieving auxiliary items took 1.21 seconds. All other user interface operations ran at an interactive rate.
+
+Our tool tracks the participant's performance under each given condition. Here are the performance metrics tracked:
+
+Figure 15: Performance results in different settings. Color dots and bars show the means and medians. The p-value of t-test computed between the results of the two conditions in each group is shown. The p-values smaller than 0.05 which reject the null hypothesis are bolded.
+
+ * Number of Clicks: The total number of times the participant clicked on a user interface component.
+
+ * Number of Movements: The total number of times the participant adjusted the position of an art item.
+
+Mixed Reality versus 2D Interface. As the Group 1 results in Figure 15 show, under the MR condition, where the participants designed via a Magic Leap One headset, they made fewer clicks (p<0.01 in a t-test) and fewer movements (p=0.04) compared to the 2D condition, where they designed via a 2D interface.
+
+Under the MR condition, the participants could see their design directly visualized on the real wall, which might result in fewer adjustments. In our perceptual study, we find that there is no significant difference between the visual quality of the designs created under the MR and the 2D conditions. In all, the direct visualization brought about by mixed reality allows the participants to create gallery wall designs of similar visual quality (compared to designs created on a 2D screen) with fewer manual adjustments.
+
+Figure 16: Usage statistics of the items placed by a template (Section 5.4) under different conditions.
+
+Templates versus No Templates. As the Group 2 results in Figure 15(a) show, under the 2D condition, where the participants created designs with the aid of templates, they made fewer clicks $\left( {\mathrm{p} < {0.01}}\right)$ compared to the 2DNT condition, where they created designs without templates. The template functionality helped them create designs more efficiently.
+
+§ USAGE
+
+Our tool also tracks the usage of the design suggestions generated by the Template Panel or Item Panel. Here are the metrics used:
+
+ * Template Item Removed: If a participant applied a template for initializing a gallery wall design, the percentage of items generated by the template that he/she removed.
+
+ * Template Item Modified: If a participant applied a template for initializing a gallery wall design, the percentage of items generated by the template that he/she modified.
+
+ * Suggested Item Usage: What percentage of items in the final gallery wall design was chosen among the top-20 suggestions from the Item Panel retrieved according to the user's specified criteria.
+
+ * Selected Item's Rank: The average rank of the items the participants selected from the suggestions of the Item Panel.
+
+Template Items. Figure 16 shows the usage statistics of the items placed by a template under different conditions. As Figure 16(a) shows, the participants removed about 20% and 28% of the items placed by the templates under the MR and 2D conditions, respectively. As a template places highly similar items in a design, it seems that the participants tended to replace a few items with other, less similar items to introduce some variation or contrast into the overall design. On the other hand, as Figure 16(b) shows, on average only about 13% and 8% of the items were modified under the MR and 2D conditions, respectively. Overall, the participants kept a majority of the items placed by a template in their final gallery wall design.
+
+
+Figure 17: Usage statistics of the items suggested by the Item Panel (Section 6.3) under different conditions.
+
+Suggested Items. Figure 17 shows usage statistics of items suggested by the Item Panel. As Figure 17(a) shows, under the MR and 2D conditions, about 82% and 76% of the items used in the final design were chosen from the top-20 suggestions in the Item Panel retrieved according to their specified criteria. The Group 2 results show that there was a significant difference (p < 0.01) between suggested item usages under the 2D and 2DNT conditions. Under the 2DNT condition, when the participants could not use a template to initialize a gallery wall design, they tended to use items from the database more randomly.
+
+On the other hand, as Figure 17(b) shows, the average rank of the selected items ranges from about 8 to 10 across conditions. It seems that, when using the Item Panel, the participants tended to choose items that matched their specified criteria, but not in a strict sense.
+
+§ USER FEEDBACK
+
+We talked to the participants after the experiments. They generally praised our mixed reality tool's visualization of the design on the real wall, which made the interactive design process more intuitive than on a typical computer screen, since they could directly see how their design would fit the real space. On the downside, some participants reported initial user experience challenges while acclimating themselves to the device's field-of-view and headset fit. We include the participants' comments in our supplementary material.
+
+We also conducted a two-alternative forced-choice study to evaluate the quality of the gallery wall designs created by the participants under two different conditions (MR & 2D, or 2D & 2DNT). We include the details of this perceptual study in the supplementary material.
+
+§ SUMMARY
+
+We proposed a novel mixed reality-based interactive design tool for gallery walls. By overlaying a virtual gallery wall on a real wall, our tool allows users to directly visualize their design in the real world as an integrated part of the creative process. Our suggestive design interface allows users to retrieve stylistically compatible items for creating their desired gallery walls.
+
+§ LIMITATIONS
+
+We used a Magic Leap One headset for experimenting with our tool. The hardware still has limitations for consumer use. For example, the field-of-view is still a bit narrow for the user to see the overall design and the user interface at once; the user has to look around to see different things, which could be inconvenient.
+
+It could be tiring to use the handheld controller for 3D interaction and manipulation in 3D space for a prolonged period of time. Because of this, it would be challenging to use a sophisticated design tool that requires tedious and precise user input in mixed reality.
+
+We believe that the visualization of the design in the real space by mixed reality may benefit users in envisioning and communicating their designs, allowing them to design in the real space directly and intuitively. In the future, it would be worthwhile to extend our mixed reality design tool to consider a more sophisticated context of the 3D scene to simplify other interior design tasks, such as suggesting furniture placement. Performing advanced scene analysis in real time for enabling interior design applications in mixed reality presents a practical challenge, which could be resolved as the computational power of mixed reality devices continues to increase.
+
+Due to the scope of this project, in this work we only focus on a subset of interior design, namely, gallery wall designs. We show that a mixed reality approach for gallery wall design is feasible. Our approach could be extended to consider 3D decorations (Figure 18), though they are less common and we did not consider them in our current approach.
+
+It would be helpful to have a large database of gallery wall designs from which our tool could learn the spatial relationships and compatibility between different art items in a gallery wall design, and use such knowledge for synthesizing new designs.
+
+§ FUTURE WORK
+
+With the advances of artificial intelligence and natural language processing techniques, we hope that such techniques can be adopted in mixed reality for enabling natural and convenient user interaction experiences in the interior design process. For example, it would be helpful if a mixed reality interior design software application could understand voice commands from the user to decorate a space, minimizing the need for manual user input. With such advances, using a mixed reality software application for interior design will be as natural as talking to an interior designer who can instantly visualize the design for the user through mixed reality. We believe this is an exciting goal for enabling human-AI collaboration in design.
+
+For commercial purposes, art items could be further annotated with non-aesthetic tags, such as their prices, frequencies of being viewed or selected, and the semantic meanings they carry. Incorporating such additional tags could provide more desirable item recommendations while a user is creating their gallery wall designs.
+
+
+Figure 18: Example of a gallery wall design containing a 3D decoration object (a lion head) that can be visualized in mixed reality using our tool.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed178bbfa8e3626285eaea1e51a7d8107d3e44c5
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,221 @@
+# Improving View Independent Rendering: Towards Robust, Practical Multiview Effects
+
+This paper describes improvements to view independent rendering (VIR) designed to make its immediate application to soft shadows more practical, and its future application to other multiview effects such as reflections and depth of field more promising. Realtime rasterizers typically realize multiview effects by rendering a scene from multiple viewpoints, requiring multiple passes over scene geometry. VIR avoids this necessity by crafting a watertight point cloud and rendering it from multiple viewpoints in a single pass. We make VIR immediately more practical with an unbuffered implementation that avoids possible overflows, and improve its potential with more efficient sampling achieved with orthographic projection and stochastic culling. With these improvements, VIR continues to generate higher quality real time soft shadows than percentage-closer soft shadows (PCSS), in comparable time.
+
+## ACM Reference Format:
+
+. 2020. Improving View Independent Rendering: Towards Robust, Practical Multiview Effects. 1, 1 (April 2020), 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
+
+## 1 INTRODUCTION
+
+As computer graphics hardware has improved, so has its interactive imagery, moving from line drawings to filled polygons, to textured surfaces with specular reflections. However, further improvements in visual realism - effects such as soft shadows, depth of field, and object reflections - have been hindered by current hardware, which requires multiple model traversals to render the many views needed to sample area lights, different focal depths, and reflections.
+
+View-independent rasterization (VIR) avoids the complexity of multiple rendering passes [9] by using points as a display primitive. For every frame, it carefully transforms input triangles into a point cloud specialized to the current set of views. VIR then renders these views in parallel using the point cloud, with an order of magnitude fewer passes over the geometry.
+
+This short paper presents our contributions to VIR, designed to increase its immediate practicality and future potential:
+
+- Practicality: We improve VIR to eliminate the use of a point buffer, freeing developers from the necessity of buffer management to avoid overflow.
+
+- Potential: We improve VIR's orthogonal projection, making sampling more parsimonious and "watertight" (without holes). We also introduce stochastic culling of sub-pixel triangles to reduce sampling rates further.
+
+We verify that our improvements do not compromise quality or performance by using them to render soft shadows. When rendering shadows comparable to those produced with a traditional, high-quality multipass technique, VIR continues to produce them in nearly an order of magnitude less time. When making shadows at practical few-millisecond speeds, VIR shadows are still of higher quality than percentage-closer soft shadows (PCSS) [2]. While our improvements in sampling do not realize performance improvements for soft shadows, we anticipate that they will for other multiview effects with heavier shader loads.
+
+## 2 RELATED WORK
+
+Rendering realistic imagery requires accurate simulation of light flow. However, accurate sampling of the light flow integral [4] can be difficult, particularly for effects such as soft shadows, depth of field (defocus blur), motion blur, and indirect reflections [5]. With rasterization hardware, often the fastest way to produce such samples is multiview rendering: storing many off-screen views in buffers, and combining them to produce a final view. However, the high cost of multiview rendering often requires sparse sampling and filtering to reduce resulting noise [11].
+
+To sidestep rasterization's limitations for multiview rendering, we rely on points [6]. In today's applications, triangles often outnumber pixels, leading many to argue that points are a better rendering primitive [3]. Yet points are not widely used since their discontinuity can create "holes" when views change. Existing point renderers, therefore, use dense point clouds that render slowly; or sparse clouds with complex reconstruction that again render slowly, or produce low-quality imagery.
+
+To improve point rendering and support multiview rendering, VIR [9] exploits rasterization hardware, which efficiently transforms triangles into points. For each frame, VIR generates a cloud of points customized to the current set of views in real time. It then renders the cloud in parallel into multiple views, reducing the number of geometry passes by a factor of ten. To accomplish this, for any triangle visible in at least one view, the geometry shader computes specialized viewing and projection matrices that center the triangle, orient it parallel to the view plane, and achieve a watertight sampling rate. It then applies the matrices to the triangle and rasterizes it to generate points. Next, the fragment shader writes each point to a buffer. When all points have been generated, the compute shader passes over this buffer, transforming and projecting each point into multiple views.
+
+## 3 IMPROVING VIEW INDEPENDENT RASTERIZATION
+
+We improve on Marrs et al.'s VIR implementation [9] in three ways to increase its practicality and potential. First, our implementation is bufferless, ending the possibility of overflow and simplifying VIR's use in practice. Second, we improve Marrs et al.'s orthogonal projection matrix, reducing the size of the resulting point cloud. Finally, we stochastically cull sub-pixel triangles, shrinking the point cloud further to improve speed. The resulting improved algorithm is already practical, with better soft shadows than PCSS at comparable speeds. Improved VIR also has the potential to bring both this practicality and a significant performance improvement to more complex multiview effects.
+
+---
+
+Author's address:
+
+---
+
+### 3.1 Bufferless VIR
+
+Marrs et al.'s VIR implementation stored points in a buffer, which the compute shader then rendered to multiple views. The buffer size must be set before run time, making overflows possible. Instead, we use a bufferless, entirely in-pipeline scheme that eliminates the possibility of run time buffer overflows. Rather than sending each point to a buffer, the fragment shader writes the point to multiple off-screen buffers for deferred shading. While we did not directly compare the speed of our unbuffered implementation to Marrs et al.'s buffered implementation, we expect the performance to be similar, as described in a related comparison by Marrs et al. [8].
+
+### 3.2 Watertight and Efficient Orthographic VIR
+
+Marrs et al.'s computation of the watertight multiview sampling rate $s_{mv}$ assumed that the fields of view for the point generation, off-screen, and eye views were identical. But this cannot be true when the point generation view uses orthogonal projection and other views do not. Unfortunately, point generation in perspective leads to sampling inefficiencies, with points spread unevenly across each triangle due to perspective distortion.
+
+
+
+Fig. 1. Improved Orthogonal Sampling Rate.
+
+Algorithm 1 shows the VIR algorithm, improved to use orthogonal projection. For each polygon $p$, we find $\rho_{mv}$, the maximum point density on the projected polygon's surface across all views, as illustrated in Figure 1. We first find the closest point on the polygon from a given view $v$ [1]. This point has the highest sample density on the polygon for that view. We compute the area of a reverse-projected pixel centered on that point, $\text{area}_{P,v,p}$, by reverse-projecting the pixel's corners onto the polygon in model space. Across all views, the maximum sampling density $\rho_{mv}$ is given by equation (1), and the orthogonal scaling factor $s_{mv}$ by equation (2):
+
+$$
+\rho_{mv} = \max_{v \in V}\left( \rho_{mv}, \frac{\text{area}_p}{\text{area}_{P,v,p}} \right) \tag{1}
+$$
+
+$$
+s_{mv} = \max\left( s_{mv}, \; w \times \sqrt{\frac{\rho_{\text{ortho}}}{\rho_{mv}}} \right) \tag{2}
+$$
+
+where $V$ is the set of all view centers of destination views, $w$ is the perspective distortion, $\text{area}_p$ is the area of the polygon in model space, and $\rho_{\text{ortho}}$ is the sampling density for VIR's orthographic projection, which depends on the chosen viewing volume.
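As a concrete sketch, equations (1) and (2) amount to a small reduction over per-view reverse-projected pixel areas. The function names and inputs below are illustrative, not from our implementation; `pixel_areas` stands in for the per-view areas $\text{area}_{P,v,p}$:

```python
import math

def max_sampling_density(area_p, pixel_areas):
    """Eq. (1): maximum point density of polygon p over all views.
    area_p: polygon area in model space; pixel_areas: one reverse-projected
    pixel area (area_{P,v,p}) per view center v in V."""
    rho_mv = 0.0
    for area_pvp in pixel_areas:
        rho_mv = max(rho_mv, area_p / area_pvp)
    return rho_mv

def ortho_scale(s_mv, w, rho_ortho, rho_mv):
    """Eq. (2): orthogonal scaling factor; w is the perspective distortion
    and rho_ortho the density of VIR's orthographic projection."""
    return max(s_mv, w * math.sqrt(rho_ortho / rho_mv))
```

A polygon of area 2 seen through reverse-projected pixel areas of 1, 0.5, and 4 yields a maximum density of 4, from the view whose pixel covers the least surface.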
+
+Algorithm 1 View Independent Rasterization
+
+---
+
+In Geometry Shader Stage:
+
+for each polygon (P) do
+
+    for each viewpoint (v) do
+
+        $c_p$ = Closest point on the polygon from the viewpoint
+
+        $\text{area}_p$ = Area of the polygon
+
+        $\text{area}_{P,v,p}$ = Area of reverse-projected pixel centered at $c_p$
+
+        $\rho_{mv} = \max\left( \rho_{mv}, \frac{\text{area}_p}{\text{area}_{P,v,p}} \right)$
+
+    end for
+
+    Compute orthogonal scaling factor $s_{mv} = \max\left( s_{mv}, \; w \times \sqrt{\frac{\rho_{\text{ortho}}}{\rho_{mv}}} \right)$
+
+    Apply VIR matrix ($T_{VIR}$) and projection matrix ($T_{\text{ortho}}$) to the polygon: $P' = T_{\text{ortho}} \times T_{VIR} \times P$
+
+    Send the transformed polygon (P') to the rasterizer
+
+end for
+
+In Fragment Shader Stage:
+
+for each viewpoint (v) do
+
+    Write the generated point into the corresponding buffer using atomic write operations.
+
+end for
+
+---
+
+The orthographic projection matrix is given in equation (3):
+
+$$
+T_{\text{ortho}} = \begin{bmatrix} s_{mv} & 0 & 0 & 0 \\ 0 & s_{mv} & 0 & 0 \\ 0 & 0 & \frac{2}{z_{\text{near}} - z_{\text{far}}} & \frac{z_{\text{far}} + z_{\text{near}}}{z_{\text{near}} - z_{\text{far}}} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3}
+$$
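For reference, an orthographic matrix of this form can be assembled directly. The sketch below assumes the standard OpenGL-style convention, in which $z = -z_{\text{near}}$ maps to $-1$ and $z = -z_{\text{far}}$ maps to $+1$; the sign and handedness conventions are our assumption, not spelled out in the text:

```python
def t_ortho(s_mv, z_near, z_far):
    """Row-major 4x4 orthographic matrix: uniform x/y scale s_mv,
    z mapped linearly to [-1, 1] (OpenGL-style convention assumed)."""
    return [
        [s_mv, 0.0, 0.0, 0.0],
        [0.0, s_mv, 0.0, 0.0],
        [0.0, 0.0, 2.0 / (z_near - z_far), (z_far + z_near) / (z_near - z_far)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    """Apply a row-major 4x4 matrix to a homogeneous point [x, y, z, w]."""
    return [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]
```

With $z_{\text{near}} = 1$ and $z_{\text{far}} = 100$, a point at $z = -1$ lands on the near plane of the canonical volume and a point at $z = -100$ on the far plane.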
+
+Our new orthogonal projection technique had minimal impact on VIR's performance and image quality, with orthogonal and perspective projection (as described by Marrs et al. in [9]) producing nearly identical soft shadows at similar speed. While orthogonal projection generated fewer points than Marrs et al.'s perspective projection, it required more time to do so, particularly in models with more triangles. For example, when generating 128 views for a model with 2 million triangles, orthogonal point generation rendered 122K points in 18.74 ms, whereas perspective sampling generated 238K points in 17.55 ms. However, we expect that for most other multiview effects (e.g. environment mapping), increased shading loads will make orthogonal point generation's smaller point clouds advantageous.
+
+### 3.3 VIR with Stochastic Culling
+
+To improve speed further, we stochastically cull (and avoid generating points for) triangles that span less than $1/8$ of a pixel in VIR's point generation view. The smaller the proportion of the pixel covered by the triangle, $T_{pp}$, the more likely the triangle will be culled, with probability $1 - (8 \times T_{pp})$. Stochastic culling breaks our watertight guarantee, but we have not yet observed any holes in soft shadows generated with models ranging from 30K to 2M triangles. For example, Figure 5 shows a perceptual comparison of improved VIR with and without culling using HDR-VDP2 [7]; cool heatmap colors indicate little or no difference. Across this same range of models, we found that stochastic culling is most beneficial when subpixel triangles are common. This replicates Marrs et al.'s [9] finding that for large triangles (spanning dozens of pixels or more), VIR is less efficient than standard rasterization.
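The culling rule can be sketched as follows; `keep_triangle` and its seeded generator are illustrative names, not part of our shader implementation:

```python
import random

def keep_triangle(t_pp, rng):
    """Stochastic culling of sub-pixel triangles: a triangle covering a
    fraction t_pp of a pixel in the point-generation view is culled with
    probability 1 - 8*t_pp; triangles covering >= 1/8 pixel always survive."""
    if t_pp >= 1.0 / 8.0:
        return True
    return rng.random() < 8.0 * t_pp

# With t_pp = 1/16, the keep probability is 8 * (1/16) = 0.5, so roughly
# half of a large batch of such triangles should survive culling.
rng = random.Random(42)
survivors = sum(keep_triangle(0.0625, rng) for _ in range(10_000))
```

In a real pipeline this test would run per triangle in the geometry shader before point generation; the host-side loop here only illustrates the expected survival rate.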
+
+## 4 RESULTS
+
+We demonstrate the practicality and potential of improved VIR with soft shadows. Below, we offer comparisons to both high quality and high speed shadow algorithms, as well as a brief comparison to Marrs et al.'s implementation [9].
+
+### 4.1 High Quality Comparison
+
+As an evaluation platform, we used OpenGL 4.5 on a PC with an Intel i7-8700K @ 3.70 GHz CPU and an NVIDIA 1080Ti GPU, running Windows 10. We rendered several scenes, with detail concentrated in the central 30% of the field of view. All scenes were dynamic, rotating around themselves twice (720°) while lights remained stationary, casting moving shadows. We used 32-bit unsigned depth buffers with a resolution of $1024^2$. For each light source sample, we set the field of view to $45^\circ$.
+
+For VIR, we used all three of our improvements: a bufferless implementation, orthographic point generation, and stochastic culling. Like Marrs et al., we produced 128 views in four passes (32 per pass, the warp size of our GPU). As a high quality comparison, we used multiview rendering (MVR), which used 128 passes to create 128 standard shadow maps [12]. To compare the performance of these methods, we averaged GPU run-time and the number of points generated over 1256 frames of execution.
+
+Table 1 shows results for several models [10]. The leftmost column shows the number of triangles per model. The adjacent three show improved VIR's point cloud size, the time required to generate that point cloud, and the total time to generate VIR's point cloud and construct depth maps (results with stochastic culling in brackets). For comparison, the next column reports the total time taken by MVR to make depth maps, and the rightmost column reports performance improvement as the ratio of MVR time over VIR time, highlighted in blue. Since the illumination technique is the same for VIR and MVR, we do not include it in these timings. VIR renders these dynamic, complex soft shadows up to 3.4 times faster than MVR without stochastic culling, and up to 7 times faster with it.
+
+Figure 2 shows soft shadows generated by VIR and MVR. Though VIR is faster than MVR, its visual quality is quite similar to high quality MVR and stable under animation. VIR includes the hallmarks of high quality shadows, such as soft penumbras and contact hardening (sharper shadows closer to the light). Because VIR and MVR both use shadow mapping and differ only in how they generate depth buffers, both suffer the same artifacts (e.g. "peter panning" and acne). Note that the breaks in the dragon's shadow with VIR are smaller than in MVR: VIR's view independent sampling covers silhouettes more densely than view dependent MVR's.
+
+GPU Performance of VIR [with stochastic culling] Soft Shadows for 128 Views
+
+| Models (# tris) | VIR #points | VIR pt gen (ms) | pt gen + depth (ms) | MVR (ms) | × Faster |
+| --- | --- | --- | --- | --- | --- |
+| Tree (151.7K) | 275.0K [228.9K] | 0.82 | 3.80 [2.54] | 7.33 | 1.92 [2.88] |
+| Dragon (883.3K) | 684.6K [489.6K] | 12.91 | 16.58 [13.78] | 57.01 | 3.44 [4.13] |
+| Buddha (1.1M) | 586.1K [225.6K] | 8.49 | 21.52 [12.22] | 69.33 | 3.22 [5.67] |
+| Lucy (2.0M) | 1.1M [250.8K] | 12.91 | 38.00 [17.18] | 122.31 | 3.22 [7.12] |
+
+Table 1. Speed comparisons of View Independent Rendering (VIR) and Multiview Rasterization (MVR), with models in the left column, VIR in the middle three columns, and MVR to the right. We highlight VIR's performance improvements in blue. Results using stochastic culling are in brackets ([]).
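The speedup column is simply the ratio of MVR time to VIR's total (point generation plus depth map) time. The helper name below is illustrative; the numbers reproduce two entries of Table 1:

```python
def speedup(mvr_ms, vir_total_ms):
    """Rightmost column of Table 1: MVR render time divided by VIR's
    total (point generation + depth map) time for the same 128 views."""
    return mvr_ms / vir_total_ms

# Dragon row without culling: 57.01 ms MVR vs. 16.58 ms VIR
dragon = round(speedup(57.01, 16.58), 2)       # 3.44
# Lucy row with stochastic culling: 122.31 ms vs. 17.18 ms
lucy_culled = round(speedup(122.31, 17.18), 2)  # 7.12
```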
+
+### 4.2 High Speed Comparison
+
+To evaluate the use of improved VIR in a more practical, real time setting, we compare improved VIR to percentage-closer soft shadows (PCSS) [2]. We generated soft shadows using 16 views in 2.6 ms and compared them to an image generated by PCSS with 96 samples per pixel (32 blocker and 64 filter samples) in 2.5 ms. The resulting images are shown in Figure 3. Figure 4 shows the perceptual comparison of these images against a reference 128-view MVR solution using HDR-VDP2 [7]. The image generated by VIR has less error than PCSS, especially in the region where the rods cast shadows on the dragon.
+
+To further gauge the effect of triangle size on our improved VIR technique, we sorted triangles into three classes: subpixel triangles, with the longest side shorter than a pixel; supra-pixel 1 triangles, with the longest side between one and 10 pixels; and supra-pixel 2 triangles, with the longest side longer than 10 pixels. We then studied our technique on scenes containing mostly subpixel, mostly supra-pixel 1, and mostly supra-pixel 2 triangles. With a majority of subpixel triangles, we achieved a large speedup. With a large number of supra-pixel 1 polygons, VIR was still faster than MVR. But with supra-pixel 2 polygons, VIR did not do well, and MVR proved the better candidate. One way to handle a scene with a mix of polygon sizes is a hybrid of VIR and MVR, rendering small polygons in a VIR pass and large polygons in an MVR pass.
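The three size classes reduce to a threshold test on the longest projected side. The function name is illustrative, and the handling of boundary cases at exactly 1 or 10 pixels is our assumption (the text does not specify them):

```python
def classify_triangle(longest_side_px):
    """Size classes from Sec. 4.2, keyed on the longest projected side:
    sub-pixel (< 1 px), supra-pixel 1 (1-10 px), supra-pixel 2 (> 10 px).
    Boundary values are assigned to the smaller class by assumption."""
    if longest_side_px < 1.0:
        return "subpixel"
    if longest_side_px <= 10.0:
        return "supra-pixel 1"
    return "supra-pixel 2"
```

A hybrid renderer could route each triangle through this test, sending the first two classes to a VIR pass and the last to an MVR pass.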
+
+
+
+Fig. 2. Soft shadows generated using MVR (left) and using our VIR implementation (right). Both generate 128 depth maps. MVR takes 57 ms to generate them, whereas our implementation generates points and depth maps in 16.6 ms.
+
+
+
+Fig. 3. (Left) Shadow rendered using PCSS (32 blocker and 64 PCF samples). Shadow artifacts can be seen on the dragon, where two rods cast shadows on it. (Right) Soft shadows rendered with our VIR implementation using 16 views and all improvements. PCSS takes 2.5 ms, whereas our implementation takes 2.6 ms while delivering better quality.
+
+
+
+Fig. 4. A perceptual comparison of Figure 3's images. HDR-VDP2 compares each to a 128-view MVR render of the same scene. Red indicates a more perceivable difference.
+
+### 4.3 Improvements Comparison
+
+Our VIR improvements did not improve soft shadow quality or speed over Marrs et al.'s implementation. We expect our improvements to show their merit for other, more demanding multiview effects, such as environment mapping and defocus blur.
+
+We did not directly compare improved VIR to the original on the same hardware, but Marrs et al.'s results are similar to our own, on comparable hardware. Although improved VIR did generate sparser - but still watertight - point clouds, the effort required to do so canceled out performance gains for soft shadowing. However, soft shadowing shader loads are minimal: only a depth comparison is required. Other multiview effects require much more complex shaders and should realize the performance benefits of our VIR improvements, making those effects practical as well.
+
+
+
+Fig. 5. A perceptual comparison of improved VIR with and without stochastic culling. HDR-VDP2 shows little or no difference.
+
+## 5 LIMITATIONS, CONCLUSIONS AND FUTURE WORK
+
+Marrs et al.'s original VIR implementation was able to cull points in the compute shader by comparing several local points and rendering only the closest (unoccluded) sample for each view pixel. Our bufferless implementation does not use compute shaders, and cannot perform this local point culling. More significantly, our stochastic triangle culling breaks the guarantee of watertight sampling, though it has not created holes in our testing.
+
+Despite these limitations, the VIR improvements we describe here make VIR immediately more practical and promise wider utility in the future. With a bufferless implementation, developers need no longer risk runtime overflows. We also show results demonstrating higher quality soft shadows than PCSS at practical rendering speeds. In the future, we plan to explore the potential of improved VIR with multiview effects having more demanding shading loads, such as environment mapping, diffuse global illumination and defocus or motion blur. We will also study global probabilistic limits for holes resulting from stochastic triangle culling. Finally, we plan to examine applications of improved VIR to light field displays, which demand tens or hundreds of views in every frame.
+
+## REFERENCES
+
+[1] David Eberly. 1999. Distance between point and triangle in 3D. Magic Software (1999). http://www.magic-software.com/Documentation/pt3tri3.pdf
+
+[2] Randima Fernando. 2005. Percentage-closer soft shadows. In ACM SIGGRAPH 2005 Sketches. ACM, 35.
+
+[3] Markus Gross and Hanspeter Pfister. 2011. Point-based graphics. Elsevier.
+
+[4] James T Kajiya. 1986. The rendering equation. In ACM Siggraph Computer Graphics, Vol. 20(4). ACM, 143-150.
+
+[5] Jaakko Lehtinen, Timo Aila, Jiawen Chen, Samuli Laine, and Frédo Durand. 2011. Temporal light field reconstruction for rendering distribution effects. In ACM Trans. Graphics, Vol. 30(4). ACM, 55.
+
+[6] Marc Levoy and Turner Whitted. 1985. The use of points as display primitives (Technical Report TR 85-022).
+
+[7] Rafat Mantiuk, Kil Joong Kim, Allan G Rempel, and Wolfgang Heidrich. 2011. HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Trans. Graphics 30, 4 (2011), 40.
+
+[8] Adam Marrs, Benjamin Watson, and Christopher Healey. 2018. View-warped Multi-view Soft Shadows for Local Area Lights. J. Comp. Graphics Techniques 7, 3 (2018).
+
+[9] Adam Marrs, Benjamin Watson, and Christopher G Healey. 2017. Real-time view independent rasterization for multi-view rendering. In Proc. Eurographics: Short Papers. Eurographics Association, 17-20.
+
+[10] Morgan McGuire. 2017. Computer Graphics Archive. https://casual-effects.com/ data
+
+[11] Peter Shirley, Timo Aila, Jonathan Cohen, Eric Enderton, Samuli Laine, David Luebke, and Morgan McGuire. 2011. A local image reconstruction algorithm for stochastic rendering. In Symp. Interactive 3D Graphics & Games. ACM, 5.
+
+[12] Lance Williams. 1978. Casting curved shadows on curved surfaces. In ACM Siggraph Computer Graphics, Vol. 12(3). ACM, 270-274.
+
+547
+
+548
+
+549
+
+550
+
+551
+
+552
+
+553
+
+554
+
+555
+
+556
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..eeb8b8c83547d5fd62938dda21e6da8ef6a09814
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/_xm2evILU8m/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,183 @@
+§ IMPROVING VIEW INDEPENDENT RENDERING: TOWARDS ROBUST, PRACTICAL MULTIVIEW EFFECTS
+
+This paper describes improvements to view independent rendering (VIR) designed to make its immediate application to soft shadows more practical, and its future application to other multiview effects such as reflections and depth of field more promising. Realtime rasterizers typically realize multiview effects by rendering a scene from multiple viewpoints, requiring multiple passes over scene geometry. VIR avoids this necessity by crafting a watertight point cloud and rendering it from multiple viewpoints in a single pass. We make VIR immediately more practical with an unbuffered implementation that avoids possible overflows, and improve its potential with more efficient sampling achieved with orthographic projection and stochastic culling. With these improvements, VIR continues to generate higher quality real time soft shadows than percentage-closer soft shadows (PCSS), in comparable time.
+
+§ ACM REFERENCE FORMAT:
+
+. 2020. Improving View Independent Rendering: Towards Robust, Practical Multiview Effects. 1, 1 (April 2020), 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
+
+§ 1 INTRODUCTION
+
+As computer graphics hardware has improved, so has its interactive imagery, moving from line drawings to filled polygons, to textured surfaces with specular reflections. However, further improvements in visual realism - effects such as soft shadows, depth of field, and object reflections - have been hindered by current hardware, which requires multiple model traversals to render the many views needed to sample area lights, different focal depths, and reflections.
+
+View-independent rasterization (VIR) avoids the complexity of multiple rendering passes [9] by using points as a display primitive. For every frame, it carefully transforms input triangles into a point cloud specialized to the current set of views. VIR then renders these views in parallel using the point cloud, with an order of magnitude fewer passes over the geometry.
+
+This short paper presents our contributions to VIR, designed to increase its immediate practicality and future potential:
+
+ * Practicality: We improve VIR to eliminate the use of a point buffer, freeing developers from the necessity of buffer management to avoid overflow.
+
+ * Potential: We improve VIR's orthogonal projection, making sampling more parsimonious and "watertight" (without holes). We also introduce stochastic culling of sub-pixel triangles to reduce sampling rates further.
+
+We verify that our improvements do not compromise quality or performance by using them to render soft shadows. When rendering shadows comparable to those produced with a traditional, high-quality multipass technique, VIR continues to produce them in nearly an order of magnitude less time. When making shadows at practical few-millisecond speeds, VIR shadows are still of higher quality than percentage-closer soft shadows (PCSS) [2]. While our improvements in sampling do not realize performance improvements for soft shadows, we anticipate that they will for other multiview effects with heavier shader loads.
+
+§ 2 RELATED WORK
+
+Rendering realistic imagery requires accurate simulation of light flow. However, accurate sampling of the light flow integral [4] can be difficult, particularly for effects such as soft shadows, depth of field (defocus blur), motion blur, and indirect reflections [5]. With rasterization hardware, often the fastest way to produce such samples is multiview rendering: storing many off-screen views in buffers, and combining them to produce a final view. However, the high cost of multiview rendering often requires sparse sampling and filtering to reduce resulting noise [11].
+
+To sidestep rasterization's limitations for multiview rendering, we rely on points [6]. In today's applications, triangles often outnumber pixels, leading many to argue that points are a better rendering primitive [3]. Yet points are not widely used, since their discontinuity can create "holes" when views change. Existing point renderers therefore either use dense point clouds that render slowly, or sparse clouds whose complex reconstruction also renders slowly or produces low-quality imagery.
+
+To improve point rendering and support multiview rendering, VIR [9] exploits rasterization hardware, which efficiently transforms triangles into points. For each frame, VIR generates a cloud of points customized to the current set of views in real time. It then renders the cloud in parallel into multiple views, reducing the number of geometry passes by a factor of ten. To accomplish this, for any triangle visible in at least one view, the geometry shader computes specialized viewing and projection matrices that center the triangle, orient it parallel to the view plane, and achieve a watertight sampling rate. It then applies the matrices to the triangle and rasterizes it to generate points. Next, the fragment shader writes each point to a buffer. When all points have been generated, the compute shader passes over this buffer, transforming and projecting each point into multiple views.
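The per-triangle transform described above can be sketched outside the graphics pipeline. The following pure-Python fragment is our own illustration, not Marrs et al.'s code (names such as `vir_basis` and `center_and_orient` are ours): it translates a triangle's centroid to the origin and rotates the triangle parallel to the view plane, the first step of VIR's specialized viewing matrix.

```python
# Hypothetical sketch (ours) of centering a triangle and orienting it
# parallel to the view plane of the point-generation camera.
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def norm(a):
    l = math.sqrt(sum(x * x for x in a))
    return [x / l for x in a]

def vir_basis(tri):
    """Orthonormal basis whose third axis is the triangle's unit normal."""
    n = norm(cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])))  # unit normal
    t = norm(sub(tri[1], tri[0]))                              # in-plane tangent
    b = cross(n, t)                                            # bitangent
    return t, b, n

def center_and_orient(tri):
    """Express the triangle in a frame where it lies in the z = 0 plane."""
    c = [sum(v[i] for v in tri) / 3.0 for i in range(3)]       # centroid
    t, b, n = vir_basis(tri)
    out = []
    for v in tri:
        d = sub(v, c)
        out.append([sum(d[i] * t[i] for i in range(3)),
                    sum(d[i] * b[i] for i in range(3)),
                    sum(d[i] * n[i] for i in range(3))])
    return out
```

After this transform, an orthographic projection samples the triangle at a uniform rate, which is what VIR's watertight sampling relies on.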
+
+§ 3 IMPROVING VIEW INDEPENDENT RASTERIZATION
+
+We improve on Marrs et al.'s VIR implementation [9] in three ways to increase its practicality and potential. First, our implementation is bufferless, ending the possibility of overflow and simplifying VIR's use in practice. Second, we improve Marrs et al.'s orthogonal projection matrix, reducing the size of the resulting point cloud. Finally, we stochastically cull sub-pixel triangles, shrinking the point cloud further to improve speed. The resulting improved algorithm is already practical, with better soft shadows than PCSS at comparable speeds. Improved VIR also has the potential to bring both this practicality and a significant performance improvement to more complex multiview effects.
+
+
+§ 3.1 BUFFERLESS VIR
+
+Marrs et al.'s VIR implementation stored points in a buffer, which the compute shader then rendered to multiple views. The buffer size must be set before run time, making overflows possible. Instead, we use a bufferless, entirely in-pipeline scheme that eliminates the possibility of run time buffer overflows. Rather than sending each point to a buffer, the fragment shader writes the point to multiple off-screen buffers for deferred shading. While we did not directly compare the speed of our unbuffered implementation to Marrs et al.'s buffered implementation, we expect the performance to be similar, as described in a related comparison by Marrs et al. [8].
+
+§ 3.2 WATERTIGHT AND EFFICIENT ORTHOGRAPHIC VIR
+
+Marrs et al.'s computation of the watertight multiview sampling rate ${s}_{mv}$ assumed that the fields of view of the point-generation, off-screen, and eye views were identical. But this cannot be true when the point-generation view uses orthogonal projection and the other views do not. Unfortunately, point generation in perspective leads to sampling inefficiencies, with points spread unevenly across each triangle due to perspective distortion.
+
+Fig. 1. Improved Orthogonal Sampling Rate.
+
+Algorithm 1 shows the VIR algorithm, improved to use orthogonal projection. For each polygon, we find ${\rho }_{mv}$, the maximum point density on the projected polygon's surface across all views, as illustrated in Figure 1. For each polygon $P$, we first find the closest point on the polygon from a given view $v$. This point has the highest sample density on the polygon for that view. We compute the area of a reverse-projected pixel centered on that point, ${\operatorname{area}}_{P,v,p}$, by reverse-projecting its corners onto the polygon in model space. Across all views, the maximum sampling density ${\rho }_{mv}$ is given by Equation (1), and the orthogonal scaling factor ${s}_{mv}$ by Equation (2),
+
+$$
+{\rho }_{mv} = \max_{v \in V}\left( \frac{{\operatorname{area}}_{p}}{{\operatorname{area}}_{P,v,p}}\right) \tag{1}
+$$
+
+$$
+{s}_{mv} = \max \left( {{s}_{mv}, w \times \sqrt{\frac{{\rho }_{\mathrm{ortho}}}{{\rho }_{mv}}}}\right) \tag{2}
+$$
+
+where $V$ is the set of all view centers of destination views, $w$ is the perspective distortion, ${\operatorname{area}}_{p}$ is the area of the polygon in model space, and ${\rho }_{\mathrm{ortho}}$ is the sampling density for VIR's orthographic projection, which depends on the chosen viewing volume.
+
+Algorithm 1 View-Independent Rasterization
+
+```
+In the geometry shader stage:
+for each polygon P do
+    for each viewpoint v do
+        c_p          = closest point on P from the viewpoint
+        area_p       = area of the polygon P in model space
+        area_{P,v,p} = area of the reverse-projected pixel centered at c_p
+        rho_mv       = max(rho_mv, area_p / area_{P,v,p})
+    end for
+    # compute the orthogonal scaling factor
+    s_mv = max(s_mv, w * sqrt(rho_ortho / rho_mv))
+    # apply the VIR matrix T_VIR and the projection matrix T_ortho to P
+    P' = T_ortho * T_VIR * P
+    send the transformed polygon P' to the rasterizer
+end for
+
+In the fragment shader stage:
+for each viewpoint v do
+    write the generated point into the corresponding buffer
+    using atomic write operations
+end for
+```
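The density loop of Algorithm 1 and Equations (1) and (2) can be transcribed directly. The sketch below is our own Python illustration, not the paper's code; `pixel_areas` stands in for the reverse-projected pixel areas $\operatorname{area}_{P,v,p}$ (one per destination view), whose computation requires the full camera model and is omitted here.

```python
# Hedged sketch (ours) of the per-polygon density and scale computation.
import math

def multiview_scale(area_p, pixel_areas, w, rho_ortho, rho_mv=0.0, s_mv=0.0):
    """Return (rho_mv, s_mv): the running maxima of Equations (1) and (2).

    area_p      -- polygon area in model space
    pixel_areas -- reverse-projected pixel area for each destination view
    w           -- perspective distortion factor
    rho_ortho   -- sampling density of the orthographic point-generation view
    """
    for area_pvp in pixel_areas:                       # loop over views
        rho_mv = max(rho_mv, area_p / area_pvp)        # Equation (1)
    s_mv = max(s_mv, w * math.sqrt(rho_ortho / rho_mv))  # Equation (2)
    return rho_mv, s_mv
```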
+
+The orthographic projection matrix is given in Equation (3).
+
+$$
+{T}_{\text{ortho}} = \left\lbrack \begin{matrix} {s}_{mv} & 0 & 0 & 0 \\ 0 & {s}_{mv} & 0 & 0 \\ 0 & 0 & \frac{2}{{z}_{\text{near}} - {z}_{\text{far}}} & \frac{{z}_{\text{far}} + {z}_{\text{near}}}{{z}_{\text{near}} - {z}_{\text{far}}} \\ 0 & 0 & 0 & 1 \end{matrix}\right\rbrack \tag{3}
+$$
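As a small numeric sanity check (ours, not from the paper), the depth row of a standard orthographic matrix of this kind should map eye-space depths $z = -z_{near}$ and $z = -z_{far}$ to the NDC extremes $-1$ and $+1$ under the usual right-handed convention (camera looking down $-z$):

```python
# Sketch (ours) of the orthographic matrix's depth mapping, assuming the
# conventional OpenGL-style right-handed eye space.
def t_ortho(s_mv, z_near, z_far):
    return [[s_mv, 0.0, 0.0, 0.0],
            [0.0, s_mv, 0.0, 0.0],
            [0.0, 0.0, 2.0 / (z_near - z_far),
             (z_far + z_near) / (z_near - z_far)],
            [0.0, 0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

M = t_ortho(1.0, 0.1, 100.0)
near_ndc = apply(M, [0.0, 0.0, -0.1, 1.0])[2]    # depth at the near plane
far_ndc = apply(M, [0.0, 0.0, -100.0, 1.0])[2]   # depth at the far plane
```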
+
+Our new orthogonal projection technique had minimal impact on VIR's performance and image quality, with orthogonal and perspective projection (as described by Marrs et al. [9]) producing nearly identical soft shadows at similar speeds. While orthogonal projection generated fewer points than Marrs et al.'s perspective projection, it required more time to do so, particularly for models with more triangles. For example, when generating 128 views for a model with 2 million triangles, orthogonal point generation rendered 122K points in 18.74 ms, whereas perspective sampling generated 238K points in 17.55 ms. However, we expect that for most other multiview effects (e.g., environment mapping), increased shading loads will make orthogonal point generation's smaller point clouds advantageous.
+
+§ 3.3 VIR WITH STOCHASTIC CULLING
+
+To improve speed further, we stochastically cull (and avoid generating points for) triangles that span less than $1/8$ of a pixel in VIR's point generation view. The smaller the proportion ${T}_{pp}$ of the pixel covered by the triangle, the more likely the triangle will be culled, with probability $1 - \left( {8 \times {T}_{pp}}\right)$. Stochastic culling breaks our watertight guarantee, but we have not yet observed any holes in soft shadows generated with models ranging from 30K to 2M triangles. For example, Figure 5 shows a perceptual comparison of improved VIR with and without culling using HDR-VDP2 [7]; cool heatmap colors indicate little or no difference. Across this same range of models, we found that stochastic culling is most beneficial when subpixel triangles are common. This replicates Marrs et al.'s [9] finding that for large triangles (spanning dozens of pixels or more), VIR is less efficient than standard rasterization.
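The culling rule above fits in a few lines. This is our own sketch of the survival test, assuming `t_pp` is the fraction of a pixel the triangle covers; the `rng` hook is ours, for determinism in testing.

```python
# Hedged sketch (ours) of stochastic sub-pixel triangle culling: a triangle
# covering fraction t_pp < 1/8 of a pixel is culled with probability
# 1 - (8 * t_pp), i.e. it survives with probability 8 * t_pp.
import random

def keep_triangle(t_pp, rng=random.random):
    if t_pp >= 1.0 / 8.0:
        return True                  # large enough: never culled
    return rng() < 8.0 * t_pp        # survival probability 8 * t_pp
```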
+
+§ 4 RESULTS
+
+We demonstrate the practicality and potential of improved VIR with soft shadows. Below, we offer comparisons to both high-quality and high-speed shadow algorithms, as well as a brief comparison to Marrs et al.'s implementation [9].
+
+§ 4.1 HIGH QUALITY COMPARISON
+
+As an evaluation platform, we used OpenGL 4.5 on a PC with an Intel i7-8700K @ 3.70 GHz CPU and an NVIDIA 1080Ti GPU, running Windows 10. We rendered several scenes, with detail concentrated in the central 30% of the field of view. All scenes were dynamic, rotating around themselves twice (720°), while lights remained stationary, casting moving shadows. We used 32-bit unsigned depth buffers with a resolution of ${1024}^{2}$. For each light source sample, we set the field of view to ${45}^{ \circ }$.
+
+For VIR, we used all three of our improvements: a bufferless implementation, orthographic point generation, and stochastic culling. Like Marrs et al., we produced 128 views in four passes (32 per pass, the warp size of our GPU). As a high quality comparison, we used multiview rendering (MVR), which used 128 passes to create 128 standard shadow maps [12]. To compare the performance of these methods, we averaged GPU run-time and the number of points generated over 1256 frames of execution.
+
+Table 1 shows results for several models [10]. The leftmost column shows the number of triangles per model. The adjacent three show improved VIR's point cloud size, the time required to generate that point cloud, and the total time to generate VIR's point cloud and construct depth maps (with results including stochastic culling in brackets). For comparison, the next column reports the total time taken by MVR to make depth maps, and the rightmost column reports the performance improvement as the ratio of MVR time over VIR time, highlighted in blue. Because the illumination technique is identical for VIR and MVR, we do not include its cost. VIR renders these dynamic, complex soft shadows up to 3.4 times faster than MVR without stochastic culling, and up to 7 times faster with it.
+
+Figure 2 shows soft shadows generated by VIR and MVR. Though VIR is faster than MVR, its visual quality is quite similar to high quality MVR and stable under animation. VIR includes the hallmarks of high quality shadows, such as soft penumbras and contact hardening (sharper shadows closer to the light). Because VIR and MVR both use shadow mapping and differ only in how they generate depth buffers, both suffer the same artifacts (e.g. "peter panning" and acne). Note that the breaks in the dragon's shadow with VIR are smaller than in MVR; VIR's view independent samples silhouettes more densely than view dependent MVR.
+
+GPU Performance of VIR [with stochastic culling] Soft Shadows for 128 Views
+
+| Models (# tris) | VIR #points | VIR pt gen (ms) | pt gen + depth (ms) | MVR (ms) | × Faster |
+| --- | --- | --- | --- | --- | --- |
+| Tree (151.7K) | 275.0K [228.9K] | 0.82 | 3.80 [2.54] | 7.33 | 1.92 [2.88] |
+| Dragon (883.3K) | 684.6K [489.6K] | 12.91 | 16.58 [13.78] | 57.01 | 3.44 [4.13] |
+| Buddha (1.1M) | 586.1K [225.6K] | 8.49 | 21.52 [12.22] | 69.33 | 3.22 [5.67] |
+| Lucy (2.0M) | 1.1M [250.8K] | 12.91 | 38.00 [17.18] | 122.31 | 3.22 [7.12] |
+
+Table 1. Speed comparison of view-independent rasterization (VIR) and multiview rendering (MVR), with models in the left column, VIR in the middle three, and MVR to the right. We highlight VIR's performance improvements in blue. Results using stochastic culling are in brackets ([]).
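The rightmost column of Table 1 can be re-derived from the timing columns as the ratio of MVR time to total VIR time. The sketch below (our own transcription of the table's values) reproduces the reported speedups to within rounding.

```python
# Re-deriving Table 1's "x Faster" column as mvr_ms / vir_total_ms.
# Values transcribed from the table; bracketed (stochastic-culling) times
# are the second entry of each tuple.
rows = {  # model: (vir_total_ms, vir_total_culled_ms, mvr_ms)
    "Tree":   (3.80, 2.54, 7.33),
    "Dragon": (16.58, 13.78, 57.01),
    "Buddha": (21.52, 12.22, 69.33),
    "Lucy":   (38.00, 17.18, 122.31),
}
speedups = {m: (mvr / v, mvr / vc) for m, (v, vc, mvr) in rows.items()}
```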
+
+§ 4.2 HIGH SPEED COMPARISON
+
+To evaluate the use of improved VIR in a more practical, real-time setting, we compare improved VIR to percentage-closer soft shadows (PCSS) [2]. We generated soft shadows using 16 views in 2.6 ms and compared the result to an image generated by PCSS with 96 samples per pixel (32 blocker and 64 filter samples) in 2.5 ms. The resulting images are shown in Figure 3. Figure 4 shows a perceptual comparison of these images against a reference 128-view MVR solution using HDR-VDP2 [7]. The image generated by VIR has less error than PCSS, especially in the region where the rods cast shadows on the dragon.
+
+To further gauge the effect of triangle size on our improved VIR technique, we sorted triangles into three classes: subpixel triangles, whose longest side is shorter than a pixel; supra-pixel 1 triangles, whose longest side is between one and 10 pixels; and supra-pixel 2 triangles, whose longest side is longer than 10 pixels. We then studied our technique on scenes containing mostly subpixel, mostly supra-pixel 1, and mostly supra-pixel 2 triangles. With a majority of subpixel triangles, we achieved a substantial speedup. With a large number of supra-pixel 1 polygons, VIR was still faster than MVR. But with supra-pixel 2 polygons, VIR did not do well, and MVR proved the better candidate. One way to handle a scene with a mix of polygon sizes is a hybrid of the VIR and MVR methods, rendering small polygons in a VIR pass and large polygons in an MVR pass.
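The three size classes reduce to a simple threshold test. The function below is our own illustration (the assignment of the boundary at exactly 10 pixels to supra-pixel 1 is our assumption; the paper does not specify it).

```python
# Hypothetical classifier (ours) for the triangle size classes of Sec. 4.2,
# keyed on the projected length in pixels of the triangle's longest side.
def classify(longest_side_px):
    if longest_side_px < 1.0:
        return "subpixel"
    if longest_side_px <= 10.0:       # boundary assignment is our assumption
        return "supra-pixel 1"
    return "supra-pixel 2"
```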
+
+Fig. 2. Soft shadows generated using MVR (left) and our VIR implementation (right). Both generate 128 depth maps. MVR takes 57 ms to generate them, whereas our implementation generates points and depth maps in 16.6 ms.
+
+Fig. 3. (Left) Shadows rendered using PCSS (32 blocker and 64 PCF samples). Shadow artifacts can be seen on the dragon, where two rods cast shadows on it. (Right) Soft shadows rendered with our VIR implementation using 16 views and all improvements. PCSS takes 2.5 ms, whereas our implementation takes 2.6 ms while delivering better quality.
+
+Fig. 4. A perceptual comparison of Figure 3's images. HDR-VDP2 compares each to a 128-view MVR render of the same scene. Red indicates a more perceivable difference.
+
+§ 4.3 IMPROVEMENTS COMPARISON
+
+Our VIR improvements did not improve soft shadow quality or speed over Marrs et al.'s implementation. We expect our improvements to show their merit for other, more demanding multiview effects, such as environment mapping and defocus blur.
+
+We did not directly compare improved VIR to the original on the same hardware, but Marrs et al.'s results are similar to our own on comparable hardware. Although improved VIR did generate sparser (but still watertight) point clouds, the effort required to do so canceled out performance gains for soft shadowing. However, soft shadowing shader loads are minimal: only a depth comparison is required. Other multiview effects require much more complex shaders and should realize the performance benefits of our VIR improvements, making those effects practical as well.
+
+Fig. 5. A perceptual comparison of improved VIR with and without stochastic culling. HDR-VDP2 shows little or no difference.
+
+§ 5 LIMITATIONS, CONCLUSIONS AND FUTURE WORK
+
+Marrs et al.'s original VIR implementation was able to cull points in the compute shader by comparing several local points and rendering only the closest (unoccluded) sample for each view pixel. Our bufferless implementation does not use compute shaders, and cannot perform this local point culling. More significantly, our stochastic triangle culling breaks the guarantee of watertight sampling, though it has not created holes in our testing.
+
+Despite these limitations, the VIR improvements we describe here make VIR immediately more practical and promise wider utility in the future. With a bufferless implementation, developers need no longer risk runtime overflows. We also show results demonstrating higher quality soft shadows than PCSS at practical rendering speeds. In the future, we plan to explore the potential of improved VIR with multiview effects having more demanding shading loads, such as environment mapping, diffuse global illumination and defocus or motion blur. We will also study global probabilistic limits for holes resulting from stochastic triangle culling. Finally, we plan to examine applications of improved VIR to light field displays, which demand tens or hundreds of views in every frame.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6f336b1ef0e91581d1ad241de74b98c2748308e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,329 @@
+# Evaluation of Body-Referenced Graphical Menus in Virtual Environments
+
+Irina Lediaeva*
+
+University of Central Florida
+
+Joseph J. LaViola Jr.**
+
+University of Central Florida
+
+## Abstract
+
+Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, adopted 2D menus in VEs are still lacking a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.
+
+Keywords: 3D Menus; Menu Placements; Menu Selection Techniques; Menu Shapes; Virtual Reality
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality; Human-centered computing-Interaction design-Interaction design process and methods-User interface design
+
+## 1 INTRODUCTION
+
+Graphical menus are an integral and essential component of user interfaces and have been widely used in 2D desktop applications. Given their wide popularity in desktop applications, graphical menus have also been integrated into virtual environments. Graphical menus in 3D user interfaces are adapted 2D menus that have proven to be a successful system control technique [16]. For example, Angus and Sowizral [1] developed a hand-held flat panel display that was embedded in a virtual environment and provided a familiar metaphor within the VE context for users who had used graphical menus in desktop applications.
+
+However, once the adapted 2D menus are integrated into 3D space, there are also design challenges that need resolving, such as how best to reach a menu item in 3D space as well as the lack of tactile feedback [9]. Moreover, there are other considerations for designing and implementing graphical menus as system control techniques in 3D user interfaces, such as positioning of graphical user interface elements in space [18], choosing an appropriate selection technique [13], representation of the menu [20] and its overall structure [22].
+
+There are many varieties of graphical menus in virtual environments that have been extensively evaluated by researchers, including the TULIP menus [5], spin and ring menus [12], and various menu shapes and menu element sizes [4]. Moreover, Mine et al. [18] have explored body-referenced graphical menus, in which the menu items are fixed to the user's body (and not the head). Such body-referenced graphical menus have several advantages, such as providing "a physical real-world frame of reference, a more direct and precise sense of control, and an 'eyes off' interaction" where users do not have to constantly watch what they are doing. For example, Azai et al. [2,3,4] proposed a menu system that appears at various body parts, such as users' hands, arms, upper legs, and abdomen. However, we found that body-referenced graphical menus are an insufficiently investigated research topic. Such menus are still emerging in the field of virtual environments, and there is a lack of usability studies that would provide design guidelines for developers. This work attempts to close this gap and to provide a comprehensive evaluation of body-referenced graphical menus in virtual environments.
+
+Therefore, the objective of this research is to conduct a comparative study to provide insights and gain a deeper understanding of how to design body-referenced graphical menus for virtual environments that support fast completion times, minimize errors, and feel natural [5]. During the study, we explored and compared various menu placements: on the hand [3], on the arm [2], attached to the participant's waist [4], and displayed in the virtual environment (in a fixed position in the world) [22]. We examined two menu shapes (linear and radial) [22] and three menu selection techniques (ray-casting [15], head-tracking [21], and eye-tracking [21,23]). Since the response time and ease of use of a graphical menu can significantly affect user experience, we gathered typical metrics such as average selection time, the number of wrong selections (error rate), target re-entries, overall efficiency, user satisfaction, and comfort [9].
+
+To evaluate and compare combinations of menu placements, shapes, and selection techniques, we conducted a user study with 24 participants. We make the following contribution to research on body-referenced graphical menus in virtual environments: design recommendations and insights on user preference for body-referenced graphical menus, including recommendations for various menu shapes and selection techniques.
+
+---
+
+* irinalediaeva@knights.ucf.edu
+
+** jjl@cs.ucf.edu
+
+---
+
+
+
+Figure 1: Sample screenshots from the virtual environment. We compared spatial, arm, hand, and waist menu placements (left to right).
+
+## 2 RELATED WORK
+
+Graphical menus in a virtual environment can be classified based on various criteria, such as menu technique (adapted 2D menus, 1-DOF menus, 3D widgets), placement (world-referenced, head-referenced, body-referenced, etc.), shape (linear, radial, pie menus, etc.), and selection (pointer-directed, gaze-directed, device-directed, etc.). Therefore, the following criteria should be considered when designing graphical menus in a virtual environment: placement, representation and structure (e.g., the spatial layout of the menu, the number of menu items, their size, and the distance between menu items), and selection [16]. Thus, we situate our study within three main streams of research: 1) menu placements, 2) menu shapes, and 3) menu selection techniques.
+
+### 2.1 Menu Placements
+
+The placement of the menu influences the user's ability to access it (good placement provides a spatial reference) and the amount of menu occlusion in the environment [16]. Feiner et al. [10] first addressed the placement considerations and created a menu taxonomy where the graphical menus can be placed at a fixed location in the virtual world (world-referenced), connected to a virtual object (object-referenced), attached to the user's head (head-referenced) or the rest of the body (body-referenced), or placed in reference to a physical object (device-referenced).
+
+Menu systems that employ the user's body as a graphical menu have been proposed by multiple researchers [2, 3, 4, 18]. For example, Azai et al. proposed a method of displaying the graphical menu in augmented reality on the user's forearm [2] and a menu system that appears at various body parts including not only hands or arms but also the upper legs and abdomen [4]. The researchers found that placing the graphical menu on the body enables the user to operate the menus comfortably and freely [4].
+
+Some research has been done on how to use menus in ways that depart from the typical 2D desktop metaphor. For example, Bowman et al. [6] evaluated the usefulness of letting the participant's fingers perform menu item selection using finger-contact gloves (Pinch Gloves), where the menu items were assigned to different fingers. Another way the menus can be connected to the body is through the body referential zones that are part of an "at-hand" interface [25]. For example, a tool belt surrounding the user may allow them to select objects or options by reaching to a certain location and making a selection. In this study, we particularly focus on investigating body-referenced graphical menus (arm, hand, and waist menus) and comparing them with a conventional world-referenced spatial menu.
+
+### 2.2 Menu Shapes
+
+The items of the graphical menu can be organized in different ways, adopting a radial shape, where the menu items have a circular form, or linear forms, where the menu items have a rectangular form, among other possible configurations (e.g., pie and ring menus). Researchers have also explored various layouts or shapes of the graphical menus in various environments. For example, Callahan et al. [7] found that menu items in a circular layout perform better in terms of selection time compared to a linear layout in a 2D plane. Similarly, Komerska et al. [17] found that selection using the pie menu for a 3D haptic enhanced environment is considerably faster and more accurate than linear menus. Additionally, Gebhardt et al. [11] presented a formal evaluation of hierarchical pie menus in a virtual environment. Their results indicated high performance and efficient design of this menu type in virtual reality applications. Monteiro et al. [19] found that even though linear and radial menus performed well, the users still preferred the traditional linear menu type and the fixed wall placement of the menu. In this paper we focus on two frequently and widely used types of menus in a virtual environment: linear and radial [22].
+
+### 2.3 Menu Selection Techniques
+
+A menu selection is another form of interaction that is derived from desktop 2D user interfaces. As with desktop menu systems, the user is presented with a list of choices from which they need to select a corresponding menu item. A common selection technique that emulates 2D user interface techniques is where a direction selector or a controller is used to point at, scroll through, highlight, and "click" or trigger a controller button to select various menu items [25].
+
+Researchers have proposed different selection techniques for selecting menu items: ray-casting, head-, eye-, and gesture-based selection techniques. Ray-casting is one of the most well-known menu selection techniques, where a ray is projected from the hand position to the plane of the graphical menu [15]. Further, Qian et al. [21] investigated eye-based and head-based selection techniques and concluded that the eye-only selection offered the worst performance in terms of error rate and selection times.
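Ray-casting selection reduces to a ray-plane intersection followed by a hit test on the menu items. The sketch below is our own illustration of the intersection step, not the implementation of [15]; all names are ours.

```python
# Minimal sketch (ours) of ray-casting menu selection: intersect a ray cast
# from the hand position with the plane of the graphical menu.
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def sub(a, b): return [a[i] - b[i] for i in range(3)]

def ray_plane(origin, direction, plane_point, plane_normal):
    """Intersection point of the ray with the menu plane, or None."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                    # ray parallel to the menu plane
    t = dot(sub(plane_point, origin), plane_normal) / denom
    if t < 0:
        return None                    # menu plane is behind the hand
    return [origin[i] + t * direction[i] for i in range(3)]
```

The returned point would then be compared against each menu item's bounds to decide which item is highlighted.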
+
+To the best of our knowledge, this study is the first to systematically explore graphical menus in a virtual environment with respect to menu placements, shapes, and selection techniques. The main contribution of this study is a set of design guidelines for developing body-referenced graphical menus in virtual environments.
+
+## 3 USER STUDY
+
+We conducted a user study to evaluate a variety of graphical menu placements (spatial, arm, hand, waist) in a virtual environment (Figures 1 and 2), as well as menu shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). The following sections describe the design, tasks, and measurements.
+
+### 3.1 Study Design
+
+This within-subjects study had 3 independent variables: menu placement (spatial, arm, hand, waist), menu shape (linear and radial), and menu selection technique (ray-casting with a controller device, head, and eye gaze). In total, we had $4 \times 2 \times 3 = 24$ conditions, and for each condition the participant conducted 10 trials, making a total of 240 selections per participant. The conditions were presented to each user in a random order based on a Latin square for constructing "Williams designs" [24]. For each condition, users were asked to select 10 randomly generated items (one item at a time).
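A Williams-design ordering can be produced with the standard construction for an even number of conditions; each condition appears once per row, and every ordered pair of adjacent conditions occurs exactly once across rows. The helper below is our own illustration (demonstrated for a small n, though the same construction applies to n = 24).

```python
# Sketch (ours) of the standard Williams Latin square construction for an
# even number of conditions n: the base row 0, 1, n-1, 2, n-2, ... is
# cyclically shifted to produce each participant's ordering.
def williams_rows(n):
    base, inc, dec = [0], 1, n - 1
    while len(base) < n:
        base.append(inc)
        inc += 1
        if len(base) < n:
            base.append(dec)
            dec -= 1
    return [[(c + i) % n for c in base] for i in range(n)]
```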
+
+
+
+Figure 2: Sample screenshot from the virtual environment with a radial graphical menu (spatial menu placement).
+
+### 3.2 Dependent Variables
+
+Our dependent variables were average task completion time (averaged over the 10 trials of each condition), error rate, number of target re-entries, and post-questionnaire responses. We automatically recorded the dependent variables throughout the whole study session and stored the data in a text file. Task completion time (TCT) was measured as the time from the moment the system displayed a message to the moment the user selected a menu item. Error rate (ERR) was recorded every time the user pressed a menu item different from the one the system message requested (the percentage of wrong selections for a given condition). Number of target re-entries (TRE) was measured as the number of times the pointer left the volume of the target menu item and then re-entered it.
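For concreteness, the three dependent variables can be reduced from a per-trial log as follows. The field names in this sketch are our own illustration, not the study's actual logging format.

```python
# Hypothetical reduction (ours) of a per-condition trial log into the three
# dependent variables defined above: mean TCT, ERR as a percentage of wrong
# selections, and TRE as the total number of target re-entries.
def summarize(trials):
    """trials: list of dicts with 'time_s', 'correct' (bool), 're_entries'."""
    n = len(trials)
    return {
        "TCT": sum(t["time_s"] for t in trials) / n,
        "ERR": 100.0 * sum(not t["correct"] for t in trials) / n,
        "TRE": sum(t["re_entries"] for t in trials),
    }
```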
+
+### 3.3 Study Hypotheses
+
+Based on the related work [7, 19, 21] and our pilot studies during the design of our experiment, we devised the following hypotheses for our study:
+
+- Hypothesis 1 (H1): Spatial graphical menus in conjunction with the radial shape and the ray-casting selection technique will let users perform the menu tasks faster, make fewer errors, and require fewer target re-entries than the other types of graphical menus in a virtual environment.
+
+- Hypothesis 2 (H2): Participants will prefer spatial graphical menus in conjunction with the radial shape and the ray-casting selection technique over the other types of graphical menus in a virtual environment.
+
+### 3.4 Tasks
+
+In the study, our participants were immersed in a virtual environment that portrayed a virtual library in which the user needed to select books. We had a total of 24 conditions in our study. Each condition consisted of 10 tasks. In these tasks, the participant was asked to select a graphical menu item using the given selection technique. We used 6 menu items for each type of the menu (we found this number appropriate for a menu), and all the menu items were labeled with numbers from 1 to 6 in our experiment. The menu items were shuffled for each condition to prevent a learning effect.
+
+For each menu task, the book number was randomly generated and automatically displayed in a system message (e.g., "Choose Book 4"). For each menu condition, the system message was placed near the center of the virtual table with books. The system message also included information about the current trial, menu placement, and selection technique that were automatically generated by the controller algorithm (to account for all conditions). In other words, once the participant performed a menu task, the system automatically generated and placed a new menu condition in the virtual environment and displayed that information to the participant with a book number in the system message. The participant had a 2-second break between the menu item selection tasks.
+
+For the selection techniques, the participant used a controller device (with a ray cast), head movement (with head tracking), or eye tracking to point at a menu element; they then pressed the controller trigger button to select the corresponding menu item. Furthermore, for each selection technique, a bullet mark was highlighted (e.g., at the end of the ray cast) once the participant pointed toward the graphical menu, so the participant could easily select menu items with all the aforementioned techniques. The laser pointer with a bullet mark was only displayed for the ray-casting selection technique; for the eye gaze and head selection techniques, only a bullet mark was visible.
+
+The participant was informed whether a selection was right or wrong by a sound alert. If the selection was right, the participant could proceed to the next menu task. This process continued until all the menu combinations were completed.
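One of our dependent variables is the number of target re-entries. This chunk does not spell out its exact operationalization, but the standard definition (how often the pointer re-enters the target item after its first entry) can be sketched as follows (an illustrative Python snippet, not the study software):

```python
def count_target_reentries(on_target):
    """Count re-entries into the target after the first entry.

    `on_target` is a per-frame boolean stream indicating whether the
    pointer (ray, head cursor, or gaze cursor) is inside the target
    menu item. Each False -> True transition is one entry; the first
    entry is not counted as a re-entry.
    """
    entries = 0
    prev = False
    for inside in on_target:
        if inside and not prev:
            entries += 1
        prev = inside
    return max(0, entries - 1)

# Pointer enters three times -> two re-entries
print(count_target_reentries([False, True, True, False, True, False, True]))  # → 2
```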
+
+### 3.5 Participants and Apparatus
+
+We recruited 24 participants (20 males and 4 females) between the ages of 18 and 32 ($\mu = 21.54$, $\sigma = 4.06$), of which two were left-handed and one was ambidextrous. 16 participants had a high school diploma, 2 had an associate degree, and 6 had a bachelor's degree as their highest completed level of education. 11 participants identified their ethnicity as White/Caucasian, 6 were Hispanic/Latino, 5 were Asian/Pacific Islander, and 2 were Multiethnic/Other. Furthermore, 11 participants specified that they wore glasses or contacts. A Likert scale from 1 to 7, with 1 representing little experience or familiarity and 7 representing great experience or familiarity, was used in a demographic survey to measure participants' experience with VR ($\mu = 4.46$, $\sigma = 1.63$), familiarity with game controllers ($\mu = 5.75$, $\sigma = 1.66$), eye-tracking technology ($\mu = 3.67$, $\sigma = 1.84$), and video games ($\mu = 5.42$, $\sigma = 1.6$).
+
+The experiment duration ranged from 30 to 45 minutes, and all participants were paid \$10 for their time. The experiment setup consisted of a 55" Sony HDTV for the experimenter to view, a Tobii HTC Vive headset (a retrofitted version of the HTC Vive headset with complete eye-tracking integration from Tobii Pro), Vive controllers, and a Vive tracker attached to a TrackBelt band to track the position of the user's waist. We used the Unity game engine for implementing all the graphical menus, in conjunction with the Tobii XR SDK${}^{1}$ for integrating eye tracking for the eye gaze selection technique and Final IK${}^{2}$ for tracking the full body of the user and placing the graphical menus on the user's body. The software ran on a Windows 10 desktop computer in a lab setting, equipped with an Intel Core i7-4790K CPU, an NVIDIA GeForce GTX 1080 GPU, and 16 GB of RAM.
+
+---
+
+${}^{1}$ https://vr.tobii.com/developer/
+
+---
+
+Table 1: Post-questionnaire. Questions 1-7 asked participants to rate each placement menu (spatial, arm, hand, waist) on a 7-point Likert scale, questions 8 and 9 were multiple-choice, and question 10 was open-ended.
+
+| # | Question |
+| --- | --- |
+| Q1 | To what extent did you like the placement menus? |
+| Q2 | How mentally demanding were the placement menus? |
+| Q3 | How physically demanding were the placement menus? |
+| Q4 | How successfully were you able to choose the menu items you were asked to select? |
+| Q5 | Did you feel that you were trying your best? |
+| Q6 | To what extent did you feel frustrated using the placement menus? |
+| Q7 | To what extent did you feel that the placement menus were hard to use? |
+| Q8 | Which shape of menu would you prefer for the menu placements in VR? Linear, radial, or all equally? |
+| Q9 | Which selection technique would you prefer for the menu placements? Controller, head, eye gaze, or all equally? |
+| Q10 | What are your further comments on your experience with the graphical menus in VR? |
+
+### 3.6 Study Procedure
+
+The study started with the participant seated in front of the TV display and the experimenter seated to the side. Participants were given a consent form that explained the study procedure and the participant's rights and responsibilities. They were then given a demographic survey that collected general information about the participant's age, gender, ethnicity, dexterity, familiarity with virtual reality, game controllers, and eye-tracking technology, and how often they played video games. After that, the participant was guided on how to position the headset and adjust the interpupillary distance (to align the lenses with the distance between the participant's pupils) for the best visual experience, followed by a quick 5-point calibration of the eye tracker.
+
+During the session, participants remained seated (they did not stand or walk around) with the headset on, a Vive tracker attached to their waist, and two controllers in their hands for selecting the menu items. The participants' limbs rested at their sides; participants were asked to lean comfortably against the back of the chair in a relaxed position, and their limbs were always in the necessary starting position to trigger the menus. The positions of the arm and hand menu placements (to the participant's left or right), as well as the trigger selection method (right or left controller), were set based on the participant's dexterity (left- or right-handed) to make sure the participant felt comfortable while completing the menu tasks.
+
+Next, the participant conducted a training session of 5-10 minutes in order to get familiar with the virtual environment and the different types of graphical menus including menu placements and selection techniques. During the training session, each menu combination was completed in 5 trials. The system displayed a message with the number of the book from 1 to 6 selected randomly. The participant needed to select the corresponding item from the menu.
+
+Once the training session was completed and the participant felt comfortable with the selection techniques and menu placements, the participant started the study session to further evaluate various combinations of the graphical menus. At the end, the participant filled out a post-questionnaire (Table 1) using a 7-point Likert scale (e.g., from 1 or "not at all mentally demanding" to 7 or "extremely mentally demanding") for each placement menu (spatial, arm, hand, waist) and ranked them by overall preference, mental and physical demand, frustration, and ease of use. Additionally, we asked the participant to select preferred menu shapes and selection techniques for each placement menu and to leave additional comments on their experience with the body-referenced graphical menus.
+
+## 4 RESULTS
+
+### 4.1 Quantitative Results
+
+We used a repeated measures ANOVA for each dependent variable. For significant ANOVA results, we performed pairwise paired-sample t-tests to identify which specific conditions differed, using Holm's sequential Bonferroni adjustment to correct for type I errors [14]. Table 2 shows the results of the repeated measures ANOVA analysis.
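Holm's sequential (step-down) Bonferroni procedure used for the post-hoc comparisons can be sketched as follows (a minimal Python illustration, not the authors' analysis code):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's sequential Bonferroni procedure.

    Sort p-values ascending; the i-th smallest (0-indexed) is tested
    against alpha / (m - i). Once one test fails, all larger p-values
    are also declared non-significant. Returns rejection flags in the
    original order of `pvals`.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-significant test
    return reject

# e.g. three hypothetical pairwise t-test p-values
print(holm_bonferroni([0.001, 0.04, 0.03]))  # → [True, False, False]
```

Note that 0.03 would pass an unadjusted .05 threshold but fails the Holm-adjusted threshold of .05/2 = .025, which is exactly the type I error control the procedure provides.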
+
+Table 2: Repeated measures ANOVA results for comparing menu placement, shape, and selection technique.
+
+| Source | Task Completion Time | Error Rates | Number of Target Re-Entries |
+| --- | --- | --- | --- |
+| Placement | $F_{3,69} = 13.735$, $p < .0005$ | $F_{3,69} = 3.41$, $p < .05$ | $F_{3,69} = 0.393$, $p = .758$ |
+| Shape | $F_{1,23} = 1.078$, $p = .31$ | $F_{1,23} = 2.448$, $p = .131$ | $F_{1,23} = 2.217$, $p = .15$ |
+| Selection | $F_{2,46} = 6.562$, $p < .005$ | $F_{2,46} = 9.741$, $p < .0005$ | $F_{2,46} = 39.075$, $p < .0005$ |
+| Placement $\times$ Shape | $F_{3,69} = 2.881$, $p < .05$ | $F_{3,69} = 2.065$, $p = .113$ | $F_{3,69} = 2.889$, $p < .05$ |
+| Shape $\times$ Selection | $F_{2,46} = 1.642$, $p = .205$ | $F_{2,46} = 0.443$, $p = .645$ | $F_{2,46} = 1.947$, $p = .154$ |
+| Placement $\times$ Selection | $F_{6,138} = 2.47$, $p < .05$ | $F_{6,138} = 1.921$, $p = .082$ | $F_{6,138} = 2.485$, $p < .05$ |
+| Placement $\times$ Shape $\times$ Selection | $F_{6,138} = 1.706$, $p = .124$ | $F_{6,138} = 2.469$, $p < .05$ | $F_{6,138} = 0.459$, $p = .837$ |
+
+#### 4.1.1 Main Effect of Placement
+
+We found a significant difference in average task completion time ($F_{3,69} = 13.735$, $p < .0005$) and error rates ($F_{3,69} = 3.41$, $p < .05$) between menu placements.
+
+Task Completion Time: Participants took significantly longer to complete the menu tasks when the graphical menu was placed on the arm than when it was placed spatially ($t_{23} = -5.253$, $p < .0005$), on the waist ($t_{23} = 4.59$, $p < .0005$), or on the hand ($t_{23} = 4$, $p < .005$).
+
+---
+
+${}^{2}$ http://www.root-motion.com/final-ik.html
+
+---
+
+Error Rates: Participants made significantly more errors when using the hand placement graphical menus than the arm graphical menus ($t_{23} = -4.079$, $p < .005$).
+
+#### 4.1.2 Main Effect of Shape
+
+Menu shapes did not have any significant effect on task completion time ($F_{1,23} = 1.078$, $p = .31$), error rates ($F_{1,23} = 2.448$, $p = .131$), or number of target re-entries ($F_{1,23} = 2.217$, $p = .15$).
+
+#### 4.1.3 Main Effect of Selection Technique
+
+We found a significant difference in average task completion time ($F_{2,46} = 6.562$, $p < .005$), error rates ($F_{2,46} = 9.741$, $p < .0005$), and number of target re-entries ($F_{2,46} = 39.075$, $p < .0005$) between selection techniques.
+
+Task Completion Time: Further, we found that the head selection technique took significantly more time to complete a menu task across different placements than the ray-casting selection technique ($t_{23} = -7.238$, $p < .0005$).
+
+Error Rates: Participants made significantly more errors when using the eye gaze selection technique than the other selection techniques, such as the head ($t_{23} = -3.868$, $p < .005$) or ray-casting ($t_{23} = -2.219$, $p < .005$) techniques. Likewise, participants made significantly more errors using the ray-casting selection technique than using the head selection technique ($t_{23} = 3.391$, $p < .05$).
+
+Number of Target Re-Entries: The eye gaze selection technique had a significantly higher number of target re-entries than the ray-casting ($t_{23} = -5.864$, $p < .0005$) or head ($t_{23} = -6.688$, $p < .0005$) selection techniques. Also, we found that ray-casting had a significantly higher number of target re-entries than the head selection technique ($t_{23} = 3.008$, $p < .005$).
+
+#### 4.1.4 Interaction Effect of Placement $\times$ Shape
+
+We found significant differences in average task completion time ($F_{3,69} = 2.881$, $p < .05$) and number of target re-entries ($F_{3,69} = 2.889$, $p < .05$) between menu placements and shapes.
+
+Task Completion Time: We found that the arm placement menu in conjunction with the linear shape took significantly more time to complete a menu task than the linear hand ($t_{23} = 4.994$, $p < .0005$), spatial ($t_{23} = 4.558$, $p < .0005$), and waist ($t_{23} = 3.573$, $p < .01$) menus and the radial spatial ($t_{23} = 5.481$, $p < .0005$) and waist ($t_{23} = 4.533$, $p < .0005$) graphical menus. Additionally, the arm menu placement with the radial shape took more time than the linear spatial ($t_{23} = 3.787$, $p < .01$), radial spatial ($t_{23} = 3.494$, $p < .01$), and radial waist ($t_{23} = 3.476$, $p < .01$) menus.
+
+Number of Target Re-Entries: The linear arm menu had a significantly higher number of target re-entries than the radial arm placement menu ($t_{23} = 3.726$, $p < .01$).
+
+#### 4.1.5 Interaction Effect of Shape $\times$ Selection
+
+We did not find any significant effect on task completion time ($F_{2,46} = 1.642$, $p = .205$), error rates ($F_{2,46} = 0.443$, $p = .645$), or number of target re-entries ($F_{2,46} = 1.947$, $p = .154$) between menu shapes and selection techniques.
+
+#### 4.1.6 Interaction Effect of Placement $\times$ Selection
+
+We found significant differences in average task completion time ($F_{6,138} = 2.47$, $p < .05$) and number of target re-entries ($F_{6,138} = 2.485$, $p < .05$) between menu placements and selection techniques.
+
+Task Completion Time: For the hand placement menus, we found that participants took significantly more time to complete a menu task using the head selection technique than ray-casting ($t_{23} = -6.06$, $p < .0005$). For the spatial ($t_{23} = -5.102$, $p < .0005$) and waist ($t_{23} = -4.648$, $p < .0005$) menus, the head selection technique again performed worse than the ray-casting technique.
+
+Number of Target Re-Entries: We noticed that eye gaze selection had a higher number of target re-entries for each placement. For example, the arm ($t_{23} = -5.383$, $p < .0005$), waist ($t_{23} = -6.263$, $p < .0005$), and spatial ($t_{23} = -4.045$, $p < .01$) eye gaze menus had significantly more re-entries than the arm, waist, and spatial menus with the head selection technique. Furthermore, the hand ($t_{23} = -7.233$, $p < .0005$), waist ($t_{23} = -6.042$, $p < .0005$), and spatial ($t_{23} = -3.82$, $p < .01$) eye gaze menus performed worse in terms of re-entries than the hand, waist, and spatial ray-casting graphical menus.
+
+#### 4.1.7 Interaction Effect of Placement $\times$ Shape $\times$ Selection
+
+We found that menu placements in conjunction with menu shapes and selection techniques had a significant effect on error rates ($F_{6,138} = 2.469$, $p < .05$). Additionally, based on our hypotheses, we investigated further to see whether the spatial graphical menus in conjunction with the radial shape and the ray-casting selection technique were indeed better than the other combinations of graphical menus in terms of error rates. We found no significant difference in error rates between the spatial radial ray-casting menu and the other combinations of graphical menus.
+
+### 4.2 Qualitative Results
+
+We performed chi-squared tests on the gathered post-questionnaire data about preferred menu shapes and selection techniques. We did not find any significant difference for the spatial and arm menus in terms of menu shapes. However, we found significant differences for the hand ($\chi^2(2, N = 24) = 12.5$, $p < .05$) and waist ($\chi^2(2, N = 24) = 7.605$, $p < .05$) menus. Moreover, 12 participants thought that all shapes were equivalent for the spatial graphical menus, 9 and 8 participants preferred the linear and radial shapes respectively for the arm graphical menus, 14 participants preferred the radial shape for the hand graphical menus, and 13 participants thought that the linear shape was a good fit for the waist menu (Figure 3).
+
+
+
+Figure 3: Menu shape preference for each placement menu.
+
+For the selection techniques, we found significant differences for the spatial ($\chi^2(6, N = 24) = 22.66$, $p < .05$), arm ($\chi^2(6, N = 24) = 19.62$, $p < .05$), and waist ($\chi^2(6, N = 24) = 19.12$, $p < .05$) menu placements. Also, participants thought that all selection techniques were equivalent for the spatial graphical menus, eye gaze was a good fit for the arm menus, eye gaze or ray-casting selection techniques were overall better for the hand menu, and eye gaze was the favorite selection technique for the waist menu (Figure 4). The spatial graphical menu was ranked as the overall best menu placement and the arm graphical menu as the worst placement. The spatial graphical menu was also ranked best in terms of ease of use, mental and physical demand, and selection rates (Figure 5).
+
+
+
+Figure 4: Selection technique preference for each placement menu.
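The chi-squared goodness-of-fit tests above compare the observed preference counts against a uniform expected distribution. A minimal Python sketch (with hypothetical counts, not the study's raw data) of the statistic is:

```python
def chi2_goodness_of_fit(observed):
    """Chi-squared statistic against a uniform expected distribution."""
    n = sum(observed)
    k = len(observed)
    expected = n / k  # uniform null: each option equally preferred
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical counts: 24 participants choosing among linear, radial,
# or "all equal" for one placement menu (expected = 8 each under the null).
stat = chi2_goodness_of_fit([4, 14, 6])
print(stat)  # → 7.0
# df = k - 1 = 2; compare against the chi-squared critical value
# (5.99 at alpha = .05) to judge significance.
```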
+
+To analyze the Likert scale data, we used Friedman's test followed by post-hoc pairwise Wilcoxon signed-rank tests (Table 3). Average ratings for post-questionnaire questions 1 to 7 are summarized in Figure 5. From these results, we concluded the following:
+
+- Participants liked the spatial menus compared to the arm, hand, and waist graphical menus. The arm menu was the participant's least favorite.
+
+- Spatial and hand graphical menus were less mentally demanding than the arm and waist graphical menus.
+
+- Spatial menus were significantly less physically demanding compared to the hand, arm, and waist graphical menus. The arm menu was the most physically difficult.
+
+- Participants were able to successfully select the menu items from all the types of placement graphical menus.
+
+- The frustration level was higher for the arm graphical menu and significantly lower for the spatial graphical menu.
+
+- Participants thought that the arm graphical menu was significantly harder to use than the other types of placement menus.
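The Friedman statistic underlying Table 3 can be computed as sketched below (a minimal Python illustration with hypothetical ratings, not the study's data; tie handling uses average ranks, without the tie-correction term):

```python
def rank_row(row):
    """Rank-transform one participant's ratings (average ranks for ties)."""
    k = len(row)
    order = sorted(range(k), key=lambda j: row[j])
    ranks = [0.0] * k
    i = 0
    while i < k:
        j = i
        while j + 1 < k and row[order[j + 1]] == row[order[i]]:
            j += 1  # extend the run of tied values
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg_rank
        i = j + 1
    return ranks

def friedman_statistic(ratings):
    """Friedman chi-squared statistic for n subjects x k conditions."""
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        for j, r in enumerate(rank_row(row)):
            rank_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical Likert ratings (rows = participants; columns = spatial,
# arm, hand, waist). Perfectly consistent rankings give the maximum
# statistic n(k - 1); compare against chi2 with df = k - 1 = 3.
ratings = [[6, 2, 5, 3], [7, 1, 4, 3], [6, 2, 5, 4]]
print(friedman_statistic(ratings))  # ≈ 9.0
```

When the omnibus test is significant, pairwise Wilcoxon signed-rank tests (as in Table 3) localize which placements differ.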
+
+Based on the comments gathered from the post-questionnaire, we found the following emerging themes. 6 participants reported that the arm menu was in an uncomfortable position and was difficult and tiresome to use: "I found the arm slightly more physically demanding simply because it required me to move my neck downward a lot." -P13
+
+
+
+Figure 5: Post-questionnaire ratings for each placement menu.
+
+Also, for 3 of the participants, the arm menu felt shorter in a virtual environment: "It felt that the arm was shorter in VR causing the arm menu to feel like it was at my shoulder."-P18
+
+In the case of the ray-casting selection technique with the arm placement menu, the participant's primary hand had to be stretched every time they needed to select a menu item, which resulted in a bad experience: "The worst experience is with arm and controller because painting a laser on to arm is difficult." -P14
+
+For the hand graphical menu, the participants preferred this menu type for games in virtual environments: "I think that for VR games the hand placement would be the best since it's relatively small and easy to use." -P10
+
+Furthermore, the users put the waist graphical menu into the same category as the arm menu in terms of difficulty: "When using the menus placed on the waist, the user has to look down which may cause strain on neck after prolonged use." -P15
+
+Overall, the participants found the spatial menus easy to use, significantly less physically and mentally demanding in comparison with other graphical menus: "I preferred the spatial menus as they were the easiest to select using all three selection methods. It required the least amount of movement which I liked." -P3
+
+In addition, 14 participants reported eye gaze as their favorite selection technique: "The easiest and fastest method of selecting a menu was the eye gaze. I did not need to spend any time aiming my head or the controller to the menu item I wanted to select." -P15
+
+However, for 5 of them, the eye gaze had issues with accuracy: "Eye gaze sometimes was not accurate, I had to look at the above menu item to get the one below. It only applies to middle menu options." -P4
+
+Below, we discuss the implications of our findings and provide design recommendations for implementing body-referenced graphical menus in virtual environments.
+
+Table 3: Results on Friedman's test and post-hoc analysis for Likert scale data.
+
+| # | Friedman's Test | Spatial vs Arm | Spatial vs Hand | Spatial vs Waist | Arm vs Hand | Arm vs Waist | Hand vs Waist |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Q1 | $\chi^2(3) = 39.701$, $p < .0005$ | $Z = -4.215$, $p < .001$ | $Z = -3.58$, $p < .001$ | $Z = -4.247$, $p < .001$ | $Z = -2.728$, $p < .01$ | $Z = -0.23$, $p = .818$ | $Z = -2.183$, $p < .05$ |
+| Q2 | $\chi^2(3) = 24.019$, $p < .0005$ | $Z = -3.53$, $p < .001$ | $Z = -2.922$, $p < .01$ | $Z = -3.367$, $p < .01$ | $Z = -2.402$, $p < .05$ | $Z = -1.93$, $p = .054$ | $Z = -0.956$, $p = .339$ |
+| Q3 | $\chi^2(3) = 44.005$, $p < .0005$ | $Z = -4.215$, $p < .001$ | $Z = -3.95$, $p < .001$ | $Z = -3.841$, $p < .001$ | $Z = -3.292$, $p < .01$ | $Z = -2.372$, $p < .05$ | $Z = -0.951$, $p = .341$ |
+| Q4 | $\chi^2(3) = 28.253$, $p < .0005$ | $Z = -3.863$, $p < .001$ | $Z = -3.093$, $p < .01$ | $Z = -3.211$, $p < .01$ | $Z = -2.043$, $p < .05$ | $Z = -1.919$, $p = .055$ | $Z = -0.106$, $p = .915$ |
+| Q5 | $\chi^2(3) = 3.48$, $p = .323$ | $Z = -1.663$, $p = .102$ | $Z = -1.511$, $p = .131$ | $Z = -1.414$, $p = .157$ | $Z = -0.368$, $p = .713$ | $Z = -0.552$, $p = .581$ | $Z = -0.184$, $p = .854$ |
+| Q6 | $\chi^2(3) = 35.645$, $p < .0005$ | $Z = -4.035$, $p < .001$ | $Z = -3.46$, $p < .01$ | $Z = -3.552$, $p < .001$ | $Z = -3.031$, $p < .01$ | $Z = -3.142$, $p < .01$ | $Z = -0.15$, $p = .881$ |
+| Q7 | $\chi^2(3) = 43.882$, $p < .0005$ | $Z = -4.121$, $p < .001$ | $Z = -3.562$, $p < .001$ | $Z = -3.777$, $p < .001$ | $Z = -3.198$, $p < .01$ | $Z = -3.283$, $p < .01$ | $Z = -0.655$, $p = .513$ |
+
+## 5 Discussion
+
+### 5.1 Menu Placements
+
+Placing a Graphical Menu on an Arm. Our experiment indicates that the arm menu placement took significantly longer to complete the menu tasks than the spatial, hand, and waist placement menus. This is primarily because the user needed to adjust their arm position in order to select the corresponding menu items, while the system message was in a fixed position located farther away than, for example, the spatial menu. Additionally, the user needed to turn their head up and down to look at the system message that showed them what book to choose from the menu. Overall, the arm graphical menu was the least favored among participants. The participants found this type of menu hard to use (physically and mentally), were highly frustrated while completing the menu tasks, and felt that the menu was in an uncomfortable position. This is primarily because we used an off-the-shelf HTC Vive implementation and tracked the body using two controllers and one additional tracker attached to the participant's waist, without any additional custom hardware. On one hand, this approach reflects real-world usage; on the other hand, it does not provide especially high tracking accuracy. However, we foresee that better full-body tracking accuracy will make the body placements feel more intuitive and natural. For example, Caserman et al. [8] presented an accurate, low-latency body tracking approach using a Vive headset and trackers that can be incorporated by VR developers "to create an immersive VR experience, by animating the motions of the avatar as smoothly, rapidly and as accurately as possible."
+
+For the arm menu with a linear shape, the participants reported that it was hard to reach the menu items placed closer to the elbow. Our quantitative results also indicate that the linear and radial shapes of the arm placement menus are significantly slower than the linear or radial spatial graphical menus. The majority of the participants noted that the arm menu in conjunction with the head or ray-casting input felt awkward and difficult. In the case of the head input, the participants had to turn their heads up and down repeatedly and keep adjusting their arm. Overall, the participants concluded that the arm placement felt most intuitive when combined with a radial or linear shape and the eye gaze selection technique. Interestingly, even though the participants preferred eye gaze for the arm placement, this selection technique required more target re-entries for completing menu tasks than the head selection technique. Thus, we suggest implementing an arm placement menu with any menu shape and the eye gaze selection technique, as it provides a more natural and physically easier interaction. Additionally, for the arm graphical menu, we strongly recommend keeping interactions short, because prolonged use causes arm strain, especially when the arm has to be held up.
+
+Attaching a Graphical Menu to a Waist. We found that the waist menu was significantly faster than the arm graphical menu. However, the participants found this menu physically demanding and difficult to use. Also, the majority of the participants reported that the waist menu in conjunction with the head input would cause neck strain. Therefore, we highly recommend placing the system messages near the waist graphical menu and avoiding prolonged use. For the waist menus, the eye gaze selection technique usually had a higher number of target re-entries than the head or ray-casting techniques, and the head selection technique had a higher average completion time than ray-casting. We find that the best interaction technique for the waist menus is the linear shape (based on the participants' preference) with the ray-casting selection technique.
+
+Placing a Graphical Menu on a Hand. The hand menu was the participants' second favorite graphical menu. We found that task completion time for the hand menu was significantly faster than for the arm menu, but the hand menu was also more prone to errors than the arm placement menu. Additionally, we found that the head selection technique takes more time for the hand placement menu than the ray-casting technique. Further, eye gaze had significantly more target re-entries than ray-casting. Even though 4 of the participants reported the hand graphical menu as their favorite, others found it somewhat hard and tricky to use. We believe this is primarily because the participants had to hold up their hand and adjust its position in order to choose the menu items. Overall, based on the feedback and results from the participants, we suggest using the hand menu in conjunction with a radial shape (as it was the most favored by the participants and did not raise issues on the quantitative metrics) and the ray-casting selection technique.
+
+Placing a Graphical Menu in the Virtual World. Overall, the spatial graphical menu was the most favored among participants. In terms of speed, however, the spatial menu was significantly faster only in comparison with the arm menu. The users noted that the spatial menu would be better to use with a radial shape and the ray-casting or eye gaze selection techniques. Even though this placement was the participants' overall best graphical menu to use, for the spatial menus we found that the head selection took more time to complete a menu task and the eye gaze menu had more target re-entries than ray-casting. Therefore, we suggest using the spatial graphical menu with ray-casting and any menu shape.
+
+### 5.2 Menu Selection Techniques and Shapes
+
+For the selection techniques, we found that participants made significantly more errors when selecting menu items with eye gaze. Likewise, the number of target re-entries was significantly higher for eye gaze than for the ray-casting or head selection techniques. This is primarily because eye-tracking technology is highly sensitive to eye movement, causing users to accidentally select the wrong menu items and to leave and re-enter the target menu item significantly more often. In the future, we foresee better eye-tracking accuracy that will make the eye gaze selection technique more intuitive and accurate to use [23].
+
+The ray-casting selection technique was faster than the head selection technique. However, we found that the participants made more errors when selecting the menu items with ray-casting than with the head selection. Likewise, the number of target re-entries was higher in the case of ray-casting than with the head selection technique. This is primarily because the ray-casting input does not require the user to precisely select the menu items and adjust the head position (which makes the head selection more time-consuming). Overall, ray-casting felt intuitive and easy to use for the participants. Moreover, the laser pointer of ray-casting gave the user additional control over the graphical menus.
+
+The head selection technique was the participants' least favorite. We found that the head input took more time than the ray-casting selection technique because each participant had to adjust their head movement in order to select the menu item accurately and precisely. This is also the reason why the head input was less prone to errors and target re-entries. The participants reported that the head selection did not feel like a good fit for any of the graphical menus.
+
+Overall, menu shapes did not matter significantly to the participants. Also, we did not find any significant difference in task completion time, error rates, or number of target re-entries, which resonates with a finding of Santos et al. [22] suggesting that the user experience is not greatly affected by linear or radial menu shapes.
+
+Based on the results of our experiment, we were unable to accept the **H1** and **H2** hypotheses. We did not find any significant difference in task completion time, error rates, or number of target re-entries for the spatial graphical menu with the radial shape and the ray-casting selection technique in comparison with the other graphical menus. Moreover, even though the participants preferred the more conventional spatial menu, shapes and selection techniques did not matter to them, as they selected both menu shapes (linear and radial) and all three selection techniques (ray-casting, head, and eye gaze) as preferred for the spatial menu.
+
+## 6 LIMITATIONS AND FUTURE WORK
+
+There are a few factors that could have affected our results. In particular, 2 participants had difficulty with the eye-tracking technology. We noticed that those participants wore thick lenses that made eye tracking less accurate at recognizing eye movement. Additionally, for the arm placement menu, the users had to rotate their arm before being able to even see or select items from the menu, which made the arm graphical menu less comfortable. Ideally, the arm menu should be more flexible and intuitive to use with a better design approach. Also, the participants reported that they wanted their virtual arm to be longer. We believe that with better full-body tracking technology (e.g., using additional custom hardware), we can solve this issue and match the user's real arm with the virtual arm. Further, the system message was placed in the center near the virtual object in the VR environment and was not attached to a placement menu (e.g., near the arm placement menu). Ideally, an instruction panel should be placed close to each type of menu to homogenize the distance from the instructions to the target. We also did not implement additional pointer smoothing or padding between control elements, which could have helped participants make fewer errors. Moreover, the text was sometimes unclear to read when it appeared on the participant's hand or arm. We think this is primarily due to the headset resolution, which can be solved with a high-resolution VR headset.
+
+Given the variety of criteria that need to be considered when implementing graphical menus in VEs, in the future it would be interesting to investigate how various menu hierarchy depths or sizes and other types of graphical menus (e.g., object-referenced or device-referenced) affect task completion time, error rates, number of target re-entries, and other quantitative and qualitative metrics. Additionally, even though Bowman et al. [5] specified that "body-centered menus also do not inherently support a hierarchy of menu items", it would be interesting to see how the depth and number of menu elements affect such menus. In general, our study was designed around performing menu tasks in a static context; in the future, we would like to investigate how body-referenced menus can be applied in more dynamic virtual environments.
+
+## 7 CONCLUSION
+
+We presented an in-depth systematic study evaluating body-referenced graphical menus in virtual environments in terms of different placements (spatial, arm, hand, and waist), menu shapes (linear and radial), and selection techniques (ray-casting with a controller device, head, and eye gaze). Our results show that the spatial, hand, and waist menus are significantly faster than the arm menus. Moreover, we found that the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques; however, we did not find any significant difference in task completion time, error rates, or number of target re-entries between the menu shapes. We found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one. We also provided design guidelines and recommendations for body-referenced graphical menus, including their preferred shapes and selection techniques.
+
+## ACKNOWLEDGMENTS
+
+This work is supported in part by NSF Award IIS-1638060 and Army RDECOM Award W911QX13C0052. We also thank the anonymous reviewers for their insightful feedback. We are further grateful to the Interactive Systems and User Experience Lab at UCF for their support.
+
+## REFERENCES
+
+[1] I. G. Angus and H. A. Sowizral, "Embedding the 2D interaction metaphor in a real 3D virtual environment," presented at the IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, San Jose, CA, 1995, pp. 282-293.
+
+[2] T. Azai, S. Ogawa, M. Otsuki, F. Shibata, and A. Kimura, "Selection and Manipulation Methods for a Menu Widget on the Human Forearm," in Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2017, pp. 357-360.
+
+[3] T. Azai, M. Otsuki, F. Shibata, and A. Kimura, "Open Palm Menu: A Virtual Menu Placed in Front of the Palm," in Proceedings of the 9th Augmented Human International Conference, New York, NY, USA, 2018, pp. 17:1-17:5.
+
+[4] T. Azai, S. Ushiro, J. Li, M. Otsuki, F. Shibata, and A. Kimura, "Tap-tap Menu: Body Touching for Virtual Interactive Menus," in Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA, 2018, pp. 57:1-57:2.
+
+[5] D. A. Bowman and C. A. Wingrave, "Design and evaluation of menu systems for immersive virtual environments," in Proceedings IEEE Virtual Reality 2001, 2001, pp. 149-156.
+
+[6] D. A. Bowman, C. A. Wingrave, J. M. Campbell, and V. Q. Ly, "Using Pinch Gloves(TM) for both Natural and Abstract Interaction Techniques in Virtual Environments," 2001.
+
+[7] J. Callahan, D. Hopkins, M. Weiser, and B. Shneiderman, "An empirical comparison of pie vs. linear menus," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Washington, D.C., USA, 1988, pp. 95-100.
+
+[8] P. Caserman, A. Garcia-Agundez, R. Konrad, S. Göbel, and R. Steinmetz, "Real-time body tracking in virtual reality using a Vive tracker," Virtual Real., Jun. 2019.
+
+[9] R. Dachselt and A. Hübner, "Three-dimensional menus: A survey and taxonomy," Comput. Graph., vol. 31, no. 1, pp. 53-65, Jan. 2007.
+
+[10] S. Feiner, B. Macintyre, and D. Seligmann, "Knowledge-based Augmented Reality," Commun ACM, vol. 36, no. 7, pp. 53-62, Jul. 1993.
+
+[11] S. Gebhardt, S. Pick, F. Leithold, B. Hentschel, and T. Kuhlen, "Extended Pie Menus for Immersive Virtual Environments," IEEE Trans. Vis. Comput. Graph., vol. 19, no. 4, pp. 644-651, Apr. 2013.
+
+[12] D. Gerber and D. Bechmann, "The spin menu: a menu system for virtual environments," in IEEE Proceedings. VR 2005. Virtual Reality, 2005., Bonn, Germany, 2005, pp. 271-272.
+
+[13] C. Hand, "A Survey of 3D Interaction Techniques," Comput. Graph. Forum, vol. 16, no. 5, pp. 269-281, 1997.
+
+[14] S. Holm, "A Simple Sequentially Rejective Multiple Test Procedure," Scand. J. Stat., vol. 6, no. 2, pp. 65-70, 1979.
+
+[15] R. H. Jacoby and S. R. Ellis, "Using virtual menus in a virtual environment," presented at the SPIE/IS&T 1992 Symposium on Electronic Imaging: Science and Technology, San Jose, CA, 1992, pp. 39-48.
+
+[16] J. J. LaViola Jr., E. Kruijff, R. P. McMahan, D. A. Bowman, and I. Poupyrev, 3D User Interfaces: Theory and Practice. Addison-Wesley Professional, 2017.
+
+[17] R. Komerska and C. Ware, "A study of haptic linear and pie menus in a 3D fish tank VR environment," in 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2004. HAPTICS '04. Proceedings., 2004, pp. 224-231.
+
+[18] M. R. Mine, F. P. Brooks Jr., and C. H. Sequin, "Moving Objects in Space: Exploiting Proprioception in Virtual-environment Interaction," in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1997, pp. 19-26.
+
+[19] P. Monteiro, H. Coelho, G. Gonçalves, M. Melo, and M. Bessa, "Comparison of Radial and Panel Menus in Virtual Reality," IEEE Access, vol. 7, pp. 116370-116379, 2019.
+
+[20] N. Kim, G. J. Kim, C.-M. Park, I. Lee, and S. H. Lim, "Multimodal menu presentation and selection in immersive virtual environments," in Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048), New Brunswick, NJ, USA, 2000, p. 281.
+
+[21] Y. Y. Qian and R. J. Teather, "The eyes don't have it: an empirical comparison of head-based and eye-based selection in virtual reality," in Proceedings of the 5th Symposium on Spatial User Interaction - SUI '17, Brighton, United Kingdom, 2017, pp. 91-98.
+
+[22] A. Santos, T. Zarraonandia, P. Díaz, and I. Aedo, "A Comparative Study of Menus in Virtual Reality Environments," in Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, New York, NY, USA, 2017, pp. 294-299.
+
+[23] L. E. Sibert and R. J. K. Jacob, "Evaluation of eye gaze interaction," in Proceedings of the SIGCHI conference on Human Factors in Computing Systems, The Hague, The Netherlands, 2000, pp. 281-288.
+
+[24] B.-S. Wang, X.-J. Wang, and L.-K. Gong, "The Construction of a Williams Design and Randomization in Cross-Over Clinical Trials Using SAS," J. Stat. Softw., vol. 29, no. Code Snippet 1, 2009.
+
+[25] W. R. Sherman and A. B. Craig, Understanding Virtual Reality, 2nd ed. [Online]. Available: https://www.elsevier.com/books/understanding-virtual-reality/sherman/978-0-12-800965-9. [Accessed: 21-Nov-2019].
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b0c814d2db7b8b31e5049d5a364999d9ae261903
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/k_qtvPF0QUJ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,344 @@
+§ EVALUATION OF BODY-REFERENCED GRAPHICAL MENUS IN VIRTUAL ENVIRONMENTS
+
+Irina Lediaeva* and Joseph J. LaViola Jr.**, University of Central Florida
+
+§ ABSTRACT
+
+Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, adopted 2D menus in VEs still lack a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.
+
+Keywords: 3D Menus; Menu Placements; Menu Selection Techniques; Menu Shapes; Virtual Reality
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality; Human-centered computing-Interaction design-Interaction design process and methods-User interface design
+
+§ 1 INTRODUCTION
+
+Graphical menus are an integral and essential component of user interfaces and have been widely used in 2D desktop applications. Given their wide popularity in desktop applications, graphical menus have also been integrated into virtual environments. Graphical menus in 3D user interfaces are adapted 2D menus that have proven to be a successful system control technique [16]. For example, Angus and Sowizral [1] developed a hand-held flat panel display that was embedded in a virtual environment and provided a familiar metaphor within the VE context for users who used graphical menus in desktop applications.
+
+However, once the adapted 2D menus are integrated into 3D space, there are also design challenges that need resolving, such as how best to reach a menu item in 3D space as well as the lack of tactile feedback [9]. Moreover, there are other considerations for designing and implementing graphical menus as system control techniques in 3D user interfaces, such as the positioning of graphical user interface elements in space [18], choosing an appropriate selection technique [13], the representation of the menu [20], and its overall structure [22].
+
+There are many varieties of graphical menus in virtual environments that have been extensively evaluated by researchers, including the TULIP menus [5], spin and ring menus [12], and various menu shapes and menu element sizes [4]. Moreover, Mine et al. [18] have explored body-referenced graphical menus, in which the menu items are fixed to the user's body (and not the head). Such body-referenced graphical menus have several advantages, such as providing "a physical real-world frame of reference, a more direct and precise sense of control, and an 'eyes off' interaction where the users do not have to constantly watch what they are doing." For example, Azai et al. [2,3,4] proposed a menu system that appears at various body parts, such as users' hands, arms, upper legs, and abdomen. However, we found that body-referenced graphical menus are an insufficiently investigated research topic. Such menus are still emerging in the field of virtual environments, and there is a lack of usability studies that would provide design guidelines for developers to follow. This work attempts to close this gap and to provide a comprehensive evaluation of body-referenced graphical menus in virtual environments.
+
+Therefore, the objective of this research is to conduct a comparative study to provide insights and gain a deeper understanding of how to design body-referenced graphical menus for virtual environments that support fast completion times, minimize errors, and feel natural [5]. During the study, we explored and compared various menu placements, such as placing the menu on the hand [3] or arm [2], attaching it to the participant's waist [4], and displaying it in the virtual environment (i.e., in a fixed position in the world) [22]. We examined two menu shapes (linear and radial) [22] and three menu selection techniques (ray-casting [15], head-tracking [21], and eye-tracking [21,23]). Since the response time and ease of use of a graphical menu can significantly affect user experience, we gathered typical metrics such as average selection time, the number of wrong selections (error rate), the number of target re-entries, overall efficiency, user satisfaction, and comfort [9].
+
+To evaluate and compare combinations of menu placements, shapes, and selection techniques, we conducted a user study with 24 participants. We make the following contribution to research on body-referenced graphical menus in virtual environments: design recommendations and insights on user preferences for body-referenced graphical menus, including recommendations for various menu shapes and selection techniques.
+
+* irinalediaeva@knights.ucf.edu
+
+** jjl@cs.ucf.edu
+
+
+Figure 1: Sample screenshots from the virtual environment. We compared spatial, arm, hand, and waist menu placements (left to right).
+
+§ 2 RELATED WORK
+
+Graphical menus in a virtual environment can be classified based on various criteria, such as menu technique (adapted 2D menus, 1-DOF menus, 3D widgets), placement (world-referenced, head-referenced, body-referenced, etc.), shape (linear, radial, pie menus, etc.), and selection (pointer-directed, gaze-directed, device-directed, etc.). Therefore, the following criteria should be considered when designing graphical menus in a virtual environment: placement, representation and structure (e.g., the spatial layout of the menu, the number of menu items, their size, and the distance between menu items), and selection [16]. Thus, we situate our study within three main streams of research: 1) menu placements, 2) menu shapes, and 3) menu selection techniques.
+
+§ 2.1 MENU PLACEMENTS
+
+The placement of the menu influences the user's ability to access it (good placement provides a spatial reference) and the amount of menu occlusion in the environment [16]. Feiner et al. [10] first addressed the placement considerations and created a menu taxonomy where the graphical menus can be placed at a fixed location in the virtual world (world-referenced), connected to a virtual object (object-referenced), attached to the user's head (head-referenced) or the rest of the body (body-referenced), or placed in reference to a physical object (device-referenced).
+
+Menu systems that employ the user's body as a graphical menu have been proposed by multiple researchers [2,3,4,18]. For example, Azai et al. proposed a method of displaying the graphical menu in augmented reality on the user's forearm [2] and a menu system that appears at various body parts including not only the hands or arms but also the upper legs and abdomen [4]. The researchers found that placing the graphical menu on the body enables the user to operate the menus comfortably and freely [4].
+
+Some research has been done on how to use menus in ways that depart from the typical 2D desktop metaphor. For example, Bowman et al. [6] evaluated the usefulness of letting the participant's fingers perform menu item selection using finger-contact gloves (Pinch Gloves), where the menu items were assigned to different fingers. Another way menus can be connected to the body is through the body referential zones that are part of an "at-hand" interface [25]. For example, a tool belt surrounding the user may allow them to select objects or options by reaching to a certain location and making a selection. In this study, we particularly focus on investigating body-referenced graphical menus (arm, hand, and waist menus) and comparing them with a conventional world-referenced spatial menu.
+
+§ 2.2 MENU SHAPES
+
+The items of the graphical menu can be organized in different ways, adopting a radial shape, where the menu items have a circular form, or a linear shape, where the menu items have a rectangular form, among other possible configurations (e.g., pie and ring menus). Researchers have also explored various layouts or shapes of graphical menus in various environments. For example, Callahan et al. [7] found that menu items in a circular layout perform better in terms of selection time compared to a linear layout in a 2D plane. Similarly, Komerska and Ware [17] found that selection using a pie menu in a 3D haptic-enhanced environment is considerably faster and more accurate than with linear menus. Additionally, Gebhardt et al. [11] presented a formal evaluation of hierarchical pie menus in a virtual environment; their results indicated high performance and efficient design of this menu type in virtual reality applications. Monteiro et al. [19] found that even though linear and radial menus performed well, users still preferred the traditional linear menu type and the fixed wall placement of the menu. In this paper, we focus on two frequently and widely used types of menus in a virtual environment: linear and radial [22].
+
+§ 2.3 MENU SELECTION TECHNIQUES
+
+A menu selection is another form of interaction that is derived from desktop 2D user interfaces. As with desktop menu systems, the user is presented with a list of choices from which they need to select a corresponding menu item. A common selection technique that emulates 2D user interface techniques is where a direction selector or a controller is used to point at, scroll through, highlight, and "click" or trigger a controller button to select various menu items [25].
+
+Researchers have proposed different selection techniques for selecting menu items: ray-casting, head-, eye-, and gesture-based selection techniques. Ray-casting is one of the most well-known menu selection techniques, where a ray is projected from the hand position to the plane of the graphical menu [15]. Further, Qian et al. [21] investigated eye-based and head-based selection techniques and concluded that the eye-only selection offered the worst performance in terms of error rate and selection times.
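+The ray-casting technique described above reduces to a ray-plane intersection followed by a 2D point-in-item test. Below is an illustrative Python sketch (the study itself was implemented in Unity; all function names, parameters, and the linear-menu layout are our assumptions, not the authors' code):

```python
def dot(a, b):
    """Dot product of two 3-vectors given as sequences."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Point where the selection ray hits the menu plane, or None if the ray
    is parallel to the plane or points away from it."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

def hit_item_index(hit, menu_origin, right, up, item_w, item_h, n_items):
    """Map a 3D hit point to a 0-based linear-menu item index, or None if the
    hit falls outside the menu's column of n_items stacked rectangles."""
    local = [h - m for h, m in zip(hit, menu_origin)]
    x, y = dot(local, right), dot(local, up)
    if not (0.0 <= x <= item_w) or y < 0.0:
        return None
    idx = int(y // item_h)
    return idx if idx < n_items else None
```

+Head- and eye-gaze selection differ only in where `origin` and `direction` come from (head pose or gaze ray instead of the controller pose), which is why the same highlight-then-trigger flow works for all three techniques.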
+
+To the best of our knowledge, this study is the first to systematically explore graphical menus in a virtual environment with respect to menu placements, shapes, and selection techniques. The main contribution of this study is a set of design guidelines for developing body-referenced graphical menus in virtual environments.
+
+§ 3 USER STUDY
+
+We conducted a user study to evaluate a variety of graphical menu placements (spatial, arm, hand, waist) in a virtual environment (Figures 1-2), as well as menu shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). The following sections describe the design, tasks, and measurements.
+
+§ 3.1 STUDY DESIGN
+
+This within-subjects study consisted of 3 independent variables: menu placement (spatial, arm, hand, waist), menu shape (linear and radial), and menu selection technique (ray-casting with a controller device, head, and eye gaze). In total, we had $4 \times 2 \times 3 = 24$ conditions, and for each condition the participant conducted 10 trials, making a total of 240 selections per participant. The conditions were presented to each participant in a random order based on a Latin square for constructing "Williams designs" [24]. For each condition, users were asked to select 10 randomly generated items (one item at a time).
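+A balanced Latin square of this kind can be generated with the standard Williams construction for an even number of conditions (first row 0, 1, n-1, 2, n-2, ..., each subsequent row shifted by one). The sketch below illustrates that general construction; it is not the authors' code nor the SAS procedure of [24]:

```python
def williams_square(n):
    """Williams-design Latin square for an even n: every condition appears
    once per row and column, and each condition immediately precedes every
    other condition equally often (first-order carryover balance)."""
    assert n % 2 == 0, "this construction balances carryover only for even n"
    first, lo, hi = [0], 1, n - 1
    while len(first) < n:            # interleave ascending and descending runs
        first.append(lo); lo += 1
        if len(first) < n:
            first.append(hi); hi -= 1
    return [[(c + i) % n for c in first] for i in range(n)]
```

+With 24 conditions and 24 participants, row i of `williams_square(24)` would give participant i's condition order.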
+
+
+Figure 2: Sample screenshot from the virtual environment with a radial graphical menu (spatial menu placement).
+
+§ 3.2 DEPENDENT VARIABLES
+
+Our dependent variables were average task completion time (averaged over the 10 trials of each condition), error rates, number of target re-entries, and post-questionnaire responses. We automatically recorded our dependent variables throughout the whole study session and stored the data in a text file. Task completion time (TCT) was measured as the time from the moment the system displayed a message to the moment the user selected a menu item. Error rates (ERR) were recorded every time the user pressed a menu item different from the one the system message requested (the percentage of wrong selections for a given condition). The number of target re-entries (TRE) was measured as the number of times the pointer left the volume of the target menu item and then re-entered the target.
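+These three measures can be derived mechanically from per-trial event logs. A minimal sketch follows; the `Trial` record, its field names, and the reading of ERR as the percentage of trials with at least one wrong selection are our illustrative assumptions, not the authors' logging format:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    prompt_time: float   # seconds: system message displayed
    select_time: float   # seconds: requested item selected
    wrong_presses: int   # selections of a non-requested item
    re_entries: int      # times the pointer re-entered the target volume

def condition_metrics(trials):
    """Aggregate TCT, ERR, and TRE over the trials of one condition."""
    n = len(trials)
    tct = sum(t.select_time - t.prompt_time for t in trials) / n
    err = 100.0 * sum(1 for t in trials if t.wrong_presses > 0) / n
    tre = sum(t.re_entries for t in trials) / n
    return tct, err, tre
```

+In the study, each condition would contribute 10 such trials per participant.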
+
+§ 3.3 STUDY HYPOTHESES
+
+Based on the related work [7,19,21] and our pilot studies during the design of our experiment, we devised the following hypotheses for our study:
+
+ * Hypothesis 1 (H1): Spatial graphical menus in conjunction with the radial shape and the ray-casting selection technique will let users perform the menu tasks faster, make fewer errors, and require fewer target re-entries in comparison with the other types of graphical menus in a virtual environment.
+
+ * Hypothesis 2 (H2): Participants will prefer spatial graphical menus in conjunction with the radial shape and the ray-casting selection technique over the other types of graphical menus in a virtual environment.
+
+§ 3.4 TASKS
+
+In the study, our participants were immersed in a virtual environment that portrayed a virtual library in which the user needed to select books. We had a total of 24 conditions in our study, and each condition consisted of 10 tasks. In these tasks, the participant was asked to select a graphical menu item using the given selection technique. We used 6 menu items for each menu type (we found this number appropriate for a menu), and all the menu items were labeled with numbers from 1 to 6 in our experiment. The menu items were shuffled for each condition to prevent a learning effect.
+
+For each menu task, the book number was randomly generated and automatically displayed in a system message (e.g., "Choose Book 4"). For each menu condition, the system message was placed near the center of the virtual table with books. The system message also included information about the current trial, menu placement, and selection technique that were automatically generated by the controller algorithm (to account for all conditions). In other words, once the participant performed a menu task, the system automatically generated and placed a new menu condition in the virtual environment and displayed that information to the participant with a book number in the system message. The participant had a 2-second break between the menu item selection tasks.
+
+For the selection techniques, the participant used a controller device (with a ray cast), a head movement (with head-tracking), or eye-tracking to point at a menu element; then they needed to press the controller trigger button to select the corresponding menu item. Furthermore, for each selection technique, the participant could see a bullet mark that was highlighted (e.g., at the end of the ray cast) once the participant navigated toward the graphical menu. That way the participant could easily select the menu items using all the aforementioned selection techniques. The laser selection pointer with a bullet mark was only displayed for a ray cast selection technique. For the eye gaze and head selection techniques, only a bullet mark was visible.
+
+The participant was informed whether a selection was right or wrong by a sound alert. If the selection was right, the participant could proceed to the next menu task. This process continued until all the menu combinations were completed.
+
+§ 3.5 PARTICIPANTS AND APPARATUS
+
+We recruited 24 participants (20 males and 4 females) between the ages of 18 and 32 years old ($\mu = 21.54$, $\sigma = 4.06$), of which two participants were left-handed and one was ambidextrous. 16 participants had a high school diploma, 2 had an associate degree, and 6 had a bachelor's degree as their highest completed level of education. 11 participants identified their ethnicity as White/Caucasian, 6 were Hispanic/Latino, 5 were Asian/Pacific Islander, and 2 were Multiethnic/Other. Furthermore, 11 participants specified that they wore glasses or contacts. A Likert scale from 1 to 7, with 1 representing little experience or familiarity and 7 representing great experience or familiarity, was used in a demographic survey to measure participants' experience with VR ($\mu = 4.46$, $\sigma = 1.63$), familiarity with game controllers ($\mu = 5.75$, $\sigma = 1.66$), eye-tracking technology ($\mu = 3.67$, $\sigma = 1.84$), and video games ($\mu = 5.42$, $\sigma = 1.6$).
+
+The experiment duration ranged from 30 to 45 minutes, and all participants were paid \$10 for their time. The experiment setup consisted of a 55" Sony HDTV for the experimenter to view, a Tobii HTC Vive headset (a retrofitted version of the HTC Vive headset with complete eye-tracking integration from Tobii Pro), Vive controllers, and a Vive tracker attached to a TrackBelt band in order to track the position of the user's waist. We used the Unity game engine for implementing all the graphical menus, in conjunction with the Tobii XR SDK${}^{1}$ for integrating eye-tracking technology for the eye gaze selection technique and Final IK${}^{2}$ for tracking the full body of the user and placing the graphical menus on the user's body. The software was run on a Windows 10 desktop computer in a lab setting, equipped with an Intel Core i7-4790K CPU, an NVIDIA GeForce GTX 1080 GPU, and 16 GB of RAM.
+
+${}^{1}$ https://vr.tobii.com/developer/
+
+Table 1: Post-questionnaire. Questions 1-7 were asked to rate each placement menu (spatial, arm, hand, waist) on a 7-point Likert scale, questions 8 and 9 were multiple-choice questions, and question 10 was open-ended.
+
+Q1 To what extent did you like the placement menus?
+
+Q2 How mentally demanding were the placement menus?
+
+Q3 How physically demanding were the placement menus?
+
+Q4 How successfully were you able to choose the menu items you were asked to select?
+
+Q5 Did you feel that you were trying your best?
+
+Q6 To what extent did you feel frustrated using the placement menus?
+
+Q7 To what extent did you feel that the placement menus were hard to use?
+
+Q8 Which shape of menu would you prefer for the menu placements in VR? Linear, radial, or all equally?
+
+Q9 Which selection technique would you prefer for the menu placements? Controller, head, eye gaze, or all equally?
+
+Q10 What are your further comments on your experience with the graphical menus in VR?
+
+§ 3.6 STUDY PROCEDURE
+
+The study started with the participant seated in front of the TV display and the experimenter seated to the side. Participants were given a consent form that explained the study procedure and the participant's rights and responsibilities. They were then given a demographic survey that collected general information about the participant's age, gender, ethnicity, dexterity, familiarity with virtual reality, game controllers, and eye-tracking technology, and how often they played video games. After that, the participant was guided on how to position the headset and adjust the interpupillary distance (to align the lenses with the distance between the participant's pupils) for the best visual experience, followed by a quick 5-point calibration to adjust the eye tracker.
+
+During the session, participants remained seated (they did not stand or walk around) with the headset on, a Vive tracker attached to their waist, and two controllers in their hands for selecting the menu items. The participants' limbs rested at their sides; participants were asked to lean comfortably on the back of the chair and stay in a relaxed position, and their limbs were always in the starting position needed to trigger the menus. The position of the arm and hand menu placements was switched to either the left or right side of the participant, as was the trigger selection method (right or left controller), based on the participant's dexterity (left- or right-handed), in order to make sure the participant felt comfortable while completing the menu tasks.
+
+Next, the participant conducted a training session of 5-10 minutes in order to get familiar with the virtual environment and the different types of graphical menus including menu placements and selection techniques. During the training session, each menu combination was completed in 5 trials. The system displayed a message with the number of the book from 1 to 6 selected randomly. The participant needed to select the corresponding item from the menu.
+
+Once the training session was completed and the participant felt comfortable with the selection techniques and menu placements, the participant started the study session to further evaluate various combinations of the graphical menus. At the end, the participant filled out a post-questionnaire (Table 1) using a 7-point Likert scale (e.g., from 1 or "not at all mentally demanding" to 7 or "extremely mentally demanding") for each placement menu (spatial, arm, hand, waist) and ranked them based on overall preference, how mentally or physically demanding the placement menus were, frustration, and ease of use. Additionally, we asked the participant to select preferred menu shapes and selection techniques for each placement menu and to leave additional comments on their experience with the body-referenced graphical menus.
+
+§ 4 RESULTS
+
+§ 4.1 QUANTITATIVE RESULTS
+
+We used a repeated measures ANOVA for each dependent variable. For significant ANOVA results, we performed pairwise sample t-tests to determine which conditions differed significantly. We used Holm's sequential Bonferroni adjustment to correct for type I errors [14]. Table 2 shows the results of the repeated measures ANOVA analysis.
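+Holm's sequential (step-down) Bonferroni procedure [14] tests the k-th smallest of $m$ p-values against $\alpha/(m-k+1)$ and stops at the first non-rejection. A minimal illustrative sketch, not the authors' analysis code:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down procedure: test p-values in ascending order against
    increasingly lenient thresholds; stop at the first failure.
    Returns one reject/retain flag per input p-value, in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):        # rank 0 -> threshold alpha/m
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                           # all remaining hypotheses retained
    return reject
```

+In this study's setting, the correction would be applied to each family of pairwise t-test p-values that follows a significant ANOVA result.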
+
+Table 2: Repeated measures ANOVA results for comparing menu placement, shape, and selection technique.
+
+Source | Task Completion Time | Error Rates | Number of Target Re-Entries
+
+Placement | $F_{3,69} = 13.735$, $p < .0005$ | $F_{3,69} = 3.41$, $p < .05$ | $F_{3,69} = 0.393$, $p = .758$
+
+Shape | $F_{1,23} = 1.078$, $p = .31$ | $F_{1,23} = 2.448$, $p = .131$ | $F_{1,23} = 2.217$, $p = .15$
+
+Selection | $F_{2,46} = 6.562$, $p < .005$ | $F_{2,46} = 9.741$, $p < .0005$ | $F_{2,46} = 39.075$, $p < .0005$
+
+Placement $\times$ Shape | $F_{3,69} = 2.881$, $p < .05$ | $F_{3,69} = 2.065$, $p = .113$ | $F_{3,69} = 2.889$, $p < .05$
+
+Shape $\times$ Selection | $F_{2,46} = 1.642$, $p = .205$ | $F_{2,46} = 0.443$, $p = .645$ | $F_{2,46} = 1.947$, $p = .154$
+
+Placement $\times$ Selection | $F_{6,138} = 2.47$, $p < .05$ | $F_{6,138} = 1.921$, $p = .082$ | $F_{6,138} = 2.485$, $p < .05$
+
+Placement $\times$ Shape $\times$ Selection | $F_{6,138} = 1.706$, $p = .124$ | $F_{6,138} = 2.469$, $p < .05$ | $F_{6,138} = 0.459$, $p = .837$
+
+§ 4.1.1 MAIN EFFECT OF PLACEMENT
+
+We found a significant difference in average task completion time ($F_{3,69} = 13.735$, $p < .0005$) and error rates ($F_{3,69} = 3.41$, $p < .05$) between menu placements.
+
+Task Completion Time: Participants took significantly longer to complete the menu tasks when the graphical menu was placed on the arm than when it was placed spatially $(t_{23} = -5.253, p < .0005)$, on the waist $(t_{23} = 4.59, p < .0005)$, or on the hand $(t_{23} = 4, p < .005)$.
+
+${}^{2}$ http://www.root-motion.com/final-ik.html
+
+Error Rates: Participants made significantly more errors when using the hand placement graphical menus than the arm graphical menus $(t_{23} = -4.079, p < .005)$.
+
+#### 4.1.2 Main Effect of Shape
+
+Menu shapes did not have any significant effect on task completion time $(F_{1,23} = 1.078, p = .31)$, error rates $(F_{1,23} = 2.448, p = .131)$, or number of target re-entries $(F_{1,23} = 2.217, p = .15)$.
+
+#### 4.1.3 Main Effect of Selection Technique
+
+We found significant differences in average task completion time $(F_{2,46} = 6.562, p < .005)$, error rates $(F_{2,46} = 9.741, p < .0005)$, and number of target re-entries $(F_{2,46} = 39.075, p < .0005)$ between selection techniques.
+
+Task Completion Time: The head selection technique took significantly more time to complete a menu task across the different placements than the ray-casting selection technique $(t_{23} = -7.238, p < .0005)$.
+
+Error Rates: Participants made significantly more errors when using the eye gaze selection technique than the other selection techniques, such as the head $(t_{23} = -3.868, p < .005)$ or ray-casting $(t_{23} = -2.219, p < .005)$ techniques. Likewise, participants made significantly more errors using the ray-casting selection technique than using the head selection technique $(t_{23} = 3.391, p < .05)$.
+
+Number of Target Re-Entries: The eye gaze selection technique had a significantly higher number of target re-entries than the ray-casting $(t_{23} = -5.864, p < .0005)$ or head $(t_{23} = -6.688, p < .0005)$ selection techniques. Also, we found that ray-casting had a significantly higher number of target re-entries than the head selection technique $(t_{23} = 3.008, p < .005)$.
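
A target re-entry metric of this kind can be computed from a per-frame inside/outside-target stream. The sketch below is a hypothetical illustration (function name and data layout assumed), not the study's actual implementation:

```python
def count_reentries(inside_frames):
    """Count how many times the pointer re-enters the target.

    inside_frames: per-frame booleans, True while the cursor/ray/gaze
    is on the target menu item. The first entry is not a re-entry.
    """
    entries = 0
    was_inside = False
    for inside in inside_frames:
        if inside and not was_inside:  # rising edge: a new entry
            entries += 1
        was_inside = inside
    return max(0, entries - 1)  # re-entries exclude the first entry

# in, out, back in, stay, out, back in again -> 3 entries, 2 re-entries
print(count_reentries([True, False, True, True, False, True]))  # → 2
```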
+
+#### 4.1.4 Interaction Effect of Placement × Shape
+
+We found significant differences in average task completion time $(F_{3,69} = 2.881, p < .05)$ and number of target re-entries $(F_{3,69} = 2.889, p < .05)$ between menu placements and shapes.
+
+Task Completion Time: We found that the arm placement menu with the linear shape took significantly more time to complete a menu task than the linear hand $(t_{23} = 4.994, p < .0005)$, spatial $(t_{23} = 4.558, p < .0005)$, and waist $(t_{23} = 3.573, p < .01)$ menus, as well as the radial spatial $(t_{23} = 5.481, p < .0005)$ and waist $(t_{23} = 4.533, p < .0005)$ graphical menus. Additionally, the arm menu placement with the radial shape took more time than the linear spatial $(t_{23} = 3.787, p < .01)$ and the radial spatial $(t_{23} = 3.494, p < .01)$ and waist $(t_{23} = 3.476, p < .01)$ menus.
+
+Number of Target Re-Entries: The linear arm menu had a significantly higher number of target re-entries than the radial arm placement menu $(t_{23} = 3.726, p < .01)$.
+
+#### 4.1.5 Interaction Effect of Shape × Selection
+
+We did not find any significant effect on task completion time $(F_{2,46} = 1.642, p = .205)$, error rates $(F_{2,46} = 0.443, p = .645)$, or number of target re-entries $(F_{2,46} = 1.947, p = .154)$ between menu shapes and selection techniques.
+
+#### 4.1.6 Interaction Effect of Placement × Selection
+
+We found significant differences in average task completion time $(F_{6,138} = 2.47, p < .05)$ and number of target re-entries $(F_{6,138} = 2.485, p < .05)$ between menu placements and selection techniques.
+
+Task Completion Time: For the hand placement menus, we found that participants took significantly more time to complete a menu task using the head selection technique than ray-casting $(t_{23} = -6.06, p < .0005)$. For the spatial $(t_{23} = -5.102, p < .0005)$ and waist $(t_{23} = -4.648, p < .0005)$ menus, the head selection technique again performed worse than the ray-casting technique.
+
+Number of Target Re-Entries: We noticed that the eye gaze selection had a higher number of target re-entries for each placement. For example, the arm $(t_{23} = -5.383, p < .0005)$, waist $(t_{23} = -6.263, p < .0005)$, and spatial $(t_{23} = -4.045, p < .01)$ eye gaze menus had significantly more re-entries than the corresponding arm, waist, and spatial head selection menus. Furthermore, the hand $(t_{23} = -7.233, p < .0005)$, waist $(t_{23} = -6.042, p < .0005)$, and spatial $(t_{23} = -3.82, p < .01)$ eye gaze menus performed worse in terms of re-entries than the corresponding hand, waist, and spatial ray-casting graphical menus.
+
+#### 4.1.7 Interaction Effect of Placement × Shape × Selection
+
+We found that menu placements in conjunction with menu shapes and selection techniques had a significant effect on error rates $(F_{6,138} = 2.469, p < .05)$. Additionally, based on our hypotheses, we further investigated whether the spatial graphical menu in conjunction with the radial shape and the ray-casting selection technique is indeed better than the other combinations of graphical menus in terms of error rates. We found no significant difference in error rates between the spatial radial ray-casting menu and the other combinations of graphical menus.
+
+### 4.2 Qualitative Results
+
+We performed chi-squared tests on the gathered post-questionnaire data about preferred menu shapes and selection techniques. We did not find any significant difference for the spatial and arm menus in terms of menu shapes. However, we found significance for the hand ($\chi^2(2, N = 24) = 12.5$, $p < .05$) and waist ($\chi^2(2, N = 24) = 7.605$, $p < .05$) menus. Moreover, 12 participants thought that all shapes were equivalent for the spatial graphical menus, 9 and 8 participants preferred the linear and radial shapes respectively for the arm graphical menus, 14 participants preferred the radial shape for the hand graphical menus, and 13 participants thought that the linear shape was a good fit for the waist menu (Figure 3).
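
The chi-squared goodness-of-fit statistic behind these preference tests can be sketched as follows; the counts in the example are hypothetical, not the reported data:

```python
def chi_squared_uniform(observed):
    """Chi-squared goodness-of-fit statistic against a uniform
    expectation (equal preference across categories)."""
    n = sum(observed)
    k = len(observed)
    expected = n / k
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical shape-preference counts for one menu (N = 24, 3 options):
stat = chi_squared_uniform([14, 6, 4])
print(round(stat, 2))  # → 7.0
```

With $k - 1 = 2$ degrees of freedom, the $\alpha = .05$ critical value is about 5.99, so a statistic of 7.0 would indicate a significant preference.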
+
+
+Figure 3: Menu shape preference for each placement menu.
+
+For the selection techniques, we found significant differences for the spatial ($\chi^2(6, N = 24) = 22.66$, $p < .05$), arm ($\chi^2(6, N = 24) = 19.62$, $p < .05$), and waist ($\chi^2(6, N = 24) = 19.12$, $p < .05$) menu placements. Participants thought that all selection techniques were equivalent for the spatial graphical menus, that eye gaze was a good fit for the arm menus, that eye gaze or ray-casting was overall better for a hand menu, and that eye gaze was the favorite selection technique for the waist menu (Figure 4). The spatial graphical menu was ranked as the overall best menu placement and the arm graphical menu as the worst. The spatial graphical menu was also ranked best in terms of ease of use, mental and physical demand, and selection rates (Figure 5).
+
+
+Figure 4: Selection technique preference for each placement menu.
+
+To analyze the Likert scale data, we used Friedman's test followed by post-hoc Wilcoxon signed-rank tests for pairwise comparisons (Table 3). Average ratings for post-questionnaire questions 1 to 7 are summarized in Figure 5. From these results, we concluded the following:
+
+ * Participants liked the spatial menus more than the arm, hand, and waist graphical menus. The arm menu was participants' least favorite.
+
+ * The spatial and hand graphical menus were less mentally demanding than the arm and waist graphical menus.
+
+ * The spatial menus were significantly less physically demanding than the hand, arm, and waist graphical menus. The arm menu was the most physically demanding.
+
+ * Participants were able to successfully select the menu items from all the types of placement graphical menus.
+
+ * The frustration level was higher for the arm graphical menu and significantly lower for the spatial graphical menu.
+
+ * Participants thought that the arm graphical menu was significantly harder to use than the other types of placement menus.
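
The Friedman statistic used for the Likert ratings above can be sketched in a few lines. This is a simplified version that uses average ranks for ties but omits the usual tie-correction factor; the data in the example are illustrative, not the study's ratings:

```python
def friedman_statistic(ratings):
    """Friedman chi-squared statistic (no tie-correction factor).

    ratings: list of per-participant lists, one rating per condition
    (e.g., one Likert rating per menu placement).
    """
    n = len(ratings)      # participants
    k = len(ratings[0])   # conditions
    rank_sums = [0.0] * k
    for row in ratings:
        sorted_vals = sorted(row)
        for j, v in enumerate(row):
            # average rank over the span of tied values
            lo = sorted_vals.index(v) + 1
            hi = lo + sorted_vals.count(v) - 1
            rank_sums[j] += (lo + hi) / 2
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3 * n * (k + 1))

# Perfect agreement across 2 participants and 4 conditions gives n(k-1):
print(friedman_statistic([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 6.0
```

The statistic is compared against a chi-squared distribution with $k - 1$ degrees of freedom, matching the $\chi^2(3)$ values reported in Table 3.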
+
+Based on the comments gathered from the post-questionnaire, we found the following emerging themes. Six participants reported that the arm menu was in an uncomfortable position and was difficult and tiresome to use: "I found the arm slightly more physically demanding simply because it required me to move my neck downward a lot." -P13
+
+
+Figure 5: Post-questionnaire ratings for each placement menu.
+
+Also, for 3 of the participants, the arm menu felt shorter in a virtual environment: "It felt that the arm was shorter in VR causing the arm menu to feel like it was at my shoulder."-P18
+
+In the case of the ray-casting selection technique with the arm placement menu, participants had to stretch their primary hand every time they needed to select a menu item, which resulted in a poor experience: "The worst experience is with arm and controller because painting a laser on to arm is difficult." -P14
+
+For the hand graphical menu, the participants preferred this menu type for games in virtual environments: "I think that for VR games the hand placement would be the best since it's relatively small and easy to use." -P10
+
+Furthermore, the users put the waist graphical menu in the same category as the arm menu in terms of difficulty: "When using the menus placed on the waist, the user has to look down which may cause stain on neck after prolonged use." -P15
+
+Overall, the participants found the spatial menus easy to use, significantly less physically and mentally demanding in comparison with other graphical menus: "I preferred the spatial menus as they were the easiest to select using all three selection methods. It required the least amount of movement which I liked." -P3
+
+Nevertheless, 14 participants reported eye gaze as their favorite selection technique: "The easiest and fastest method of selecting a menu was the eye gaze. I did not need to spend any time aiming my head or the controller to the menu item I wanted to select." -P15
+
+However, for 5 of them, the eye gaze had issues with accuracy: "Eye gaze sometimes was not accurate, I had to look at the above menu item to get the one below. It only applies to middle menu options." -P4
+
+Below, we discuss the implications of our findings and provide design recommendations for implementing body-referenced graphical menus in virtual environments.
+
+Table 3: Results on Friedman's test and post-hoc analysis for Likert scale data.
+
+| Question | Friedman's Test | Spatial vs Arm | Spatial vs Hand | Spatial vs Waist | Arm vs Hand | Arm vs Waist | Hand vs Waist |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Q1 | $\chi^2(3) = 39.701$, $p < .0005$ | $Z = -4.215$, $p < .001$ | $Z = -3.58$, $p < .001$ | $Z = -4.247$, $p < .001$ | $Z = -2.728$, $p < .01$ | $Z = -0.23$, $p = .818$ | $Z = -2.183$, $p < .05$ |
+| Q2 | $\chi^2(3) = 24.019$, $p < .0005$ | $Z = -3.53$, $p < .001$ | $Z = -2.922$, $p < .01$ | $Z = -3.367$, $p < .01$ | $Z = -2.402$, $p < .05$ | $Z = -1.93$, $p = .054$ | $Z = -0.956$, $p = .339$ |
+| Q3 | $\chi^2(3) = 44.005$, $p < .0005$ | $Z = -4.215$, $p < .001$ | $Z = -3.95$, $p < .001$ | $Z = -3.841$, $p < .001$ | $Z = -3.292$, $p < .01$ | $Z = -2.372$, $p < .05$ | $Z = -0.951$, $p = .341$ |
+| Q4 | $\chi^2(3) = 28.253$, $p < .0005$ | $Z = -3.863$, $p < .001$ | $Z = -3.093$, $p < .01$ | $Z = -3.211$, $p < .01$ | $Z = -2.043$, $p < .05$ | $Z = -1.919$, $p = .055$ | $Z = -0.106$, $p = .915$ |
+| Q5 | $\chi^2(3) = 3.48$, $p = .323$ | $Z = -1.663$, $p = .102$ | $Z = -1.511$, $p = .131$ | $Z = -1.414$, $p = .157$ | $Z = -0.368$, $p = .713$ | $Z = -0.552$, $p = .581$ | $Z = -0.184$, $p = .854$ |
+| Q6 | $\chi^2(3) = 35.645$, $p < .0005$ | $Z = -4.035$, $p < .001$ | $Z = -3.46$, $p < .01$ | $Z = -3.552$, $p < .001$ | $Z = -3.031$, $p < .01$ | $Z = -3.142$, $p < .01$ | $Z = -0.15$, $p = .881$ |
+| Q7 | $\chi^2(3) = 43.882$, $p < .0005$ | $Z = -4.121$, $p < .001$ | $Z = -3.562$, $p < .001$ | $Z = -3.777$, $p < .001$ | $Z = -3.198$, $p < .01$ | $Z = -3.283$, $p < .01$ | $Z = -0.655$, $p = .513$ |
+
+## 5 DISCUSSION
+
+### 5.1 Menu Placements
+
+Placing a Graphical Menu on an Arm. Our experiment indicates that the arm menu placement took significantly longer to complete the menu task than the spatial, hand, and waist placement menus. This is primarily because the user needed to adjust their arm position in order to select the corresponding menu items, while the system message was in a fixed position located farther away than, for example, the spatial menu. Additionally, the user needed to turn their head up and down to look at the system message that showed them which book to choose from the menu. Overall, the arm graphical menu was the least favored among participants. The participants found this type of menu hard to use (physically and mentally), were highly frustrated while completing the menu tasks, and felt that the menu was in an uncomfortable position. This is primarily because we used an off-the-shelf HTC Vive implementation and tracked the body using two controllers and one additional tracker attached to a participant's waist, without any additional custom hardware. On one hand, this approach reflects real-world usage; on the other hand, it does not provide high tracking accuracy. However, we expect that better full-body tracking will make the body placements feel more intuitive and natural. For example, Caserman et al. [8] presented an accurate, low-latency body tracking approach using a Vive headset and trackers that can be incorporated by VR developers "to create an immersive VR experience, by animating the motions of the avatar as smoothly, rapidly and as accurately as possible."
+
+For the arm menu with a linear shape, the participants reported that it was hard to reach the menu items placed closer to the elbow. Our quantitative results also indicate that the linear and radial shapes of the arm placement menus are significantly slower than the linear or radial spatial graphical menus. The majority of the participants noted that the arm menu in conjunction with the head or ray-casting selection felt awkward and difficult. In the case of the head input, the participants had to turn their heads up and down many times and keep adjusting their arm. Overall, the participants concluded that the arm placement felt more intuitive when combined with a radial or linear shape and the eye gaze selection technique. Interestingly, even though the participants preferred eye gaze for the arm placement, this selection technique required a higher number of target re-entries for completing menu tasks than the head selection technique. Thus, we suggest implementing an arm placement menu with any menu shape and the eye gaze selection technique, as it provides a more natural and physically easier interaction. Additionally, for the arm graphical menu, we strongly recommend keeping interactions short, because prolonged use causes arm strain, especially when the arm has to be held up.
+
+Attaching a Graphical Menu to a Waist. We found that the waist menu was significantly faster than the arm graphical menu. Still, the participants found this menu physically hard and difficult to use, and the majority of the participants reported that the waist menu in conjunction with the head input would cause neck strain. Therefore, we highly recommend placing the system messages near the waist graphical menu and not using it for prolonged durations. For the waist menus, the eye gaze selection technique usually had a higher number of target re-entries than the head or ray-casting techniques, and the head selection technique had a higher average completion time than ray-casting. We find that the best interaction technique for the waist menus is the linear shape (based on the participants' preference) with the ray-casting selection technique.
+
+Placing a Graphical Menu on a Hand. The hand menu was participants' second favorite graphical menu. We found that task completion time for the hand menu is significantly faster than for the arm menu, but it is also more prone to errors than the arm placement menu. Additionally, we found that the head selection takes more time for the hand placement menu than the ray-casting technique, and eye gaze has significantly more target re-entries than ray-casting. Even though 4 of the participants reported the hand graphical menu as their favorite, others found it somewhat hard and tricky to use. We believe this is primarily because the participants had to hold up their hands and adjust their position in order to choose the menu items. Overall, based on the participants' feedback and the results, we suggest using the hand menu in conjunction with a radial shape (as it was mostly favored by the participants and did not raise any issues with the other quantitative metrics) and the ray-casting selection technique.
+
+Placing a Graphical Menu in the Virtual World. Overall, the spatial graphical menu was the most favored among participants. The spatial menu was significantly faster to use only in comparison with the arm menu. The users noted that the spatial menu would be best used with a radial shape and the ray-casting or eye gaze selection techniques. Even though this placement was participants' overall best graphical menu to use, we found that for the spatial menus the head selection took more time to complete a menu task and the eye gaze menu had more target re-entries than ray-casting. Therefore, we suggest using the spatial graphical menu with ray-casting and any menu shape.
+
+### 5.2 Menu Selection Techniques and Shapes
+
+For the selection techniques, we found that the participants made significantly more errors when selecting the menu items with eye gaze. Likewise, the number of target re-entries was significantly higher with eye gaze than with the ray-casting or head selection techniques. This is primarily because eye-tracking technology is highly sensitive to eye movement, causing users to accidentally select wrong menu items and to leave and re-enter the target menu item significantly more often. In the future, we foresee better eye-tracking accuracy that will make the eye gaze selection technique more intuitive and accurate to use [23].
+
+The ray-casting selection technique was faster than the head selection technique. However, we found that the participants made more errors when selecting the menu items with ray-casting than with the head selection. Likewise, the number of target re-entries was higher in the case of ray-casting than with the head selection technique. This is primarily because the ray-casting input does not require the user to precisely select the menu items and adjust the head position (which makes the head selection more time-consuming). Overall, ray-casting felt intuitive and easy to use for the participants. Moreover, the laser pointer of ray-casting gave the user additional control over the graphical menus.
+
+The head selection technique was participants' least favorite. We found that the head input took more time than the ray-casting selection technique because each participant had to adjust their head movement in order to select the menu item accurately and precisely. This is also the reason why the head input is less prone to errors and target re-entries. The participants reported that the head controls did not pair well with any graphical menu.
+
+Overall, menu shapes did not matter significantly to the participants. We also did not find any significant difference in task completion time, error rates, or number of target re-entries, which resonates with the finding of Santos et al. [22] that user experience is not greatly affected by linear or radial menu shapes.
+
+Based on the results of our experiment, we were unable to accept the $\mathbf{H1}$ and $\mathbf{H2}$ hypotheses. We did not find any significant difference in task completion time, error rates, or number of target re-entries for the spatial graphical menu with the radial shape and the ray-casting selection technique in comparison with the other graphical menus. Moreover, even though the participants preferred the more conventional spatial menu, shapes and selection techniques did not matter to them, as they selected both menu shapes (linear and radial) and all three selection techniques (ray-casting, head, and eye gaze) as preferred for the spatial menu.
+
+## 6 LIMITATIONS AND FUTURE WORK
+
+There are a few factors that could have affected our results. In particular, 2 participants had difficulty with the eye-tracking technology. We noticed that those participants wore thick lenses that made eye tracking less accurate at recognizing eye movement. Additionally, for the arm placement menu, the users had to rotate the arm before being able to even see or select items from the menu, which made the arm graphical menu less comfortable. Ideally, the arm menu should be more flexible and intuitive to use with a better design approach. The participants also reported that they wanted their virtual arm to be longer. We believe that with better full-body tracking technology (e.g., by using additional custom hardware), we can solve this issue and match the user's real arm with the virtual arm. Further, the system message was placed in the center near the virtual object in the VR environment and was not attached to a placement menu (e.g., near the arm placement menu). Ideally, the instruction panel should be placed close to each type of menu to homogenize the distance from the instructions to the target. We also did not implement additional pointer smoothing or padding between control elements that could help participants make fewer errors. Moreover, the text was sometimes unclear to read when it appeared on the participant's hand or arm. We think this is primarily because of the headset resolution, which could be solved with a higher-resolution VR headset.
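
Pointer smoothing of the kind mentioned above is often implemented as a simple exponential moving average over the cursor or gaze position. A minimal sketch (the function name and parameter values are illustrative, not part of the study's system):

```python
def smooth_pointer(samples, alpha=0.3):
    """Exponential moving average over 2D pointer/gaze samples.

    alpha in (0, 1]: lower values smooth more but add lag, a trade-off
    that matters for jittery inputs such as eye gaze.
    """
    smoothed = []
    prev = None
    for x, y in samples:
        if prev is None:
            prev = (x, y)  # seed with the first raw sample
        else:
            prev = (alpha * x + (1 - alpha) * prev[0],
                    alpha * y + (1 - alpha) * prev[1])
        smoothed.append(prev)
    return smoothed

# A sudden jump from (0, 0) to (10, 0) is damped toward the target:
print(smooth_pointer([(0, 0), (10, 0)], alpha=0.5))
```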
+
+Given the variety of criteria that need to be considered when implementing graphical menus in VEs, in the future it would be interesting to investigate how various menu hierarchy depths or sizes and other types of graphical menus (e.g., object-referenced or device-referenced) affect task completion time, error rates, number of target re-entries, and other quantitative and qualitative metrics. Additionally, even though Bowman et al. [5] specified that "body-centered menus also do not inherently support a hierarchy of menu items", it would be interesting to see how depth and the number of menu elements indeed affect such menus. In general, our study was designed around performing menu tasks in a static standing context; in the future, we would like to investigate how body-referenced menus can be applied in more dynamic virtual environments.
+
+## 7 CONCLUSION
+
+We presented an in-depth systematic study evaluating body-referenced graphical menus in virtual environments in terms of different placements (spatial, arm, hand, and waist), menu shapes (linear and radial), and selection techniques (ray-casting with a controller device, head, and eye gaze). Our results show that the spatial, hand, and waist menus are significantly faster than the arm menus. Moreover, we found that the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques; however, we did not find any significant difference in task completion time, error rates, or number of target re-entries for the menu shapes. A significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite. We also provided design guidelines and recommendations for body-referenced graphical menus, including their preferred shapes and selection techniques.
+
+## ACKNOWLEDGMENTS
+
+This work is supported in part by NSF Award IIS-1638060 and Army RDECOM Award W911QX13C0052. We also thank the anonymous reviewers for their insightful feedback. We are further grateful to the Interactive Systems and User Experience Lab at UCF for their support.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f64672b2ff95445d4f878fd24a47651924c51933
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,557 @@
+# Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns
+
+Vinayak R. Krishnamurthy*
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Katherine Boyd ${}^{§}$
+
+Department of Visualization
+
+Texas A&M University
+
+Ergun Akleman ${}^{ \dagger }$
+
+Departments of Visualization &
+
+Computer Science and Eng.,
+
+Texas A&M University
+
+Chia-An Fu ${}^{\P}$
+
+Department of Visualization
+
+Texas A&M University
+
+Sai Ganesh Subramanian ${}^{ \ddagger }$
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Matthew Ebert ${}^{\parallel}$
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Courtney Starrett**
+
+Department of Visualization
+
+Texas A&M University
+
+Neeraj Yadav ${}^{\dagger \dagger }$
+
+Department of Architecture
+
+Texas A&M University
+
+
+
+Figure 1: The computational pipeline for the geometric design and fabrication of woven tiles. This particular example illustrates the tiles generated using the plain weave symmetries filling 2.5D space. Figure 1c shows the curves in the fundamental domain; the yellow curve is the basic element of the repeating curve segment, and all other curve segments in the fundamental domain can be obtained by rotating and translating this yellow curve. Figure 1d shows the overall assembly with the tile corresponding to the yellow curve removed. The shapes of the top surfaces are also obtained with the Voronoi decomposition.
+
+## Abstract
+
+In this paper, we introduce a geometric design and fabrication framework for a family of interlocking space-filling shapes which we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. To demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles using the most common 2-fold fabrics, called 2-way genus-1 fabrics, namely plain, twill, and satin weaves.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+## 1 INTRODUCTION
+
+### 1.1 Motivation
+
+Space-filling shapes have applications in a wide range of areas from chemistry and biology to engineering and architecture [48]. Using space-filling shapes, we can compose and decompose complicated shell and volume structures for design and architectural applications. Space-filling shapes that are also tileable can further provide an economical way of constructing structures because they can be mass-produced. Despite their practical importance, the variety of 2.5D and 3D space-filling tiles at our disposal is quite limited. The most commonly known and used space-filling shapes are regular prisms such as rectangular bricks, since they are relatively easy to manufacture and widely available. However, reliance on regular prisms significantly constrains our design space for obtaining reliable and robust structures [16, 45, 58, 70, 71], particularly now that additive manufacturing techniques are gradually becoming more affordable across engineering and construction domains. In this paper, we introduce a geometric design and fabrication framework for a new class of interlocking space-filling shapes which we call bi-axial woven tiles.
+
+Systematic design of modular, tileable, and simultaneously interlocking building blocks is a challenging task. We find that there is currently no principled approach that allows one to generate such building blocks. To this end, we present a general conceptual framework that takes as input a set of curves determined by fabric weave patterns and uses these curves as Voronoi sites to partition space. This allows one to decompose space into an arbitrary partition induced by the input curves, wherein each partition can be considered a tile.
+
+---
+
+*e-mail: vinayak@tamu.edu
+
+${}^{ \dagger }$ e-mail: ergun.akleman@gmail.com
+
+${}^{ \ddagger }$ e-mail: sai3097ganesh@tamu.edu
+
+§e-mail: katherineboyd@tamu.edu
+
+${}^{\P}$ e-mail: sqree@tamu.edu
+
+${}^{\parallel}$ e-mail: matt_ebert@tamu.edu
+
+**e-mail: cstarrett@tamu.edu
+
+${}^{\dagger \dagger }$ e-mail: nrj31y@tamu.edu
+
+---
+
+### 1.2 Inspiration & Rationale
+
+While our framework is general, we specifically chose bi-axial weave patterns to demonstrate our approach. The inspiration for using bi-axial weave patterns came from the fact that woven fabrics can form strong structures from relatively weak threads through interlacing (or interlocking) [49]. Woven structures are therefore known to have applications ranging from textiles to composite materials. Using this as our rationale, we set out to investigate the possibility of constructing tiled assemblies of interlocking space-filling shapes that leverage the thread interlacing process of woven fabrics, specifically 2-fold structures. The advantage of choosing weave patterns is that they are closed under symmetry operations, thereby allowing us to systematically and intuitively design and construct an entire family of interlocking space-filling shapes, the bi-axial woven tiles. In addition to providing simple and intuitive control, woven tiles also inherit the structural characteristics of woven fabrics.
+
+In addition to the potential advantages rooted in mechanical behavior, 2-fold fabric structures are particularly useful for our purpose because of their geometric simplicity and intuitiveness: they provide a simple approach for designing interlocking space-filling tiles. A particular subset of 2-fold fabrics, known as 2-way genus-1 fabrics, is especially amenable to simple and intuitive control. These fabrics can be constructed using a regular square grid as the guide shape, and they include the most popular weaving structures such as plain, twill, and satin.
+
+### 1.3 Summary of Approach
+
+Using the properties of 2-fold 2-way genus-1 fabrics, our approach is to obtain desired curve segments that are closed under symmetry operations. One simplification afforded by these fabrics is that each curve segment can be chosen to be planar (see Figure 1a). In addition, we can define all well-known fabric patterns such as plain, twill, and satin using only three parameters. The fundamental domain of these symmetry operations is a prism with a square base because of the 2-way and genus-1 properties (see Figure 1b). In other words, we only have to compute the Voronoi decomposition of the fundamental domain. Then, the Voronoi region of the curve segment in the center, shown in yellow in Figure 1a, is used as the space-filling tile.
+
+We present a simplified method to compute the Voronoi decomposition of the fundamental domain with these curve segments. We first sample each curve segment to obtain a piece-wise linear approximation. We then compute the 3D Voronoi decomposition for the sample points, which gives us a set of convex Voronoi polyhedra for each curve segment. The union of these convex polyhedra gives us the desired space-filling tile. We identify simple and robust algorithms to take the union of all convex Voronoi polyhedra that come from the same piece-wise linear curve segment. We also developed a tile beautification process inspired by the fact that the points of equal distance to a planar surface and a line parallel to the surface lie on a parabolic cylinder. We add two planar surfaces that sandwich the control curves from top and bottom as additional Voronoi sites. The resulting Voronoi decomposition automatically provides nice boundaries that consist of parabolic regions. The 2D equivalent of the idea is shown in Figure 2.
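The geometric fact behind the beautification step can be verified numerically. The sketch below (our own illustration, not the paper's implementation) checks that points equidistant from the plane $z = h$ and the line $\{y = 0, z = 0\}$ lie on the parabolic cylinder $z = (h^2 - y^2)/(2h)$, independent of $x$:

```python
import numpy as np

# Numerical check: the locus of points equidistant from the plane z = h
# and the line {y = 0, z = 0} (the x-axis) is a parabolic cylinder.

h = 1.0
y = np.linspace(-0.9, 0.9, 7)
z = (h**2 - y**2) / (2 * h)          # candidate parabolic locus

dist_to_plane = np.abs(h - z)        # distance to the plane z = h
dist_to_line = np.sqrt(y**2 + z**2)  # distance to the x-axis line

print(np.allclose(dist_to_plane, dist_to_line))  # True
```

Setting the two distances equal and squaring gives $(h - z)^2 = y^2 + z^2$, i.e. $z = (h^2 - y^2)/(2h)$, which is why the check succeeds exactly.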
+
+To demonstrate our approach, we have designed many interlocking and space-filling tiles. We call them woven tiles since 2-fold
+
+
+
+
+
+(a) Voronoi decomposition of two enclosing lines with $S$-shaped pieces that form a square wave. (b) Voronoi decomposition of two enclosing lines with $S$-shaped pieces that form a triangular wave.
+
+Figure 2: A 2D example of beautification of boundaries. Note that the inclusion of two enclosing lines creates curved outer boundaries in the Voronoi decomposition. The effect is more visible at the sharp corners of the triangular wave. In 3D, since we use a surface and a curve, we obtain curved boundaries as ornamentation.
+
+fabrics refer to woven structures. This terminology is also helpful since we can use weaving terminology to describe the variety of tiles produced by this approach as plain, twill, or satin woven tiles. Because of their symmetry properties, these tiles can be assembled in more than a single configuration. Some assembly structures can even create loops as shown in Figure 1e. For these cases, we have shown that it is possible to lock the pieces using one flexible piece.
+
+### 1.4 Our Contributions
+
+Our overarching contribution in this work is a general conceptual framework for generating space-filling and interlocking tiles based on the fundamental principles of fabric weave patterns in conjunction with space decomposition using 3D Voronoi partition. Based on this framework, we make four specific contributions as listed below:
+
+1. We use our general framework to develop a simple and intuitive methodology for the design and construction of Bi-Axial Woven Tiles, space-filling tiles derived from the symmetries induced by woven fabrics. The basic idea is to use curves representing 2-way 2-fold weaving patterns (such as plain, twill, and satin) as Voronoi sites for decomposing 3-space.
+
+2. We introduce a simple and effective algorithm for approximating the Voronoi decomposition of space with labelled curve segments as the Voronoi sites. The algorithm uses a simple process that first discretizes a curve segment into a sequence of points and then constructs the Voronoi cell of the curve simply by computing the union of the constitutive Voronoi cells of each point on the curve. The first advantage of this method is its simplicity: it allows us to directly apply standard point Voronoi cell computation to curves. Secondly, it allows for an elegant computation of the Voronoi cell surface as a triangle mesh using a simple topological operation, namely removing the internal polygonal faces of adjacent constitutive cells of points.
+
+3. We demonstrate several cases of Bi-Axial Woven Tiles along with techniques for the fabrication and assembly of these tiles. We show the fabrication of these tiles with a variety of materials (plastic, wax, and metal) using different 3D printing, molding, and casting techniques. Furthermore, we demonstrate that these tiles can be assembled in more than a single configuration. From the same group, it is even possible to obtain two assemblies with different chirality (i.e. mirrored versions of each other).
+
+4. Finally, we present a comparative structural evaluation of plain, twill, and satin tile assemblies. The finite element analyses (FEA) of these assemblies under planar and normal loading conditions reveal that weaving allows the distribution of planar and normal loads across tiles through the contact surfaces generated by our methodology. We describe the qualitative relationship between the symmetries induced by the weave patterns and the stress distribution in the tiled assemblies.
+
+## 2 Related Work
+
+### 2.1 Space filling Polyhedra
+
+Space filling polyhedra, which can be used to tessellate (or decompose) a space [37], are defined as cellular structures whose replicas together can fill all of space watertight, i.e. without any voids between them [48]. While 2D tessellations and 2D space filling tiles are well-understood [37], problems related to 2.5D and 3D tessellations and tiles (i.e. shell and volume structures respectively) are still perceived as difficult. The perceived difficulty of 3D tessellations probably stems from the belief, held since 500 BC, that the tetrahedron can fill space. In fact, many failed efforts were made to prove this widespread belief [57].
+
+It is now known that the cube is the only space filling Platonic solid [25]. This partly explains the widespread use of regular prisms as space filling tiles. We are indebted to Goldberg, whose exhaustive cataloguing from 1972 to 1982 helped us to access all known space-filling polyhedra [26-28]. We now know that there are only eight space-filling convex polyhedra and only five of them have regular faces, namely the triangular prism, hexagonal prism, cube, truncated octahedron [72, 73], and the Johnson solid gyrobifastigium [7, 43]. Five of these eight space filling shapes are "primary" parallelohedra [14], namely the cube, hexagonal prism, rhombic dodecahedron, elongated dodecahedron, and truncated octahedron. Space filling polyhedra are still an active research area in mathematics [56]. However, as far as we know, no space filling shapes exist that can also interlock.
+
+### 2.2 Interlocking Structures
+
+History is rich with examples of puzzle-like interlocking structures, which have been analyzed under names such as stereotomy [20-22], nexorades [9, 10, 17], and topological interlocking [11, 18, 19, 68]. One of the most remarkable examples of interlocking structures is the Abeille flat vault, designed by the French architect and engineer Joseph Abeille [23, 62]. Nexorades, also called Leonardo grids, are structures constructed using notched rods that fit into the notches of adjacent rods [54, 59].
+
+### 2.3 Geometry and Topology of Fabric Weaves
+
+We observe that many interlocked structures can be viewed as knots and links that are decomposed into curve segments. This view simplifies the design process since we can build our framework by borrowing concepts directly from the mathematics literature. It can also provide significant intuition for design, since bi-axial textile weaving structures, also called 2-fold 2-way fabrics, likewise form knots and links when viewed as structures embedded on a toroidal surface [29, 30]. The term 2-way, usually called bi-axial, means that the strands run in two directions at right angles to each other: warp (vertical) and weft (horizontal). The term 2-fold means there are never more than two strands crossing each other [6].
+
+The popularity of 2-fold 2-way fabrics comes from the fact that textile weaving structures are usually manufactured on loom devices by interlacing two sets of strands, called warp and weft, at right angles to each other (see Figure 3a). Since the warp and weft strands are at right angles to each other, they form rows and columns. We colored warp threads blue and weft threads yellow to differentiate the two, as shown in Figure 3c. The names warp and weft are not arbitrary in practice: in the loom device, the weft (also called filling) strands are the ones that go under and over the warp strands to create a fabric. The basic purpose of any loom device is to hold the warp strands under tension so that weft strands can be interwoven. Using this basic framework, it is possible to manufacture a wide variety of weaving structures by raising and lowering different warp strands (in other words, by playing with the ups and downs in each row).
+
+There was no formal mathematical foundation behind bi-axial weaving until Grunbaum and Shephard, who are known for their contributions to 2D tiling [36, 38], investigated the mathematical properties of bi-axial weaving in the 1980s in a series of papers [33-35, 37]. By viewing weaves as matrices, as shown in Figure 3c, they simplified the problem of classifying and analyzing woven structures.
+
+
+
+Figure 3: The fundamental domain of 2-way 2-fold fabrics is a rectangle and they can be represented as a simple matrix. The warp threads are colored blue and weft threads are colored yellow to differentiate the two threads in the final matrix.
+
+
+
+Figure 4: Three parameters, $a$, $b$, and $c$, are sufficient to define all of the important 2-fold 2-way genus-1 fabrics.
+
+Grunbaum and Shephard studied a subset of 2-fold 2-way patterns that have a transitive symmetry group on the strands of the fabric, which they called isonemal fabrics [34]. They identified all isonemal patterns that hang together for periods up to 17 [33, 34]. Roth [55], Thomas [64-66], Zelinka [75, 76], and Griswold [31, 32] theoretically and practically investigated symmetry and other properties of isonemal fabrics. The identification of the hanging-together property is simpler for a certain type of isonemal fabrics called genus-1 [34]. Genus-1 means that each row with length $n$ is obtained from the row above it by a shift of $c$ units to the right, for some fixed value of the parameter $c$. Genus-1 fabrics include two special and well-known isonemal fabrics, twills and satins. A twill pattern is one in which each row of a design is obtained from the row above it by a shift of one square in a fixed direction (either left or right). A satin pattern is one in which each row or column has only one blue square in the fundamental domain given by an $n \times n$ matrix. To further simplify the design, we assume that each row of length $n$ consists of $a$ consecutive weft (yellow) threads and $b = n - a$ consecutive warp (blue) threads, as shown in Figure 4.
+
+These $[a, b, c]$-fabrics are guaranteed to hang together if $n = a + b$ and $c$ are relatively prime [35]. The most widely used fabric pattern, plain weaving, is given as the $[1, 1, 1]$-fabric in $[a, b, c]$ notation. Twills are given as either $[a, b, 1]$ or $[a, b, -1]$. Satins are described by $b = 1$ and $c^2 = 1 \mod (a + b)$ [12]. The genus-1 isonemal fabrics described by the $[a, b, c]$ notation not only include well-known patterns such as plain, twill, and satin but also a wide variety of additional bi-axial weaving patterns, as shown in Figure 5. On the other hand, as the $[3, 3, 2]$ pattern in the figure illustrates, the notation $[a, b, c]$ can also represent a non-fabric that falls apart. Fortunately, as discussed earlier, it is easier to avoid non-fabrics that fall apart than in the general isonemal weaving case: we can simply check whether $n = a + b$ and $c$ are relatively prime, or whether any row or column has no alternating crossing [35]. In conclusion, the $[a, b, c]$ notation provides a simple process to design control curves for bi-axial woven tiles. Figure 5 also demonstrates that among the $[a, b, c]$ patterns, the pattern is rotation-invariant only in the plain, twill, and satin cases. This is because in the plain, twill, and satin cases, the warp and weft patterns are guaranteed to be mirrored versions of each other [37]. Since this is required to obtain a single tile, we focus on only plain, twill, and satin woven tiles in this paper.
+
+
+
+Figure 5: Examples of isonemal genus-1 patterns that can be represented by the three parameters shown in Figure 4. The unnamed pattern $[6, 7, 4]$ hangs together, but $[3, 3, 2]$ falls apart since $3 + 3 = 6$ and 2 are not relatively prime.
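The $[a, b, c]$ rules above reduce to small arithmetic checks. A minimal sketch (the function names are our own, not from the paper):

```python
from math import gcd

# Sketch of the [a, b, c] classification rules: a fabric hangs together
# when n = a + b and c are relatively prime; twills shift each row by one
# square (c = ±1); satins have b = 1 and c^2 ≡ 1 (mod n).

def hangs_together(a, b, c):
    return gcd(a + b, c) == 1

def is_twill(a, b, c):
    return abs(c) == 1 and hangs_together(a, b, c)

def is_satin(a, b, c):
    n = a + b
    return b == 1 and (c * c) % n == 1 and hangs_together(a, b, c)

print(hangs_together(6, 7, 4))  # True: [6,7,4] hangs together
print(hangs_together(3, 3, 2))  # False: [3,3,2] falls apart
print(is_satin(7, 1, 3))        # True: [7,1,3] is a satin (9 mod 8 = 1)
```

The three printed examples match Figure 5 and the satin pattern $[7, 1, 3]$ used later in Figure 10.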
+
+## 3 Theoretical Framework
+
+To our knowledge, none of the existing approaches for producing interlocking structures provides pieces that are simultaneously interlocking and space-filling. For instance, Leonardo grids are simply finite cylinder shapes that leave most of the space empty. Our approach in this paper is to fill (or decompose) the space appropriately using Voronoi decomposition. It appears that the concept of filling space using Voronoi decomposition actually came from Delaunay's original intention for the use of Delaunay diagrams. He was the first to use symmetry operations on points and Voronoi diagrams to produce space filling polyhedra, which he called Stereohedra [15, 56]. Recent work on Delaunay Lofts extended points to specific types of curves to obtain more complicated space filling structures [60]. In this paper, we first observe that any shape (a line, a curve, or even a surface) can be used as a Voronoi site to fill the space. If our initial configurations of the shapes are "good", such as being closed under symmetry operations, we are guaranteed to obtain an interesting decomposition of the space; this is the real premise of this paper.
+
+The essential conceptual contribution of allowing any type of shape as a Voronoi site is the extension of potential space filling shapes from simple polyhedra to almost any shape with curved edges and curved faces. In fact, allowing curved edges and faces significantly extends the design space of space-filling polyhedra. For instance, Escher's complicated 2D space filling tiles were created using curved edges [41, 51]. Another recently developed family of space filling shapes, Delaunay Lofts [60], extended the design space by allowing curved edges and curved faces. Allowing any type of shape as a Voronoi site not only enables a systematic search for desired shapes from a large number of potential candidates, but also provides a simple design methodology for constructing space filling structures.
+
+Based on this point of view, the key parameters for the classification of space-filling shapes are essentially the topological and geometric properties of the Voronoi sites and their overall arrangements, which are usually obtained by symmetry transformations (rotation, translation, and mirror operations). The types of shapes and transformations uniquely determine the properties of the space decomposition. Based on this viewpoint, let us again look at Stereohedra and Delaunay Lofts.
+
+For Stereohedra, the Voronoi sites are points, the 3D $L_2$ norm is used for distance computation, the underlying space is 3D, and any symmetry operation in 3D is allowed [15, 56]. Based on these properties, we conclude that Stereohedra can theoretically represent every convex space filling polyhedron in 3D. Since points are used as Voronoi sites and the $L_2$ norm is used, the faces must be planar and the edges must be straight in the resulting Voronoi decomposition of the 3D space.
+
+For Delaunay Lofts, on the other hand, the Voronoi sites are curves given in the form $x = f(z)$ and $y = g(z)$ for every planar layer $z = c$, where $c$ is a real constant; a 2D $L_2$ norm is used to compute distance; the underlying space is 2.5D or 3D; and only the 17 wallpaper symmetries are allowed in every layer $z = c$ [60]. Based on these properties, we conclude that Delaunay Lofts (1) consist of stacked layers of planar convex polygons with straight edges, and (2) in each layer there can be only one convex polygon. In Delaunay Lofts, the number of sides of the stacked convex polygons can change from one layer to another. In conclusion, the faces of Delaunay Lofts are ruled surfaces since they consist of swept lines; the edges of the faces can be curved.
+
+For the bi-axial woven tiles in this paper, the Voronoi sites are curve segments obtained by decomposing planar periodic curves that are given, essentially${}^{1}$, in the form $z = F(x + n) = F(x)$ and $z = G(y + n) = G(y)$, where $n = a + b$ is the period of the fabric and $F$ can be any periodic function as long as it consists of $a$-length up regions and $b$-length down regions, as shown in Figure 6. The function $G$ is just the mirror of $F$, with $a$-length down regions and $b$-length up regions. The curve segments are obtained from these two periodic functions by restricting their domains to a region such as $(x_0, x_0 + kn]$. These curve segments are closed under the symmetries of bi-axial weaving patterns, which are given by $90^{\circ}$ rotation and translation operations. The 3D $L_2$ norm is used for distance computation. The underlying space is normally 2.5D, i.e. a planar shell structure [1].
+
+
+
+Figure 6: Examples of periodic curves that can be used as Voronoi sites, i.e. control curves.
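Such a periodic control curve, with $a$-length up regions and $b$-length down regions, can be sketched as a step function (our own illustration; the paper actually uses parametric NURBS curves):

```python
import numpy as np

# z = F(x) with period n = a + b: "up" for the first a units of each
# period, "down" for the remaining b units.

def F(x, a, b, up=1.0, down=0.0):
    n = a + b
    return np.where(np.asarray(x) % n < a, up, down)

x = np.linspace(0.0, 8.0, 9)
print(F(x, a=2, b=2))  # period 4: [1. 1. 0. 0. 1. 1. 0. 0. 1.]
```

The mirror function $G$, with $a$-length down regions and $b$-length up regions, is obtained by swapping the `up` and `down` values.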
+
+Based on these properties, it is clear that the resulting tiles would usually be genus-0 surfaces with curved faces and edges. Because of its bi-axial property, the fundamental domain for these tiles would always be a rectangular prism, an extruded version of the original rectangular fundamental domain of corresponding 2-way 2-fold fabric [39]. Therefore, the tiles that perfectly decompose this rectangular prism domain will also fill all 3D space.
+
+---
+
+${}^{1}$ We actually use parametric curves; this form is only used to provide a quick and simple explanation, without loss of generality.
+
+---
+
+
+
+Figure 7: The basic degree-1 NURBS curves that are used to construct woven tiles. Each curve is created by changing positions of 11 control points. The figures at the top are actual curves. The figures at the bottom are points that are created by sampling the initial curves. These points that approximate the curves are used as Voronoi sites.
+
+## 4 Design and Fabrication Process
+
+Our bi-axial woven tile design process consists of three steps: (1) designing curve segments; (2) designing the 3D configuration of the curve segments to be used as Voronoi sites; and (3) decomposing the space using Voronoi tessellation. For all steps, we have used the simplest approaches, which simplify the design process and provide robust computation.
+
+### 4.1 Designing Curve Segments
+
+We designed our control curves using Non-Uniform Rational B-Splines (NURBS). We initially allowed higher-degree curves to permit ${C}^{1}$ and ${C}^{2}$ continuity, but quickly realized that piecewise-linear curves are sufficient to obtain the desired results for woven tiles. Therefore, we designed all curves as degree-1 NURBS. For all cases, we use the same 11 control points; we simply move the positions of the control points to obtain the curve segments for the desired weaving pattern, as shown in Figure 7. To construct these curves, in addition to the three weaving parameters $a, b$, and $c$, we provide one additional control: the angle of connection of two consecutive tiles. By changing this angle we can obtain Square Waves, which appear as binary functions such as the ones shown in Figures 7a and 7b, and Partly Triangular Waves, which appear as regular piece-wise linear functions such as the ones shown in Figures 7d, 7c, and 7e. Two consecutive tiles produced by square waves can sit on top of each other, as shown in Figures 11a and 11b. With partly triangular tiles, we can adjust this angle, as shown in Figures 11d, 11c, and 11e.
+
+### 4.2 Designing Voronoi Sites
+
+Based on the three weaving parameters $a, b$, and $c$, we have developed an interface to create 3D curve segments that are closed under the symmetry operations of 2-fold 2-way genus-1 fabrics. The algorithm can be given as follows:
+
+1. Create the initial curve segment as $x = {F}_{x}(t)$, $y = 0$, and $z = {F}_{z}(t)$ based on the $a$ and $b$ values and the curve type. Without loss of generality, assume $t \in [0, 1]$, $z \in [0, 1]$, and $x \in [-n/2, n/2]$. Note that $n = a + b = {F}_{x}(1) - {F}_{x}(0)$.
+
+2. Create two replicas of the curve and translate them along the $x$ axis by adding and subtracting its period $n = a + b$ respectively. This creates three copies of the initial curve that follow each other.
+
+3. Create two replicas of these three curves. Translate one of them by the vector $(c, 1, 0)$ and the other by $(-c, -1, 0)$. This translation operation must be done modulo ${3n}$.
+
+- Remark 1: This operation creates a ${3n} \times 2 \times 1$ rectangular prism domain, which is sufficient to compute tiles. Note that we assume the height of the curves is 1 unit.
+
+- Remark 2: This rectangular domain is not a fundamental domain of the curve symmetries; it is only applicable to the genus-1 case.
+
+4. Create perpendicular curve segments.
+
+- Remark 3: Perpendicular curve segments are guaranteed to be the same for plain, twill, and satin. Therefore, we focus only on these cases to obtain a single tile.
+
+In practice, we create these curves in a larger rectangular domain, as shown in Figures 8, 9, and 10, to see the structure of the curves better. These rectangular domains must be larger than the ${3n} \times 2 \times 1$ domain described earlier to guarantee that we obtain at least one tile that can fill the space. In other words, at least one curve must be surrounded by its neighboring curves to guarantee that the Voronoi region corresponding to that particular curve segment fills the space. In Figures 8, 9, and 10, which show two plain, two twill, and one satin case, the center curve is colored yellow. We implemented this interface using SideFX's Houdini, robust 3D software that provides a node-based system for fast and easy interface development.
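The replication stages above can be sketched as follows. This is a hypothetical illustration assuming a sampled input curve; `replicate_sites` is our own name, not from the paper's implementation, and the modulo wrap is centered on the domain for simplicity:

```python
import numpy as np

def replicate_sites(curve, a, b, c):
    """Stages 2-3: three x-translated copies of the sampled curve, then two
    row copies shifted by (c, 1, 0) and (-c, -1, 0), with x wrapped mod 3n."""
    n = a + b
    # Stage 2: three copies along x, shifted by -n, 0, +n.
    row = np.vstack([curve + np.array([s, 0.0, 0.0]) for s in (-n, 0, n)])
    rows = [row]
    # Stage 3: two translated rows, x taken modulo 3n (centered on the domain).
    for sx, sy in ((c, 1.0), (-c, -1.0)):
        shifted = row + np.array([sx, sy, 0.0])
        shifted[:, 0] = (shifted[:, 0] + 1.5 * n) % (3 * n) - 1.5 * n
        rows.append(shifted)
    return np.vstack(rows)

# Stage 1 for a plain weave [1,1,1]: a square-wave curve sampled at 8 points,
# with x in [-n/2, n/2] as required.
t = np.linspace(0.0, 1.0, 8)
curve = np.stack([2.0 * t - 1.0, np.zeros_like(t), (t < 0.5).astype(float)], axis=1)
sites = replicate_sites(curve, a=1, b=1, c=1)
print(sites.shape)  # (72, 3): 8 samples x 3 copies x 3 rows
```

The resulting point set covers the $3n \times 2 \times 1$ domain of Remark 1 and can be fed directly to a point Voronoi routine.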
+
+### 4.3 Decomposition of the Space
+
+Accurate decomposition of a given space using curves as Voronoi sites can be quite complicated. We have therefore developed a simple method that provides a reasonably good approximation of the decomposition. Our method consists of the following stages:
+
+1. Sample the original curve segments by obtaining the same number of points for each curve segment.
+
+2. For the beautification step, create and sample two sandwiching (or bounding) planes; otherwise, skip this step. All the examples in this section are created using the beautification step.
+
+3. Label points as follows:
+
+- The points that originate from the central yellow curve are labeled using one label, say 0.
+
+- All other points are labeled using another label, say 1.
+
+- Remark 1: If the beautification step is used, the points coming from the sandwiching planes are also labeled 1.
+
+4. Decompose the space using the 3D Voronoi diagram of these points, which gives us a set of labeled Voronoi regions, i.e. convex polyhedra labeled either 0 or 1.
+
+5. Take the union of all Voronoi regions labeled 0 to obtain the desired space filling tile. The union operation consists only of face removal operations as follows:
+
+- Remove the shared faces of two consecutive convex polyhedra coming from two consecutive sample points on the curve.
+
+- Remark 2: These faces will always have the same vertex positions with opposing order.
+
+
+
+Figure 8: An example for designing control curves for $\left\lbrack {1,1,1}\right\rbrack$ plain woven tiles.
+
+
+
+Figure 9: An example for designing control curves for $\left\lbrack {2,2,1}\right\rbrack$ twill woven tiles.
+
+
+
+Figure 10: An example for designing control curves for $\left\lbrack {7,1,3}\right\rbrack$ satin woven tiles.
+
+
+
+Figure 11: Examples of plain, twill, and satin woven tiles using the basic degree-1 NURBS curves shown in Figure 7.
+
+
+
+Figure 12: Examples of assemblies showing only the tiles cut to stay within the rectangular domain.
+
+- Remark 3: If the underlying mesh data structure provides consistent information, this operation is guaranteed to produce a 2-manifold mesh. Even if the underlying data structure does not provide consistent information, the operation creates a disconnected set of polygons that can still be 3D printed via an STL file.
+
+- Remark 4: If the beautification step is skipped, i.e. the two sandwiching planes are not used, take an intersection with the bounding rectangular prism.
+
+6. Optional step: Take the union of the Voronoi regions with label 1 to obtain a hollow space corresponding to a mold that can be used to mass produce the space filling tiles. Note that we again need to take an intersection with the bounding rectangular prism if the beautification step is skipped.
+
+We have implemented this stage in both Matlab and Houdini. For the 3D Voronoi decomposition of points, we used the built-in functions available in Matlab and Houdini.
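The labeling-and-union idea can be sketched with an off-the-shelf point Voronoi routine. This is a simplified illustration using SciPy, not the Matlab/Houdini implementation; the function name and the sample curves are our own:

```python
import numpy as np
from scipy.spatial import Voronoi

def tile_faces(sites, labels):
    """Split the Voronoi ridges (cell faces) into faces kept on the tile
    surface and internal faces removed by the union step."""
    vor = Voronoi(sites)
    kept, removed = [], []
    for (p, q), face in zip(vor.ridge_points, vor.ridge_vertices):
        if labels[p] == labels[q] == 0:
            removed.append(face)   # shared face of adjacent label-0 cells
        elif 0 in (labels[p], labels[q]):
            kept.append(face)      # boundary face between tile and neighbors
    return kept, removed

# Sample a central curve (label 0) and two translated neighbors (label 1).
t = np.linspace(0.0, 1.0, 20)
center = np.stack([t, np.zeros_like(t), np.sin(2 * np.pi * t)], axis=1)
sites = np.vstack([center,
                   center + np.array([0.0, -1.0, 0.0]),
                   center + np.array([0.0, 1.0, 0.0])])
labels = np.array([0] * 20 + [1] * 40)

kept, removed = tile_faces(sites, labels)
```

Here `kept` approximates the boundary surface of the central curve's tile, while `removed` lists the internal faces whose deletion performs the union step.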
+
+
+
+Figure 13: Examples of assemblies with uncut tiles.
+
+
+
+Figure 14: Examples of casting aluminum using the lost wax method.
+
+
+
+Figure 15: Assembly of plain woven tiles. One of the black pieces is a flexible silicone piece and is needed to successfully assemble plain tiles.
+
+### 4.4 Fabrication
+
+All examples in this section were created using the beautification step. We printed the tiles shown in Figures 11d, 11c, and 11e using both standard resin and elastic resin. To investigate various material properties and potential manufacturing options, we made rubber molds of the tiles shown in Figures 11d and 11c for casting silicone rubber and wax versions. The wax tiles were used to cast aluminum tiles via the lost wax casting process, as shown in Figure 14a. Also shown in Figure 14b is the assembly of wax and aluminum tiles.
+
+## 5 Physical Assembly
+
+The geometry and topology of weaves has a rich research history, with several open questions relating to the ability of weaves to hold together. The works by Grunbaum et al. [37] assume that the threads being woven are infinitely long. This is obviously not the case with woven tiles, making it more difficult to completely and formally characterize the assembly of woven tiles. Therefore, our first evaluative step was to physically assemble common weaving patterns (plain, twill, and satin), with the goal of exploring how the symmetries induced by these patterns affect the method of creating assemblies of the respective tiles. We are particularly interested in two aspects of woven tile assembly: (a) the locking ability, which maps to the holding-together property of the weaves, and (b) chiral configurations of woven tile assemblies.
+
+### 5.1 Locking Ability of Woven Tiles
+
+The topology of a weaving pattern directly affects the locking ability of its corresponding woven tile. For instance, plain weave tiling results in self-locking configurations (Figure 15) identical to a plain woven fabric. Therefore, if zero tolerance is assumed, plain woven tiles constructed from rigid materials such as PLA or aluminium theoretically cannot be assembled together. In the $2 \times 2$ plain woven tile assembly shown in Figure 15a, one of the two black tiles (also the dark green tile in Figure 1e) is a compliant tile made of silicone, constructed through casting. This assembly is structurally stable, and the geometry of the elements itself holds the structure together. Specifically, both the assembly and disassembly of the plain woven tiling are possible only through the application of force. In addition to introducing a flexible element, we also experimented with all four pieces cast in wax as well as aluminium. In this case, the shrinkage in the individual pieces allowed the tiling to be assembled (Figure 14b).
+
+
+
+Figure 17: Assembly of satin woven tiles.
+
+In the case of twill weaves, we do not encounter the locking problem. As seen in Figure 16, the twill assembly can be created simply by an alternating placement of tiles along each of the axes (the white and blue tiles represent each axis). Therefore, neither assembly nor disassembly requires any application of force, and we did not need any flexible pieces for twill (Figure 16c). We make two observations here. First, in the plain woven tiling, exactly half of each tile is above one adjacent tile and the other half is underneath a second adjacent tile. Second, in the case of the twill assembly, the unit tiles do not share this alternating above-underneath relationship with their neighbors. However, note that if two twill woven tiles are combined to create a double-length tile (Figure 16d), we obtain the above-underneath relationship that will likely produce a perfectly interlocking tiling (thereby needing flexible tiles akin to the plain-woven case).
+
+In the case of satin weaves (Figure 17), we come to similar conclusions: there is a minimal number of repetitions of each tile needed to ensure a tightly packed interlocked assembly. While we can say for certain that the number of repetitions must be higher than for twill, we currently do not claim what that number should be. We believe that much work needs to be done to develop a formal theory of the locking ability of woven tiles.
+
+
+
+Figure 18: An example of chirality in plain woven tile assemblies.
+
+
+
+Figure 19: Von Mises stress distribution on single woven tiles.
+
+### 5.2 Chirality
+
+A chiral object is one that is non-superposable on its mirror image. Chirality is fundamental to several natural phenomena and engineering applications. Our first example exploring chirality is the plain-woven assembly, wherein we observed that assembling the same plain-woven tiles in mirrored configurations leads to chiral assemblies (Figure 18). Penne's work on planar layouts [50] provides a formal explanation of this property by connecting projective geometry and topology.
+
+## 6 Structural Evaluation
+
+Our methodology has multiple potential uses when tiles are treated as structural building blocks. For example, we can assemble tiles of plain weave to use them as reinforced slab blocks. This is because the nature of the contacts between the tiles allows forces and stresses to be distributed easily from one tile to another. To explore this aspect, we performed simulations of the individual shape separately and of the response of the shape in the assembly. For our evaluation, we considered the three commonly known plain, twill, and satin weaves and analyzed their response to basic mechanical loading conditions. The main motivation is to observe key relationships between the symmetries induced by these well-known weave patterns and the corresponding mechanical behavior.
+
+### 6.1 Evaluation Methodology
+
+In addition to experimental methods [44], FEA is considered one of the most powerful tools for studying the mechanical properties of textile composites, owing to the complex nature of the interactions between the unit cells [61]. Based on this, we present an FEA study with two objectives. First, we are interested in understanding the effect of contact between the interlocked woven tiles. Second, we want to observe the stress distributions for an individual woven tile and the patterns that emerge as an effect of the weaving pattern.
+
+|  | Plain | Twill | Satin |
+| --- | --- | --- | --- |
+| Minimum Stress (Pa) | 2099 | 4.76e-9 | 1212 |
+| Average Stress (MPa) | 0.93 | 0.59 | 0.97 |
+| Maximum Stress (MPa) | 12.11 | 48.41 | 19.64 |
+| Minimum Displacement (m) | 0.00 | 0.00 | 0.00 |
+| Average Displacement (m) | 2.91e-6 | 1.97e-5 | 8.36e-6 |
+| Maximum Displacement (m) | 1.22e-5 | 1.97e-6 | 4.13e-5 |
+
+Table 1: Minimum, maximum, and average stresses and displacements for woven tile assemblies under normal loading.
+
+We used ANSYS Workbench 2019 R1 and conducted static structural analyses for all simulations. We specifically explored two loading conditions: planar (tensile load applied in the plane of the tiled assembly) and normal (compressive load applied normal to the plane of the tiled assembly). For simplicity, we assume unit forces ($1\,\mathrm{N}$) and moments ($1\,\mathrm{N\cdot m}$) across all simulations. All the dimensions were chosen in accordance with the 3D printed shapes. For the analysis, we first imported a given woven tile as a solid body in SolidWorks 2019 and created assemblies of these tiles. Here, the size of the assembly was an important factor for a fair comparison. We used the satin weave as the benchmark since it required the largest number of tiles. Based on this, we created an $8 \times 8$ assembly for the plain, twill, and satin woven tile assemblies. For material considerations, we used the material properties of PLA (polylactic acid). Specifically, we set the density to $1250\,\mathrm{kg/m^3}$, the Young's modulus to $3.45 \times 10^{9}\,\mathrm{Pa}$, and the Poisson's ratio to 0.39. We further assume all the contact regions to be frictionless. For each loading condition, the Von Mises stress and the total deformations were evaluated.
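As a quick sanity check of this setup, the isotropic PLA parameters stated above determine the remaining elastic constants. The snippet below is an aside (not part of the ANSYS workflow) that derives the shear and bulk moduli from the standard isotropic elasticity relations.

```python
# PLA properties as stated in the text (SI units).
DENSITY = 1250.0          # kg/m^3
YOUNGS_MODULUS = 3.45e9   # Pa
POISSON_RATIO = 0.39      # dimensionless

def shear_modulus(E, nu):
    """G = E / (2 (1 + nu)) for an isotropic material."""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E, nu):
    """K = E / (3 (1 - 2 nu)) for an isotropic material."""
    return E / (3.0 * (1.0 - 2.0 * nu))

G = shear_modulus(YOUNGS_MODULUS, POISSON_RATIO)  # ~1.24 GPa
K = bulk_modulus(YOUNGS_MODULUS, POISSON_RATIO)   # ~5.23 GPa
print(G, K)
```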
+
+### 6.2 Tensile Loading on Single Tiles
+
+We begin with a simple test of tensile loading of individual unit tiles for the plain, twill, and satin cases (Figure 19). The first key observation is that, for the same $1\,\mathrm{N}$ load, the twill woven tile admits the minimum value of the maximum stress ($43\,\mathrm{MPa}$) as compared to the plain and satin tiles ($69\,\mathrm{MPa}$ and $62\,\mathrm{MPa}$, respectively). What is commonly evident across all cases is that the maximum stresses are located at the saddle points of the tiles. Also note that these are the critical points where any two tiles come in contact in a given assembly, causing a high stress region. Therefore, the individual stress contours (Figure 19) help us identify the critical high stress regions through which load is internally transferred from part to part in a woven assembly. Furthermore, since the critical stresses are transferred through two doubly-curved surfaces in contact, the stress can be propagated in multiple directions.
+
+### 6.3 Common Behavioral Characteristics Across Weaves
+
+The deformation of the weave assembly behaves similarly to a solid block of material for both planar and normal loading conditions. This is expected since the tiles are space filling and interlocking. The stress characteristics of the shapes, however, were found to have a deeper relationship with the geometry of the weave pattern (Figures 21 and 20). Below, we make observations regarding these distributions for normal and planar loading conditions.
+
+### 6.4 Assembly Under Normal Loading
+
+Threads of plain, twill, and satin weaves were created by joining individual tiles. These threads were assembled to form an $8 \times 8$ assembly (Figure 20). The faces on the perimeter were fixed, and a unit normal force was applied perpendicular to the plane of the assembly in the central region.
+
+
+
+Figure 20: Von-Mises stress distributions and deformation of 8x8 assemblies of plain (left), twill (middle) and satin symmetries under normal loading. The assemblies are based on threads constructed out of multiple tiles. A uniform unit normal force was applied in the central region of the assembly indicated by the direction of arrow. The faces in the region indicated by the ground symbol were assigned as fixed supports.
+
+We observe that the woven nature of the tiles allows the deformation caused by the central load to be distributed uniformly around the axis of application in the assembly. Given that the deformation of the assembly is similar to that of a solid block, the contacts between the threads along the two axes transfer the incoming forces and stresses to the adjacent tiles. The distribution for each assembly respects the geometry of the weave pattern. For example, in the plain woven case (Figure 20, left panel), the stress is maximum at the center tiles and transitions from a concentrated distribution to a radial checkerboard pattern toward the periphery, reminiscent of the original plain woven symmetries (Figure 5, top-left image). Interestingly, in twill we notice that the stress propagates in a spiral-like manner from the central region (Figure 20, middle panel).
+
+From a cursory numerical comparison (Table 1), there are two interesting observations to be made. First, the twill assembly has the lowest average stress ($0.59\,\mathrm{MPa}$), while the plain and satin assemblies exhibit comparable average stresses. This is supported by the textile engineering literature [63]. However, what is interesting is that twill simultaneously exhibits the largest range of stress ($0$–$48\,\mathrm{MPa}$) when compared to the plain ($0$–$12\,\mathrm{MPa}$) and satin cases. This implies that while twill may be better suited for load-sensitive applications, plain and satin assemblies may be better candidates for scenarios that need durability.
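The range comparison above can be reproduced directly from the values in Table 1. A minimal sketch (values transcribed from the table; the helper names are ours):

```python
# Table 1, normal loading: minimum stress in Pa, maximum stress in MPa.
TABLE_1 = {
    "plain": {"min_Pa": 2099.0, "max_MPa": 12.11},
    "twill": {"min_Pa": 4.76e-9, "max_MPa": 48.41},
    "satin": {"min_Pa": 1212.0, "max_MPa": 19.64},
}

def stress_range_MPa(row):
    """Width of the stress interval, converted to a single unit (MPa)."""
    return row["max_MPa"] - row["min_Pa"] / 1e6

ranges = {weave: stress_range_MPa(row) for weave, row in TABLE_1.items()}
print(max(ranges, key=ranges.get))  # -> twill, the widest stress range
```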
+
+### 6.5 Assembly Under Planar Loading
+
+In the case of planar loading, we applied a tensile load on the threads in one axial direction and observed the effect on the orthogonal threads. We notice that the average and maximum stress values were lower in the assembly compared to the individual simulations for all weaving patterns. This means that the assembly allows for an efficient stress distribution.
+
+Similar to the stress patterns in normal loading, the planar simulations show a pronounced stress arrangement. While the plane of maximum stress distribution is orthogonal to the load direction in plain woven tiles, we note that in the case of twill it is aligned approximately perpendicular to the weave direction, as also noted in previous works [8, 42]. The plane of maximum stress is in fact orthogonal to the diagonal lines produced on the face if the assembly is developed into a twill fabric. On the contrary, no apparent stress patterns can be noticed in the satin assembly. We also note a significantly higher value of maximum stress in satin compared to twill and plain (Table 2). This agrees with the fact that in clothing, satin, owing to its long yarn floats, is unstable [63] and needs a much more densely woven fabric to counter this.
+
+The twill assemblies again exhibit the lowest average Von Mises stress levels ($0.42\,\mathrm{MPa}$) as compared to plain ($0.57\,\mathrm{MPa}$) and satin ($3\,\mathrm{MPa}$). This is aligned with the current literature on weave mechanics, wherein plain and twill weaves display superior resistance to tensile loading in comparison to satin weaves because of the lower frequency of alternation between the two axes [63]. Similar to normal loading, twill exhibits a larger range of stress values ($0$–$23\,\mathrm{MPa}$) than plain ($0$–$10\,\mathrm{MPa}$), while satin shows the widest range of all ($0$–$793\,\mathrm{MPa}$). Here, once again, satin clearly demonstrates a significantly higher maximum stress when compared to plain and twill, as is evident from the topology of the satin weave.
+
+## 7 Discussion
+
+The work presented in this paper opens up (1) many new directions that need to be explored further, and (2) many interesting questions that need to be investigated further. In the rest of this section, we discuss some of these future directions and questions.
+
+|  | Plain | Twill | Satin |
+| --- | --- | --- | --- |
+| Minimum Stress (Pa) | 308 | 2.92e-7 | 2.31e-5 |
+| Average Stress (MPa) | 0.57 | 0.4217 | 3.08 |
+| Maximum Stress (MPa) | 10.43 | 23.32 | 793.80 |
+| Minimum Displacement (m) | 0.00 | 0.00 | 0.00 |
+| Average Displacement (m) | 1.54e-6 | 9.27e-6 | 2.27e-6 |
+| Maximum Displacement (m) | 4.14e-6 | 7.91e-6 | 1.50e-5 |
+
+Table 2: Minimum, maximum, and average stresses and displacements for woven tile assemblies under planar loading.
+
+### 7.1 Generalization based on Knots and Links
+
+2-fold fabric structures are much richer than just 2-way genus-1 fabrics. Their real power can be best understood with extended graph rotation systems (EGRS), which were introduced in the early 2010s [5, 6]. EGRS allow us to use orientable 2-manifold meshes as guide shapes to represent knots and links. The guide shapes help us classify the fabrics. For instance, the guide shapes for 2-fold 2-way fabrics are regular grids embedded on genus-1 surfaces. For 2-fold 3-way fabrics, we need regular hexagonal or regular triangular grids embedded on genus-1 surfaces [6]. This is useful since some of the Leonardo grid designs are also based on 3-way woven patterns [54]. Using regular maps [13, 67], it is also possible to obtain hyperbolic tilings. Using the regular maps that correspond to hyperbolic tiles as guide shapes, 2-fold k-way genus-n fabrics can be obtained. From these fabrics, one can also obtain space-filling shapes. For practical applications, a significant amount of theoretical work is still needed.
+
+### 7.2 Locking
+
+The key open question that we hope to answer in our future work is a formally supported computational methodology for determining the minimum tile repetition that generates pure interlocking of woven tiles. Here, Dawson's work on the enumeration of weave families can provide an important starting point for developing such a method based on sound mathematical principles. We see that the locking ability of woven tiles is related to three interlinked concepts in the geometry and topology literature, namely, liftability [53], oriented matroids [77], and planar layouts of lines in 3-space [50]. Simply determining the repetitions needed for locking is only the first step. Once we obtain a locking configuration, the second challenge is to determine the minimum number of flexible/compliant elements that make the assembly possible. We only showed this for the plain woven tiles (Figure 1e). To the best of our knowledge, a general strategy for this problem is currently unavailable.
+
+### 7.3 Chirality
+
+As we have seen in our results, chirality is a key aspect for further investigation in this research. A recent work discovered how to produce handedness in auxetic unit cells that shear as they expand by changing the symmetries and alignments of repeating unit cells [46]. Using these symmetry and alignment rules, we can potentially expand our woven tiles to develop a new class of rigid and compliant structures [40, 52, 74]. Recent works on knot periodicity in reticular chemistry [47] and tri-axial weaves (also known as mad weaves) [24] are fundamental examples of how the geometry and physics of chirality are connected. Thus, identifying any fundamental multi-physical behavior of the assemblies shown in this work and beyond would allow us to construct assemblies with several practical applications, such as mechanically augmented structures in mechanical, architectural, aerospace [2, 3], and materials [69] engineering. The main gap that must first be filled, however, is a complete characterization of the chirality of woven tiles, including and beyond the plain, twill, and satin varieties.
+
+
+
+Figure 21: Von-Mises stress distributions and deformation of 8x8 assemblies of plain (left), twill (middle) and satin symmetries under planar loading. The assemblies are based on threads constructed out of multiple tiles.
+
+### 7.4 Structural Behavior & Topology
+
+We observed a correlation between the weave topology and the structural behavior of woven tile assemblies. However, our analysis and observations are currently qualitative. Therefore, a formal and constitutive methodology for connecting the topology and the structural properties is an important future direction that needs attention. As an important example, determining the relationship between the direction of stress distributions and the weave parameters (the numbers $a, b$, and $c$ in Figure 4) would allow for the systematic design of woven tiles for specific applications.
+
+## 8 Conclusion & Future Directions
+
+In this paper, we have developed a methodology to design interlocking space-filling tiles, which we call bi-axial woven tiles, that are generated using the topology of bi-axial woven fabrics. To this end, we developed a method to create the desired input curve segments using the properties of 2-fold 2-way genus-1 fabrics. We further developed a simple method to compute the Voronoi decomposition of the curve segments. We demonstrated our general methodology by designing, fabricating, assembling, and mechanically analyzing woven tile assemblies. We 3D-printed some of these tiles and physically observed their mathematical and physical properties. We also developed molds to directly cast these shapes in a wider range of materials such as silicone and aluminium. While our physical evaluation of the individual and assembled properties of these tiles aligns with the current literature on woven fabrics, we show some interesting additional properties that were not previously apparent. Furthermore, our results suggest that these interlocking tiles have the potential to replace existing extrusion-based building blocks (such as bricks), which do not provide interlocking capability.
+
+We want to point out that 2-fold fabrics are not the final frontier. It is also possible to represent k-fold fabrics using 3-manifold meshes as guide shapes [4]. The extension to k-fold fabrics requires even more theoretical foundations, but it demonstrates the potential. A significant advantage of using guide shapes is that the topological properties of the knotted structures do not change with any geometric perturbation of the guide shapes. In conclusion, even though we chose our proof-of-concept tiles from the 2-fold 2-way types, the ideas in this paper can be extended to more general types of fabrics with the maturation of the theoretical work on regular maps and 3-manifold meshes.
+
+## References
+
+[1] S. Adriaenssens, P. Block, D. Veenendaal, and C. Williams. Shell structures for architecture: form finding and optimization. Routledge, Abingdon, United Kingdom, 2014.
+
+[2] A. Airoldi, P. Bettini, P. Panichelli, M. F. Oktem, and G. Sala. Chiral topologies for composite morphing structures-part i: Development of a chiral rib for deformable airfoils. physica status solidi (b), 252(7):1435- 1445, 2015.
+
+[3] A. Airoldi, P. Bettini, P. Panichelli, and G. Sala. Chiral topologies for composite morphing structures-part ii: Novel configurations and technological processes. physica status solidi (b), 252(7):1446-1454, 2015.
+
+[4] E. Akleman, J. Chen, and J. L. Gross. Block meshes: Topologically robust shape modeling with graphs embedded on 3-manifolds. Computers & Graphics, 46:306-326, 2015.
+
+[5] E. Akleman, J. Chen, and J. L. Gross. Extended graph rotation systems as a model for cyclic weaving on orientable surfaces. Discrete Applied Mathematics, 193:61-79, 2015.
+
+[6] E. Akleman, J. Chen, Q. Xing, and J. Gross. Cyclic plain-weaving with extended graph rotation systems. ACM Transactions on Graphics; Proceedings of SIGGRAPH'2009, 28(3):78.1-78.8, August 2009.
+
+[7] S. Alvarez. The gyrobifastigium, not an uncommon shape in chemistry. Coordination Chemistry Reviews, 350:3-13, 2017.
+
+[8] M. J. Avanaki and A. A. A. Jeddi. Mechanical behavior of regular twill weave structures; part i: 3d meso-scale geometrical modelling. Journal of Engineered Fibers and Fabrics, 10(1):155892501501000112, 2015.
+
+[9] O. Baverel, H. Nooshin, Y. Kuroiwa, and G. Parke. Nexorades. International Journal of Space Structures, 15(2):155-159, 2000.
+
+[10] O. L. Baverel. Nexorades: a family of interwoven space structures. PhD thesis, University of Surrey, 2000.
+
+[11] M. Carlesso, A. Molotnikov, T. Krause, K. Tushtev, S. Kroll, K. Rezwan, and Y. Estrin. Enhancement of sound absorption properties using topologically interlocked elements. Scripta Materialia, 66(7):483-486, 2012.
+
+[12] Y.-L. Chen, E. Akleman, J. Chen, and Q. Xing. Designing biaxial textile weaving patterns. Hyperseeing: Special Issue on ISAMA 2010 - Ninth Interdisciplinary Conference of the International Society of the Arts, Mathematics, and Architecture, 6(1):53-62, 2010.
+
+[13] M. Conder and P. Dobcsányi. Determination of all regular maps of small genus. Journal of Combinatorial Theory, Series B, 81(2):224- 242, 2001.
+
+[14] H. S. M. Coxeter. Regular polytopes. Courier Corporation, N. Chelmsford MA, 1973.
+
+[15] B. N. Delaunay and N. N. Sandakova. Theory of stereohedra. Trudy Matematicheskogo Instituta imeni VA Steklova, 64:28-51, 1961.
+
+[16] M. Deuss, D. Panozzo, E. Whiting, Y. Liu, P. Block, O. Sorkine-Hornung, and M. Pauly. Assembling self-supporting structures. ACM Trans. Graph., 33(6):214:1-214:10, Nov. 2014. doi: 10.1145/2661229. 2661266
+
+[17] C. Douthe and O. Baverel. Design of nexorades or reciprocal frame systems with the dynamic relaxation method. Computers & Structures, 87(21):1296-1307, 2009.
+
+[18] A. Dyskin, Y. Estrin, A. Kanel-Belov, and E. Pasternak. A new concept in design of materials and structures: Assemblies of interlocked tetrahedron-shaped elements. Scripta Materialia, 44(12):2689-2694, 2001.
+
+[19] Y. Estrin, A. Dyskin, and E. Pasternak. Topological interlocking as a material design concept. Materials Science and Engineering: C, 31(6):1189-1194, 2011.
+
+[20] R. Evans. The projective cast: architecture and its three geometries. MIT press, Cambridge, Mass, 1995.
+
+[21] G. Fallacara. Toward a stereotomic design: Experimental constructions and didactic experiences. In Proceedings of the Third International Congress on Construction History, pp. 553-560. Construction History Society of America, Chicago, IL, 2009.
+
+[22] S. Fernando, R. Saunders, and S. Weir. Surveying stereotomy: Investigations in arches, vaults and digital stone masonry. In Future of Architectural Research, p. 82. Architectural Research Centers Consortium, San Antonio, TX, 2015.
+
+[23] F. Fleury. Evaluation of the perpendicular flat vault inventor's intuitions through. large scale instrumented testing. In Proceedings of the Third International Congress on Construction History, pp. 561-568. Construction History Society of America, Chicago, IL, 2009.
+
+[24] P. Gailiunas. Mad weave. Journal of Mathematics and the Arts, 11(1):40-58, 2017.
+
+[25] M. Gardner. Sixth book of mathematical games from Scientific American. WH Freeman, San Francisco, CA, 1971.
+
+[26] M. Goldberg. The space-filling pentahedra. Journal of Combinatorial Theory, Series A, 13(3):437-443, 1972.
+
+[27] M. Goldberg. Convex polyhedral space-fillers of more than twelve faces. Geometriae Dedicata, 8(4):491-500, 1979.
+
+[28] M. Goldberg. On the space-filling enneahedra. Geometriae Dedicata, 12(3):297-306, 1982.
+
+[29] S. Grishanov, V. Meshkov, and A. Omelchenko. A topological study of textile structures. part i: An introduction to topological methods. Textile research journal, 79(8):702-713, 2009.
+
+[30] S. Grishanov, V. Meshkov, and A. Omelchenko. A topological study of textile structures. part ii: Topological invariants in application to textile structures. Textile research journal, 79(9):822-836, 2009.
+
+[31] R. E. Griswold. Color complementation, part 1: Color-alternate weaves. Web Technical Report, Computer Science Department, University of Arizona, 2004.
+
+[32] R. E. Griswold. From drawdown to draft - a programmer's view. Web Technical Report, Computer Science Department, University of Arizona, 2004.
+
+[33] B. Grunbaum and G. Shephard. A catalogue of isonemal fabrics. Annals of the New York Academy of Sciences, 440:279-298, 1985.
+
+[34] B. Grunbaum and G. Shephard. An extension to the catalogue of isonemal fabrics. Discrete Mathematics, 60:155-192, 1986.
+
+[35] B. Grunbaum and G. Shephard. Isonemal fabrics. American Mathematical Monthly, 95:5-30, 1988.
+
+[36] B. Grünbaum and G. C. Shephard. Tilings by regular polygons. Mathematics Magazine, 50(5):227-247, 1977.
+
+[37] B. Grünbaum and G. C. Shephard. Satins and twills: An introduction to the geometry of fabrics. Mathematics Magazine, 53(3):139-161, 1980.
+
+[38] B. Grünbaum and G. C. Shephard. Tilings with congruent tiles. Bulletin of the American Mathematical Society, 3(3):951-973, 1980.
+
+[39] B. Grünbaum and G. C. Shephard. Tilings and patterns. W. H. Freeman and Company, Austin, TX, 1987.
+
+[40] C. S. Ha, M. E. Plesha, and R. S. Lakes. Chiral three-dimensional isotropic lattices with negative poisson's ratio. physica status solidi (b), 253(7):1243-1251, 2016.
+
+[41] M. Howison and C. H. Séquin. CAD tools for creating space-filling 3D Escher tiles. Computer-Aided Design and Applications, 6(6):737-748, 2009.
+
+[42] M. Jamshidi Avanaki and A. A. Jeddi. Theoretical analysis of geometrical and mechanical parameters in plain woven structures. The Journal of The Textile Institute, 108(3):418-427, 2017.
+
+[43] N. W. Johnson. Convex polyhedra with regular faces. Canadian Journal of Mathematics, 18:169-200, 1966.
+
+[44] M. Karayaka and P. Kurath. Deformation and failure behavior of woven composite laminates. 1994.
+
+[45] M. Kilian, D. Pellis, J. Wallner, and H. Pottmann. Material-minimizing forms and structures. ACM Trans. Graphics, 36(6):article 173, 2017. Proc. SIGGRAPH Asia. doi: 10.1145/3130800.3130827
+
+[46] J. I. Lipton, R. MacCurdy, Z. Manchester, L. Chin, D. Cellucci, and D. Rus. Handedness in shearing auxetics creates rigid and compliant structures. Science, 360(6389):632-635, 2018.
+
+[47] Y. Liu, M. O'Keeffe, M. M. Treacy, and O. M. Yaghi. The geometry of periodic knots, polycatenanes and weaving from a chemical perspective: a library for reticular chemistry. Chemical Society Reviews, 47(12):4642-4664, 2018.
+
+[48] A. L. Loeb. Space-filling polyhedra. In Space Structures, pp. 127-132. Springer, NewYork, NY, 1991.
+
+[49] S.-P. Ng, K. J. Lau, and P. Tse. 3d finite element analysis of tensile notched strength of $2/2$ twill weave fabric composites with drilled circular hole. Composites Part B: Engineering, 31(2):113-132, 2000.
+
+[50] R. Penne. Configurations of few lines in 3-space. isotopy, chirality and planar layouts. Geometriae Dedicata, 45(1):49-82, 1993.
+
+[51] E. R. Ranucci. Master of tessellations: Mc escher, 1898-1972. The Mathematics Teacher, 67(4):299-306, 1974.
+
+[52] E. Renuart and C. Viney. Biological fibrous materials: Self-assembled structures and optimised properties. In Structural Biological Materials (Edited by M. Elices), pp. 221-267. Pergamon/Elsevier Science, Oxford, UK, 2000.
+
+[53] J. Richter-Gebert. Combinatorial obstructions to the lifting of weaving diagrams. Discrete & Computational Geometry, 10(3):287-312, 1993.
+
+[54] R. Roelofs. Three-dimensional and dynamic constructions based on leonardo grids. International Journal of Space Structures, 22(3):191- 200, 2007.
+
+[55] R. L. Roth. The symmetry groups of periodic isonemal fabrics. Ge-ometriae Dedicata, Springer Netherlands, 48(2):191-210, 1993.
+
+[56] M. W. Schmitt. On Space Groups and Dirichlet-Voronoi Stereohedra. PhD thesis, Berlin: Freien Universität Berlin, 2016.
+
+[57] M. Senechal. Which tetrahedra fill space? Mathematics Magazine, 54(5):227-243, 1981.
+
+[58] H. V. Shin, C. F. Porst, E. Vouga, J. Ochsendorf, and F. Durand. Reconciling elastic and equilibrium methods for static analysis. ACM Trans. Graph., 35(2):13:1-13:16, Feb. 2016. doi: 10.1145/2835173
+
+[59] P. Song, C.-W. Fu, P. Goswami, J. Zheng, N. J. Mitra, and D. Cohen-Or. Reciprocal frame structures made easy. ACM Transactions on Graphics (TOG), 32(4):94, 2013.
+
+[60] S. G. Subramanian, M. Eng, V. R. Krishnamurthy, and E. Akleman. Delaunay lofts: A biologically inspired approach for modeling space filling modular structures. Computers & Graphics, 82(8):73-83, 2019.
+
+[61] P. Tan, L. Tong, and G. Steven. Modelling for predicting the mechanical properties of textile composites-a review. Composites Part A: Applied Science and Manufacturing, 28(11):903-922, 1997.
+
+[62] O. Tessmann and M. Becker. Extremely heavy and incredibly light: performative assemblies in dynamic environments. In 18th International Conference on Computer-Aided Architectural Design Research in Asia: Open Systems, CAADRIA 2013; Singapore; Singapore; 15 May 2013 through 18 May 2013, pp. 469-478. The Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong, 2013.
+
+[63] L. Thomas. 7 - woven structures and their impact on the function and performance of smart clothing. In J. McCann and D. Bryson, eds., Smart Clothes and Wearable Technology, Woodhead Publishing Series in Textiles, pp. 131-155. Woodhead Publishing, 2009. doi: 10. 1533/9781845695668.2.131
+
+[64] R. Thomas. Isonemal prefabrics with only parallel axes of symmetry. Discrete Mathematics, 309(9):2696-2711, 2009.
+
+[65] R. Thomas. Isonemal prefabrics with perpendicular axes of symmetry. Discrete Mathematics, 309(9):2696-2711, 2009.
+
+[66] R. Thomas. Isonemal prefabrics with no axis of symmetry. Discrete Mathematics, 310:1307-1324, 2010.
+
+[67] J. J. Van Wijk. Symmetric tiling of closed surfaces: Visualization of regular maps. ACM Transactions on Graphics (TOG), 28(3):49, 2009.
+
+[68] M. Weizmann, O. Amir, and Y. J. Grobman. Topological interlocking in buildings: A case for the design and construction of floors. Automation in Construction, 72:18-25, 2016.
+
+[69] S. Wheeland, F. Bayatpur, A. V. Amirkhizi, and S. Nemat-Nasser. Chiral braided and woven composites: design, fabrication, and electromagnetic characterization. In Z. Ounaies and S. S. Seelecke, eds., Behavior and Mechanics of Multifunctional Materials and Composites 2011, vol. 7978, pp. 278 - 282. International Society for Optics and Photonics, SPIE, 2011. doi: 10.1117/12.880542
+
+[70] E. Whiting, J. Ochsendorf, and F. Durand. Procedural modeling of structurally-sound masonry buildings. ACM Trans. Graph., 28(5):112:1-112:9, Dec. 2009. doi: 10.1145/1618452.1618458
+
+[71] E. Whiting, H. Shin, R. Wang, J. Ochsendorf, and F. Durand. Structural optimization of 3D masonry buildings. ACM Trans. Graph., 31(6):159:1-159:11, Nov. 2012. doi: 10.1145/2366145.2366178
+
+[72] R. Williams. The geometrical foundation of natural structure: A source book of design. Dover, New York, NY, 1979.
+
+[73] R. E. Williams. Space-filling polyhedron: its relation to aggregates of soap bubbles, plant cells, and metal crystallites. Science, 161(3838):276-277, 1968.
+
+[74] W. Wu, W. Hu, G. Qian, H. Liao, X. Xu, and F. Berto. Mechanical design and multifunctional applications of chiral mechanical metama-terials: A review. Materials & Design, 180:107950, 2019.
+
+[75] B. Zelinka. Isonemality and mononemality of woven fabrics. Applications of Mathematics, 3:194-198, 1983.
+
+[76] B. Zelinka. Symmetries of woven fabrics. Applications of Mathematics, 29(1):14-22, 1984.
+
+[77] G. M. Ziegler. Oriented matroids. The Electronic Journal of Combinatorics, pp. DS4-Sep, 2012.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a106b3debded4539419cc245088ca59140a5baca
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/koAjrniDnBR/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,433 @@
+§ BI-AXIAL WOVEN TILES: INTERLOCKING SPACE-FILLING SHAPES BASED ON SYMMETRIES OF BI-AXIAL WEAVING PATTERNS
+
+Vinayak R. Krishnamurthy*
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Katherine Boyd ${}^{§}$
+
+Department of Visualization
+
+Texas A&M University
+
+Ergun Akleman ${}^{ \dagger }$
+
+Departments of Visualization &
+
+Computer Science and Eng.,
+
+Texas A&M University
+
+Chia-An Fu ${}^{\P}$
+
+Department of Visualization
+
+Texas A&M University
+
+Sai Ganesh Subramanian ${}^{ \ddagger }$
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Matthew Ebert ${}^{\parallel}$
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Courtney Starrett**
+
+Department of Visualization
+
+Texas A&M University
+
+Neeraj Yadav ${}^{\dagger \dagger }$
+
+Department of Architecture
+
+Texas A&M University
+
+
+Figure 1: The computational pipeline for the geometric design and fabrication of woven tiles. This particular example illustrates the tiles generated using the plain weave symmetries filling 2.5D space. Figure 1c shows the curves in the fundamental domain; the yellow curve shows the basic element of the repeating curve segment, and all other curve segments in the fundamental domain can be obtained by rotating and translating this yellow curve. Figure 1d shows the overall assembly with the tile corresponding to the yellow curve removed. The shapes of the top surfaces were also obtained with Voronoi decomposition.
+
+§ ABSTRACT
+
+In this paper, we introduce a geometric design and fabrication framework for a family of interlocking space-filling shapes that we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. In order to demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles by using the most common 2-fold fabrics, called 2-way genus-1 fabrics, namely, plain, twill, and satin weaves.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+§ 1 INTRODUCTION
+
+§ 1.1 MOTIVATION
+
+Space-filling shapes have applications in a wide range of areas, from chemistry and biology to engineering and architecture [48]. Using space-filling shapes, we can compose and decompose complicated shell and volume structures for design and architectural applications. Space-filling shapes that are also tileable can further provide an economical way of constructing structures because they can be mass-produced. Despite their practical importance, the variety of 2.5D and 3D space-filling tiles at our disposal is quite limited. The most commonly known and used space-filling shapes are regular prisms such as rectangular bricks, since they are relatively easy to manufacture and widely available. However, reliance on regular prisms significantly constrains our design space for obtaining reliable and robust structures [16, 45, 58, 70, 71], particularly as current additive manufacturing techniques gradually become more affordable across the engineering and construction domains. In this paper, we introduce a geometric design and fabrication framework for a new class of interlocking space-filling shapes, which we call bi-axial woven tiles.
+
Systematic design of modular, tileable, and simultaneously interlocking building blocks is a challenging task, and we find that there is currently no principled approach for generating such building blocks. To this end, we present a general conceptual framework that takes as input a set of curves determined through fabric weave patterns and uses these curves as Voronoi sites to partition space. This allows one to decompose space into any arbitrary partition induced by the input curves, wherein each partition can be considered a tile.
+
*e-mail: vinayak@tamu.edu

†e-mail: ergun.akleman@gmail.com

‡e-mail: sai3097ganesh@tamu.edu

§e-mail: katherineboyd@tamu.edu

¶e-mail: sqree@tamu.edu

‖e-mail: matt_ebert@tamu.edu

**e-mail: cstarrett@tamu.edu

††e-mail: nrj31y@tamu.edu
+
+§ 1.2 INSPIRATION & RATIONALE
+
While our framework is general, we specifically chose bi-axial weave patterns to demonstrate our approach. The inspiration for using bi-axial weave patterns came from the fact that woven fabrics can form strong structures from relatively weak threads through interlacing (or interlocking) [49]. Using this as our rationale, we set out to investigate the possibility of constructing tiled assemblies of interlocking space-filling shapes that leverage the thread-interlacing process of woven fabrics, specifically 2-fold structures. The advantage of choosing weave patterns is that they are closed under symmetry operations, thereby allowing us to systematically and intuitively design and construct an entire family of interlocking space-filling shapes: bi-axial woven tiles. In addition to providing simple and intuitive control, woven tiles also inherit the structural characteristics of woven fabrics, which have long found applications ranging from textiles to composite materials.
+
In addition to their potential advantages rooted in mechanical behavior, 2-fold fabric structures are particularly useful for our purpose because of their geometric simplicity and intuitiveness: they provide a simple approach for designing interlocking space-filling tiles. A particular subset of 2-fold fabrics, known as 2-way genus-1 fabrics, offers especially simple and intuitive control. They can be constructed using a regular square grid as the guide shape, and they include the most popular weaving structures, namely plain, twill, and satin.
+
+§ 1.3 SUMMARY OF APPROACH
+
Using the properties of 2-fold 2-way genus-1 fabrics, our approach is to obtain the desired curve segments that are closed under symmetry operations. One simplification afforded by these fabrics is that each curve segment can be chosen to be planar (see Figure 1a). In addition, we can define all well-known fabric patterns such as plain, twill, and satin using only three parameters. The fundamental domain of these symmetry operations is a prism with a square base because of the 2-way and genus-1 properties (see Figure 1b). In other words, we only have to compute the Voronoi decomposition of the fundamental domain. Then, the Voronoi region of the curve segment in the center, shown in yellow in Figure 1a, is used as the space-filling tile.
+
We present a simplified method to compute the Voronoi decomposition of the fundamental domain with these curve segments. We first sample each curve segment to obtain a piecewise-linear approximation. We then compute the 3D Voronoi decomposition for each sample point. This process gives us a set of convex Voronoi polyhedra for the same curve segment, and the union of these convex polyhedra gives us the desired space-filling tile. We identify simple and robust algorithms to take the union of all convex Voronoi polyhedra that come from the same piecewise-linear curve segment. We also developed a tile beautification process inspired by the fact that the points of equal distance to a planar surface and a line parallel to the surface lie on a parabolic cylinder: we add two planar surfaces that sandwich the control curves from top and bottom as additional Voronoi sites. The resulting Voronoi decomposition automatically provides clean boundaries that consist of parabolic regions. The 2D equivalent of the idea is shown in Figure 2.
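
The parabolic boundaries from the beautification step can be verified with a short calculation (the coordinates below are our own choice and do not appear in the paper). Place the sandwiching plane at $z = 0$ and a parallel control line at height $h$, i.e., the line $\{y = 0, z = h\}$. For a point with $z \geq 0$, equating its distance to the plane with its distance to the line gives

$$z = \sqrt{y^{2} + \left( {z - h}\right)^{2}} \quad \Longrightarrow \quad z = \frac{y^{2}}{2h} + \frac{h}{2},$$

which is independent of $x$: the bisector is a parabolic cylinder, matching the curved boundaries visible in the 2D analogue of Figure 2.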
+
+To demonstrate our approach, we have designed many interlocking and space-filling tiles. We call them woven tiles since 2-fold
+
(a) Voronoi decomposition of two enclosing lines with $S$-shape pieces that form a square wave. (b) Voronoi decomposition of two enclosing lines with $S$-shape pieces that form a triangular wave.
+
Figure 2: A 2D example of beautification of boundaries. Note that the inclusion of two enclosing lines allows us to create curved outer boundaries in the Voronoi decomposition. The effect is more visible with the interaction of the sharp corners of the triangular wave. In 3D, since we use a surface and a curve, we obtain curved boundaries as ornament.
+
fabrics refer to woven structures. This terminology is also helpful since we can use weaving vocabulary to describe the variety of tiles produced by this approach as plain, twill, or satin woven tiles. Because of their symmetry properties, these tiles can be assembled in more than a single configuration. Some assembly structures can even create loops, as shown in Figure 1e. For these cases, we have shown that it is possible to lock the pieces using one flexible piece.
+
+§ 1.4 OUR CONTRIBUTIONS
+
+Our overarching contribution in this work is a general conceptual framework for generating space-filling and interlocking tiles based on the fundamental principles of fabric weave patterns in conjunction with space decomposition using 3D Voronoi partition. Based on this framework, we make four specific contributions as listed below:
+
1. We use our general framework to develop a simple and intuitive methodology for the design and construction of Bi-Axial Woven Tiles: space-filling tiles derived from the symmetries induced by woven fabrics. The basic idea is to use curves representing 2-way 2-fold weaving patterns (such as plain, twill, and satin) as Voronoi sites for decomposing 3-space.
+
2. We introduce a simple and effective algorithm for approximating the Voronoi decomposition of space with labeled curve segments as the Voronoi sites. The algorithm uses a simple process that first discretizes a curve segment into a sequence of points and then constructs the Voronoi cell of the curve simply by computing the union of the constitutive Voronoi cells of each point on the curve. The first advantage of this method is its simplicity: it allows us to directly apply standard point-based Voronoi cell computation to curves. Secondly, it allows for an elegant computation of the Voronoi cell surface as a triangle mesh using a simple topological operation: removing the internal polygonal faces of adjacent constitutive cells of points.
+
3. We demonstrate several cases of Bi-Axial Woven Tiles and present techniques for the fabrication and assembly of these tiles. We show the fabrication of these tiles with a variety of materials (plastic, wax, and metal) using different 3D printing, molding, and casting techniques. Furthermore, we demonstrate that these tiles can be assembled in more than a single configuration. From the same group, it is even possible to obtain two assemblies with different chirality (i.e., mirrored versions of each other).
+
4. Finally, we present a comparative structural evaluation of plain, twill, and satin tile assemblies. The finite element analyses (FEA) of these assemblies under planar and normal loading conditions reveal that weaving allows the distribution of planar and normal loads across tiles through the contact surfaces generated with our methodology. We describe the qualitative relationship between the symmetries induced by the weave patterns and the stress distribution in the tiled assemblies.
+
+§ 2 RELATED WORK
+
+§ 2.1 SPACE FILLING POLYHEDRA
+
Space-filling polyhedra, which can be used to tessellate (or decompose) a space [37], are defined as cellular structures whose replicas together can fill all of space watertight, i.e., without having any voids between them [48]. While 2D tessellations and 2D space-filling tiles are well understood [37], problems related to 2.5D and 3D tessellations and tiles (i.e., shell and volume structures, respectively) are still perceived as difficult. The perception of difficulty of 3D tessellations probably comes from the belief, held since 500 BC, that the tetrahedron can fill space. In fact, many failed efforts were made to prove this widespread belief [57].
+
It is now known that the cube is the only space-filling Platonic solid [25]. This partly explains the widespread use of regular prisms as space-filling tiles. We are indebted to Goldberg, whose exhaustive cataloguing from 1972 to 1982 helped us access all known space-filling polyhedra [26-28]. We now know that there are only eight space-filling convex polyhedra, and only five of them have regular faces, namely the triangular prism, hexagonal prism, cube, truncated octahedron [72, 73], and the Johnson solid gyrobifastigium [7, 43]. Five of these eight space-filling shapes are "primary" parallelohedra [14], namely the cube, hexagonal prism, rhombic dodecahedron, elongated dodecahedron, and truncated octahedron. Space-filling polyhedra are still an active research area in mathematics [56]. However, as far as we know, there exist no space-filling shapes that can also interlock.
+
+§ 2.2 INTERLOCKING STRUCTURES
+
History is rich with examples of puzzle-like interlocking structures, which have been analyzed under names such as stereotomy [20-22], nexorades [9, 10, 17], and topological interlocking [11, 18, 19, 68]. One of the most remarkable examples of interlocking structures is the Abeille flat vault, designed by the French architect and engineer Joseph Abeille [23, 62]. Nexorades, also called Leonardo grids, are structures constructed using notched rods that fit into the notches of adjacent rods [54, 59].
+
+§ 2.3 GEOMETRY AND TOPOLOGY OF FABRIC WEAVES
+
We observe that many interlocked structures can be viewed as knots and links that are decomposed into curve segments. This view simplifies the design process since we can build our framework by borrowing concepts directly from the mathematics literature. It can also provide significant intuition for design, since bi-axial textile weaving structures, which are also called 2-fold 2-way fabrics, also form knots and links when viewed as structures embedded on a toroidal surface [29, 30]. The term 2-way, usually called biaxial, means that the strands run in two directions at right angles to each other: warp (vertical) and weft (horizontal). The term 2-fold means there are never more than two strands crossing each other [6].
+
The popularity of 2-fold 2-way fabrics comes from the fact that textile weaving structures are usually manufactured using loom devices by interlacing two sets of strands, called warp and weft, at right angles to each other (see Figure 3a). Since the warp and weft strands are at right angles to each other, they form rows and columns. We colored the warp threads blue and the weft threads yellow to differentiate the two, as shown in Figure 3c. The names warp and weft are not arbitrary in practice: in a loom device, the weft (also called filling) strands are the ones that go under and over the warp strands to create a fabric. The basic purpose of any loom device is to hold the warp strands under tension so that the weft strands can be interwoven. Using this basic framework, it is possible to manufacture a wide variety of weaving structures by raising and lowering different warp strands (in other words, by playing with the ups and downs in each row).
+
There was no formal mathematical foundation behind bi-axial weaving until Grunbaum and Shephard, who are known for their contributions to 2D tiling [36, 38], investigated the mathematical properties of bi-axial weaving in the 1980s in a series of papers [33-35, 37]. By viewing weaves as matrices, as shown in Figure 3c, they simplified the problem of classifying and analyzing woven structures.
+
+Figure 3: The fundamental domain of 2-way 2-fold fabrics is a rectangle and they can be represented as a simple matrix. The warp threads are colored blue and weft threads are colored yellow to differentiate the two threads in the final matrix.
+
Figure 4: Three parameters, $a$, $b$, and $c$, are sufficient to define all of the important 2-fold 2-way genus-1 fabrics.
+
Grunbaum and Shephard studied a subset of 2-fold 2-way patterns that have a transitive symmetry group on the strands of the fabric, which they called isonemal fabrics [34]. They identified all isonemal patterns that hang together for periods up to 17 [33, 34]. Roth [55], Thomas [64-66], Zelinka [75, 76], and Griswold [31, 32] theoretically and practically investigated symmetry and other properties of isonemal fabrics. The identification of the hanging-together property is simpler for a certain type of isonemal fabrics called genus-1 [34]. Genus-1 means that each row with length $n$ is obtained from the row above it by a shift of $c$ units to the right, for some fixed value of the parameter $c$. Genus-1 fabrics include two special and well-known isonemal fabrics, twills and satins. A twill pattern is one in which each row of a design is obtained from the row above it by a shift of one square in a fixed direction (either left or right). A satin pattern is one in which each row or column has only one blue square in the fundamental domain given by an $n \times n$ matrix. To further simplify the design, we assume that the row with length $n$ consists of $a$ consecutive weft (yellow) threads and $b = n - a$ consecutive warp (blue) threads as shown in Figure 4.
+
These $[a,b,c]$-fabrics are guaranteed to hang together if $n = a + b$ and $c$ are relatively prime [35]. The most widely used fabric pattern, plain weaving, is given as the $[1,1,1]$-fabric in $[a,b,c]$ notation. Twills are given as either $[a,b,1]$ or $[a,b,-1]$. Satins are described by $b = 1$ and $c^{2} = 1 \bmod (a+b)$ [12]. The genus-1 isonemal fabrics described by the $[a,b,c]$ notation include not only well-known patterns such as plain, twill, and satin but also a wide variety of additional bi-axial weaving patterns, as shown in Figure 5. On the other hand, as the $[3,3,2]$ pattern shown in the figure demonstrates, the notation $[a,b,c]$ can also represent a non-fabric that falls apart. Fortunately, as discussed earlier, it is easier to avoid such fall-apart non-fabrics than in the general isonemal weaving case: we can simply check whether $n = a + b$ and $c$ are relatively prime, or whether any row or column has no alternating crossing [35]. In conclusion, the $[a,b,c]$ notation provides a simple process for designing control curves for bi-axial woven tiles. Figure 5 also demonstrates that among the $[a,b,c]$ patterns, the pattern is rotation-invariant only in the plain, twill, and satin cases. This is because in the plain, twill, and satin cases, the warp and weft patterns are guaranteed to be mirrored versions of each other [37]. Since this is required to obtain a single tile, we focus on only plain, twill, and satin woven tiles in this paper.
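
As a concrete illustration of the $[a,b,c]$ notation, the fundamental-domain matrix and the hang-together test reduce to a few lines. The following is a sketch in Python; the function names are ours and do not come from the paper's implementation:

```python
from math import gcd

def abc_matrix(a, b, c):
    """Fundamental-domain matrix of an [a, b, c] genus-1 fabric.

    Entry 1 marks a weft (yellow) crossing and 0 a warp (blue) one.
    Row 0 holds a consecutive 1s followed by b consecutive 0s, and
    row i is row 0 cyclically shifted by i*c, per the genus-1 rule.
    """
    n = a + b
    row0 = [1] * a + [0] * b
    return [[row0[(j - i * c) % n] for j in range(n)] for i in range(n)]

def hangs_together(a, b, c):
    """An [a, b, c]-fabric hangs together iff n = a + b and c are coprime."""
    return gcd(a + b, c) == 1
```

For example, `hangs_together(3, 3, 2)` is false since 6 and 2 share a factor, while the satin $[7,1,3]$ passes both this test and the satin condition $c^{2} = 1 \bmod (a+b)$, since $3^{2} = 9 = 1 \bmod 8$.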
+
Figure 5: Examples of isonemal genus-1 patterns that can be represented by the three parameters shown in Figure 4. The unnamed pattern $[6,7,4]$ hangs together, but $[3,3,2]$ falls apart since $3 + 3$ and 2 are not relatively prime.
+
+§ 3 THEORETICAL FRAMEWORK
+
To our knowledge, none of the existing approaches for producing interlocking structures simultaneously provides space-filling pieces. For instance, Leonardo grids are simply finite cylinder shapes that leave most of the space empty. Our approach in this paper is to fill (or decompose) the space appropriately using Voronoi decomposition. The concept of filling space using Voronoi decomposition actually goes back to Delaunay's original intention for the use of Delaunay diagrams: he was the first to use symmetry operations on points and Voronoi diagrams to produce space-filling polyhedra, which he called Stereohedra [15, 56]. Recent work on Delaunay lofts extended points to specific types of curves to obtain more complicated space-filling structures [60]. In this paper, we first observe that any shape (a line, a curve, or even a surface) can be used as a Voronoi site to fill the space. If our initial configuration of shapes is "good", e.g., closed under symmetry operations, we are guaranteed to obtain an interesting decomposition of the space; this is the real premise of this paper.
+
The essential conceptual contribution of allowing any type of shape as a Voronoi site is the extension of potential space-filling shapes from simple polyhedra to almost any shape with curved edges and curved faces. In fact, allowing curved edges and faces significantly extends the design space of space-filling polyhedra. For instance, Escher's complicated 2D space-filling tiles were created using curved edges [41, 51]. Another recently developed family of space-filling shapes, called Delaunay lofts [60], extended the design space by allowing curved edges and curved faces. Allowing any type of shape as a Voronoi site not only enables a systematic search for desired shapes from a large number of potential candidates, but also provides a simple design methodology for constructing space-filling structures.
+
Based on this point of view, the key parameters for the classification of space-filling shapes are essentially the topological and geometric properties of the Voronoi sites and their overall arrangements, which are usually obtained by symmetry transformations (rotation, translation, and mirror operations). The types of shapes and transformations uniquely determine the properties of the space decomposition. With this viewpoint in mind, let us again look at Stereohedra and Delaunay lofts.
+
For Stereohedra, the Voronoi sites are points, the 3D $L_2$ norm is used for distance computation, the underlying space is 3D, and any symmetry operation in 3D is allowed [15, 56]. Based on these properties, we conclude that Stereohedra can theoretically represent every convex space-filling polyhedron in 3D. Since points are used as Voronoi sites and the $L_2$ norm is used, the faces must be planar and the edges must be straight in the resulting Voronoi decomposition of 3D space.
+
For Delaunay lofts, on the other hand, the Voronoi sites are curves given in the form $x = f(z)$ and $y = g(z)$ for every planar layer $z = c$, where $c$ is a real constant; a 2D $L_2$ norm is used to compute distance; the underlying space is 2.5D or 3D; and only the 17 wallpaper symmetries are allowed in every layer $z = c$ [60]. Based on these properties, we conclude that Delaunay lofts (1) consist of stacked layers of planar convex polygons with straight edges, and (2) in each layer there can be only one convex polygon. In Delaunay lofts, the number of sides of the stacked convex polygons can change from one layer to another. In conclusion, the faces of Delaunay lofts are ruled surfaces since they consist of swept lines, while the edges of the faces can be curved.
+
For the bi-axial woven tiles in this paper, the Voronoi sites are curve segments obtained by decomposing planar periodic curves that are given, essentially${}^{1}$, in the form $z = F\left( {x + n}\right) = F\left( x\right)$ and $z = G\left( {y + n}\right) = G\left( y\right)$, where $n = a + b$ is the period of the fabric and $F$ can be any periodic function as long as it consists of $a$-length up regions and $b$-length down regions, as shown in Figure 6. The function $G$ is just the mirror of $F$, with $a$-length down regions and $b$-length up regions. The curve segments are obtained from these two periodic functions by restricting their domain to a region such as $\left( {{x}_{0},{x}_{0} + {kn}}\right\rbrack$. These curve segments are closed under the symmetries of bi-axial weaving patterns, which are given by $90^{\circ}$ rotation and translation operations. The 3D $L_2$ norm is used for distance computation. The underlying space is normally 2.5D, i.e., a planar shell structure [1].
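
A minimal sketch of such a periodic control curve, assuming a square-wave profile for $F$ (the sampling density and parameter names are our own choices, not the paper's):

```python
def square_wave_samples(a, b, k=3, per_unit=8, z_lo=0.0, z_hi=1.0):
    """Sample the periodic curve z = F(x) over k periods of length n = a + b.

    F is taken here as a square wave: 'up' (z_hi) for a units, then
    'down' (z_lo) for b units, repeating with period n.  The returned
    (x, z) pairs are a piecewise-linear stand-in for the control curves;
    the mirrored function G swaps the roles of the up and down regions.
    """
    n = a + b
    pts = []
    for s in range(k * n * per_unit + 1):
        x = s / per_unit
        z = z_hi if (x % n) < a else z_lo   # a-length up, b-length down
        pts.append((x, z))
    return pts
```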
+
+Figure 6: Examples of periodic curves that can be used as Voronoi sites, i.e. control curves.
+
Based on these properties, it is clear that the resulting tiles will usually be genus-0 surfaces with curved faces and edges. Because of the bi-axial property, the fundamental domain for these tiles is always a rectangular prism, an extruded version of the original rectangular fundamental domain of the corresponding 2-way 2-fold fabric [39]. Therefore, tiles that perfectly decompose this rectangular prism domain will also fill all of 3D space.
+
${}^{1}$ We actually use parametric curves; this form is only for providing a quick and simple explanation, without loss of generality.
+
Figure 7: The basic degree-1 NURBS curves that are used to construct woven tiles. Each curve is created by changing the positions of 11 control points. The figures at the top are the actual curves; the figures at the bottom are points created by sampling the initial curves. These points, which approximate the curves, are used as Voronoi sites.
+
+§ 4 DESIGN AND FABRICATION PROCESS
+
Our bi-axial woven tile design process consists of three steps: (1) designing curve segments; (2) designing the 3D configuration of the curve segments to be used as Voronoi sites; and (3) decomposing the space using Voronoi tessellation. For all steps, we have used the simplest approaches, which simplify the design process and provide robust computation.
+
+§ 4.1 DESIGNING CURVE SEGMENTS
+
We designed our control curves using Non-Uniform Rational B-Splines (NURBS). We initially allowed higher-degree curves to obtain ${C}^{1}$ and ${C}^{2}$ continuity, but quickly realized that piecewise-linear curves are sufficient to obtain the desired results for woven tiles. Therefore, we designed all curves as degree-1 NURBS. For all cases, we use the same 11 control points; we simply move the positions of the control points to obtain the curve segments for the desired weaving pattern, as shown in Figure 7. To construct these curves, in addition to the three weaving parameters $a$, $b$, and $c$, we provide one additional control: the angle of connection of two consecutive tiles. By changing this angle we can obtain square waves, which appear to be binary functions such as the ones shown in Figures 7a and 7b, and partly triangular waves, which appear to be regular piecewise-linear functions such as the ones shown in Figures 7d, 7c, and 7e. Two consecutive tiles produced by square waves can sit on top of each other as shown in Figures 11a and 11b. With partly triangular tiles, we can adjust this angle as shown in Figures 11d, 11c, and 11e.
+
+§ 4.2 DESIGNING VORONOI SITES
+
Based on the three weaving parameters $a$, $b$, and $c$, we have developed an interface to create 3D curve segments that are closed under the symmetry operations of 2-fold 2-way genus-1 fabrics. The algorithm is given as follows:
+
1. Create the initial curve segment as $x = {F}_{x}\left( t\right)$, $y = 0$, and $z = {F}_{z}\left( t\right)$ based on the $a$ and $b$ values and the curve type. Without loss of generality, assume $t \in \left\lbrack {0,1}\right\rbrack$, $z \in \left\lbrack {0,1}\right\rbrack$, and $x \in \left\lbrack {-n/2,n/2}\right\rbrack$. Note that $n = a + b = {F}_{x}\left( 1\right) - {F}_{x}\left( 0\right)$.
+
2. Create two replicas of the curve and translate them along the $x$ axis by adding and subtracting its period $n = a + b$, respectively. This creates three copies of the initial curve that follow each other.
+
3. Create two replicas of these three curves. Translate one of them using the $(c, 1, 0)$ vector and the other using $(-c, -1, 0)$. This translation operation must be done modulo ${3n}$.
+
+ * Remark 1: This operation creates a ${3n} \times 2 \times 1$ rectangular prism domain, which is sufficient to compute tiles. Note that we assume the height of the curves is 1 unit.
+
 * Remark 2: This rectangular domain is not a fundamental domain of the curve symmetries. It is only applicable to the genus-1 case.
+
+4. Create perpendicular curve segments.
+
 * Remark 3: Perpendicular curve segments are guaranteed to be the same for plain, twill, and satin. Therefore, we focus only on these cases to obtain a single tile.
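
Steps 1 through 3 can be sketched as follows (a simplified Python illustration of the replication logic; the paper's implementation is an interface built in Houdini, and the initial curve is assumed to be already sampled into points):

```python
def replicate_sites(curve, a, b, c):
    """Replicate a sampled warp curve per steps 1-3 of Sec. 4.2.

    `curve` is a list of (x, y, z) samples of the initial segment.
    Step 2 makes three x-translated copies (shifts of -n, 0, +n);
    step 3 adds two replicas shifted by (c, 1, 0) and (-c, -1, 0),
    with the x coordinate wrapped modulo 3n to stay inside the
    3n x 2 x 1 working domain (a simplification of the paper's setup).
    """
    n = a + b
    # Step 2: three copies of the initial curve following each other.
    row = [(x + dx, y, z) for dx in (-n, 0, n) for (x, y, z) in curve]
    sites = list(row)
    # Step 3: two replicas shifted diagonally, x taken modulo 3n.
    for sx, sy in ((c, 1.0), (-c, -1.0)):
        sites += [((x + sx) % (3 * n), y + sy, z) for (x, y, z) in row]
    return sites
```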
+
In practice, we create these curves in a larger rectangular domain, as shown in Figures 8, 9, and 10, to see the structure of the curves better. These rectangular domains must be larger than the ${3n} \times 2 \times 1$ domain described earlier to guarantee that we obtain at least one tile that can fill the space. In other words, at least one curve must be covered by its neighboring curves to guarantee that the Voronoi region corresponding to that particular curve segment fills the space. In Figures 8, 9, and 10, which show two plain, two twill, and one satin case, the center curve is colored yellow. We have implemented this interface using SideFX's Houdini, a robust 3D software package that provides a node-based system for fast and easy interface development.
+
+§ 4.3 DECOMPOSITION OF THE SPACE
+
Accurate decomposition of a given space using curves as Voronoi sites can be quite complicated. We have therefore developed a simple method that provides a reasonably good approximation of the decomposition. Our method consists of the following stages:
+
+1. Sample the original curve segments by obtaining the same number of points for each curve segment.
+
2. For the beautification step, create and sample two sandwiching (or bounding) planes; if beautification is not desired, skip this step. All the examples in this section were created using the beautification step.
+
3. Label the points as follows:

 * The points that originate from the central yellow curve are labeled with one label, say 0.

 * All other points are labeled with another label, say 1.

 * Remark 1: If the beautification step is used, the points coming from the sandwiching planes are also labeled 1.

4. Decompose the space using the 3D Voronoi diagram of these points, which gives us a set of labeled Voronoi regions: convex polyhedra that are labeled either 0 or 1.
+
5. Take the union of all Voronoi regions labeled 0 to obtain the desired space-filling tile. The union operation consists only of face removal operations, as follows:

 * Remove the shared faces of two consecutive convex polyhedra coming from two consecutive sample points on the curve.

 * Remark 2: These faces will always have the same vertex positions in opposing order.
+
+Figure 8: An example for designing control curves for $\left\lbrack {1,1,1}\right\rbrack$ plain woven tiles.
+
+Figure 9: An example for designing control curves for $\left\lbrack {2,2,1}\right\rbrack$ twill woven tiles.
+
+Figure 10: An example for designing control curves for $\left\lbrack {7,1,3}\right\rbrack$ satin woven tiles.
+
Figure 11: Examples of plain, twill, and satin woven tiles using the basic degree-1 NURBS curves shown in Figure 7.
+
Figure 12: Examples of assemblies showing only the tiles cut to stay within the rectangular domain.
+
 * Remark 3: If the underlying mesh data structure provides consistent information, this operation is guaranteed to produce a 2-manifold mesh. Even if the underlying data structure does not provide consistent information, the operation creates a disconnected set of polygons that can still be 3D printed using an STL file.

 * Remark 4: If the beautification step is skipped, i.e., the two sandwiching planes are not used, take an intersection with the bounding rectangular prism.
+
6. Optional step: Take the union of the Voronoi regions with label 1 to obtain a hollow space corresponding to a mold that can be used to mass-produce space-filling tiles. Note that we again need to take an intersection with the bounding rectangular prism if the beautification step is skipped.
+
We have implemented this stage in both Matlab and Houdini. For the 3D Voronoi decomposition of points, we used the built-in functions available in Matlab and Houdini.
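
As an illustration of steps 4 and 5, the labeled-Voronoi computation can be sketched with SciPy in place of the Matlab/Houdini built-ins (a simplification: rather than unioning the label-0 cells explicitly, it keeps exactly the ridges separating a label-0 cell from a label-1 cell, which are the faces that survive internal face removal):

```python
import numpy as np
from scipy.spatial import Voronoi

def tile_boundary_faces(points, labels):
    """Collect the boundary faces of the label-0 tile (steps 4-5, Sec. 4.3).

    Every Voronoi ridge separates the cells of two input points.  Ridges
    between two label-0 cells are internal faces of the union and are
    dropped; ridges between a label-0 and a label-1 cell form the outer
    surface of the tile.  Unbounded ridges (vertex index -1) are skipped.
    """
    vor = Voronoi(np.asarray(points, dtype=float))
    faces = []
    for (p, q), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if {labels[p], labels[q]} == {0, 1} and -1 not in verts:
            faces.append([vor.vertices[v] for v in verts])
    return faces
```

For instance, a single label-0 point boxed in by six label-1 neighbors yields the six quadrilateral faces of the central cell.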
+
Figure 13: Examples of assemblies with uncut tiles.
+
Figure 14: Examples of casting aluminum using the lost wax method.
+
+Figure 15: Assembly of plain woven tiles. One of the black pieces is a flexible silicone piece and is needed to successfully assemble plain tiles.
+
+§ 4.4 FABRICATION
+
All examples in this section were created using the beautification step. We printed the tiles shown in Figures 11d, 11c, and 11e using both standard resin and elastic resin. To investigate various material properties and potential manufacturing options, we made rubber molds of the tiles shown in Figures 11d and 11c for casting silicone rubber and wax versions. The wax tiles were used to cast aluminum tiles via the lost wax casting process, as shown in Figure 14a. Also shown in Figure 14b is an assembly of wax and aluminum tiles.
+
+§ 5 PHYSICAL ASSEMBLY
+
The geometry and topology of weaves have a rich research history, with several open questions relating to the ability of weaves to hold together. The works by Grunbaum et al. [37] assume that the threads being woven are infinitely long. This is obviously not the case with woven tiles, making it more difficult to completely and formally characterize the assembly of woven tiles. Therefore, our first evaluative step was to physically assemble the common weaving patterns (plain, twill, and satin), with the goal of exploring how the symmetries induced by these patterns affect the method of creating assemblies of the respective tiles. We are particularly interested in two aspects of woven tile assembly: (a) the locking ability, which maps to the holding-together property of the weaves, and (b) chiral configurations of woven tile assemblies.
+
+§ 5.1 LOCKING ABILITY OF WOVEN TILES
+
The topology of a weaving pattern directly affects the locking ability of its corresponding woven tile. For instance, plain weave tiling results in self-locking configurations (Figure 15) identical to a plain woven fabric. Therefore, if zero tolerance is assumed, plain woven tiles theoretically cannot be assembled from tiles constructed out of rigid materials such as PLA or aluminum. In the $2 \times 2$ plain woven tile assembly shown in Figure 15a, one of the two black tiles (also the dark green tile in Figure 1e) is a compliant tile made of silicone, constructed through casting. This assembly is structurally stable, and the geometry of the elements itself holds the structure together. Specifically, both the assembly and disassembly of the plain woven tiling are possible only through the application of force. In addition to introducing a flexible element, we also experimented with all four pieces cast in wax as well as aluminum. In this case, the shrinkage of the individual pieces allowed the tiling to be assembled (Figure 14b).
+
+Figure 17: Assembly of satin woven tiles.
+
In the case of twill weaves, we do not encounter the locking problem. As seen in Figure 16, the twill assembly can be created simply by an alternating placement of tiles along each axis (the white and blue tiles represent each axis). Therefore, neither assembly nor disassembly requires any application of force, and we did not need any flexible pieces for twill (Figure 16c). We make two observations here. First, in the plain woven tiling, exactly half of each tile is above one adjacent tile and the other half is underneath a second adjacent tile. Second, in the twill assembly, the unit tiles do not share this alternating above-underneath relationship with their neighbors. However, note that if two twill woven tiles are combined to create a double-length tile (Figure 16d), we obtain the above-underneath relationship, which will likely produce a perfectly interlocking tiling (thereby needing flexible tiles akin to the plain woven case).
+
+In the case of satin weaves (Figure 17), we come to similar conclusions: there is a minimal number of repetitions of each tile that ensures a tightly packed interlocked assembly. While we can say for certain that the number of repetitions must be higher than for twill, we do not currently claim what that number should be. We believe that much work remains to be done to develop a formal theory for the locking ability of woven tiles.
+
+
+Figure 18: An example of chirality in plain woven tile assemblies.
+
+
+Figure 19: Von-Mises stress distribution on single woven tiles
+
+§ 5.2 CHIRALITY
+
+A chiral object is one that is non-superposable on its mirror image. Chirality is fundamental to several natural phenomena and engineering applications. Our first example exploring chirality is the plain-woven assembly, wherein we observed that assembling the same plain-woven tiles in mirrored configurations leads to chiral assemblies (Figure 18). Penne's work on planar layouts [50] provides a formal explanation of this property by connecting projective geometry and topology.
+
+§ 6 STRUCTURAL EVALUATION
+
+Our methodology has multiple uses when tiles serve as structural building blocks. For example, we can assemble tiles of plain weave and use them as reinforced slab blocks, because the nature of the contacts between the tiles allows forces and stresses to be distributed easily from one tile to another. To explore this aspect, we simulated both the response of each individual shape separately and the response of the shape within an assembly. For our evaluation, we considered the three commonly known weaves (plain, twill, and satin) and analyzed their response to basic mechanical loading conditions. The main motivation is to observe key relationships between the symmetries induced by these well-known weave patterns and the corresponding mechanical behavior.
+
+§ 6.1 EVALUATION METHODOLOGY
+
+Alongside experimental methods [44], FEA is considered one of the most powerful tools for studying the mechanical properties of textile composites, owing to the complex nature of the interactions between the unit cells [61]. Based on this, we present an FEA analysis with two objectives. First, we are interested in understanding the effect of contact between the interlocked woven tiles. Second, we want to observe the stress distributions for an individual woven tile and the patterns that emerge as an effect of the weaving pattern.
+
+| | Plain | Twill | Satin |
+| --- | --- | --- | --- |
+| Minimum Stress (Pa) | 2099 | 4.76e-9 | 1212 |
+| Average Stress (MPa) | 0.93 | 0.59 | 0.97 |
+| Maximum Stress (MPa) | 12.11 | 48.41 | 19.64 |
+| Minimum Displacement (m) | 0.00 | 0.00 | 0.00 |
+| Average Displacement (m) | 2.91e-6 | 1.97e-5 | 8.36e-6 |
+| Maximum Displacement (m) | 1.22e-5 | 1.97e-6 | 4.13e-5 |
+
+Table 1: Minimum, maximum, and average stresses and displacements for woven tile assemblies under normal loading.
+
+We used ANSYS Workbench 2019 R1 and conducted static structural analysis for all simulations. We specifically explored two loading conditions: planar (tensile load applied in the plane of the tiled assembly) and normal (compressive load applied normal to the plane of the tiled assembly). For simplicity, we assume unit forces (1 N) and moments (1 N·m) across all simulations. All dimensions were chosen in accordance with the 3D printed shapes. For the analysis, we first imported a given woven tile as a solid body in SolidWorks 2019 and created assemblies of these tiles. Here, the size of the assembly was an important factor for a fair comparison. We used the satin weave as the benchmark since it required the largest number of tiles. Based on this, we created an $8 \times 8$ assembly for the plain, twill, and satin woven tile assemblies. For material considerations, we used the material properties of PLA (polylactic acid). Specifically, we set the density to $1250\,\mathrm{kg/m^3}$, Young's modulus to $3.45 \times 10^{9}\,\mathrm{Pa}$, and the Poisson's ratio to 0.39. We further assume all contact regions to be frictionless. For each loading condition, the Von Mises stress and the total deformations were evaluated.
+
+§ 6.2 TENSILE LOADING ON SINGLE TILES
+
+We begin with a simple test of tensile loading of individual unit tiles for the plain, twill, and satin cases (Figure 19). The first key observation is that, for the same 1 N load, the twill woven tile admits the minimum value of the maximum stress (43 MPa) as compared to the plain and satin tiles (69 MPa and 62 MPa, respectively). What is commonly evident across all cases is that the maximum stresses are located at the saddle points of the tiles. Also note that these are the critical points where any two tiles come into contact in a given assembly, causing a high-stress region. Therefore, the individual stress contours (Figure 19) help us identify the critical high-stress regions through which load is internally transferred from part to part in a woven assembly. Furthermore, since the critical stresses are transferred through two doubly-curved surfaces in contact, the stress can be propagated in multiple directions.
+
+§ 6.3 COMMON BEHAVIORAL CHARACTERISTICS ACROSS WEAVES
+
+The deformation of the weave assembly behaves similarly to a solid block of material for both planar and normal loading conditions. This is expected since the tiles are space filling and interlocking. The stress characteristics of the shapes, however, were found to have a deeper relationship with the geometry of the weave pattern (Figures 21 and 20). Below, we make observations regarding these distributions for normal and planar loading conditions.
+
+§ 6.4 ASSEMBLY UNDER NORMAL LOADING
+
+Threads of plain, twill and satin were created by joining individual tiles. These threads were assembled to form an $8 \times 8$ assembly (Figure 20). The faces on the perimeter were fixed and a unit normal force was applied perpendicular to the plane of the assembly in the central region.
+
+
+Figure 20: Von-Mises stress distributions and deformation of 8x8 assemblies of plain (left), twill (middle) and satin symmetries under normal loading. The assemblies are based on threads constructed out of multiple tiles. A uniform unit normal force was applied in the central region of the assembly indicated by the direction of arrow. The faces in the region indicated by the ground symbol were assigned as fixed supports.
+
+We observe that the woven nature of the tiles allows the deformation caused by the central load to be distributed uniformly around the axis of application in the assembly. Given that the deformation of the assembly is similar to that of a solid block, the contacts between the threads along the two axes transfer the incoming forces and stresses to the adjacent tiles. The distribution for each assembly respects the geometry of the weave pattern. For example, in the plain woven case (Figure 20, left panel), the stress is maximal at the center tiles and transitions from this concentrated distribution to a radial checkerboard pattern toward the periphery, reminiscent of the original plain woven symmetries (Figure 5, top-left image). Interestingly, in twill we notice that stress propagates in a spiral-like manner from the central region (Figure 20, middle panel).
+
+From a cursory numerical comparison (Table 1), there are two interesting observations to be made. First, the twill assembly has the lowest average stress (0.59 MPa), while the plain and satin assemblies exhibit comparable average stresses. This is supported by the textile engineering literature [63]. However, what is interesting is that twill simultaneously exhibits the largest range of stress (0–48 MPa) when compared to the plain (0–12 MPa) and satin cases. This implies that while twill may be better suited for load-sensitive applications, plain and satin assemblies may be better candidates for scenarios that need durability.
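The range comparison above can be reproduced directly from the tabulated values (a minimal sketch; the numbers are transcribed from Table 1, with the near-zero minimum stresses converted from Pa to MPa):

```python
# Stress values (MPa) transcribed from Table 1 (normal loading).
stress = {
    "plain": {"min": 2099e-6, "avg": 0.93, "max": 12.11},
    "twill": {"min": 4.76e-15, "avg": 0.59, "max": 48.41},
    "satin": {"min": 1212e-6, "avg": 0.97, "max": 19.64},
}

# Twill has the lowest average stress yet the widest min-to-max stress range.
ranges = {w: v["max"] - v["min"] for w, v in stress.items()}
lowest_avg = min(stress, key=lambda w: stress[w]["avg"])
widest_range = max(ranges, key=ranges.get)
print(lowest_avg, widest_range)
```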
+
+§ 6.5 ASSEMBLY UNDER PLANAR LOADING
+
+In the case of planar loading, we applied a tensile load on the threads in one axial direction and observed the effect on the orthogonal threads. We notice that the average and maximum stress values were lower in the assembly than in the individual simulations for all weaving patterns. This means that the assembly allows for an efficient stress distribution.
+
+Similar to the stress patterns in normal loading, the planar simulations show a pronounced stress arrangement. While the plane of maximum stress distribution is orthogonal to the load direction in plain woven tiles, we note that in the case of twill it is aligned approximately perpendicular to the weave direction, as also noted by previous works [8, 42]. The plane of maximum stress is in fact orthogonal to the diagonal lines that would be produced on the face if the assembly were developed into a twill fabric. On the contrary, no apparent stress patterns can be noticed in the satin assembly. We also note a significantly higher value of maximum stress in satin compared to twill and plain (Table 2). This agrees with the fact that in clothing, satin, with its long yarn floats, is unstable [63] and needs a much more densely woven fabric to counter this.
+
+The twill assemblies again exhibit the lowest average Von Mises stress levels (0.42 MPa) as compared to plain (0.57 MPa) and satin (3 MPa). This is aligned with the current literature on weave mechanics, wherein plain and twill weaves display superior resistance to tensile loading in comparison to satin weaves because of the lower frequency of alternation between the two axes [63]. Similar to normal loading, twill also exhibits the maximum range of stress values (0–23 MPa) when compared to the plain (0–10 MPa) and satin (0–793 MPa) tile assemblies. Here, once again, satin clearly demonstrates a significantly higher maximum stress than plain and twill, as is evident from the topology of the satin weave.
+
+§ 7 DISCUSSION
+
+The work presented in this paper provides (1) many new directions that need to be explored further and (2) many interesting questions that need to be investigated further. In the rest of this section, we discuss some of these future directions and open questions.
+
+| | Plain | Twill | Satin |
+| --- | --- | --- | --- |
+| Minimum Stress (Pa) | 308 | 2.92e-7 | 2.31e-5 |
+| Average Stress (MPa) | 0.57 | 0.4217 | 3.08 |
+| Maximum Stress (MPa) | 10.43 | 23.32 | 793.80 |
+| Minimum Displacement (m) | 0.00 | 0.00 | 0.00 |
+| Average Displacement (m) | 1.54e-6 | 9.27e-6 | 2.27e-6 |
+| Maximum Displacement (m) | 4.14e-6 | 7.91e-6 | 1.50e-5 |
+
+Table 2: Minimum, maximum, and average stresses and displacements for woven tile assemblies under planar loading.
+
+§ 7.1 GENERALIZATION BASED ON KNOTS AND LINKS
+
+2-fold fabric structures are much richer than just 2-way genus-1 fabrics. Their real power can be best understood with extended graph rotation systems (EGRS), which were introduced in the early 2010s [5, 6]. EGRS allow us to use orientable 2-manifold meshes as guide shapes to represent knots and links. The guide shapes help us to classify the fabrics. For instance, the guide shapes for 2-fold 2-way fabrics are regular grids embedded on genus-1 surfaces. For 2-fold 3-way fabrics, we need regular hexagonal or regular triangular grids embedded on genus-1 surfaces [6]. This is useful since some of the Leonardo grid designs are also based on 3-way woven patterns [54]. Using regular maps [13, 67], it is also possible to obtain hyperbolic tilings. Using the regular maps that correspond to hyperbolic tiles as guide shapes, 2-fold k-way genus-n fabrics can be obtained. From these fabrics, one can also obtain space filling shapes. For practical applications, a significant amount of theoretical work remains to be done.
+
+§ 7.2 LOCKING
+
+The key open question that we hope to answer in future work is a formally supported computational methodology for determining the minimum tile repetition needed to generate pure interlocking of woven tiles. Here, Dawson's work on the enumeration of weave families can provide an important starting point for developing such a method based on sound mathematical principles. We see that the locking ability of woven tiles is related to three interlinked concepts in the geometry and topology literature, namely, liftability [53], oriented matroids [77], and planar layouts of lines in 3-space [50]. Simply determining the repetitions needed for locking is only the first step. Once we obtain a locking configuration, the second challenge is to determine the minimum number of flexible/compliant elements that make the assembly possible. We showed an example of this only for the plain woven tiles (Figure 1e). To the best of our knowledge, a general strategy for this problem is currently unavailable.
+
+§ 7.3 CHIRALITY
+
+As we have seen in our results, chirality is a key aspect for further investigation in this research. A recent work discovered how to produce handedness in auxetic unit cells that shear as they expand by changing the symmetries and alignments of repeating unit cells [46]. Using these symmetry and alignment rules, we can potentially expand our woven tiles to develop a new class of rigid and compliant structures [40, 52, 74]. Recent works on knot periodicity in reticular chemistry [47] and tri-axial weaves (also known as mad weaves) [24] are fundamental examples of how the geometry and physics of chirality are connected. Thus, identifying any fundamental multi-physical behavior of the assemblies shown in this work and beyond would allow us to construct assemblies with several practical applications, such as mechanically augmented structures in mechanical, architectural, aerospace [2, 3], and materials [69] engineering. The main gap that must first be filled, however, is a complete characterization of the chirality of woven tiles, including and beyond the plain, twill, and satin varieties.
+
+
+Figure 21: Von-Mises stress distributions and deformation of 8x8 assemblies of plain (left), twill (middle) and satin symmetries under planar loading. The assemblies are based on threads constructed out of multiple tiles.
+
+§ 7.4 STRUCTURAL BEHAVIOR & TOPOLOGY
+
+We observed correlation between the weave topology and the structural behavior of woven tile assemblies. However, our analysis and observations are currently qualitative. Therefore, a formal and constitutive methodology for connecting the topology and structural properties is an important future direction that needs attention. As an important example, determining the relationship between direction of stress distributions to the weave parameters (the numbers $a,b$ , and $c$ in Figure 4) will allow for systematic design of woven tiles for specific applications.
+
+§ 8 CONCLUSION & FUTURE DIRECTIONS
+
+In this paper, we have developed a methodology to design interlocking space-filling tiles, which we call bi-axial woven tiles, generated using the topology of bi-axial woven fabrics. To this end, we developed a method to create the desired input curve segments using the properties of 2-fold 2-way genus-1 fabrics. We further developed a simple method to compute a Voronoi decomposition of the curve segments. We demonstrated our general methodology by designing, fabricating, assembling, and mechanically analyzing woven tile assemblies. We 3D-printed some of these tiles and physically observed their mathematical and physical properties. We also developed molds to directly cast these shapes in a wider range of materials, such as silicone and aluminium. While our physical evaluation of the individual and assembled properties of these tiles aligns with the current literature on woven fabrics, we show some interesting additional properties that were not previously apparent. Furthermore, our results suggest that these interlocking tiles have the potential to replace existing extrusion-based building blocks (such as bricks), which do not provide interlocking capability.
+
+We want to point out that 2-fold fabrics are not the final frontier. It is also possible to represent k-fold fabrics using 3-manifold meshes as guide shapes [4]. The extension to k-fold fabrics requires even more theoretical foundations, but it demonstrates the potential. A significant advantage of using guide shapes is that the topological properties of the knotted structures do not change with any geometric perturbation of the guide shapes. In conclusion, even though we chose our proof-of-concept tiles from the 2-fold 2-way types, the ideas in this paper can be extended to more general types of fabrics with the maturation of theoretical work on regular maps and 3-manifold meshes.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f57c2c30e85dfb05e5e14dcea7c10bd183f5fd92
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,799 @@
+# Unlimiting the Dual Gaussian Distribution Model to Predict Touch Accuracy in On-screen-start Pointing Tasks
+
+Leave Authors Anonymous
+
+for Submission
+
+City, Country
+
+e-mail address
+
+
+## ABSTRACT
+
+The dual Gaussian distribution hypothesis has been utilized to predict the success rate of target acquisition in finger touching. Bi and Zhai limited the applicability of their success-rate prediction model to off-screen-start pointing. However, we found that this limitation was theoretically over-restrictive and that their prediction model can also be applied to on-screen-start pointing operations. We discuss the reasons why and empirically validate our hypothesis in a series of four experiments with various target sizes and distances. Bi and Zhai's model showed high prediction accuracy in all the experiments, with 10% prediction error at worst. Our theoretical and empirical justifications will enable designers and researchers to use a single model to predict success rates, regardless of whether users mainly perform on- or off-screen-start pointing, and to automatically generate and optimize UI items in apps and keyboards.
+
+## Author Keywords
+
+Dual Gaussian distribution model; touchscreens; finger input; pointing; graphical user interfaces.
+
+## CCS Concepts
+
+- Human-centered computing $\rightarrow$ HCI theory, concepts and models; Pointing; Empirical studies in HCI;
+
+## INTRODUCTION
+
+Target acquisition is the most frequently performed operation on touchscreens. Tapping a small target, however, is sometimes an error-prone task, for reasons such as the "fat finger problem" [24, 46] and the offset between a user's intended tap point and the position sensed by the system [8, 25]. Hence, various techniques have been proposed to improve the precision of touch pointing [2, 46, 59]. Researchers have also sought to understand the fundamental principles of touch, e.g., touch-point distributions [4, 49]. As shown in these studies, finger touching is an inaccurate way to select a small target.
+
+If touch GUI designers could compute the success rate of tapping a given target, they could determine button/icon sizes that strike a balance between usability and screen-space occupation. For example, suppose that a designer has to arrange many icons on a webpage. In this case, is a 5-mm diameter for each circular icon sufficiently large for accurate tapping? If not, then how about a 7-mm diameter? By how much can we expect the accuracy to improve? Moreover, while larger icons can be more accurately tapped, they occupy more screen space. In that case, the webpage can be lengthened so that the larger icons fit, but this requires users to perform more scrolling operations to view and select icons at the bottom of the page. Hence, designers have to carefully manage this tradeoff between user performance and screen space.
+
+Without a success-rate prediction model, designers have to conduct costly user studies to determine suitable target sizes on a webpage or app, but this strategy has low scalability. Accurate quantitative models would also be helpful for automatically generating user-friendly UIs [17, 39] and optimizing UIs [5, 14]. Furthermore, having such models would help researchers justify their experimental designs when investigating novel systems and interaction techniques that do not focus mainly on touch accuracy. For example, researchers could state, "According to the success-rate prediction model, 9-mm-diameter circular targets are assumed to be accurately (>99%) selected by finger touching, and thus, this experiment is a fair one for comparing the usabilities of our proposed system and the baseline."
+
+To predict how successfully users tap a target, Bi and Zhai proposed a model that computes the success rate solely from the target size $W$ for both 1D and 2D pointing tasks [10]. They reasonably limited their model's applicability to touch-pointing tasks starting off-screen, i.e., those in which the user's finger moves from a position outside the touchscreen. In this paper, we first explain why this limitation seems reasonable by addressing potential concerns in applying the model to on-screen-start pointing tasks, in which a finger moves from a certain position on the screen to another position to tap a target. Then, however, we justify the use of the model for pointing with an on-screen start. After that, we empirically show through a series of experiments that the model has comparable prediction accuracy even for such pointing with an on-screen start. Our key contributions are as follows.
+
+- Theoretical justification for applying Bi and Zhai's success-rate prediction model to pointing tasks starting on-screen. We found that the model is valid regardless of whether a pointing task starts on- or off-screen. This means that designers and researchers can predict success rates by using a single model. We thus expand the coverage of the model to other applications, such as (a) tapping a "Like" button after scrolling through a social networking service feed, (b) successively inputting check marks on a questionnaire, and (c) typing on a software keyboard.
+
+---
+
+GI '20, May 21–22, 2020, Toronto, Canada
+
+© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 978-1-4503-6708-0/20/04. DOI: https://doi.org/10.1145/3313831.XXXXXXX
+
+---
+
+- Empirical verification of our hypothesis via four experiments. Despite the theoretical reasoning, we were still concerned about using the model after consulting the existing literature and finding results such as those showing that endpoint variability is significantly affected by the movement distance $A$ in a ballistic pointing motion [6, 26, 44, 61]. Hence, we conducted 1D and 2D pointing experiments starting on-screen with (a) successive pointing tasks in which targets appeared at random positions, which ignored the effect of $A$, and (b) discrete pointing tasks in which the start and goal targets were separated by a given distance, meaning that $A$ was controlled. The results showed that we could accurately predict the success rates with a prediction error of ~10% at worst.
+
+In short, the novelty of our study is that it extends the applicability of Bi and Zhai's model to a variety of tasks (e.g., Fitts tasks, key typing), with support from theoretical and empirical evidence. With this model, designers and researchers can evaluate and improve their UIs in terms of touch accuracy, which will directly contribute to UI development. In addition, by reducing the time and cost of conducting user studies, our model will let them focus on other important tasks such as visual design and backend system development, which will indirectly contribute to implementing better, novel UIs.
+
+## RELATED WORK
+
+## Success-Rate Prediction for Pointing Tasks
+
+When human operators try to minimize both the movement time $MT$ and the number of target misses, the error rate has been thought to be close to 4% [35, 45, 53]. A recent report, however, pointed out that this percentage is an arbitrary, questionable assumption [19]. Actual data shows that the error rate tends to decrease as the target size $W$ increases [10, 16, 48, 53].
+
+While a typical goal of pointing models is to predict $MT$, researchers have also tried to derive models to predict the success rate (or error rate) of target acquisition tasks. In particular, the model of Meyer et al. [36] is often cited as the first one to predict the error rate, but it does not account for $MT$. In practice, the error rate increases as operators move faster (e.g., [62]), and thus Wobbrock et al. accounted for this effect in their model [53]. That model was later shown to be applicable to pointing at 2D circular targets [54] and moving targets [40]. For both models, by Meyer et al. and Wobbrock et al., the predicted error rate increases as $W$ decreases, which is consistent with the actual observations mentioned above.
+
+As for speed, simply speaking, when operators give it priority, the error rate increases. While Wobbrock et al. applied a time limit as an objective constraint by using a metronome in their study [53], this speed-accuracy tradeoff was empirically validated in a series of experiments by Zhai et al., in which the priority was subjectively biased [62]. Besides the case of rapidly aimed movements, the error rate has also been investigated for tapping on a static button within a given temporal window [28, 29, 30]. Despite the recent importance of finger-touch operations on smartphones and tablets, however, the only literature on predicting the success rate while accounting for finger-touch ambiguity is the work of Bi and Zhai on pointing from an off-screen start [10]. It would be useful if we could extend the validity of their model to other applications.
+
+## Improvements and Principles of Finger-Touch Accuracy
+
+Various methods to improve touch accuracy have been proposed. Examples include using an offset from the finger contact point [12, 43, 46], dragging in a specific direction to confirm a particular target among a number of potential targets [2, 38, 59], visualizing a contact point [52, 60], applying machine learning [50] or probabilistic modeling [9], and correcting hand-tremor effects by using motion sensors [42].
+
+In addition to these techniques, researchers have sought to understand why finger touch is less accurate than other input modalities such as a mouse cursor. One typical issue is the fat finger problem [24, 25, 46], in which an operator wants to tap a small target but the finger occludes it. Another issue is that finger touch has an unavoidable offset (spatial bias) from the operator's intended touch point to the actual touch position sensed by the system. Even if operators focus on accuracy by spending a sufficient length of time, the sensed touch point is biased from the crosshair target [24, 25].
+
+## Success-Rate Prediction for Finger-Touch Pointing
+
+## Outline of Dual Gaussian Distribution Model
+
+Previous studies have shown that the endpoint distribution of finger touches follows a bivariate Gaussian distribution over a target [4, 22, 49]. Thus, the touch point observed by the system can be considered a random variable ${X}_{obs}$ following a Gaussian distribution: ${X}_{obs} \sim N\left( {\mu }_{obs},{\sigma }_{obs}^{2}\right)$, where ${\mu }_{obs}$ and ${\sigma }_{obs}$ are the center and SD of the distribution, respectively. Bi, Li, and Zhai hypothesized that ${X}_{obs}$ is the sum of two independent random variables consisting of relative and absolute components, both of which follow Gaussian distributions: ${X}_{r} \sim N\left( {\mu }_{r},{\sigma }_{r}^{2}\right)$ and ${X}_{a} \sim N\left( {\mu }_{a},{\sigma }_{a}^{2}\right)$ [8].
+
+${X}_{r}$ is a relative component affected by the speed-accuracy tradeoff. When an operator aims for a target more quickly, the relative endpoint spread ${\sigma }_{r}$ increases. As indicated by Fitts' law studies, if the acceptable endpoint tolerance $W$ increases, then the operator's endpoint noise level ${\sigma }_{r}$ also increases [13, 35].
+
+${X}_{a}$ is an absolute component that reflects the precision of the probe (i.e., the input device: a finger in this paper) and is independent of the task precision. Therefore, even when an operator taps a small target very carefully, there is still a spatial bias from the intended touch point (typically the target center) [8, 24, 25]; the distribution of this bias is what ${\sigma }_{a}$ models. Therefore, although ${\sigma }_{r}$ can be reduced by an operator aiming slowly at a target, ${\sigma }_{a}$ cannot be controlled by setting such a speed-accuracy priority. Note that the means of both components' random variables (${\mu }_{r}$ and ${\mu }_{a}$) are assumed to be close to the target center: ${\mu }_{r} \approx {\mu }_{a} \approx 0$ if the coordinate of the target center is defined as 0.
+
+Again, Bi et al. hypothesized that the observed touch point is a random variable that is the sum of two independent components [8]:
+
+$$
+{X}_{obs} = {X}_{r} + {X}_{a} \sim N\left( {{\mu }_{r} + {\mu }_{a},{\sigma }_{r}^{2} + {\sigma }_{a}^{2}}\right) \tag{1}
+$$
+
+${\mu }_{obs}\left( { = {\mu }_{r} + {\mu }_{a}}\right)$ is close to 0 on average, and ${\sigma }_{obs}^{2}$ is:
+
+$$
+{\sigma }_{obs}^{2} = {\sigma }_{r}^{2} + {\sigma }_{a}^{2} \tag{2}
+$$
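The variance additivity in Equation 2 is a standard property of sums of independent Gaussians, and can be checked numerically. A minimal sketch using illustrative values of $\sigma_r$ and $\sigma_a$ (not values measured in any study):

```python
import random

random.seed(0)
sigma_r, sigma_a = 1.5, 0.8  # illustrative SDs, not values from the paper

# X_obs = X_r + X_a: sum of two independent zero-mean Gaussian components.
samples = [random.gauss(0, sigma_r) + random.gauss(0, sigma_a)
           for _ in range(200_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

# The sample variance should approach sigma_r^2 + sigma_a^2 = 2.89.
print(round(var, 2))
```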
+
+When an operator exactly utilizes the target size $W$ in rapidly aimed movements, $\sqrt{2\pi e}\,{\sigma }_{r}$ matches the given $W$ (i.e., ${4.133}{\sigma }_{r} \approx W$) [9, 35]. Operators tend to bias their operations toward speed or accuracy, however, thus over- or under-using $W$ [62]. Bi and Zhai assumed that using a fine probe of negligible size (${\sigma }_{a} \approx 0$), such as a mouse cursor, makes ${\sigma }_{r}$ proportional to $W$. Thus, by introducing a constant $\alpha$, we have:
+
+$$
+{\sigma }_{r}^{2} = \alpha {W}^{2} \tag{3}
+$$
+
+Then, replacing ${\sigma }_{r}^{2}$ in Equation 2 with Equation 3, we obtain:
+
+$$
+{\sigma }_{obs}^{2} = \alpha {W}^{2} + {\sigma }_{a}^{2} \tag{4}
+$$
+
+Hence, by conducting a pointing task with several $W$ values, we can run a linear regression on Equation 4 and obtain the constants, $\alpha$ and ${\sigma }_{a}$ . Accordingly, we can compute the endpoint variability for tapping a target of size $W$ . We denote this endpoint variability computed from a regression expression as ${\sigma }_{reg}$ :
+
+$$
+{\sigma }_{\text{reg }} = \sqrt{\alpha {W}^{2} + {\sigma }_{a}^{2}} \tag{5}
+$$
+
## Revisiting Bi and Zhai's Studies on Success-Rate Prediction

Here, we revisit Bi and Zhai's first experiment on the Bayesian Touch Criterion [9]. They conducted a 2D pointing task with circular targets of diameter $W = 2, 4, 6, 8$, and ${10}\mathrm{\;{mm}}$. In their task, tapping the starting circle caused the first target to immediately appear at a random position. Subsequently, lifting the input finger off a target caused the next target to appear immediately. Hence, the participants successively tapped each new target as quickly and accurately as possible. The target distance was not predefined as $A$, unlike typical experiments involving Fitts' law. A possible way to analyze the effect of the movement amplitude would be to calculate $A$ as the distance between the current target and the previous one; however, no such analysis was performed. Thus, even if the endpoint variability ${\sigma }_{obs}$ was influenced by $A$, the effect was averaged.
+
+By using Equation 5, the regression expressions of the ${\sigma }_{\text{reg }}$ values on the $x$ - and $y$ -axes were calculated as [9]:
+
+$$
+{\sigma }_{{\text{reg }}_{x}} = \sqrt{{0.0075}{W}^{2} + {1.6834}} \tag{6}
+$$
+
+$$
+{\sigma }_{{\text{reg }}_{y}} = \sqrt{{0.0108}{W}^{2} + {1.3292}} \tag{7}
+$$
+
Bi and Zhai then derived their success-rate prediction model [10]. Assuming a negligible correlation between the observed touch-point values on the $x$- and $y$-axes (i.e., $\rho = 0$) gives the following probability density function for the bivariate Gaussian distribution:
+
+$$
+P\left( {x, y}\right) = \frac{1}{{2\pi }{\sigma }_{{\operatorname{reg}}_{x}}{\sigma }_{{\operatorname{reg}}_{y}}}\exp \left( {-\frac{{x}^{2}}{2{\sigma }_{{\operatorname{reg}}_{x}}^{2}} - \frac{{y}^{2}}{2{\sigma }_{{\operatorname{reg}}_{y}}^{2}}}\right) \tag{8}
+$$
+
+Then, the probability that the observed touch point falls within the target boundary $D$ is:
+
+$$
+P\left( D\right) = {\iint }_{D}\frac{1}{{2\pi }{\sigma }_{{\operatorname{reg}}_{x}}{\sigma }_{{\operatorname{reg}}_{y}}}\exp \left( {-\frac{{x}^{2}}{2{\sigma }_{{\operatorname{reg}}_{x}}^{2}} - \frac{{y}^{2}}{2{\sigma }_{{\operatorname{reg}}_{y}}^{2}}}\right) {dxdy} \tag{9}
+$$
+
where ${\sigma }_{{\text{reg}}_{x}}$ and ${\sigma }_{{\text{reg}}_{y}}$ are calculated from Equations 6 and 7, respectively.
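Equation 9 has no closed form for a circular boundary, but it is straightforward to evaluate numerically. The sketch below integrates the bivariate density on a fine grid, plugging in the coefficients of Equations 6 and 7; the grid resolution is an arbitrary choice, and Monte Carlo sampling or adaptive quadrature would serve equally well:

```python
import numpy as np

def predicted_success_2d(W, grid=801):
    """Numerically evaluate Equation 9 for a circular target of diameter W (mm),
    centered at the origin, with sigma_reg taken from Equations 6 and 7."""
    sx = np.sqrt(0.0075 * W**2 + 1.6834)  # Equation 6
    sy = np.sqrt(0.0108 * W**2 + 1.3292)  # Equation 7
    r = W / 2.0
    xs = np.linspace(-r, r, grid)
    x, y = np.meshgrid(xs, xs)
    pdf = np.exp(-x**2 / (2 * sx**2) - y**2 / (2 * sy**2)) / (2 * np.pi * sx * sy)
    cell = (xs[1] - xs[0]) ** 2           # area of one grid cell
    inside = x**2 + y**2 <= r**2          # the target boundary D
    return float(np.sum(pdf[inside]) * cell)
```

For example, a 4.8-mm circular target yields a predicted success rate of roughly 80% under these coefficients, and the prediction rises toward 100% as $W$ grows.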
+
For a 1D vertical bar target, whose boundary is defined to range from ${x}_{1}$ to ${x}_{2}$, we can simplify the predicted probability that the touch point $X$ falls within the target:
+
+$$
+P\left( {{x}_{1} \leq X \leq {x}_{2}}\right) = \frac{1}{2}\left\lbrack {\operatorname{erf}\left( \frac{{x}_{2}}{{\sigma }_{{\text{reg }}_{x}}\sqrt{2}}\right) - \operatorname{erf}\left( \frac{{x}_{1}}{{\sigma }_{{\text{reg }}_{x}}\sqrt{2}}\right) }\right\rbrack
+$$
+
+(10)
+
Note that the mean touch point $\mu$ of the probability density function is assumed to be $\approx 0$ and has therefore already been eliminated from this equation. If the target width is $W$, then Equation 10 simplifies further:
+
+$$
+P\left( {-\frac{W}{2} \leq X \leq \frac{W}{2}}\right) = \operatorname{erf}\left( \frac{W}{2\sqrt{2}{\sigma }_{{\operatorname{reg}}_{x}}}\right) \tag{11}
+$$
+
+Alternatively, if the target is a 1D horizontal bar of height $W$ , then we replace the $x$ -coordinates in Equation 11 with $y$ -coordinates.
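For 1D bars, Equation 11 is directly computable with the error function. A minimal sketch, where the value of ${\sigma }_{reg}$ passed in would come from a regression such as Equation 5:

```python
import math

def predicted_success_1d(W, sigma_reg):
    """Equation 11: probability that a touch point lands within a 1D bar of
    width W, given endpoint variability sigma_reg along the bar's narrow axis."""
    return math.erf(W / (2 * math.sqrt(2) * sigma_reg))
```

With, say, ${\sigma }_{reg} = 1.3\mathrm{\;{mm}}$ (an illustrative value) and $W = 4\mathrm{\;{mm}}$, the predicted success rate is about 88%.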
+
Bi and Zhai's experiment on success-rate prediction tested 1D vertical, 1D horizontal, and 2D circular targets with $W = 2.4, 4.8$, and ${7.2}\mathrm{\;{mm}}$ [10]. Unlike the separate experiment used to measure the coefficients in Equations 6 and 7 [9], Bi and Zhai empirically confirmed their model's validity in pointing from an off-screen start. To simulate this off-screen start condition, they told their participants to keep their dominant hands off the screen in natural positions and start from those positions in each trial [10]. Hence, while the coefficients of the touch-point variability were measured in a successive pointing task [9], which is regarded as a pointing task starting on-screen, Bi and Zhai did not claim that their success-rate prediction model using Equations 9 and 11 was valid for other kinds of pointing tasks, such as a Fitts' law paradigm specifying both $A$ and $W$.
+
+## GENERALIZABILITY OF SUCCESS-RATE PREDICTION MODEL TO POINTING TASKS STARTING ON-SCREEN
+
+## Effect of Movement Distance on Success Rate
+
Here, we discuss why Bi and Zhai's model (Equations 9 and 11) can be applied to touch-pointing tasks starting on-screen, as well as possible concerns about this application. In their paper [9], Bi and Zhai stated, "We generalize the dual Gaussian distribution hypothesis from Fitts' tasks - which are special target selection tasks involving both amplitude ($A$) and target width ($W$) - to the more general target-selection tasks which are predominantly characterized by $W$ alone." Therefore, to omit the effect of $A$ when they later evaluated the success-rate prediction model, they explicitly instructed the participants to start with their dominant hands away from the screen at the beginning of each trial [10]. This is a reasonable instruction: if their experiment had used an on-screen start, defining $A$ would have limited their model, because such an experiment would show only that the prediction model can be used when the pointing task is controlled by both $A$ and $W$. Thus, pointing experiments starting off-screen are a reasonable way to show that the model can be used whenever $W$ is defined.
+
To generalize the model to pointing tasks starting on-screen, one concern is the effect of the movement distance $A$ on the success rate. Even if we do not define the target distance $A$ from the initial finger position, in actuality the finger has an implicit travel distance, because "$A$ is less well-defined" [9] does not mean "there is no movement distance." Therefore, a pointing task predominantly characterized by $W$ alone can also be interpreted as merging or averaging the effects of $A$ on touch-point distributions and success rates. For example, suppose that a participant in a pointing experiment starting off-screen repeatedly taps a target 200 times. Let the implicit $A$ value be ${20}\mathrm{\;{mm}}$ for 100 trials and ${60}\mathrm{\;{mm}}$ for the other 100, and suppose that the success rates are independently calculated as, e.g., 95 and 75%, respectively. If we do not distinguish the implicit $A$ values, however, then the merged success rate is $\left( {{95} + {75}}\right) /{200} = {85}\%$. This merged value misrepresents both conditions: the prediction error is ${10}\%$ for $A = {20}\mathrm{\;{mm}}$ and ${10}\%$ for $A = {60}\mathrm{\;{mm}}$.
+
If the implicit or explicit movement distance $A$ does not significantly change the success rate, such as from 88% for $A = {20}\mathrm{\;{mm}}$ to 86% for $A = {60}\mathrm{\;{mm}}$, then we can use Bi and Zhai's model regardless of whether pointing tasks start on- or off-screen. The question, then, is whether the success rate changes depending on the implicit or explicit $A$. If it does change, then we can use the model only if we ignore the movement distance. According to the current prediction model (Equations 9 and 11), once $W$ is given, the predicted success rate is determined by ${\sigma }_{\text{reg}}$. Hence, the debate revolves around whether the touch-point distribution is affected by the distance $A$. This is equivalent to asking whether Equation 4 (${\sigma }_{obs}^{2} = \alpha {W}^{2} + {\sigma }_{a}^{2}$) is valid regardless of the value of $A$. In fact, the literature offers evidence on both sides of the question, as explained below.
+
+## Effect of Movement Distance on Endpoint Variability
+
Previous studies have reported that $A$ does not strongly influence the endpoint distribution [8, 27, 62]. For typical pointing tasks, operators are asked to balance speed and accuracy, which means that they can spend a long time successfully acquiring a target if it is small. Thus, target pointing tasks implicitly allow participants to use visual feedback to perform closed-loop motions (e.g., [27]). Under such conditions, the spread of the endpoint distribution is expected to decrease as $W$ decreases [9, 35].
+
+In contrast, when participants perform a ballistic motion, the endpoint distribution has been reported to vary linearly with the square of the movement distance $A\left\lbrack {{20},{21},{23},{37},{44},{61}}\right\rbrack$ . Beggs et al. $\left\lbrack {6,7}\right\rbrack$ formulated the relationship in this way:
+
+$$
+{\sigma }_{obs}^{2} = a + b{A}^{2} \tag{12}
+$$
+
+where ${\sigma }_{obs}$ is valid for directions collinear and perpendicular to the movement direction, and $a$ and $b$ are empirically determined constants. This model has since been empirically confirmed by other researchers (e.g., [32, 33]). Because the intercept $a$ tends to be small $\left\lbrack {{37},{44},{61}}\right\rbrack$ , this model is consistent with reports on the relationship being linear $\left( {{\sigma }_{obs} = \sqrt{b}A}\right)$ .
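As a sketch of how the constants $a$ and $b$ in Equation 12 would be obtained in practice, the snippet below fits the model to made-up $(A, {\sigma }_{obs}^{2})$ pairs; the numbers are invented for illustration and do not come from the cited studies:

```python
import numpy as np

# Hypothetical endpoint variances measured at several ballistic movement
# distances A (mm); these numbers are invented for illustration only.
A = np.array([10.0, 20.0, 40.0, 80.0])
var_obs = np.array([0.9, 1.8, 5.5, 20.0])  # sigma_obs^2 (mm^2)

# Equation 12 is linear in A^2, so an ordinary least-squares fit suffices:
# slope = b, intercept = a.
b, a = np.polyfit(A**2, var_obs, 1)
```

Consistent with the reports cited above, the fitted intercept $a$ comes out small relative to the $b{A}^{2}$ term at the larger distances, which is why the relationship is often summarized as ${\sigma }_{obs} \approx \sqrt{b}A$.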
+
+The critical threshold of whether participants perform a closed-loop or ballistic motion depends on Fitts' original index of difficulty, ${ID} = {\log }_{2}\left( {{2A}/W}\right)$ . When ${ID}$ is less than 3 or 4 bits, a pointing task can be accomplished with only a ballistic motion [18,23]. While the critical ${ID}$ changes depending on the experimental conditions, an extremely easy task such as this one (i.e., one with a short $A$ or large $W$ ) generally does not require any precise closed-loop operations. Therefore, we theoretically assume that the endpoint distribution ${\sigma }_{obs}$ and the success rate change depending on the movement distance.
+
Nevertheless, we also assume that such a ballistic motion would not degrade success-rate prediction. The evidence comes from a study by Bi et al. on the FFitts law model [8]. They conducted 1D and 2D Fitts-paradigm experiments with $A = 20$ and ${30}\mathrm{\;{mm}}$ and $W = 2.4, 4.8$, and ${7.2}\mathrm{\;{mm}}$; the ${ID}$ in Fitts' original formulation ranged from 2.47 to 4.64 bits. For $\left( {A, W,{ID}}\right) = \left( {{20}\mathrm{\;{mm}},{7.2}\mathrm{\;{mm}},{2.47}\text{ bits}}\right)$ and $\left( {{30}\mathrm{\;{mm}},{7.2}\mathrm{\;{mm}},{3.06}\text{ bits}}\right)$, a somewhat sloppy ballistic motion might fulfill these conditions [23]. The ${\sigma }_{{obs}_{y}}$ values for these 1D horizontal targets, however, were 1.21 and ${1.33}\mathrm{\;{mm}}$. The difference was only $\left| {{1.21} - {1.33}}\right| = {0.12}\mathrm{\;{mm}}$, while the error-rate difference was $\left| {{29} - {38}}\right| = 9\%$. Similarly, their 2D tasks showed only small differences in error rate, at most 2%, between the two values of $A$ for each $W$ condition.
+
As empirically shown by this study of Bi et al. on FFitts law [8], the changes in ${\sigma }_{obs}$ and the success rate owing to $A$ might be small in practice, because such short $A$ values could not greatly change ${\sigma }_{obs}$, even if the effect of $A$ on ${\sigma }_{obs}$ is statistically significant as expressed by Equation 12. If so, then from a practical viewpoint, it would not be problematic to apply Bi and Zhai's success-rate prediction model to pointing from an on-screen start; hence, we set out to validate this empirically.
+
+## EXPERIMENTS
+
As discussed in the previous section, we have contrary hypotheses on whether the success rate of touch-pointing tasks can be accurately predicted solely from the target size $W$. Specifically, (1) when pointing is ballistic with a short movement distance $A$, $A$ would have a statistically significant effect on ${\sigma }_{obs}$, and thus the success rate might not be accurately predicted. Yet, in that situation, (2) a short $A$ of $2-3\mathrm{\;{cm}}$ would induce only a slight (though statistically significant) change in ${\sigma }_{obs}$, and thus, in practice, the change in $A$ would not be detrimental to success-rate prediction.
+
To settle the question, we ran experiments involving 1D and 2D pointing tasks starting on-screen. For each dimensionality, we conducted (a) successive pointing tasks, in which a target appeared at a random position immediately after the previous target was tapped, and (b) discrete pointing tasks, in which the target distance $A$ was predefined. Under condition (a), we could have post-computed the target distance from the previous target position. Instead, we merged the various distance values; this was a fair modification of Bi and Zhai's success-rate prediction experiments [10], which started off-screen, to an on-screen start condition. Under condition (b), we separately predicted and measured the error rates for each value of $A$ to empirically evaluate the effect of movement distance on the prediction accuracy. We thus conducted four experiments composed of 1D and 2D target conditions:
+
+
+
Figure 1. Experimental environments: (a) a participant performing Experiment 2, and the visual stimuli used in (b) Experiment 2 and (c) Experiment 4.
+
+Exp. 1. Successive 1D pointing task: horizontal bar targets appeared at random positions.
+
+Exp. 2. Discrete 1D pointing task: a start bar and a target bar were displayed with distance $A$ between them.
+
+Exp. 3. Successive 2D pointing task: circular targets appeared at random positions.
+
+Exp. 4. Discrete 2D pointing task: a start circle and a target circle were displayed with distance $A$ between them.
+
Experiments 1 and 2 were conducted on the first day and performed by the same 12 participants. Although we explicitly label these as Experiments 1 and 2, their order was balanced among the 12 participants${}^{1}$. Similarly, on the second day, 12 participants were divided into two groups, and the order of Experiments 3 and 4 was balanced. Each set of two experiments took less than 40 min per participant.
+
We used an iPhone XS Max (A12 Bionic CPU; 4-GB RAM; iOS 12; 1242 × 2688 pixels, 6.5-inch-diagonal display, 458 ppi; ${208}\mathrm{\;g}$). The experimental system was implemented with JavaScript, HTML, and CSS. The web page was viewed with the Safari app. After eliminating the top and bottom navigation-bar areas, the browser converted the canvas resolution to ${414} \times {719}$ pixels, giving 5.978 pixels/mm. The system was set to run at 60 fps. We used the takeoff positions as tap points, as in previous studies [8, 9, 10, 57, 58].
+
The participants were asked to sit on an office chair in a silent room. As shown in Figure 1a, each participant held the smartphone with the nondominant (left) hand and tapped the screen with the dominant (right) hand's index finger. They were instructed not to rest their hands or elbows on their laps.
+
+## EXPERIMENT 1: 1D TASK WITH RANDOM AMPLITUDES
+
+## Participants
+
Twelve university students, two female and ten male, participated in this study. Their ages ranged from 20 to 25 years $\left( {M = {23.0},{SD} = {1.41}}\right)$. They all had normal or corrected-to-normal vision, were right-handed, and were daily smartphone users. Their histories of smartphone usage ranged from 5 to 8 years $\left( {M = {6.67},{SD} = {1.07}}\right)$. For daily usage, five participants used iOS smartphones, and seven used Android smartphones. They each received US\$45 in compensation for performing Experiments 1 and 2.
+
+## Task and Design
+
+A 6-mm-high start bar was initially displayed at a random position on the screen. When a participant tapped it, the first target bar immediately appeared at a random position. The participant successively tapped new targets that appeared upon lifting the finger off. If a target was missed, a beep sounded, and the participant had to re-aim for the target. If the participant succeeded, a bell sounded. To reduce the negative effect of targets located close to a screen edge, the random target position was at least ${11}\mathrm{\;{mm}}$ away from both the top and bottom edges of the screen [3].
+
This task used a single-factor within-subjects design with the target width $W$ as the independent variable: $W = 2, 4, 6, 8$, and ${10}\mathrm{\;{mm}}$, or 12, 24, 36, 48, and 60 pixels, respectively. The dependent variables were the observed touch-point distribution on the $y$-axis, ${\sigma }_{{obs}_{y}}$, and the success rate. The touch-point bias was measured from the target center with a sign [55]. First, the participants performed 20 trials as practice, consisting of four repetitions of the five $W$ values appearing in random order. In each session, the $W$ values appeared 10 times in random order. The participants were instructed to successively tap the target as quickly and accurately as possible within a single session. They each completed four sessions as data-collection trials, with a short break between two successive sessions. Thus, we recorded ${5}_{W} \times {10}_{\text{repetitions}} \times {4}_{\text{sessions}} \times {12}_{\text{participants}} = {2400}$ data points in total.
+
+## Results
+
We removed 13 outlier trials (0.54%) that had tap points at least ${15}\mathrm{\;{mm}}$ from the target center [9]. According to our observations, such outliers resulted mainly from participants accidentally touching the screen with the thumb or little finger. For consistency with Bi and Zhai's work [10], we computed the regression between ${\sigma }_{obs}^{2}$ and ${W}^{2}$ to validate Equation 4, compared the observed and computed touch-point distributions (${\sigma }_{{obs}_{y}}$ and ${\sigma }_{{\text{reg}}_{y}}$, respectively), and compared the observed and predicted success rates.
+
+## Touch-Point Distribution
+
A repeated-measures ANOVA showed that $W$ had a significant main effect on ${\sigma }_{{obs}_{y}}$ $\left( {{F}_{4,{44}} = {11.18}, p < {0.001},{\eta }_{p}^{2} = {0.50}}\right)$. Shapiro-Wilk tests showed that the touch points followed Gaussian distributions ($p > 0.05$) under 47 of the 60 conditions ($= {5}_{W} \times {12}_{\text{participants}}$), or 78.3%. Figure 2 shows the regression expression for ${\sigma }_{{obs}_{y}}^{2}$ versus ${W}^{2}$ to validate Equation 4. The assumed linear relationship between these variances held even for touch-pointing operations with an on-screen start, with ${R}^{2} = {0.9353}$. Accordingly, we obtained the following coefficients for Equation 5:
+
+$$
+{\sigma }_{{re}{g}_{y}} = \sqrt{{0.0154}{W}^{2} + {1.0123}} \tag{13}
+$$
+
+---
+
+${}^{1}$ We conducted another measurement called a finger calibration task to replicate the model of FFitts law [8]. The order of Experiments 1 and 2 and the finger calibration task was actually balanced.
+
+---
+
+
+
+Figure 2. Regression between the variance in the $y$ -direction $\left( {\sigma }_{{ob}{s}_{y}}^{2}\right)$ and the target size $\left( {W}^{2}\right)$ in Experiment 1.
+
+
+
+Figure 3. Observed versus predicted success rates in Experiment 1.
+
For comparison, in Bi and Zhai's 2D task [9] using the same $W$ values as in our 1D task, ${\sigma }_{{obs}_{x}}^{2}$ versus ${W}^{2}$ gave ${R}^{2} = {0.9344}$, and ${\sigma }_{{obs}_{y}}^{2}$ versus ${W}^{2}$ gave ${R}^{2} = {0.9756}$. Hence, even for a pointing task with an on-screen start and random target positioning, we could compute the touch-point distribution values (${\sigma }_{{\text{reg}}_{y}}$) for each $W$ by using Equation 13 with accuracy similar to that of their study. The differences between the computed ${\sigma }_{{\text{reg}}_{y}}$ values and observed distributions ${\sigma }_{{obs}_{y}}$ were less than ${0.1}\mathrm{\;{mm}}$ ($< 1$ pixel), as obtained by taking the square roots of the vertical distances between the points and the regression line in Figure 2.
+
+## Success Rate
+
+Among the ${2387}\left( { = {2400} - {13}}\right)$ non-outlier data points, the participants successfully tapped the target in 2194 trials, or ${91.91}\%$ . As shown by the blue bars in Figure 3, the observed success rate increased from 71.55 to 99.79% with the increase in $W$ , which had a significant main effect $\left( {{F}_{4,{44}} = {58.37}, p < }\right.$ ${0.001},{\eta }_{p}^{2} = {0.84})$ . Note that, throughout this paper, the error bars in charts indicate ${SD}$ across all participants.
+
By substituting Equation 13 into Equation 11, we computed the predicted success rate for each $W$, as represented by the red bars in Figure 3. The difference from the observed success rate was 5% at most. These results show that we could accurately predict the success rate solely from the target size $W$, with a mean absolute error (${MAE}$) of 1.657% for $N = 5$ data points. This indicates the applicability of Bi and Zhai's model under our on-screen start condition.
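The predicted rates in Figure 3 can be reproduced by plugging the Equation 13 coefficients into Equation 11:

```python
import math

def sigma_reg_y(W):
    """Endpoint variability on the y-axis for target size W (Equation 13)."""
    return math.sqrt(0.0154 * W**2 + 1.0123)

def predicted_success(W):
    """Equation 11 with the Experiment 1 coefficients substituted in."""
    return math.erf(W / (2 * math.sqrt(2) * sigma_reg_y(W)))

# Predicted success rates (%) for the five tested target widths.
predicted = {W: 100 * predicted_success(W) for W in (2, 4, 6, 8, 10)}
```

This yields roughly 66.5% for $W = 2\mathrm{\;{mm}}$ (versus the observed 71.55%, the worst case of about 5 points) and over 99% for $W \geq 8\mathrm{\;{mm}}$.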
+
+## EXPERIMENT 2: 1D TASK WITH PRESET AMPLITUDES
+
+This task followed the discrete pointing experiment of Bi et al. with specific target amplitudes [8], but with more variety in the values of $A$ and $W$ .
+
+## Task and Design
+
+Figure 1b shows the visual stimulus used in Experiment 2. At the beginning of each trial, a 6-mm-high blue start bar and a $W$ -mm-high green target bar were displayed at random positions with distance $A$ between them and margins of at least ${11}\mathrm{\;{mm}}$ from the top and bottom edges of the screen. When a participant tapped the start bar, it disappeared and a click sounded. Then, if the participant successfully tapped the target, a bell sounded, and the next set of start and target bars was displayed. If the participant missed the target, he or she had to aim at it until successfully tapping it; in such a case, the trial was not restarted from tapping the start bar. The participants were instructed to tap the target as quickly and accurately as possible after tapping the start bar.
+
We included four target distances ($A = 20, 30, 45$, and ${60}\mathrm{\;{mm}}$, or 120, 180, 270, and 358 pixels, respectively) and five target widths ($W = 2, 4, 6, 8$, and ${10}\mathrm{\;{mm}}$, or 12, 24, 36, 48, and 60 pixels, respectively). Each $A \times W$ combination was used for 16 repetitions, following a single repetition of practice trials. We thus recorded ${4}_{A} \times {5}_{W} \times {16}_{\text{repetitions}} \times {12}_{\text{participants}} = {3840}$ data points in total.
+
+## Results
+
+Among the 3840 trials, we removed 4 outlier trials (0.10%) that had tap points at least ${15}\mathrm{\;{mm}}$ from the target center.
+
+## Touch-Point Distribution
+
We found significant main effects of $A$ $\left( {{F}_{3,{33}} = {2.949}, p < {0.05},{\eta }_{p}^{2} = {0.21}}\right)$ and $W$ $\left( {{F}_{4,{44}} = {72.63}, p < {0.001},{\eta }_{p}^{2} = {0.87}}\right)$ on ${\sigma }_{{obs}_{y}}$, but no significant interaction of $A \times W$ $\left( {{F}_{{12},{132}} = {1.371}, p = {0.187},{\eta }_{p}^{2} = {0.11}}\right)$. Shapiro-Wilk tests showed that the touch points followed Gaussian distributions under 218 of the 240 conditions (${4}_{A} \times {5}_{W} \times {12}_{\text{participants}}$), or 90.8%. Figure 4 shows the regression expression for ${\sigma }_{{obs}_{y}}^{2}$ versus ${W}^{2}$, with ${R}^{2} = {0.8141}$ for $N = {4}_{A} \times {5}_{W} = {20}$ data points. When we merged the four ${\sigma }_{{obs}_{y}}^{2}$ values for each $A$, as we did with the movement amplitudes in Experiment 1, we obtained five data points with ${R}^{2} = {0.9171}$ (the regression constants did not change). We used the results of the regression expression to obtain the coefficients in Equation 5:
+
+$$
+{\sigma }_{{\text{reg }}_{y}} = \sqrt{{0.0191}{W}^{2} + {0.9543}} \tag{14}
+$$
+
Using Equation 14, we computed the touch-point distributions (${\sigma }_{{\text{reg}}_{y}}$) for each $W$. The differences between the ${\sigma }_{{\text{reg}}_{y}}$ and ${\sigma }_{{obs}_{y}}$ values were less than ${0.2}\mathrm{\;{mm}}$ ($\sim 1$ pixel). As a check, the average ${\sigma }_{{obs}_{y}}$ values for $A = 20, 30, 45$, and ${60}\mathrm{\;{mm}}$ were 1.345, 1.279, 1.319, and ${1.415}\mathrm{\;{mm}}$, respectively, giving a difference of ${0.136}\mathrm{\;{mm}}$ at most. The conclusion of a small effect of $A$ was supported by a repeated-measures ANOVA: although $A$ had a main effect on ${\sigma }_{{obs}_{y}}$, a pairwise comparison with the Bonferroni correction as the $p$-value adjustment method showed only one pair with a significant difference, between $A = 45$ and ${60}\mathrm{\;{mm}}$ ($p < 0.05$; $\left| {{1.319} - {1.415}}\right| = {0.096}\mathrm{\;{mm}} < 1$ pixel). These results indicate that we could compute the touch-point distributions regardless of $A$ with a certain degree of accuracy in most cases.
+
+
+
+Figure 4. Regression between the variance in the $y$ -direction $\left( {\sigma }_{{ob}{s}_{y}}^{2}\right)$ and the target size $\left( {W}^{2}\right)$ in Experiment 2.
+
+
+
+Figure 5. Observed versus predicted success rates in Experiment 2.
+
+## Success Rate
+
+Among the ${3736}\left( { = {3840} - 4}\right)$ non-outlier data points, the participants successfully tapped the target in 3489 trials, or ${90.95}\%$ . We found significant main effects of $A\left( {{F}_{3,{33}} = }\right.$ ${4.124}, p < {0.05},{\eta }_{p}^{2} = {0.27})$ and $W\left( {{F}_{4,{44}} = {45.03}, p < {0.001}}\right.$ , ${\eta }_{p}^{2} = {0.80}$ ) on the success rate, but no significant interaction of $A \times W\left( {{F}_{{12},{132}} = {0.681}, p = {0.767},{\eta }_{p}^{2} = {0.058}}\right)$ . Figure 5 shows the observed and predicted success rates. The largest difference was ${77.60} - {67.53} = {10.07}\%$ under the condition of $A = {20}\mathrm{\;{mm}} \times W = 2\mathrm{\;{mm}}$ . This is comparable with Bi and Zhai's success-rate prediction [10], in which the largest difference (9.74%) was observed for $W = {2.4}\mathrm{\;{mm}}$ on a 1D vertical bar target. In Experiment 2, the MAE was 3.266% for $N = {20}$ data points.
+
+## EXPERIMENT 3: 2D TASK WITH RANDOM AMPLITUDES
+
+The experimental designs were almost entirely the same as in Experiments 1 and 2, except that circular targets were used in Experiments 3 and 4. Here, the target size $W$ means the circle's diameter. The random target positions were set at least ${11}\mathrm{\;{mm}}$ from the edges of the screen. For Experiment 3, we used the same task design as in Experiment 1: ${5}_{W} \times$ ${40}_{\text{repetitions }} \times {12}_{\text{participants }} = {2400}$ data points.
+
+## Participants
+
Twelve university students, three female and nine male, participated in Experiments 3 and 4. Their ages ranged from 19 to 25 years $\left( {M = {22.2},{SD} = {2.12}}\right)$. They all had normal or corrected-to-normal vision, were right-handed, and were daily smartphone users. Their histories of smartphone usage ranged from 4 to 10 years $\left( {M = {6.17},{SD} = {1.75}}\right)$. For daily usage, nine participants used iOS smartphones, and three used Android smartphones. They each received US\$45 in compensation for performing Experiments 3 and 4. Three of these participants had also performed Experiments 1 and 2.
+
+
+
+Figure 6. Regression between the variances in the $x$ - and $y$ -directions $\left( {\sigma }_{{ob}{s}_{x}}^{2}\right.$ and ${\sigma }_{{ob}{s}_{y}}^{2}$ , respectively) and the target size $\left( {W}^{2}\right)$ in Experiment 3.
+
+
+
+Figure 7. Observed versus predicted success rates in Experiment 3.
+
+## Results
+
+Among the 2400 trials, we removed 33 outlier trials (1.375%) that had tap points at least ${15}\mathrm{\;{mm}}$ from the target center.
+
+## Touch-Point Distribution
+
A repeated-measures ANOVA showed that $W$ had significant main effects on ${\sigma }_{{obs}_{x}}$ $\left( {{F}_{4,{44}} = {15.96}, p < {0.001},{\eta }_{p}^{2} = {0.59}}\right)$ and ${\sigma }_{{obs}_{y}}$ $\left( {{F}_{4,{44}} = {25.71}, p < {0.001},{\eta }_{p}^{2} = {0.70}}\right)$. Shapiro-Wilk tests indicated that the touch points on the $x$- and $y$-axes followed Gaussian distributions for 55 (91.7%) and 53 (88.3%) of the 60 conditions, respectively ($p > 0.05$). Under 41 (68.3%) conditions, the touch points followed bivariate Gaussian distributions. Figure 6 shows the regression expressions for ${\sigma }_{{obs}_{x}}^{2}$ and ${\sigma }_{{obs}_{y}}^{2}$ versus ${W}^{2}$. From these results, we obtained the coefficients in Equation 5 on the $x$- and $y$-axes:
+
+$$
+{\sigma }_{{\text{reg }}_{x}} = \sqrt{{0.0096}{W}^{2} + {0.8079}} \tag{15}
+$$
+
+$$
+{\sigma }_{{\text{reg }}_{y}} = \sqrt{{0.0117}{W}^{2} + {0.8076}} \tag{16}
+$$
+
Using Equations 15 and 16, we computed the touch-point distributions (${\sigma }_{{\text{reg}}_{x}}$ and ${\sigma }_{{\text{reg}}_{y}}$) for each $W$. The differences between the computed ${\sigma }_{\text{reg}}$ and observed ${\sigma }_{\text{obs}}$ values were at most 0.05 and ${0.2}\mathrm{\;{mm}}$ for the $x$- and $y$-axes, respectively.
+
+## Success Rate
+
+Among the ${2367}\left( { = {2400} - {33}}\right)$ non-outlier data points, the participants successfully tapped the target in 2017 trials, or ${85.21}\%$ . As shown by the blue bars in Figure 7, the observed success rate increased from 50.84 to 99.58% with $W$ , which had a significant main effect $\left( {{F}_{4,{44}} = {59.24}, p < {0.001},{\eta }_{p}^{2} = }\right.$ 0.84).
+
We computed the predicted success rates for each $W$, as represented by the red bars, by substituting Equations 15 and 16 into Equation 9. The differences from the observed success rates were all under 7%. These results show that we could accurately predict the success rate from the target size $W$, with ${MAE} = {3.082}\%$ for $N = 5$ data points.
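The 2D predictions can be reproduced by numerically integrating Equation 9 with the Experiment 3 coefficients (Equations 15 and 16); the grid-based quadrature below is one simple way to do so:

```python
import numpy as np

def predicted_success_exp3(W, grid=801):
    """Equation 9 evaluated on a grid for a circular target of diameter W (mm),
    with sigma_reg taken from Equations 15 and 16."""
    sx = np.sqrt(0.0096 * W**2 + 0.8079)  # Equation 15
    sy = np.sqrt(0.0117 * W**2 + 0.8076)  # Equation 16
    r = W / 2.0
    xs = np.linspace(-r, r, grid)
    x, y = np.meshgrid(xs, xs)
    pdf = np.exp(-x**2 / (2 * sx**2) - y**2 / (2 * sy**2)) / (2 * np.pi * sx * sy)
    cell = (xs[1] - xs[0]) ** 2           # area of one grid cell
    return float(np.sum(pdf[x**2 + y**2 <= r**2]) * cell)
```

For $W = 2\mathrm{\;{mm}}$ this gives roughly 44-45%, about 6 points below the observed 50.84%, consistent with the under-7% differences reported above.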
+
+
+
+Figure 8. Regression between the variances in the $x$ - and $y$ -directions $\left( {\sigma }_{{ob}{s}_{x}}^{2}\right.$ and $\left. {\sigma }_{{ob}{s}_{y}}^{2}\right)$ and the target size $\left( {W}^{2}\right)$ for all data points $\left( {N = {20}}\right)$ in Experiment 4.
+
+## EXPERIMENT 4: 2D TASK WITH PRESET AMPLITUDES
+
We used the same task design as in Experiment 2: ${4}_{A} \times {5}_{W} \times {16}_{\text{repetitions}} \times {12}_{\text{participants}} = {3840}$ data points. Figure 1c shows the visual stimulus.
+
+## Results
+
+Among the 3840 trials, we removed 9 outlier trials (0.23%) having tap points at least ${15}\mathrm{\;{mm}}$ from the target center.
+
+## Touch-Point Distribution
+
For ${\sigma }_{{obs}_{x}}$, we found a significant main effect of $W$ $\left( {{F}_{4,{44}} = {24.12}, p < {0.001},{\eta }_{p}^{2} = {0.69}}\right)$, but not of $A$ $\left( {{F}_{3,{33}} = {0.321}, p = {0.810},{\eta }_{p}^{2} = {0.028}}\right)$. No significant interaction of $A \times W$ was found $\left( {{F}_{{12},{132}} = {0.950}, p = {0.500},{\eta }_{p}^{2} = {0.079}}\right)$. For ${\sigma }_{{obs}_{y}}$, we found significant main effects of $A$ $\left( {{F}_{3,{33}} = {3.833}, p < {0.05},{\eta }_{p}^{2} = {0.26}}\right)$ and $W$ $\left( {{F}_{4,{44}} = {48.35}, p < {0.001},{\eta }_{p}^{2} = {0.82}}\right)$, but no significant interaction of $A \times W$ $\left( {{F}_{{12},{132}} = {1.662}, p = {0.082},{\eta }_{p}^{2} = {0.13}}\right)$. Shapiro-Wilk tests showed that the touch points on the $x$- and $y$-axes followed Gaussian distributions for 224 (93.3%) and 218 (90.8%) of the 240 conditions, respectively ($p > 0.05$). Under 184 (76.7%) conditions, the touch points followed bivariate Gaussian distributions.
+
+Figure 8 shows the regression expressions for ${\sigma }_{{ob}{s}_{x}}^{2}$ and ${\sigma }_{{ob}{s}_{y}}^{2}$ versus ${W}^{2}$ , with ${R}^{2} = {0.8201}$ and 0.7347, respectively, for $N = {20}$ data points. When we merged the four ${\sigma }_{{ob}{s}_{x}}^{2}$ and ${\sigma }_{{ob}{s}_{y}}^{2}$ values for each $A$ , we obtained $N = 5$ data points with ${R}^{2} = {0.9137}$ and 0.9425, respectively (the regression constants did not change). From the regression expression results, we obtained the coefficients of Equation 5:
+
+$$
+\sigma_{reg_x} = \sqrt{0.0111 W^2 + 0.9227} \tag{17}
+$$
+
+$$
+\sigma_{reg_y} = \sqrt{0.0188 W^2 + 0.8366} \tag{18}
+$$
+
+Using Equations 17 and 18, we computed the touch-point distributions ($\sigma_{reg_x}$ and $\sigma_{reg_y}$) under each condition of $A \times W$. The differences between the computed $\sigma_{reg}$ and observed $\sigma_{obs}$ values were less than 0.2 mm on the $x$-axis and less than 0.5 mm on the $y$-axis. The differences were comparatively greater for $\sigma_{obs_y}$ because $A$ significantly affected $\sigma_{obs_y}$.
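+Equations 17 and 18 can be evaluated directly to predict the touch-point spread for any target width. As a small sketch (the function name is ours, not from the paper):
+
+```python
+import math
+
+def sigma_reg(W_mm):
+    """Predicted touch-point spreads (mm) on the x- and y-axes for a
+    target of width W_mm, using the coefficients of Equations 17 and 18."""
+    sigma_x = math.sqrt(0.0111 * W_mm**2 + 0.9227)
+    sigma_y = math.sqrt(0.0188 * W_mm**2 + 0.8366)
+    return sigma_x, sigma_y
+```
+
+For example, `sigma_reg(6.0)` gives roughly (1.15, 1.23) mm.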
+
+## Success Rate
+
+Among the remaining 3831 ($= 3840 - 9$) non-outlier data points, the participants successfully tapped the target in 3145 trials, or 82.09%. We found a significant main effect of $W$ ($F_{4,44} = 120.0$, $p < 0.001$, $\eta_p^2 = 0.92$) on the success rate, but not of $A$ ($F_{3,33} = 2.100$, $p = 0.119$, $\eta_p^2 = 0.16$). The interaction of $A \times W$ was not significant ($F_{12,132} = 0.960$, $p = 0.490$, $\eta_p^2 = 0.080$). Figure 9 shows the observed and predicted success rates. The largest difference was $95.80 - 85.94 = 9.86\%$ for $A = 20$ mm and $W = 6$ mm. The MAE was 3.671% for $N = 20$ data points.
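+Under the dual Gaussian hypothesis, a tap succeeds when the (approximately Gaussian) touch point falls inside the target. A simplified 1D sketch, assuming a zero-mean Gaussian touch distribution (the full model in [10] also handles non-zero mean offsets and 2D targets):
+
+```python
+import math
+
+def success_rate_1d(W_mm, sigma_mm):
+    """P(|touch offset| < W/2) for a zero-mean Gaussian offset with
+    standard deviation sigma_mm, i.e., the Gaussian mass inside the target."""
+    return math.erf(W_mm / (2.0 * math.sqrt(2.0) * sigma_mm))
+```
+
+Larger targets and smaller spreads both push the predicted rate toward 100%, matching the trends reported above.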
+
+
+
+Figure 9. Observed versus predicted success rates in Experiment 4.
+
+
+
+Figure 10. Predicted success rate with respect to the target size $W$ .
+
+## DISCUSSION
+
+## Prediction Accuracy of Success Rates
+
+Throughout the experiments, the prediction errors were about as low as in Bi and Zhai's pointing tasks with an off-screen start [10]: 10.07% at most in our case (for $A = 20$ mm and $W = 2$ mm in Experiment 2), versus 9.74% at most in their case (for $W = 2.4$ mm). As in their study, we found that the success rate approached 100% as $W$ increased; thus, the prediction errors tended to become smaller. Therefore, the model accuracy should be judged from the prediction errors for small targets.
+
+The largest prediction error in our experiments was under the condition of $W = 2$ mm in the 1D task. Similarly, the largest prediction error in Bi and Zhai's experiments was under the condition of $W = 2.4$ mm in the 1D (vertical target) task [10]. While Bi and Zhai checked the prediction errors under nine conditions in total [10] (three $W$ values for three target shapes), we checked the prediction errors under $5 + 20 + 5 + 20 = 50$ conditions (Experiments 1 to 4, respectively), which may have given more chances to show a higher prediction error. In addition, although we used 2 mm as the smallest $W$ for consistency with Bi and Zhai's study on the Bayesian Touch Criterion [9], such a small target is not often used in practical touch UIs. Therefore, the slightly larger prediction error in our results should be less critical in actual usage.
+
+We also found that our concern that the prediction accuracy might drop depending on the $A$ values was not a critical issue as compared with tasks using an off-screen start [10]. Hence, the comparable prediction accuracy observed in our experiments empirically shows that Bi and Zhai's model can be applied to pointing tasks with an on-screen start, regardless of whether the effect of $A$ is averaged (Experiments 1 and 3) or not (Experiments 2 and 4).
+
+Figure 10 plots the predicted success rate with respect to $W$, which can help designers choose the appropriate size for a GUI item. It also illustrates why measuring success rates for multiple $W$ values through costly user studies scales poorly. For example, from the data in Experiment 4, the success rates for $W = 7$ and 10 mm do not differ much, while the curve rises sharply from $W = 1$ to 6 mm. Hence, even if the error rate is measured for $W = 2$, 6, and 10 mm, for example, it would be difficult to accurately predict error rates for other $W$ values such as 3 mm. Therefore, without an appropriate success-rate prediction model, designers have to conduct user studies with fine-grained $W$ values, e.g., 1 to 10 mm at 1-mm intervals${}^{2}$.
+
+Regarding UI designs, how would prediction errors affect the display layout? In our worst case, for $W = 2$ mm (Experiment 2), the actual success rate was 77%, but the predicted rate was 67%. If designers want a hyperlink to have a 77% success rate, they might set $W = 2.4$ mm according to the model shown in Figure 10. This 0.4-mm "excess" height could be negligible. When a space has a height of 12 mm, however, designers can arrange only five 2.4-mm hyperlinks, whereas in actuality six links could fit in that space with the intended touch accuracy. Still, this is the worst case; for more practical $W$ values, this negative effect would become less significant as the prediction accuracy increases.
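+Reading the required target size off a success-rate curve like Figure 10 can also be done numerically. A design-aid sketch (not the paper's procedure) that inverts the simplified zero-mean 1D Gaussian model by bisection; the function name and parameter values are ours:
+
+```python
+import math
+
+def min_width_for_rate(target_rate, sigma_mm, lo=0.01, hi=50.0):
+    """Smallest 1D target width (mm) whose predicted success rate
+    erf(W / (2*sqrt(2)*sigma)) reaches target_rate; solved by bisection
+    on the monotonically increasing rate(W) curve."""
+    for _ in range(60):
+        mid = 0.5 * (lo + hi)
+        if math.erf(mid / (2.0 * math.sqrt(2.0) * sigma_mm)) < target_rate:
+            lo = mid
+        else:
+            hi = mid
+    return hi
+```
+
+For instance, with a 1.2 mm spread, reaching a 95% success rate requires a width of about 4.7 mm under this simplified model.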
+
+## Adequacy of Experiments
+
+In our experiments, the endpoint distributions were not normal in some cases. One might think that those results violate the assumption of the dual Gaussian distribution model. To visually check the distributions, Figure 11 shows the histograms and 95% confidence ellipses of tap positions (see the Supplementary Materials for all results, including Experiments 2 and 4). We can see that some conditions do not exhibit normal distributions, e.g., Figure 11c. This could be partly due to the small numbers of trials in our experiments: 40 repetitions per condition in Experiments 1 and 3, and 16 in Experiments 2 and 4. Still, according to the central limit theorem, it is reasonable to assume that the distributions would approach Gaussian distributions after a sufficient number of trials.
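+The 2D predictions can likewise be sanity-checked by simulation. A Monte Carlo sketch with illustrative parameter values (assumed, not the paper's data), using independent zero-mean Gaussian offsets on each axis:
+
+```python
+import math
+import random
+
+random.seed(42)
+
+def simulated_2d_success(W_mm, sx, sy, n=200_000):
+    """Monte Carlo estimate of P(tap inside a circular target of
+    diameter W_mm) for zero-mean Gaussian x/y offsets (sx, sy in mm)."""
+    r = W_mm / 2.0
+    hits = sum(1 for _ in range(n)
+               if random.gauss(0.0, sx) ** 2 + random.gauss(0.0, sy) ** 2 <= r * r)
+    return hits / n
+```
+
+With equal spreads ($sx = sy = \sigma$), the estimate converges to the closed form $1 - \exp(-r^2 / (2\sigma^2))$, which offers a quick cross-check of any 2D prediction.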
+
+We also checked the Fitts' law fitness. Using the Shannon formulation [35] with nominal $A$ and $W$ values, we found that the error-free $MT$ data showed excellent fits${}^{3}$ for Experiments 2 and 4, respectively, using $N = 20$ data points:
+
+$$
+{MT} = {132.0} + {90.29} \times {\log }_{2}\left( {A/W + 1}\right) ,{R}^{2} = {0.9807} \tag{19}
+$$
+
+$$
+{MT} = {114.3} + {97.91} \times {\log }_{2}\left( {A/W + 1}\right) ,{R}^{2} = {0.9900} \tag{20}
+$$
+
+
+
+Figure 11. Histograms and 95% confidence ellipses using all the error-free data in Experiments (a-c) 1 and (d-f) 3. For 1D tasks, the histograms show the frequencies of tap positions, the dashed curves show the normal distributions using the mean and $\sigma_{obs_y}$ data, the two red bars are the borderlines of the target, and the black bar shows the mean of the tap positions. For 2D tasks, blue dots are tap positions, light blue ellipses are 95% confidence ellipses of tap positions, and red dashed circles are target areas. For all tasks, the 0 mm positions on the $x$- and $y$-axes are aligned to the centers of the targets.
+
+The indexes of performance, $IP$ ($= 1/b$), were 11.08 and 10.21 bits/s, close to those in Pedersen and Hornbæk's report on error-free $MT$ analysis (11.11-12.50 bits/s for 1D touch-pointing tasks) [41]. Therefore, we conclude that both participant groups appropriately followed our instruction to balance speed and accuracy.
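+For concreteness, the Equation 19 fit and the reported index of performance can be reproduced directly (coefficients taken from Equation 19 for Experiment 2; the function name is ours):
+
+```python
+import math
+
+def fitts_mt(A_mm, W_mm, a=132.0, b=90.29):
+    """Error-free movement time (ms) from the Shannon-formulation
+    Fitts' law fit of Equation 19 (a in ms, b in ms/bit)."""
+    return a + b * math.log2(A_mm / W_mm + 1.0)
+
+# Index of performance IP = 1/b, with b in ms/bit, expressed in bits/s.
+IP = 1000.0 / 90.29  # about 11.08 bits/s
+```
+
+When $A = W$, the index of difficulty is exactly 1 bit, so the predicted $MT$ is $a + b$.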
+
+## Internal and External Validity of Prediction Parameters
+
+Because the main scope of our study did not include testing the external validity of the prediction parameters in equations like Equation 18, it is sufficient that the observed and predicted success rates matched internally within the participant group, as shown by our experimental results. Yet, it is still worth discussing the external validity of the prediction parameters in the hope of gaining a better understanding of the dual Gaussian distribution hypothesis.
+
+A common way to check external validity is to apply obtained parameters to data from different participants (e.g., [11]). Bi and Zhai measured the parameters of Equations 6 and 7 in their experiment on the Bayesian Touch Criterion [9]. Those parameters were then used in Equations 9 and 11 to predict the success rates [10]. Because the participants in those two studies differed, the parameters of Equations 6 and 7 could have had external validity. Bi, Li, and Zhai stated, "Assuming finger size and shape do not vary drastically across users, ${\sigma }_{a}$ could be used across users as an approximation." The reason was that the ${\sigma }_{a}$ values measured in their 2D discrete pointing tasks were suitable for a key-typing task performed by a different group of participants [8].
+
+The top panel of Figure 12 shows the predicted success rates in the 1D horizontal bar pointing tasks. In addition to the prediction data reported in Figures 3 and 5, we also computed the predicted success rates by using the $\sigma_{obs_y}$ values measured in the 2D tasks of Experiments 3 and 4. The actual success rate in Experiment 1 under the condition of $W = 2$ mm was 71.55% (Figure 3), and those in Experiment 2 ranged from 71.73 to 77.60% (Figure 5). Therefore, we conclude that using the $\sigma_{obs_y}$ values measured in the 2D tasks would allow us to predict more accurate success rates. Using Bi and Zhai's generic $\sigma_{obs_y}$ value [10] also allows us to predict the success rate (60.66%), but it is not as close to the actual data as ours. Note that three students participated on both days of our study, so this is not a complete comparison as an external validity check.
+
+---
+
+${}^{2}$ In fact, 1-mm intervals are still not sufficient: the predicted success rate "jumps up" from 41.3 to 67.0% for $W = 2$ and 3 mm, respectively.
+
+${}^{3}$ Results for the effective width method [13, 35] and FFitts law [8], which take failure trials into account, were also analyzed. Because of the space limitation, we decided to focus on success-rate prediction in this paper.
+
+---
+
+
+
+Figure 12. Comparison of predicted success rates from our data and Bi and Zhai's [9] for (top) 1D and (bottom) 2D tasks.
+
+We also tried to determine whether the success rates in the 2D tasks could be predicted from Bi and Zhai's data, as shown in the bottom panel of Figure 12. Because Bi and Zhai's data for $\sigma_{a_x}$ and $\sigma_{a_y}$ were larger than ours, their predicted success rates tended to be lower. Furthermore, because the actual success rate was over 50% for $W = 2$ mm in Experiment 3 (Figure 7), Bi and Zhai's prediction parameters could not be used to predict the success rates in our experiments. Note that using Bi and Zhai's prediction parameters for the index finger [10] would not influence this conclusion.
+
+One possible explanation for why Bi and Zhai's parameters were appropriate for their predictions but not for ours is the participants' ages. While the age ranges of their participants were 26-49 years for parameter measurement [9] and 28-45 years for success-rate prediction [10], our participants were university students aged 19-25. Assuming that the cognitive and physical skills and sensory abilities of older adults are relatively lower than those of younger persons [47], it is reasonable that the $\sigma_{a_x}$ and $\sigma_{a_y}$ values measured in our experiments were smaller than those in Bi and Zhai's. This result supports Bi, Li, and Zhai's hypothesis that $\sigma_a$ may vary with an individual's finger size or motor impairment (e.g., tremor, or lack thereof) [8]. The fact that the model parameters $\alpha$ and $\sigma_a$ can change depending on the user group, and thus affect the success-rate prediction accuracy, is an empirically demonstrated limitation on the generalizability of the dual Gaussian distribution hypothesis. This is one of the novel findings of our study, as it has not previously been shown with such evidence.
+
+To accurately predict the success rate when the age range of the main users of an app or the main visitors to a smartphone website is known (e.g., teenagers), we suggest that designers choose appropriate participants for measuring the prediction parameters $\alpha$ and $\sigma_a$. Such a methodology of designing UIs differently according to the users' age has already been adopted in websites and apps. For example, Leitão and Silva listed various apps with large buttons and swipe widgets suitable for older adults, and perhaps also for users with presbyopia (Figures 1-11 in [31]). On YouTube Kids [1], the button size is auto-personalized depending on the age listed in the user's account information. Our results can help such optimization and personalization according to the characteristics of target users.
+
+## Limitations and Future Work
+
+Our findings are somewhat limited by the experimental conditions, such as the $A$ and $W$ values used in the tasks. In particular, much longer $A$ values have been tested in touch-pointing studies, e.g., 20 cm [34]. Hence, our conclusions are limited to small screens. The limited range of $A$ values provides one possible reason why we observed only one pair with a significant difference in $\sigma_{obs}$ (between $A = 45$ and 60 mm in Experiment 2). If we tested much longer $A$ values, the ballistic portion of rapidly aimed movements might affect $\sigma_{obs}$ [15, 56] and change the resultant prediction accuracy. In addition, Bi and Zhai measured prediction parameters both for the thumb in a one-handed posture and for the index finger [9], and they also measured the success rates in 1D pointing with a vertical bar target [10]. Conducting user studies under such conditions would provide additional contributions.
+
+Our experiments required the participants to balance speed and accuracy. In other words, the participants could spend sufficient time if necessary. The success rate has been shown to vary nonlinearly depending on whether users shorten the operation time or aim carefully [53, 54]${}^{4}$. Our experimental instructions covered just one case among the various situations of touch selection.
+
+## CONCLUSION
+
+We discussed the applicability of Bi and Zhai’s success-rate prediction model [10] to pointing tasks starting on-screen. The potential concern about an on-screen start in such tasks was that the movement distance $A$ is both implicitly and explicitly defined, and previous studies suggested that the $A$ value would influence the endpoint variability. We empirically showed the validity of the model in four experiments. The prediction error was at most 10.07% among 50 conditions in total. Our results indicate that designers and researchers can accurately predict the success rate by using a single model, regardless of whether a user taps a certain GUI item by moving a finger to the screen or keeping it close to the surface as in keyboard typing. Our findings will be beneficial for designing better touch GUIs and for automatically generating and optimizing UIs.
+
+---
+
+${}^{4}$ For more examples of nonlinear relationships in the speed-accuracy tradeoff on tasks other than pointing, see [51].
+
+---
+
+## REFERENCES
+
+[1] YouTube Kids. (2015). https://apps.apple.com/us/app/youtube-kids/id936971630, or https://play.google.com/store/apps/details?id=com.google.android.apps.youtube.kids.
+
+[2] Oscar Kin-Chung Au, Xiaojun Su, and Rynson W.H. Lau. 2014. LinearDragger: A Linear Selector for One-finger Target Acquisition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 2607-2616. DOI:
+
+http://dx.doi.org/10.1145/2556288.2557096
+
+[3] Daniel Avrahami. 2015. The Effect of Edge Targets on Touch Performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1837-1846. DOI:
+
+http://dx.doi.org/10.1145/2702123.2702439
+
+[4] Shiri Azenkot and Shumin Zhai. 2012. Touch Behavior with Different Postures on Soft Smartphone Keyboards. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services (MobileHCI '12). ACM, New York, NY, USA, 251-260. DOI:
+
+http://dx.doi.org/10.1145/2371574.2371612
+
+[5] Gilles Bailly, Antti Oulasvirta, Timo Kötzing, and Sabrina Hoppe. 2013. MenuOptimizer: Interactive Optimization of Menu Systems. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST '13). ACM, New York, NY, USA, 331-342. DOI:
+
+http://dx.doi.org/10.1145/2501988.2502024
+
+[6] W. D. A. Beggs, Jacqueline A. Andrew, Martha L. Baker, S. R. Dove, Irene Fairclough, and C. I. Howarth. 1972. The accuracy of non-visual aiming. Quarterly Journal of Experimental Psychology 24, 4 (1972), 515-523. DOI:
+
+http://dx.doi.org/10.1080/14640747208400311
+
+[7] W. D. A. Beggs, Ruth Sakstein, and C. I. Howarth. 1974. The Generality of a Theory of the Intermittent Control of Accurate Movements. Ergonomics 17, 6 (1974), 757-768. DOI: http://dx.doi.org/10.1080/00140137408931422
+
+[8] Xiaojun Bi, Yang Li, and Shumin Zhai. 2013. FFitts Law: Modeling Finger Touch with Fitts' Law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 1363-1372. DOI: http://dx.doi.org/10.1145/2470654.2466180
+
+[9] Xiaojun Bi and Shumin Zhai. 2013. Bayesian touch: a statistical criterion of target selection with finger touch. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '13). 51-60. DOI:http://dx.doi.org/10.1145/2501988.2502058
+
+[10] Xiaojun Bi and Shumin Zhai. 2016. Predicting Finger-Touch Accuracy Based on the Dual Gaussian Distribution Model. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 313-319. DOI: http://dx.doi.org/10.1145/2984511.2984546
+
+[11] Xiang Cao and Shumin Zhai. 2007. Modeling Human Performance of Pen Stroke Gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07). ACM, New York, NY, USA, 1495-1504. DOI: http://dx.doi.org/10.1145/1240624.1240850
+
+[12] Andy Cockburn, David Ahlström, and Carl Gutwin. 2012. Understanding performance in touch selections: Tap, drag and radial pointing drag with finger, stylus and mouse. International Journal of Human-Computer Studies 70, 3 (2012), 218-233. DOI: https://doi.org/10.1016/j.ijhcs.2011.11.002
+
+[13] Edward R.F.W. Crossman. 1956. The speed and accuracy of simple hand movements. Ph.D. Dissertation. University of Birmingham.
+
+[14] Jan Eggers, Dominique Feillet, Steffen Kehl, Marc Oliver Wagner, and Bernard Yannou. 2003. Optimization of the keyboard arrangement problem using an Ant Colony algorithm. European Journal of Operational Research 148, 3 (2003), 672-686. DOI: http://dx.doi.org/10.1016/S0377-2217(02)00489-7
+
+[15] Digby Elliott, Werner Helsen, and Romeo Chua. 2001. A century later: Woodworth's (1899) two-component model of goal-directed aiming. Psychological Bulletin 127, 3 (2001), 342-357. DOI: http://dx.doi.org/10.1037/0033-2909.127.3.342
+
+[16] Paul M. Fitts. 1954. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology 47, 6 (1954), 381-391. DOI: http://dx.doi.org/10.1037/h0055392
+
+[17] Krzysztof Gajos and Daniel S. Weld. 2004. SUPPLE: Automatically Generating User Interfaces. In Proceedings of the 9th International Conference on Intelligent User Interfaces (IUI '04). ACM, New York, NY, USA, 93-100. DOI: http://dx.doi.org/10.1145/964442.964461
+
+[18] Khai-Chung Gan and Errol R. Hoffmann. 1988. Geometrical conditions for ballistic and visually controlled movements. Ergonomics 31, 5 (1988), 829-839. DOI: http://dx.doi.org/10.1080/00140138808966724
+
+[19] Julien Gori, Olivier Rioul, and Yves Guiard. 2018. Speed-Accuracy Tradeoff: A Formal Information-Theoretic Transmission Scheme (FITTS). ACM Trans. Comput.-Hum. Interact. 25, 5, Article 27 (Sept. 2018), 33 pages. DOI: http://dx.doi.org/10.1145/3231595
+
+[20] Tovi Grossman and Ravin Balakrishnan. 2005. A Probabilistic Approach to Modeling Two-dimensional Pointing. ACM Trans. Comput.-Hum. Interact. 12, 3 (Sept. 2005), 435-459. DOI: http://dx.doi.org/10.1145/1096737.1096741
+
+[21] Tovi Grossman, Nicholas Kong, and Ravin Balakrishnan. 2007. Modeling Pointing at Targets of Arbitrary Shapes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07). ACM, New York, NY, USA, 463-472. DOI:
+
+http://dx.doi.org/10.1145/1240624.1240700
+
+[22] Niels Henze, Enrico Rukzio, and Susanne Boll. 2012. Observational and Experimental Investigation of Typing Behaviour Using Virtual Keyboards for Mobile Devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2659-2668. DOI: http://dx.doi.org/10.1145/2207676.2208658
+
+[23] Errol R. Hoffmann. 2016. Critical Index of Difficulty for Different Body Motions: A Review. Journal of Motor Behavior 48, 3 (2016), 277-288. DOI:
+
+http://dx.doi.org/10.1080/00222895.2015.1090389
+
+[24] Christian Holz and Patrick Baudisch. 2010. The Generalized Perceived Input Point Model and How to Double Touch Accuracy by Extracting Fingerprints. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 581-590. DOI:
+
+http://dx.doi.org/10.1145/1753326.1753413
+
+[25] Christian Holz and Patrick Baudisch. 2011. Understanding Touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 2501-2510. DOI:http://dx.doi.org/10.1145/1978942.1979308
+
+[26] C.I. Howarth, W.D.A. Beggs, and J.M. Bowden. 1971. The relationship between speed and accuracy of movement aimed at a target. Acta Psychologica 35, 3 (1971), 207-218. DOI: https://doi.org/10.1016/0001-6918(71)90022-9
+
+[27] Jin Huang, Feng Tian, Xiangmin Fan, Xiaolong (Luke) Zhang, and Shumin Zhai. 2018. Understanding the Uncertainty in 1D Unidirectional Moving Target Selection. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 237, 12 pages. DOI: http://dx.doi.org/10.1145/3173574.3173811
+
+[28] Byungjoo Lee, Sunjun Kim, Antti Oulasvirta, Jong-In Lee, and Eunji Park. 2018. Moving Target Selection: A Cue Integration Model. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 230, 12 pages. DOI:http://dx.doi.org/10.1145/3173574.3173804
+
+[29] Byungjoo Lee and Antti Oulasvirta. 2016. Modelling Error Rates in Temporal Pointing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 1857-1868. DOI:
+
+http://dx.doi.org/10.1145/2858036.2858143
+
+[30] Injung Lee, Sunjun Kim, and Byungjoo Lee. 2019. Geometrically Compensating Effect of End-to-End Latency in Moving-Target Selection Games. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA. DOI:
+
+http://dx.doi.org/10.1145/3290605.3300790
+
+[31] Roxanne Leitão and Paula Alexandra Silva. 2012. Target and Spacing Sizes for Smartphone User Interfaces for Older Adults: Design Patterns Based on an Evaluation with Users. In Proceedings of the 19th Conference on Pattern Languages of Programs (PLoP '12). The Hillside Group, USA, Article 5, 13 pages. http://dl.acm.org/citation.cfm?id=2821679.2831275
+
+[32] Jui-Feng Lin and Colin G. Drury. 2009. Modeling Fitts' law. In Proceedings of the 9th Pan-Pacific Conference on Ergonomics.
+
+[33] Jui-Feng Lin, Colin G. Drury, Mark H. Karwan, and Victor Paquet. 2009. A general model that accounts for Fitts' law and Drury's model. In Proceedings of the 17th Congress of the International Ergonomics.
+
+[34] Yuexing Luo and Daniel Vogel. 2014. Crossing-based Selection with Direct Touch Input. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 2627-2636. DOI:
+
+http://dx.doi.org/10.1145/2556288.2557397
+
+[35] I. Scott MacKenzie. 1992. Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction 7, 1 (1992), 91-139. DOI: http://dx.doi.org/10.1207/s15327051hci0701_3
+
+[36] David E. Meyer, Richard A. Abrams, Sylvan Kornblum, Charles E. Wright, and J. E. Keith Smith. 1988. Optimality in human motor performance: ideal control of rapid aimed movements. Psychological Review 95, 3 (1988), 340-370. DOI:
+
+http://dx.doi.org/10.1037/0033-295X.95.3.340
+
+[37] David E. Meyer, J. E. Keith Smith, and Charles E. Wright. 1982. Models for the speed and accuracy of aimed movements. Psychological Review 89, 5 (1982), 449-482. DOI:
+
+http://dx.doi.org/10.1037/0033-295X.89.5.449
+
+[38] Tomer Moscovich. 2009. Contact Area Interaction with Sliding Widgets. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09). ACM, New York, NY, USA, 13-22. DOI: http://dx.doi.org/10.1145/1622176.1622181
+
+[39] Jeffrey Nichols, Brad A. Myers, and Kevin Litwack. 2004. Improving Automatic Interface Generation with Smart Templates. In Proceedings of the 9th International Conference on Intelligent User Interfaces (IUI '04). ACM, New York, NY, USA, 286-288. DOI: http://dx.doi.org/10.1145/964442.964507
+
+[40] Eunji Park and Byungjoo Lee. 2018. Predicting Error Rates in Pointing Regardless of Target Motion. https://arxiv.org/abs/1806.02973
+
+[41] Esben Warming Pedersen and Kasper Hornbæk. 2012. An Experimental Comparison of Touch Interaction on Vertical and Horizontal Surfaces. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (NordiCHI '12). ACM, New York, NY, USA, 370-379. DOI: http://dx.doi.org/10.1145/2399016.2399074
+
+[42] Katrin Plaumann, Milos Babic, Tobias Drey, Witali Hepting, Daniel Stooss, and Enrico Rukzio. 2018. Improving Input Accuracy on Smartphones for Persons Who Are Affected by Tremor Using Motion Sensors. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 4, Article 156 (Jan. 2018), 30 pages. DOI: http://dx.doi.org/10.1145/3161169
+
+[43] R. L. Potter, L. J. Weldon, and B. Shneiderman. 1988. Improving the Accuracy of Touch Screens: An Experimental Evaluation of Three Strategies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '88). ACM, New York, NY, USA, 27-32. DOI: http://dx.doi.org/10.1145/57167.57171
+
+[44] Richard A. Schmidt, Howard N. Zelaznik, Bob Hawkins, James S. Frank, and J. T. Quinn. 1979. Motor-output variability: a theory for the accuracy of rapid motor acts. Psychological Review 86, 5 (1979), 415-451.
+
+[45] R. William Soukoreff and I. Scott MacKenzie. 2004. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts' law research in HCI. International Journal of Human-Computer Studies 61, 6 (2004), 751-789. DOI: http://dx.doi.org/10.1016/j.ijhcs.2004.09.001
+
+[46] Daniel Vogel and Patrick Baudisch. 2007. Shift: A Technique for Operating Pen-based Interfaces Using Touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07). ACM, New York, NY, USA, 657-666. DOI: http://dx.doi.org/10.1145/1240624.1240727
+
+[47] Neff Walker, D. A. Philbin, and Arthur Fisk. 1997. Age-Related Differences in Movement Control: Adjusting Submovement Structure To Optimize Performance. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences 52 (1997), 40-52. DOI: http://dx.doi.org/10.1093/geronb/52B.1.P40
+
+[48] Stephen A. Wallace and Karl M. Newell. 1983. Visual control of discrete aiming movements. The Quarterly Journal of Experimental Psychology Section A 35, 2 (1983), 311-321. DOI: http://dx.doi.org/10.1080/14640748308402136
+
+[49] Feng Wang and Xiangshi Ren. 2009. Empirical Evaluation for Finger Input Properties in Multi-touch Interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 1063-1072. DOI: http://dx.doi.org/10.1145/1518701.1518864
+
+[50] Daryl Weir, Simon Rogers, Roderick Murray-Smith, and Markus Löchtefeld. 2012. A User-specific Machine Learning Approach for Improving Touch Accuracy on Mobile Devices. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12). ACM, New York, NY, USA, 465-476. DOI: http://dx.doi.org/10.1145/2380116.2380175
+
+[51] Wayne A. Wickelgren. 1977. Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica 41, 1 (1977), 67-85. DOI:
+
+http://dx.doi.org/10.1016/0001-6918(77)90012-9
+
+[52] Daniel Wigdor, Sarah Williams, Michael Cronin, Robert Levy, Katie White, Maxim Mazeev, and Hrvoje Benko. 2009. Ripples: Utilizing Per-contact Visualizations to Improve User Interaction with Touch Displays. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09). ACM, New York, NY, USA, 3-12. DOI: http://dx.doi.org/10.1145/1622176.1622180
+
+[53] Jacob O. Wobbrock, Edward Cutrell, Susumu Harada, and I. Scott MacKenzie. 2008. An Error Model for Pointing Based on Fitts' Law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 1613-1622. DOI:
+
+http://dx.doi.org/10.1145/1357054.1357306
+
+[54] Jacob O. Wobbrock, Alex Jansen, and Kristen Shinohara. 2011a. Modeling and Predicting Pointing Errors in Two Dimensions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1653-1656. DOI: http://dx.doi.org/10.1145/1978942.1979183
+
+[55] Jacob O. Wobbrock, Kristen Shinohara, and Alex Jansen. 2011b. The Effects of Task Dimensionality, Endpoint Deviation, Throughput Calculation, and Experiment Design on Pointing Measures and Models. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1639-1648. DOI:
+
+http://dx.doi.org/10.1145/1978942.1979181
+
+[56] Robert Sessions Woodworth. 1899. The Accuracy of Voluntary Movement. The Psychological Review: Monograph Supplements 3, 3 (1899), 1-114. DOI: http://dx.doi.org/10.1037/h0092992
+
+[57] Shota Yamanaka. 2018a. Effect of Gaps with Penal Distractors Imposing Time Penalty in Touch-pointing Tasks. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'18). ACM, New York, NY, USA, 8. DOI:
+
+http://dx.doi.org/10.1145/3229434.3229435
+
+[58] Shota Yamanaka. 2018b. Risk Effects of Surrounding Distractors Imposing Time Penalty in Touch-Pointing Tasks. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces (ISS '18). ACM, New York, NY, USA, 129-135. DOI: http://dx.doi.org/10.1145/3279778.3279781
+
+[59] Koji Yatani, Kurt Partridge, Marshall Bern, and Mark W. Newman. 2008. Escape: A Target Selection Technique Using Visually-cued Gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 285-294. DOI:
+
+http://dx.doi.org/10.1145/1357054.1357104
+
+[60] Chun Yu, Hongyi Wen, Wei Xiong, Xiaojun Bi, and Yuanchun Shi. 2016. Investigating Effects of Post-Selection Feedback for Acquiring Ultra-Small Targets on Touchscreen. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4699-4710. DOI: http://dx.doi.org/10.1145/2858036.2858593
+
+[61] Howard N. Zelaznik, Susan Mone, George P. McCabe, and Christopher Thaman. 1988. Role of temporal and spatial precision in determining the nature of the speed-accuracy trade-off in aimed-hand movements. Journal of Experimental Psychology: Human Perception and Performance 14, 2 (1988), 221-230. DOI: http://dx.doi.org/10.1037/0096-1523.14.2.221
+
+[62] Shumin Zhai, Jing Kong, and Xiangshi Ren. 2004. Speed-accuracy tradeoff in Fitts' law tasks: on the equivalency of actual and nominal pointing precision. International Journal of Human-Computer Studies 61, 6 (2004), 823-856. DOI: http://dx.doi.org/10.1016/j.ijhcs.2004.09.007
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3a45f1e82d7cc56d6daf8e002318f313bd336018
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kqhc9IlVQN3/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,435 @@
+§ UNLIMITING THE DUAL GAUSSIAN DISTRIBUTION MODEL TO PREDICT TOUCH ACCURACY IN ON-SCREEN-START POINTING TASKS
+
+Leave Authors Anonymous
+
+for Submission
+
+City, Country
+
+e-mail address
+
+Leave Authors Anonymous
+
+for Submission
+
+City, Country
+
+e-mail address
+
+Leave Authors Anonymous
+
+for Submission
+
+City, Country
+
+e-mail address
+
+§ ABSTRACT
+
+The dual Gaussian distribution hypothesis has been utilized to predict the success rate of target acquisition in finger touching. Bi and Zhai limited the applicability of their success-rate prediction model to off-screen-start pointing. However, we found that this restriction was theoretically over-limiting and that their prediction model can also be applied to on-screen-start pointing operations. We discuss the reasons why and empirically validate our hypothesis in a series of four experiments with various target sizes and distances. Bi and Zhai's model showed high prediction accuracy in all the experiments, with 10% prediction error at worst. Our theoretical and empirical justifications will enable designers and researchers to use a single model to predict success rates regardless of whether users mainly perform on- or off-screen-start pointing, and to automatically generate and optimize UI items in apps and keyboards.
+
+§ AUTHOR KEYWORDS
+
+Dual Gaussian distribution model; touchscreens; finger input; pointing; graphical user interfaces.
+
+§ CCS CONCEPTS
+
+Human-centered computing $\rightarrow$ HCI theory, concepts and models; Pointing; Empirical studies in HCI.
+
+§ INTRODUCTION
+
+Target acquisition is the most frequently performed operation on touchscreens. Tapping a small target, however, is sometimes an error-prone task, for reasons such as the "fat finger problem" [24, 46] and the offset between a user's intended tap point and the position sensed by the system [8, 25]. Hence, various techniques have been proposed to improve the precision of touch pointing [2, 46, 59]. Researchers have also sought to understand the fundamental principles of touch, e.g., touch-point distributions [4, 49]. As shown in these studies, finger touching is an inaccurate way to select a small target.
+
+If touch GUI designers could compute the success rate of tapping a given target, they could determine button/icon sizes that would strike a balance between usability and screen-space occupation. For example, suppose that a designer has to arrange many icons on a webpage. In this case, is a 5-mm diameter for each circular icon sufficiently large for accurate tapping? If not, then how about a 7-mm diameter? By how much can we expect the accuracy to be improved? Moreover, while larger icons can be more accurately tapped, they occupy more screen space. In that case, the webpage can be lengthened so that the larger icons fit, but this requires users to perform more scrolling operations to view and select icons at the bottom of the page. Hence, designers have to carefully manage this tradeoff between user performance and screen space.
+
+Without a success-rate prediction model, designers have to conduct costly user studies to determine suitable target sizes on a webpage or app, but this strategy has low scalability. Accurate quantitative models would also be helpful for automatically generating user-friendly UIs [17, 39] and optimizing UIs [5, 14]. Furthermore, having such models would help researchers justify their experimental designs in investigating novel systems and interaction techniques that do not focus mainly on touch accuracy. For example, researchers could state, "According to the success-rate prediction model, 9-mm-diameter circular targets are assumed to be accurately (> 99%) selected by finger touching, and thus, this experiment is a fair one for comparing the usability of our proposed system and the baseline."
+
+To predict how successfully users tap a target, Bi and Zhai proposed a model that computes the success rate solely from the target size $W$ for both 1D and 2D pointing tasks [10]. They reasonably limited their model's applicability to touch-pointing tasks starting off-screen, i.e., those in which the user's finger moves from a position outside the touch screen. In this paper, we first explain why this limitation seems reasonable by addressing potential concerns in applying the model to on-screen-start pointing tasks, in which a finger moves from a certain position on the screen to another position to tap a target. Then, however, we justify the use of the model for pointing with an on-screen start. After that, we empirically show through a series of experiments that the model has comparable prediction accuracy even for such pointing with an on-screen start. Our key contributions are as follows.
+
+ * Theoretical justification for applying Bi and Zhai's success-rate prediction model to pointing tasks starting on-screen. We found that the model is valid regardless of whether a pointing task starts on- or off-screen. This means that designers and researchers can predict success rates by using a single model. We thus expand the coverage of the model to other applications, such as (a) tapping a "Like" button after scrolling through a social networking service feed, (b) successively inputting check marks on a questionnaire, and (c) typing on a software keyboard.
+
+
+GI '20, May 21-22, 2020, Toronto, Canada
+
+© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 978-1-4503-6708-0/20/04...$15.00. DOI: https://doi.org/10.1145/3313831.XXXXXXX
+
+ * Empirical verification of our hypothesis via four experiments. Despite the theoretical reasoning, we were still concerned about using the model after consulting the existing literature and finding results such as those showing that endpoint variability is significantly affected by the movement distance $A$ in a ballistic pointing motion [6, 26, 44, 61]. Hence, we conducted 1D and 2D pointing experiments starting on-screen with (a) successive pointing tasks in which targets appeared at random positions, which averaged out the effect of $A$, and (b) discrete pointing tasks in which the start and goal targets were separated by a given distance, meaning that $A$ was controlled. The results showed that we could accurately predict the success rates, with a prediction error of ~10% at worst.
+
+In short, the novelty of our study is that it extends the applicability of Bi and Zhai's model to a variety of tasks (e.g., Fitts tasks, key typing), with support from theoretical and empirical evidence. With this model, designers and researchers can evaluate and improve their UIs in terms of touch accuracy, which will directly contribute to UI development. In addition, by reducing the time and cost of conducting user studies, our model will let them focus on other important tasks such as visual design and backend system development, which will indirectly contribute to implementing better, novel UIs.
+
+§ RELATED WORK
+
+§ SUCCESS-RATE PREDICTION FOR POINTING TASKS
+
+When human operators try to minimize both the movement time $MT$ and the number of target misses, the error rate has been thought to be close to 4% [35, 45, 53]. A recent report, however, pointed out that this percentage is an arbitrary, questionable assumption [19]. Actual data show that the error rate tends to decrease as the target size $W$ increases [10, 16, 48, 53].
+
+While a typical goal of pointing models is to predict $MT$, researchers have also tried to derive models to predict the success rate (or error rate) of target acquisition tasks. In particular, the model of Meyer et al. [36] is often cited as the first one to predict the error rate, but it does not account for $MT$. In practice, the error rate increases as operators move faster (e.g., [62]), and thus Wobbrock et al. accounted for this effect in their model [53]. That model was later shown to be applicable to pointing at 2D circular targets [54] and moving targets [40]. For both models, by Meyer et al. and by Wobbrock et al., the predicted error rate increases as $W$ decreases, which is consistent with the actual observations mentioned above.
+
+As for speed, simply speaking, when operators give it priority, the error rate increases. While Wobbrock et al. applied a time limit as an objective constraint by using a metronome in their study [53], this speed-accuracy tradeoff was empirically validated in a series of experiments by Zhai et al., in which the priority was subjectively biased [62]. Besides the case of rapidly aimed movements, the error rate has also been investigated for tapping on a static button within a given temporal window [28, 29, 30]. Despite the recent importance of finger-touch operations on smartphones and tablets, however, the only literature on predicting the success rate while accounting for finger-touch ambiguity is the work of Bi and Zhai on pointing from an off-screen start [10]. It would be useful if we could extend the validity of their model to other applications.
+
+§ IMPROVEMENTS AND PRINCIPLES OF FINGER-TOUCH ACCURACY
+
+Various methods to improve touch accuracy have been proposed. Examples include using an offset from the finger contact point [12, 43, 46], dragging in a specific direction to confirm a particular target among a number of potential targets [2, 38, 59], visualizing a contact point [52, 60], applying machine learning [50] or probabilistic modeling [9], and correcting hand-tremor effects by using motion sensors [42].
+
+In addition to these techniques, researchers have sought to understand why finger touch is less accurate than other input modalities such as a mouse cursor. One typical issue is the fat finger problem [24, 25, 46], in which an operator wants to tap a small target, but the finger occludes it. Another issue is that finger touch has an unavoidable offset (spatial bias) from the operator's intended touch point to the actual touch position sensed by the system. Even if operators focus on accuracy by spending a sufficient length of time, the sensed touch point is biased from the crosshair target [24, 25].
+
+§ SUCCESS-RATE PREDICTION FOR FINGER-TOUCH POINTING
+
+§ OUTLINE OF DUAL GAUSSIAN DISTRIBUTION MODEL
+
+Previous studies have shown that the endpoint distribution of finger touches follows a bivariate Gaussian distribution over a target [4, 22, 49]. Thus, the touch point observed by the system can be considered a random variable ${X}_{obs}$ following a Gaussian distribution: ${X}_{obs} \sim N\left( {{\mu }_{obs},{\sigma }_{obs}^{2}}\right)$, where ${\mu }_{obs}$ and ${\sigma }_{obs}$ are the center and SD of the distribution, respectively. Bi, Li, and Zhai hypothesized that ${X}_{obs}$ is the sum of two independent random variables consisting of relative and absolute components, both of which follow Gaussian distributions: ${X}_{r} \sim N\left( {{\mu }_{r},{\sigma }_{r}^{2}}\right)$ and ${X}_{a} \sim N\left( {{\mu }_{a},{\sigma }_{a}^{2}}\right)$ [8].
+
+${X}_{r}$ is a relative component affected by the speed-accuracy tradeoff. When an operator aims for a target more quickly, the relative endpoint distribution ${\sigma }_{r}$ increases. As indicated by Fitts' law studies, if the acceptable endpoint tolerance $W$ increases, then the operator's endpoint noise level ${\sigma }_{r}$ also increases [13, 35].
+
+${X}_{a}$ is an absolute component that reflects the precision of the probe (i.e., the input device: a finger in this paper) and is independent of the task precision. Therefore, even when an operator taps a small target very carefully, there is still a spatial bias from the intended touch point (typically the target center) [8, 24, 25]; the distribution of this bias is what ${\sigma }_{a}$ models. Therefore, although ${\sigma }_{r}$ can be reduced by an operator aiming slowly at a target, ${\sigma }_{a}$ cannot be controlled by setting such a speed-accuracy priority. Note that the means of both components' random variables (${\mu }_{r}$ and ${\mu }_{a}$) are assumed to tend close to the target center: ${\mu }_{r} \approx {\mu }_{a} \approx 0$ if the coordinate of the target center is defined as 0.
+
+Again, Bi et al. hypothesized that the observed touch point is a random variable that is the sum of two independent components [8]:
+
+$$
+{X}_{obs} = {X}_{r} + {X}_{a} \sim N\left( {{\mu }_{r} + {\mu }_{a},{\sigma }_{r}^{2} + {\sigma }_{a}^{2}}\right) \tag{1}
+$$
+
+${\mu }_{obs} (= {\mu }_{r} + {\mu }_{a})$ is close to 0 on average, and ${\sigma }_{obs}^{2}$ is:
+
+$$
+{\sigma }_{obs}^{2} = {\sigma }_{r}^{2} + {\sigma }_{a}^{2} \tag{2}
+$$
+
+When an operator exactly utilizes the target size $W$ in rapidly aimed movements, $\sqrt{2\pi e}{\sigma }_{r}$ matches the given $W$ (i.e., ${4.133}{\sigma }_{r} \approx W$) [9, 35]. Operators tend to bias operations toward speed or accuracy, however, thus over- or underusing $W$ [62]. Bi and Zhai assumed that using a fine probe of negligible size (${\sigma }_{a} \approx 0$), such as a mouse cursor, makes ${\sigma }_{r}$ proportional to $W$. Thus, by introducing a constant $\alpha$, we have:
+
+$$
+{\sigma }_{r}^{2} = \alpha {W}^{2} \tag{3}
+$$
+
+Then, replacing ${\sigma }_{r}^{2}$ in Equation 2 with Equation 3, we obtain:
+
+$$
+{\sigma }_{obs}^{2} = \alpha {W}^{2} + {\sigma }_{a}^{2} \tag{4}
+$$
+
+Hence, by conducting a pointing task with several $W$ values, we can run a linear regression on Equation 4 and obtain the constants $\alpha$ and ${\sigma }_{a}$. Accordingly, we can compute the endpoint variability for tapping a target of size $W$. We denote this endpoint variability computed from the regression expression as ${\sigma }_{reg}$:
+
+$$
+{\sigma }_{\text{ reg }} = \sqrt{\alpha {W}^{2} + {\sigma }_{a}^{2}} \tag{5}
+$$
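+
+For illustration, this regression can be sketched in a few lines of Python. This is our sketch, not code from Bi and Zhai; the variance values below are illustrative numbers generated exactly from the coefficients of Equation 6, so the fit recovers them:
+
+```python
+import numpy as np
+
+# Illustrative per-condition data: target sizes W (mm) and observed
+# endpoint variances sigma_obs^2 (mm^2), one value per W condition.
+W = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
+var_obs = np.array([1.7134, 1.8034, 1.9534, 2.1634, 2.4334])
+
+# Equation 4 is linear in W^2, so a degree-1 fit of var_obs against W^2
+# recovers both constants: slope = alpha, intercept = sigma_a^2.
+alpha, var_a = np.polyfit(W**2, var_obs, 1)
+sigma_a = np.sqrt(var_a)
+
+def sigma_reg(w):
+    """Equation 5: predicted endpoint variability for target size w."""
+    return np.sqrt(alpha * w**2 + sigma_a**2)
+
+print(round(alpha, 4), round(sigma_a, 3))  # ~0.0075 and ~1.297
+```
+
+Given real per-$W$ variance measurements in place of the illustrative numbers, the same two-line fit yields the constants needed by Equation 5.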
+
+§ REVISITING BI AND ZHAI'S STUDIES ON SUCCESS-RATE PREDICTION
+
+Here, we revisit Bi and Zhai's first experiment on the Bayesian Touch Criterion [9]. They conducted a 2D pointing task with circular targets of diameter $W = 2, 4, 6, 8$, and 10 mm. In their task, tapping the starting circle caused the first target to immediately appear at a random position. Subsequently, lifting the input finger off a target caused the next target to appear immediately. Hence, the participants successively tapped each new target as quickly and accurately as possible. The target distance was not predefined as $A$, unlike in typical Fitts' law experiments. A possible way to analyze the effect of the movement amplitude would be to calculate $A$ as the distance between the current target and the previous one; however, no such analysis was performed. Thus, even if the endpoint variability ${\sigma }_{obs}$ was influenced by $A$, the effect was averaged.
+
+By using Equation 5, the regression expressions of the ${\sigma }_{\text{ reg }}$ values on the $x$ - and $y$ -axes were calculated as [9]:
+
+$$
+{\sigma }_{{\text{ reg }}_{x}} = \sqrt{{0.0075}{W}^{2} + {1.6834}} \tag{6}
+$$
+
+$$
+{\sigma }_{{\text{ reg }}_{y}} = \sqrt{{0.0108}{W}^{2} + {1.3292}} \tag{7}
+$$
+
+Bi and Zhai then derived their success-rate prediction model [10]. Assuming a negligible correlation between the observed touch-point values on the $x$- and $y$-axes (i.e., $\rho = 0$) gives the following probability density function for the bivariate Gaussian distribution:
+
+$$
+P\left( {x,y}\right) = \frac{1}{{2\pi }{\sigma }_{{\operatorname{reg}}_{x}}{\sigma }_{{\operatorname{reg}}_{y}}}\exp \left( {-\frac{{x}^{2}}{2{\sigma }_{{\operatorname{reg}}_{x}}^{2}} - \frac{{y}^{2}}{2{\sigma }_{{\operatorname{reg}}_{y}}^{2}}}\right) \tag{8}
+$$
+
+Then, the probability that the observed touch point falls within the target boundary $D$ is:
+
+$$
+P\left( D\right) = {\iint }_{D}\frac{1}{{2\pi }{\sigma }_{{\operatorname{reg}}_{x}}{\sigma }_{{\operatorname{reg}}_{y}}}\exp \left( {-\frac{{x}^{2}}{2{\sigma }_{{\operatorname{reg}}_{x}}^{2}} - \frac{{y}^{2}}{2{\sigma }_{{\operatorname{reg}}_{y}}^{2}}}\right) {dxdy} \tag{9}
+$$
+
+where ${\sigma }_{{reg}_{x}}$ and ${\sigma }_{{reg}_{y}}$ are calculated from Equations 6 and 7, respectively.
+
+For a 1D vertical bar target, whose boundary is defined to range from ${x}_{1}$ to ${x}_{2}$, we can simplify the predicted probability that the touch point $X$ falls within the target:
+
+$$
+P\left( {{x}_{1} \leq X \leq {x}_{2}}\right) = \frac{1}{2}\left\lbrack {\operatorname{erf}\left( \frac{{x}_{2}}{{\sigma }_{{\text{ reg }}_{x}}\sqrt{2}}\right) - \operatorname{erf}\left( \frac{{x}_{1}}{{\sigma }_{{\text{ reg }}_{x}}\sqrt{2}}\right) }\right\rbrack \tag{10}
+$$
+
+Note that the mean touch point $\mu$ of the probability density function is assumed to be $\approx 0$ and has thus already been eliminated from this equation. If the target width is $W$, then Equation 10 can be simplified further:
+
+$$
+P\left( {-\frac{W}{2} \leq X \leq \frac{W}{2}}\right) = \operatorname{erf}\left( \frac{W}{2\sqrt{2}{\sigma }_{{\operatorname{reg}}_{x}}}\right) \tag{11}
+$$
+
+Alternatively, if the target is a 1D horizontal bar of height $W$ , then we replace the $x$ -coordinates in Equation 11 with $y$ -coordinates.
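+
+As a sketch of how these predictors can be evaluated in practice (our hypothetical Python implementation, not the authors' code, using the coefficients of Equations 6 and 7 as example ${\sigma }_{reg}$ values, with all lengths in mm):
+
+```python
+import math
+
+def sigma_reg_x(W):
+    return math.sqrt(0.0075 * W**2 + 1.6834)  # Equation 6
+
+def sigma_reg_y(W):
+    return math.sqrt(0.0108 * W**2 + 1.3292)  # Equation 7
+
+def p_1d_vertical_bar(W):
+    """Equation 11: P(touch lands within a 1D vertical bar of width W)."""
+    return math.erf(W / (2 * math.sqrt(2) * sigma_reg_x(W)))
+
+def p_2d_circle(W, n=2000):
+    """Equation 9 over a circular target of diameter W.
+
+    The y-integral is evaluated analytically with erf; the x-integral
+    uses a plain midpoint rule, which is sufficient for a sketch.
+    """
+    sx, sy = sigma_reg_x(W), sigma_reg_y(W)
+    r, dx, total = W / 2, W / n, 0.0
+    for i in range(n):
+        x = -W / 2 + (i + 0.5) * dx
+        half = math.sqrt(max(r * r - x * x, 0.0))  # circle bound at this x
+        px = math.exp(-x * x / (2 * sx * sx)) / (math.sqrt(2 * math.pi) * sx)
+        total += px * math.erf(half / (sy * math.sqrt(2))) * dx
+    return total
+
+for W in (2.4, 4.8, 7.2):
+    print(W, round(p_1d_vertical_bar(W), 3), round(p_2d_circle(W), 3))
+```
+
+As expected from the models, the predicted success rate rises with $W$, and the 2D circular target is always harder to hit than a 1D bar of the same size because both axes constrain the touch point.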
+
+Bi and Zhai's experiment on success-rate prediction tested 1D vertical, 1D horizontal, and 2D circular targets with $W = 2.4, 4.8$, and 7.2 mm [10]. Unlike in the experiment used to measure the coefficients in Equations 6 and 7 [9], Bi and Zhai empirically confirmed their model's validity in pointing from an off-screen start. To simulate this condition of starting off-screen, they told their participants to keep their dominant hands off the screen in natural positions and start from those positions in each trial [10]. Hence, while the coefficients of the touch-point variability were measured in a successive pointing task [9], which is regarded as a pointing task starting on-screen, Bi and Zhai did not claim that their success-rate prediction model using Equations 9 and 11 was valid for other kinds of pointing tasks, such as a Fitts' law paradigm specifying both $A$ and $W$.
+
+§ GENERALIZABILITY OF SUCCESS-RATE PREDICTION MODEL TO POINTING TASKS STARTING ON-SCREEN
+
+§ EFFECT OF MOVEMENT DISTANCE ON SUCCESS RATE
+
+Here, we discuss why Bi and Zhai's model (Equations 9 and 11) can be applied to touch-pointing tasks starting on-screen, as well as possible concerns about this application. In their paper [9], Bi and Zhai stated, "We generalize the dual Gaussian distribution hypothesis from Fitts' tasks - which are special target selection tasks involving both amplitude (A) and target width (W) - to the more general target-selection tasks which are predominantly characterized by W alone." Therefore, to omit the effect of $A$ when they later evaluated the success-rate prediction model, they explicitly instructed the participants to start with their dominant hands away from the screen at the beginning of each trial [10]. This is a reasonable instruction: if their experiment had used an on-screen start, defining $A$ would be a limitation of their model, because such an experiment would show only that the prediction model could be used when the pointing task is controlled by both $A$ and $W$. Thus, pointing experiments starting off-screen are a reasonable way to show that the model can be used whenever $W$ is defined.
+
+To generalize the model to pointing tasks starting on-screen, one concern is the effect of the movement distance $A$ on the success rate. Even if we do not define the target distance $A$ from the initial finger position, in actuality the finger has an implicit travel distance, because "$A$ is less well-defined" [9] does not mean "there is no movement distance." Therefore, a pointing task predominantly characterized by $W$ alone can also be interpreted as merging or averaging the effects of $A$ on touch-point distributions and success rates. For example, suppose that a participant in a pointing experiment starting off-screen repeatedly taps a target 200 times. Let the implicit $A$ value be 20 mm for 100 trials and 60 mm for the other 100. Suppose that the success rates are independently calculated as, e.g., 95 and 75%, respectively. If we do not distinguish the implicit $A$ values, however, then the overall success rate is $(95 + 75)/2 = 85\%$. This merged value hides the difference between the two conditions: the prediction error is 10% for both the $A = 20$ mm and $A = 60$ mm conditions.
+
+If the implicit or explicit movement distance $A$ does not significantly change the success rate, such as from 88% for $A = 20$ mm to 86% for $A = 60$ mm, then we can use Bi and Zhai's model regardless of whether pointing tasks start on- or off-screen. Now, the question is whether the success rate changes depending on the implicit or explicit $A$. If it does change, then we can use the model only if we ignore the movement distance. According to the current prediction model (Equations 9 and 11), once $W$ is given, the predicted success rate is determined by ${\sigma }_{reg}$. Hence, the debate revolves around whether the touch-point distribution is affected by the distance $A$. This is equivalent to asking whether Equation 4 (${\sigma }_{obs}^{2} = \alpha {W}^{2} + {\sigma }_{a}^{2}$) is valid regardless of the value of $A$. In fact, the literature offers evidence on both sides of the question, as explained below.
+
+§ EFFECT OF MOVEMENT DISTANCE ON ENDPOINT VARIABILITY
+
+Previous studies reported that $A$ does not strongly influence the endpoint distribution [8, 27, 62]. For typical pointing tasks, operators are asked to balance speed and accuracy, which means that they can spend a long time successfully acquiring a target if it is small. Thus, target pointing tasks implicitly allow participants to use visual feedback to perform closed-loop motions (e.g., [27]). Under such conditions, the endpoint distribution is expected to decrease as $W$ decreases [9, 35].
+
+In contrast, when participants perform a ballistic motion, the endpoint variance has been reported to vary linearly with the square of the movement distance $A$ [20, 21, 23, 37, 44, 61]. Beggs et al. [6, 7] formulated the relationship in this way:
+
+$$
+{\sigma }_{obs}^{2} = a + b{A}^{2} \tag{12}
+$$
+
+where ${\sigma }_{obs}$ is valid for directions collinear and perpendicular to the movement direction, and $a$ and $b$ are empirically determined constants. This model has since been empirically confirmed by other researchers (e.g., [32, 33]). Because the intercept $a$ tends to be small [37, 44, 61], this model is consistent with reports of the relationship being linear (${\sigma }_{obs} = \sqrt{b}A$).
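+
+Fitting Equation 12 is again a simple linear regression, now of ${\sigma }_{obs}^{2}$ on ${A}^{2}$. A sketch with illustrative (not measured) data:
+
+```python
+import numpy as np
+
+# Illustrative ballistic-pointing data: movement distances A (mm) and
+# observed endpoint variances sigma_obs^2 (mm^2). Not measured values.
+A = np.array([20.0, 40.0, 60.0, 80.0])
+var_obs = np.array([0.9, 3.4, 7.5, 13.2])
+
+# Equation 12: var_obs = a + b * A^2, so fit var_obs against A^2.
+b, a = np.polyfit(A**2, var_obs, 1)  # slope b, intercept a
+
+# With a small intercept a, the model is close to the purely linear
+# form sigma_obs = sqrt(b) * A reported in the literature.
+print(round(a, 3), round(b, 5))
+print(np.round(np.sqrt(a + b * A**2), 2))
+```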
+
+The critical threshold of whether participants perform a closed-loop or ballistic motion depends on Fitts' original index of difficulty, ${ID} = {\log }_{2}(2A/W)$. When ${ID}$ is less than 3 or 4 bits, a pointing task can be accomplished with only a ballistic motion [18, 23]. While the critical ${ID}$ changes depending on the experimental conditions, such an extremely easy task (i.e., one with a short $A$ or large $W$) generally does not require any precise closed-loop operations. Therefore, we theoretically assume that the endpoint distribution ${\sigma }_{obs}$ and the success rate change depending on the movement distance.
+
+Nevertheless, we also assume that such a ballistic motion would not degrade success-rate prediction. The evidence comes from a study by Bi et al. on the FFitts law model [8]. They conducted 1D and 2D Fitts-paradigm experiments with $A = 20$ and 30 mm and $W = 2.4, 4.8$, and 7.2 mm; the ${ID}$ in Fitts' original formulation ranged from 2.47 to 4.64 bits. For $(A, W, {ID}) = (20$ mm, 7.2 mm, 2.47 bits$)$ and $(30$ mm, 7.2 mm, 3.06 bits$)$, a somewhat sloppy ballistic motion might fulfill these conditions [23]. The ${\sigma }_{{obs}_{y}}$ values, however, were 1.21 and 1.33 mm for these 1D horizontal targets. The difference was only $|1.21 - 1.33| = 0.12$ mm, while the error-rate difference was $|29 - 38| = 9\%$. Similarly, their 2D tasks showed only small differences in error rate, up to 2% at most, between the two values of $A$ for each $W$ condition.
+
+As empirically shown by this study of Bi et al. on FFitts law [8], the changes in ${\sigma }_{obs}$ and the success rate owing to $A$ might be small in practice, because such short $A$ values cannot greatly change ${\sigma }_{obs}$, even if the effect of $A$ on ${\sigma }_{obs}$ is statistically significant as expressed by Equation 12. If so, then from a practical viewpoint, applying Bi and Zhai's success-rate prediction model to pointing from an on-screen start would not be problematic; hence, we validated this empirically.
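+
+The ${ID}$ range quoted above can be reproduced directly from Fitts' original formulation:
+
+```python
+import math
+
+# ID = log2(2A / W), with A and W in mm, for Bi et al.'s conditions [8].
+def fitts_id(A, W):
+    return math.log2(2 * A / W)
+
+ids = [fitts_id(A, W) for A in (20, 30) for W in (2.4, 4.8, 7.2)]
+print(round(min(ids), 2), round(max(ids), 2))  # 2.47 to 4.64 bits
+```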
+
+§ EXPERIMENTS
+
+As discussed in the previous section, we have contrary hypotheses on whether we can accurately predict the success rate of touch-pointing tasks solely from the target size $W$. Specifically, (1) when pointing is ballistic with a short movement distance $A$, $A$ would have a statistically significant effect on ${\sigma }_{obs}$, and thus the success rate might not be accurately predicted. Yet, in that situation, (2) a short $A$ of 2-3 cm would induce only a slight (though statistically significant) change in ${\sigma }_{obs}$, and thus, in practice, the change in $A$ would not be detrimental to success-rate prediction.
+
+To settle the question, we ran experiments involving 1D and 2D pointing tasks starting on-screen. For each dimensionality, we conducted (a) successive pointing tasks in which a target appeared at a random position immediately after the previous target was tapped, and (b) discrete pointing tasks in which the target distance $A$ was predefined. Under condition (a), we could have post-computed the target distance from the previous target position. Instead, we merged the various distance values; this was a fair modification of Bi and Zhai's success-rate prediction experiments [10], which started off-screen, to an on-screen-start condition. Under condition (b), we separately predicted and measured the error rates for each value of $A$ to empirically evaluate the effect of movement distance on the prediction accuracy. We thus conducted four experiments composed of 1D and 2D target conditions:
+
+
+Figure 1. Experimental environments: (a) a participant attempting Experiment 2, and the visual stimuli used in (b) Experiment 2 and (c) Experiment 4.
+
+Exp. 1. Successive 1D pointing task: horizontal bar targets appeared at random positions.
+
+Exp. 2. Discrete 1D pointing task: a start bar and a target bar were displayed with distance $A$ between them.
+
+Exp. 3. Successive 2D pointing task: circular targets appeared at random positions.
+
+Exp. 4. Discrete 2D pointing task: a start circle and a target circle were displayed with distance $A$ between them.
+
+Experiments 1 and 2 were conducted on the first day and performed by the same 12 participants. Although we explicitly label these as Experiments 1 and 2, their order was balanced among the 12 participants$^{1}$. Similarly, on the second day, 12 participants were divided into two groups, and the order of Experiments 3 and 4 was balanced. Each set of two experiments took less than 40 min per participant.
+
+We used an iPhone XS Max (A12 Bionic CPU; 4-GB RAM; iOS 12; 1242 × 2688 pixels; 6.5-inch-diagonal display; 458 ppi; 208 g). The experimental system was implemented with JavaScript, HTML, and CSS. The web page was viewed with the Safari app. After eliminating the top and bottom navigation-bar areas, the browser converted the canvas resolution to 414 × 719 pixels, giving 5.978 pixels/mm. The system was set to run at 60 fps. We used the takeoff positions as tap points, as in previous studies [8, 9, 10, 57, 58].
+
+The participants were asked to sit on an office chair in a silent room. As shown in Figure 1a, each participant held the smartphone with the nondominant (left) hand and tapped the screen with the dominant (right) hand's index finger. They were instructed not to rest their hands or elbows on their laps.
+
+§ EXPERIMENT 1: 1D TASK WITH RANDOM AMPLITUDES
+
+§ PARTICIPANTS
+
+Twelve university students, two female and ten male, participated in this study. Their ages ranged from 20 to 25 years (M = 23.0, SD = 1.41). They all had normal or corrected-to-normal vision, were right-handed, and were daily smartphone users. Their histories of smartphone usage ranged from 5 to 8 years (M = 6.67, SD = 1.07). For daily usage, five participants used iOS smartphones, and seven used Android smartphones. They each received US$45 in compensation for performing Experiments 1 and 2.
+
+§ TASK AND DESIGN
+
+A 6-mm-high start bar was initially displayed at a random position on the screen. When a participant tapped it, the first target bar immediately appeared at a random position. The participant successively tapped new targets that appeared upon lifting the finger off. If a target was missed, a beep sounded, and the participant had to re-aim for the target. If the participant succeeded, a bell sounded. To reduce the negative effect of targets located close to a screen edge, the random target position was at least 11 mm away from both the top and bottom edges of the screen [3].
+
+This task was a single-factor within-subjects design with an independent variable of the target width $W$: 2, 4, 6, 8, and 10 mm (12, 24, 36, 48, and 60 pixels, respectively). The dependent variables were the observed touch-point distribution on the $y$-axis, $\sigma_{obs_y}$, and the success rate. The touch-point bias was measured from the target center with a sign [55]. First, the participants performed 20 trials as practice, which included four repetitions of the five $W$ values appearing in random order. In each session, the $W$ values appeared 10 times in random order. The participants were instructed to successively tap the target as quickly and accurately as possible in a single session. They each completed four sessions as data-collection trials, with a short break between two successive sessions. Thus, we recorded $5_W \times 10_{\text{repetitions}} \times 4_{\text{sessions}} \times 12_{\text{participants}} = 2400$ data points in total.
+
+§ RESULTS
+
+We removed 13 outlier trials (0.54%) that had tap points at least 15 mm from the target center [9]. According to observations, such outliers resulted mainly from participants accidentally touching the screen with the thumb or little finger. For consistency with Bi and Zhai's work [10], we decided to compute the regression between $\sigma_{obs}^2$ and $W^2$ to validate Equation 4, compare the observed and computed touch-point distributions ($\sigma_{obs_y}$ and $\sigma_{reg_y}$, respectively), and compare the observed and predicted success rates.
+
+§ TOUCH-POINT DISTRIBUTION
+
+A repeated-measures ANOVA showed that $W$ had a significant main effect on $\sigma_{obs_y}$ ($F_{4,44} = 11.18$, $p < 0.001$, $\eta_p^2 = 0.50$). Shapiro-Wilk tests showed that the touch points followed Gaussian distributions ($p > 0.05$) under 47 of the 60 conditions ($= 5_W \times 12_{\text{participants}}$), or 78.3%. Figure 2 shows the regression expression for $\sigma_{obs_y}^2$ versus $W^2$ to validate Equation 4. The assumption of a linear relationship between these variances, even for touch-pointing operations with an on-screen start, was supported with $R^2 = 0.9353$. Accordingly, we obtained the following coefficients for Equation 5:
+
+$$
+\sigma_{reg_y} = \sqrt{0.0154 W^2 + 1.0123} \tag{13}
+$$
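Coefficients like those in Equation 13 come from an ordinary least-squares fit of $\sigma_{obs}^2$ against $W^2$ (the linear form of Equation 5). A minimal sketch of that fit, using hypothetical noise-free variance data rather than the measured values:

```python
def fit_variance_model(widths_mm, variances):
    """Least-squares fit of variance = alpha * W^2 + sigma_a^2.
    Returns (alpha, sigma_a_squared)."""
    xs = [w ** 2 for w in widths_mm]          # regress on W^2, not W
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(variances) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, variances))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    alpha = sxy / sxx
    return alpha, mean_y - alpha * mean_x

# Hypothetical data generated with alpha = 0.015, sigma_a^2 = 1.0 (illustrative only)
widths = [2, 4, 6, 8, 10]
variances = [0.015 * w ** 2 + 1.0 for w in widths]
alpha, sa2 = fit_variance_model(widths, variances)
```

With real, noisy $\sigma_{obs}^2$ measurements the same fit yields the regression constants and the $R^2$ values reported for each experiment.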
+
+${}^{1}$ We conducted another measurement, called a finger calibration task, to replicate the model of FFitts law [8]. The order of Experiments 1 and 2 and the finger calibration task was actually balanced.
+
+
+Figure 2. Regression between the variance in the $y$-direction ($\sigma_{obs_y}^2$) and the target size ($W^2$) in Experiment 1.
+
+
+Figure 3. Observed versus predicted success rates in Experiment 1.
+
+For comparison, in Bi and Zhai's 2D task [9] using the same $W$ values as in our 1D task, $\sigma_{obs_x}^2$ versus $W^2$ gave $R^2 = 0.9344$, and $\sigma_{obs_y}^2$ versus $W^2$ gave $R^2 = 0.9756$. Hence, even for a pointing task with an on-screen start and random target positioning, we could compute the touch-point distribution values ($\sigma_{reg_y}$) for each $W$ by using Equation 13 with accuracy similar to that of their study. The differences between the computed $\sigma_{reg_y}$ values and observed distributions $\sigma_{obs_y}$ were less than 0.1 mm (< 1 pixel), as obtained by taking the square roots of the vertical distances between the points and the regression line in Figure 2.
+
+§ SUCCESS RATE
+
+Among the 2387 (= 2400 - 13) non-outlier data points, the participants successfully tapped the target in 2194 trials, or 91.91%. As shown by the blue bars in Figure 3, the observed success rate increased from 71.55 to 99.79% with the increase in $W$, which had a significant main effect ($F_{4,44} = 58.37$, $p < 0.001$, $\eta_p^2 = 0.84$). Note that, throughout this paper, the error bars in charts indicate $SD$ across all participants.
+
+By applying Equation 13 in Equation 11, we computed the predicted success rates for each $W$, as represented by the red bars in Figure 3. The difference from the observed success rate was 5% at most. These results show that we could accurately predict the success rate solely from the target size $W$, with a mean absolute error ($MAE$) of 1.657% for $N = 5$ data points. This indicates the applicability of Bi and Zhai's model with our on-screen start condition.
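As a concrete illustration of this prediction pipeline, the sketch below plugs Equation 13 into a simplified 1D success-rate computation: assuming zero touch-point bias, the probability of hitting a bar of height $W$ is the Gaussian mass within $\pm W/2$, i.e., $\mathrm{erf}(W / (2\sqrt{2}\,\sigma))$. This is a simplified reading of Equation 11, not the authors' exact implementation.

```python
import math

def sigma_reg_y(w_mm):
    """Predicted touch-point spread from Equation 13 (mm)."""
    return math.sqrt(0.0154 * w_mm ** 2 + 1.0123)

def predicted_success_1d(w_mm):
    """P(|y| <= W/2) for y ~ N(0, sigma^2), assuming zero touch-point bias."""
    sigma = sigma_reg_y(w_mm)
    return math.erf((w_mm / 2) / (sigma * math.sqrt(2)))

for w in (2, 4, 6, 8, 10):
    print(f"W = {w} mm: predicted success = {100 * predicted_success_1d(w):.1f}%")
```

As in Figure 3, the predicted rate climbs steeply with $W$ and approaches 100% for the largest targets; ignoring the measured bias makes the small-$W$ predictions slightly pessimistic.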
+
+§ EXPERIMENT 2: 1D TASK WITH PRESET AMPLITUDES
+
+This task followed the discrete pointing experiment of Bi et al. with specific target amplitudes [8], but with more variety in the values of $A$ and $W$ .
+
+§ TASK AND DESIGN
+
+Figure 1b shows the visual stimulus used in Experiment 2. At the beginning of each trial, a 6-mm-high blue start bar and a $W$-mm-high green target bar were displayed at random positions with distance $A$ between them and margins of at least 11 mm from the top and bottom edges of the screen. When a participant tapped the start bar, it disappeared and a click sounded. Then, if the participant successfully tapped the target, a bell sounded, and the next set of start and target bars was displayed. If the participant missed the target, he or she had to aim at it until successfully tapping it; in such a case, the trial was not restarted from tapping the start bar. The participants were instructed to tap the target as quickly and accurately as possible after tapping the start bar.
+
+We included four target distances ($A = 20$, 30, 45, and 60 mm, or 120, 180, 270, and 358 pixels, respectively) and five target widths ($W = 2$, 4, 6, 8, and 10 mm, or 12, 24, 36, 48, and 60 pixels, respectively). Each $A \times W$ combination was used for 16 repetitions, following a single repetition of practice trials. We thus recorded $4_A \times 5_W \times 16_{\text{repetitions}} \times 12_{\text{participants}} = 3840$ data points in total.
+
+§ RESULTS
+
+Among the 3840 trials, we removed 4 outlier trials (0.10%) that had tap points at least 15 mm from the target center.
+
+§ TOUCH-POINT DISTRIBUTION
+
+We found significant main effects of $A$ ($F_{3,33} = 2.949$, $p < 0.05$, $\eta_p^2 = 0.21$) and $W$ ($F_{4,44} = 72.63$, $p < 0.001$, $\eta_p^2 = 0.87$) on $\sigma_{obs_y}$, but no significant interaction of $A \times W$ ($F_{12,132} = 1.371$, $p = 0.187$, $\eta_p^2 = 0.11$). Shapiro-Wilk tests showed that the touch points followed Gaussian distributions under 218 of the 240 conditions ($4_A \times 5_W \times 12_{\text{participants}}$), or 90.8%. Figure 4 shows the regression expression for $\sigma_{obs_y}^2$ versus $W^2$, with $R^2 = 0.8141$ for $N = 4_A \times 5_W = 20$ data points. When we merged the four $\sigma_{obs_y}^2$ values for each $A$, as we did with the movement amplitudes in Experiment 1, we obtained five data points with $R^2 = 0.9171$ (the regression constants did not change). We used the results of the regression expression to obtain the coefficients in Equation 5:
+
+$$
+\sigma_{reg_y} = \sqrt{0.0191 W^2 + 0.9543} \tag{14}
+$$
+
+Using Equation 14, we computed the touch-point distributions ($\sigma_{reg_y}$) for each $W$. The differences between the $\sigma_{reg_y}$ and $\sigma_{obs_y}$ values were less than 0.2 mm (~1 pixel). As a check, the average $\sigma_{obs_y}$ values for $A = 20$, 30, 45, and 60 mm were 1.345, 1.279, 1.319, and 1.415 mm, respectively, giving a difference of 0.136 mm at most. The conclusion of the small effect of $A$ was supported by a repeated-measures ANOVA: although $A$ had a main effect on $\sigma_{obs_y}$, a pairwise comparison with the Bonferroni correction as the $p$-value adjustment method showed only one pair having a significant difference, between $A = 45$ and 60 mm ($p < 0.05$; $|1.319 - 1.415| = 0.096$ mm $< 1$ pixel). These results indicate that we could compute the touch-point distributions regardless of $A$ with a certain degree of accuracy in most cases.
+
+
+Figure 4. Regression between the variance in the $y$-direction ($\sigma_{obs_y}^2$) and the target size ($W^2$) in Experiment 2.
+
+
+Figure 5. Observed versus predicted success rates in Experiment 2.
+
+§ SUCCESS RATE
+
+Among the 3836 (= 3840 - 4) non-outlier data points, the participants successfully tapped the target in 3489 trials, or 90.95%. We found significant main effects of $A$ ($F_{3,33} = 4.124$, $p < 0.05$, $\eta_p^2 = 0.27$) and $W$ ($F_{4,44} = 45.03$, $p < 0.001$, $\eta_p^2 = 0.80$) on the success rate, but no significant interaction of $A \times W$ ($F_{12,132} = 0.681$, $p = 0.767$, $\eta_p^2 = 0.058$). Figure 5 shows the observed and predicted success rates. The largest difference was $77.60 - 67.53 = 10.07\%$ under the condition of $A = 20$ mm $\times$ $W = 2$ mm. This is comparable with Bi and Zhai's success-rate prediction [10], in which the largest difference (9.74%) was observed for $W = 2.4$ mm on a 1D vertical bar target. In Experiment 2, the $MAE$ was 3.266% for $N = 20$ data points.
+
+§ EXPERIMENT 3: 2D TASK WITH RANDOM AMPLITUDES
+
+The experimental designs were almost entirely the same as in Experiments 1 and 2, except that circular targets were used in Experiments 3 and 4. Here, the target size $W$ means the circle's diameter. The random target positions were set at least 11 mm from the edges of the screen. For Experiment 3, we used the same task design as in Experiment 1: $5_W \times 40_{\text{repetitions}} \times 12_{\text{participants}} = 2400$ data points.
+
+§ PARTICIPANTS
+
+Twelve university students, three female and nine male, participated in Experiments 3 and 4. Their ages ranged from 19 to 25 years ($M = 22.2$, $SD = 2.12$). They all had normal or corrected-to-normal vision, were right-handed, and were daily smartphone users. Their histories of smartphone usage ranged from 4 to 10 years ($M = 6.17$, $SD = 1.75$). For daily usage, nine participants used iOS smartphones, and three used Android smartphones. They each received US$45 in compensation for performing Experiments 3 and 4. Three of these participants had also performed Experiments 1 and 2.
+
+
+Figure 6. Regression between the variances in the $x$- and $y$-directions ($\sigma_{obs_x}^2$ and $\sigma_{obs_y}^2$, respectively) and the target size ($W^2$) in Experiment 3.
+
+
+Figure 7. Observed versus predicted success rates in Experiment 3.
+
+§ RESULTS
+
+Among the 2400 trials, we removed 33 outlier trials (1.375%) that had tap points at least 15 mm from the target center.
+
+§ TOUCH-POINT DISTRIBUTION
+
+A repeated-measures ANOVA showed that $W$ had significant main effects on $\sigma_{obs_x}$ ($F_{4,44} = 15.96$, $p < 0.001$, $\eta_p^2 = 0.59$) and $\sigma_{obs_y}$ ($F_{4,44} = 25.71$, $p < 0.001$, $\eta_p^2 = 0.70$). Shapiro-Wilk tests indicated that the touch points on the $x$- and $y$-axes followed Gaussian distributions for 55 (91.7%) and 53 (88.3%) of the 60 conditions, respectively ($p > 0.05$). Under 41 (68.3%) conditions, the touch points followed bivariate Gaussian distributions. Figure 6 shows the regression expressions for $\sigma_{obs_x}^2$ and $\sigma_{obs_y}^2$ versus $W^2$. From these results, we obtained the coefficients in Equation 5 on the $x$- and $y$-axes:
+
+$$
+\sigma_{reg_x} = \sqrt{0.0096 W^2 + 0.8079} \tag{15}
+$$
+
+$$
+\sigma_{reg_y} = \sqrt{0.0117 W^2 + 0.8076} \tag{16}
+$$
+
+Using Equations 15 and 16, we computed the touch-point distributions ($\sigma_{reg_x}$ and $\sigma_{reg_y}$) for each $W$. The differences between the computed $\sigma_{reg}$ and observed $\sigma_{obs}$ values were at most 0.05 and 0.2 mm for the $x$- and $y$-axes, respectively.
+
+§ SUCCESS RATE
+
+Among the 2367 (= 2400 - 33) non-outlier data points, the participants successfully tapped the target in 2017 trials, or 85.21%. As shown by the blue bars in Figure 7, the observed success rate increased from 50.84 to 99.58% with $W$, which had a significant main effect ($F_{4,44} = 59.24$, $p < 0.001$, $\eta_p^2 = 0.84$).
+
+We computed the predicted success rates for each $W$, as represented by the red bars, by applying Equations 15 and 16 in Equation 9. The differences from the observed success rates were all under 7%. These results show that we could accurately predict the success rate from the target size $W$, with $MAE = 3.082\%$ for $N = 5$ data points.
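The 2D prediction can be approximated numerically; since the bivariate Gaussian mass inside a circle has no simple closed form when $\sigma_x \neq \sigma_y$, the sketch below uses Monte Carlo sampling with the Equation 15 and 16 coefficients, again assuming zero touch-point bias (a simplification of the full Equation 9 computation).

```python
import math
import random

def predicted_success_2d(w_mm, n=100_000, seed=42):
    """Monte Carlo estimate of P(tap inside a circle of diameter W) for
    independent x ~ N(0, sigma_x^2) and y ~ N(0, sigma_y^2), zero bias."""
    sigma_x = math.sqrt(0.0096 * w_mm ** 2 + 0.8079)  # Equation 15
    sigma_y = math.sqrt(0.0117 * w_mm ** 2 + 0.8076)  # Equation 16
    r_sq = (w_mm / 2) ** 2
    rng = random.Random(seed)
    hits = sum(rng.gauss(0, sigma_x) ** 2 + rng.gauss(0, sigma_y) ** 2 <= r_sq
               for _ in range(n))
    return hits / n
```

With these coefficients the estimate is low for $W = 2$ mm and near-certain for $W = 10$ mm, matching the steep rise of the blue bars in Figure 7.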
+
+
+Figure 8. Regression between the variances in the $x$- and $y$-directions ($\sigma_{obs_x}^2$ and $\sigma_{obs_y}^2$) and the target size ($W^2$) for all data points ($N = 20$) in Experiment 4.
+
+§ EXPERIMENT 4: 2D TASK WITH PRESET AMPLITUDES
+
+We used the same task design as in Experiment 2: $4_A \times 5_W \times 16_{\text{repetitions}} \times 12_{\text{participants}} = 3840$ data points. Figure 1c shows the visual stimulus.
+
+§ RESULTS
+
+Among the 3840 trials, we removed 9 outlier trials (0.23%) having tap points at least 15 mm from the target center.
+
+§ TOUCH-POINT DISTRIBUTION
+
+For $\sigma_{obs_x}$, we found a significant main effect of $W$ ($F_{4,44} = 24.12$, $p < 0.001$, $\eta_p^2 = 0.69$), but not of $A$ ($F_{3,33} = 0.321$, $p = 0.810$, $\eta_p^2 = 0.028$). No significant interaction of $A \times W$ was found ($F_{12,132} = 0.950$, $p = 0.500$, $\eta_p^2 = 0.079$). For $\sigma_{obs_y}$, we found significant main effects of $A$ ($F_{3,33} = 3.833$, $p < 0.05$, $\eta_p^2 = 0.26$) and $W$ ($F_{4,44} = 48.35$, $p < 0.001$, $\eta_p^2 = 0.82$), but no significant interaction of $A \times W$ ($F_{12,132} = 1.662$, $p = 0.082$, $\eta_p^2 = 0.13$). Shapiro-Wilk tests showed that the touch points on the $x$- and $y$-axes followed Gaussian distributions for 224 (93.3%) and 218 (90.8%) of the 240 conditions, respectively ($p > 0.05$). Under 184 (76.7%) conditions, the touch points followed bivariate Gaussian distributions.
+
+Figure 8 shows the regression expressions for $\sigma_{obs_x}^2$ and $\sigma_{obs_y}^2$ versus $W^2$, with $R^2 = 0.8201$ and 0.7347, respectively, for $N = 20$ data points. When we merged the four $\sigma_{obs_x}^2$ and $\sigma_{obs_y}^2$ values for each $A$, we obtained $N = 5$ data points with $R^2 = 0.9137$ and 0.9425, respectively (the regression constants did not change). From the regression expression results, we obtained the coefficients of Equation 5:
+
+$$
+\sigma_{reg_x} = \sqrt{0.0111 W^2 + 0.9227} \tag{17}
+$$
+
+$$
+\sigma_{reg_y} = \sqrt{0.0188 W^2 + 0.8366} \tag{18}
+$$
+
+Using Equations 17 and 18, we computed the touch-point distributions ($\sigma_{reg_x}$ and $\sigma_{reg_y}$) under each condition of $A \times W$. The differences between the computed $\sigma_{reg}$ and observed $\sigma_{obs}$ values were less than 0.2 mm on the $x$-axis and less than 0.5 mm on the $y$-axis. The differences were comparatively greater for $\sigma_{obs_y}$ because $A$ significantly affected $\sigma_{obs_y}$.
+
+§ SUCCESS RATE
+
+Among the remaining 3831 (= 3840 - 9) non-outlier data points, the participants successfully tapped the target in 3145 trials, or 82.09%. We found a significant main effect of $W$ ($F_{4,44} = 120.0$, $p < 0.001$, $\eta_p^2 = 0.92$) on the success rate, but not of $A$ ($F_{3,33} = 2.100$, $p = 0.119$, $\eta_p^2 = 0.16$). The interaction of $A \times W$ was not significant ($F_{12,132} = 0.960$, $p = 0.490$, $\eta_p^2 = 0.080$). Figure 9 shows the observed and predicted success rates. The largest difference was $95.80 - 85.94 = 9.86\%$ for $A = 20$ mm and $W = 6$ mm. The $MAE$ was 3.671% for $N = 20$ data points.
+
+
+Figure 9. Observed versus predicted success rates in Experiment 4.
+
+
+Figure 10. Predicted success rate with respect to the target size $W$ .
+
+§ DISCUSSION
+
+§ PREDICTION ACCURACY OF SUCCESS RATES
+
+Throughout the experiments, the prediction errors were about as low as in Bi and Zhai's pointing tasks with an off-screen start [10]: 10.07% at most in our case (for $A = 20$ mm $\times$ $W = 2$ mm in Experiment 2), versus 9.74% at most in Bi and Zhai's case ($W = 2.4$ mm). As in their study, we found that the success rate approached 100% as $W$ increased; thus, the prediction errors tended to become smaller. Therefore, the model accuracy should be judged from the prediction errors for small targets.
+
+The largest prediction error in our experiments was under the condition of $W = 2$ mm in the 1D task. Similarly, the largest prediction error in Bi and Zhai's experiments was under the condition of $W = 2.4$ mm in the 1D (vertical target) task [10]. While Bi and Zhai checked the prediction errors under nine conditions in total [10] (three $W$ values for three target shapes), we checked the prediction errors under $5 + 20 + 5 + 20 = 50$ conditions (Experiments 1 to 4, respectively), which may have given more chances to show a higher prediction error. In addition, although we used 2 mm as the smallest $W$ for consistency with Bi and Zhai's study on the Bayesian Touch Criterion [9], such a small target is not often used in practical touch UIs. Therefore, the slightly larger prediction error in our results should be less critical in actual usage.
+
+We also found that our concern that the prediction accuracy might drop depending on the $A$ values was not a critical issue as compared with tasks using an off-screen start [10]. Hence, the comparable prediction accuracy observed in our experiments empirically shows that Bi and Zhai's model can be applied to pointing tasks with an on-screen start, regardless of whether the effect of $A$ is averaged (Experiments 1 and 3) or not (Experiments 2 and 4).
+
+Figure 10 plots the predicted success rate with respect to $W$, which can help designers choose the appropriate size for a GUI item. It also illustrates why measuring success rates for only a few $W$ values in costly user studies scales poorly. For example, from the data in Experiment 4, the success rates for $W = 7$ and 10 mm do not differ much, while the curve rises sharply from $W = 1$ to 6 mm. Hence, even if the error rate is measured for $W = 2$, 6, and 10 mm, for example, it would be difficult to accurately predict error rates for other $W$ values such as 3 mm. Therefore, without an appropriate success-rate prediction model, designers have to conduct user studies with fine-grained $W$ values, e.g., 1 to 10 mm at 1-mm intervals${}^{2}$.
+
+Regarding UI designs, how would prediction errors affect the display layout? In our worst case, for $W = 2$ mm (Experiment 2), the actual success rate was 77%, but the predicted rate was 67%. If designers want a hyperlink to have a 77% success rate, they might set $W = 2.4$ mm according to the model shown in Figure 10. This 0.4-mm "excess" height could be negligible. When a space has a height of 12 mm, however, designers can arrange only five 2.4-mm hyperlinks, whereas in actuality six links could fit in that space with the intended touch accuracy. Still, this is the worst case; for more practical $W$ values, this negative effect would become less significant as the prediction accuracy increases.
+
+§ ADEQUACY OF EXPERIMENTS
+
+In our experiments, the endpoint distributions were not normal in some cases. One might think that those results violate the assumption of the dual Gaussian distribution model. To visually check the distributions, Figure 11 shows the histograms and 95% confidence ellipses of tap positions (see the Supplementary Materials for all results, including Experiments 2 and 4). We can see that some conditions do not exhibit normal distributions, e.g., Figure 11c. This could be partly due to the small numbers of trials in our experiments: 40 repetitions per condition in Experiments 1 and 3, and 16 in Experiments 2 and 4. Still, according to the central limit theorem, it is reasonable to assume that the distributions would approach Gaussian distributions after a sufficient number of trials.
+
+We also checked the Fitts' law fitness. Using the Shannon formulation [35] with nominal $A$ and $W$ values, we found that the error-free $MT$ data showed excellent fits${}^{3}$ for Experiments 2 and 4, respectively, by using $N = 20$ data points:
+
+$$
+MT = 132.0 + 90.29 \times \log_2(A/W + 1), \quad R^2 = 0.9807 \tag{19}
+$$
+
+$$
+MT = 114.3 + 97.91 \times \log_2(A/W + 1), \quad R^2 = 0.9900 \tag{20}
+$$
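These fits can be used directly: the sketch below encodes the Shannon formulation with the Equation 19 and 20 constants and derives the index of performance $IP = 1/b$ (with $b$ in ms/bit, $1000/b$ gives bits/s).

```python
import math

def fitts_mt(a_ms, b_ms_per_bit, amplitude_mm, width_mm):
    """Shannon formulation of Fitts' law: MT = a + b * log2(A/W + 1), in ms."""
    return a_ms + b_ms_per_bit * math.log2(amplitude_mm / width_mm + 1)

# Constants from Equations 19 (Experiment 2) and 20 (Experiment 4)
mt_hardest = fitts_mt(132.0, 90.29, 60, 2)  # largest A, smallest W in Exp. 2
ip_exp2 = 1000 / 90.29                      # index of performance, bits/s
ip_exp4 = 1000 / 97.91
```

Rounding `ip_exp2` and `ip_exp4` to two decimals reproduces the 11.08 and 10.21 bits/s throughput figures discussed in the text.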
+
+
+Figure 11. Histograms and 95% confidence ellipses using all error-free data in Experiments (a-c) 1 and (d-f) 3. For 1D tasks, the histograms show the frequencies of tap positions, the dashed curves show the normal distributions using the mean and $\sigma_{obs_y}$ data, the two red bars are the borderlines of the target, and the black bar shows the mean of tap positions. For 2D tasks, blue dots are tap positions, light blue ellipses are 95% confidence ellipses of tap positions, and red dashed circles are target areas. For all tasks, the 0 mm positions on the $x$- and $y$-axes are aligned to the centers of targets.
+
+The indexes of performance, $IP$ ($= 1/b$), were 11.08 and 10.21 bits/s, close to those in Pedersen and Hornbæk's report on error-free $MT$ analysis (11.11-12.50 bits/s for 1D touch-pointing tasks) [41]. Therefore, we conclude that both participant groups appropriately followed our instruction to balance speed and accuracy.
+
+§ INTERNAL AND EXTERNAL VALIDITY OF PREDICTION PARAMETERS
+
+Because the main scope of our study did not include testing the external validity of the prediction parameters in equations like Equation 18, it is sufficient that the observed and predicted success rates matched internally within each participant group, as shown by our experimental results. Yet, it is still worth discussing the external validity of the prediction parameters in the hope of gaining a better understanding of the dual Gaussian distribution hypothesis.
+
+A common way to check external validity is to apply obtained parameters to data from different participants (e.g., [11]). Bi and Zhai measured the parameters of Equations 6 and 7 in their experiment on the Bayesian Touch Criterion [9]. Those parameters were then used in Equations 9 and 11 to predict the success rates [10]. Because the participants in those two studies differed, the parameters of Equations 6 and 7 could have had external validity. Bi, Li, and Zhai stated, "Assuming finger size and shape do not vary drastically across users, $\sigma_a$ could be used across users as an approximation." The reason was that the $\sigma_a$ values measured in their 2D discrete pointing tasks were suitable for a key-typing task performed by a different group of participants [8].
+
+The top panel of Figure 12 shows the predicted success rates in the 1D horizontal-bar pointing tasks. In addition to the prediction data reported in Figures 3 and 5, we also computed the predicted success rates by using the $\sigma_{obs_y}$ values measured in the 2D tasks of Experiments 3 and 4. The actual success rate in Experiment 1 under the condition of $W = 2$ mm was 71.55% (Figure 3), and those in Experiment 2 ranged from 71.73 to 77.60% (Figure 5). Therefore, we conclude that using the $\sigma_{obs_y}$ values measured in the 2D tasks would allow us to predict more accurate success rates. Here, using Bi and Zhai's generic $\sigma_{obs_y}$ value [10] allows us to predict the success rate (60.66%), but it is not as close to the actual data as ours. Note that three students participated on both days in our study; thus, this is not a complete comparison as an external validity check.
+
+${}^{2}$ In fact, 1-mm intervals are still not sufficient: the predicted success rate "jumps up" from 41.3 to 67.0% for $W = 2$ and 3 mm, respectively.
+
+${}^{3}$ Results for the effective width method [13, 35] and FFitts law [8], taking failure trials into account, were also analyzed. Because of the space limitation, we decided to focus on success-rate prediction in this paper.
+
+
+Figure 12. Comparison of predicted success rates from our data and Bi and Zhai's [9] for (top) 1D and (bottom) 2D tasks.
+
+We also tried to determine whether the success rates in the 2D tasks could be predicted from Bi and Zhai's data, as shown in the bottom panel of Figure 12. Because Bi and Zhai's data for $\sigma_{a_x}$ and $\sigma_{a_y}$ were larger than ours, their predicted success rates tended to be lower. Furthermore, because the actual success rate was over 50% for $W = 2$ mm in Experiment 3 (Figure 7), Bi and Zhai's prediction parameters could not be used to predict the success rates in our experiments. Note that using Bi and Zhai's prediction parameters for the index finger [10] would not influence this conclusion.
+
+One possible explanation for why Bi and Zhai's parameters were appropriate for their predictions but not for ours is the participants' ages. While the age ranges of their participants were 26-49 years for parameter measurement [9] and 28-45 years for success-rate prediction [10], our participants were university students with an age range of 19-25 years. Assuming that the cognitive and physical skills and sensory abilities of older adults are relatively lower than those of younger persons [47], it is reasonable that the $\sigma_{a_x}$ and $\sigma_{a_y}$ values measured in our experiments were smaller than those in Bi and Zhai's. This result supports Bi, Li, and Zhai's hypothesis that $\sigma_a$ may vary with the individual's finger size or motor impairment (e.g., tremor, or lack thereof) [8]. The fact that the model parameters $\alpha$ and $\sigma_a$ can change depending on the user group, and thus affect the success-rate prediction accuracy, is an empirically demonstrated limitation on the generalizability of the dual Gaussian distribution hypothesis. This is one of the novel findings of our study, as it has never been shown with such evidence.
+
+To accurately predict the success rate when the age range of the main users of an app or the main visitors to a smartphone website is known (e.g., teenagers), we suggest that designers choose appropriate participants for measuring the prediction parameters $\alpha$ and $\sigma_a$. Such a methodology of designing UIs differently according to the users' age has already been adopted in websites and apps. For example, Leitão and Silva listed various apps having large buttons and swipe widgets suitable for older adults, and perhaps also for users with presbyopia (Figures 1-11 in [31]). On YouTube Kids [1], the button size is auto-personalized depending on the age listed in the user's account information. Our results can help such optimization and personalization according to the characteristics of target users.
+
+§ LIMITATIONS AND FUTURE WORK
+
+Our findings are somewhat limited by the experimental conditions, such as the $A$ and $W$ values used in the tasks. In particular, much longer $A$ values have been tested in touch-pointing studies, e.g., 20 cm [34]. Hence, our conclusions are limited to small screens. The limited range of $A$ values provides one possible reason why we observed only one pair having a significant difference in $\sigma_{obs}$ (between $A = 45$ and 60 mm in Experiment 2). If we tested much longer $A$ values, the ballistic portion of rapidly aimed movements might affect $\sigma_{obs}$ [15, 56] and change the resultant prediction accuracy. In addition, Bi and Zhai measured prediction parameters both for the thumb in a one-handed posture and for the index finger [9], and they also measured the success rates in 1D pointing with a vertical bar target [10]. Conducting user studies under such conditions would provide additional contributions.
+
+Our experiments required the participants to balance speed and accuracy. In other words, the participants could spend sufficient time if necessary. The success rate has been shown to vary nonlinearly depending on whether users shorten the operation time or aim carefully [53, 54]${}^{4}$. Our experimental instructions covered just one case among various situations of touch selection.
+
+§ CONCLUSION
+
+We discussed the applicability of Bi and Zhai’s success-rate prediction model [10] to pointing tasks starting on-screen. The potential concern about an on-screen start in such tasks was that the movement distance $A$ is both implicitly and explicitly defined, and previous studies suggested that the $A$ value would influence the endpoint variability. We empirically showed the validity of the model in four experiments. The prediction error was at most 10.07% among 50 conditions in total. Our results indicate that designers and researchers can accurately predict the success rate by using a single model, regardless of whether a user taps a certain GUI item by moving a finger to the screen or keeping it close to the surface as in keyboard typing. Our findings will be beneficial for designing better touch GUIs and for automatically generating and optimizing UIs.
+
+${}^{4}$ For more examples of nonlinear relationships in the speed-accuracy tradeoff on tasks other than pointing, see [51].
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f47c22c2d627247d30d7f137850a6572336b289
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,537 @@
+# Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars
+
+Febi Chajadi* Md. Sami Uddin† Carl Gutwin‡
+
+University of Saskatchewan
+
+## Abstract
+
+Learnability is important in graphical interfaces because it supports the user's transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable - but current "flat" and "subtle" designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. Little is known, however, about the effects of visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons' representations.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-User Interface Design
+
+## 1 INTRODUCTION
+
+Learnability is important in graphical user interfaces because it is an important part of a user's transition from novice to expert. Many kinds of learning can occur with an interface, but for WIMP interfaces (systems with windows, icons, menus, and pointers), one main way that users improve their performance is by learning the commands associated with icons in toolbars and ribbons, and where those icons are located. Therefore, a goal in the visual design of icons is to help the user remember the icon and the underlying command. However, other goals in icon design may interfere with an icon's ability to communicate its intended meaning to the user. One of these goals is the desire for visual consistency and cohesiveness - the idea that all of the icons in an interface should repeat the same visual variables (such as colour, contrast, weight, shape, angle, and size) in order to tie together the visual elements of the interface and give the system a recognizable style. For example, Figure 1 shows icons presented as good examples of visual consistency in icon design. These icons also illustrate a second design goal that is common in many commercial systems - subtle and "flat" icon design, in which icons are monochrome and have relatively low contrast.
+
+Although these types of icons are popular, the similarity across several visual variables also reduces distinctiveness, which could hinder visibility, learnability, and memorability (in the limit, if all of a UI's icons were identical grey rectangles, they would be difficult to remember). More generally, it seems likely that icon learnability could be affected by the visual attributes of the icons. This issue has been raised by some users who have noticed the potential problems of "flat" icon design: for example, forum posts often complain that icons are too similar (Figure 2). Recently, Microsoft has moved away from flat icons to more colourful and non-uniform imagery reminiscent of earlier guidance on graphical design [43].
+
+
+Figure 1: Two examples from web posts on "visually cohesive icon design" [27] and "minimalist icon design" [70]. Visual variables such as colour, contrast, weight, and size are repeated across the entire set.
+
+
+
+Figure 2: Blog post: "1995, SVGA: colourful and distinguishable icons. 2019, 4K and millions of colours: flat monochrome icons"
+
+Previous research into visual variables suggests an increase in both noticeability and memorability when multimedia objects employ variations in colour and shape [8, 67, 74]. However, there are limits to how shape may contribute to better learnability. For example, an icon that cannot clearly represent its command (e.g., icons for abstract or complex commands such as "Analyse" or "Encoding") may cause confusion and impede learning despite the benefits of using representative icons [25]. Designers need to know whether the visual properties of icons can affect learnability and memorability. Although there is information about the usability of certain visual properties (e.g., suggested contrast ratios between text and background [4]), little is known about effects on learning.
+
+---
+
+*e-mail: febi.chajadi@usask.ca
+
+†e-mail: sami.uddin@usask.ca
+
+‡e-mail: gutwin@cs.usask.ca
+
+Graphics Interface Conference 2020
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+---
+
+To address this gap, we carried out two studies to compare how quickly people could learn and select icons with varying degrees of visual distinctiveness. Our first study tested the effects of meaningfulness in the icon's representation (comparing abstract and concrete imagery) and the effects of colour (comparing monochrome icons to icons of different colours). We also tested a fifth icon set that had substantial variation in both shape and colour. Participants were asked to find and select target commands from a toolbar with 60 icons, repeated over five blocks. The results of our first study were:
+
+- Icons with concrete imagery were learned much faster than abstract icons;
+
+- Adding colour to either the concrete or the abstract icon set did not lead to improved learning;
+
+- Varying both shape and colour did not improve learnability.
+
+Our second study tested the effects of three factors in a series of planned comparisons. We assessed meaningfulness (icons were either identical squares or concrete images), familiarity (concrete icons were either unfamiliar shapes or familiar images), and colour (squares could be either monochrome or coloured). Results from our second study were:
+
+- The addition of colour to the identical grey squares did not improve learning;
+
+- Unfamiliar shapes (Chinese characters for users with no familiarity with them) were much harder to learn than familiar shapes (everyday objects);
+
+- There was no difference in learning between the Chinese characters and the grey squares, even though the characters were far more differentiable in terms of shape.
+
+Our studies provide new information about how visual distinctiveness affects icon learning and retrieval. Surprisingly, the low visual distinctiveness of "flat" and subtle icon designs does not appear to make them more difficult to find or remember. Instead, having a concrete visual representation in the icon was shown to be extremely valuable for learning the icons. Based on our participants' comments, we suggest that this property better allows users to create a "memory hook" for the association between the icon and the command. Our studies contribute to a better understanding of how visual variables affect the process of learning icon locations, and provide a clear suggestion to designers that concrete images are likely to be more important than other forms of visual distinctiveness.
+
+## 2 RELATED WORK
+
+### 2.1 Visual Distinctiveness in Graphical Icons
+
+Since the invention of GUIs, researchers have studied icon design and have identified features that can be divided into two broad categories: visual and cognitive [53]. Visual features of icons include colour, size and shape. Colour is one of the most prominent visual traits that can easily separate one icon from another. Despite our ability to see a huge number of colours, however, most people can only differentiate and remember about five to eight colours in a visual workspace [68]. One of the main uses for colour in interactive systems is in highlighting items, e.g., when searching [12,14]. Size is another visual feature that makes icons distinguishable. Although a common use of the size feature is to make an interface cohesive (e.g., similar-sized icons used throughout a GUI; Figure 1), changing the size of an icon can make it distinct (e.g., MS Office [48] uses multiple sizes of icons). Besides colour and size, the shape of an icon is a strong visual factor that represents the underlying meaning [53]. Shapes can make icons more easily discernible as people can identify far more shapes than colours [69].
+
+Cognitive features of icons are related to people's cognition and memory. Researchers mainly focus on five subjective aspects of icons: familiarity, concreteness, complexity, meaningfulness, and semantic distance [47] (see Ng et al.'s [53] and Moyes et al.'s [52] reviews for comparative summaries). Among these features, however, the success of making an icon distinguishable depends on how naturally it can depict its underlying function [10]. Although these visual variables have been studied extensively, less is known about how they contribute to learning and retrieval of icons in GUIs.
+
+### 2.2 Psychology of Learning and Retrieval
+
+Learning and recall are two natural yet powerful human abilities. Researchers in psychology have extensively studied human memory [5, 6, 13, 20, 21] and explored how these learning and retrieval skills are developed [3, 56, 72]. Siegel et al. [65] suggested that a combination of landmark, route, and survey knowledge contributes to the development of spatial learning and retrieval skills. People naturally begin learning objects in a new area through visual inspection [36] - which forms landmark knowledge [24]. Once familiar with the area, people build route knowledge [73] and start retrieving objects from already learned locations. With further experience, people acquire survey knowledge - where they can recall objects solely from memory, without requiring any visual search.
+
+Anderson [3] and Fitts et al. [26] suggested that learning and retrieving occur in three stages: cognitive, associative and autonomous. These stages of skill acquisition can be observed in GUIs. First, in the cognitive stage, users learn the contents of an interface and visually look for commands. Second, in the associative stage, users already know the contents of the interface and begin to remember the commands in the UI. As a result, they can reach those locations more quickly. However, in the associative stage, users still perform local visual search after reaching the vicinity of a command. Last, in the autonomous stage, users can recall a command's location from memory and visit it, without searching for it visually.
+
+### 2.3 Facilitating Learning and Retrieval of Icons in GUIs
+
+Although learnability (the idea of making an interface easily learnable and memorable) is frequently considered as a vital part of usability [22, 54, 64], learnability is difficult to predict and measure [31, 61]. In order to facilitate the learning of icons and commands in GUIs, researchers have followed two main strategies: spatial interfaces and landmarks.
+
+Spatial interfaces. Spatial memory [42, 56] is a human cognitive ability responsible for learning and remembering the locations of objects and places. Researchers have tried to exploit it by laying out interfaces in ways that are spatially stable [16, 23, 62]. For example, Scarr et al.'s [58, 60] CommandMap showed a spatially stable icon arrangement in desktops, yielding better learning and recall of icons, even for real tasks [59], because users could leverage spatial memory [17, 58, 78]. Similarly, Gutwin et al. [32] and Cockburn et al. [15] showed that a stable layout composed of all commands can increase recall efficiency compared to hierarchical ribbons or menus. Similar to desktops, spatially stable icons can improve learning and recall in multi-touch tablets [30, 33, 34, 77], smartwatches [45], smartphones [83, 84], digital tabletops [80], and even in VR [28].
+
+Landmarks in GUIs. Landmarks are easily identifiable objects and features that are different from their surroundings [46] and which can act as anchors for performing spatial activities such as navigation and object learning. Similar to their benefits in real life, landmarks have exhibited potential in GUIs [2,75,76]. Researchers have exploited landmarks that are already present in the GUI environment: for example, the corners of a screen [34, 80] or the bezel of a device [63] can provide strong landmarks for icons near those locations. However, these natural landmarks often become useless in large interfaces (e.g., the middle area of a large screen) or a GUI with a large number of icons, because no landmark is present near those icons. In such cases, 'artificial landmarks' [29,78] can aid learning and recall. Studies suggested that coloured blocks [2,78], images as the background of a menu [78] and meaningful yet abstract icons [50, 79] can be landmarks in GUIs to benefit spatial memory development.
+
+
+
+Figure 3: Example target words in each interface.
+
+Apart from spatial memory and landmarks, researchers have also studied features such as luminance [51] and the 'articulatory distance' [9] of icons for learnability. Others found icons representing accurate underlying meaning beneficial for learning and recall [7, 25, 39, 44, 57, 66]. Studies have shown that abstract and ambiguous icons demand more cognitive processing to recognize [40] and often hinder users from quickly learning them. However, the primary question - how visual variables of icons impact learning and recall - remains unanswered.
+
+### 2.4 Visual Distinctiveness of Icons
+
+We carried out two studies to investigate the effects of visual distinctiveness of icons on learnability. We manipulated two visual variables - shape and colour - but because the shape of an icon can also be representational, we also consider the cognitive variable of meaning in our studies [10, 53].
+
+Meaning: The meaning of an icon refers to the concept or idea that the icon's image conveys. Icons in our studies varied by the types of underlying meaning they possess:
+
+- Meaningless: Icon images have no connection to real-world objects or to their underlying commands (e.g., a grey square for the command "Settings").
+
+- Contextual: Icon images are representational of underlying commands, but require interpretation if unfamiliar (e.g., a summation symbol for the command "Formula").
+
+- Familiar: Icon images are pictorial and match their underlying command (e.g., an image of a calculator for the command "Calculator").
+
+Shape Distinctiveness: Shape distinctiveness, that is, separability of shapes, is difficult to define precisely. Prior work has explored aspects of this concept: Julesz [37] identified shape features that early visual systems detect, Burlinson et al. [11] proposed that open or closed shapes influence perceptual processing, and Smart et al. [67] investigated the perception of filled, unfilled and open shapes in scatterplots. In this work, we define levels of shape separability with respect to trends observed in modern icon design.
+
+- None: Icons have no differences in shape (e.g. icons are all identical circles).
+
+- Medium: Icons use different shapes, but are thematically uniform for visual consistency (similar sizes, weights, line styles, and borders).
+
+- High: Icon shapes are distinctly different from one another.
+
+Colour Distinctiveness: We consider only basic levels of colour distinctiveness, because people's ability to distinguish colours is much lower than the ability to distinguish shapes [69].
+
+- None (Monochrome): All icons use only a single colour.
+
+- Medium (Colour): Different icons use different colours.
+
+- High (Multi-colour): Icons use several different colours.
+
+Both the visual and cognitive variables operate in the context of the spatial arrangement of the icons - the two studies reported in the following sections investigate how different combinations of the levels and factors above (summarized in Table 1) affect users' ability to learn the spatial location of the icon corresponding to each command.
+
+## 3 Study 1 Methods
+
+### 3.1 Interfaces
+
+We developed five custom web-based desktop icon selection interfaces, each consisting of sixty icons (44 px in size) arranged in three equal rows and presented in a standard ribbon-toolbar structure. All icon toolbars appeared at the top of the interface and allowed two types of mouse-based interaction: selection and hover. Names of the icons were not shown in the UI, but could be seen in a tooltip after hovering the mouse over an icon for 300 ms. Icons were created using the GIMP image editor, using source images from freely-available icon sets such as material.io and icons8. We used five experimental interfaces in Study 1, described below and shown in Figure 4.
+
+Concrete. The Concrete interface used monochrome icons similar to those found in standard mobile and desktop environments. The icons were chosen to avoid images of real-world objects, and therefore had contextual meaning. Although icons varied in shape, the level of distinctiveness was reduced by adding a circular grey background with a 1-pixel black border.
+
+Concrete+Colour. The Concrete+Colour interface used icons similar to Concrete in terms of shape distinctiveness and meaning (no icons were repeated). Icons were given a colour from a set of twelve unique colours; colours were equally distributed among the 60 icons. Colour brightness was adjusted to make icons with different colours clearly differentiable, and colours were not repeated for neighboring icons. The addition of colour provides the user with new landmarks that could be valuable for remembering locations (e.g., "it was the blue icon next to the red icon").
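The paper does not give the colour-assignment algorithm; one simple scheme that satisfies both constraints (equal colour counts, no repeats between neighbouring icons) is a round-robin placement over a shuffled colour list, sketched below. The 3x20 row-major grid follows Section 3.1; all function and parameter names here are ours:

```python
import random

def assign_colours(n_icons=60, n_colours=12, cols=20, seed=1):
    """Distribute n_colours evenly over a row-major icon grid so that
    no two horizontally or vertically adjacent icons share a colour."""
    order = list(range(n_colours))
    random.Random(seed).shuffle(order)  # randomize colour identities
    # Round-robin placement: horizontal neighbours differ by 1 mod 12,
    # vertical neighbours by cols mod 12 (= 8), so neither can repeat.
    return [order[i % n_colours] for i in range(n_icons)]
```

Each of the twelve colours appears exactly five times among the sixty icons, and no icon shares a colour with its grid neighbours.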
+
+Abstract. The Abstract interface used meaningless monochrome icons consisting of circle and octagon shapes that were augmented with partial or full crossing lines, gaps in the outline, or dots in the centre of the icon. Icons in this set provided medium shape distinctiveness: each shape was different, but the set shared several basic visual properties.
+
+Abstract+Colour. The Abstract+Colour interface used icons that were similar in design to Abstract, but used a square base outline. Colours were added to icons as described above for the Concrete+Colour interface.
+
+Mixed. The Mixed interface used icons with high shape distinctiveness (variations in size, shape, weight, and texture) and high colour distinctiveness (icons used a variety of colours). These variations provide users with two different types of landmark to assist their location memory. The icons were adapted from a real-world set, and had contextual meaning.
+
+
+
+Figure 4: Screenshots of the five interfaces used in Study 1. Target objects are outlined in red.
+
+### 3.2 Tasks and Stimuli
+
+The study consisted of a series of trials in each of the five interfaces, where each trial involved locating and selecting an icon. This task is commonly and frequently performed in toolbar- and ribbon-based interfaces such as Microsoft Word 2007 [48], Adobe Photoshop [1], or the GIMP graphics editor [71]. Every trial began by displaying a target word cue in the middle of the screen that remained visible for the entire trial, and participants were asked to find and select the corresponding icon from the toolbar. Participants could see the name of an icon as a tooltip after hovering over it for 300 ms. Each correct selection was indicated by a green flash at the selected location; red flashes were used to indicate incorrect selections. After selecting the correct icon, participants could proceed to the next trial by clicking on a 'Next Trial' button that appeared in the middle of the screen. The button centred a participant's gaze and cursor position, and started the trial timer. For each interface, 9 out of the 60 icons were used as targets; these were sampled from three general areas of the toolbar [78]: 3 from the corner regions (first and last three columns), 4 from the edges (top and bottom rows) and 2 from the middle row. No target position was repeated among the five interfaces. Target positions in each interface were repeated across all participants, in a random order of appearance.
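The target-sampling scheme above can be sketched as follows, treating the toolbar as a 3x20 row-major grid and the three regions as mutually exclusive (the paper does not state whether cells lying in both a corner region and an edge row were treated as corner cells; we assume so, and all names are ours):

```python
import random

ROWS, COLS = 3, 20  # toolbar grid: 3 rows of 20 icons

def sample_targets(seed=0):
    """Sample 9 target positions: 3 from the corner regions (first and
    last three columns), 4 from the edge rows (top and bottom, outside
    the corner regions), and 2 from the middle row (outside the corner
    regions)."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(ROWS) for c in range(COLS)]
    corner = [(r, c) for (r, c) in cells if c < 3 or c >= COLS - 3]
    edge = [(r, c) for (r, c) in cells
            if r in (0, ROWS - 1) and (r, c) not in corner]
    middle = [(r, c) for (r, c) in cells
              if r == 1 and (r, c) not in corner]
    targets = (rng.sample(corner, 3) + rng.sample(edge, 4)
               + rng.sample(middle, 2))
    rng.shuffle(targets)  # randomize order of appearance
    return targets
```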
+
+### 3.3 Participants and Apparatus
+
+Twenty participants (ten men, nine women, one non-binary), ages 20-44 (mean 26, SD 5.4), were recruited from a local university and received a $15 honorarium. All participants had normal or corrected-to-normal vision, and none reported a colour-vision deficiency. All participants were highly familiar with desktop and mobile applications (up to 10 hrs/wk (3), 20 hrs/wk (4), 30 hrs/wk (1), and over 30 hrs/wk (12)). The study took 90 minutes. Ten participants reported primarily issuing commands by navigating GUIs with mice and ten reported using keyboard shortcuts. Overall, participants were familiar with keyboard shortcuts (1-5 shortcuts (7), 6-10 shortcuts (9), 11-15 shortcuts (2), 16-20 shortcuts (1), and over 20 shortcuts (1)).
+
+Study software (used in Study 1 and 2) was written in JavaScript, HTML and CSS, and ran in the Chrome browser. The study used a 27-inch monitor at 1920x1080 resolution, running on a Windows 10 PC with an Nvidia GTX 1080Ti graphics card. The system recorded all performance data; subjective responses were collected with SurveyMonkey.
+
+### 3.4 Procedure and Study Design
+
+At the beginning of the study session, participants completed an informed consent form and were given an overview of the study. After filling out a demographic questionnaire, participants completed a practice round (4 blocks of 4 trials) with an icon set not used in the main study. They then completed 5 blocks of 9 trials for each of the five interfaces. The study followed a within-participant design, with the interfaces counterbalanced using a Latin square. After each interface, participants completed NASA-TLX [35] questionnaires; after all interfaces, participants answered final questions about their preferences. Last, they reported their strategies for remembering target locations.
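The counterbalancing can be sketched with a cyclic Latin square, in which each interface appears exactly once in every presentation position (the paper does not specify which Latin square variant was used; this cyclic construction is an assumption for illustration):

```python
def latin_square(conditions):
    """Cyclic Latin square: row i is the condition list rotated by i,
    so every condition occupies every ordinal position exactly once."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

ORDERS = latin_square(["Concrete", "Concrete+Colour", "Abstract",
                       "Abstract+Colour", "Mixed"])
```

With 20 participants and 5 orders, each row of the square would be used by four participants.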
+
+The study used a within-participants design with three factors (meaning, shape distinctiveness, and colour distinctiveness) that were used for a series of planned comparisons. The dependent measures were completion time, hover amounts, errors, and subjective responses. Our main hypotheses were:
+
+- H1: Increased colour distinctiveness will reduce completion time and hover amounts (Abstract and Concrete vs. Abstract+Colour and Concrete+Colour);
+
+- H2: Increased meaning will reduce completion time and hover amounts (Abstract and Abstract+Colour vs. Concrete and Concrete+Colour);
+
+- H3: Increased shape distinctiveness will reduce completion time and hover amounts (Mixed vs. Concrete+Colour).
+
+- H4: Increasing both colour distinctiveness and shape distinctiveness will lead to a larger reduction in completion time and hover amounts (Mixed vs. Concrete).
+
+## 4 STUDY 1 RESULTS
+
+For all studies, we report the effect size for significant RM-ANOVA results as general eta-squared: ${\eta }^{2}$ (considering .01 small, .06 medium, and > .14 large [18]), and Holm correction was performed for post-hoc pairwise t-tests.
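The post-hoc procedure can be illustrated with a short sketch of the Holm step-down correction (a generic implementation for illustration, not the authors' analysis code):

```python
def holm_correction(p_values, alpha=0.05):
    """Holm step-down procedure: test p-values in ascending order,
    comparing the k-th smallest (k = 0, 1, ...) against alpha / (m - k);
    stop at the first failure, retaining all larger p-values as
    non-significant."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            significant[i] = True
        else:
            break  # every remaining (larger) p-value also fails
    return significant
```

For example, `holm_correction([0.01, 0.04, 0.03])` retains only the first comparison as significant at alpha = 0.05.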
+
+### 4.1 Completion Time
+
+Completion time was measured from the appearance of a word cue to the selection of a correct icon; no data was removed due to outlying values. Mean completion times for the five icon sets are shown in Figure 5.
+
+Our first planned comparisons (H1 and H2) involved the effects of colour distinctiveness and meaning. A 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) showed effects of Meaning ($F_{1,19} = 89.60$, $p < 0.0001$, ${\eta }^{2} = 0.54$) and Block ($F_{1,19} = 336.88$, $p < 0.0001$, ${\eta }^{2} = 0.71$) on completion time, but no effect of Colour Distinctiveness ($F_{1,19} = 2.99$, $p = 0.10$). There were no interactions between the factors (all $p > 0.10$).
+
+Follow-up tests for Meaning showed significant differences (all $p < 0.05$) between the concrete icon sets (Concrete and Concrete+Colour) and the abstract sets (Abstract and Abstract+Colour). Follow-up tests for Block showed differences between each successive pair except blocks 3 and 4.
+
+Our third planned comparison (H3) used the Mixed and Concrete+Colour conditions to see whether shape distinctiveness would improve performance in icon sets that are already distinctive in terms of colour. However, a one-way ANOVA showed no difference ($F_{1,19} = 0.086$, $p = 0.77$). Our fourth comparison (H4) used the Mixed and Concrete interfaces to see whether having two distinctive visual variables would improve performance (i.e., Mixed is more differentiable than Concrete in terms of both colour and shape). However, once again a one-way ANOVA showed no difference ($F_{1,19} = 0.03$, $p = 0.86$).
+
+
+
+Figure 5: Mean trial completion time, by interface (±s.e.).
+
+### 4.2 Hovers
+
+We measured the number of hovers (where the participant held the mouse for 300 ms over a target, showing the name) as a more sensitive measure of progress through the stages of cognitive, associative, and autonomous performance. As a participant moves from the cognitive to the associative stage, there should be a reduction in the number of icons that they need to inspect. Mean hovers per trial are shown in Figure 6. Results are very similar to those reported above for completion time: a 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) showed effects of Meaning ($F_{1,19} = 117.5$, $p < 0.0001$, ${\eta }^{2} = 0.66$) and Block ($F_{1,19} = 353.65$, $p < 0.0001$, ${\eta }^{2} = 0.65$) on the number of hovers, but no effect of Colour Distinctiveness ($F_{1,19} = 4.36$, $p = 0.051$) (H1 and H2). There was also an interaction between Meaning and Colour ($F_{1,19} = 5.61$, $p < 0.05$); as shown in Figure 6, the Abstract+Colour condition has fewer hovers than Abstract, whereas Concrete+Colour has more hovers than Concrete.
+
+Follow-up tests for Meaning again showed significant differences (all $p < 0.05$) between both concrete icon sets (Concrete and Concrete+Colour) and both abstract sets (Abstract and Abstract+Colour). Follow-up tests for Block showed differences between successive pairs except for blocks 3 and 4.
+
+
+
+Figure 6: Mean hover amounts, by interface (±s.e.).
+
+### 4.3 Errors
+
+We measured errors as the number of incorrect clicks before choosing the correct item. In some trials, participants clicked instead of hovering, leading to unusually high numbers of errors; we therefore removed 32 outliers (out of 4500 total trials) that were more than 3 s.d. from the mean. Overall errors were low (an average of 0.032 errors per click). A 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) to look for effects on errors showed a main effect of Block ($F_{1,19} = 12.2$, $p < 0.05$, ${\eta }^{2} = 0.046$) and a main effect of Meaning ($F_{1,19} = 5.16$, $p < 0.05$, ${\eta }^{2} = 0.18$). Follow-up t-tests showed that abstract icons had a significantly ($p < 0.05$) higher error rate (0.048 errors per trial) than concrete icons (0.018 errors per trial).
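The 3-s.d. outlier criterion can be sketched as follows (an illustrative filter; whether the mean and s.d. were computed per condition or over all trials is not stated in the paper):

```python
from statistics import mean, stdev

def remove_outliers(values, n_sd=3.0):
    """Drop observations more than n_sd standard deviations
    from the sample mean."""
    mu, sd = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) <= n_sd * sd]
```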
+
+### 4.4 Subjective Responses and Comments
+
+We used the Aligned Rank Transform [82] to perform RM-ANOVA on the NASA-TLX responses. As shown in Figure 7, mean scores of all TLX measures followed a trend similar to completion time. We found significant effects for all subjective measures. Follow-up t-tests revealed significant differences (all $p < 0.05$) between the two conditions with abstract icons (Abstract and Abstract+Colour) and the three conditions with concrete icons (Concrete, Concrete+Colour and Mixed) for every measure except physical effort. Significant effects were also found (all $p < 0.05$) in physical effort between Abstract and the three conditions with concrete icons, as well as between Abstract and Abstract+Colour in perceived success.
+
+| Study | Condition | Meaning: Meaningless | Meaning: Contextual | Meaning: Familiar | Shape: None | Shape: Medium | Shape: High | Colour: None (Monochrome) | Colour: Medium (Colour) | Colour: High (Multi-colour) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| S1 | Concrete | | ✘ | | | ✘ | | ✘ | | |
+| S1 | Concrete+Colour | | ✘ | | | ✘ | | | ✘ | |
+| S1 | Mixed | | ✘ | | | | ✘ | | | ✘ |
+| S1 | Abstract | ✘ | | | | ✘ | | ✘ | | |
+| S1 | Abstract+Colour | ✘ | | | | ✘ | | | ✘ | |
+| S2 | Square | ✘ | | | ✘ | | | ✘ | | |
+| S2 | Square+Colour | ✘ | | | ✘ | | | | ✘ | |
+| S2 | UnfamiliarShape | ✘ | | | | | ✘ | ✘ | | |
+| S2 | FamiliarShape | | | ✘ | | ✘ | | ✘ | | |
+
+Table 1: Icon properties of the interfaces in Study 1 & 2.
+
+
+
+Figure 7: Mean NASA-TLX questions responses for Study 1 (±s.e.).
+
+Overall, participants preferred the Mixed and Concrete+Colour conditions. They also perceived them as the easiest and fastest conditions, and the ones in which they made the fewest errors. Results of the preference survey are summarized in Table 2.
+
+| Interface | Easiest | Fastest | Fewest Errors | Preference |
+| --- | --- | --- | --- | --- |
+| Abstract | 0 | 0 | 0 | 0 |
+| Abstract+Colour | 3 | 3 | 3 | 3 |
+| Concrete | 3 | 3 | 3 | 4 |
+| Concrete+Colour | 6 | 7 | 7 | 7 |
+| Mixed | 8 | 7 | 7 | 6 |
+
+Table 2: Summary of preference survey results.
+
+Participants used a variety of techniques to learn and retrieve the icons. Eight participants stated that they relied on icon meaning and attempted to find a story or link to use as the basis for their memory: for example, one participant said "I tried to make a connection between the icon and the word." Ten participants focused on remembering the spatial locations (at different levels of specificity); one stated "[I recalled] the location of an icon if it was in the first, middle, or end [of the toolbar]." Nine participants also commented on the value of shape distinctiveness. For example, a participant said "If I had a good grasp of the icon's shape, it was easier to mentally place it on the screen and find it again." The same participant reported a challenge with the less-distinctive icon sets: "I couldn't properly grasp a unique shape [in Abstract or Abstract+Colour], it became very difficult to mentally recall its position." Finally, six participants also used the colour of icons; one stated "colour added an additional element for memory."
+
+## 5 Study 2 Methods
+
+Study 1 suggested that colour did not improve learnability, and that icons with concrete imagery were substantially easier to learn. In Study 2, we expand on these results and go into more detail on two questions: first, whether colour improves learning when it is the only visual variable (i.e., the icons have no shape differentiability at all); and second, whether it is the differentiability of an icon's shape or the meaningfulness of the image that assists learning.
+
+Study 2 followed a similar method to Study 1, but with two alterations. To reduce the overall time needed for the session, we reduced the number of targets from nine to seven, and the number of blocks from five to four (Study 1 showed clear learning effects within four trial blocks; see Figure 5). All other elements of the study method, procedure, and apparatus were identical to Study 1.
+
+### 5.1 Pre-Study to Choose Number of Colours
+
+Study 1 used 12 colours, a larger number than is recommended for mapping tasks by visual design guidelines. To determine a suitable number of colours, we carried out a small pre-study comparing learning rates with four [19], eight [49], and twelve colours. Similar to Study 1, three interfaces were designed, each having 60 square icons with 5-pixel borders. In each interface, colours were distributed evenly among the icons (none repeated for neighboring icons). Participants carried out four blocks with seven targets in each interface. RM-ANOVA on completion time showed no effect of number of colours, although participant comments and literature [19, 81] generally supported four colours. Therefore, we used four colours for Study 2.
+
+### 5.2 Interfaces
+
+The interfaces in Study 2 used a similar spatial layout of 60 icons as in Study 1, but used four new icon sets to explore our new questions about the effects of colour, shape distinctiveness, and familiarity.
+
+Square. The Square interface's icons were identical squares with a grey 5-pixel border. These icons have no colour differentiability, no shape differentiability, and no meaning. Therefore, the only way that participants could remember the correct icon was by memorizing its spatial location.
+
+Square+Colour. The Square+Colour interface used the same square shapes as Square for all icons, but the icons were coloured with one of red, green, brown, or blue. Colours were evenly distributed across the 60 icons, and no neighboring icons repeated a colour. Colour brightness was adjusted to maximize differentiability following Arthur et al. [4]. With no shape distinctiveness in the icon set, the colours provide additional landmarks for users to remember locations.
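
The "evenly distributed, no repeated neighbouring colours" constraint used here (and in the pre-study interfaces) can be satisfied with a simple diagonal-striping rule. A minimal sketch, in which the 12×5 grid layout and the exact assignment rule are our assumptions rather than the study materials:

```python
# Assign one of four colours to each icon in a 60-icon grid so that no
# horizontally or vertically adjacent icons share a colour and all colours
# appear equally often. The 12 x 5 layout is a hypothetical toolbar shape.
COLOURS = ["red", "green", "brown", "blue"]  # palette named in the paper
COLS, ROWS = 12, 5

def icon_colour(row: int, col: int) -> str:
    # Diagonal striping: moving one step in either direction changes the
    # colour index by 1 (mod 4), so neighbours always differ.
    return COLOURS[(row + col) % len(COLOURS)]

grid = [[icon_colour(r, c) for c in range(COLS)] for r in range(ROWS)]
```

Any rule that shifts the colour index by a nonzero step between neighbouring cells satisfies the adjacency constraint; diagonal striping additionally keeps the distribution exactly even when the row length is a multiple of the palette size.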
+
+
+
+Figure 8: The four icon sets used in Study 2. Targets are outlined in red.
+
+UnfamiliarShape. The UnfamiliarShape interface showed monochrome four-stroke Chinese characters as icons. These icons had high shape distinctiveness (all icons were clearly different shapes). Chinese characters are meaningful, but only if the user is familiar with them - and our participants were chosen such that none knew these characters. Therefore, this icon set had no meaning for our study.
+
+FamiliarShape. The FamiliarShape interface used meaningful icons with imagery of recognizable real-world objects (Figure 8). Shape distinctiveness was medium, because we equalized several other visual variables such as size, line weight, and background shape (a grey circle with a 1-pixel black border).
+
+Icons were created using GIMP. FamiliarShape's images were sourced from material.io and icons8.
+
+### 5.3 Design
+
+Study 2 used a within-participants factorial design with several planned comparisons. There were three factors involved in the comparisons: shape distinctiveness (none or high), colour distinctiveness (monochrome or colour), and familiarity (meaningless or familiar). The comparisons used different sets of conditions, as specified by our four hypotheses:
+
+- H1: Increasing shape distinctiveness will reduce completion time and hover amounts (Square and Square+Colour vs. FamiliarShape and UnfamiliarShape);
+
+- H2: Increasing colour distinctiveness in icons with no shape distinctiveness will reduce completion time and hover amounts (Square vs. Square+Colour);
+
+- H3: Increasing familiarity will reduce completion time and hover amounts (UnfamiliarShape vs. FamiliarShape);
+
+- H4: Even in icons without meaning, increasing shape distinctiveness will reduce completion time and hover amounts (Square vs. UnfamiliarShape).
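
The planned comparisons above map directly onto the factor levels of the four interfaces. A small sketch that encodes this structure (the condition tuples are our paraphrase of Sections 5.2-5.3, using the Design section's none/high shape levels):

```python
# Study 2 conditions encoded as (shape distinctiveness, colour, familiarity).
CONDITIONS = {
    "Square":          ("none", "monochrome", "meaningless"),
    "Square+Colour":   ("none", "colour",     "meaningless"),
    "UnfamiliarShape": ("high", "monochrome", "meaningless"),
    "FamiliarShape":   ("high", "monochrome", "familiar"),
}

# Planned comparisons behind hypotheses H1-H4: (group A, group B).
COMPARISONS = {
    "H1": (["Square", "Square+Colour"], ["FamiliarShape", "UnfamiliarShape"]),
    "H2": (["Square"], ["Square+Colour"]),
    "H3": (["UnfamiliarShape"], ["FamiliarShape"]),
    "H4": (["Square"], ["UnfamiliarShape"]),
}
```

Encoding the design this way makes it easy to check that each pairwise hypothesis isolates exactly one factor.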
+
+### 5.4 Participants
+
+Twenty participants who did not take part in Study 1 (sixteen women, three men, and one non-binary; ages 18-37, mean 24, SD 5) completed the 60-minute study, and each received a $10 honorarium. Participants had normal or corrected-to-normal vision with no reported colour-vision deficiencies, and all were highly familiar with desktop and mobile applications (up to 10 hrs/wk (3), 20 hrs/wk (3), 30 hrs/wk (6), and over 30 hrs/wk (8)). Seven participants reported primarily issuing commands by navigating GUIs with mice, eleven reported using keyboard shortcuts, one reported using both, and one reported using a trackpad. Overall, participants were familiar with keyboard shortcuts (1-5 shortcuts (9), 6-10 shortcuts (6), 11-15 shortcuts (3), 16-20 shortcuts (1), and over 20 shortcuts (1)). None of the participants could read Chinese characters.
+
+## 6 Study 2 Results
+
+### 6.1 Completion Time
+
+Mean trial completion times are summarized in Figure 9. No data was removed due to outlying values. We carried out analyses for each of our four planned comparisons.
+
+First (H1), a 2×4 RM-ANOVA (Shape Distinctiveness × Block) showed effects of both Shape Distinctiveness ($F_{1,19} = 124.22$, $p < 0.0001$, $\eta^2 = 0.67$) and Block ($F_{1,19} = 181.67$, $p < 0.0001$, $\eta^2 = 0.84$) on completion time, as well as an interaction between the two factors ($F_{1,19} = 12.44$, $p < 0.01$, $\eta^2 = 0.09$).
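
The repeated-measures ANOVAs reported in this section partition variance into condition, subject, and residual components before forming the F ratio. A minimal one-way sketch in pure Python (illustrative only, not the authors' analysis code; note that the paper's η² values may follow a different convention than the partial η² computed here):

```python
# Sketch of a one-way repeated-measures ANOVA on synthetic data.
# data[s][c] = score of subject s in condition c (one observation per cell).
def rm_anova_oneway(data):
    n, k = len(data), len(data[0])                 # subjects, conditions
    grand = sum(map(sum, data)) / (n * k)
    cond_means = [sum(row[c] for row in data) / n for c in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_error = ss_total - ss_cond - ss_subj        # subject x condition residual

    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f_ratio = (ss_cond / df_cond) / (ss_error / df_error)
    partial_eta_sq = ss_cond / (ss_cond + ss_error)
    return f_ratio, df_cond, df_error, partial_eta_sq
```

For example, `rm_anova_oneway([[1, 2], [2, 4], [3, 3]])` yields an F ratio of 3.0 with 1 and 2 degrees of freedom; the factorial designs in the paper extend the same decomposition with interaction terms.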
+
+The effect of Shape Distinctiveness, however, must be considered in light of our third planned comparison (H3) of the familiarity of icon imagery - that is, in light of the large performance difference between the two interfaces with distinctive shapes. These interfaces (UnfamiliarShape and FamiliarShape) differ in terms of the familiarity of the icon imagery, and a one-way RM-ANOVA showed a highly significant difference between them ($F_{1,19} = 112.24$, $p < 0.0001$, $\eta^2 = 0.70$). As can be seen in Figure 9, the UnfamiliarShape interface was much closer in learning rate to the two interfaces with square icons, and t-tests showed no significant differences between UnfamiliarShape and Square ($p > 0.1$), but showed that FamiliarShape was significantly different from all three other interfaces (all $p < 0.001$). In our results, therefore, the benefit of shape distinctiveness arose only when those shapes were both differentiable and familiar.
+
+Follow-up tests for Block showed significant differences between each successive pair (all $p < 0.05$). The significant interaction between Shape Distinctiveness and Block can be seen in Figure 9, where the learning curve for FamiliarShape flattens before the other conditions (because users reached expertise far earlier in this condition).
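
These follow-up tests are paired (within-subjects) comparisons: the core statistic is the mean of the per-participant differences divided by its standard error. A sketch (illustrative; a p-value would come from the t distribution with the returned degrees of freedom, and a familywise correction would typically be applied when several pairs are tested):

```python
import math

# Paired-samples t statistic, e.g. for the same participants' mean
# completion times in two successive blocks.
def paired_t(xs, ys):
    assert len(xs) == len(ys)
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)                        # mean / standard error
    return t, n - 1                                      # t, degrees of freedom
```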
+
+Our second planned comparison (H2) investigates the effect of colour distinctiveness in icons that have no shape differentiability (Square vs. Square+Colour). A 2×4 RM-ANOVA (Colour Distinctiveness × Block) showed no effect of Colour Distinctiveness ($F_{1,19} = 1.62$, $p = 0.2$), and no interaction with Block ($F_{1,19} = 0.56$, $p = 0.46$).
+
+Our fourth planned comparison (H4) looked at whether shape differentiability alone (with meaningless icons) would improve learning. We compared the Square and UnfamiliarShape conditions using a one-way RM-ANOVA, but found no difference ($F_{1,19} = 0.27$, $p = 0.61$).
+
+
+
+Figure 9: Mean trial completion time, by interface (±s.e.).
+
+### 6.2 Hovers
+
+Similar to Study 1, the results for mean hovers in Study 2 closely mirror the completion time results. RM-ANOVA (Shape Distinctiveness × Block) showed effects on hovers of Shape Distinctiveness ($F_{1,19} = 107.02$, $p < 0.0001$, $\eta^2 = 0.73$) and Block ($F_{1,19} = 290.45$, $p < 0.0001$, $\eta^2 = 0.84$), as well as an interaction between the two factors ($F_{1,19} = 24.4$, $p < 0.0001$, $\eta^2 = 0.18$) (H1). Follow-up tests for Block showed significant differences (all $p < 0.05$) between each successive pair.
+
+As with the completion time results, the effect of Shape Distinctiveness appears to be largely due to the substantial effect of familiarity: in our third planned comparison, a one-way RM-ANOVA also showed a significant effect between UnfamiliarShape and FamiliarShape ($F_{1,19} = 102.17$, $p < 0.0001$, $\eta^2 = 0.73$). T-tests also showed no significant difference between UnfamiliarShape and Square ($p > 0.1$), but showed that FamiliarShape was significantly different from all three other interfaces (all $p < 0.001$). Follow-up tests for Block showed significant differences between every successive pair except blocks 3 and 4.
+
+In our second planned comparison (H2), a 2×4 RM-ANOVA found no effect of Colour Distinctiveness ($p > 0.15$) and no interaction with Block ($F_{1,19} = 0.15$, $p = 0.71$, $\eta^2 = 0.002$).
+
+In our fourth planned comparison (H4), a one-way RM-ANOVA found no effect of shape differentiability ($F_{1,19} = 0.27$, $p = 0.61$, $\eta^2 = 0.006$).
+
+
+
+Figure 10: Mean hover amounts, by interface (±s.e.).
+
+### 6.3 Errors
+
+We measured errors as the number of incorrect clicks before choosing the correct item. Data from one participant (who clicked instead of hovered) was removed. For all other participants, errors were very low, with an overall average of 0.037 errors per trial. RM-ANOVA showed no main effect of any of our main factors on errors (Shape Distinctiveness: $F_{1,19} = 1.42$, $p = 0.24$; Block: $F_{1,19} = 0.39$, $p = 0.75$; Colour: $F_{1,19} = 1.67$, $p = 0.21$; Familiarity: $F_{1,19} = 2.47$, $p = 0.13$).
+
+### 6.4 Subjective Responses and Comments
+
+NASA-TLX responses were analyzed after performing an Aligned Rank Transformation [82]. Incomplete data from two participants was removed. The mean effort scores shown in Figure 11 mirror the trend in the performance data, in which FamiliarShape outperformed the other conditions on all measures. RM-ANOVA showed significant effects for all subjective measures. Follow-up tests showed significant differences (all $p < 0.05$) between FamiliarShape and every other condition in mental effort, perceived success, effort, and annoyance. Overall, the FamiliarShape icons were greatly preferred; results are summarized in Table 3.
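
The Aligned Rank Transform [82] ranks responses only after aligning them for the effect of interest. A minimal sketch of the align-then-rank idea for one within-subjects factor (our simplification: only per-participant offsets are removed before ranking; the full factorial procedure follows Wobbrock et al.):

```python
# Align-then-rank sketch for one within-subjects factor.
# ratings[s][c] = Likert response of subject s in condition c.
def aligned_ranks(ratings):
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    # Remove each participant's mean offset so the subsequent ANOVA on
    # ranks targets the condition effect rather than subject differences.
    aligned = [[x - sum(row) / k + grand for x in row] for row in ratings]
    flat = sorted(v for row in aligned for v in row)
    def midrank(v):  # average rank for tied values
        i = flat.index(v)
        return i + 1 + (flat.count(v) - 1) / 2
    return [[midrank(v) for v in row] for row in aligned]
```

A parametric ANOVA is then run on the returned ranks; for factorial designs, the alignment step is repeated separately for each main effect and interaction.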
+
+
+
+Figure 11: Mean NASA-TLX question responses for Study 2 (±s.e.).
+
+Participants' comments again echoed the performance results. Three participants stated that the uniformity in the Square condition was challenging; one said, "it was really hard since everything looked the same." Four participants also noted difficulties when attempting to use the colour information. For example, one participant stated "I tried to use colour [in Square+Colour] but it didn't work super well." The realistic representation of targets in FamiliarShape was found to be beneficial to eight participants (e.g., "remembering the picture of each object, and my brain just brought me to where it was"). Finally, six participants stated that the distinct shapes of the UnfamiliarShape condition provided a connection that helped them to remember targets: for example, one participant reported that one target's icon "looked like a bent cross", making it easier to remember the location.
+
+| Interface | Easiest | Fastest | Fewest Errors | Preference |
+| --- | --- | --- | --- | --- |
+| Square | 1 | 0 | 0 | 1 |
+| Square+Colour | 0 | 1 | 0 | 0 |
+| UnfamiliarShape | 0 | 0 | 1 | 0 |
+| FamiliarShape | 19 | 19 | 19 | 19 |
+
+Table 3: Summary of Study 2 preference survey results.
+
+## 7 Discussion
+
+Our two studies provided the following findings:
+
+- Colour distinctiveness did not improve learning in either study.
+
+- Adding multiple distinctive variables (colour and shape) also did not improve learning.
+
+- Shape distinctiveness when coupled with meaning substantially improved learning, but shape distinctiveness on its own was not effective.
+
+- Participant strategies suggested that they primarily tried to search by meaning rather than by visual characteristics.
+
+In the following paragraphs, we consider explanations for these main results, limitations to our findings, and directions for future research.
+
+### 7.1 Explanation for Results
+
+#### 7.1.1 Colour distinctiveness does not improve icon learning
+
+Colour distinctiveness did not reduce completion time or hover amounts in either study, even when it was the only visual variable available (Study 2). One main reason for this finding is that many participants apparently did not use the colour cues, and instead searched only by meaning and spatial location - participants often reported creating and connecting stories to icons to remember them rather than using colour as a visual landmark. However, participant comments suggest that at least a few people attempted to use colour information - some participants would search by colour first in target selection, allowing them to narrow down the target set (e.g., searching for red icons with two crossing lines reduces the number of icons that must be searched). But in many cases, attempts to use colour appeared to be unsuccessful. One reason for colour's ineffectiveness may be that the colour cues interfered with one another, reducing the value of colour as a landmark. That is, because all icons were coloured, remembering only that an icon was "beside the blue one" did not uniquely identify a target (because there were several blue icons) [38, 41]. It is, however, possible that if there were fewer coloured icons, colour might be a more effective landmark - in studies of artificial landmarks, for example, having a few obvious landmarks among otherwise grey-coloured items significantly improved performance in a similar selection task [78]. It is also possible that colour interfered with participants' ability to see differences in the abstract shapes used in Study 1; that is, the colours used in the Abstract+Colour condition may have reduced contrast and thus reduced any potential effect of shape distinctiveness [44, 55].
+
+#### 7.1.2 Shape distinctiveness was only effective with meaning
+
+When icons had even a contextual level of meaning, we observed that participants would visually search using meaning as a memory cue; and when meaning was available, participants tended to disregard the landmarks created by differences in the icons' visual presentation. In icons with meaningless imagery, participants needed to rely more on absolute spatial memory - and without pre-existing knowledge of the icon mappings, participants had to find a prompted icon by laborious visual search (hovering one by one). In addition to improving performance in the early stages of learning, meaningful icons showed a learning curve similar in shape to the other conditions, implying that they too eventually allowed users to switch to location-based retrieval. Our findings confirm previous guidance about designing icons with clear meaning to help user navigation of an interface (e.g., [7, 25, 39, 40, 44, 57]), although our results extend this guidance to the value of meaning for longer-term learning of an interface as well. In contrast, Study 2 showed that shape distinctiveness without meaning did not improve learning, and the reasons for this condition's poor performance are similar to those for the colour conditions: namely, interference between similar-looking shapes may have prevented a shape's differentiability from being useful as a landmark. As with colour, shape may still be useful as a landmark if there are fewer shapes that have more noticeable differences.
+
+### 7.2 Design Implications and Generalizing the Results
+
+Our results suggest that user learning of an interface is not hindered by the lack of visual distinctiveness in 'flat' and subtle icon designs, and also clearly show the value of using concrete and familiar imagery. Therefore, designers can use flat and subtle icon styles without compromising memorability, as long as meaning is clearly conveyed. We note, however, that there are other potential factors in the use of flat icons that should be considered in addition to learning (e.g., whether users can tell that an on-screen object is in fact a clickable icon). Our results also raise the question of what designers should do in situations where they must create icons for commands or concepts that do not have obvious visual representations. The frequency with which we saw the "memory hook" strategy in our studies (i.e., looking for a connection between the image of the icon and the associated command) suggests that concrete imagery - even if not a direct representation of the underlying concept - may enable learning better than simply using distinctive visual variables. As suggested above, however, it may be that the value of colour or shape distinctiveness as landmarks could be improved, a topic we will consider in future studies.
+
+### 7.3 Limitations and Future Work
+
+There are several ways in which our studies could not exactly replicate various factors of real-world interface learning, and these suggest possibilities for future research. First, we plan to test the idea mentioned above that colour and shape differentiability could be more effective if there are fewer items in the set that are different, thus providing a better anchor for spatial learning. One implementation would involve strategically placed icons that are designed to catch the user's attention (using colour or shape) within the toolbar; these icons could anchor memory and potentially improve learning.
+
+Second, a limitation in our studies was the short time available for learning - users typically learn an interface in a much slower fashion, and in the context of real tasks. In addition, we tested only immediate recall, not retention after a time period, and we did not test transfer from the training task back to a real-world task with the interface. We plan retention and transfer phases in our future studies.
+
+## 8 CONCLUSION
+
+Icons are a ubiquitous mechanism for representing commands in an interface [7], and learning the icons in an interface is a major part of becoming an expert with that system. Despite the prevalence of icons, toolbars, and ribbons, however, little is known about the effects of icon design on learnability. We carried out two studies to test whether differentiability in two visual variables - colour and shape - would improve learning of icons in a 60-item toolbar. Our results showed that our manipulations of these variables did not have significant effects on learning or performance, and that the concreteness and meaning of the icon's imagery was far more effective in helping users learn and recall targets. Our studies provide new empirical evidence for existing guidelines that suggest icons that are contextual or familiar will be more learnable and easier to navigate. This work increases understanding of how users learn new icons and the relative roles that visual variables and cognitive factors play in users' spatial learning and expertise development.
+
+## ACKNOWLEDGMENTS
+
+We would like to thank our participants and the anonymous reviewers for their feedback. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
+
+## REFERENCES
+
+[1] Adobe. Adobe photoshop, 2020.
+
+[2] J. Alexander, A. Cockburn, S. Fitchett, C. Gutwin, and S. Greenberg. Revisiting read wear: analysis, design, and evaluation of a footprints scrollbar. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems - CHI '09, pp. 1665-1674. ACM, New York, NY, USA, 2009. doi: 10.1145/1518701.1518957
+
+[3] J. R. Anderson. Learning and memory: An integrated approach, 2nd ed. John Wiley & Sons Inc, Hoboken, NJ, US, 2000.
+
+[4] P. Arthur and R. Passini. Wayfinding: people, signs, and architecture. 1992.
+
+[5] A. D. Baddeley. Human memory: theory and practice. Lawrence Erlbaum, Hove, England, 1990.
+
+[6] A. D. Baddeley. Essentials of human memory. Psychology Press, Hove, England, 1999.
+
+[7] P. Barr, J. Noble, and R. Biddle. Icons r icons. In Proceedings of the Fourth Australasian User Interface Conference on User Interfaces 2003 - Volume 18, AUIC '03, p. 25-32. Australian Computer Society, Inc., AUS, 2003.
+
+[8] S. Bateman, R. L. Mandryk, C. Gutwin, A. Genest, D. McDine, and C. Brooks. Useful junk? the effects of visual embellishment on comprehension and memorability of charts. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2573-2582, 2010.
+
+[9] S. Blankenberger and K. Hahn. Effects of icon design on human-computer interaction. Int. J. Man-Mach. Stud., 35(3):363-377, Sept. 1991. doi: 10.1016/S0020-7373(05)80133-6
+
+[10] M. M. Blattner, D. A. Sumikawa, and R. M. Greenberg. Earcons and icons: Their structure and common design principles. Hum.-Comput. Interact., 4(1):11-44, Mar. 1989. doi: 10.1207/s15327051hci0401_1
+
+[11] D. Burlinson, K. Subramanian, and P. Goolkasian. Open vs. closed shapes: New perceptual categories? IEEE transactions on visualization and computer graphics, 24(1):574-583, 2017.
+
+[12] R. Carter. Visual search with color. Journal of experimental psychology. Human perception and performance, 8(1):127-136, February 1982. doi: 10.1037//0096-1523.8.1.127
+
+[13] W. G. Chase. Visual information processing. In Handbook of perception and human performance, Vol. 2: Cognitive processes and performance, pp. 1-71. John Wiley & Sons, Oxford, England, 1986.
+
+[14] R. E. Christ. Review and analysis of color coding research for visual displays. Human Factors, 17(6):542-570, 1975. doi: 10.1177/001872087501700602
+
+[15] A. Cockburn, C. Gutwin, and J. Alexander. Faster document navigation with space-filling thumbnails. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, p. 1-10. ACM, New York, NY, USA, 2006. doi: 10.1145/1124772.1124774
+
+[16] A. Cockburn, C. Gutwin, J. Scarr, and S. Malacria. Supporting novice to expert transitions in user interfaces. ACM Comput. Surv., 47(2), Nov. 2014. doi: 10.1145/2659796
+
+[17] A. Cockburn, P. O. Kristensson, J. Alexander, and S. Zhai. Hard lessons: Effort-inducing interfaces benefit spatial learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, p. 1571-1580. ACM, New York, NY, USA, 2007.
+
+[18] J. Cohen. Eta-squared and partial eta-squared in communication science. Human Communication Research, 28(56):473-490, 1973.
+
+[19] N. Cowan. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and brain sciences, 24(1):87-114, 2001.
+
+[20] F. I. Craik and R. S. Lockhart. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6):671 - 684, 1972. doi: 10.1016/S0022-5371(72)80001-X
+
+[21] J. Deese and R. A. Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material. Journal of experimental psychology, 54 3:180-7, 1957.
+
+[22] A. Dix, J. Finlay, G. Abowd, and R. Beale. Human-Computer Interaction. Prentice-Hall, Inc., USA, 1997.
+
+[23] B. D. Ehret. Learning where to look: Location learning in graphical user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, p. 211-218. ACM, New York, NY, USA, 2002. doi: 10.1145/503376.503414
+
+[24] G. W. Evans. Environmental cognition. Psychological bulletin, 88(2):259, 1980.
+
+[25] J. Ferreira, J. Noble, and R. Biddle. A case for iconic icons. In Proceedings of the 7th Australasian User interface conference-Volume 50, pp. 97-100. Australian Computer Society, Inc., 2006.
+
+[26] P. M. Fitts and M. I. Posner. Human performance. 1967.
+
+[27] D. Gandy. Visual consistency and setup., 2015.
+
+[28] B. Gao, B. Kim, J.-I. Kim, and H. Kim. Amphitheater layout with egocentric distance-based item sizing and landmarks for browsing in virtual reality. International Journal of Human-Computer Interaction, 35(10):831-845, 2019.
+
+[29] B. Gao, H. Kim, B. Kim, and J.-I. Kim. Artificial landmarks to facilitate spatial learning and recalling for curved visual wall layout in virtual reality. In 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), pp. 475-482. IEEE, 2018.
+
+[30] V. Gaur, M. S. Uddin, and C. Gutwin. Multiplexing spatial memory: increasing the capacity of fasttap menus with multiple tabs. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-13, 2018.
+
+[31] T. Grossman, G. Fitzmaurice, and R. Attar. A survey of software learnability: metrics, methodologies and guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 649-658, 2009.
+
+[32] C. Gutwin and A. Cockburn. Improving list revisitation with listmaps. In Proceedings of the working conference on Advanced visual interfaces, pp. 396-403, 2006.
+
+[33] C. Gutwin, A. Cockburn, and B. Lafreniere. Testing the rehearsal hypothesis with two fasttap interfaces. In Proceedings of the 41st Graphics Interface Conference, pp. 223-231, 2015.
+
+[34] C. Gutwin, A. Cockburn, J. Scarr, S. Malacria, and S. C. Olson. Faster command selection on tablets with fasttap. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2617-2626, 2014.
+
+[35] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology, 52:139-183, 1988. doi: 10.1016/S0166-4115(08)62386-9
+
+[36] L. Hasher and R. T. Zacks. Automatic and effortful processes in memory. Journal of experimental psychology: General, 108(3):356, 1979.
+
+[37] C. Healey and J. Enns. Attention and visual memory in visualization and computer graphics. IEEE transactions on visualization and computer graphics, 18(7):1170-1188, 2011.
+
+[38] W. E. Hick. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1):11-26, Mar. 1952. doi: 10.1080/17470215208416600
+
+[39] W. Horton. Designing icons and visual symbols. In Conference companion on Human factors in computing systems, pp. 371-372, 1996.
+
+[40] S.-C. Huang, R. G. Bias, and D. Schnyer. How are icons processed by the brain? neuroimaging measures of four types of visual stimuli used in information systems. Journal of the association for information science and technology, 66(4):702-720, 2015.
+
+[41] R. Hyman. Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45(3):188-196, 1953. doi: 10.1037/h0056940
+
+[42] R. P. Kessels, L. J. Kappelle, E. H. de Haan, and A. Postma. Lateralization of spatial-memory processes: evidence on spatial span, maze learning, and memory for object locations. Neuropsychologia, 40(8):1465-1473, 2002.
+
+[43] C. Koehn. Iconic icons: Designing the world of windows, Feb 2020.
+
+[44] S. H. Kurniawan. A rule of thumb of icons' visual distinctiveness. In Proceedings on the 2000 conference on Universal Usability, pp. 159-160, 2000.
+
+[45] B. Lafreniere, C. Gutwin, A. Cockburn, and T. Grossman. Faster command selection on touchscreen watches. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4663-4674, 2016.
+
+[46] K. Lynch. The image of the city. MIT Press, 1960.
+
+[47] S. J. Mcdougall, M. B. Curry, and O. de Bruijn. Measuring symbol and icon characteristics: Norms for concreteness, complexity, meaningfulness, familiarity, and semantic distance for 239 symbols. Behavior Research Methods, Instruments, & Computers, 31(3):487-519, 1999.
+
+[48] Microsoft. Microsoft word, 2020.
+
+[49] G. A. Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2):81, 1956.
+
+[50] E. S. Mollashahi, M. S. Uddin, and C. Gutwin. Improving revisitation in long documents with two-level artificial-landmark scrollbars. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces, pp. 1-9, 2018.
+
+[51] J. M. Moon and W.-T. Fu. Where is my stuff? augmenting finding and re-finding information by spatial locations and icon luminance. In International Conference on Foundations of Augmented Cognition, pp. 58-67. Springer, 2009.
+
+[52] J. Moyes and P. W. Jordan. Icon design and its effect on guessability, learnability, and experienced user performance. People and computers, (8):49-60, 1993.
+
+[53] A. W. Ng and A. H. Chan. What makes an icon effective? In AIP Conference Proceedings, vol. 1089, pp. 104-114. American Institute of Physics, 2009.
+
+[54] J. Nielsen. Usability Engineering. Morgan Kaufmann Publishers Inc., 1993.
+
+[55] R. Pal, J. Mukherjee, and P. Mitra. How do warm colors affect visual attention? In Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing, pp. 1-8, 2012.
+
+[56] A. Postma and E. H. De Haan. What was where? Memory for object locations. The Quarterly Journal of Experimental Psychology Section A, 49(1):178-199, 1996.
+
+[57] K. Satcharoen. Icon concreteness effect on selection speed and accuracy. In Proceedings of the 2018 10th International Conference on Computer and Automation Engineering, pp. 107-110, 2018.
+
+[58] J. Scarr, A. Cockburn, C. Gutwin, and A. Bunt. Improving command selection with commandmaps. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 257-266, 2012.
+
+[59] J. Scarr, A. Cockburn, C. Gutwin, A. Bunt, and J. E. Cechanowicz. The usability of commandmaps in realistic tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2241-2250, 2014.
+
+[60] J. Scarr, A. Cockburn, C. Gutwin, and S. Malacria. Testing the robustness and performance of spatially consistent interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3139-3148, 2013.
+
+[61] J. Scarr, A. Cockburn, C. Gutwin, and P. Quinn. Dips and ceilings: understanding and supporting transitions to expertise in user interfaces. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2741-2750, 2011.
+
+[62] J. L. Scarr. Understanding and exploiting spatial memory in the design of efficient command selection interfaces. 2014.
+
+[63] K. Schramm, C. Gutwin, and A. Cockburn. Supporting Transitions to Expertise in Hidden Toolbars. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 4687-4698. ACM, New York, NY, USA, 2016. doi: 10.1145/2858036.2858412
+
+[64] B. Shneiderman. Designing the User Interface (2nd Ed.): Strategies for Effective Human-Computer Interaction. Addison-Wesley Longman Publishing Co., Inc., USA, 1992.
+
+[65] A. W. Siegel and S. H. White. The development of spatial representations of large-scale environments. In Advances in child development and behavior, vol. 10, pp. 9-55. Elsevier, 1975.
+
+[66] J. M. Silvennoinen and J. P. Jokinen. Aesthetic appeal and visual usability in four icon design eras. In Proceedings of the Conference on Human Factors in Computing Systems, pp. 4390-4400, 2016.
+
+[67] S. Smart and D. A. Szafir. Measuring the separability of shape, size, and color in scatterplots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2019.
+
+[68] S. L. Smith. Color coding and visual search. Journal of Experimental Psychology, 64(5):434, 1962.
+
+[69] S. L. Smith and D. W. Thomas. Color versus shape coding in information displays. Journal of Applied Psychology, 48(3):137, 1964.
+
+[70] A. Sowwards. A collection of minimal icons for subtle design work, 2014.
+
+[71] The GIMP Development Team. Gimp, 2020.
+
+[72] P. W. Thorndyke and S. E. Goldin. Spatial learning and reasoning skill. In Spatial orientation, pp. 195-217. Springer, 1983.
+
+[73] P. W. Thorndyke and B. Hayes-Roth. Differences in spatial knowledge acquired from maps and navigation. Technical report, RAND CORP SANTA MONICA CA, 1980.
+
+[74] A. M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive psychology, 12(1):97-136, 1980.
+
+[75] M. S. Uddin. Improving Multi-Touch Interactions Using Hands as Landmarks. MSc thesis, University of Saskatchewan, 2016.
+
+[76] M. S. Uddin. Use of Landmarks to Design Large and Efficient Command Interfaces. In Proceedings of the ACM Companion on Interactive Surfaces and Spaces, pp. 13-17. ACM, New York, NY, USA, 2016.
+
+[77] M. S. Uddin and C. Gutwin. Rapid command selection on multi-touch tablets with single-handed handmark menus. In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, pp. 205-214, 2016.
+
+[78] M. S. Uddin, C. Gutwin, and A. Cockburn. The effects of artificial landmarks on learning and performance in spatial-memory interfaces. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3843-3855, 2017.
+
+[79] M. S. Uddin, C. Gutwin, and A. Goguey. Using Artificial Landmarks to Improve Revisitation Performance and Spatial Learning in Linear Control Widgets. In Proceedings of ACM symposium on Spatial User Interaction - SUI 2017, pp. 48-57. ACM, Brighton, United Kingdom, 2017. doi: 10.1145/3131277.3132184
+
+[80] M. S. Uddin, C. Gutwin, and B. Lafreniere. Handmark menus: Rapid command selection and large command sets on multi-touch displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 5836-5848, 2016.
+
+[81] L. H. Williams and T. Drew. Working memory capacity predicts search accuracy for novel as well as repeated targets. Visual Cognition, 26(6):463-474, 2018.
+
+[82] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins. The Aligned Rank Transform for nonparametric factorial analyses using only ANOVA procedures. In Conference on Human Factors in Computing Systems, pp. 143-146. ACM, New York, NY, USA, 2011.
+
+[83] S. Zhai and P.-O. Kristensson. Shorthand writing on stylus keyboard. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 97-104, 2003.
+
+[84] J. Zheng, X. Bi, K. Li, Y. Li, and S. Zhai. M3 gesture menu: Design and experimental analyses of marking menus for touchscreen mobile interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2018.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9580c35ec7789b79b1ae6ad5b6e452b8d9f1e7e2
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ksaOFGFyAkf/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,441 @@
+§ EFFECTS OF VISUAL DISTINCTIVENESS ON LEARNING AND RETRIEVAL IN ICON TOOLBARS
+
+Febi Chajadi* Md. Sami Uddin† Carl Gutwin‡
+
+University of Saskatchewan
+
+§ ABSTRACT
+
+Learnability is important in graphical interfaces because it supports the user's transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable - but current "flat" and "subtle" designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. Little is known, however, about the effects of the visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons' representations.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-User Interface Design
+
+§ 1 INTRODUCTION
+
+Learnability is important in graphical user interfaces because it is an important part of a user's transition from novice to expert. Many kinds of learning can occur with an interface, but for WIMP interfaces (systems with windows, icons, menus, and pointers), one main way that users improve their performance is by learning the commands associated with icons in toolbars and ribbons, and where those icons are located. Therefore, a goal in the visual design of icons is to help the user remember the icon and the underlying command. However, other goals in icon design may interfere with an icon's ability to communicate its intended meaning to the user. One of these goals is the desire for visual consistency and cohesiveness - the idea that all of the icons in an interface should repeat the same visual variables (such as colour, contrast, weight, shape, angle, and size) in order to tie together the visual elements of the interface and give the system a recognizable style. For example, Figure 1 shows icons presented as good examples of visual consistency in icon design. These icons also illustrate a second design goal that is common in many commercial systems - subtle and "flat" icon design, in which icons are monochrome and have relatively low contrast.
+
+Although these types of icons are popular, the similarity across several visual variables also reduces distinctiveness, which could hinder visibility, learnability, and memorability (in the limit, if all of a UI's icons were identical grey rectangles, they would be difficult to remember). More generally, it seems likely that icon learnability could be affected by the visual attributes of the icons. This issue has been raised by some users who have noticed the potential problems of "flat" icon design: for example, forum posts often complain that icons are too similar (Figure 2). Recently, Microsoft has moved away from flat icons to more colourful and non-uniform imagery reminiscent of earlier guidance on graphical design [43].
+
+
+Figure 1: Two examples from web posts on "visually cohesive icon design" [27] and "minimalist icon design" [70]. Visual variables such as colour, contrast, weight, and size are repeated across the entire set.
+
+
+Figure 2: Blog post: "1995, SVGA: colourful and distinguishable icons. 2019, 4K and millions of colours: flat monochrome icons"
+
+Previous research into visual variables suggests an increase in both noticeability and memorability when multimedia objects employ variations in colour and shape [8, 67, 74]. However, there are limits to how shape may contribute to better learnability. For example, an icon that cannot clearly represent its command (e.g., icons for abstract or complex commands such as "Analyse" or "Encoding") may cause confusion and impede learning despite the benefits of using representative icons [25]. Designers need to know whether the visual properties of icons can affect learnability and memorability. Although there is information about the usability of certain visual properties (e.g., suggested contrast ratios between text and background [4]), little is known about effects on learning.
+
+*e-mail: febi.chajadi@usask.ca
+
+†e-mail: sami.uddin@usask.ca
+
+‡e-mail: gutwin@cs.usask.ca
+
+Graphics Interface Conference 2020
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+To address this gap, we carried out two studies to compare how quickly people could learn and select icons with varying degrees of visual distinctiveness. Our first study tested the effects of meaningfulness in the icon's representation (comparing abstract and concrete imagery) and the effects of colour (comparing monochrome icons to icons of different colours). We also tested a fifth icon set that had substantial variation in both shape and colour. Participants were asked to find and select target commands from a toolbar with 60 icons, repeated over five blocks. The results of our first study were:
+
+ * Icons with concrete imagery were learned much faster than abstract icons;
+
+ * Adding colour to either the concrete or the abstract icon set did not lead to improved learning;
+
+ * Varying both shape and colour did not improve learnability.
+
+Our second study tested the effects of three factors in a series of planned comparisons. We assessed meaningfulness (icons were either identical squares or concrete images), familiarity (concrete icons were either unfamiliar shapes or familiar images), and colour (squares could be either monochrome or coloured). Results from our second study were:
+
+ * The addition of colour to the identical grey squares did not improve learning;
+
+ * Unfamiliar shapes (Chinese characters for users with no familiarity with them) were much harder to learn than familiar shapes (everyday objects);
+
+ * There was no difference in learning between the Chinese characters and the grey squares, even though the characters were far more differentiable in terms of shape.
+
+Our studies provide new information about how visual distinctiveness affects icon learning and retrieval. Surprisingly, the low visual distinctiveness of "flat" and subtle icon designs does not appear to make them more difficult to find or remember. Instead, having a concrete visual representation in the icon was shown to be extremely valuable for learning the icons. Based on our participants' comments, we suggest that this property better allows users to create a "memory hook" for the association between the icon and the command. Our studies contribute to a better understanding of how visual variables affect the process of learning icon locations, and provide a clear suggestion to designers that concrete images are likely to be more important than other forms of visual distinctiveness.
+
+§ 2 RELATED WORK
+
+§ 2.1 VISUAL DISTINCTIVENESS IN GRAPHICAL ICONS
+
+Since the invention of GUIs, researchers have studied icon design and have identified features that can be divided into two broad categories: visual and cognitive [53]. Visual features of icons include colour, size and shape. Colour is one of the most prominent visual traits that can easily separate an icon from another. Despite our ability to see a huge number of colours, however, most people can only differentiate and remember about five to eight colours in a visual workspace [68]. One of the main uses for colour in interactive systems is in highlighting items, e.g., when searching [12,14]. Size is another visible feature that makes icons distinguishable. Although a common use of the size feature is to make an interface cohesive (e.g., similar-sized icons used throughout a GUI; Figure 1), changing the size of an icon can make it distinct (e.g., MS Office [48] uses multiple sizes of icons). Besides colour and size, the shape of an icon is a strong visual factor that represents the underlying meaning [53]. Shapes can make icons more easily discernible as people can identify far more shapes than colours [69].
+
+Cognitive features of icons are related to people's cognition and memory. Researchers mainly focus on five subjective aspects of icons: familiarity, concreteness, complexity, meaningfulness, and semantic distance [47] (see Ng et al.'s [53] and Moyes et al.'s [52] reviews for comparative summaries). Among these features, however, the success of making an icon distinguishable depends on how naturally it can depict its underlying function [10]. Although these visual variables have been studied extensively, less is known about how they contribute to learning and retrieval of icons in GUIs.
+
+§ 2.2 PSYCHOLOGY OF LEARNING AND RETRIEVAL
+
+Learning and recall are two natural yet powerful human abilities. Researchers in psychology have extensively studied human memory [5, 6, 13, 20, 21] and explored how these learning and retrieval skills are developed [3, 56, 72]. Siegel et al. [65] suggested that a combination of landmark, route, and survey knowledge contributes to the development of spatial learning and retrieval skills. People naturally begin learning objects in a new area through visual inspection [36] - which forms landmark knowledge [24]. Once familiar with the area, people build route knowledge [73] and start retrieving objects from already learned locations. With further experience, people acquire survey knowledge - where they can recall objects solely from memory, without requiring any visual search.
+
+Anderson [3] and Fitts et al. [26] suggested that learning and retrieving occur in three stages: cognitive, associative and autonomous. These stages of skill acquisition can be observed in GUIs. First, in the cognitive stage, users learn the contents of an interface and visually look for commands. Second, in the associative stage, users already know the contents of the interface and begin to remember the commands in the UI. As a result, they can reach those locations more quickly. However, in the associative stage, users still perform local visual search after reaching the vicinity of a command. Last, in the autonomous stage, users can recall a command's location from memory and visit it, without searching for it visually.
+
+§ 2.3 FACILITATING LEARNING AND RETRIEVAL OF ICONS IN GUIS
+
+Although learnability (the idea of making an interface easily learnable and memorable) is frequently considered a vital part of usability [22, 54, 64], learnability is difficult to predict and measure [31, 61]. In order to facilitate the learning of icons and commands in GUIs, researchers have followed two main strategies: spatial interfaces and landmarks.
+
+Spatial interfaces. Spatial memory [42, 56] is a human cognitive ability responsible for learning and remembering the locations of objects and places. Researchers have tried to exploit it by laying out interfaces in ways that are spatially stable [16, 23, 62]. For example, Scarr et al.'s [58, 60] CommandMap showed a spatially stable icon arrangement on desktops, yielding better learning and recall of icons, even for real tasks [59], because users could leverage spatial memory [17, 58, 78]. Similarly, Gutwin et al. [32] and Cockburn et al. [15] showed that a stable layout composed of all commands can increase recall efficiency compared to hierarchical ribbons or menus. As on desktops, spatially stable icons can improve learning and recall on multi-touch tablets [30, 33, 34, 77], smartwatches [45], smartphones [83, 84], digital tabletops [80], and even in VR [28].
+
+Landmarks in GUIs. Landmarks are easily identifiable objects and features that are different from their surroundings [46] and which can act as anchors for performing spatial activities such as navigation and object learning. Similar to their benefits in real life, landmarks have exhibited potential in GUIs [2, 75, 76]. Researchers have exploited landmarks that are already present in the GUI environment: for example, the corners of a screen [34, 80] or the bezel of a device [63] can provide strong landmarks for icons near those locations. However, these natural landmarks often become useless in large interfaces (e.g., the middle area of a large screen) or a GUI with a large number of icons, because no landmark is present near those icons. In such cases, 'artificial landmarks' [29, 78] can aid learning and recall. Studies have suggested that coloured blocks [2, 78], images as the background of a menu [78], and meaningful yet abstract icons [50, 79] can serve as landmarks in GUIs to benefit spatial memory development.
+
+
+Figure 3: Example target words in each interface.
+
+Apart from spatial memory and landmarks, researchers have also studied features such as luminance [51] and the 'articulatory distance' [9] of icons for learnability. Others found that icons representing their accurate underlying meaning are beneficial for learning and recall [7, 25, 39, 44, 57, 66]. Studies have shown that abstract and ambiguous icons demand more cognitive processing to recognize [40] and often hinder users from quickly learning them. However, the primary question - how visual variables of icons impact learning and recall - remains unanswered.
+
+§ 2.4 VISUAL DISTINCTIVENESS OF ICONS
+
+We carried out two studies to investigate the effects of visual distinctiveness of icons on learnability. We manipulated two visual variables - shape and colour - but because the shape of an icon can also be representational, we additionally consider the cognitive variable of meaning [10, 53].
+
+Meaning: The meaning of an icon refers to the concept or idea that the icon's image conveys. Icons in our studies varied by the types of underlying meaning they possess:
+
+ * Meaningless: Icon images have no connection to real-world objects or to their underlying commands (e.g., a grey square for the command "Settings").
+
+ * Contextual: Icon images are representational of underlying commands, but require interpretation if unfamiliar (e.g., a summation symbol for the command "Formula").
+
+ * Familiar: Icon images are pictorial and match their underlying command (e.g., an image of a calculator for the command "Calculator").
+
+Shape Distinctiveness: Shape distinctiveness, that is, separability of shapes, is difficult to define precisely. Prior work has explored aspects of this concept: Julész [37] identified shape features that early visual systems detect, Burlinson et al. [11] proposed that open or closed shapes influence perceptual processing, and Smart et al. [67] investigated the perception of filled, unfilled and open shapes in scatterplots. In this work, we define levels of shape separability with respect to trends observed in modern icon design.
+
+ * None: Icons have no differences in shape (e.g. icons are all identical circles).
+
+ * Medium: Icons use different shapes, but are thematically uniform for visual consistency (similar sizes, weights, line styles, and borders).
+
+ * High: Icon shapes are distinctly different from one another.
+
+Colour Distinctiveness: We consider only basic levels of colour distinctiveness, because people's ability to distinguish colours is much lower than the ability to distinguish shapes [69].
+
+ * None (Monochrome): All icons use only a single colour.
+
+ * Medium (Colour): Different icons use different colours.
+
+ * High (Multi-colour): Icons use several different colours.
+
+Both the visual and cognitive variables operate in the context of the spatial arrangement of the icons - the two studies reported in the following sections investigate how different combinations of the levels and factors above (summarized in Table 1) affect users' ability to learn the spatial location of the icon corresponding to each command.
+
+§ 3 STUDY 1 METHODS
+
+§ 3.1 INTERFACES
+
+We developed five custom web-based desktop icon selection interfaces, each consisting of sixty icons (44 px in size) arranged in three equal rows and presented in a standard ribbon-toolbar structure. All icon toolbars appeared at the top of the interface and allowed two types of mouse-based interaction: selection and hover. Names of the icons were not shown in the UI, but could be seen in a tooltip after hovering the mouse over an icon for 300 ms. Icons were created using the GIMP image editor, using source images from freely-available icon sets such as material.io and icons8. We used five experimental interfaces in Study 1, described below and shown in Figure 4.
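The 300 ms hover-to-tooltip behaviour can be sketched as a small dwell timer. This is an illustrative reconstruction, not the published study code; `makeDwellDetector` and its parameters are hypothetical names, and the timer functions are injectable so the logic can be exercised without real delays.

```javascript
// Sketch of a dwell-based tooltip: show the icon's name only after the
// cursor has rested on it for the full dwell period (300 ms in the study).
function makeDwellDetector(dwellMs, showTooltip,
                           setTimer = setTimeout, clearTimer = clearTimeout) {
  let pending = null; // handle of the running dwell timer, if any
  return {
    onEnter(iconName) {
      if (pending !== null) clearTimer(pending); // restart on re-entry
      pending = setTimer(() => { pending = null; showTooltip(iconName); }, dwellMs);
    },
    onLeave() {
      // Cursor left before the dwell completed: cancel the tooltip.
      if (pending !== null) { clearTimer(pending); pending = null; }
    },
  };
}
```

In a real page, `onEnter`/`onLeave` would be wired to each icon's `mouseenter`/`mouseleave` events.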
+
+Concrete. The Concrete interface used monochrome icons similar to those found in standard mobile and desktop environments. The icons were chosen to avoid images of real-world objects, and therefore had contextual meaning. Although icons varied in shape, the level of distinctiveness was reduced by adding a circular grey background with a 1-pixel black border.
+
+Concrete+Colour. The Concrete+Colour interface used icons similar to Concrete in terms of shape distinctiveness and meaning (no icons were repeated). Icons were given a colour from a set of twelve unique colours; colours were equally distributed among the 60 icons. Colour brightness was adjusted to make icons with different colours clearly differentiable, and colours were not repeated for neighboring icons. The addition of colour provides the user with new landmarks that could be valuable for remembering locations (e.g., "it was the blue icon next to the red icon").
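The constraints above (twelve colours distributed equally over 60 icons, with no colour repeated between neighbouring icons) can be met with a simple greedy quota-based assignment. The sketch below is our illustration under those stated constraints, not the authors' actual procedure; `assignColours` is a hypothetical name.

```javascript
// Greedy colour assignment for a 3-row x 20-column toolbar: each of 12
// colours gets an equal quota (5 icons), and each cell takes the colour
// with the largest remaining quota that differs from its already-placed
// left and upper neighbours.
function assignColours(nColours = 12, rows = 3, cols = 20) {
  const quota = new Array(nColours).fill((rows * cols) / nColours);
  const grid = Array.from({ length: rows }, () => new Array(cols).fill(-1));
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const left = c > 0 ? grid[r][c - 1] : -1;
      const up = r > 0 ? grid[r - 1][c] : -1;
      let best = -1;
      for (let k = 0; k < nColours; k++) {
        if (k === left || k === up) continue; // no neighbour repeats
        if (quota[k] > 0 && (best === -1 || quota[k] > quota[best])) best = k;
      }
      quota[best]--;
      grid[r][c] = best;
    }
  }
  return grid; // grid[r][c] is the colour index of the icon at (r, c)
}
```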
+
+Abstract. The Abstract interface used meaningless monochrome icons consisting of circle and octagon shapes that were augmented with partial or full crossing lines, gaps in the outline, or dots in the centre of the icon. Icons in this set provided medium shape distinctiveness: each shape was different, but the set shared several basic visual properties.
+
+Abstract+Colour. The Abstract+Colour interface used icons that were similar in design to Abstract, but used a square base outline. Colours were added to icons as described above for the Concrete+Colour interface.
+
+Mixed. The Mixed interface used icons with high shape distinctiveness (variations in size, shape, weight, and texture) and high colour distinctiveness (icons used a variety of colours). These variations provide users with two different types of landmark to assist their location memory. The icons were adapted from a real-world set, and had contextual meaning.
+
+
+Figure 4: Screenshots of the five interfaces used in Study 1. Target objects are outlined in red.
+
+§ 3.2 TASKS AND STIMULI
+
+The study consisted of a series of trials in each of the five interfaces, where each trial involved locating and selecting an icon. This task is commonly performed in toolbar-based and ribbon-based interfaces such as Microsoft Word 2007 [48], Adobe Photoshop [1], and the GIMP graphics editor [71]. Every trial began by displaying a target word cue in the middle of the screen that remained visible for the entire trial, and participants were asked to find and select the corresponding icon from the toolbar. Participants could see the name of an icon as a tooltip after hovering over it for 300 ms. Each correct selection was indicated by a green flash at the selected location; red flashes indicated incorrect selections. After selecting the correct icon, participants could proceed to the next trial by clicking a 'Next Trial' button that appeared in the middle of the screen. The button centred the participant's gaze and cursor position, and started the trial timer. For each interface, 9 of the 60 icons were used as targets; these were sampled from three general areas of the toolbar [78]: 3 from the corner regions (first and last three columns), 4 from the edges (top and bottom rows), and 2 from the middle row. No target position was repeated among the five interfaces. Target positions in each interface were the same for all participants, with targets appearing in random order.
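The target-sampling scheme can be sketched as follows. This is a hypothetical reconstruction of the scheme as described, for a 3-row x 20-column toolbar; `sampleTargets` and the exact region boundaries are our reading of the text, not published code.

```javascript
// Sample 9 target cells: 3 from the corner regions (first/last three
// columns), 4 from the remaining top/bottom-row cells, and 2 from the
// remaining middle-row cells.
function sampleTargets(rng = Math.random) {
  const corners = [], edges = [], middle = [];
  for (let row = 0; row < 3; row++) {
    for (let col = 0; col < 20; col++) {
      if (col < 3 || col >= 17) corners.push([row, col]);
      else if (row === 0 || row === 2) edges.push([row, col]);
      else middle.push([row, col]);
    }
  }
  // Draw n distinct cells from a region without replacement.
  const pick = (cells, n) => {
    const pool = cells.slice();
    const out = [];
    for (let i = 0; i < n; i++)
      out.push(pool.splice(Math.floor(rng() * pool.length), 1)[0]);
    return out;
  };
  return [...pick(corners, 3), ...pick(edges, 4), ...pick(middle, 2)];
}
```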
+
+§ 3.3 PARTICIPANTS AND APPARATUS
+
+Twenty participants (ten men, nine women, one non-binary), ages 20-44 (mean 26, SD 5.4), were recruited from a local university and received a $15 honorarium. The study took 90 minutes. All participants had normal or corrected-to-normal vision, and none reported a colour-vision deficiency. All participants were highly familiar with desktop and mobile applications (up to 10 hrs/wk (3), 20 hrs/wk (4), 30 hrs/wk (1), and over 30 hrs/wk (12)). Ten participants reported primarily issuing commands by navigating GUIs with mice and ten reported using keyboard shortcuts. Overall, participants were familiar with keyboard shortcuts (1-5 shortcuts (7), 6-10 shortcuts (9), 11-15 shortcuts (2), 16-20 shortcuts (1), and over 20 shortcuts (1)).
+
+Study software (used in Study 1 and 2) was written in JavaScript, HTML and CSS, and ran in the Chrome browser. The study used a 27-inch monitor at 1920x1080 resolution, running on a Windows 10 PC with an Nvidia GTX 1080Ti graphics card. The system recorded all performance data; subjective responses were collected with SurveyMonkey.
+
+§ 3.4 PROCEDURE AND STUDY DESIGN
+
+At the beginning of the study session, participants completed an informed consent form and were given an overview of the study. After filling out a demographic questionnaire, participants completed a practice round consisting of 4 blocks of 4 trials with an icon set not used in the main study. They then completed 5 blocks of 9 trials for each of the five interfaces. The study followed a within-participant design, with the interfaces counterbalanced using a Latin square. After each interface, participants completed NASA-TLX [35] questionnaires; after all interfaces, participants answered final questions about their preferences. Last, they reported their strategies for remembering target locations.
+
+The study used a within-participants design with three factors (meaning, shape distinctiveness, and colour distinctiveness) that were used for a series of planned comparisons. The dependent measures were completion time, hover amounts, errors, and subjective responses. Our main hypotheses were:
+
+ * H1: Increased colour distinctiveness will reduce completion time and hover amounts (Abstract and Concrete vs. Abstract+Colour and Concrete+Colour);
+
+ * H2: Increased meaning will reduce completion time and hover amounts (Abstract and Abstract+Colour vs. Concrete and Concrete+Colour);
+
+ * H3: Increased shape distinctiveness will reduce completion time and hover amounts (Mixed vs. Concrete+Colour);
+
+ * H4: Increasing both colour distinctiveness and shape distinctiveness will lead to a larger reduction in completion time and hover amounts (Mixed vs. Concrete).
+
+§ 4 STUDY 1 RESULTS
+
+For all studies, we report the effect size for significant RM-ANOVA results as generalized eta-squared, $\eta^2$ (considering .01 small, .06 medium, and > .14 large [18]), and Holm correction was applied to the post-hoc pairwise t-tests.
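The Holm correction mentioned above is a standard step-down adjustment; the sketch below is an illustrative implementation (not the authors' analysis script), returning adjusted p-values in the original input order.

```javascript
// Holm-Bonferroni adjustment: sort p-values ascending, multiply the
// rank-th smallest by (m - rank), enforce monotonicity, and cap at 1.
function holmAdjust(pValues) {
  const m = pValues.length;
  const order = pValues.map((p, i) => [p, i]).sort((a, b) => a[0] - b[0]);
  const adjusted = new Array(m);
  let running = 0;
  order.forEach(([p, idx], rank) => {
    running = Math.max(running, (m - rank) * p);
    adjusted[idx] = Math.min(1, running);
  });
  return adjusted;
}
```

For example, with three comparisons, `holmAdjust([0.01, 0.04, 0.03])` yields approximately `[0.03, 0.06, 0.06]`: the middle raw p-value (0.04) is raised to 0.06 so the adjusted values stay monotone.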
+
+§ 4.1 COMPLETION TIME
+
+Completion time was measured from the appearance of a word cue to the selection of a correct icon; no data was removed due to outlying values. Mean completion times for the five icon sets are shown in Figure 5.
+
+Our first planned comparisons (H1 and H2) involved the effects of colour distinctiveness and meaning. A 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) showed effects of Meaning ($F_{1,19} = 89.60$, $p < 0.0001$, $\eta^2 = 0.54$) and Block ($F_{1,19} = 336.88$, $p < 0.0001$, $\eta^2 = 0.71$) on completion time, but no effect of Colour Distinctiveness ($F_{1,19} = 2.99$, $p = 0.10$). There were no interactions between the factors (all $p > 0.10$).
+
+Follow-up tests for Meaning showed significant differences (all $p < 0.05$) between the concrete icon sets (Concrete and Concrete+Colour) and the abstract sets (Abstract and Abstract+Colour). Follow-up tests for Block showed differences between each successive pair except blocks 3 and 4.
+
+Our third planned comparison (H3) used the Mixed and Concrete+Colour conditions to see whether shape distinctiveness would improve performance in icon sets that are already distinctive in terms of colour. However, a one-way ANOVA showed no difference ($F_{1,19} = 0.086$, $p = 0.77$). Our fourth comparison (H4) used the Mixed and Concrete interfaces to see whether having two distinctive visual variables would improve performance (i.e., Mixed is more differentiable than Concrete in terms of both colour and shape). However, once again a one-way ANOVA showed no difference ($F_{1,19} = 0.03$, $p = 0.86$).
+
+
+Figure 5: Mean trial completion time, by interface (±s.e.).
+
+§ 4.2 HOVERS
+
+We measured the number of hovers (where the participant held the mouse for 300 ms over a target, showing the name) as a more sensitive measure of progress through the stages of cognitive, associative, and autonomous performance. As a participant moves from the cognitive to the associative stage, there should be a reduction in the number of icons that they need to inspect. Mean hovers per trial are shown in Figure 6. Results are very similar to those reported above for completion time: a 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) showed effects of Meaning ($F_{1,19} = 117.5$, $p < 0.0001$, $\eta^2 = 0.66$) and Block ($F_{1,19} = 353.65$, $p < 0.0001$, $\eta^2 = 0.65$) on number of hovers, but no effect of Colour Distinctiveness ($F_{1,19} = 4.36$, $p = 0.051$) (H1 and H2). There was also an interaction between Meaning and Colour ($F_{1,19} = 5.61$, $p < 0.05$); as shown in Figure 6, the Abstract+Colour condition had fewer hovers than Abstract, whereas Concrete+Colour had more hovers than Concrete.
+
+Follow-up tests for Meaning again showed significant differences (all $p < 0.05$) between both concrete icon sets (Concrete and Concrete+Colour) and both abstract sets (Abstract and Abstract+Colour). Follow-up tests for Block showed differences between successive pairs except for blocks 3 and 4.
+
+
+Figure 6: Mean hover amounts, by interface (±s.e.).
+
+§ 4.3 ERRORS
+
+We measured errors as the number of incorrect clicks before choosing the correct item. In some trials, participants clicked instead of hovering, leading to unusually high numbers of errors; we therefore removed 32 outliers (out of 4500 total trials) that were more than 3 s.d. from the mean. Overall errors were low (an average of 0.032 errors per click). A 2x2x5 RM-ANOVA (Meaning × Colour Distinctiveness × Block) on errors showed a main effect of Block ($F_{1,19} = 12.2$, $p < 0.05$, $\eta^2 = 0.046$) and a main effect of Meaning ($F_{1,19} = 5.16$, $p < 0.05$, $\eta^2 = 0.18$). Follow-up t-tests showed that abstract icons had a significantly ($p < 0.05$) higher error rate (0.048 errors per trial) than concrete icons (0.018 errors per trial).
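The 3 s.d. outlier screen described above is a standard filter; the sketch below is illustrative (the paper does not publish its analysis code), using the sample standard deviation.

```javascript
// Keep only values within k standard deviations of the mean
// (k = 3 in the screening described above).
function removeOutliers(values, k = 3) {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const sd = Math.sqrt(values.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1));
  return values.filter((v) => Math.abs(v - mean) <= k * sd);
}
```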
+
### 4.4 Subjective Responses and Comments
+
We used the Aligned Rank Transform [82] to perform RM-ANOVA on the NASA-TLX responses. As shown in Figure 7, mean scores of all TLX measures followed a trend similar to completion time. We found significant effects for all subjective measures. Follow-up t-tests revealed significant differences (all $p < 0.05$) between the two conditions with abstract icons (Abstract and Abstract+Colour) and the three conditions with concrete icons (Concrete, Concrete+Colour, and Mixed) for every measure except physical effort. Significant effects were also found (all $p < 0.05$) in physical effort between Abstract and the three conditions with concrete icons, as well as between Abstract and Abstract+Colour in perceived success.
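The Aligned Rank Transform aligns the data for one effect of interest (stripping the other effects) and then assigns midranks before a conventional ANOVA is run on the ranks. The sketch below illustrates only that alignment-and-rank step for a main effect, with `art_main_effect` as a hypothetical helper name; the study presumably used an existing ART implementation rather than code like this.

```python
from collections import defaultdict

def art_main_effect(rows):
    """Aligned Rank Transform sketch for the main effect of factor A.

    rows: list of (a_level, b_level, y). Alignment keeps only A's estimated
    main effect (residual + marginal A mean - grand mean), then midranks
    the aligned responses. Returns ranks in input order.
    """
    grand = sum(y for _, _, y in rows) / len(rows)
    cell, marg_a = defaultdict(list), defaultdict(list)
    for a, b, y in rows:
        cell[(a, b)].append(y)
        marg_a[a].append(y)
    cmean = {k: sum(v) / len(v) for k, v in cell.items()}
    amean = {k: sum(v) / len(v) for k, v in marg_a.items()}
    aligned = [y - cmean[(a, b)] + (amean[a] - grand) for a, b, y in rows]
    # Midranks: tied values share the average of their 1-based rank positions.
    order = sorted(range(len(aligned)), key=lambda i: aligned[i])
    ranks = [0.0] * len(aligned)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and aligned[order[j + 1]] == aligned[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks
```

The standard RM-ANOVA is then applied to these ranks instead of the raw responses.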
+
| Study | Condition | Meaning | Shape Distinctiveness | Colour Distinctiveness |
|---|---|---|---|---|
| S1 | Concrete | Contextual | Medium | None (monochrome) |
| S1 | Concrete+Colour | Contextual | Medium | Medium (colour) |
| S1 | Mixed | Contextual | High | High (multi-colour) |
| S1 | Abstract | Meaningless | Medium | None (monochrome) |
| S1 | Abstract+Colour | Meaningless | Medium | Medium (colour) |
| S2 | Square | Meaningless | None | None (monochrome) |
| S2 | Square+Colour | Meaningless | None | Medium (colour) |
| S2 | UnfamiliarShape | Meaningless | High | None (monochrome) |
| S2 | FamiliarShape | Familiar | Medium | None (monochrome) |

Table 1: Icon properties of the interfaces in Studies 1 & 2. Meaning is Meaningless, Contextual, or Familiar; Shape Distinctiveness is None, Medium, or High; Colour Distinctiveness is None (monochrome), Medium (colour), or High (multi-colour).
+
+
Figure 7: Mean NASA-TLX question responses for Study 1 (±s.e.).
+
Overall, participants preferred both the Mixed and Concrete+Colour conditions. They also perceived them as the easiest and fastest conditions, and the ones where they made the fewest errors. Results of the preference survey are summarized in Table 2.
+
| Condition | Easiest | Fastest | Fewest Errors | Preference |
|---|---|---|---|---|
| Abstract | 0 | 0 | 0 | 0 |
| Abstract+Colour | 3 | 3 | 3 | 3 |
| Concrete | 3 | 3 | 3 | 4 |
| Concrete+Colour | 6 | 7 | 7 | 7 |
| Mixed | 8 | 7 | 7 | 6 |

Table 2: Summary of preference survey results.
+
Participants used a variety of techniques to learn and retrieve the icons. Eight participants stated that they relied on icon meaning and attempted to find a story or link to use as the basis for their memory: for example, one participant said "I tried to make a connection between the icon and the word." Ten participants focused on remembering the spatial locations (at different levels of specificity); one stated "[I recalled] the location of an icon if it was in the first, middle, or end [of the toolbar]." Nine participants also commented on the value of shape distinctiveness. For example, a participant said "If I had a good grasp of the icon's shape, it was easier to mentally place it on the screen and find it again." The same participant reported a challenge with the less-distinctive icon sets: "I couldn't properly grasp a unique shape [in Abstract or Abstract+Colour], it became very difficult to mentally recall its position." Finally, six participants also used the colour of icons; one stated "colour added an additional element for memory."
+
## 5 STUDY 2 METHODS
+
+Study 1 suggested that colour did not improve learnability, and that icons with concrete imagery were substantially easier to learn. In Study 2, we expand on these results and go into more detail on two questions: first, whether colour improves learning when it is the only visual variable (i.e., the icons have no shape differentiability at all); and second, whether it is the differentiability of an icon's shape or the meaningfulness of the image that assists learning.
+
Study 2 followed a similar method to Study 1, but with two alterations. To reduce the overall time needed for the session, we reduced the number of targets from nine to seven, and the number of blocks from five to four (Study 1 showed clear learning effects within four trial blocks; see Figure 5). All other elements of the study method, procedure, and apparatus were identical to Study 1.
+
### 5.1 Pre-Study to Choose Number of Colours
+
Study 1 used 12 colours, a larger number than is recommended for mapping tasks by visual design guidelines. To determine a suitable number of colours, we carried out a small pre-study comparing learning rates with four [19], eight [49], and twelve colours. Similar to Study 1, three interfaces were designed, each having 60 square icons with 5-pixel borders. In each interface, colours were distributed evenly among the icons (none repeated for neighboring icons). Participants carried out four blocks with seven targets in each interface. RM-ANOVA on completion time showed no effect of number of colours, although participant comments and literature [19, 81] generally supported four colours. Therefore, we used four colours for Study 2.
+
### 5.2 Interfaces
+
+The interfaces in Study 2 used a similar spatial layout of 60 icons as in Study 1, but used four new icon sets to explore our new questions about the effects of colour, shape distinctiveness, and familiarity.
+
+Square. The Square interface's icons were identical squares with a grey 5-pixel border. These icons have no colour differentiability, no shape differentiability, and no meaning. Therefore, the only way that participants could remember the correct icon was by memorizing its spatial location.
+
+Square+Colour. The Square+Colour interface used the same square shapes as Square for all icons, but the icons were coloured with one of red, green, brown, or blue. Colours were evenly distributed across the 60 icons, and no neighboring icons repeated a colour. Colour brightness was adjusted to maximize differentiability following Arthur et al. [4]. With no shape distinctiveness in the icon set, the colours provide additional landmarks for users to remember locations.
+
+
+Figure 8: The four icon sets used in Study 2. Targets are outlined in red.
+
+UnfamiliarShape. The UnfamiliarShape interface showed monochrome four-stroke Chinese characters as icons. These icons had high shape distinctiveness (all icons were clearly different shapes). Chinese characters are meaningful, but only if the user is familiar with them - and our participants were chosen such that none knew these characters. Therefore, this icon set had no meaning for our study.
+
+FamiliarShape. The FamiliarShape interface used meaningful icons with imagery of recognizable real-world objects (Figure 8). Shape distinctiveness was medium, because we equalized several other visual variables such as size, line weight, and background shape (a grey circle with a 1-pixel black border).
+
+Icons were created using GIMP. FamiliarShape's images were sourced from material.io and icons8.
+
### 5.3 Design
+
+Study 2 used a within-participants factorial design with several planned comparisons. There were three factors involved in the comparisons: shape distinctiveness (none or high), colour distinctiveness (monochrome or colour), and familiarity (meaningless or familiar). The comparisons used different sets of conditions, as specified by our four hypotheses:
+
 * H1: Increasing shape distinctiveness will reduce completion time and hover amounts (Square and Square+Colour vs. FamiliarShape and UnfamiliarShape);
+
 * H2: Increasing colour distinctiveness in icons with no shape distinctiveness will reduce completion time and hover amounts (Square vs. Square+Colour);
+
+ * H3: Increasing familiarity will reduce completion time and hover amounts (UnfamiliarShape vs. FamiliarShape);
+
+ * H4: Even in icons without meaning, increasing shape distinctiveness will reduce completion time and hover amounts (Square vs. UnfamiliarShape).
+
### 5.4 Participants
+
Twenty participants who did not take part in Study 1 (sixteen women, three men, and one non-binary; ages 18-37 (mean 24, SD 5)) completed the 60-minute study, and each received a \$10 honorarium. Participants had normal or corrected-to-normal vision with no reported colour-vision deficiencies, and all were highly familiar with desktop and mobile applications (up to 10 hrs/wk (3), 20 hrs/wk (3), 30 hrs/wk (6), and over 30 hrs/wk (8)). Seven participants reported primarily issuing commands by navigating GUIs with mice, eleven reported using keyboard shortcuts, one reported using both, and one reported using a trackpad. Overall, participants were familiar with keyboard shortcuts (1-5 shortcuts (9), 6-10 shortcuts (6), 11-15 shortcuts (3), 16-20 shortcuts (1), and over 20 shortcuts (1)). None of the participants could read Chinese characters.
+
## 6 STUDY 2 RESULTS
+
### 6.1 Completion Time
+
+Mean trial completion times are summarized in Figure 9. No data was removed due to outlying values. We carried out analyses for each of our four planned comparisons.
+
First (H1), a 2x4 RM-ANOVA (Shape Distinctiveness × Block) showed effects of both Shape Distinctiveness ($F_{1,19} = 124.22$, $p < 0.0001$, $\eta^2 = 0.67$) and Block ($F_{1,19} = 181.67$, $p < 0.0001$, $\eta^2 = 0.84$) on completion time, as well as an interaction between the two factors ($F_{1,19} = 12.44$, $p < 0.01$, $\eta^2 = 0.09$).
+
The effect of Shape Distinctiveness, however, must be considered in light of our third planned comparison (H3) of the familiarity of icon imagery - that is, in light of the large performance difference between the two interfaces with distinctive shapes. These interfaces (UnfamiliarShape and FamiliarShape) differ in terms of the familiarity of the icon imagery, and a one-way RM-ANOVA showed a highly significant difference between them ($F_{1,19} = 112.24$, $p < 0.0001$, $\eta^2 = 0.70$). As can be seen in Figure 9, the UnfamiliarShape interface was much closer in learning rate to the two interfaces with square icons, and t-tests showed no significant differences between UnfamiliarShape and Square ($p > 0.1$), but showed that FamiliarShape was significantly different from all three other interfaces (all $p < 0.001$). In our results, therefore, the benefit of shape distinctiveness arose only when those shapes were both differentiable and familiar.
+
Follow-up tests for Block showed significant differences between each successive pair (all $p < 0.05$). The significant interaction between Shape Distinctiveness and Block can be seen in Figure 9, where the learning curve for FamiliarShape flattens before the other conditions (because users reached expertise far earlier in this condition).
+
Our second planned comparison (H2) investigates the effect of colour distinctiveness in icons that have no shape differentiability (Square vs. Square+Colour). A 2x4 RM-ANOVA (Colour Distinctiveness × Block) showed no effect of Colour Distinctiveness ($F_{1,19} = 1.62$, $p = 0.2$), and no interaction with Block ($F_{1,19} = 0.56$, $p = 0.46$).
+
Our fourth planned comparison (H4) looked at whether shape differentiability alone (with meaningless icons) would improve learning. We compared the Square and UnfamiliarShape conditions using a one-way RM-ANOVA, but found no difference ($F_{1,19} = 0.27$, $p = 0.61$).
+
+
+Figure 9: Mean trial completion time, by interface (±s.e.).
+
### 6.2 Hovers
+
Similar to Study 1, the results for mean hovers in Study 2 closely mirror the completion time results. RM-ANOVA (Shape Distinctiveness × Block) showed effects of Shape Distinctiveness ($F_{1,19} = 107.02$, $p < 0.0001$, $\eta^2 = 0.73$) and Block ($F_{1,19} = 290.45$, $p < 0.0001$, $\eta^2 = 0.84$), as well as an interaction between the two factors ($F_{1,19} = 24.4$, $p < 0.0001$, $\eta^2 = 0.18$) on hovers (H1). Follow-up tests for Block showed significant differences (all $p < 0.05$) between each successive pair.
+
As with the completion time results, the effect of Shape Distinctiveness appears to be largely due to the substantial effect of familiarity: in our third planned comparison, a one-way RM-ANOVA also showed a significant effect between UnfamiliarShape and FamiliarShape ($F_{1,19} = 102.17$, $p < 0.0001$, $\eta^2 = 0.73$). T-tests also showed no significant difference between UnfamiliarShape and Square ($p > 0.1$), but showed that FamiliarShape was significantly different from all three other interfaces (all $p < 0.001$). Follow-up tests for Block showed significant differences between every successive pair except blocks 3 and 4.
+
In our second planned comparison (H2), a 2x4 RM-ANOVA found no effect of Colour Distinctiveness ($p > 0.15$) and no interaction with Block ($F_{1,19} = 0.15$, $p = 0.71$, $\eta^2 = 0.002$).
+
In our fourth planned comparison (H4), a one-way RM-ANOVA found no effect of shape differentiability ($F_{1,19} = 0.27$, $p = 0.61$, $\eta^2 = 0.006$).
+
+
+Figure 10: Mean hover amounts, by interface (±s.e.).
+
### 6.3 Errors
+
We measured errors as the number of incorrect clicks before choosing the correct item. Data from one participant (who clicked instead of hovered) was removed. For all other participants, errors were very low, with an overall average of 0.037 errors per trial. RM-ANOVA showed no main effect of any of our main factors on errors (Shape Distinctiveness: $F_{1,19} = 1.42$, $p = 0.24$; Block: $F_{1,19} = 0.39$, $p = 0.75$; Colour: $F_{1,19} = 1.67$, $p = 0.21$; or Familiarity: $F_{1,19} = 2.47$, $p = 0.13$).
+
### 6.4 Subjective Responses and Comments
+
NASA-TLX responses were analyzed after performing an Aligned Rank Transform [82]. Incomplete data from two participants was removed. The mean effort scores shown in Figure 11 mirror the trend in the performance data, in which FamiliarShape outperformed the other conditions on all measures. RM-ANOVA showed significant effects for all subjective measures. Follow-up tests showed significant differences (all $p < 0.05$) between FamiliarShape and every other condition in mental effort, perceived success, effort, and annoyance. Overall, the FamiliarShape icons were greatly preferred; results are summarized in Table 3.
+
+
Figure 11: Mean NASA-TLX question responses for Study 2 (±s.e.).
+
Participants' comments again echoed the performance results. Three participants stated that the uniformity in the Square condition was challenging; one said, "it was really hard since everything looked the same." Four participants also noted difficulties when attempting to use the colour information. For example, one participant stated "I tried to use colour [in Square+Colour] but it didn't work super well." The realistic representation of targets in FamiliarShape was reported as beneficial by eight participants (e.g., "remembering the picture of each object, and my brain just brought me to where it was"). Finally, six participants stated that the distinct shapes of the UnfamiliarShape condition provided a connection that helped them to remember targets: for example, one participant reported that one target's icon "looked like a bent cross", making it easier to remember the location.
+
| Condition | Easiest | Fastest | Fewest Errors | Preference |
|---|---|---|---|---|
| Square | 1 | 0 | 0 | 1 |
| Square+Colour | 0 | 1 | 0 | 0 |
| UnfamiliarShape | 0 | 0 | 1 | 0 |
| FamiliarShape | 19 | 19 | 19 | 19 |

Table 3: Summary of Study 2 preference survey results.
+
## 7 DISCUSSION
+
+Our two studies provided the following findings:
+
+ * Colour distinctiveness did not improve learning in either study.
+
+ * Adding multiple distinctive variables (colour and shape) also did not improve learning.
+
+ * Shape distinctiveness when coupled with meaning substantially improved learning, but shape distinctiveness on its own was not effective.
+
 * Participant strategies suggested that they primarily tried to search by meaning rather than visual characteristics.
+
+In the following paragraphs, we consider explanations for these main results, limitations to our findings, and directions for future research.
+
### 7.1 Explanation for Results
+
#### 7.1.1 Colour Distinctiveness Does Not Improve Icon Learning
+
Colour distinctiveness did not reduce completion time or hover amounts in either study, even when it was the only visual variable available (Study 2). One main reason for this finding is that many participants apparently did not use the colour cues, and instead only searched by meaning and spatial location - participants often reported creating and connecting stories to icons to remember them rather than using colour as a visual landmark. However, participant comments suggest that at least a few people attempted to use colour information - some participants would search by colour first in target selection, allowing them to narrow down the target set (e.g., searching for red icons with two crossing lines reduces the number of icons that must be searched). But in many cases, attempts to use colour appeared to be unsuccessful. One reason for colour's ineffectiveness may be that the colour cues interfered with one another, reducing the value of colour as a landmark. That is, because all icons were coloured, remembering only that an icon was "beside the blue one" did not uniquely identify a target (because there were several blue icons) [38, 41]. It is, however, possible that if there were fewer coloured icons, colour might be a more effective landmark - in studies of artificial landmarks, for example, having only grey-coloured obvious landmarks significantly improved performance [78] in a similar selection task. It is also possible that colour interfered with participants' ability to see differences in the abstract shapes used in Study 1; that is, the colours used in the Abstract+Colour condition may have reduced contrast and thus reduced any potential effect of shape distinctiveness [44, 55].
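The arithmetic behind this interference is simple: with colours distributed evenly, a remembered colour narrows the search but rarely identifies a single icon. A quick illustrative computation:

```python
# With 60 icons and evenly distributed colours, filtering by a remembered
# colour still leaves many candidates sharing that colour.
icons = 60
for n_colours in (12, 4):  # Study 1 used 12 colours, Study 2 used 4
    shared = icons // n_colours
    print(f"{n_colours} colours: {shared} icons share the target's colour")
# 12 colours leave 5 candidates; 4 colours leave 15 - far from unique.
```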
+
#### 7.1.2 Shape Distinctiveness Was Only Effective with Meaning
+
When icons had even a contextual level of meaning, we observed that participants would visually search using meaning as a memory cue; and when meaning was available, participants tended to disregard the landmarks created by differences in the icons' visual presentation. In icons with meaningless imagery, participants needed to rely more on absolute spatial memory - and without pre-existing knowledge of the icon mappings, participants had to find a prompted icon by laborious visual search (hovering one by one). In addition to improving performance in the early stages of learning, it was also clear that meaningful icons had a similar learning curve to the other conditions, implying that these conditions also allowed users to switch to location-based retrieval. Our findings confirm previous guidance about designing icons with clear meaning to help user navigation of an interface (e.g., [7, 25, 39, 40, 44, 57]), although our results extend this guidance to the value of meaning for longer-term learning of an interface as well. In contrast, Study 2 showed that shape distinctiveness without meaning did not improve learning, and the reasons for this condition's poor performance are similar to that of the colour conditions: namely, interference between similar-looking shapes may have prevented a shape's differentiability from being useful as a landmark. As with colour, shape may still be useful as a landmark if there are fewer shapes that have more noticeable differences.
+
### 7.2 Design Implications and Generalizing the Results
+
+Our results suggest that user learning of an interface is not hindered by the lack of visual distinctiveness in 'flat' and subtle icon designs, and also clearly show the value of using concrete and familiar imagery. Therefore, designers can use flat and subtle icon styles without compromising memorability, as long as meaning is clearly conveyed. We note, however, that there are other potential factors in the use of flat icons that should be considered in addition to learning (e.g., whether users can tell that an on-screen object is in fact a clickable icon). Our results also raise the question of what designers should do in situations where they must create icons for commands or concepts that do not have obvious visual representations. The frequency with which we saw the "memory hook" strategy in our studies (i.e., looking for a connection between the image of the icon and the associated command) suggests that concrete imagery - even if not a direct representation of the underlying concept - may enable learning better than simply using distinctive visual variables. As suggested above, however, it may be that the value of colour or shape distinctiveness as landmarks could be improved, a topic we will consider in future studies.
+
### 7.3 Limitations and Future Work
+
+There are several ways in which our studies could not exactly replicate various factors of real-world interface learning, and these suggest possibilities for future research. First, we plan to test the idea mentioned above that colour and shape differentiability could be more effective if there are fewer items in the set that are different, thus providing a better anchor for spatial learning. One implementation would involve strategically placed icons that are designed to catch the user's attention (using colour or shape) within the toolbar; these icons could anchor memory and potentially improve learning.
+
+Second, a limitation in our studies was the short time available for learning - users typically learn an interface in a much slower fashion, and in the context of real tasks. In addition, we tested only immediate recall, not retention after a time period, and we did not test transfer from the training task back to a real-world task with the interface. We plan retention and transfer phases in our future studies.
+
## 8 CONCLUSION
+
Icons are a ubiquitous mechanism for representing commands in an interface [7], and learning the icons in an interface is a major part of becoming an expert with that system. Despite the prevalence of icons, toolbars, and ribbons, however, little is known about the effects of icon design on learnability. We carried out two studies to test whether differentiability in two visual variables - colour and shape - would improve learning of icons in a 60-item toolbar. Our results showed that our manipulations of these variables did not have significant effects on learning or performance, and that the concreteness and meaning of the icon's imagery was far more effective in helping users learn and recall targets. Our studies provide new empirical evidence for existing guidelines suggesting that icons that are contextual or familiar will be more learnable and easier to navigate. This work increases understanding of how users learn new icons and the relative roles that visual variables and cognitive factors play in users' spatial learning and expertise development.
+
## ACKNOWLEDGMENTS
+
+We would like to thank our participants and the anonymous reviewers for their feedback. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..63ae5b69605d1f5e55857140c6d973f7e5e16753
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,305 @@
+# Constraint-Based Spectral Space Template Deformation for Ear Scans
+
Srinivasan Ramachandran*

École de technologie supérieure, Montréal, Canada

Tiberiu Popa†

Concordia University, Montréal, Canada
+
Eric Paquette‡
+
+École de technologie supérieure, Montréal, Canada
+
+
+
+Figure 1: Our approach deforms a template to match the shape of a scan by aligning and deforming in a smooth domain space.
+
+## Abstract
+
Ears are complicated shapes with many folds. It is difficult to deform an ear template so that it matches the shape of a scan while avoiding reconstructing the scan's noise and remaining robust to the bad geometry found in scans. We leverage the smoothness of the spectral space to help align the semantic features of the ears. Edges detected in image space are used to identify relevant features of the ear, which we align in the spectral representation by iteratively deforming the template ear. We then apply a novel reconstruction that preserves the deformation from the spectral space while reintroducing the original details. A final deformation, based on constraints considering surface position and orientation, deforms the template ear to match the shape of the scan. We tested our approach on many ear scans and observed that the resulting template shape provides a good compromise between complying with the shape of the scan and avoiding the reconstruction of the noise found in the scan. Furthermore, our approach was robust to scan meshes exhibiting typical bad geometry such as cracks and handles.
+
Index Terms: Computing methodologies - Computer graphics - Shape modeling
+
+## 1 INTRODUCTION
+
Virtual head models are extremely important in many fields, ranging from the game and entertainment industries to the medical and cosmetic industries. Thus, new acquisition methods strive to improve the accuracy and detail of 3D models, increase the level of automation in capture, and decrease the overall cost and time of the acquisition. Despite the great efforts that have thus far been invested and the considerable research progress obtained, there are still areas that require exploration.
+
The first breakthrough in human head acquisition was light stages [37], which could capture both the static geometry of the head and its appearance model. While the 3D reconstructions were impressive, due to the complexity and variability of the human face it soon became evident that a one-size-fits-all acquisition pipeline is not the best approach, and many specialized methods have been developed targeting the different components of the human head such as lips, jaws, teeth, and eyes.
+
One component that has received little attention is the human ear. Ears are important for recreating a faithful avatar, and they are of particular importance for visual effects, video games, and virtual reality, where we aim for believable characters with specific visual traits. Acquisition of human ears is difficult, as ear geometry has a rich anatomical structure with many components that exhibit complex folds. Moreover, the ear geometry exhibits a large degree of self-occlusion, which leads to missing geometry in the scans. Furthermore, hair often occludes the ear, which is also detrimental to the reconstruction of the fine details found in the ears. All these challenges make ear reconstruction difficult, and many of the general-purpose reconstruction methods available may yield undesirable artifacts.
+
There are two main steps to virtual head model acquisition: a geometric reconstruction step, where an unstructured scan is obtained, and a registration step, where a template is deformed to match the shape of the scan. Typical registration methods first compute a set of 3D feature matches between the template and the scan, followed by a deformation step where the template is deformed to match the 3D features. Unfortunately, human ears are relatively smooth and do not contain easily identifiable local features. Moreover, the variance of the human ear shape is high and the fold structure is complex.
+
In this work, we propose a non-rigid template registration method tailored to the specific geometric characteristics of human ears, such as the long folds exhibiting long ridges and valleys. Our approach takes as input a noisy, potentially incomplete, irregular mesh of an ear, and deforms a template to match the geometry of the ear. We frame the non-rigid registration as an optimization problem in a smooth domain where the ear geometry is simplified. Point correspondences are obtained using edge detection in image space. Fine details of the scan are matched in a way that strikes a balance between avoiding reconstructing noise and still reconstructing legitimate ear details. Our method preserves the semantic structure of the human ear and is robust to the wide natural variations of ear shape.
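Working in a smooth spectral domain amounts to low-pass filtering the mesh geometry. As a rough illustration of that idea (a toy sketch, not the actual method), iterative umbrella (Laplacian) smoothing approximates a projection onto the low-frequency spectral basis of the mesh:

```python
def laplacian_smooth(points, neighbours, lam=0.5, iters=10):
    """One-ring umbrella smoothing: move each vertex toward the average of
    its neighbours. Repeated application attenuates high-frequency detail,
    approximating a projection onto low-frequency Laplacian eigenvectors."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        new = []
        for i, p in enumerate(pts):
            nbrs = neighbours[i]
            if not nbrs:
                new.append(p)
                continue
            avg = [sum(pts[j][k] for j in nbrs) / len(nbrs)
                   for k in range(len(p))]
            new.append([p[k] + lam * (avg[k] - p[k]) for k in range(len(p))])
        pts = new
    return pts

# Toy example: an alternating +-1 "height" signal on a ring is pure
# high-frequency content and is smoothed away almost immediately.
heights = [[float((-1) ** i)] for i in range(10)]
ring = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]
smoothed = laplacian_smooth(heights, ring, lam=0.5, iters=5)
```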
+
+## 2 RELATED WORK
+
Digital human modeling has seen great improvement in human facial scanning [16, 37]. That level of quality came through complex capture setups and equipment. Scans from standard photogrammetry remain noisy, and the noise is even harder to deal with when considering the intricate details found in the ear geometry. We will first present methods tailored to specific parts of the face. We will then review methods specific to ears. We will end by discussing registration methods.
+
+---
+
+*e-mail: ashok.srinivasan2002@gmail.com
+
† e-mail: tiberiu.popa@concordia.ca

‡ e-mail: eric.paquette@etsmtl.ca
+
+---
+
+
+
+Figure 2: These ear scans demonstrate the diversity of ears with respect to their shape.
+
+
+
+Figure 3: External ear anatomy is composed of many different components adding to the complexity of the ear shape.
+
+### 2.1 Facial Parts
+
General-purpose methods quickly showed their limits, and researchers introduced methods specialized to specific parts of the face. Berard et al. focused on the capture of eyes using a parametric model [10] and further improved on capture with a multi-view imaging system that can reconstruct poses of the eye [11]. Bermano et al. [12] present a method that works towards the reconstruction of eyelids, including folding and wrinkles. For a realistic facial appearance, it is important to consider the lips [19, 20] and jaw regions [42, 43]. For the teeth, it is important to capture their appearance and to fit them with respect to the mouth region [36, 38].
+
+### 2.2 Ear Tailored Methods
+
Arbab-Zavar and Nixon [8] proposed a method that detects ears using elliptical Hough transforms. Ansari and Gupta [7] proposed a method that also detects ears in image space, by detecting edges and segregating them into concave and convex edges, thus finding the outer helix region. Cummings et al. [17] detect ears in images by modelling light rays and finding the helix regions. Other methods work in both image and depth space. Yan and Bowyer [39] proposed a method that detects ears by combining both RGB and depth images. Chen and Bhanu [15] proposed a method that detects the helix region by analyzing discontinuities in the hills and valleys of a depth image. Zhou et al. [41] introduced histograms of categorized shapes for 3D ear recognition, adopting a sliding-window approach and a support vector machine classifier. Ears can also be detected by analyzing 3D features such as saddles and ridges, and based on connectivity graphs in depth images [32]. While the methods above are interesting, they are limited to ear detection and do not provide solutions to the 3D ear reconstruction problem.
+
+
+
+Figure 4: Ear scans often exhibit noise and have poor geometry, making them challenging for template matching. On top of noise, meshes often exhibit erroneous handles as seen in (b).
+
+
+
+Figure 5: Example scans from the Faust [13] (a)-(b) and FaceWarehouse [14] (c)-(d) data sets. The ears in (a) and (b) show that standard template fitting methods perform poorly on ears. For the FaceWarehouse examples in (c) and (d), while the geometry is smooth, it does not represent the ear anatomy fully. Details such as the tragus, anti-tragus, and anti-helix are missing. Furthermore, the ears are almost identical, probably because the template matching method "preferred" a generic shape over reconstructing the scan, to avoid picking up noise.
+
+For the purpose of reconstruction, Guler et al. [6] proposed a method that computes a dense registration between an image and a 3D mesh template. It works based on convolution networks that learn a mapping from image pixels into a dense template grid. While the method is interesting, it considers images as inputs instead of 3D scans.
+
+### 2.3 Dense Registration and Reconstruction in General
+
+A dense registration can be a potential avenue for reconstructing a template to match the shape of a scan. Here we focus on specific registration methods; for a more comprehensive list of dense registration methods, the reader is referred to the survey of van Kaick et al. [35]. Ovsjanikov et al. [30, 31] proposed the functional maps framework to express dense registrations. Lähner et al. [25] proposed a method that formulates the problem as a matching between sets of descriptors, imposing a continuity prior on the mapping. A major limitation of this approach is that differences in vertex density between the meshes can be problematic. Furthermore, the choice of descriptors affects the results in the case of a noisy scan.
+
+The Blended Intrinsic Maps (BIM) [24] method produces multiple low-dimensional maps that are blended into a global map. BIM suffers from distortion and discontinuities in its mappings. A major limitation of this method is that it is difficult to adapt to meshes with problems such as holes and noise.
+
+
+
+Figure 6: (a) Our inputs are the template and scan meshes $T$ and $S$. (b) $T$ and $S$ are transformed to a smooth domain and the template is rigidly aligned to the scan. (c) Edges detected in image space are transferred in 3D onto the smooth meshes. (d) The smooth meshes are iteratively deformed through three sub-steps: rigid alignment of the constraints, injective mapping of the constraints, and non-rigid deformation of the smooth template. (e) Spectral reconstruction is used to reintroduce details to the template. This is followed by a last non-rigid deformation phase based on similar surface orientation constraints. (f) After these deformation phases, the template exhibits the shape of the scan without the noise, and semantic regions are in correspondence.
+
+Parametrization-based methods for dense registration work by transforming the meshes into a simpler space where finding a mapping between the meshes is easier. Athanasiadis et al. [9] proposed a geometrically-constrained optimization technique to map 3D genus-zero meshes onto a sphere. Then, they morph meshes with structural similarities by applying feature-based methods. Mocanu and Zaharia [29] proposed a two-step spherical parameterization method: 1) the Gaussian curvature is analyzed to align feature correspondences between the meshes, and 2) a morphing step is applied to establish the mapping.
+
+Another category of deformation methods works with user-given landmarks. Some methods [3, 4, 33] use the landmarks to cut the meshes, flatten them, and extract the dense registration from the planar domain, or improve [22] an already provided registration in the planar domain. In addition to spherical and planar domains, other domains such as hyperbolic orbifolds have been proposed [1, 2]. Landmark-based methods require a very carefully chosen and, in some cases, large set of corresponding landmarks.
+
+Other methods deform the given meshes until their respective shapes match with each other; however, most of the methods are limited to near-isometric deformations [5]. Some methods [23,26] overcome this limitation by trying to extend the range of objects to handle non-isometric pairs. However, these methods face practical challenges when dealing with scans that can contain noise and cracks in the mesh.
+
+A practical limitation for most of the dense registration methods is the mesh quality of the scans. Moreover, a full-fledged dense registration also implies the picking up of noise from the scans thus leading to a bad reconstruction. One main goal of our approach is to achieve the conflicting goals of acquiring geometric detail while avoiding the reconstruction of the noise.
+
+## 3 Template Deformation
+
+An ear shape has a lot of variability, as illustrated in Fig. 2, and its anatomy (Fig. 3) contributes to a shape that is complex in nature. Out-of-the-box scanning methods struggle to reconstruct a good mesh, as can be seen in Fig. 2. Current methods work by either deforming a template or by dense registration, but they still fail on human ears, largely owing to their complex anatomy and practical problems such as mesh quality (Fig. 4). This is also exemplified in widely used data sets of faces. Fig. 5(a)-(b) shows heads from the Faust [13] data set. We can see that the mesh is of very poor quality in the ear region. Fig. 5(c)-(d) presents heads from the FaceWarehouse [14] data set. In this case, the ear geometry is good, but at the expense of not reconstructing a faithful ear (the ears are almost identical in the whole data set). These examples demonstrate that it is necessary to develop a novel approach specific to ears. Our approach fills the gap between methods picking up too much noise (Fig. 5(a)-(b)) and methods avoiding the noise at the expense of not reconstructing a faithful ear (Fig. 5(c)-(d)).
+
+The inputs to our approach are two meshes uniformly scaled to fit in a unit cube: a template $T$ and a scan $S$ (Fig. 6(a)). Scan $S$ is a high-density mesh with holes, noise, and bad polygon quality (Fig. 4). We conduct a series of deformation phases to align the coarse and fine details of the template $T$ to the scan $S$. The input meshes are first converted to a smooth domain as ${T}_{k}$ and ${S}_{k}$. ${T}_{k}$ is rigidly aligned to ${S}_{k}$ and becomes $\overline{{T}_{k}}$ (Fig. 6(b)). Constraint points ${C}_{T}$ and ${C}_{S}$ are found on the input meshes $T$ and $S$ using edge detection, and are transferred onto $\overline{{T}_{k}}$ and ${S}_{k}$ as ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$ (Fig. 6(c)). $\overline{{T}_{k}}$ is iteratively deformed to match the shape of ${S}_{k}$ by aligning the constraints ${C}_{\overline{{T}_{k}}}$ with ${C}_{{S}_{k}}$ (Fig. 6(d)). At the end of the iterations, $\overline{{T}_{k}}$ becomes ${\widehat{T}}_{k}$, which approximates the coarse shape of ${S}_{k}$. With a spectral reconstruction, ${\widehat{T}}_{k}$ is deformed into $\widetilde{T}$, which reintroduces the fine details lost in the smooth domain while preserving the deformation undergone there. A closest-location approach between similarly oriented areas then yields new constraints used for the final deformation of the template to match the scan (Fig. 6(e)).
+
+### 3.1 Smooth Domain Transformation
+
+In this section we explain our first deformation phase, where we align the coarse features of the ear (Fig. 6(b)). This phase uses spectral processing to transform our template and scan meshes into a smooth domain where non-rigid registration is easier to perform. This spectral processing takes advantage of the eigenvectors of the Laplacian matrix of the mesh. Given the $n \times 3$ matrix $V$ of vertex positions, we compute the positions in the smooth domain as follows:
+
+$$
+{V}^{\prime } = {U}_{k} \cdot {U}_{k}^{\top } \cdot V \tag{1}
+$$
+
+
+
+Figure 7: Canny edge detection in image space (a, d) and its projection (b, e) on the meshes $T$ and $S$. The edges are then transferred to $\overline{{T}_{k}}$ and ${S}_{k}$ (c, f). The Canny edge detection is effective in identifying the semantic regions of the ears such as the helix, tragus, anti-tragus, etc.
+
+where ${V}^{\prime }$ contains the resulting vertex positions of the eigensubspace projection and ${U}_{k}$ is an $n \times k$ matrix containing the first $k$ eigenvectors. Note that with a full eigendecomposition, i.e., $k = n$, ${U}_{k = n} \cdot {U}_{k = n}^{\top }$ results in an identity matrix. By reducing $k$, ${U}_{k < n} \cdot {U}_{k < n}^{\top }$ removes less important details, but maintains the global shape of the mesh. For the eigendecomposition, the Laplacian matrix based on cotangent weights [28] is used. Applying this transformation to $T$ and $S$ using their respective eigenvectors ${U}_{k}\left( T\right)$ and ${U}_{k}\left( S\right)$, they become ${T}_{k}$ and ${S}_{k}$. Mesh ${T}_{k}$ is then rigidly aligned [40] with ${S}_{k}$ and becomes $\overline{{T}_{k}}$.
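The projection of Equation 1 can be sketched in a few lines of NumPy. This is a minimal illustration on a toy mesh: a uniform graph Laplacian stands in for the cotangent Laplacian [28] used in the paper, and `smooth_domain` is a hypothetical helper name.

```python
import numpy as np

def smooth_domain(V, edges, k):
    """Project vertex positions V (n x 3) onto the first k Laplacian
    eigenvectors (Eq. 1): V' = U_k U_k^T V.

    A uniform graph Laplacian L = D - A stands in for the cotangent
    Laplacian used in the paper."""
    n = V.shape[0]
    L = np.zeros((n, n))
    for i, j in edges:          # accumulate L = D - A edge by edge
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    # eigh returns eigenvalues in ascending order: keep the k smoothest modes
    _, U = np.linalg.eigh(L)
    Uk = U[:, :k]
    return Uk @ (Uk.T @ V)
```

With $k = n$ the product ${U}_{k} \cdot {U}_{k}^{\top}$ is the identity and the mesh is unchanged; smaller $k$ acts as a low-pass filter on the geometry.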
+
+### 3.2 Ear Features and Constraints
+
+This section describes how we extract meaningful features of the ears that we use as constraints for the deformation in the next section. Our constraints are automatically computed using Canny edge detection on renderings of the 3D ears. Edge detection on 2D renderings of the original ears ($S$ and $T$) proved to be quite robust in systematically detecting meaningful edges. To achieve this, an orthographic camera view was used for the rendering, with the camera facing the ears such that the view covers the bounding box of the ears. Each rendered image is 600 px in height, and the width varies between 400 px and 500 px based on the width of the ear. As can be seen in Fig. 7(a) and (d), the edge detection identifies important semantic features of the ears. The detected edges (2D pixel locations) are then projected back onto the mesh (Fig. 7(b) and (e)). These constraints ${C}_{T}$ and ${C}_{S}$ are 3D locations on the surfaces $T$ and $S$ respectively. The constraints are then transferred to $\overline{{T}_{k}}$ and ${S}_{k}$ (Fig. 7(c) and (f)) as ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$. The transfer relies on barycentric coordinates on the triangles of $T/\overline{{T}_{k}}$ and $S/{S}_{k}$.
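The barycentric transfer at the end of this section can be sketched as follows. Since $T$ and $\overline{{T}_{k}}$ share connectivity, a constraint expressed in barycentric coordinates on a triangle of $T$ can be re-evaluated on the same triangle of $\overline{{T}_{k}}$. `barycentric` and `transfer` are hypothetical helper names, and the sketch assumes the constraint point lies in the plane of its triangle.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c),
    assuming p lies in (or near) the triangle's plane."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def transfer(p, tri_T, tri_Tk):
    """Move a constraint point from a triangle of T to the same
    (deformed) triangle of the smoothed mesh."""
    bary = barycentric(p, *tri_T)
    return bary @ np.vstack(tri_Tk)   # weighted sum of the target vertices
```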
+
+
+
+Figure 8: The figure shows how $\overline{{T}_{k}}$ deforms in a non-rigid fashion based on its constraints ${C}_{\overline{{T}_{k}}}$ . As the iterations progress the movement becomes marginal and hence the iterations are terminated when the average of vertex movements reaches a threshold.
+
+### 3.3 Iterative Coarse-Level Deformation
+
+This deformation phase is done by iterations consisting of two sub-steps: 1) constraints alignment and 2) non-rigid deformation. These iterations deform the smooth domain mesh $\overline{{T}_{k}}$ to match the shape of ${S}_{k}$ using the constraints ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$. A rigid alignment is applied from ${C}_{\overline{{T}_{k}}}$ to ${C}_{{S}_{k}}$. After rigid alignment, for each of the constraints in ${C}_{\overline{{T}_{k}}}$ a closest correspondence in ${C}_{{S}_{k}}$ is found. $\overline{{T}_{k}}$ is then deformed to align the constraints ${C}_{\overline{{T}_{k}}}$ with the corresponding constraints from ${C}_{{S}_{k}}$. Each of the following iterations uses the updated $\overline{{T}_{k}}$ and ${C}_{\overline{{T}_{k}}}$. As the iterations progress, the mesh $\overline{{T}_{k}}$ is deformed to match the shape of ${S}_{k}$.
+
+**Constraints Alignment and Mapping** The first step of the iteration is the alignment of the constraints ${C}_{\overline{{T}_{k}}}$ to ${C}_{{S}_{k}}$ using Go-ICP [40]. An injective map ${C}_{\overline{{T}_{k}}} \rightarrow {C}_{{S}_{k}}$ is then found between the two sets of constraints based on closest locations.
+
+**Non-Rigid Deformation** In the second step, the mesh $\overline{{T}_{k}}$ is deformed in a non-rigid fashion through an energy minimization composed of two terms. One term maintains the shape through Laplacian surface editing (LSE) [27], while the other term minimizes the distance between the constraints. Both terms are equally weighted with a value of 1.0.
+
+The iterations end when the average movement of the constraints in one iteration, ${a}_{d}$, is within a threshold $t$ (Fig. 8). The examples shown in this paper rely on a threshold of $t = 10^{-6}$ (we can rely on an absolute threshold as we unitize the input meshes). The final version of $\overline{{T}_{k}}$ is referred to as ${\widehat{T}}_{k}$.
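The iteration loop of this section can be sketched as follows, under simplifying assumptions: brute-force closest points stand in for the injective mapping, and moving each constraint a fraction of the way toward its match stands in for the Go-ICP alignment [40] plus LSE deformation [27]. `coarse_deform` and `nearest` are hypothetical names.

```python
import numpy as np

def nearest(src, dst):
    """Index of the closest point in dst for each point in src
    (brute force; a k-d tree would be used at scale)."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return d.argmin(axis=1)

def coarse_deform(C_T, C_S, step=0.5, t=1e-6, max_iter=1000):
    """Iterate until the average constraint movement a_d falls below
    the threshold t (the termination criterion of Sect. 3.3)."""
    C = C_T.copy()
    for _ in range(max_iter):
        match = C_S[nearest(C, C_S)]   # closest-location mapping (stand-in)
        move = step * (match - C)      # stand-in for rigid alignment + LSE
        C = C + move
        a_d = np.linalg.norm(move, axis=1).mean()
        if a_d < t:                    # average movement below threshold
            break
    return C
```

The real injective mapping also avoids many-to-one matches between constraints; that bookkeeping is omitted here.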
+
+### 3.4 Spectral Reconstruction
+
+Once the iterations are finished, ${\widehat{T}}_{k}$ will look similar (at a coarse level) to ${S}_{k}$ . The template fitting is improved at the fine level with a reconstruction process that reintegrates the surface details while preserving the deformation undergone in the smooth domain. Reintroducing details and using surface orientation to create deformation constraints are important since unrelated mesh features can overlap in the smooth domain. Our spectral reconstruction is an important contribution of the presented approach. The idea is to express the deformation made in the smooth domain back to the original space. A similar idea has been proposed by Dey et al. [18]. In their method, they calculate a displacement vector between the vertex position of the original shape and the smooth domain. They then add this displacement vector back to the deformed smooth domain. Instead of using local displacement vectors, our approach strives for a smooth surface by globally enforcing the original surface Laplacians while preserving the smooth domain deformation.
+
+
+
+Figure 9: Results showing the deformation of the template ear mesh for 16 scans. Rows (a) and (c) are the original high-density ear scans of real people. Rows (b) and (d) are the results of template deformation using the pipeline of the presented approach.
+
+In our approach we reconstruct using the Laplacians of $T$ combined with the spectral coefficients from ${\widehat{T}}_{k}$ . The goal is to reconstruct a surface that maintains the details of mesh $T$ while keeping the transformations of ${\widehat{T}}_{k}$ . This is done by solving for vertex positions under two constraints. The first constraint tries to maintain the Laplacian coordinates of the original mesh $T$ :
+
+$$
+L\left( T\right) V\left( \widetilde{T}\right) = L\left( T\right) V\left( T\right) , \tag{2}
+$$
+
+where $V\left( \widetilde{T}\right)$ is a $n \times 3$ matrix representing the vertices after reconstruction. For the second constraint, we want to maintain the transformation undergone in the smooth domain. To do so, we work in the low dimensional space of the eigenvectors ${U}_{k}\left( T\right)$ . In that space, we try to maintain the same coordinates as those of ${\widehat{T}}_{k}$ :
+
+$$
+{U}_{k}{\left( T\right) }^{\top }V\left( \widetilde{T}\right) = {U}_{k}{\left( T\right) }^{\top }V\left( {\widehat{T}}_{k}\right) , \tag{3}
+$$
+
+where $V\left( {\widehat{T}}_{k}\right)$ are the vertex coordinates of ${\widehat{T}}_{k}$ .
+
+We then solve for the vertex positions $V\left( \widetilde{T}\right)$ that meet these two constraints in a least-squares sense:
+
+$$
+\underset{V\left( \widetilde{T}\right) }{\arg \min }\mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}{L}_{i}\left( T\right) {V}_{i}\left( \widetilde{T}\right) - L\left( T\right) V\left( T\right) \end{Vmatrix}}_{2}^{2} +
+$$
+
+$$
+\mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}{U}_{k}^{i}{\left( T\right) }^{\top }{V}_{i}\left( \widetilde{T}\right) - {U}_{k}^{i}{\left( T\right) }^{\top }{V}_{i}\left( {\widehat{T}}_{k}\right) )\end{Vmatrix}}_{2}^{2}.V \tag{4}
+$$
+
+Solving Equation 4 applies the changes made in the smooth domain back to the original domain thereby deforming ${\widehat{T}}_{k}$ to $\widetilde{T}$ , a shape that is similar to $S$ at the coarse level.
+
+Our spectral reconstruction strategy (Equation 4) is composed of two parts. The LSE alone, through the $L\left( T\right)$ matrix, is rank-deficient, resulting in an underconstrained system. Our second term involving ${U}_{k}$ contains $k$ additional constraints. The eigenvectors in ${U}_{k}$ are orthogonal to each other, and the resulting system can be solved in a stable way.
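Stacking Equations 2 and 3 gives the overdetermined system of Equation 4, which can be solved per coordinate with a standard least-squares solver. A minimal sketch, assuming $L(T)$ and ${U}_{k}(T)$ are precomputed; `spectral_reconstruct` is a hypothetical name.

```python
import numpy as np

def spectral_reconstruct(L_T, Uk_T, V_T, V_That):
    """Solve Eq. 4: find V(T~) that keeps the Laplacians of the
    original template T (Eq. 2) while matching the smooth-domain
    deformation of T^_k in the eigenspace of U_k(T) (Eq. 3)."""
    A = np.vstack([L_T, Uk_T.T])                  # (n + k) x n stacked system
    b = np.vstack([L_T @ V_T, Uk_T.T @ V_That])   # stacked right-hand sides
    V_tilde, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V_tilde
```

Note how the $k$ eigenvector rows fix the directions (e.g. global translation) that lie in the null space of $L(T)$, making the stacked system full rank.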
+
+### 3.5 Fine-Level Constraint Based Deformation
+
+Given the previous deformation steps, the shape of $\widetilde{T}$ is similar to $S$, but only at a coarse level. By exploiting the spectrally reconstructed surface with more details, the template is further deformed with positional constraints based on similar surface orientation. The goal is to deform the template to match the shape of the scan without inheriting the surface noise from the scan. Hence, we use LSE to deform $\widetilde{T}$ while maintaining a smooth surface. We identify constraints as closest locations on the surface of $S$ for each vertex of $\widetilde{T}$. We reject some of the closest-location correspondences based on surface orientation: we keep only those correspondences for which the angle between the vertex normal on $\widetilde{T}$ and the normal at the closest location on $S$ is less than a threshold (we used $d = 45^{\circ}$). The selected locations are used as anchor locations for the LSE deformation. As in Sect. 3.3, both terms are equally weighted with a value of 1.0. This deforms $\widetilde{T}$ to the final template shape.
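The normal-based rejection of closest-location correspondences can be sketched as follows, assuming unit vertex normals and brute-force closest points; `filter_by_normal` is a hypothetical name.

```python
import numpy as np

def filter_by_normal(src_pts, src_nrm, dst_pts, dst_nrm, d=45.0):
    """For each source vertex, find the closest destination point and
    keep the pair only if the normals differ by less than d degrees."""
    dist = np.linalg.norm(src_pts[:, None] - dst_pts[None, :], axis=2)
    idx = dist.argmin(axis=1)                     # closest location on S
    # angle between unit normals via their dot product
    cos = np.einsum('ij,ij->i', src_nrm, dst_nrm[idx])
    keep = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) < d
    return np.nonzero(keep)[0], idx[keep]         # kept sources, their anchors
```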
+
+
+
+Figure 10: (a) Template with its features colored. (b) A scan from our data set. (c) Deformation using mapping without reconstruction from spectral shape. The semantic correspondence is wrong at various places that can be identified on areas showing incorrect lateral sliding and bad reconstruction. (d) Deformation after spectral reconstruction. Both the surface quality and the semantic region correspondences greatly improved.
+
+## 4 RESULTS AND DISCUSSION
+
+The scans in our data set were acquired using a multi-view stereo setup. Each scan consists of a dense polygonal mesh with between 90k and 120k vertices. Note that we plan to release a subset of our data set for other researchers. Fig. 9 shows the scans (a and c) and the deformed template (b and d) using the presented pipeline. The scans exhibit a lot of diversity in their shape, and hence deforming the template to match them was quite a challenge. These meshes are irregular and can have erroneous or missing geometry and degenerate triangles, as well as topological errors, as illustrated in Fig. 4. The helix region of the scans was the worst affected during the capture due to the presence of interfering hair strands. We can see that our approach is robust against noise while being able to match the shape of the ears.
+
+Our implementation uses OpenCV for edge detection in image space, Python packages SciPy and NumPy for linear system solutions, and Blender for mesh handling and the LSE system.
+
+### 4.1 Spectral Reconstruction Benefits
+
+We tested our approach with and without the spectral reconstruction. Without spectral reconstruction, we apply the deformation of Sect. 3.5 directly on ${\widehat{T}}_{k}$, skipping the reconstruction phase of Sect. 3.4. While the iterative deformation finishes with ${\widehat{T}}_{k}$ exhibiting a shape globally similar to ${S}_{k}$, the lack of detail is problematic for the deformation of the template to match $S$. Fig. 10 shows a scan and the template with semantically important features highlighted with colors. Fig. 10(c) shows a typical example of incorrect lateral sliding when the final deformation is done directly from ${\widehat{T}}_{k}$. When the final deformation is conducted from $\widetilde{T}$ to $S$ based on similarly oriented locations, we observe a significant improvement in the fidelity of the shape as well as the correspondence of semantic regions (Fig. 10(d)). Furthermore, our spectral reconstruction approach is general. For example, like the method of Dey et al. [18], our approach could be applied to animation.
+
+### 4.2 Selection of $k$ Eigenvectors
+
+The selection of $k$ for the eigendecomposition is crucial, as it impacts a) how simple the shape will be in the smooth domain (Sect. 3.1) and b) the number of constraints available for spectral reconstruction (Sect. 3.4). Fig. 11(b) shows a smaller selection of $k = 3$ that results in a simpler shape that does not contain any features, preventing the alignment of coarse features in the smooth domain. It also results in a very small number of constraints for spectral reconstruction. Fig. 11(d) shows a higher selection of $k = 100$ that results in a higher number of constraints for spectral reconstruction, but the shape also reintroduces a lot of folds from the original surface (Fig. 11(a)). We observed that aligning both the coarse and fine details in a single step is very difficult. A selection of $k = 20$ (Fig. 11(c)) was balanced, both in being a simpler shape with the important coarse features and in the number of constraints for spectral reconstruction.
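The trade-off can be illustrated on a toy 1D stand-in: projecting a signal onto the first $k$ eigenvectors of a path-graph Laplacian and measuring the reconstruction error. This is a hypothetical experiment, not the paper's mesh setting; `projection_error` is a made-up helper name.

```python
import numpy as np

def projection_error(signal, k):
    """Reconstruction error after projecting a 1D signal onto the
    first k eigenvectors of a path-graph Laplacian (a 1D stand-in
    for the mesh Laplacian of Sect. 3.1)."""
    n = len(signal)
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0        # boundary vertices have one neighbor
    _, U = np.linalg.eigh(L)         # eigenvalues ascending: smooth modes first
    Uk = U[:, :k]
    recon = Uk @ (Uk.T @ signal)     # Eq. 1 applied to a 1D signal
    return np.linalg.norm(signal - recon)
```

Increasing $k$ monotonically decreases the error, mirroring how $k = 100$ retains folds that $k = 20$ suppresses.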
+
+
+
+Figure 11: Different selections of $k$ and the resulting shapes. (a) The original template ear. (b) Smooth domain transform using $k = 3$ results in a planar shape that is too simple to allow the alignment of any features. (d) Smooth domain transform using $k = 100$ results in a more detailed shape, but it contains a lot of folds that hinder the alignment. (c) A value of $k = 20$ results in a shape that is simple but detailed around the helix region, which is an important feature we want to align in the smooth domain.
+
+### 4.3 Comparison to Mapping Method
+
+Many mapping methods require manual landmarks and as such cannot be compared with our fully automatic approach. Furthermore, many methods do not work with meshes such as our ear scans, which contain boundaries and bad geometry. Lähner et al. [25] propose a mapping method using SHOT descriptors [34]. Their approach fails to produce a dense mapping on our high-density scans; however, it is able to produce a sparse mapping of around 3000 vertex-to-vertex correspondences. We tested whether we could use this sparse mapping to directly deform the template $T$ to the shape of the scan, thus avoiding the use of the smooth domain. This sparse map was evaluated by deforming the template with the 3000 registration points employed as constraints using LSE, an idea similar to the deformation phase explained in Sect. 3.5. The deformation using the constraints from Lähner et al. is compared with our final result. Fig. 12 shows the comparison of results for two different ears. From the results we can see that our approach from Sect. 3.5 performs better.
+
+### 4.4 Limitations
+
+Fig. 13 shows the worst results from our experiments. In most cases, the edge detection constraints (Sect. 3.2) and non-rigid deformation (Sect. 3.3) steps align the features of the ears quite well, but as can be seen in Fig. 13 the helix occasionally does not align completely. The region of the helix close to the crux of the helix is another region where the registered template and the scan are sometimes not in very good correspondence.
+
+## 5 CONCLUSION AND FUTURE WORK
+
+We presented an approach to fit a template ear mesh to scans of real ears. The template and the scan are first transformed to an eigenspace smooth domain, where we begin by conducting a rigid alignment of the smooth meshes. Features of the ears are detected in image space with a Canny edge detection. The smooth domain eases the alignment of the coarse-scale features of the meshes. The next phase iterates upon three sub-steps of aligning the edge detection features, computing an injective mapping of the features from the template to the scan, and conducting a non-rigid deformation of the template through Laplacian surface editing with the features as constraints. We then reintroduce the details of the template mesh through a spectral reconstruction. Our spectral reconstruction optimizes for the spectral coordinates and for surface smoothness through Laplacian constraints. This generates a smooth surface with the details of the original template, while preserving the deformation from the smooth domain. The detailed template mesh is finally deformed through Laplacian constraints and constraints based on the closest locations of surface regions with similar orientation. Notable advantages of our approach are that it is robust against bad mesh quality and completely automatic. Moreover, we are convinced that our spectral reconstruction approach is general and could be used outside of the ear reconstruction pipeline. We will investigate this avenue in future research work. The fixed template used in our approach could be replaced by a 3D Morphable Model (3DMM) of ears. In a similar fashion to the work of Ghafourzadeh et al. [21], we could adjust the 3DMM parameters to obtain a template ear that is already much closer to the geometry of the scanned ear. Finally, the current choice of constraints is limited to ears, but we believe that our series of deformation phases could be applied to other types of shapes.
In this sense, deriving other types of constraints is an interesting direction for future work.
+
+
+
+Figure 12: We compare (c) and (g) the results skipping the smooth domain by deforming the template directly using constraints derived from the method of Lähner et al. [25], to (d) and (h) using our pipeline involving the smooth domain. We can observe that the results skipping the smooth domain produce lateral sliding (indicated by red arrows) of the corresponding semantic regions.
+
+## 6 ACKNOWLEDGEMENTS
+
+This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) [CRDPJ 535746-18]. We wish to thank Zorah Lähner from the Technical University of Munich, Germany, author of Efficient Deformable Shape Correspondence via Kernel Matching [25], for sharing her code so that we could use
+
+
+
+Figure 13: This figure presents our worst results (b) and (e). We can see the lack of alignment in the helix region. The alignment problem is apparent right at the spectral reconstruction step (a) and (d), Sect. 3.4) and originates from the edge detection constraints (Sect. 3.2) and the deformation in the smooth domain (Sect. 3.3).
+
+it for comparison. We also want to thank the anonymous reviewers, and all the participants for the facial scanning.
+
+## REFERENCES
+
+[1] N. Aigerman, S. Z. Kovalsky, and Y. Lipman. Spherical orbifold tutte embeddings. ACM Transactions on Graphics (TOG), 36(4):90, 2017.
+
+[2] N. Aigerman and Y. Lipman. Hyperbolic orbifold tutte embeddings. ACM Transactions on Graphics (TOG), 35(6):217:1-217:14, Nov. 2016.
+
+[3] N. Aigerman, R. Poranne, and Y. Lipman. Lifted bijections for low distortion surface mappings. ACM Transactions on Graphics (TOG), 33(4):69:1-69:12, 2014.
+
+[4] N. Aigerman, R. Poranne, and Y. Lipman. Seamless surface mappings. ACM Transactions on Graphics (TOG), 34(4):72:1-72:13, 2015.
+
+[5] B. Allen, B. Curless, and Z. Popović. The space of human body shapes: Reconstruction and parameterization from range scans. ACM Transactions on Graphics (TOG), 22(3):587-594, 2003.
+
+[6] R. Alp Guler, G. Trigeorgis, E. Antonakos, P. Snape, S. Zafeiriou, and I. Kokkinos. Densereg: Fully convolutional dense shape regression in-the-wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6799-6808, 2017.
+
+[7] S. Ansari and P. Gupta. Localization of ear using outer helix curve of the ear. In 2007 International Conference on Computing: Theory and Applications (ICCTA'07), pp. 688-692. IEEE, 2007.
+
+[8] B. Arbab-Zavar and M. S. Nixon. On shape-mediated enrolment in ear biometrics. In International Symposium on Visual Computing, pp. 549-558. Springer, 2007.
+
+[9] T. Athanasiadis, I. Fudos, C. Nikou, and V. Stamati. Feature-based 3D morphing based on geometrically constrained spherical parameterization. Computer Aided Geometric Design, 29:2-17, 2012.
+
+[10] P. Bérard, D. Bradley, M. Gross, and T. Beeler. Lightweight eye capture using a parametric model. ACM Transactions on Graphics (TOG), 35(4):1-12, 2016.
+
+[11] P. Bérard, D. Bradley, M. Gross, and T. Beeler. Practical person-specific eye rigging. Computer Graphics Forum, 2019. doi: 10.1111/cgf.13650
+
+[12] A. Bermano, T. Beeler, Y. Kozlov, D. Bradley, B. Bickel, and M. Gross. Detailed spatio-temporal reconstruction of eyelids. ACM Transactions on Graphics (TOG), 34(4):1-11, 2015.
+
+[13] F. Bogo, J. Romero, M. Loper, and M. J. Black. Faust: Dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3794-3801, 2014.
+
+[14] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. FaceWarehouse: A 3D facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3):413-425, 2013.
+
+[15] H. Chen and B. Bhanu. Shape model-based 3D ear detection from side face range images. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)-Workshops, pp. 122-122. IEEE, 2005.
+
+[16] M. Cong, L. Lan, and R. Fedkiw. Local geometric indexing of high resolution data for facial reconstruction from sparse markers, 2019.
+
+[17] A. H. Cummings, M. S. Nixon, and J. N. Carter. The image ray transform for structural feature detection. Pattern Recognition Letters, 32(15):2053-2060, 2011.
+
+[18] T. K. Dey, P. Ranjan, and Y. Wang. Eigen deformation of 3D models. The Visual Computer, 28(6-8):585-595, 2012.
+
+[19] D. Dinev, T. Beeler, D. Bradley, M. Bächer, H. Xu, and L. Kavan. User-guided lip correction for facial performance capture. Computer Graphics Forum, 37(8):93-101, 2018.
+
+[20] P. Garrido, M. Zollhöfer, C. Wu, D. Bradley, P. Pérez, T. Beeler, and C. Theobalt. Corrective 3D reconstruction of lips from monocular video. ACM Transactions on Graphics (TOG), 35(6):219-1, 2016.
+
+[21] D. Ghafourzadeh, C. Rahgoshay, S. Fallahdoust, A. Aubame, A. Beauchamp, T. Popa, and E. Paquette. Part-based 3D face mor-phable model with anthropometric local control. In Proceedings of Graphics Interface 2020, 2020.
+
+[22] D. Ghafourzadeh, S. Ramachandran, M. de Lasa, T. Popa, and E. Pa-quette. Local editing of cross-surface mappings with iterative least squares conformal maps. In Proceedings of Graphics Interface 2020, 2020.
+
+[23] Q.-X. Huang, B. Adams, M. Wicke, and L. J. Guibas. Non-rigid registration under isometric deformations. Computer Graphics Forum, 27(5):1449-1457, 2008.
+
+[24] V. G. Kim, Y. Lipman, and T. Funkhouser. Blended intrinsic maps. ACM Transactions on Graphics (TOG), 30(4):79:1-79:12, 2011.
+
+[25] Z. Lähner, M. Vestner, A. Boyarski, O. Litany, R. Slossberg, T. Remez, E. Rodola, A. Bronstein, M. Bronstein, R. Kimmel, et al. Efficient deformable shape correspondence via kernel matching. In 2017 International Conference on 3D Vision (3DV), pp. 517-526. IEEE, 2017.
+
+[26] H. Li, R. W. Sumner, and M. Pauly. Global correspondence optimization for non-rigid registration of depth scans. Computer Graphics Forum, 27(5):1421-1430, 2008.
+
+[27] Y. Lipman, O. Sorkine, D. Cohen-Or, D. Levin, C. Rossi, and H.- P. Seidel. Differential coordinates for interactive mesh editing. In Proceedings Shape Modeling Applications, 2004., pp. 181-190. IEEE, 2004.
+
+[28] M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr. Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and Mathematics III, pp. 35-57. Springer, 2003.
+
+[29] B. Mocanu and T. Zaharia. A complete framework for 3D mesh morphing. In Proc. of the 11th ACM SIGGRAPH Int. Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI '12, pp. 161-170. ACM, 2012.
+
+[30] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas. Functional maps: A flexible representation of maps between shapes. ACM Transactions on Graphics (TOG), 31(4):30:1-30:11, 2012.
+
+[31] M. Ovsjanikov, E. Corman, M. Bronstein, E. Rodolà, M. Ben-Chen, L. Guibas, F. Chazal, and A. Bronstein. Computing and processing correspondences with functional maps. In SIGGRAPH ASIA 2016 Courses, p. 9. ACM, 2016.
+
+[32] S. Prakash and P. Gupta. An efficient technique for ear detection in 3D: Invariant to rotation and scale. In 2012 5th IAPR International Conference on Biometrics (ICB), pp. 97-102. IEEE, 2012.
+
+[33] S. Ramachandran, D. Ghafourzadeh, M. de Lasa, T. Popa, and E. Paquette. Joint planar parameterization of segmented parts and cage deformation for dense correspondence. Computers & Graphics, 74:202-212, 2018.
+
+[34] S. Salti, F. Tombari, and L. Di Stefano. SHOT: Unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125:251-264, 2014.
+
+[35] O. Van Kaick, H. Zhang, G. Hamarneh, and D. Cohen-Or. A survey on shape correspondence. Computer Graphics Forum, 30(6):1681-1707, 2011.
+
+[36] Z. Velinov, M. Papas, D. Bradley, P. Gotardo, P. Mirdehghan, S. Marschner, J. Novák, and T. Beeler. Appearance capture and modeling of human teeth. ACM Transactions on Graphics (TOG), 37(6):1-13, 2018.
+
+[37] T. Weise, S. Bouaziz, H. Li, and M. Pauly. Realtime performance-based facial animation. ACM Transactions on Graphics (TOG), 30(4):77:1-77:10, 2011.
+
+[38] C. Wu, D. Bradley, P. Garrido, M. Zollhöfer, C. Theobalt, M. H. Gross, and T. Beeler. Model-based teeth reconstruction. ACM Transactions on Graphics (TOG), 35(6):220-1, 2016.
+
+[39] P. Yan and K. W. Bowyer. Biometric recognition using 3D ear shape. IEEE Transactions on pattern analysis and machine intelligence, 29(8):1297-1308, 2007.
+
+[40] J. Yang, H. Li, D. Campbell, and Y. Jia. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE transactions on pattern analysis and machine intelligence, 38(11):2241-2254, 2015.
+
+[41] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb. Histograms of categorized shapes for 3D ear detection. In 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1-6. IEEE, 2010.
+
+[42] G. Zoss, T. Beeler, M. Gross, and D. Bradley. Accurate markerless jaw tracking for facial performance capture. ACM Transactions on Graphics (TOG), 38(4):1-8, 2019.
+
+[43] G. Zoss, D. Bradley, P. Bérard, and T. Beeler. An empirical rig for jaw animation. ACM Transactions on Graphics (TOG), 37(4):1-12, 2018.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e9f58f7b95a781bb287b85242762e0f6639a3e8e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/kt46JHTsAIo/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,209 @@
+§ CONSTRAINT-BASED SPECTRAL SPACE TEMPLATE DEFORMATION FOR EAR SCANS
+
+Srinivasan Ramachandran* (École de technologie supérieure, Montréal, Canada)
+
+Tiberiu Popa ${}^{ \dagger }$ (Concordia University, Montréal, Canada)
+
+Eric Paquette ${}^{ \ddagger }$ (École de technologie supérieure, Montréal, Canada)
+
+Figure 1: Our approach deforms a template to match the shape of a scan by aligning and deforming in a smooth domain space.
+
+§ ABSTRACT
+
+Ears have complicated shapes containing many folds. It is difficult to correctly deform an ear template to achieve the same shape as a scan while avoiding the reconstruction of noise and remaining robust to the bad geometry found in the scan. We leverage the smoothness of the spectral space to help in the alignment of the semantic features of the ears. Edges detected in image space are used to identify relevant features of the ear, which we align in the spectral representation by iteratively deforming the template ear. We then apply a novel reconstruction that preserves the deformation from the spectral space while reintroducing the original details. A final deformation based on constraints considering surface position and orientation deforms the template ear to match the shape of the scan. We tested our approach on many ear scans and observed that the resulting template shape provides a good compromise between complying with the shape of the scan and avoiding the reconstruction of the noise found in the scan. Furthermore, our approach is robust enough to handle scan meshes exhibiting typical bad geometry such as cracks and handles.
+
+Index Terms: Computing methodologies-Computer graphics-Shape modeling
+
+§ 1 INTRODUCTION
+
+Virtual head models are extremely important in many fields, ranging from the game and entertainment industries to the medical and cosmetic industries. Thus, new acquisition methods strive to improve the accuracy and detail of $3\mathrm{D}$ models, to increase the level of automation of the capture, and to decrease the overall cost and time of acquisition. Despite the great efforts that have thus far been invested and the considerable research progress obtained, there are still areas that require exploration.
+
+The first breakthrough in human head acquisition was light stages [37], which could capture both the static geometry of the head and its appearance model. While the $3\mathrm{D}$ reconstructions were impressive, due to the complexity and variability of the human face it soon became evident that a one-size-fits-all acquisition pipeline is not the best approach, and many specialized methods have been developed targeting the different components of the human head such as lips, jaws, teeth, and eyes.
+
+One component that has received little attention is the human ear. Ears are important to recreate a faithful avatar, and they are of particular importance for visual effects, video games, and virtual reality, where we aim for believable characters with specific visual traits. Acquisition of human ears is difficult as ear geometry has a rich anatomical structure with many components that exhibit complex folds. Moreover, the ear exhibits a large degree of self-occlusion, which leads to missing geometry in the scans. Furthermore, hair often occludes the ear, which is also detrimental to the reconstruction of the fine details found in the ears. All these challenges make ear reconstruction difficult, and many of the general-purpose reconstruction methods available may yield undesirable artifacts.
+
+There are two main steps to virtual head model acquisition: a geometric reconstruction step, where an unstructured scan is obtained, and a registration step, where a template is deformed to match the shape of the scan. Typical registration methods first compute a set of 3D feature matches between the template and the scan, followed by a deformation step where the template is deformed to match the 3D features. Unfortunately, human ears are relatively smooth and do not contain easily identifiable local features. Moreover, the variance of the human ear shape is high and the fold structure is complex.
+
+In this work, we propose a non-rigid template registration method tailored to the specific geometric characteristics of human ears, such as the folds exhibiting long ridges and valleys. Our approach takes as input a noisy, potentially incomplete, irregular mesh of an ear, and deforms a template to match the geometry of the ear. We frame the non-rigid registration as an optimization problem in a smooth domain where the ear geometry is simplified. Point correspondences are obtained using edge detection in the image space. Fine details of the scan are matched in a way that strikes a balance between avoiding the reconstruction of noise and still reconstructing legitimate ear details. Our method preserves the semantic structure of the human ear and is robust to the wide natural variations of the ear shape.
+
+§ 2 RELATED WORK
+
+Digital human modeling has seen great improvement in human facial scanning $\left\lbrack {{16},{37}}\right\rbrack$ . That level of quality came through complex capture setups and equipment. Scans from standard photogrammetry remain noisy, and the noise is even harder to deal with when considering the intricate details found in the ear geometry. We will first present methods tailored to specific parts of the face. We will then review the methods specific to ears. We will end by discussing registration methods.
+
+*e-mail: ashok.srinivasan2002@gmail.com
+
+${}^{ \dagger }$ e-mail: tiberiu.popa@concordia.ca
+
+${}^{ \ddagger }$ e-mail: eric.paquette@etsmtl.ca
+
+Figure 2: These ear scans demonstrate the diversity of ears with respect to their shape.
+
+Figure 3: External ear anatomy is composed of many different components adding to the complexity of the ear shape.
+
+§ 2.1 FACIAL PARTS
+
+General-purpose methods quickly showed their limits and researchers introduced methods specialized to specific parts of the face. Berard et al. focused on capture of eyes using a parametric model [10] and further improved on capture with a multi-view imaging system that can reconstruct poses of the eye [11]. Bermano et al. [12] present a method that works towards the reconstruction of eyelids including folding and wrinkles. For a realistic facial appearance it is important to consider the lips $\left\lbrack {{19},{20}}\right\rbrack$ and jaw regions $\left\lbrack {{42},{43}}\right\rbrack$ . For the teeth, it is important to capture their appearance and to fit them with respect to the mouth region [36,38].
+
+§ 2.2 EAR TAILORED METHODS
+
+Arbab-Zavar and Nixon [8] proposed a method that detects ears using elliptical Hough transforms. Ansari and Gupta [7] proposed a method that also detects ears in image space by detecting edges and segregating them into concave and convex edges, thus finding the outer helix region. Cummings et al. [17] detect ears in images by modelling light rays and finding the helix regions. Other methods work in both image and depth space. Yan and Bowyer [39] proposed a method that detects the ears by combining both RGB and depth images. Chen and Bhanu [15] proposed a method that detects the helix region by analyzing discontinuities in the hills and valleys of a depth image. Zhou et al. [41] introduced histograms of categorized shapes for $3\mathrm{D}$ ear recognition, adopting a sliding window approach and a support vector machine classifier. Prakash and Gupta [32] detect ears by analyzing $3\mathrm{D}$ features such as saddles and ridges, and based on connectivity graphs in the depth images. While the methods above are interesting, they are limited to ear detection and do not provide solutions to the 3D ear reconstruction problem.
+
+Figure 4: Ear scans often exhibit noise and have poor geometry, making them challenging for template matching. On top of noise, meshes often exhibit erroneous handles as seen in (b).
+
+Figure 5: Example of scans from the Faust [13] (a)-(b) and FaceWarehouse [14] (c)-(d) data sets. The ears in (a) and (b) show that standard template fitting methods perform poorly on ears. For the FaceWarehouse examples in (c) and (d), while the geometry is smooth, it does not represent the ear anatomy fully. Details are missing from the anatomy such as the tragus, anti-tragus, and anti-helix. Furthermore, the ears are almost identical, probably because the template matching method "preferred" a generic shape over closely reconstructing the scan, to avoid picking up noise.
+
+For the purpose of reconstruction, Guler et al. [6] proposed a method that computes a dense registration between an image and a 3D mesh template. It is based on convolutional networks that learn a mapping from image pixels to a dense template grid. While the method is interesting, it considers images as inputs instead of 3D scans.
+
+§ 2.3 DENSE REGISTRATION AND RECONSTRUCTION IN GENERAL
+
+A dense registration can be a potential avenue for deforming a template to match the shape of a scan. Here we will focus on specific registration methods; for a more comprehensive list of dense registration methods the reader is referred to the survey of van Kaick et al. [35]. Ovsjanikov et al. [30, 31] proposed the functional maps framework to express dense registrations. Lähner et al. [25] proposed a method that formulates the problem as a matching between sets of descriptors, imposing a continuity prior on the mapping. A major limitation of this approach is that differences in vertex density between the meshes can be problematic. Furthermore, the choice of descriptors affects the results in the case of a noisy scan.
+
+The Blended Intrinsic Maps (BIM) [24] method produces multiple low-dimensional maps that are blended into a global map. BIM suffers from distortion and discontinuities in its mappings. A major limitation of this method is that it is difficult to adapt to meshes with problems such as holes and noise.
+
+Figure 6: (a) Our inputs are a template mesh $T$ and a scan mesh $S$ . (b) $T$ and $S$ are transformed to a smooth domain and the template is rigidly aligned to the scan. (c) Edges detected in the image space are transferred in $3\mathrm{D}$ onto the smooth meshes. (d) The smooth template is iteratively deformed through three sub-steps: rigid alignment of the constraints, injective mapping of the constraints, and non-rigid deformation of the smooth template. (e) Spectral reconstruction is used to reintroduce details to the template. This is followed by a last non-rigid deformation phase based on similar-surface-orientation constraints. (f) After these deformation phases, the template exhibits the shape of the scan without the noise, and the semantic regions are in correspondence.
+
+Parametrization-based methods for dense registration work by transforming the meshes into a simpler space where finding a mapping between the meshes is easier. Athanasiadis et al. [9] proposed a geometrically-constrained optimization technique to map 3D genus-zero meshes onto a sphere. Then, they morph the meshes with structural similarities by applying feature-based methods. Mocanu and Zaharia [29] proposed a two-step spherical parameterization method: 1) the Gaussian curvature is analyzed to align feature correspondences between the meshes; 2) a morphing step is applied to establish the mapping.
+
+Another category of deformation methods work with user-given landmarks. Some methods $\left\lbrack {3,4,{33}}\right\rbrack$ use the landmarks to cut the meshes, flatten them, and extract the dense registration from the planar domain or improve [22] an already provided registration in the planar domain. In addition to spherical and planar domains, other domains such as hyperbolic orbifolds have been proposed [1,2]. Landmark-based methods require a very carefully chosen and, in some cases, large set of corresponding landmarks.
+
+Other methods deform the given meshes until their respective shapes match each other; however, most of these methods are limited to near-isometric deformations [5]. Some methods [23, 26] overcome this limitation by extending the range of objects they handle to non-isometric pairs. However, these methods face practical challenges when dealing with scans that can contain noise and cracks in the mesh.
+
+A practical limitation for most of the dense registration methods is the mesh quality of the scans. Moreover, a full-fledged dense registration also implies picking up noise from the scans, thus leading to a bad reconstruction. One main goal of our approach is to balance the conflicting goals of acquiring geometric detail and avoiding the reconstruction of the noise.
+
+§ 3 TEMPLATE DEFORMATION
+
+An ear shape has a lot of variability, as illustrated in Fig. 2, and its anatomy (Fig. 3) contributes to a shape that is complex in nature. Out-of-the-box scanning methods struggle to reconstruct a good mesh, as can be seen in Fig. 2. Current methods work by either deforming a template or by dense registration, but they still fail on human ears, largely owing to the ear's complex anatomy and to practical problems such as the mesh quality (Fig. 4). This is also exemplified in widely used data sets of faces. Fig. 5(a)-(b) shows heads from the Faust [13] data set. We can see that the mesh is of very poor quality in the ear region. Fig. 5(c)-(d) presents heads from the FaceWarehouse [14] data set. In this case, the ear geometry is good, but at the expense of not reconstructing a faithful ear (the ears are almost identical in the whole data set). These examples demonstrate that it is necessary to develop a novel approach specific to ears. Our approach fills the gap between methods picking up too much noise (Fig. 5(a)-(b)) and methods avoiding the noise at the expense of not reconstructing a faithful ear (Fig. 5(c)-(d)).
+
+The inputs to our approach are two meshes uniformly scaled to fit in a unit cube: a template $T$ and a scan $S$ (Fig. 6(a)). Scan $S$ is a high-density mesh with holes, noise, and bad polygon quality (Fig. 4). We conduct a series of deformation phases to align the coarse and fine details of the template $T$ to the scan $S$ . The input meshes are first converted to a smooth domain as ${T}_{k}$ and ${S}_{k}$ . ${T}_{k}$ is rigidly aligned to ${S}_{k}$ and becomes $\overline{{T}_{k}}$ (Fig. 6(b)). Constraint points ${C}_{T}$ and ${C}_{S}$ are found on the input meshes $T$ and $S$ using edge detection. Constraint points ${C}_{T}$ and ${C}_{S}$ are transferred onto $\overline{{T}_{k}}$ and ${S}_{k}$ as ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$ (Fig. 6(c)). $\overline{{T}_{k}}$ is iteratively deformed to match the shape of ${S}_{k}$ by aligning the constraints ${C}_{\overline{{T}_{k}}}$ with ${C}_{{S}_{k}}$ (Fig. 6(d)). At the end of the iterations, $\overline{{T}_{k}}$ becomes $\widehat{{T}_{k}}$ , which approximates the coarse shape of ${S}_{k}$ . With a spectral reconstruction, ${\widehat{T}}_{k}$ is deformed into $\widetilde{T}$ , which reintroduces the fine details lost in the smooth domain but preserves the deformation undergone in the smooth domain. A closest-location approach between similarly oriented areas results in new constraints used for the final deformation of the template to match the scan (Fig. 6(e)).
+
+§ 3.1 SMOOTH DOMAIN TRANSFORMATION
+
+In this section we explain our first deformation phase where we align the coarse features of the ear (Fig. 6(b)). This phase uses spectral processing to transform our template and scan meshes into a smooth domain where non-rigid registration is easier to perform. This spectral processing takes advantage of the eigenvectors of the Laplacian matrix of the mesh. Given the $n \times 3$ matrix of vertex positions, we compute the positions in the smooth domain as follows:
+
+$$
+{V}^{\prime } = {U}_{k} \cdot {U}_{k}^{\top } \cdot V \tag{1}
+$$
+
+Figure 7: Canny edge detection in image space (a, d) and its projection (b, e) on the meshes $T$ and $S$ . The edges are then transferred to $\overline{{T}_{k}}$ and ${S}_{k}$ (c, f). The Canny edge detection is effective in identifying the semantic regions of the ears such as the helix, tragus, and anti-tragus.
+
+where ${V}^{\prime }$ are the resulting vertex positions of the eigensubspace projection and ${U}_{k}$ is an $n \times k$ matrix containing the first $k$ eigenvectors. It can be noticed that with a full eigendecomposition, i.e., $k = n$ , ${U}_{k = n} \cdot {U}_{k = n}^{\top }$ results in an identity matrix. By reducing $k$ , ${U}_{k < n} \cdot {U}_{k < n}^{\top }$ removes less important details but maintains the global shape of the mesh. For the purpose of the eigendecomposition, the Laplacian matrix based on the cotangent weights [28] is used. Applying this transformation to $T$ and $S$ using their respective eigenvectors ${U}_{k}\left( T\right)$ and ${U}_{k}\left( S\right)$ , they become ${T}_{k}$ and ${S}_{k}$ . Mesh ${T}_{k}$ is then rigidly aligned [40] with ${S}_{k}$ and becomes $\overline{{T}_{k}}$ .
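Equation 1 can be sketched in a few lines of Python. This is a simplified illustration, not the paper's implementation: it builds a uniform-weight graph Laplacian from the face list rather than the cotangent-weight Laplacian [28] used in the paper, and the function name `spectral_smooth` is ours.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def spectral_smooth(V, faces, k):
    """Project vertex positions V (n x 3) onto the first k Laplacian
    eigenvectors: V' = U_k . U_k^T . V (Equation 1)."""
    n = V.shape[0]
    # Adjacency from triangle faces (uniform weights stand in for the
    # cotangent weights used in the paper).
    rows, cols = [], []
    for f in faces:
        for a, b in ((0, 1), (1, 2), (2, 0)):
            rows += [f[a], f[b]]
            cols += [f[b], f[a]]
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    A.data[:] = 1.0  # collapse duplicated edges to weight 1
    L = laplacian(A).toarray()
    # First k eigenvectors = the smoothest modes of the mesh.
    _, U = eigh(L, subset_by_index=[0, k - 1])
    return U @ (U.T @ V)
```

With $k = n$ the projection is the identity, and with $k = 1$ every vertex collapses to the centroid, matching the low-pass behaviour described above.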
+
+§ 3.2 EAR FEATURES AND CONSTRAINTS
+
+This section describes how we extract meaningful features of the ears that we will use as constraints for the deformation in the next section. Our constraints are automatically computed using Canny edge detection on renderings of the 3D ears. Edge detection on 2D renderings of the original ears ( $S$ and $T$ ) proved to be quite robust in systematically detecting meaningful edges. To achieve this, an orthographic camera view was used for the rendering, where the camera faces the ear such that the view covers the bounding box of the ear. Each rendered image is 600 px in height, and the width varies between 400 px and 500 px based on the width of the ear. As can be seen in Fig. 7(a) and (d), the edge detection identifies important semantic features of the ears. The detected edges (2D pixel locations) are then projected back to the mesh (Fig. 7(b) and (e)). These constraints ${C}_{T}$ and ${C}_{S}$ are 3D locations on the surfaces $T$ and $S$ respectively. The constraints are then transferred to $\overline{{T}_{k}}$ and ${S}_{k}$ (Fig. 7(c) and (f)) as ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$ . The transfer relies on barycentric coordinates on the triangles of $T/\overline{{T}_{k}}$ and $S/{S}_{k}$ .
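The constraint-extraction step can be illustrated with a small sketch. This is a stand-in, not the paper's code: it thresholds the gradient magnitude where the paper uses OpenCV's Canny detector, and it back-projects edge pixels by snapping each vertex's orthographic projection into the image, where the paper transfers constraints via barycentric coordinates on triangles. Both function names are ours.

```python
import numpy as np

def detect_edges(img, thresh=0.5):
    # Simplified edge detector: threshold the gradient magnitude.
    # (The paper uses Canny edge detection via OpenCV instead.)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def backproject_constraints(edges, verts):
    # Keep the vertices of the unit-cube mesh whose orthographic
    # projection lands on a detected edge pixel; these become the
    # 3D constraint points.
    h, w = edges.shape
    px = np.clip((verts[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((verts[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return verts[edges[py, px]]
```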
+
+Figure 8: The figure shows how $\overline{{T}_{k}}$ deforms in a non-rigid fashion based on its constraints ${C}_{\overline{{T}_{k}}}$ . As the iterations progress, the movement becomes marginal, and hence the iterations are terminated when the average vertex movement falls below a threshold.
+
+§ 3.3 ITERATIVE COARSE-LEVEL DEFORMATION
+
+This deformation phase is done through iterations consisting of two sub-steps: 1) constraint alignment and 2) non-rigid deformation. These iterations deform the smooth domain mesh $\overline{{T}_{k}}$ to match the shape of ${S}_{k}$ using the constraints ${C}_{\overline{{T}_{k}}}$ and ${C}_{{S}_{k}}$ . A rigid alignment is applied from ${C}_{\overline{{T}_{k}}}$ to ${C}_{{S}_{k}}$ . After rigid alignment, for each of the constraints in ${C}_{\overline{{T}_{k}}}$ a closest correspondence in ${C}_{{S}_{k}}$ is found. $\overline{{T}_{k}}$ is then deformed to align the constraints ${C}_{\overline{{T}_{k}}}$ with the corresponding constraints from ${C}_{{S}_{k}}$ . Each of the following iterations uses the updated $\overline{{T}_{k}}$ and ${C}_{\overline{{T}_{k}}}$ . As the iterations progress, the mesh $\overline{{T}_{k}}$ is deformed to match the shape of ${S}_{k}$ .
+
+Constraints Alignment and Mapping The first step of the iteration is the alignment of constraints ${C}_{\overline{{T}_{k}}}$ to ${C}_{{S}_{k}}$ using Go-ICP [40]. An injective map is found between the two sets of constraints ${C}_{\overline{{T}_{k}}} \rightarrow$ ${C}_{{S}_{k}}$ based on closest locations.
+
+Non-Rigid Deformation In the second step, the mesh $\overline{{T}_{k}}$ is deformed in a non-rigid fashion through an energy minimization composed of two terms. One term maintains the shape through Laplacian surface editing (LSE) [27], while the other term minimizes the distance between the constraints. Both terms are equally weighted with a value of 1.0.
+
+The iterations end when the average movement of the constraints in one iteration, ${a}_{d}$ , is within a threshold $t$ (Fig. 8). The examples shown in this paper rely on a threshold of $t = {10}^{-6}$ (we can rely on an absolute threshold as we unitize the input meshes). The final version of $\overline{{T}_{k}}$ is referred to as $\widehat{{T}_{k}}$ .
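The two-term energy can be written as a single linear least-squares solve. The sketch below is ours, not the authors' implementation: `L` is any precomputed mesh Laplacian, `idx` holds the constrained vertex indices, and `targets` the corresponding positions taken from ${C}_{{S}_{k}}$ .

```python
import numpy as np

def deform_with_constraints(V, L, idx, targets, w_shape=1.0, w_fit=1.0):
    """Minimize ||L V' - L V||^2 + ||V'[idx] - targets||^2.
    The first term maintains the Laplacian coordinates (as in LSE [27]);
    the second pulls the constrained vertices toward their targets.
    Both weights are 1.0, as in the paper."""
    n = V.shape[0]
    C = np.zeros((len(idx), n))
    C[np.arange(len(idx)), idx] = 1.0   # selector rows for the constraints
    A = np.vstack([w_shape * L, w_fit * C])
    b = np.vstack([w_shape * (L @ V), w_fit * targets])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Since the Laplacian term is invariant to translation, translating the constraint targets translates the whole mesh, which is a quick sanity check on the solver.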
+
+§ 3.4 SPECTRAL RECONSTRUCTION
+
+Once the iterations are finished, ${\widehat{T}}_{k}$ will look similar (at a coarse level) to ${S}_{k}$ . The template fitting is improved at the fine level with a reconstruction process that reintegrates the surface details while preserving the deformation undergone in the smooth domain. Reintroducing details and using surface orientation to create deformation constraints are important since unrelated mesh features can overlap in the smooth domain. Our spectral reconstruction is an important contribution of the presented approach. The idea is to express the deformation made in the smooth domain back to the original space. A similar idea has been proposed by Dey et al. [18]. In their method, they calculate a displacement vector between the vertex position of the original shape and the smooth domain. They then add this displacement vector back to the deformed smooth domain. Instead of using local displacement vectors, our approach strives for a smooth surface by globally enforcing the original surface Laplacians while preserving the smooth domain deformation.
+
+Figure 9: Results showing the deformation of the template ear mesh for 16 scans. Rows (a) and (c) are the original high-density ear scans of real people. Rows (b) and (d) are the results of template deformation using the pipeline of the presented approach.
+
+In our approach we reconstruct using the Laplacians of $T$ combined with the spectral coefficients from ${\widehat{T}}_{k}$ . The goal is to reconstruct a surface that maintains the details of mesh $T$ while keeping the transformations of ${\widehat{T}}_{k}$ . This is done by solving for vertex positions under two constraints. The first constraint tries to maintain the Laplacian coordinates of the original mesh $T$ :
+
+$$
+L\left( T\right) V\left( \widetilde{T}\right) = L\left( T\right) V\left( T\right) , \tag{2}
+$$
+
+where $V\left( \widetilde{T}\right)$ is a $n \times 3$ matrix representing the vertices after reconstruction. For the second constraint, we want to maintain the transformation undergone in the smooth domain. To do so, we work in the low dimensional space of the eigenvectors ${U}_{k}\left( T\right)$ . In that space, we try to maintain the same coordinates as those of ${\widehat{T}}_{k}$ :
+
+$$
+{U}_{k}{\left( T\right) }^{\top }V\left( \widetilde{T}\right) = {U}_{k}{\left( T\right) }^{\top }V\left( {\widehat{T}}_{k}\right) , \tag{3}
+$$
+
+where $V\left( {\widehat{T}}_{k}\right)$ are the vertex coordinates of ${\widehat{T}}_{k}$ .
+
+We then solve for the vertex positions $V\left( \widetilde{T}\right)$ that meet these two constraints in a least-squares sense:
+
+$$
+\underset{V\left( \widetilde{T}\right) }{\arg \min }\mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}{L}_{i}\left( T\right) V\left( \widetilde{T}\right) - {L}_{i}\left( T\right) V\left( T\right) \end{Vmatrix}}_{2}^{2} + \mathop{\sum }\limits_{{i = 1}}^{k}{\begin{Vmatrix}{U}_{k}^{i}{\left( T\right) }^{\top }V\left( \widetilde{T}\right) - {U}_{k}^{i}{\left( T\right) }^{\top }V\left( {\widehat{T}}_{k}\right) \end{Vmatrix}}_{2}^{2}. \tag{4}
+$$
+
+Solving Equation 4 applies the changes made in the smooth domain back to the original domain thereby deforming ${\widehat{T}}_{k}$ to $\widetilde{T}$ , a shape that is similar to $S$ at the coarse level.
+
+Our spectral reconstruction strategy (Equation 4) is composed of two parts. The LSE term alone, through the $L\left( T\right)$ matrix, is rank deficient, resulting in an underconstrained system. Our second term involving ${U}_{k}$ contains $k$ additional constraints. The eigenvectors in ${U}_{k}$ are orthogonal to each other, and the resulting system can be solved in a stable way.
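The least-squares solve of Equation 4 amounts to stacking the two constraint sets into one linear system. This is our illustration, not the paper's code; the function name `spectral_reconstruct` is ours, and the eigenvectors are recomputed from whatever Laplacian `L` of $T$ is supplied.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_reconstruct(V_T, V_hat, L, k):
    """Solve for V(T~): keep the Laplacian coordinates of T (Eq. 2)
    while matching the k spectral coefficients of the deformed smooth
    mesh (Eq. 3), in the least-squares sense (Eq. 4)."""
    _, U = eigh(L, subset_by_index=[0, k - 1])  # first k eigenvectors of L(T)
    A = np.vstack([L, U.T])
    b = np.vstack([L @ V_T, U.T @ V_hat])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

As a sanity check, if the smooth mesh is simply the smoothed, undeformed template, both constraint sets are satisfied by $V\left( T\right)$ and the reconstruction returns the original template exactly.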
+
+§ 3.5 FINE-LEVEL CONSTRAINT BASED DEFORMATION
+
+Given the previous deformation steps, the shape of $\widetilde{T}$ is similar to $S$ , but only at a coarse level. By exploiting the spectrally reconstructed surface with more details, the template is further deformed with positional constraints based on similar surface orientation. The goal is to deform the template to match the shape of the scan without inheriting the surface noise from the scan. Hence, we use LSE to deform $\widetilde{T}$ while maintaining a smooth surface. We identify constraints as closest locations on the surface of $S$ for each vertex of $\widetilde{T}$ . We reject some of the closest-location correspondences based on surface orientation: we keep only those correspondences for which the angle between the vertex normal on $\widetilde{T}$ and the normal at the closest location on $S$ is less than a threshold (we used $d = {45}^{ \circ }$ ). The selected locations are used as anchor locations for the LSE deformation. As in Sect. 3.3, both terms are equally weighted with a value of 1.0. This deforms $\widetilde{T}$ to the final template shape.
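The orientation-based rejection can be sketched as a vectorized filter. The function name and array layout are ours; the $45^{\circ}$ threshold is the one used in the paper.

```python
import numpy as np

def filter_by_orientation(n_template, n_closest, max_angle_deg=45.0):
    """Return a mask that is True where the template-vertex normal and
    the normal at the closest scan location agree within max_angle_deg."""
    nt = n_template / np.linalg.norm(n_template, axis=1, keepdims=True)
    ns = n_closest / np.linalg.norm(n_closest, axis=1, keepdims=True)
    cos = np.einsum('ij,ij->i', nt, ns)  # per-pair dot product
    return cos > np.cos(np.radians(max_angle_deg))
```

Only the correspondences where this mask is `True` are used as LSE anchors.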
+
+Figure 10: (a) Template with its features colored. (b) A scan from our data set. (c) Deformation using the mapping without reconstruction from the spectral shape. The semantic correspondence is wrong in various places, which can be identified in areas showing incorrect lateral sliding and bad reconstruction. (d) Deformation after spectral reconstruction. Both the surface quality and the semantic region correspondences greatly improved.
+
+§ 4 RESULTS AND DISCUSSION
+
+The scans in our data set were acquired using a multi-view stereo setup. Each scan consists of a dense polygonal mesh that has between 90k and 120k vertices. Note that we plan to release a subset of our data set for other researchers. Fig. 9 shows the scans (a and c) and the deformed template (b and d) using the presented pipeline. The scans exhibit a lot of diversity in their shape, and hence deforming the template to match them was quite a challenge. These meshes are irregular and can have erroneous or missing geometry, degenerate triangles, as well as topological errors, as illustrated in Fig. 4. The helix region of the scans was the worst affected during the capture due to the presence of interfering hair strands. We can see that our approach is robust against noise while being able to match the shape of the ears.
+
+Our implementation uses OpenCV for edge detection in image space, Python packages SciPy and NumPy for linear system solutions, and Blender for mesh handling and the LSE system.
+
+§ 4.1 SPECTRAL RECONSTRUCTION BENEFITS
+
+We tested our approach with and without the spectral reconstruction. Without spectral reconstruction, we apply the deformation of Sect. 3.5 directly on ${\widehat{T}}_{k}$ , skipping the reconstruction phase of Sect. 3.4. While the iterative deformation finishes with ${\widehat{T}}_{k}$ exhibiting a shape globally similar to ${S}_{k}$ , the lack of detail is problematic for the deformation of the template to match $S$ . Fig. 10 shows a scan and the template with semantically important features highlighted with colors. Fig. 10(c) shows a typical example of incorrect lateral sliding when the final deformation is done directly from ${\widehat{T}}_{k}$ . When the final deformation is conducted from $\widetilde{T}$ to $S$ based on similarly oriented locations, we observe a significant improvement in the fidelity of the shape as well as in the correspondence of semantic regions (Fig. 10(d)). Furthermore, our spectral reconstruction approach is general. For example, like the method of Dey et al. [18], our approach could be applied to animation.
+
+§ 4.2 SELECTION OF $K$ EIGENVECTORS
+
+The selection of $k$ for the eigendecomposition is crucial as it plays a major role in the pipeline. The selection of $k$ impacts a) how simple the shape will be in the smooth domain (Sect. 3.1) and b) the number of constraints available for spectral reconstruction (Sect. 3.4). Fig. 11(b) shows a smaller selection of $k = 3$ , which results in a simpler shape that does not contain any features, preventing the alignment of coarse features in the smooth domain. It also results in a very small number of constraints for spectral reconstruction. Fig. 11(d) shows a higher selection of $k = {100}$ , which results in a higher number of constraints for spectral reconstruction, but the shape also reintroduces a lot of folds from the original surface (Fig. 11(a)). We observed that aligning both the coarse and fine details in a single step is very difficult. A selection of $k = {20}$ (Fig. 11(c)) was balanced: the shape is simple yet retains the important coarse features, and the number of constraints is sufficient for spectral reconstruction.
+
+ < g r a p h i c s >
+
+Figure 11: Different selection of $k$ and the resulting shapes. (a) The original template ear. (b) Smooth domain transform using $k = 3$ results in a planar shape that is too simple to allow the alignment of any features. (d) Smooth domain transform using $k = {100}$ results in a more detailed shape, but it contains a lot of folds that hinder the alignment. (c) A value of $k = {20}$ results in a shape that is simple but detailed around the helix region, which is an important feature we want to align in the smooth domain.
+
+§ 4.3 COMPARISON TO MAPPING METHOD
+
+Many mapping methods require manual landmarks and as such cannot be used for comparison with our fully automatic approach. Furthermore, many methods do not work with meshes such as our ear scans, which contain boundaries and bad geometry. Lähner et al. [25] propose a mapping method using SHOT descriptors [34]. Their approach fails to produce a dense mapping on our high-density scans; however, it is able to produce a sparse mapping of around 3000 vertex-to-vertex correspondences. We tested whether this sparse mapping could be used to directly deform the template $T$ to the shape of the scan, thus avoiding the use of the smooth domain. The sparse map was evaluated by deforming the template with the 3000 registration points employed as constraints in LSE (Laplacian surface editing), an idea similar to the deformation phase explained in Sect. 3.5. The deformation using the constraints from Lähner et al. is compared with our final result ${T}_{k}^{\prime }$ . Fig. 12 shows the comparison of results for two different ears. From the results we can see that our approach from Sect. 3.5 performs better.
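The deformation used for this comparison, soft positional constraints combined with Laplacian surface editing, can be sketched as a stacked least-squares problem: preserve the differential coordinates $L\mathbf{V}$ while pulling a sparse set of vertices toward their correspondence targets. This is a minimal illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def lse_deform(V, L, idx, targets, w=10.0):
    """Laplacian-surface-editing style deformation sketch.

    V       : (n, d) vertex positions of the template
    L       : (n, n) mesh Laplacian
    idx     : indices of constrained vertices
    targets : (len(idx), d) target positions (e.g. sparse matches)
    w       : weight of the soft positional constraints
    """
    n = len(V)
    delta = L @ V                        # differential coordinates
    C = np.zeros((len(idx), n))          # selector rows for pinned vertices
    C[np.arange(len(idx)), idx] = w
    # Minimize ||L X - delta||^2 + ||C X - w * targets||^2 jointly.
    A = np.vstack([L, C])
    B = np.vstack([delta, w * targets])
    Vd, *_ = np.linalg.lstsq(A, B, rcond=None)
    return Vd
```

With few constraints the surface shape is largely preserved while the constrained vertices move toward their targets; increasing `w` trades shape preservation for constraint satisfaction.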
+
+§ 4.4 LIMITATIONS
+
+Fig. 13 shows the worst results from our experiments. In most cases, the edge detection constraints (Sect. 3.2) and non-rigid deformation (Sect. 3.3) steps align the features of the ears quite well, but as can be seen in Fig. 13, the helix occasionally does not align completely. The part of the helix close to its crux is another region where the registered template and the scan are sometimes not in good correspondence.
+
+§ 5 CONCLUSION AND FUTURE WORK
+
+We presented an approach to fit a template ear mesh to scans of real ears. The template and the scan are first transformed to an eigenspace smooth domain, where we begin by conducting a rigid alignment of the smooth meshes. Features of the ears are detected in image space with Canny edge detection. The smooth domain eases the alignment of the coarse-scale features of the meshes. The next phase iterates over three sub-steps: aligning the edge detection features, computing an injective mapping of the features from the template to the scan, and conducting a non-rigid deformation of the template through Laplacian surface editing with the features as constraints. We then reintroduce the details of the template mesh through a spectral reconstruction. Our spectral reconstruction optimizes for the spectral coordinates and for surface smoothness through Laplacian constraints. This generates a smooth surface with the details of the original template, while preserving the deformation from the smooth domain. The detailed template mesh is finally deformed through Laplacian constraints and constraints based on the closest location of surface regions with similar orientation. One notable advantage of our approach is that it is robust against bad mesh quality and is also completely automatic. Moreover, we are convinced that our spectral reconstruction approach is general and could be used outside of the ear reconstruction pipeline. We will investigate this avenue in future research. The fixed template used in our approach could be replaced by a 3D Morphable Model (3DMM) of ears. In a similar fashion to the work of Donya et al. [21], we could adjust the 3DMM parameters to obtain a template ear that is already much closer to the geometry of the scanned ear. Finally, the current choice of constraints is limited to ears, but we believe that our series of deformation phases could be applied to other types of shapes. In this sense, deriving other types of constraints is an interesting direction for future work.
+
+ < g r a p h i c s >
+
+Figure 12: We compare (c) and (g) the results skipping the smooth domain by deforming the template directly using constraints derived from the method of Lähner et al. [25], to (d) and (h) using our pipeline involving the smooth domain. We can observe that the results skipping the smooth domain produce lateral sliding (indicated by red arrows) of the corresponding semantic regions.
+
+§ 6 ACKNOWLEDGEMENTS
+
+This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) [CRDPJ 535746-18]. We wish to thank Zorah Lähner from the Technical University of Munich, Germany, author of Efficient Deformable Shape Correspondence via Kernel Matching [25], for sharing her code so that we could use
+
+ < g r a p h i c s >
+
+Figure 13: This figure presents our worst results (b) and (e). We can see the lack of alignment in the helix region. The alignment problem is apparent as early as the spectral reconstruction step ((a) and (d), Sect. 3.4) and originates from the edge detection constraints (Sect. 3.2) and the deformation in the smooth domain (Sect. 3.3).
+
+it for comparison. We also want to thank the anonymous reviewers, and all the participants for the facial scanning.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6e03b840797e147c08f4235497c433f5e1658b2
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,341 @@
+# Comparing Learned and Iterative Pressure Solvers for Fluid Simulation
+
+Category: Research
+
+
+
+Figure 1: In the above Channeling scene at Frame 200, a red plume is shot from the bottom centre. White regions represent obstacles, and each side of the scene is a wall (not shown explicitly). The set-up is identical across all four experiments, and we execute each iterative solver for 300 ms. The fluid effect produced by each solver is shown accordingly. While different methods have varying plausibility, we ultimately evaluate the magnitude of the velocity divergence for different amounts of compute time in 2D fluid simulations such as this example.
+
+## Abstract
+
+This paper compares the performance of the neural network based pressure projection approach of Tompson et al. [21] to traditional iterative solvers. Our investigation includes the Jacobi and preconditioned conjugate gradient solvers covered in the previous work, as well as a red-black Gauss-Seidel method, all running with a GPU implementation. We focus on 2D fluid simulations and three scenarios that present boundary conditions and velocity sources of different complexity. We collect the convergence of the velocity divergence norm as the error in these test simulations, and use plots of the error distribution to make high-level observations about the performance of iterative solvers in comparison to the fixed time cost of the neural network solution. Our results show that Jacobi provides the best bang for the buck with respect to minimizing error under a small fixed time budget.
+
+Index Terms: Computing methodologies-Physical simulation; Computing methodologies—Neural networks;
+
+## 1 INTRODUCTION
+
+Simulating realistic fluid effects such as smoke, fire and water is an important problem in physically-based animation with broad applications in film and video games. In comparison to the accuracy required in computational physics and engineering domains, computer graphics research typically aims to provide fast and approximate techniques. The stable fluids work of Stam [18] is an excellent example of this, which has likewise had a strong influence on graphics research over the last two decades. The key idea is to take a splitting approach with the incompressible Navier-Stokes equations, and to use different integration methods for solving each term. A pressure projection to produce a divergence free velocity field is an important and costly step, and has been on its own a topic of research.
+
+Our paper focuses specifically on comparing GPU implementations of iterative solvers with the work of Tompson et al. [21], which proposes the use of a learned convolutional neural network (CNN) to solve the pressure projection problem. With the growing popularity of learning techniques in physics simulation, we believe it is valuable to revisit this recent work using a set of tests that give us a better understanding of error and compute time in comparison to traditional methods. In addition to Jacobi and preconditioned conjugate gradient (the solvers discussed by Tompson et al.), we also include experiments with red-black Gauss-Seidel to understand how its convergence properties compare. We design three scenes to characterize typical 2D fluid simulation problems (e.g., see Figure 1 for one example), and collect statistics on the convergence across many time steps. With this data, we present the convergence in these simulations as distributions, allowing an understanding of the typical behaviour of the different approaches.
+
+## 2 RELATED WORK
+
+Pressure projection is known to be one of the main bottlenecks in incompressible fluid simulation, as it involves solving a Poisson equation (equivalently, a large-scale sparse linear system). The efficiency of the pressure solver can often be the determining factor in the overall speed of a fluid simulator. Such systems can be solved using iterative numerical methods including Jacobi, Gauss-Seidel (GS) and preconditioned conjugate gradient (PCG) [2, 7, 18, 19]. With the rise of multi-core CPUs and GPUs for general-purpose parallel computing, there has been a push to adapt classic numerical solvers in all varieties of physics-based simulation problems to these parallel hardware platforms [8, 14]. For instance, GS is inherently a sequential algorithm suitable only for running on a single thread. However, a small modification inspired by graph colouring leads to the red-black Gauss-Seidel (RBGS) method, which enables a parallel implementation useful for a variety of applications. For instance, Pall et al. [15] simulate cloth systems with projective dynamics solved with a GPU-based RBGS implementation, which performs more iterations than its CPU-based serial counterpart, but attains lower residual error given the same time budget.
+
+Novel approaches for speeding up the pressure projection are important contributions in fluid simulation research. Extensions to multigrid approaches are one example that shows great promise in reducing computation times [8, 14]. McAdams et al. [14] employ multigrid as a preconditioner for their PCG solver. In contrast, Jung et al. [8] propose a heterogeneous CPU-GPU Poisson solver that performs wavelet decomposition and smoothing on the CPU, and performs coarse-level projection on the GPU. In related work, Ando et al. [1] improve the coarse-grid pressure solver initially presented by Lentine et al. [13], reducing the size of the problem by introducing a novel change of basis that helps resolve free-surface boundary conditions in liquid simulation.
+
+In contrast to numerical methods for fluid simulation, machine learning techniques present an interesting alternative. Yang et al. [23] present a data-driven method based on a neural network to infer the pressure per grid cell, completely avoiding the computations of traditional Poisson solvers. This approach significantly reduces the computational cost of pressure projection and makes it independent of the scene's resolution due to the algorithmic nature of forward propagation. The method still maintains sufficiently good accuracy for visual effects compared to traditional Poisson solvers, as long as the network is properly trained. However, their method does not generalize well to previously unseen scenarios. Tompson et al. [21] identify test cases where the method of Yang et al. fails, and propose a method to overcome these limitations. They instead pose the problem as an unsupervised learning task and introduce a novel architecture for pressure inference using a convolutional neural network (CNN); their results show both greater stability and accuracy. A key part of their approach involves training and test sets representative of general fluids. They use wavelet turbulence noise [10] to produce random divergence-free velocity fields, and generate diverse boundaries using a collection of 3D models with varying scale, rotation and translation. They also employ data augmentation techniques, such as adding gravity, buoyancy, and vorticity confinement [20]. This helps their trained network be applicable to simulating fluids in a more general setting. In general, these learning-based approaches exploit the observation that neural networks are very effective for regression, and are able to fit a deterministic function (a simulation problem) that maps inputs to the solution of an optimization problem.
+
+Similar to the current trend for applications in other fields, machine learning has found increasing popularity in fluid simulation as a powerful general-purpose tool, not only for pressure projection. Ladický et al. [12] formulate Lagrangian fluid simulation as a regression problem and use the regression forest method to predict positions and velocities of particles, possibly with a very large time step. With a GPU implementation, they obtain a ${10} \times$ to ${1000} \times$ speed-up compared to a state-of-the-art position-based fluid solver. Most recent work has heavily adopted CNN-based methods to synthesize fluid flows, thanks to their proven performance on computer vision problems such as image classification (e.g., AlexNet [11]). Chu et al. [4] use a CNN to learn small yet expressive feature descriptors from fluid densities and velocities. With the help of those descriptors, their simulation algorithm is capable of rapidly generating high-resolution, realistic smoke flows with reusable space-time flow data. Kim et al. [9] train a generative model with an underlying CNN structure to reconstruct divergence-free fluid simulation velocities from a collection of reduced parameters. Their network includes an autoencoder architecture to encode parametrizable velocity fields into a latent space representation. They report a ${1300} \times$ increase in compression rates and a ${700} \times$ performance improvement compared to a CPU solver.
+
+Beyond the core problem of fluid simulation, fluid super-resolution and style control are other directions of enormous interest that have seen great success in exploiting learning techniques. By leveraging the idea behind the generative adversarial network (GAN) [5], Xie et al. [22] introduce tempoGAN, which comprises two discriminators, one being a novel temporal discriminator, to learn both the spatial and temporal evolution of data for synthesis of fluid flows with high-resolution details from low-resolution data alone. We observe that learning techniques are broadly useful in fluid simulation, and in our work we put effort into better understanding their benefits and limitations within the core problem, specifically in the solution of the Poisson problem to produce divergence-free velocity fields for incompressible fluids.
+
+## 3 BACKGROUND
+
+In this section, we give a brief review of the mathematical background and approaches to physically-based fluid simulation. We focus only on the two-dimensional case, but it is easy to generalize to three dimensions.
+
+### 3.1 Fluid Equations
+
+The motion of inviscid fluid flow is governed by the Euler equations, a special case of the famous Navier-Stokes equations [3],
+
+$$
+\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \frac{1}{\rho }\nabla p = \mathbf{f} \tag{1}
+$$
+
+$$
+\nabla \cdot \mathbf{u} = 0 \tag{2}
+$$
+
+where Equations 1 and 2 are usually referred to as the momentum equation, which in fact is Newton's second law applied to fluids, and the incompressibility condition which restricts the volume of fluids to be constant, respectively. In these equations, $\mathbf{u}$ stands for velocity, $p$ is pressure, $\rho$ is density, and $\mathbf{f}$ is the total external force (e.g., gravity, buoyancy, vorticity confinement) acting on the fluid.
+
+We spatially discretize the above PDEs on a marker-and-cell (MAC) grid, as done by Harlow and Welch [6], rather than a simple collocated grid to avoid the non-trivial nullspace problem, and to make the boundary condition easier to handle. Then, we use the finite difference method to approximate all partial derivatives. On the fluid-solid boundary, we require the normal component of the fluid velocity to be equal to that of the solid's velocity, that is,
+
+$$
+\mathbf{u} \cdot \widehat{\mathbf{n}} = {\mathbf{u}}_{\text{solid }} \cdot \widehat{\mathbf{n}},
+$$
+
+or their relative velocity has zero normal component.
+
+### 3.2 Stable Fluids Framework
+
+Suppose the current velocity field ${\mathbf{u}}^{t}$ at time $t$ is known, and we would like to advance our simulation to the next state and obtain ${\mathbf{u}}^{t + {\Delta t}}$ given time step ${\Delta t}$ . This requires integrating the momentum equation while keeping the incompressibility condition satisfied, but the momentum equation involves three terms (not counting the partial derivative of velocity with respect to time). Because it is challenging to handle all terms simultaneously, we use Stam's operator splitting approach to integrate them one by one [18]. This provides a modular framework, shown in Algorithm 1, which is first-order accurate (see Bridson's book [3]). Thus, we solve for the velocity field at time $t + {\Delta t}$ in three steps.
+
+On line 3 of Algorithm 1, we first compute and add all external forces $\mathbf{f}$ over time step ${\Delta t}$ to the velocity field according to the standard forward Euler integration formula. Then on line 4, we self-advect the velocity field using the MacCormack method proposed by Selle et al. [17]. Finally, on lines 5 and 6, we perform the pressure projection to make the resulting field ${\mathbf{u}}^{t + 1}$ divergence-free by solving the Poisson problem
+
+$$
+{\nabla }^{2}p = \frac{\rho }{\Delta t}\nabla \cdot {\mathbf{u}}^{B}. \tag{3}
+$$
+
+This is equivalent to a linear system $\mathbf{{Ax}} = \mathbf{b}$ under our spatial discretization scheme, where $\mathbf{A}$ is a sparse, symmetric, and positive definite matrix often called the five-point Laplacian matrix, $\mathbf{b}$ is a vector of the velocity divergence of every fluid cell, and $\mathbf{x}$ consists of all the pressure unknowns we desire to find. Stam [18] showed that this scheme, coupled with a semi-Lagrangian advection algorithm such as MacCormack and an accurate Poisson solver, is unconditionally stable, meaning our simulation will not "blow up" no matter how large the time step ${\Delta t}$ . As a result, we can arbitrarily vary the time step ${\Delta t}$ in accordance with our requirements to create fluid animations with different visual effects.
+
+---
+
+Initialize a divergence-free velocity field ${\mathbf{u}}^{0}$ ;
+
+for $t \leftarrow 0,1,2,\cdots$ do
+
+ ${\mathbf{u}}^{A} \leftarrow {\mathbf{u}}^{t} + {\Delta t}\mathbf{f};$
+
+ ${\mathbf{u}}^{B} \leftarrow \operatorname{Advect}\left( {{\mathbf{u}}^{A},{\Delta t}}\right)$ ;
+
+ $p \leftarrow$ SolvePoisson $\left( {\mathbf{u}}^{B}\right)$ ;
+
+ ${\mathbf{u}}^{t + 1} \leftarrow {\mathbf{u}}^{B} - \frac{\Delta t}{\rho }\nabla p$
+
+end
+
+ Algorithm 1: Stable fluids solver.
+
+---
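
The splitting in Algorithm 1 can be sketched as a single step function. This is a minimal sketch: the callables `advect`, `solve_poisson`, and `gradient` are hypothetical placeholders for the components described above (MacCormack advection, a Poisson solver, and a finite-difference gradient), not a complete simulator:

```python
import numpy as np

def step(u, dt, f, advect, solve_poisson, gradient, rho=1.0):
    """One step of Stam-style operator splitting (Algorithm 1):
    external forces -> self-advection -> pressure projection."""
    u_a = u + dt * f                 # line 3: forward Euler force integration
    u_b = advect(u_a, dt)            # line 4: self-advection (e.g. MacCormack)
    p = solve_poisson(u_b)           # line 5: pressure from the Poisson solve
    return u_b - (dt / rho) * gradient(p)   # line 6: projection
```

The modularity is the point: any of the iterative solvers below, or the CNN of Tompson et al., can be dropped in as `solve_poisson` without touching the rest of the loop.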
+
+### 3.3 Iterative Methods for Linear Systems
+
+Classic direct linear solvers, such as Cholesky factorization, can be used to solve the above Poisson equation if the number of unknowns is relatively small. When it comes to large-scale applications, however, those methods become impractical as they suffer from high time and space complexities. Even for fluid simulations where the matrix $\mathbf{A}$ is very sparse, the nonzero fill-in of the Cholesky factorization for large problems can become unattractive. Instead, we can use iterative linear solvers that gradually approximate the solution by repeatedly applying the update rule ${\mathbf{x}}^{k + 1} = f\left( {\mathbf{x}}^{k}\right)$ specified by the solver, given an initial guess ${\mathbf{x}}^{0}$ . Iterative methods can be fast compared to their direct counterparts, use minimal storage, and can be stopped at any time to yield an approximate solution. We briefly review the methods relevant to our experiments in this section; Saad's book [16] provides an excellent reference for these techniques.
+
+#### 3.3.1 Jacobi
+
+The update rule of Jacobi method can be written as
+
+$$
+{\mathbf{x}}_{i}^{k + 1} = \frac{1}{{\mathbf{A}}_{ii}}\left( {{\mathbf{b}}_{i} - \mathop{\sum }\limits_{{j \neq i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k}}\right) , \tag{4}
+$$
+
+where ${\mathbf{A}}_{ij}$ and ${\mathbf{x}}_{j}^{k}$ denote the entry of matrix $\mathbf{A}$ at row $i$ and column $j$ , and the ${j}^{\text{th}}$ component of vector $\mathbf{x}$ at iteration $k$ , respectively. Note that the positive definiteness of $\mathbf{A}$ implies $\mathbf{A}$ is invertible and all diagonal elements ${\mathbf{A}}_{ii}$ are positive, ensuring the validity of our update rule. Furthermore, every component of ${\mathbf{x}}^{k + 1}$ depends only on known values ${\mathbf{x}}_{j}^{k}$ , hence Jacobi is inherently parallelizable, making it trivial to implement and efficient to run on the GPU. However, this method tends to have a slow convergence rate, and it may take many iterations before we obtain a satisfactory solution.
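Equation 4 can be sketched in a few lines of dense-matrix NumPy. This is illustrative only: a real fluid solver applies the update directly on the grid stencil (and on the GPU) rather than forming $\mathbf{A}$ explicitly:

```python
import numpy as np

def jacobi(A, b, x0=None, iters=200):
    """Jacobi iteration (Equation 4). Every component of x^{k+1}
    depends only on x^k, so all updates could run in parallel."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    D = np.diag(A)                  # diagonal entries A_ii
    R = A - np.diagflat(D)          # off-diagonal part of A
    for _ in range(iters):
        x = (b - R @ x) / D         # x_i <- (b_i - sum_{j!=i} A_ij x_j) / A_ii
    return x
```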
+
+#### 3.3.2 Gauss-Seidel (GS)
+
+Consider modifying the update rule of Jacobi to use the updated values ${\mathbf{x}}_{j}^{k + 1}$ once they are available. In this way, we obtain the Gauss-Seidel method defined as
+
+$$
+{\mathbf{x}}_{i}^{k + 1} = \frac{1}{{\mathbf{A}}_{ii}}\left( {{\mathbf{b}}_{i} - \mathop{\sum }\limits_{{j < i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k + 1} - \mathop{\sum }\limits_{{j > i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k}}\right) . \tag{5}
+$$
+
+More precisely, for the ${i}^{\text{th }}$ component of ${\mathbf{x}}^{k + 1}$ , we compute the matrix-vector product by replacing ${\mathbf{x}}_{j}^{k}$ with ${\mathbf{x}}_{j}^{k + 1}$ when $j < i$ , and keep everything unchanged when $j > i$ . Gauss-Seidel tends to exhibit a faster convergence rate, but loses Jacobi's parallel nature and becomes a sequential algorithm.
+
+#### 3.3.3 Red-Black Gauss-Seidel (RBGS)
+
+Graph colouring can provide a mechanism for obtaining the benefits of Gauss-Seidel while still allowing for parallelization. We can construct an undirected dependency graph $G$ as follows:
+
+
+
+Figure 2: Left: The dependency graph for a $4 \times 4$ fluid pressure grid. Right: A red-black colouring partitions the graph into two independent sets. All vertices within the same set can be solved in parallel.
+
+- Each component of $\mathbf{x}$ is a vertex;
+
+- If the computation of ${\mathbf{x}}_{i}$ and ${\mathbf{x}}_{j}$ is interdependent, then add an edge $(i, j)$ to $G$ .
+
+At the $(i, j)$ entry of the fluid grid, the Laplacian of the pressure $\mathbf{P}$ is equal to the divergence of the velocities $\mathbf{u}$ . The discretized relationship is given by
+
+$$
+4{\mathbf{P}}_{i, j} - {\mathbf{P}}_{i - 1, j} - {\mathbf{P}}_{i + 1, j} - {\mathbf{P}}_{i, j - 1} - {\mathbf{P}}_{i, j + 1} = {\left( \nabla \cdot \mathbf{u}\right) }_{i, j}. \tag{6}
+$$
+
+A vertex in graph $G$ corresponding to pressure ${\mathbf{P}}_{i, j}$ is connected to all its neighbours in a five-point stencil, namely, ${\mathbf{P}}_{i - 1, j},{\mathbf{P}}_{i + 1, j},{\mathbf{P}}_{i, j - 1}$ and ${\mathbf{P}}_{i, j + 1}$ . We can colour this dependency graph by alternately choosing from two colours, red and black. The coloured graph displays a checkerboard pattern; a simplified example is shown in Figure 2. Observe that vertices of the same colour form an independent set, which indicates that the computation of every vertex in this set can be done in parallel. As a consequence, each Gauss-Seidel iteration for Equation 3 can be performed in two parallel steps.
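A vectorized sketch of one red-black sweep for Equation 6 follows, assuming a zero Dirichlet boundary for simplicity (the actual solver handles solid and wall boundaries). Within each colour the update reads only opposite-colour neighbours, so it maps directly onto a parallel GPU kernel:

```python
import numpy as np

def rbgs_sweep(P, div):
    """One red-black Gauss-Seidel sweep for 4P - (sum of neighbours) = div,
    with pressure assumed zero outside the grid (simplifying assumption)."""
    n, m = P.shape
    i, j = np.indices((n, m))
    for colour in (0, 1):                   # red cells first, then black
        Pp = np.pad(P, 1)                   # zero halo; refreshed per colour
        nb = (Pp[:-2, 1:-1] + Pp[2:, 1:-1]  # north + south neighbours
              + Pp[1:-1, :-2] + Pp[1:-1, 2:])   # west + east neighbours
        mask = (i + j) % 2 == colour        # checkerboard colouring
        P[mask] = (nb[mask] + div[mask]) / 4.0
    return P
```

Because the halo is rebuilt between the two half-sweeps, the black cells see the freshly updated red values, which is exactly the Gauss-Seidel property that Jacobi lacks.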
+
+#### 3.3.4 Preconditioned Conjugate Gradient (PCG)
+
+Given matrix $\mathbf{A}$ and vector $\mathbf{v}$ , a Krylov subspace of ${\mathbb{R}}^{n}$ has the form
+
+$$
+{\mathcal{K}}_{n}\left( {\mathbf{A},\mathbf{v}}\right) = \operatorname{span}\left\{ {\mathbf{v},\mathbf{{Av}},{\mathbf{A}}^{2}\mathbf{v},\cdots ,{\mathbf{A}}^{n - 1}\mathbf{v}}\right\} .
+$$
+
+Conjugate Gradient (CG) is an orthogonal projection method onto the Krylov subspace ${\mathcal{K}}_{n}\left( {\mathbf{A},{\mathbf{r}}_{0}}\right)$ where ${\mathbf{r}}_{0} = \mathbf{b} - \mathbf{A}{\mathbf{x}}_{0}$ is the initial residual. It rephrases solving the linear system as an optimization problem for the quadratic function
+
+$$
+f\left( \mathbf{x}\right) = \frac{1}{2}{\mathbf{x}}^{\top }\mathbf{A}\mathbf{x} - {\mathbf{x}}^{\top }\mathbf{b},
+$$
+
+in which the solution to the system ${\mathbf{x}}^{ * }$ is its unique minimizer. CG then follows from the gradient descent algorithm. To improve the convergence rate, we further incorporate an incomplete Cholesky preconditioner with zero fill-in, $\operatorname{IC}\left( 0\right)$ . In our experiments, this is what we use for the preconditioned conjugate gradient (PCG) method. CG and PCG expect the matrix $\mathbf{A}$ to be symmetric positive definite, but eliminating variables by substituting boundary conditions into Equation 6 leads to a rank-deficient matrix. To avoid the problems caused by matrix $\mathbf{A}$ not being full rank (both for the incomplete Cholesky preconditioner and for the CG method), we instead solve the regularized system
+
+$$
+\left( {\mathbf{A} + \lambda \mathbf{I}}\right) \mathbf{x} = \mathbf{b}
+$$
+
+for a carefully selected regularization parameter $\lambda$ . Notice that in all our results we still report the residual norm as $r = \parallel \mathbf{b} - \mathbf{{Ax}}\parallel$ rather than $\parallel \mathbf{b} - \left( {\mathbf{A} + \lambda \mathbf{I}}\right) \mathbf{x}\parallel$ . Our regularized $\operatorname{IC}\left( 0\right)$ PCG solver is given in Algorithm 2.
+
+---
+
+Data: $\mathbf{A},\mathbf{b}$ , regularization parameter $\lambda$
+
+Result: $x$
+
+$\mathbf{M} \leftarrow$ Generate-IC(0)-Preconditioner(A);
+
+$\mathbf{A} \leftarrow \mathbf{A} + \lambda \mathbf{I};$
+
+${\mathbf{r}}_{0} \leftarrow \mathbf{b} - \mathbf{A}{\mathbf{x}}_{0}$ ;
+
+${\mathbf{z}}_{0} \leftarrow {\mathbf{M}}^{-1}{\mathbf{r}}_{0}$
+
+${\mathbf{p}}_{0} \leftarrow {\mathbf{z}}_{0}$ ;
+
+for $j \leftarrow 0,1,2,\cdots$ do
+
+ ${\alpha }_{j} \leftarrow \frac{{\mathbf{r}}_{j}^{\top }{\mathbf{z}}_{j}}{{\mathbf{p}}_{j}^{\top }{\mathbf{{Ap}}}_{j}};$
+
+ ${\mathbf{x}}_{j + 1} \leftarrow {\mathbf{x}}_{j} + {\alpha }_{j}{\mathbf{p}}_{j}$
+
+ ${\mathbf{r}}_{j + 1} \leftarrow {\mathbf{r}}_{j} - {\alpha }_{j}{\mathbf{{Ap}}}_{j}$ ;
+
+ if $\begin{Vmatrix}{{\mathbf{r}}_{j + 1} + \lambda {\mathbf{x}}_{j + 1}}\end{Vmatrix} < \varepsilon$ for given threshold $\varepsilon$ then
+
+ exit the loop;
+
+ end
+
+ ${\mathbf{z}}_{j + 1} \leftarrow {\mathbf{M}}^{-1}{\mathbf{r}}_{j + 1};$
+
+ ${\beta }_{j} \leftarrow \frac{{\mathbf{r}}_{j + 1}^{\top }{\mathbf{z}}_{j + 1}}{{\mathbf{r}}_{j}^{\top }{\mathbf{z}}_{j}};$
+
+ ${\mathbf{p}}_{j + 1} \leftarrow {\mathbf{z}}_{j + 1} + {\beta }_{j}{\mathbf{p}}_{j}$
+
+end
+
+---
+
+Algorithm 2: The regularized PCG solver used in our experiments.
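Algorithm 2 can be sketched compactly in Python. For brevity this sketch substitutes a Jacobi (diagonal) preconditioner for the IC(0) preconditioner used in our solver; names and default values are illustrative only. Note that the stopping test uses the true residual $\mathbf{b} - \mathbf{A}\mathbf{x} = \mathbf{r} + \lambda \mathbf{x}$ of the unregularized system:

```python
import numpy as np

def pcg_regularized(A, b, lam=1e-4, eps=1e-8, max_iters=200):
    """Regularized PCG following Algorithm 2, with a diagonal
    preconditioner standing in for IC(0)."""
    Areg = A + lam * np.eye(len(b))      # solve (A + lam*I) x = b
    Minv = 1.0 / np.diag(Areg)           # Jacobi preconditioner M^{-1}
    x = np.zeros_like(b)
    r = b - Areg @ x
    z = Minv * r
    p = z.copy()
    for _ in range(max_iters):
        Ap = Areg @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        # Stop on the true residual ||b - A x|| = ||r + lam*x||.
        if np.linalg.norm(r_new + lam * x) < eps:
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```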
+
+
+
+Figure 3: Three test scenes we designed and used to demonstrate and compare the performance of each pressure solver.
+
+## 4 RESULTS
+
+We use Jacobi, RBGS and PCG solvers implemented on the GPU in C++ with NVIDIA's CUDA, cuBLAS and cuSPARSE libraries. All experiments were carried out on a 3.6 GHz Intel i9-9900K CPU with 32 GB of RAM, and an NVIDIA GeForce RTX 2080 Ti GPU, which contains 4352 shading units.
+
+### 4.1 Test Scenes and Parameters
+
+Figure 3 shows the three test scenes that we created to evaluate the error and convergence of different solvers in different situations. The Bunny scene is designed to examine a solver's basic performance in a simple scenario. It consists of an obstacle in the shape of a 2D Stanford bunny located at the centre of the domain, with an emitter shooting out red plumes from the midpoint of the bottom side. The Two Plumes scene is identical to the Bunny scene except that there is a second, blue plume emitter placed in the gap behind the ears of the bunny model. This scene mainly aims to test the case of multiple plume sources. Finally, we designed a Channeling scene, which is symmetric in the vertical direction, to both test fluid channeling behaviours and to visually compare how well different pressure solvers preserve the natural symmetry of the plume. In all scenes, white regions represent obstacles, and each side is a wall (not shown explicitly).
+
+In order to obtain consistent and convincing results, we kept the test environment parameters identical across all our experiments. Specifically, we use a resolution of ${256} \times {256}$ pixels, run every simulation for 300 frames with a step size of 0.25 seconds, and execute 200 iterations of each iterative solver. We do not use vorticity confinement, gravity, or buoyancy.
+
+Notice that the CNN-based solver by Tompson et al. [21] is not an iterative method, so it does not have the concept of a residual that converges as in the Jacobi, RBGS, and PCG solvers. However, using the Helmholtz-Hodge decomposition we can observe that the residual is in fact equal to the norm of the divergence of the velocity at the next time step (see Appendix A). Thus we use this quantity as a means of comparing the iterative methods to the neural network approach.
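On a MAC grid this quantity is straightforward to compute. A small sketch follows, assuming unit grid spacing and the usual staggered array layout (the array shapes here are illustrative, not the layout of our implementation):

```python
import numpy as np

def divergence_norm(u, v):
    """L2 norm of the velocity divergence on a MAC grid with h = 1.

    u : (n, m+1) array of x-velocities stored on vertical faces
    v : (n+1, m) array of y-velocities stored on horizontal faces
    """
    # Per-cell divergence: difference of opposing face velocities.
    div = (u[:, 1:] - u[:, :-1]) + (v[1:, :] - v[:-1, :])
    return np.linalg.norm(div)
```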
+
+### 4.2 Performance
+
+Before examining and comparing the performance of the various methods, we need to first choose a suitable regularization parameter $\lambda$ for our PCG solver with respect to each scene. For the Bunny scene, we investigate the case with no regularization ( $\lambda = 0$ ) by running 400 PCG iterations and noting the relative residual, as shown in Figure 4(a). The simulation is executed for only one frame, because the relative residual diverges and the animation of subsequent frames blows up.
+
+Given the diverging residual that we observe with the non full rank matrix $\mathbf{A}$ , we explore different amounts of regularization. Specifically we test how well $\lambda = {10}^{-5},\lambda = {10}^{-4},\lambda = {10}^{-3}$ work, and repeat the same procedure as before. Figure 4(b)-(d) shows the order statistics (minimum, ${25}^{\text{th }}$ percentile, median, ${75}^{\text{th }}$ percentile, maximum) of relative residuals across all 300 frames at different iteration numbers. Since we only run 200 PCG iterations in our main comparisons, we choose $\lambda = {10}^{-4}$ as it attains the lowest true-residual median at iteration 200 among all three choices of $\lambda$ . Using the same criteria, we also selected $\lambda = {10}^{-4}$ for the Two Plumes and Channeling scenes.
+
+Figure 5 shows order statistics of relative residual over Jacobi, RBGS, and regularized PCG solver iterations for all $\mathbf{{Ax}} = \mathbf{b}$ systems and for each scene across 300 frames. We note that the iterations do not take the same amount of time, but the plots are still useful to observe the typical reduction in residual across different steps of the simulation.
+
+We also show the order statistics of absolute residual $\parallel \nabla \cdot \mathbf{u}{\parallel }_{2}$ over total solving time (in milliseconds) for every scene in Figure 6. Since the forward propagation of CNN-based solver (Tompson et al. [21]) always takes a fixed amount of time, its performance is depicted as the intersection of horizontal and vertical purple lines in the graph. While we might expect PCG to have the fastest convergence rate and achieve lowest residual, it takes a longer time to solve compared to the highly parallel GPU implementations of the Jacobi, RBGS, and CNN-based solvers.
+
+### 4.3 Robustness
+
+An important feature of a good numerical method is robustness, which we investigate by observing the behaviour of the four solvers when there is a sudden and fierce change in external conditions. In our stress test, we monitor the ${L}_{2}$ norm of the velocity divergence of the pressure solvers to see how rapidly the residual returns to normal levels after a large external pressure is injected into the fluid region.
+
+Figure 7 shows the set-up and result of our experiment. Our fluid simulator supports user interaction with a mouse to perturb the plume behaviour, and it is with this mechanism that we inject a scripted mouse movement. We first run the simulation with the Jacobi solver for 100 frames, then at the moment that we inject the external pressure we switch to the different solvers to examine the post-disturbance divergence norm trajectory.
+
+The scripted mouse interactions used to inject a consistent disturbance in each test, and the result of the disturbance, can be seen in Figure 7(a)-(b). The disturbance lasts a total of 10 frames, with the scripted mouse drag first moving horizontally across the scene for one frame, and then vertically in the next frame, with the process repeating 5 times. This produces a strong shape distortion of the red plume as seen in Figure 7(b). We then observe the norm of the divergence of the velocity after frame 110 to understand how high the norm rises, and then how quickly it drops to normal levels for the different solvers. The divergence norm trajectories in Figure 7(c) show that PCG has the most stable behaviour during the perturbation, while the method of Tompson et al. [21] fluctuates more violently than all others. After the perturbation ends, the norm of velocity divergence gradually drops to lower levels in all cases. Comparing the final divergence norm after the simulation stops at frame 300, we see that PCG achieves the lowest residual, followed by Jacobi and RBGS (with similar behaviour), while the method of Tompson et al. [21] has a much higher residual and does not ultimately return to levels similar to those prior to the disturbance.
+
+
+
+Figure 4: Convergence for the Bunny scene. (a) Without regularization, the relative residual over the PCG solver’s iterations for the $\mathbf{{Ax}} = \mathbf{b}$ system corresponding to frame 1 shows divergent behaviour. (b)–(d) Order statistics of the true relative residual at each PCG iteration (that is, measured without regularization), for all $\mathbf{{Ax}} = \mathbf{b}$ systems across 300 frames when the regularization parameter is $\lambda = {10}^{-5},\lambda = {10}^{-4},\lambda = {10}^{-3}$ , respectively.
+
+
+
+Figure 5: Order statistics of relative residual over each solver’s iteration for all $\mathbf{{Ax}} = \mathbf{b}$ systems of each scene across 300 frames. Five curves from top to bottom of every solver represent maximum, ${75}^{\text{th }}$ percentile, median, ${25}^{\text{th }}$ percentile, minimum, respectively, and only curves for medians are in bold for clarity. PCG shows the true residual rather than that of the regularized problem being solved.
+
+
+
+Figure 6: Order statistics of residual norm $\parallel \nabla \cdot \mathbf{u}{\parallel }_{2}$ over each solver’s running time across 300 frames. Five curves from top to bottom of every solver represent maximum, ${75}^{\text{th }}$ percentile, median, ${25}^{\text{th }}$ percentile, minimum, respectively. Median curves are bold for clarity. Since the CNN-based solver (Tompson et al. [21]) always takes a fixed amount of time, its absolute residual over total solving time is shown as the purple intersection.
+
+
+
+Figure 7: (a) The black arrow shows the mouse trajectory for external pressure injection. The mouse moves first horizontally for one frame, then vertically for the next frame, and this process is repeated 5 times over 10 frames to produce a violent scene perturbation. (b) Strong shape distortion of the red plume caused by our violent mouse perturbation. (c) ${L}_{2}$ norm of the velocity divergence over simulation frame count for different fluid solvers in our mouse perturbation experiment. The external pressure is injected at frame 101, and the injection terminates after frame 110.
+
+### 4.4 Discussion and Limitations
+
+There are some high-level observations we can make about our experiments. First, we expected RBGS to converge more quickly than Jacobi, at least with respect to iteration count, but we did not observe this in our test scenes. We had also expected PCG to exhibit much better behaviour; it is perhaps still possible that a different problem formulation and GPU implementation would show greater advantages for PCG. There is likewise a vast number of other solvers we do not include in our comparison, such as those that exploit substructuring, or others based on multigrid. We believe all the results are plausible, but we also note that we were not able to observe preservation of symmetry in cases where it would be expected. There is no guarantee that symmetry would be preserved with a neural network approach, and the arbitrary ordering of red before black in RBGS leads us not to expect symmetry there, but we had expected symmetric plumes in our Channeling scene when using Jacobi and PCG. It is possible that our initial conditions and plume locations make symmetry hard to achieve in these examples, and the single-precision floating point used in the GPU implementations may be another explanation. Finally, it is interesting to note that the method of Tompson et al. actually increases the norm of the velocity divergence in our test scenes. While this is surprising, the solutions it produces still tend to be useful for producing plausible incompressible flows.
+
+## 5 CONCLUSION AND FUTURE WORK
+
+We present a collection of tests to understand the convergence behaviour of different pressure solvers in 2D fluid simulations. Specifically, we compare Jacobi, red-black Gauss-Seidel, regularized preconditioned conjugate gradients, and a convolutional neural network (CNN) based solution proposed by Tompson et al. [21]. Our results show that Jacobi provides the best bang for the buck with respect to minimizing error using a small fixed time budget when running on the GPU. We note that the CNN solution has less desirable properties in our 2D tests, but may ultimately prove to be a good choice for specific scenarios, or larger 3D simulations. There are a number of interesting directions for future work, such as repeating the experiments in 3D, and including comparisons with other solvers such as multigrid methods.
+
+## A HELMHOLTZ-HODGE DECOMPOSITION
+
+The Helmholtz-Hodge decomposition states that every vector field $\mathbf{u}$ can be decomposed into a divergence-free vector field $\mathbf{v}$ and a gradient field $\nabla q$ , that is,
+
+$$
+\mathbf{u} = \mathbf{v} + \nabla q
+$$
+
+Stam [18] shows that the divergence of this decomposition leads to the Poisson problem in Equation 3 needed for the pressure projection.
+
+This problem has the form
+
+$$
+{\nabla }^{2}q = \nabla \cdot {\mathbf{u}}^{B},
+$$
+
+hence, the residual produced by iterative methods is defined as
+
+$$
+\mathbf{r} = \nabla \cdot {\mathbf{u}}^{B} - {\nabla }^{2}q
+$$
+
+Furthermore, the velocity update formula on line 6 in Algorithm 1 can be rewritten as
+
+$$
+{\mathbf{u}}^{t + 1} = {\mathbf{u}}^{B} - \nabla q
+$$
+
+Again, taking divergence of both sides of the above equation yields
+
+$$
+\nabla \cdot {\mathbf{u}}^{t + 1} = \nabla \cdot {\mathbf{u}}^{B} - {\nabla }^{2}q = \mathbf{r},
+$$
+
+which justifies our comparison of the residual norm to the norm of the divergence of the next time velocity produced by the neural network approach.
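This identity is easy to check numerically. The sketch below uses a periodic grid with central differences; the particular div/grad discretization does not matter, as long as the discrete Laplacian is taken to be their composition, because the identity then follows from linearity of the divergence operator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 16, 1.0 / 16

def grad(q):
    # Periodic central differences (any consistent choice works here).
    gx = (np.roll(q, -1, axis=0) - np.roll(q, 1, axis=0)) / (2 * h)
    gy = (np.roll(q, -1, axis=1) - np.roll(q, 1, axis=1)) / (2 * h)
    return gx, gy

def div(ux, uy):
    dx = (np.roll(ux, -1, axis=0) - np.roll(ux, 1, axis=0)) / (2 * h)
    dy = (np.roll(uy, -1, axis=1) - np.roll(uy, 1, axis=1)) / (2 * h)
    return dx + dy

# Arbitrary velocity u^B and an arbitrary (non-converged) pressure guess q.
ux, uy = rng.random((n, n)), rng.random((n, n))
q = rng.random((n, n))

# Residual of the Poisson problem: r = div(u^B) - div(grad(q)).
gx, gy = grad(q)
r = div(ux, uy) - div(gx, gy)

# Velocity update u^{t+1} = u^B - grad(q); its divergence equals r.
u1x, u1y = ux - gx, uy - gy
assert np.allclose(div(u1x, u1y), r)
```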
+
+## REFERENCES
+
+[1] R. Ando, N. Thürey, and C. Wojtan. A Dimension-reduced Pressure Solver for Liquid Simulations. EUROGRAPHICS 2015, 2015.
+
+[2] J. Bolz, I. Farmer, E. Grinspun, and P. Schröder. Sparse matrix solvers on the GPU: Conjugate gradients and multigrid. In ACM SIGGRAPH 2003 Papers, SIGGRAPH '03, pp. 917-924. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/1201775.882364
+
+[3] R. Bridson. Fluid Simulation for Computer Graphics. CRC Press, Boca Raton, FL, 2nd. ed., Sep 2015.
+
+[4] M. Chu and N. Thuerey. Data-driven synthesis of smoke flows with cnn-based feature descriptors. ACM Trans. Graph., 36(4):69:1-69:14, July 2017. doi: 10.1145/3072959.3073643
+
+[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds., Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014.
+
+[6] F. H. Harlow and J. E. Welch. Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. Physics of Fluids, 8:2182-2189, Dec 1965.
+
+[7] M. J. Harris. Fast fluid dynamics simulation on the gpu. SIGGRAPH Courses, 220(10.1145):1198555-1198790, 2005.
+
+[8] H.-R. Jung, S.-T. Kim, J. Noh, and J.-M. Hong. A Heterogeneous CPU-GPU Parallel Approach to a Multigrid Poisson Solver for Incompressible Fluid Simulation. Computer Animation and Virtual Worlds, 24(3-4):185-193, 2013. doi: 10.1002/cav.1498
+
+[9] B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross, and B. Solenthaler. Deep fluids: A generative network for parameterized fluid simulations. Computer Graphics Forum, 38(2):59-70, 2019. doi: 10.1111/cgf.13619
+
+[10] T. Kim, N. Thürey, D. James, and M. Gross. Wavelet turbulence for fluid simulation. ACM Trans. Graph., 27(3):50:1-50:6, Aug. 2008. doi: 10.1145/1360612.1360649
+
+[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds., Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012.
+
+[12] L. Ladický, S. Jeong, B. Solenthaler, M. Pollefeys, and M. Gross. Data-driven fluid simulations using regression forests. ACM Trans. Graph., 34(6):199:1-199:9, Oct 2015.
+
+[13] M. Lentine, W. Zheng, and R. Fedkiw. A Novel Algorithm for Incompressible Flow Using Only a Coarse Grid Projection. ACM Trans. Graph., 29(4):114:1-114:9, July 2010. doi: 10.1145/1778765.1778851
+
+[14] A. McAdams, E. Sifakis, and J. Teran. A parallel multigrid poisson solver for fluids simulation on large grids. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '10, pp. 65-74. Eurographics Association, Goslar Germany, Germany, 2010.
+
+[15] P. Pall, O. Nylén, and M. Fratarcangeli. Fast Quadrangular Mass-Spring Systems using Red-Black Ordering. In Workshop on Virtual Reality Interaction and Physical Simulation, VRIPHYS'18, pp. 37-43. The Eurographics Association, 2018.
+
+[16] Y. Saad. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2nd. ed., Jan 2003.
+
+[17] A. Selle, R. Fedkiw, B. Kim, Y. Liu, and J. Rossignac. An unconditionally stable maccormack method. Journal of Scientific Computing, 35(2):350-371, Jun 2008.
+
+[18] J. Stam. Stable fluids. In Proceedings of the ${26}^{\text{th }}$ Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, pp. 121-128. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 1999.
+
+[19] J. Stam. Real-time fluid dynamics for games. In Proceedings of the game developer conference, vol. 18, p. 25, 2003.
+
+[20] J. Steinhoff and D. Underhill. Modification of the Euler equations for "vorticity confinement": Application to the computation of interacting vortex rings. Physics of Fluids, 6(8):2738-2744, 1994. doi: 10.1063/1. 868164
+
+[21] J. Tompson, K. Schlachter, P. Sprechmann, and K. Perlin. Accelerating Eulerian Fluid Simulation with Convolutional Networks. In Proceedings of the ${34}^{\text{th }}$ International Conference on Machine Learning, vol. 70 of ICML'17, pp. 3424-3433. JMLR.org, 2017.
+
+[22] Y. Xie, E. Franz, M. Chu, and N. Thuerey. tempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow. ACM Trans. Graph., 37(4):95:1-95:15, July 2018.
+
+[23] C. Yang, X. Yang, and X. Xiao. Data-driven Projection Method in Fluid Simulation. Computer Animation and Virtual Worlds, 27(3-4):415-424, 2016. doi: 10.1002/cav.1695
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b2c958ce6682abc03461bf4246e9998f32a01b6b
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/mueJ9ZjrfOF/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,285 @@
+Comparing Learned and Iterative Pressure Solvers for Fluid Simulation
+
+Category: Research
+
+
+Figure 1: In the above Channeling scene at Frame 200, a red plume is shot from the bottom centre. White regions represent obstacles, and each side of the scene is a wall (not shown explicitly). The set-up is identical across all four experiments, and we execute each iterative solver for 300 ms. The fluid effect produced by each solver is shown accordingly. While different methods have varying plausibility, we ultimately evaluate the magnitude of the velocity divergence for different amounts of compute time in 2D fluid simulations such as this example.
+
+§ ABSTRACT
+
+This paper compares the performance of the neural network based pressure projection approach of Tompson et al. [21] to traditional iterative solvers. Our investigation includes the Jacobi and preconditioned conjugate gradient solvers compared in the previous work, as well as a red-black Gauss-Seidel method, all running with a GPU implementation. Our investigation focuses on 2D fluid simulations and three scenarios that present boundary conditions and velocity sources of different complexity. We collect the convergence of the velocity divergence norm as the error measure in these test simulations and use plots of the error distribution to make high-level observations about the performance of iterative solvers in comparison to the fixed time cost of the neural network solution. Our results show that Jacobi provides the best bang for the buck with respect to minimizing error using a small fixed time budget.
+
+Index Terms: Computing methodologies—Physical simulation; Computing methodologies—Neural networks
+
+§ 1 INTRODUCTION
+
+Simulating realistic fluid effects such as smoke, fire and water is an important problem in physically-based animation with broad applications in film and video games. In comparison to the accuracy required in computational physics and engineering domains, computer graphics research typically aims to provide fast and approximate techniques. The stable fluids work of Stam [18] is an excellent example of this, which has likewise had a strong influence on graphics research over the last two decades. The key idea is to take a splitting approach with the incompressible Navier-Stokes equations, and to use different integration methods for solving each term. A pressure projection to produce a divergence free velocity field is an important and costly step, and has been on its own a topic of research.
+
+Our paper focuses specifically on comparing GPU implementations of iterative solvers with the work of Tompson et al. [21], which proposes the use of a learned convolutional neural network (CNN) to solve the pressure projection problem. With the growing popularity of learning techniques in physics simulation, we believe it is valuable to revisit this recent work using a set of tests that give us a better understanding of error and compute time in comparison to traditional methods. In addition to the Jacobi and preconditioned conjugate gradient solvers that were discussed by Tompson et al., we also include experiments with red-black Gauss-Seidel to understand how the convergence properties compare. We design three scenes to characterize typical 2D fluid simulation problems (e.g., see Figure 1 for one example), and collect statistics on the convergence across many time steps. With this data, we present the convergence in these simulations as distributions, allowing an understanding of the typical behaviour of the different approaches.
+
+§ 2 RELATED WORK
+
+Pressure projection is known to be one of the main bottlenecks for incompressible fluid simulation as it involves solving a Poisson equation (equivalently, a large-scale sparse linear system). The efficiency of the pressure solver can often be the determining factor in the overall speed of a fluid simulator. Such systems can be solved using iterative numerical methods including Jacobi, Gauss-Seidel (GS) and preconditioned conjugate gradient (PCG) [2, 7, 18, 19]. With the rise of multi-core CPUs and GPUs for general-purpose parallel computing, there has been a push to adapt classic numerical solvers in a wide variety of physics-based simulation problems for these parallel hardware platforms [8, 14]. For instance, GS is inherently a sequential algorithm suitable only for running on a single thread. However, a small modification inspired by graph colouring leads to the red-black Gauss-Seidel (RBGS) method, which enables a parallel implementation useful for a variety of applications. For instance, Pall et al. [15] simulate cloth systems with projective dynamics solved with a GPU-based RBGS implementation, which performs more iterations than its CPU-based serial counterpart, but attains lower residual error given the same time budget.
+
+Novel approaches for speeding up the pressure projection are important contributions in fluid simulation research. Extensions to multigrid approaches are one example that shows great promise in reducing computation times [8, 14]. McAdams et al. [14] employs multigrid as a preconditioner for their PCG solver. In contrast, Jung et al. [8] propose a heterogeneous CPU-GPU Poisson solver that does wavelet decomposition and smooth processing on CPU, and performs coarse level projection on GPU. In related work, Ando et al. [1] improve the coarse grid pressure solver initially presented by Lentine et al. [13], which reduces the size of the problem by introducing a novel change of basis to help resolve free-surface boundary conditions in liquid simulation.
+
+In contrast to numerical methods for fluid simulation, machine learning techniques present an interesting alternative. Yang et al. [23] present a data-driven method based on a neural network to infer pressure per grid cell that completely avoids the computations of traditional Poisson solvers. This approach significantly reduces the computational cost of pressure projection and makes it independent of the scene's resolution due to the algorithmic nature of forward propagation. The method still maintains sufficiently good accuracy for visual effects compared to traditional Poisson solvers as long as the network is properly trained. However, their method does not generalize well to previously unseen scenarios. Tompson et al. [21] identify test cases where the method of Yang et al. will fail, and propose a method to overcome these limitations. They instead pose the problem as an unsupervised learning task and introduce a novel architecture for pressure inference using a convolutional neural network (CNN), and their results show both greater stability and accuracy. A key part of their approach involves training and test sets representative of general fluids. They use wavelet turbulent noise [10] to produce random divergence-free velocity fields, and generate diverse boundaries from a collection of 3D models with varying scale, rotation and translation. They also employ data augmentation techniques, such as adding gravity, buoyancy, and vorticity confinement [20]. This helps their trained network be applicable to simulating fluids in a more general setting. In general, these learning based approaches exploit the observation that neural networks are very effective for regression, and are able to fit a deterministic function (a simulation problem) that maps inputs to the solution of an optimization problem.
+
+Similar to the current trend for applications in other fields, machine learning has found increasing popularity for fluid simulations as a powerful tool in a general sense, not only for pressure projection. Ladický et al. [12] formulate the Lagrangian fluid simulation as a regression problem and use the regression forest method to predict positions and velocities of particles, possibly with a very large time step. With a GPU implementation, they obtain a ${10} \times$ to ${1000} \times$ speed-up compared to a state-of-the-art position-based fluid solver. Most recent work has heavily adopted CNN-based methods to synthesize fluid flows, thanks to their proven performance on computer vision problems such as image classification (e.g., AlexNet [11]). Chu et al. [4] use a CNN to learn small yet expressive feature descriptors from fluid densities and velocities. With the help of those descriptors, their simulation algorithm is capable of rapidly generating high-resolution realistic smoke flows with reusable space-time flow data. Kim et al. [9] train a generative model with an underlying CNN structure to reconstruct divergence-free fluid simulation velocities from a collection of reduced parameters. Their network includes an autoencoder architecture to encode parametrizable velocity fields into a latent space representation. They report a ${1300} \times$ increase in compression rates and a ${700} \times$ performance improvement compared to a CPU solver.
+
+Beyond the core problem of fluid simulation, fluid super-resolution and controlling style is another direction of enormous interest that has seen great success in exploiting learning techniques. By leveraging the idea behind the generative adversarial network (GAN) [5], Xie et al. [22] introduce tempoGAN, which comprises two discriminators, one being a novel temporal discriminator, to learn both spatial and temporal evolution of data for further synthesis of fluid flows with high-resolution details only from low-resolution data. We observe that learning techniques are broadly useful in fluid simulation, and in our work we put effort into better understanding their benefits and limitations within the core problem, specifically in the solution of the Poisson problem to produce divergence-free velocity fields for incompressible fluids.
+
+§ 3 BACKGROUND
+
+In this section, we give a brief review of the mathematical background and approaches to physically-based fluid simulation. We focus only on the two-dimensional case, but the formulation generalizes readily to three dimensions.
+
+§ 3.1 FLUID EQUATIONS
+
+The motion of inviscid fluid flow is governed by the Euler equations, a special case of the famous Navier-Stokes equations [3],
+
+$$
+\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \frac{1}{\rho }\nabla p = \mathbf{f} \tag{1}
+$$
+
+$$
+\nabla \cdot \mathbf{u} = 0 \tag{2}
+$$
+
+where Equation 1 is usually referred to as the momentum equation, which is in fact Newton's second law applied to fluids, and Equation 2 is the incompressibility condition, which restricts the volume of the fluid to be constant. In these equations, $\mathbf{u}$ stands for velocity, $p$ is pressure, $\rho$ is density, and $\mathbf{f}$ is the total external force (e.g., gravity, buoyancy, vorticity confinement) acting on the fluid.
+
+We spatially discretize the above PDEs on a marker-and-cell (MAC) grid, as done by Harlow and Welch [6], rather than a simple collocated grid to avoid the non-trivial nullspace problem, and to make the boundary condition easier to handle. Then, we use the finite difference method to approximate all partial derivatives. On the fluid-solid boundary, we require the normal component of the fluid velocity to be equal to that of the solid's velocity, that is,
+
+$$
+\mathbf{u} \cdot \widehat{\mathbf{n}} = {\mathbf{u}}_{\text{ solid }} \cdot \widehat{\mathbf{n}},
+$$
+
+or their relative velocity has zero normal component.
+
+§ 3.2 STABLE FLUIDS FRAMEWORK
+
+Suppose the current velocity field ${\mathbf{u}}^{t}$ at time $t$ is known, and we would like to advance our simulation to the next state and obtain ${\mathbf{u}}^{t + {\Delta t}}$ given time step ${\Delta t}$ . It requires us to integrate the momentum equation while keeping the incompressibility condition satisfied, but the momentum equation involves three terms (not counting the velocity partial derivative with respect to time). Because it is challenging to handle all terms simultaneously, we use Stam's operator splitting approach to integrate them one by one [18]. This provides a modular framework, shown in Algorithm 1, which is first-order accurate (see Bridson's book [3]). Thus, we solve for the velocity field at time $t + {\Delta t}$ in three steps.
+
+On line 3 of Algorithm 1, we first compute and add all external forces $\mathbf{f}$ over time step ${\Delta t}$ to the velocity field according to the standard forward Euler integration formula. Then on line 4, we self-advect the velocity field using the MacCormack method proposed by Selle et al. [17]. Finally, on lines 5-6, we perform the pressure projection to make the resulting field ${\mathbf{u}}^{t + 1}$ divergence-free by solving the Poisson problem
+
+$$
+{\nabla }^{2}p = \frac{\Delta t}{\rho }\nabla \cdot {\mathbf{u}}^{B}. \tag{3}
+$$
+
+This is equivalent to a linear system $\mathbf{{Ax}} = \mathbf{b}$ under our spatial discretization scheme, where $\mathbf{A}$ is a sparse, symmetric, and positive definite matrix often called the five-point Laplacian matrix, $\mathbf{b}$ is a vector of velocity divergence of every fluid cell, and $\mathbf{x}$ consists of all pressure unknowns we desire to find. Stam [18] showed that solvers coupled with a semi-Lagrangian advection algorithm such as MacCormack and an accurate Poisson solver are unconditionally stable, meaning our simulation will not "blow up" no matter how large the time step ${\Delta t}$ . As a result, we can arbitrarily vary the time step ${\Delta t}$ in accordance with our requirement to create fluid animations with different visual effects.
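For illustration, the sketch below assembles this five-point Laplacian densely for a small all-fluid grid with solid walls on every side (grid spacing and the $\Delta t / \rho$ factor are assumed folded into $\mathbf{b}$ ; a production solver would of course use a sparse, GPU-friendly representation). The final assertions confirm the symmetry of $\mathbf{A}$ , and also the rank deficiency that arises with pure Neumann boundaries, which is discussed further in the context of PCG:

```python
import numpy as np

# Dense five-point Laplacian assembly for an n-by-n all-fluid grid with
# solid (Neumann) walls on every side.
n = 8
N = n * n
A = np.zeros((N, N))

def idx(i, j):
    # Flatten a 2D grid index to a row/column of A.
    return i * n + j

for i in range(n):
    for j in range(n):
        k = idx(i, j)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # fluid neighbour
                A[k, k] += 1.0
                A[k, idx(ni, nj)] -= 1.0
            # solid neighbour: Neumann boundary contributes nothing

# A is symmetric positive semidefinite, but rank deficient: the constant
# pressure vector lies in its nullspace.
assert np.allclose(A, A.T)
assert np.allclose(A @ np.ones(N), 0.0)
```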
+
+Initialize a divergence-free velocity field ${\mathbf{u}}^{0}$ ;
+
+for $t \leftarrow 0,1,2,\cdots$ do
+
+ ${\mathbf{u}}^{A} \leftarrow {\mathbf{u}}^{t} + {\Delta t}\mathbf{f};$
+
+ ${\mathbf{u}}^{B} \leftarrow \operatorname{Advect}\left( {{\mathbf{u}}^{A},{\Delta t}}\right)$ ;
+
+ $p \leftarrow$ SolvePoisson $\left( {\mathbf{u}}^{B}\right)$ ;
+
+ ${\mathbf{u}}^{t + 1} \leftarrow {\mathbf{u}}^{B} - \frac{\Delta t}{\rho }\nabla p$
+
+end
+
+ Algorithm 1: Stable fluids solver.
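The loop above can be sketched in a few lines of Python, with `advect` and `solve_poisson` left as injectable placeholders for the MacCormack advection and whichever pressure solver is being compared (the stand-ins below are trivial no-ops so the skeleton runs):

```python
import numpy as np

def step(u, dt, rho, f, advect, solve_poisson, grad):
    # One time step of the stable fluids splitting scheme (Algorithm 1).
    u_a = u + dt * f                      # line 3: external forces
    u_b = advect(u_a, dt)                 # line 4: self-advection
    p = solve_poisson(u_b)                # line 5: pressure Poisson solve
    return u_b - (dt / rho) * grad(p)     # line 6: pressure projection

# Trivial stand-ins so the skeleton executes: identity advection, zero pressure.
u = np.zeros((16, 16, 2))
u_next = step(u, dt=0.1, rho=1.0, f=np.zeros_like(u),
              advect=lambda v, dt: v,
              solve_poisson=lambda v: np.zeros(v.shape[:2]),
              grad=lambda p: np.zeros(p.shape + (2,)))
```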
+
+§ 3.3 ITERATIVE METHODS FOR LINEAR SYSTEMS
+
+Classic direct linear solvers, such as Cholesky factorization, can be used to solve the above Poisson equation if the number of unknowns is relatively small. When it comes to large-scale applications, however, those methods become impractical as they suffer from high time and space complexities. Even for fluid simulations where the matrix $\mathbf{A}$ is very sparse, the nonzero fill in the Cholesky factorization for large problems can become unattractive. Instead, we can use iterative linear solvers that gradually approximate the solution by repeatedly applying the update rule ${\mathbf{x}}^{k + 1} = f\left( {\mathbf{x}}^{k}\right)$ specified by the solver, given an initial guess ${\mathbf{x}}^{0}$ to the solution. Iterative methods can be fast compared to their direct counterparts, use minimal storage, and can be stopped at any time to yield an approximate solution. We briefly review the methods relevant to our experiments in this section, while Saad's book [16] provides an excellent reference for these techniques.
+
+§ 3.3.1 JACOBI
+
+The update rule of Jacobi method can be written as
+
+$$
+{\mathbf{x}}_{i}^{k + 1} = \frac{1}{{\mathbf{A}}_{ii}}\left( {{\mathbf{b}}_{i} - \mathop{\sum }\limits_{{j \neq i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k}}\right) , \tag{4}
+$$
+
+where ${\mathbf{A}}_{ij}$ and ${\mathbf{x}}_{j}^{k}$ denote the entry of matrix $\mathbf{A}$ at row $i$ and column $j$ , and the ${j}^{\text{ th }}$ component of vector $\mathbf{x}$ at iteration $k$ , respectively. Note that the positive definiteness of $\mathbf{A}$ implies $\mathbf{A}$ is invertible and all diagonal elements ${\mathbf{A}}_{ii}$ are positive, ensuring the validity of our update rule. Furthermore, every component of ${\mathbf{x}}^{k + 1}$ depends only on known values ${\mathbf{x}}_{j}^{k}$ , hence Jacobi is inherently parallelizable, making it trivial to implement and efficient to run on the GPU. However, this method tends to have a slow convergence rate and it may take many iterations before we get a satisfying solution.
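A minimal NumPy sketch of the update rule in Equation 4 (the actual solver runs this rule in parallel on the GPU; here it is simply vectorized over $i$ ):

```python
import numpy as np

def jacobi(A, b, iters=200):
    # Jacobi update x^{k+1} = D^{-1}(b - (A - D) x^k), where D = diag(A).
    x = np.zeros_like(b)
    d = np.diag(A)
    off = A - np.diag(d)          # off-diagonal part of A
    for _ in range(iters):
        x = (b - off @ x) / d
    return x

# Example on a small symmetric, diagonally dominant system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
assert np.allclose(A @ x, b, atol=1e-8)
```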
+
+§ 3.3.2 GAUSS-SEIDEL (GS)
+
+Consider modifying the update rule of Jacobi to use the updated values ${\mathbf{x}}_{j}^{k + 1}$ once they are available. In this way, we obtain the Gauss-Seidel method defined as
+
+$$
+{\mathbf{x}}_{i}^{k + 1} = \frac{1}{{\mathbf{A}}_{ii}}\left( {{\mathbf{b}}_{i} - \mathop{\sum }\limits_{{j < i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k + 1} - \mathop{\sum }\limits_{{j > i}}{\mathbf{A}}_{ij}{\mathbf{x}}_{j}^{k}}\right) . \tag{5}
+$$
+
+More precisely, for the ${i}^{\text{ th }}$ component of ${\mathbf{x}}^{k + 1}$ , we compute the matrix-vector product by replacing ${\mathbf{x}}_{j}^{k}$ with ${\mathbf{x}}_{j}^{k + 1}$ when $j < i$ , and keep everything unchanged when $j > i$ . Gauss-Seidel tends to exhibit a faster convergence rate, but loses Jacobi's parallel nature and becomes a sequential algorithm.
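The corresponding sketch of Equation 5 makes the sequential dependence explicit: the inner loop over $i$ cannot be vectorized, because each component uses values already updated earlier in the same sweep:

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    # In-place sweep: x[:i] already holds the k+1 values when row i is
    # processed, which is exactly the update rule of Equation 5.
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
assert np.allclose(A @ x, b, atol=1e-10)
```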
+
+§ 3.3.3 RED-BLACK GAUSS-SEIDEL (RBGS)
+
+Graph colouring can provide a mechanism for obtaining the benefits of Gauss-Seidel while still allowing for parallelization. We can construct an undirected dependency graph $G$ as follows:
+
+
+Figure 2: Left: The dependency graph for a $4 \times 4$ fluid pressure grid. Right: A red-black colouring partitions the graph into two independent sets. All vertices within the same set can be solved in parallel.
+
+ * Each component of $\mathbf{x}$ is a vertex;
+
+ * If the computation of ${\mathbf{x}}_{i}$ and ${\mathbf{x}}_{j}$ is interdependent, then add an edge $(i, j)$ to $G$ .
+
+At the $(i, j)$ entry of the fluid grid, the Laplacian of the pressure $\mathbf{P}$ is equal to the divergence of the velocity $\mathbf{u}$ . The discretized relationship is given by
+
+$$
+4{\mathbf{P}}_{i,j} - {\mathbf{P}}_{i - 1,j} - {\mathbf{P}}_{i + 1,j} - {\mathbf{P}}_{i,j - 1} - {\mathbf{P}}_{i,j + 1} = {\left( \nabla \cdot \mathbf{u}\right) }_{i,j}. \tag{6}
+$$
+
+A vertex in graph $G$ corresponding to pressure ${\mathbf{P}}_{i,j}$ is connected to all its neighbours in a five-point stencil, namely, ${\mathbf{P}}_{i - 1,j},{\mathbf{P}}_{i + 1,j},{\mathbf{P}}_{i,j - 1}$ and ${\mathbf{P}}_{i,j + 1}$ . We can colour this dependency graph by alternately choosing between two colours, red and black. The coloured graph displays a checkerboard pattern; a simplified example is shown in Figure 2. Observe that vertices of the same colour form an independent set, which means the computation for every vertex in that set can be done in parallel. As a consequence, each Gauss-Seidel iteration on Equation 3 can be performed in two parallel steps.
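A compact NumPy sketch of one red-black sweep for the stencil of Equation 6 follows. For simplicity it assumes periodic boundaries and an even grid size, so that the checkerboard colouring stays consistent (unlike the walled scenes in our experiments); each colour update is a single vectorized, hence parallelizable, half-step:

```python
import numpy as np

def rbgs_sweep(p, rhs):
    # One red-black Gauss-Seidel sweep for
    # 4 p_ij - p_{i-1,j} - p_{i+1,j} - p_{i,j-1} - p_{i,j+1} = rhs_ij
    # on a periodic, even-sized grid. Neighbours of a red cell are all
    # black and vice versa, so each colour is one parallel update.
    i, j = np.indices(p.shape)
    for colour in (0, 1):
        nb = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
              + np.roll(p, 1, 1) + np.roll(p, -1, 1))
        mask = (i + j) % 2 == colour
        p[mask] = (rhs[mask] + nb[mask]) / 4.0
    return p

def residual(p, rhs):
    nb = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
          + np.roll(p, 1, 1) + np.roll(p, -1, 1))
    return np.linalg.norm(4.0 * p - nb - rhs)

# Manufacture a consistent right-hand side from a known pressure field.
rng = np.random.default_rng(2)
p_true = rng.random((16, 16))
nb = (np.roll(p_true, 1, 0) + np.roll(p_true, -1, 0)
      + np.roll(p_true, 1, 1) + np.roll(p_true, -1, 1))
rhs = 4.0 * p_true - nb

p = np.zeros((16, 16))
r0 = residual(p, rhs)
for _ in range(50):
    p = rbgs_sweep(p, rhs)
assert residual(p, rhs) < 0.1 * r0
```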
+
+§ 3.3.4 PRECONDITIONED CONJUGATE GRADIENT (PCG)
+
+Given matrix $\mathbf{A}$ and vector $\mathbf{v}$ , a Krylov subspace of ${\mathbb{R}}^{n}$ has the form
+
+$$
+{\mathcal{K}}_{n}\left( {\mathbf{A},\mathbf{v}}\right) = \operatorname{span}\left\{ {\mathbf{v},\mathbf{{Av}},{\mathbf{A}}^{2}\mathbf{v},\cdots ,{\mathbf{A}}^{n - 1}\mathbf{v}}\right\} .
+$$
+
+Conjugate Gradient (CG) is an orthogonal projection method onto the Krylov subspace ${\mathcal{K}}_{n}\left( {\mathbf{A},{\mathbf{r}}_{0}}\right)$ where ${\mathbf{r}}_{0} = \mathbf{b} - \mathbf{A}{\mathbf{x}}_{0}$ is the initial residual for an initial guess ${\mathbf{x}}_{0}$ . It recasts solving the linear system as an optimization problem for the quadratic function
+
+$$
+f\left( \mathbf{x}\right) = \frac{1}{2}{\mathbf{x}}^{\top }\mathbf{A}\mathbf{x} - {\mathbf{x}}^{\top }\mathbf{b},
+$$
+
+in which the solution to the system ${\mathbf{x}}^{ * }$ is its unique minimizer. CG minimizes this function by line searches along mutually $\mathbf{A}$ -conjugate directions, in the spirit of gradient descent. To improve the convergence rate, we further incorporate an incomplete Cholesky preconditioner with zero fill-in, $\operatorname{IC}\left( 0\right)$ . In our experiments, this is what we use for preconditioned conjugate gradients (PCG). CG and PCG expect the matrix $\mathbf{A}$ to be symmetric positive definite, but eliminating variables by substituting boundary conditions into Equation 6 leads to a rank-deficient matrix. To avoid the problems caused by matrix $\mathbf{A}$ not being full rank (both for the incomplete Cholesky preconditioner and for the CG method), we instead solve the regularized system
+
+$$
+\left( {\mathbf{A} + \lambda \mathbf{I}}\right) \mathbf{x} = \mathbf{b}
+$$
+
+for a carefully selected regularization parameter $\lambda$ . Notice that in all our results we still report the residual norm as $r = \parallel \mathbf{b} - \mathbf{{Ax}}\parallel$ rather than $\parallel \mathbf{b} - \left( {\mathbf{A} + \lambda \mathbf{I}}\right) \mathbf{x}\parallel$ . Our regularized $\operatorname{IC}\left( 0\right)$ PCG solver is given in Algorithm 2.
+
+Data: $\mathbf{A},\mathbf{b}$ , initial guess ${\mathbf{x}}_{0}$ , regularization parameter $\lambda$
+
+Result: $\mathbf{x}$
+
+$\mathbf{M} \leftarrow$ Generate-IC(0)-Preconditioner( $\mathbf{A}$ );
+
+$\mathbf{A} \leftarrow \mathbf{A} + \lambda \mathbf{I};$
+
+${\mathbf{r}}_{0} \leftarrow \mathbf{b} - \mathbf{A}{\mathbf{x}}_{0}$ ;
+
+${\mathbf{z}}_{0} \leftarrow {\mathbf{M}}^{-1}{\mathbf{r}}_{0}$
+
+${\mathbf{p}}_{0} \leftarrow {\mathbf{z}}_{0}$ ;
+
+for $j \leftarrow 0,1,2,\cdots$ do
+
+ ${\alpha }_{j} \leftarrow \frac{{\mathbf{r}}_{j}^{\top }{\mathbf{z}}_{j}}{{\mathbf{p}}_{j}^{\top }{\mathbf{{Ap}}}_{j}};$
+
+ ${\mathbf{x}}_{j + 1} \leftarrow {\mathbf{x}}_{j} + {\alpha }_{j}{\mathbf{p}}_{j}$
+
+ ${\mathbf{r}}_{j + 1} \leftarrow {\mathbf{r}}_{j} - {\alpha }_{j}{\mathbf{{Ap}}}_{j}$ ;
+
+ if $\begin{Vmatrix}{{\mathbf{r}}_{j + 1} + \lambda {\mathbf{x}}_{j + 1}}\end{Vmatrix} < \varepsilon$ for a given threshold $\varepsilon$ then
+
+ exit the loop;
+
+ end
+
+ ${\mathbf{z}}_{j + 1} \leftarrow {\mathbf{M}}^{-1}{\mathbf{r}}_{j + 1};$
+
+ ${\beta }_{j} \leftarrow \frac{{\mathbf{r}}_{j + 1}^{\top }{\mathbf{z}}_{j + 1}}{{\mathbf{r}}_{j}^{\top }{\mathbf{z}}_{j}};$
+
+ ${\mathbf{p}}_{j + 1} \leftarrow {\mathbf{z}}_{j + 1} + {\beta }_{j}{\mathbf{p}}_{j}$
+
+end
+
+Algorithm 2: The regularized PCG solver used in our experiments.
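
A compact NumPy sketch of Algorithm 2 follows. Since IC(0) is more involved, this version substitutes a simple diagonal (Jacobi) preconditioner, which is our own simplification rather than the preconditioner evaluated in the paper; the stopping test nonetheless mirrors Algorithm 2 by monitoring the true residual $\mathbf{b} - \mathbf{Ax} = \mathbf{r} + \lambda\mathbf{x}$:

```python
import numpy as np

def regularized_pcg(A, b, lam=1e-4, tol=1e-8, max_iter=200):
    """Sketch of Algorithm 2 with a diagonal (Jacobi) preconditioner standing in
    for IC(0) (our simplification). The iteration solves (A + lam*I) x = b,
    while the stopping test monitors the residual of the original system,
    b - A x = r + lam * x."""
    n = len(b)
    Areg = A + lam * np.eye(n)
    Minv = 1.0 / np.diag(Areg)          # stand-in preconditioner M^{-1}
    x = np.zeros(n)
    r = b - Areg @ x                    # r_0 for the initial guess x_0 = 0
    z = Minv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = Areg @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new + lam * x) < tol:  # true residual ||b - A x||
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```

With a tiny $\lambda$, the returned iterate solves the original system to high accuracy; with a larger $\lambda$, the true residual bottoms out near $\lambda \parallel \mathbf{x} \parallel$.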
+
+
+Figure 3: Three test scenes we designed and used to demonstrate and compare the performance of each pressure solver.
+
+§ 4 RESULTS
+
+We implemented the Jacobi, RBGS, and PCG solvers on the GPU in C++ using NVIDIA's CUDA, cuBLAS, and cuSPARSE libraries. All experiments were carried out on a ${3.6}\mathrm{{GHz}}$ Intel i9-9900K CPU with 32 GB of RAM, and an NVIDIA GeForce RTX 2080 Ti GPU, which contains 4352 shading units.
+
+§ 4.1 TEST SCENES AND PARAMETERS
+
+Figure 3 shows the three test scenes that we created to evaluate the error and convergence of the different solvers in different situations. The Bunny scene is designed to examine a solver's basic performance in a simple scenario. It consists of an obstacle in the shape of a 2D Stanford bunny located at the centre of the domain, with an emitter shooting out red plumes from the midpoint of the bottom side. The Two Plumes scene is identical to the Bunny scene except that there is a second, blue plume emitter placed in the gap behind the ears of the bunny model. This scene mainly aims to test the case of multiple plume sources. Finally, we designed a Channeling scene, which has symmetry in the vertical direction, to both test fluid channeling behaviours and to visually compare how well different pressure solvers preserve the natural symmetry within the plume. In all scenes, white colours represent obstacles, and each side of the domain is a wall, though not shown explicitly.
+
+In order to obtain consistent and convincing results, we kept the test environment parameters identical across all our experiments. Specifically, we use a resolution of ${256} \times {256}$ pixels, run every simulation for 300 frames with a step size of 0.25 seconds, and execute 200 iterations for each of the iterative solvers. We do not use vorticity confinement, gravity, or buoyancy.
+
+Notice that the CNN-based solver by Tompson et al. [21] is not an iterative method, so it does not have a residual that converges over iterations like the Jacobi, RBGS, and PCG solvers. However, using the Helmholtz-Hodge decomposition, we can observe that the residual is in fact equal to the norm of the divergence of the velocity at the next time step (see Appendix A). Thus we use this quantity as a means of comparing the iterative methods to the neural network approach.
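
For reference, the divergence norm used as the common metric can be computed as follows. This sketch assumes a MAC-style staggered grid (u on vertical faces, v on horizontal faces), which is a common discretization choice rather than a detail stated in the text:

```python
import numpy as np

def divergence_norm(u, v, h=1.0):
    """L2 norm of the discrete divergence on a staggered (MAC-style) grid:
    u has shape (n, m+1) (x-velocities on vertical cell faces),
    v has shape (n+1, m) (y-velocities on horizontal cell faces)."""
    div = (u[:, 1:] - u[:, :-1]) / h + (v[1:, :] - v[:-1, :]) / h
    return float(np.linalg.norm(div))
```

A constant velocity field is divergence-free, while a field growing linearly in x contributes unit divergence in every cell.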
+
+§ 4.2 PERFORMANCE
+
+Before examining and comparing the performance of the various methods, we first need to choose a suitable regularization parameter $\lambda$ for our PCG solver with respect to each scene. For the Bunny scene, we investigate the case with no regularization (i.e., $\lambda = 0$ ) by running 400 PCG iterations and noting the relative residual, as shown in Figure 4(a). The simulation is executed for only one frame, because the relative residual diverges and the animation of subsequent frames blows up.
+
+Given the diverging residual that we observe with the rank-deficient matrix $\mathbf{A}$ , we explore different amounts of regularization. Specifically, we test how well $\lambda = {10}^{-5},\lambda = {10}^{-4},\lambda = {10}^{-3}$ work, repeating the same procedure as before. Figure 4(b)-(d) shows the order statistics (minimum, ${25}^{\text{ th }}$ percentile, median, ${75}^{\text{ th }}$ percentile, maximum) of relative residuals across all 300 frames at different iteration numbers. Since we only run 200 PCG iterations in our main comparisons, we choose $\lambda = {10}^{-4}$ as it attains the lowest true-residual median at iteration 200 among the three choices of $\lambda$ . Using the same criterion, we also selected $\lambda = {10}^{-4}$ for the Two Plumes and Channeling scenes.
+
+Figure 5 shows order statistics of relative residual over Jacobi, RBGS, and regularized PCG solver iterations for all $\mathbf{{Ax}} = \mathbf{b}$ systems and for each scene across 300 frames. We note that the iterations do not take the same amount of time, but the plots are still useful to observe the typical reduction in residual across different steps of the simulation.
+
+We also show the order statistics of the absolute residual $\parallel \nabla \cdot \mathbf{u}{\parallel }_{2}$ over total solving time (in milliseconds) for every scene in Figure 6. Since the forward propagation of the CNN-based solver (Tompson et al. [21]) always takes a fixed amount of time, its performance is depicted as the intersection of the horizontal and vertical purple lines in the graph. While PCG has the fastest convergence rate per iteration and achieves the lowest residual, it takes longer to solve in wall-clock time than the highly parallel GPU implementations of the Jacobi, RBGS, and CNN-based solvers.
+
+§ 4.3 ROBUSTNESS
+
+An important feature of a good numerical method is robustness, which we investigate by observing the behaviour of the four solvers when there is a sudden and fierce change of external conditions. In our stress test, we monitor the ${L}_{2}$ norm of the velocity divergence of the pressure solvers to see how rapidly the residual returns to normal levels after a large external pressure is injected into the fluid region.
+
+Figure 7 shows the set-up and result of our experiment. Our fluid simulator supports user interaction with a mouse to perturb the plume behaviour, and it is with this mechanism that we inject a scripted mouse movement. We first run the simulation with the Jacobi solver for 100 frames; then, at the moment that we inject the external pressure, we switch to the different solvers to examine the post-disturbance divergence norm trajectory.
+
+The scripted mouse interactions used to inject a consistent disturbance in each test, and the result of the disturbance, can be seen in Figure 7(a)-(b). The disturbance lasts a total of 10 frames, with the scripted mouse drag first moving horizontally across the scene for one frame, and then vertically in the next frame, with the process repeating 5 times. This produces a strong shape distortion of the red plume, as seen in Figure 7(b). We then observe the norm of the divergence of the velocity after frame 110 to understand how high the norm rises and how quickly it drops to normal levels for the different solvers. The divergence norm trajectories in Figure 7(c) show that PCG has the most stable behaviour during the perturbation, while the method of Tompson et al. [21] fluctuates more violently than all others. After the perturbation ends, the norm of the velocity divergence gradually drops to lower levels in all cases. Comparing the final divergence norms after the simulation stops at frame 300, we see that PCG achieves the lowest residual, followed by Jacobi and RBGS (with similar behaviour), while the method of Tompson et al. [21] has a much higher residual and does not ultimately return to levels similar to those prior to the disturbance.
+
+
+Figure 4: Convergence for the Bunny scene. (a) Without regularization, the relative residual over the PCG solver’s iterations for the $\mathbf{{Ax}} = \mathbf{b}$ system corresponding to frame 1 shows poor, divergent behaviour. (b)–(d) Order statistics of the true relative residual at each PCG iteration (that is, measured without regularization), for all $\mathbf{{Ax}} = \mathbf{b}$ systems across 300 frames when the regularization parameter is $\lambda = {10}^{-5},\lambda = {10}^{-4},\lambda = {10}^{-3}$ .
+
+
+Figure 5: Order statistics of relative residual over each solver’s iteration for all $\mathbf{{Ax}} = \mathbf{b}$ systems of each scene across 300 frames. Five curves from top to bottom of every solver represent maximum, ${75}^{\text{ th }}$ percentile, median, ${25}^{\text{ th }}$ percentile, minimum, respectively, and only curves for medians are in bold for clarity. PCG shows the true residual rather than that of the regularized problem being solved.
+
+
+Figure 6: Order statistics of the residual norm $\parallel \nabla \cdot \mathbf{u}\parallel$ over each solver’s running time across 300 frames. Five curves from top to bottom of every solver represent maximum, ${75}^{\text{ th }}$ percentile, median, ${25}^{\text{ th }}$ percentile, minimum, respectively. Median curves are bold for clarity. Since the CNN-based solver (Tompson et al. [21]) always takes a fixed amount of time, its absolute residual over total solving time is shown as the purple intersection.
+
+
+Figure 7: (a) The black arrow shows the mouse trajectory for external pressure injection. The mouse moves first horizontally for one frame, then vertically for the next frame, and this process is repeated 5 times, giving a 10-frame violent scene perturbation. (b) Strong shape distortion of the red plume caused by our violent mouse perturbation. (c) ${L}_{2}$ norm of the velocity divergence over simulation frame count for different fluid solvers in our mouse perturbation experiment. The external pressure is injected at frame 101, and the injection terminates after frame 110.
+
+§ 4.4 DISCUSSION AND LIMITATIONS
+
+There are some high-level observations we can make about our experiments. First, we expected RBGS to converge more quickly than Jacobi, at least with respect to iteration count, but we did not observe this in our test scenes. We had also expected PCG to behave much better; it is perhaps still possible that, with a different problem formulation and GPU implementation, we could see greater advantages with PCG. We note that there is likewise a vast number of other solvers we are not including in our comparison, such as those that exploit substructuring, or others based on multigrid. We believe all the results are plausible, but we also note that we were not able to observe preservation of symmetry in cases where it would be expected. That is, there is no guarantee that symmetry would be preserved with a neural network approach, and the arbitrary ordering of red before black in RBGS would lead us not to expect symmetry, but we had expected symmetric plumes in our Channeling scene when using Jacobi and PCG. It is possible that our initial conditions and plume locations make it hard to achieve symmetry in these examples, and the single-precision floating point used in the GPU implementations may be yet another explanation. Finally, it is interesting to note that the method of Tompson et al. actually increases the norm of the velocity divergence in our test scenes. While this is surprising, we note that the solutions it produces still tend to be useful for producing plausible incompressible flows.
+
+§ 5 CONCLUSION AND FUTURE WORK
+
+We present a collection of tests to understand the convergence behaviours of different pressure solvers in 2D fluid simulations. Specifically, we compare Jacobi, red-black Gauss-Seidel, regularized preconditioned conjugate gradients, and a convolutional neural network (CNN) based solution proposed by Tompson et al. [21]. Our results show that Jacobi provides the best bang for the buck with respect to minimizing error within a small fixed time budget when running on the GPU. We note that the CNN solution has less desirable properties in our 2D tests, but may ultimately prove to be a good choice for specific scenarios, or larger 3D simulations. There are a number of interesting directions for future work, such as repeating the experiments in 3D and including comparisons with other solvers such as multigrid methods.
+
+§ A HELMHOLTZ-HODGE DECOMPOSITION
+
+The Helmholtz-Hodge decomposition states that every vector field $\mathbf{u}$ can be decomposed into a divergence-free vector field $\mathbf{v}$ and a gradient field $\nabla q$ , that is,
+
+$$
+\mathbf{u} = \mathbf{v} + \nabla q
+$$
+
+Stam [18] shows that the divergence of this decomposition leads to the Poisson problem in Equation 3 needed for the pressure projection.
+
+This problem has the form
+
+$$
+{\nabla }^{2}q = \nabla \cdot {\mathbf{u}}^{B},
+$$
+
+hence, the residual produced by iterative methods is defined as
+
+$$
+\mathbf{r} = \nabla \cdot {\mathbf{u}}^{B} - {\nabla }^{2}q
+$$
+
+Furthermore, the velocity update formula on line 6 in Algorithm 1 can be rewritten as
+
+$$
+{\mathbf{u}}^{t + 1} = {\mathbf{u}}^{B} - \nabla q
+$$
+
+Again, taking the divergence of both sides of the above equation yields
+
+$$
+\nabla \cdot {\mathbf{u}}^{t + 1} = \nabla \cdot {\mathbf{u}}^{B} - {\nabla }^{2}q = \mathbf{r},
+$$
+
+which justifies our comparison of the residual norm to the norm of the divergence of the next-time-step velocity produced by the neural network approach.
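
This identity holds exactly at the discrete level whenever the same discrete operators are composed consistently. A small NumPy check on a periodic grid (our own illustrative discretization, using a forward-difference gradient and its adjoint backward-difference divergence) confirms that $\nabla \cdot {\mathbf{u}}^{t+1}$ equals the residual $\mathbf{r}$ for any pressure guess $q$:

```python
import numpy as np

def grad(q):
    """Forward-difference gradient on a periodic grid."""
    return np.roll(q, -1, axis=1) - q, np.roll(q, -1, axis=0) - q

def div(u, v):
    """Backward-difference divergence, the adjoint pairing for the gradient above."""
    return (u - np.roll(u, 1, axis=1)) + (v - np.roll(v, 1, axis=0))

def laplacian(q):
    """Composing divergence with gradient gives the discrete Laplacian."""
    gx, gy = grad(q)
    return div(gx, gy)

rng = np.random.default_rng(0)
uB, vB = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))
q = rng.standard_normal((16, 16))        # an arbitrary (inexact) pressure guess
r = div(uB, vB) - laplacian(q)           # residual of the Poisson problem
gx, gy = grad(q)
u1, v1 = uB - gx, vB - gy                # velocity update u^{t+1} = u^B - grad q
```

By linearity of the divergence operator, `div(u1, v1)` equals `r` regardless of how inaccurate `q` is, which is exactly the relationship used in Section 4.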
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0a92d6b24e86b7cd6511de482211ddeba24d8cc
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,405 @@
+# Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces
+
+Nils Rodrigues* Christoph Schulz* Antoine Lhuillier ${}^{ \dagger }$ Daniel Weiskopf*
+
+Visualization Research Center (VISUS)
+
+University of Stuttgart, Germany
+
+
+
+Figure 1: Our Cluster-Flow Parallel Coordinates Plot (CF-PCP) combines advantages of regular Parallel Coordinate Plots (PCPs) and Scatter Plots (SPs), as shown here using 2,000 generated points in dimensions D1-D4. CF-PCPs are read from left to right: the data is grouped by pairwise dimensional clustering, i.e., stacked axes beneath ${D}_{i}$ show all clusters from subspace ${D}_{i - 1} \times {D}_{i}$ . CF-PCPs allow for salient illustration of clusters and traceability across multiple dimensions alike. Thus, we argue that our technique can reveal patterns that are difficult to perceive from a linked combination of SPs and traditional PCPs (cf. red and blue data points).
+
+## Abstract
+
+We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on ${\mathrm{A}}^{ * }$ . It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization application domains-Information visualization
+
+Graphics Interface Conference 2020. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+## 1 INTRODUCTION
+
+The analysis of multivariate or multidimensional data is a longstanding research topic in visualization [51]. Nowadays, with multivariate data being ubiquitous, a good interplay of automated data analysis and visualization is very important for gaining insights. In the realm of data analysis, subspace clustering allows analysts to find cross-dimensional relationships between data points, leading to useful classification methods $\left\lbrack {9,{16}}\right\rbrack$ . On the other side of the spectrum stands an important class of visualization techniques: parallel coordinates plots (PCPs) [22,27]. They play an important role in visualizing multivariate data as their core concept of parallel axes is easy to grasp and, unlike scatter plots, they scale for increasing dimensionality. Typically, they render each data point as a polyline, curve, or density field [21]. Existing techniques combine PCPs with either a single global cluster assignment for each data point [2] or with clusters in 1D data dimensions [39]. They then resort to edge bundling or color coding to show cluster memberships [22].
+
+In this work, we aim to combine subspace clustering with the visualization advantages of PCPs. To this end, we propose a novel approach that facilitates the visualization of clusters in 2D subspaces while still maintaining information about the characteristics of individual data elements in PCPs: the Cluster-Flow Parallel Coordinates Plot (CF-PCP). As depicted in Figure 1, our approach works on two visual levels inherent to the image. On the overview level, a coarse visualization allows users to trace and follow the evolution of subspace clusters across dimensions by duplicating axes for each cluster and stacking them vertically. On the detail level, it maintains the readability of correlation of data points and other data characteristics by ensuring that the incoming and outgoing links reflect the original information from regular PCPs, i.e., our approach keeps the original slopes of each data element. We demonstrate that CF-PCPs allow users to trace both hard and fuzzy subspace clusters across dimensions. Moreover, we propose new metrics to optimize dimension ordering in CF-PCPs based on subspace clusters and to reduce crossings between clusters. Our main contributions are
+
+---
+
+*e-mail: firstname.lastname@visus.uni-stuttgart.de
+
+${}^{ \dagger }$ e-mail: antoine.lhuillier@gmail.com
+
+---
+
+
+
+Figure 2: Comparison of different clustering and PCP techniques using the NetPerf data set [49]. The edge-bundling layout (a) replaces lines entirely and only draws a single band between each pair of neighboring 1D clusters [39]. Our cluster-flow parallel coordinates (b) draw individual lines between 2D clusters of neighboring axes. Clusters are arranged vertically with our crossing minimization algorithm from Section 4.3. Illustrative parallel coordinates (c) emphasize clusters over all dimensions using colored lines and force-based bundling [36]. Figures (a) and (c) © 2014 IEEE. Reprinted, with permission, from [39].
+
+- a new PCP with density rendering for subspace clustering that preserves the readability of correlations,
+
+- an approach to visualizing uncertainty from fuzzy clustering,
+
+- an A*-based algorithm for the optimal layout with respect to a novel set of metrics that reflect the compatibilities of clusters between dimensions, and
+
+- a sample implementation of CF-PCP ${}^{1}$ using fuzzy DBSCAN ${}^{2}$ for subspace clustering.
+
+## 2 RELATED WORK
+
+Multivariate or multidimensional visualization is a major and vibrant research area of the visualization community. Respective survey papers are available from Wong and Bergeron [51] and Liu et al. [35]. Our work addresses the visual mapping of multivariate data to PCPs; therefore, the discussion of related work focuses on parallel coordinates, in particular, in combination with clustering. Parallel coordinates for data analysis go back to seminal work by Inselberg [26, 28] and later by Wegman [50]. For a comprehensive presentation of PCPs, we refer to Inselberg's book [27]. Despite its popularity, the underlying geometry of a PCP coupled with a high number of data points can quickly lead to overdraw and thus visual clutter [22]. This makes it hard for users to explore and analyze patterns in the data set. To address these challenges, researchers have investigated cluster visualization and the saliency of underlying patterns.
+
+A first approach to cluster visualization in PCPs is to explicitly compute clusters in the data set and display them using different visual encodings. Inselberg [26] suggested drawing the envelope of the respective lines in parallel coordinates using the convex hull. Fua et al. [14] investigated rendering clusters with convex quadrilaterals resembling the axis-aligned bounding box of a cluster. More recently, Palmas et al. [39] proposed pre-computing 1D clusters for each dimension using a kernel density estimation approach and then linking neighboring axes using compact tubes in which the width encodes the number of data points in the cluster (see Figure 2a). They then used color coding to mark the clusters of a chosen dimension. Their technique produces a highly summarized and largely clutter-free visualization, reminiscent of a stream visualization such as baobab trees [46] as well as timelines by Vehlow et al. [47]. Although visually similar, these techniques differ from our approach as they do not keep the correlation details of data elements encoded in the PCP links, nor do they show the cluster flow between subspaces.
+
+Another approach changes the visual mapping of the lines to implicitly show clusters. Here, edge bundling [33] has been shown to be an effective technique that helps users find clusters and patterns within PCPs [20]. Illustrative parallel coordinates [36] bundle PCPs in image space by first pre-clustering the data set (via $k$ -means) and then rendering the lines as curves using B-splines (see Figure 2c). Zhou et al. [54] used a variant of force-directed edge bundling [23] to directly compute the clusters based on the patterns emerging from the bundling algorithm. Heinrich et al. [20] extended previous work $\left\lbrack {{36},{54}}\right\rbrack$ by providing ${C}^{1}$ -continuity between B-splines to emphasize end-to-end tracing. Our approach also changes the visual mapping of the links and reduces clutter. However, our overall composition of links differs because we duplicate axes and focus on keeping information about the correlation of data elements in the clusters.
+
+Another approach to improve PCPs is to change the order of dimensions. While ordering can also be applied to other visualization techniques, it is especially inherent to the construction of PCPs, where the order of axes directly affects the revealed patterns [50]. Pargnostics [8] uses metrics such as the number of crossings, the angle of crossings, or the parallelism between dimensions for ordering. Tatu et al. [45] presented a method to rank axes in parallel coordinates using features of the Hough transform. Ankerst et al. [1] proposed a pairwise dimension similarity based on Euclidean distance. Peng et al. [41] defined a clutter-based measure and used it in an ${\mathrm{A}}^{ * }$ algorithm to order the dimensions. Ferdosi and Roerdink [12] generalized the notion of dimensional ordering of axes by expanding the concept of a pairwise similarity measure to a subspace similarity measure using a predefined quality criterion. Later, Zhao and Kaufman [52] proposed several clustering techniques to optimize the ordering of axes in PCPs, e.g., a $k$ -means or a spectral approach. Tatu et al. [45] also proposed subspace similarity measures based on dimension overlap and data topology. In general, all these methods focus on subspaces. Our approach differs from the previous ones as our duplication of axes emphasizes flow between clusters and requires the definition of new similarity measures to order both data dimensions and clusters.
+
+---
+
+${}^{1}$ https://github.com/NilsRodrigues/clusterflow-pcp
+
+${}^{2}$ https://github.com/schulzch/fuzzy_dbscan
+
+---
+
+
+
+Figure 3: Construction of CF-PCPs. Scatter plots are well suited to show clusters of multivariate data in 2D subspaces (a) but do not readily show cluster relations across more dimensions. We extend PCPs by creating an individually duplicated axis for each cluster to show high-level flow between clusters (b). Showing the underlying data points as individual lines increases the available level of detail (c). To maintain visual data patterns and the perception of correlations, we restore the original PCP's line angles near the axes (d).
+
+Our paper also addresses the problem of visualizing uncertainty $\left\lbrack {5,6,{40}}\right\rbrack$ from fuzzy clustering. Techniques such as hatching and sketchiness, as specialized forms of depicting spatial uncertainty, have gained a lot of attention $\left\lbrack {3,{15},{34}}\right\rbrack$ . However, there are only a few works on uncertainty in the context of parallel coordinates. Dasgupta et al. [7] conceptualized a taxonomy of different types of visual uncertainty inherent to PCPs. Feng et al. [11] focused on showing uncertainty in the data by mapping confidence to saliency in order to reduce misinterpretation, relying on a density representation of uncertainty. We adopt color mapping to visualize the fuzziness of clustering in PCPs.
+
+To the best of our knowledge, there is no previous work that combines all mentioned topics. Kosara et al. [32] also visualize flow but for categorical data instead of clusters. Their portrait layout of PCPs also does not calculate an optimal order for dimensions or categories. Nested PCPs by Wang et al. [48] look very similar but have a global clustering instead of using subspaces. They are meant for ensemble visualization in conjunction with other plots and use global color mapping that only depends on a single dimension.
+
+## 3 MODEL AND OVERVIEW
+
+We first provide an overview and outline the intermediate steps of our technique (see Figure 3). We assume that we have a set of data points in a multivariate data set as input: $P = \left\{ {{\overrightarrow{p}}_{i} \in {\mathbb{R}}^{n}}\right\}$ . An algorithm assigns each of these points a degree of membership to clusters, resulting in tuples $\left( {{\overrightarrow{p}}_{i},{m}_{k, i}}\right)$ . Here, ${m}_{k, i} \in \left\lbrack {0,1}\right\rbrack$ describes the degree to which data point ${\overrightarrow{p}}_{i}$ belongs to the cluster with index $k$ . Hard clustering is a special case of this, with ${m}_{k, i} \in \{ 0,1\}$ . In contrast, soft labeling allows for partial memberships. A single cluster is then defined as ${c}_{k} = \left\{ {\left( {{\overrightarrow{p}}_{i},{m}_{k, i}}\right) \mid {m}_{k, i} > 0}\right\}$ and $C = \left\{ {c}_{k}\right\}$ comprises all clusters. We focus on subspace clustering because distance measures lose expressiveness when dimensionality increases, leading to superfluously fuzzy or inconceivable clusters. Hence, we compute clusters for each pair of dimensions $\left( {{\mathbb{R}}_{i},{\mathbb{R}}_{j}}\right)$ as shown in Figure 3a, i.e., we look at sets of 2D subspaces. Our current implementation employs FuzzyDBSCAN [25], an extension of the classic and popular DBSCAN algorithm, for fuzzy clustering. However, our visualization is independent of the chosen clustering algorithm; it just assumes that clustering provides labels for each data point. The data that serves as the source for clustering is shown beneath the axes.
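
The notation above can be made concrete with a small sketch (names are illustrative, not taken from the paper's implementation): memberships $m_{k,i}$ map cluster indices to per-point degrees, and $c_k$ keeps only strictly positive degrees:

```python
# Illustrative data: three 2D points; point 2 lies between the two clusters.
points = [(0.1, 0.2), (0.9, 0.8), (0.5, 0.5)]

# Fuzzy memberships m[k][i] in [0, 1]; hard clustering restricts these to {0, 1}.
memberships = {
    0: {0: 1.0, 2: 0.4},
    1: {1: 1.0, 2: 0.6},
}

def cluster(k):
    """c_k = {(p_i, m_{k,i}) | m_{k,i} > 0} as a set of (point, degree) tuples."""
    return {(points[i], m) for i, m in memberships[k].items() if m > 0}

def is_hard(memberships):
    """Hard clustering is the special case where every degree is 0 or 1."""
    return all(m in (0.0, 1.0) for ms in memberships.values() for m in ms.values())
```

Here point 2 belongs to both clusters with degrees summing to 1, the situation that the uncertainty visualization in Section 5.3 is designed to convey.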
+
+The goal of CF-PCPs is to show the information about subspace clusters and the actual data points alike. On a coarse level, they show the number of clusters in the subspaces and how data elements virtually flow from one cluster to another (see Figure 3b). Our approach to showing this flow is based on axis duplication: instead of a single (vertical) axis for a data dimension, we place clones of this axis on top of each other, i.e., at the same horizontal position. Each data cluster gets its own axis clone. Through this duplication, we can render the stream of data elements between clusters, which are visible even if one does not focus on individual lines in the CF-PCPs but just looks at the visualization as a whole. Section 4 describes the details of the axis duplication and how we can achieve an optimal layout of the data dimensions and cluster ordering.
+
+
+
+Figure 4: Illustration of two clusters, using different coordinate systems. The scatter plot (a) uses Cartesian coordinates to clearly show them as dots placed along two straight lines. The clusters are hard to distinguish in a regular PCP (b) without additional visual variables, e.g., color. Our cluster-flow layout duplicates PCP axes for each cluster to reduce clutter and increase readability (c).
+
+By rendering an individual line for each data point, we include further details (see Figure 3c) in a fine-grained level of our proposed visualization. We use a curve model (see Figure 3d), as described in Section 5, that allows us to infer correlations and other data characteristics for individual data elements, similar to regular PCPs. Furthermore, we present a density rendering and color mapping approach that allows us to visualize large data sets and uncertainty from fuzzy clustering (see Sections 5.2 and 5.3).
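
As the abstract notes, the curve model preserves the original PCP slopes at the axes by drawing Hermite spline segments in between. A minimal sketch of one such segment (our own illustration; the paper's actual formulation is in its Section 5) interpolates the two axis intersection values with prescribed end slopes:

```python
def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite segment on t in [0, 1]: passes through p0 and p1 and has
    slopes m0 and m1 at the endpoints, which lets a CF-PCP link leave each axis
    with the same slope as the corresponding regular PCP line."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # basis for the start value
    h10 = t**3 - 2 * t**2 + t       # basis for the start slope
    h01 = -2 * t**3 + 3 * t**2      # basis for the end value
    h11 = t**3 - t**2               # basis for the end slope
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

By construction, the segment matches the endpoint values exactly and its derivative at each end equals the prescribed slope, so correlation patterns read off near the axes are unchanged.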
+
+## 4 LAYOUT
+
+The key aspect of our layout of CF-PCPs is the duplication of axes (Section 4.1) because this serves as a basis to visually separate clusters and show the flow between clusters in the different subspaces. Each ordering of axes results in a unique flow pattern between clusters; therefore, we present a new method to optimize the horizontal axis order according to patterns in the data flow (Section 4.2). Section 4.3 introduces a technique to vertically order the clusters on each data axis (y-axis).
+
+### 4.1 Axis Duplication
+
+CF-PCPs extend traditional PCPs by using the vertical image space to show cluster assignments. Unlike other PCP variants, we visualize each cluster on its own (local) axis. As Figure 4 shows, we create two duplicates of the axes to draw the two clusters and stack them vertically.
+
+
+
+Figure 5: Various reading directions with data points colored according to clusters in $B \times C$ . Without a reading direction, we obtain overlapping axes and discontinuous lines (a) because the 2D subspaces $A \times B$ and $B \times C$ do not have the same number of clusters. With a reading direction, axes show the clusters within the subspace of the current dimension and its neighbor to the left (b) or right (c).
+
+Axis duplication becomes necessary because we aim to show clusters in 2D subspaces, not just in 1D [39]. For the latter case of 1D clustering, we could just squeeze the data vertically to bundle the lines into clusters on each axis individually. However, this approach only works if the clusters are already separable in a single dimension. Figure 4a shows an example with two clusters that are visible in a scatter plot and linearly separable in two dimensions but not along 1D axes. Here, traditional PCPs lead to intertwined visualizations of the two clusters, and even axis scaling could not separate them (Figure 4b). With axis duplication (Figure 4c), we now see two clearly distinguishable bundles of lines that correspond to the two clusters, i.e., the clusters are easy to recognize. However, there is yet another important benefit: the PCP lines can now be inspected for each cluster separately and, thus, we can investigate correlations or other data characteristics independently within each cluster.
+
+Up to now, we only handled a pair of two data dimensions, but parallel coordinates support multivariate data. Figure 5 shows an example sequence with three dimensions. In subspace $A \times B$ , there are two clusters, so we create two copies. $B \times C$ contains three clusters, so we draw three copies. This results in overplotted axes for dimension $B$ and interrupted polylines for the data points (see Figure 5a). Therefore, we introduce a reading direction left-to-right (LTR) to create as many copies of axis $B$ as there are clusters in $A \times B$ . We duplicate axis $C$ as often as there are clusters in $B \times C$ (see Figure 5b). Since axis $A$ has no pair to the left, there is no explicit clustering and no duplication. While the opposite reading direction works accordingly (see Figure 5c), we chose a consistent LTR layout for all figures in this paper. Beneath the axes, CF-PCPs also show the two data dimensions that were used for clustering to help the viewer identify the reading direction.
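+The LTR duplication rule can be sketched as follows (hypothetical helper names; the per-subspace cluster counts are assumed to be supplied by the clustering step):
+
+```python
+def axis_copies_ltr(dims, n_clusters):
+    """Number of axis clones per dimension under left-to-right reading.
+
+    dims: ordered dimension names; n_clusters(a, b) returns the cluster
+    count in the 2D subspace a x b. The first axis has no left neighbor,
+    so it is never duplicated.
+    """
+    return [1] + [n_clusters(dims[i - 1], dims[i]) for i in range(1, len(dims))]
+```
+
+For the example of Figure 5 with two clusters in $A \times B$ and three in $B \times C$ , this yields one copy of $A$ , two of $B$ , and three of $C$ .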
+
+Cloning the axes and stacking them vertically increases the plot area and vertically shears the lines that represent the data points, especially for large numbers of clusters. We address both issues by scaling down cloned axes with a root function. Assuming the height ${h}_{1}$ of a single regular PCP axis and $n$ as the number of stacked axes, we get a total height of
+
+$$
+h\left( n\right) = n \cdot {h}_{1} \cdot {\left( 1/n\right) }^{r}, \tag{1}
+$$
+
+where $r \in \left\lbrack {0,1}\right\rbrack$ controls the root scaling. Selecting $r = 0$ will result in no downscaling at all, while $r = 1$ will keep the original height without any growth. Spaces between axes are determined with
+
+$$
+s\left( n\right) = {s}_{2} \cdot {h}_{1} \cdot {\left( 1/\left( n - 1\right) \right) }^{r}, \tag{2}
+$$
+
+where ${s}_{2}$ is the percentage of ${h}_{1}$ we want to use as space between two duplicates. The remaining space is then used for the actual axis clones. We chose $r = {0.8}$ and ${s}_{2} = {0.1}$ for figures in this paper to balance between growing height and readability.
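+Equations (1) and (2) translate directly into code; this sketch simply evaluates them (variable names are ours):
+
+```python
+def total_height(n, h1, r):
+    """Equation (1): total height of n stacked axis clones."""
+    return n * h1 * (1 / n) ** r
+
+def spacing(n, h1, s2, r):
+    """Equation (2): space between two of the n >= 2 clones."""
+    return s2 * h1 * (1 / (n - 1)) ** r
+```
+
+The two boundary cases behave as described: $r = 0$ lets the height grow linearly with $n$ , while $r = 1$ keeps the original height ${h}_{1}$ .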
+
+Axes in regular PCPs are normalized to only display the used range of each data dimension. Multiple labeled tick marks enable viewers to read values from data points at their intersections with the axes. As the number of clusters rises, axes in CF-PCPs shrink. Retaining the same number of labeled tick marks as in regular PCPs would add more overdraw to the already densely packed area around the axes. Therefore, we limit the tick marks to the maximum and minimum of the data range in each data dimension and progressively scale the label size according to the space between axes.
+
+
+
+Figure 6: Tree representation of our model for ordering of dimensions ${d}_{1}$ to ${d}_{4}$ . The blue numbers beneath the dimension names represent costs (left: individual; right: accumulated). Levels 0 and 1 have no costs because there are no pairs of clusters yet. The path ${d}_{1},{d}_{2},{d}_{4},{d}_{3}$ is optimal as it is at the deepest level and all others are at least as expensive. Missing paths at levels 3 and 4 have not been expanded yet because their costs were never minimal.
+
+### 4.2 Dimension Ordering
+
+Just as with regular parallel coordinates, the visual patterns in a CF-PCP depend heavily on the order of data axes. For this reason, we propose a method for optimizing the order automatically. Previous work investigated various metrics, such as the number of crossings, angles between lines, and correlation strength, to optimize the order $\left\lbrack {8,{30}}\right\rbrack$ . At first glance, we could have simply reused existing algorithms or defined dimension ordering as an overlap-removal problem for the polylines in the PCP. However, CF-PCPs do not only show the underlying data points but also depict the flow between subspace clusters on a coarse scale. For example, to calculate the costs of chaining two sets of clusters between display axes ${a}_{i}$ and ${a}_{i + 1}$ , we also need to know the previous axis ${a}_{i - 1}$ . This is due to our 2D subspace clustering and LTR reading direction: clusters shown at ${a}_{i}$ are calculated with data from dimensions ${d}_{i - 1} \times {d}_{i}$ . Only then can we calculate costs with the next axis ${a}_{i + 1}$ , which contains clusters from ${d}_{i} \times {d}_{i + 1}$ .
+
+As shown in Figure 6, we choose a tree $T$ as the primary data structure for our proposed order optimization. We start by creating the root node at level 0 and add a child for each data dimension at level 1. Then, we expand each leaf recursively by attaching a node for each unused data dimension. Each level corresponds to an index in the sequence of display dimensions. This way, for $d$ data dimensions we obtain a tree with $d + 1$ levels, where each node at a given level has $d - \text{level}$ children. The tree contains $d!$ different paths, corresponding to all permutations of the data dimensions. Costs from one set of clusters to the next depend only on three consecutive data dimensions. Thus, there are only $d \cdot \left( {d - 1}\right) \cdot \left( {d - 2}\right)$ different costs to compute, which can be cached and reused in the tree traversal.
+
+We apply the A* algorithm [17] to compute the shortest path in $T$ and implement it by lazily expanding a tree node when it is the leaf with the lowest cumulative costs. The heuristic $d - \text{level}$ , i.e., the number of remaining levels, estimates the remaining costs. This has implications for the metric we use to define the distance between two sets of clusters: the minimum cost from a parent to a child has to be 1 for the heuristic to work. With this model, the A* algorithm is guaranteed to find an optimal solution. We combine lazy evaluation with caching of cost values to compute the A* optimization efficiently.
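+The lazy traversal can be sketched as follows (our own minimal implementation; the triple cost function is assumed to be supplied and to return values $\geq 1$ , as the level-based heuristic requires):
+
+```python
+import heapq
+
+def order_dimensions(d, cost):
+    """Find the cheapest permutation of dimensions 0..d-1 by lazy A*.
+
+    cost(a, b, c) >= 1 is the cost of appending dimension c after the pair
+    (a, b); costs only arise from the third dimension onward. The heuristic
+    (number of remaining cost-bearing steps) is admissible because every
+    such step costs at least 1.
+    """
+    cache = {}  # triple costs are cached and reused across the traversal
+
+    def step(a, b, c):
+        if (a, b, c) not in cache:
+            cache[(a, b, c)] = cost(a, b, c)
+        return cache[(a, b, c)]
+
+    def h(path_len):
+        return max(0, d - max(path_len, 2))
+
+    heap = [(h(1), 0.0, (i,)) for i in range(d)]
+    heapq.heapify(heap)
+    while heap:
+        _, g, path = heapq.heappop(heap)  # leaf with lowest f = g + h
+        if len(path) == d:
+            return list(path), g          # first goal popped is optimal
+        for nxt in range(d):
+            if nxt not in path:
+                ng = g + (step(path[-2], path[-1], nxt) if len(path) >= 2 else 0)
+                heapq.heappush(heap, (ng + h(len(path) + 1), ng, path + (nxt,)))
+```
+
+Because the tree contains each partial permutation exactly once, nodes never need to be reopened; in each iteration only the leaf with the lowest estimated total cost is expanded.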
+
+
+
+Figure 7: Simplified CF-PCP with cluster sets ${c}_{i}$ and ${c}_{j}$ . Top and bottom (black) never cross any other bundles. A red reference bundle only crosses lines that start below and end above (blue).
+
+Having decided on a basic optimization algorithm, we choose a measure for the costs of displaying clusters next to each other. CF-PCPs are targeted at showing how clusters evolve from one subspace to the other and how the data points flow from one cluster to another. Therefore, we propose a metric that focuses on the similarities and differences of clusters. Let ${c}_{i}$ and ${c}_{j}$ be two sets of clusters. The numbers of elements in them are $n = \left| {c}_{i}\right|$ and $m = \left| {c}_{j}\right|$ . In a first step, we generate a similarity matrix using the Jaccard indices [29]:
+
+$$
+{S}_{{c}_{i},{c}_{j}} = \left( \begin{matrix} J\left( {{c}_{i,1},{c}_{j,1}}\right) & \cdots & J\left( {{c}_{i,1},{c}_{j, m}}\right) \\ \vdots & \ddots & \vdots \\ J\left( {{c}_{i, n},{c}_{j,1}}\right) & \cdots & J\left( {{c}_{i, n},{c}_{j, m}}\right) \end{matrix}\right) . \tag{3}
+$$
+
+The best match between cluster sets would yield a few elements with value 1 and many with value 0. Bad matches have many data points changing between clusters, leading to a matrix with many values $< 1$ . Since we prefer to see dimension orders with many good matches, we subtract the matrix’s mean value from all its elements ${s}_{x, y}$ . We then square all results and sum them up into the grand sum to get a scalar similarity value between the sets of clusters:
+
+$$
+{\operatorname{similarity}}_{i, j} = \mathop{\sum }\limits_{\substack{{1 \leq x \leq n} \\ {1 \leq y \leq m} }}{\left\lbrack {s}_{x, y} - \operatorname{mean}\left( {S}_{{c}_{i},{c}_{j}}\right) \right\rbrack }^{2}. \tag{4}
+$$
+
+To obtain a cost function that fits the requirements of ${\mathrm{A}}^{ * }$ and our tree $T$ , we invert the similarity. However, ${\operatorname{similarity}}_{i, j}$ can be 0 when all matrix entries have the same value or when we are comparing sets of size 1. Thus, we define the cost function as
+
+$$
+{\text{horizontalCost}}_{i, j} = 1 + {\left\lbrack {\text{similarity}}_{i, j} + 1\right\rbrack }^{-1}\; \geq \;1 \tag{5}
+$$
+
+to avoid division by 0. Our proposed metric helps the optimization algorithm avoid adjacent dimensions with no clusters or only one cluster between them.
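+With clusters represented as sets of data-point indices, Equations (3) to (5) can be sketched as follows (helper names are ours):
+
+```python
+def jaccard(a, b):
+    """Jaccard index of two clusters given as sets of point indices."""
+    union = len(a | b)
+    return len(a & b) / union if union else 0.0
+
+def horizontal_cost(ci, cj):
+    """Equations (3)-(5): similarity matrix, grand sum of squared
+    deviations from the mean, and the inverted cost >= 1."""
+    S = [[jaccard(a, b) for b in cj] for a in ci]
+    mean = sum(map(sum, S)) / (len(ci) * len(cj))
+    similarity = sum((s - mean) ** 2 for row in S for s in row)
+    return 1 + 1 / (similarity + 1)
+```
+
+A crisp one-to-one match between the cluster sets produces a high similarity value and thus a cost close to 1, whereas diffuse matches drive the cost toward 2.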
+
+### 4.3 Cluster Ordering
+
+We separate the optimization of the horizontal dimension order and the vertical cluster order because CF-PCPs primarily aim to display the data flow between subspace clusters. The list of available clusters for display is controlled by the sequence of dimensions but not by their internal vertical arrangement. Therefore, the order of dimensions is our highest priority. Afterward, we still have flexibility in choosing the vertical arrangement along each data dimension. Our approach to optimizing the vertical sequence is similar to the dimension ordering: employ A* to find an optimal path from tree level 1 to a leaf at level $d$ . The main difference lies in the way we generate the children of each node. When there are $m$ clusters to be arranged, we generate all their permutations and attach them as children to the parent node, which itself represents one permutation of the clusters at the previous level.
+
+We adopted a metric based on the number of line crossings as a cost function, which is popular for ordering in PCPs [8], and adapted it to the display of data flow from one cluster to another. The comparison of Figures 4b and 4c shows that lines within cluster pairs move closer to each other. This is caused by the axis scaling from Section 4.1 and creates the impression of bundles. Intertwined lines within such a bundle do not interfere with tracing the data flow, but crossing bundles are difficult to follow visually. Therefore, we define our metric to penalize these inter-bundle crossings.
+
+
+
+Figure 8: Composite line geometry in CF-PCPs. In regular parallel coordinates (a), lines have an angle $\gamma$ , depending on the underlying data point's values in each display dimension. Lines in CF-PCPs (b) have segments with the same angle in a zone close to the axes (red lines on gray background). These are then connected by a cubic Hermite curve with tangents T1 and T2.
+
+A bundle between clusters contains all shared points, e.g., ${c}_{i,1} \cap {c}_{j,1}$ . We assume that each line has a certainty $l \in \left\lbrack {0,1}\right\rbrack$ that describes its (possibly fuzzy) cluster membership. Fuzzy lines will not be as visible as certain ones and contribute less to overall clutter. See Section 5.3 for more details on how the certainty affects line rendering. Our metric sums up the certainty within each bundle into $L$ and uses it as an adjusted line count. This new count is used to populate a matrix with values for all bundles between neighboring dimensions $i$ and $j$ :
+
+$$
+{B}_{i, j} = \left( \begin{matrix} L\left( {{c}_{i,1} \cap {c}_{j,1}}\right) & \cdots & L\left( {{c}_{i,1} \cap {c}_{j, m}}\right) \\ \vdots & \ddots & \vdots \\ L\left( {{c}_{i, n} \cap {c}_{j,1}}\right) & \cdots & L\left( {{c}_{i, n} \cap {c}_{j, m}}\right) \end{matrix}\right) . \tag{6}
+$$
+
+Our approach calculates the total number of crossings using a method by Rit [43]: bundles only cross if they start lower and end higher (see Figure 7). The opposite case (from higher to lower) is just a different view on the intersection of the same lines. The actual calculation is done by multiplying each element ${b}_{x, y}$ of ${B}_{i, j}$ with the grand sum of the submatrix to its lower left. The first column and the last row do not have a submatrix to the lower left, so we do not need to calculate crossings for them. The resulting values are written into a matrix ${I}_{i, j}$ , and all its elements ${i}_{x, y}$ are summed up. The final cost for placing one sequence of clusters next to another is then
+
+$$
+{\text{verticalCost}}_{i, j} = 1 + \mathop{\sum }\limits_{\substack{{1 \leq x \leq n} \\ {1 \leq y \leq m} }}{i}_{x, y}\; \geq \;1. \tag{7}
+$$
+
+We start with a minimal vertical cost of 1 to ensure that the metric fits the requirements set by the level-based heuristic for ${\mathrm{A}}^{ * }$ . The width of the primary tree structure $T$ grows much faster than in Section 4.2. Therefore, we recommend using the lazy A* algorithm for the optimization of up to 15 clusters per 2D subspace and reverting to a greedy approach for more complex problems. For example, sorting clusters by the mean value of the contained data points approximates the vertical line layout of regular PCPs whilst retaining the advantages of their cluster-flow counterparts.
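+The crossing computation from Equations (6) and (7) can be sketched like this (B is a nested list of adjusted line counts; rows index clusters on the left axis from top to bottom, columns those on the right axis):
+
+```python
+def vertical_cost(B):
+    """Count inter-bundle crossings via lower-left submatrix sums.
+
+    A bundle (x, y) crosses exactly the bundles that start lower
+    (row > x) and end higher (column < y); counting only this case
+    avoids counting each crossing twice.
+    """
+    n, m = len(B), len(B[0])
+    crossings = 0.0
+    for x in range(n):
+        for y in range(1, m):  # the first column has no lower-left submatrix
+            lower_left = sum(B[xx][yy] for xx in range(x + 1, n) for yy in range(y))
+            crossings += B[x][y] * lower_left
+    return 1 + crossings
+```
+
+For the last row the lower-left submatrix is empty and contributes nothing, matching the description above.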
+
+## 5 RENDERING
+
+At this point, CF-PCPs exist as a model that does not completely specify how to display lines and encode uncertainty. To be able to render actual visual output, we now address these aspects.
+
+### 5.1 Curve Geometry
+
+Traditional PCPs draw data points as straight lines between two neighboring dimensions. If we do the same in CF-PCPs, we get a relatively clean visualization without much clutter (Figure 9a). Unfortunately, the naive approach of axis duplication also changes the line slopes with respect to traditional PCPs. In fact, the slope depends no longer just on the data values but much more on the cluster membership. A related issue is that the angular resolution is reduced because lines become "compressed" (i.e., closer in angles) if they belong to the same cluster. Therefore, it becomes harder to recognize correlations. A recent eye-tracking study [37] showed that participants focus primarily on the area around the axes when reading PCPs. Another study with extreme bundling in PCPs [19, 20] showed that the perception of data characteristics and correlation is even possible when participants could not use the center parts between axes. These findings give us an opportunity to address the aforementioned problems: we replace straight lines by composite connections (see Figure 8). The key to our model is that we start and finish the connections with short straight line segments with the same slope as in original PCPs. This creates a zone in the very important focal area around the axes that retains the same information as in regular PCPs.
+
+
+
+Figure 9: Comparison of line geometry in CF-PCPs. Straight lines (a) make it hard to recognize correlations within clusters. Curved composite lines (b) preserve information on correlations by starting and ending with the same angles as in regular PCPs. Adjusting tangents used in the central Hermite splines separates lines that would cover and occlude each other completely.
+
+
+
+Figure 10: Fuzzy clustering provides a single data point with multiple soft labels in various clusters. Our technique draws lines between all possible pairs of these clusters. Their opacity is calculated by multiplying the point's labels in both connected clusters. This spreads each data point's constant total density of 1 over all possible connections.
+
+In the center region, in-between the data axes, we connect the straight segments with a cubic Hermite curve. Choosing the tangents of the curve to be identical to the original slope guarantees a smooth transition between segments. The result of this approach is shown in Figure 9b. Instead of using the same factor to scale the Hermite curve’s tangents ${T1}$ and ${T2}$ , we vary it between data points. The relative index of each row in the source data set is added to a base factor for ${T1}$ and subtracted for ${T2}$ . This spreads the composite lines along the direction they would have had in a classic PCP. This is very helpful when there are multiple rows with the same values in both neighboring data dimensions: the wider the lines spread, the more rows share the same values. The effect can be observed in the curves between the dimensions Oil and Biomass in Figure 15. In regular PCPs, this information would be lost in overplotting or would require a density rendering technique.
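+The central connection is a standard cubic Hermite curve; a minimal evaluation sketch (names are ours) shows how identical tangents produce the smooth transition into the straight end segments:
+
+```python
+def hermite_point(p0, p1, t0, t1, t):
+    """Evaluate a cubic Hermite curve at parameter t in [0, 1].
+
+    p0, p1: end points of the central segment; t0, t1: tangent vectors.
+    Setting both tangents parallel to the original PCP line direction
+    makes the curve blend smoothly into the straight segments.
+    """
+    h00 = 2 * t**3 - 3 * t**2 + 1
+    h10 = t**3 - 2 * t**2 + t
+    h01 = -2 * t**3 + 3 * t**2
+    h11 = t**3 - t**2
+    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
+                 for a, b, c, d in zip(p0, t0, p1, t1))
+```
+
+Varying the tangent scale per data row, as described above, then spreads otherwise identical curves apart.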
+
+
+
+Figure 11: Visualizing uncertainty in the clustering results. The label weights are binned into 3 ranges. Each bin is separately rendered to a density field and then mapped to color (a)-(c). Finally, the results are sorted by uncertainty and alpha blended (d).
+
+### 5.2 Density Rendering
+
+CF-PCPs are designed to work with large data sets that can incur the problem of overplotting. We address this issue by adopting the established method of splatting [53] the lines and showing their density $\left\lbrack {2,{31}}\right\rbrack$ . This is achieved by splitting the rendering process into separate passes. The first pass uses additive blending to compute an intermediate density field. Since the densities typically cover a large dynamic range, we apply a nonlinear transformation before we render the final visualization. In our implementation, we apply a logarithmic mapping together with a logistic function to guarantee user-specified extrema within $\left\lbrack {0,1}\right\rbrack$ . The result is then multiplied with the line color's alpha value. The effect is best demonstrated in Figure 1, where individual lines can be traced in areas of both high and low density. We discuss the choice of color in the following context of uncertainty visualization.
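+The density-to-opacity transfer might be sketched as follows (the exact parameterization is not specified in the text, so the normalization and the steepness $k$ are our assumptions):
+
+```python
+import math
+
+def density_to_alpha(d, d_min, d_max, k=8.0):
+    """Map an accumulated line density to an opacity in [0, 1].
+
+    A logarithmic mapping compresses the dynamic range; a logistic
+    function then squashes the normalized value so that the
+    user-specified extrema d_min and d_max map near 0 and 1.
+    """
+    lo, hi = math.log1p(d_min), math.log1p(d_max)
+    x = (math.log1p(d) - lo) / (hi - lo)           # normalized log density
+    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))  # logistic squash
+```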
+
+### 5.3 Uncertainty
+
+Fuzzy clustering involves soft labeling of individual data points. In our implementation, we distinguished between no clusters at all (black axes in Figure 11), noise (red), and actual clusters (blue). This gives viewers additional information about the underlying algorithm's success in clustering the source data. Going further, more levels of certainty could be visualized as long as the axis colors remain well distinguishable.
+
+Soft clustering can also select multiple labels for a single point, weakly assigning it to multiple clusters instead of producing a single strong result. We treat these labels as probabilities and, by definition, the total probability that a point exists is ${100}\%$ , i.e., the sum of all assigned soft labels has to be 1. The clustering technique does not supply information on the flow between sets of fuzzy clusters in separate subspaces. Therefore, we cannot infer whether a point moved between specific pairs of neighboring clusters. The probability that it belongs to one cluster or the other is our only information. We take this into account by rendering a single source data point as multiple lines to obtain a faithful visual representation. When a data row is labeled more than once in a 2D subspace, we draw a line between all neighboring clusters it belongs to (see Figure 10). We determine the opacity of each line by multiplying the label values of the clusters it connects. Our method corresponds to providing a field with probable paths for the movement of data points and results in an invariant total opacity of 1 for each point. This property is compatible with the rendering technique from Section 5.2 because it does not affect the total density.
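+A sketch of this line-splitting rule (dictionaries map cluster ids to soft labels; the labels on each side of the axis are assumed to sum to 1):
+
+```python
+def fuzzy_lines(labels_left, labels_right):
+    """One line per pair of clusters a point may belong to on both sides.
+
+    The opacity of line (a, b) is the product of the point's labels in
+    a and b. Because each side's labels sum to 1, the opacities also sum
+    to 1, keeping the point's total density contribution constant.
+    """
+    return {(a, b): wa * wb
+            for a, wa in labels_left.items() if wa > 0
+            for b, wb in labels_right.items() if wb > 0}
+```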
+
+
+
+Figure 12: Inter-cluster patterns. Clustering in subspaces ${D1} \times {D2}$ and ${D2} \times {D3}$ gives the exact same result (a, b). Cluster ${C}_{1}$ splits into ${C}_{3}$ and ${C}_{4}$ in ${D3} \times {D4}$ (c). The displayed clusters in (d) have very low similarity with the results from the previous subspace.
+
+Up to this point, our approach only renders the probability that a data point flows from one cluster to the other. Color is a strong visual variable with many distinguishable values, but we have not used it for line rendering yet. Hence, CF-PCPs encode the numerical uncertainty from the clustering algorithm by mapping it to color. Each line between two axes is rendered according to its underlying data point's label in the target cluster (the right axis for reading direction LTR). This means that the color of lines for the same data row may change from one side of an axis to the other, leaving the positional attributes as hints for tracing the line paths. We choose a sequential color scale from light blue to black to map the label weights in our examples (see Figure 11). This gradient only varies significantly in saturation and value, making it suitable even for viewers with the most common color vision deficiencies.
+
+CF-PCPs apply binning for good readability [38] because smooth color gradients can be problematic for reading exact values [4]. Another benefit of binning becomes evident when creating the density fields (see Section 5.2): additive blending would mix and distort the used colors. Furthermore, it would make multiple overlaid fuzzy lines look like a single certain one. To address these issues, our approach creates a separate density field for each bin in the uncertainty color map. As depicted in Figure 11, CF-PCPs convert them separately into regular images with the RGB values from the color map, while the alpha channel encodes the field's density. In the final render pass, these individual fields are sorted by fuzziness and alpha-blended with the over operator [42].
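+The final compositing step uses the standard over operator [42]; for non-premultiplied RGBA colors it can be sketched as:
+
+```python
+def over(front, back):
+    """Composite two (r, g, b, a) colors with the over operator.
+
+    front is drawn on top of back; alpha is not premultiplied.
+    """
+    fr, fg, fb, fa = front
+    br, bg, bb, ba = back
+    a = fa + ba * (1 - fa)
+    if a == 0:
+        return (0.0, 0.0, 0.0, 0.0)
+
+    def blend(f, b):
+        return (f * fa + b * ba * (1 - fa)) / a
+
+    return (blend(fr, br), blend(fg, bg), blend(fb, bb), a)
+```
+
+Applying this operator to the per-bin density fields, sorted by uncertainty as described above, yields the final image.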
+
+## 6 Visual Patterns
+
+As discussed earlier, CF-PCPs show streams of 2D clusters across the subspaces of a data set while still displaying the original correlations between dimensions. As such, our visualization aims at providing two levels of granularity: overview and detail. The overview level shows similar 2D subspaces, clusters, and inter-cluster patterns, i.e., how data flows between clusters across different subspaces. The detail level shows intra-cluster patterns, i.e., the correlations within pairs of clusters from neighboring dimensions.
+
+Ordering: Our horizontal ordering, coupled with our ${\mathrm{A}}^{ * }$ approach, implies strong similarities of clusters in neighboring dimensions because we minimize the total dissimilarity according to our metrics. More specifically, this means that the two subspaces between three neighboring dimensions contain similar clusters.
+
+Clusters: The duplication of axes allows for easy identification of the number of clusters within each 2D subspace. In turn, this helps us identify data classes across different pairs of dimensions. Coupled with soft labeling, this makes it easy to recognize well-separated classes (with a high level of certainty) over fuzzy ones.
+
+Inter-Cluster Patterns: We created Figure 12 as an example with five dimensions that always have two clusters in their subspaces. In Figures 12a and 12b, the flow of data between clusters in ${D1} \times {D2}$ and ${D2} \times {D3}$ shows highly similar subspaces: each cluster on the left is associated with a unique cluster on the right. This one-to-one relationship denotes high or total similarity between subspaces. The situation is different in Figure 12c, where the CF-PCP shows that cluster ${C}_{1}$ from ${D2} \times {D3}$ splits into ${C}_{3}$ and ${C}_{4}$ in ${D3} \times {D4}$ . It would also be correct to say that ${C}_{4}$ splits into ${C}_{1}$ and ${C}_{2}$ , depending on the point of view. Finally, in Figure 12d, we see two very dissimilar subspaces. Here, both clusters ${C}_{3}$ and ${C}_{4}$ split and mix in ${D4} \times {D5}$ .
+
+
+
+Figure 13: Intra-cluster patterns. When CF-PCP axes are at the same height (a and c), patterns from positive and negative correlations look very similar to their counterparts in regular PCPs. The line geometry discussed in Section 5.1 preserves these patterns close to the axes, even if they are at different heights (b and d).
+
+More generally, if each cluster in the subspace of two dimensions is uniquely associated with exactly one cluster of a neighboring subspace (i.e., the similarity matrix in Equation (3) contains only ones and zeros), it means that they behave similarly. If, in turn, all clusters are linked with each other (i.e., the similarity matrix contains many values $< 1$ and $> 0$ ), this shows a completely dissimilar clustering behavior between the three dimensions that span the two subspaces. Additionally, an axis having different incoming clusters corresponds to a merge or split, depending on the point of view. Overall, in CF-PCPs, the number of splits or merges between clusters conveys the connectedness of their two subspaces.
+
+Intra-Cluster Patterns: On top of showing the high-level information on the flow of data between clusters across subspaces, our visualization also maintains patterns known from traditional parallel coordinates. Since the duplication retains the entire value range of an axis, our cluster-flow layout shows the distribution of each cluster's data points in the original dimensions. As demonstrated by the clusters labeled ${D3}$ in Figure 1, we can see where the values are positioned with regard to the whole range: larger values are in the upper cluster, smaller ones in the lower. This approach also visualizes whether the data points move closer together or further apart, e.g., the bundle from ${D3}$ spreads out in ${D4}$ .
+
+The examples in Figure 13 illustrate how CF-PCPs show data correlations. If two neighboring clusters are aligned (a and c) then positive and negative correlations look very similar to the classic PCP approach. In the case of non-aligned clusters (b and d), our line geometry from Section 5 ensures that the slopes near the axes remain identical to the ones in regular PCPs. Hence, our approach preserves the most important areas of PCPs [37] and adds additional information on clusters and data flow.
+
+Overall, the visual patterns of our approach can be seen in different places of the visualization: inter-cluster patterns are located in the entire space between two display dimensions, whereas the vicinity of duplicated axes reveals intra-cluster patterns.
+
+
+
+Figure 14: Classification of E. coli bacteria according to a tree (a) where nodes correspond to metrics and leaves to inferred classes [24]. All red metrics are necessary to distinguish the blue classes. These are shown as dimensions in the regular PCP (b). The scatter plots show color-coded fuzzy subspace clusters between the last three metrics (c). Our CF-PCP technique (d) combines aspects of both previous visualizations. Its line colors encode certainty from 0% to 100%.
+
+## 7 EXAMPLES
+
+To demonstrate the applicability of CF-PCPs, we provide examples for typical real-world data sets and compare our results to previous approaches.
+
+### 7.1 Escherichia Coli
+
+Horton and Nakai used machine learning to automatically predict localization sites of proteins [24]. Their results included a decision tree that uses multiple measured metrics for Escherichia coli bacteria in order to arrive at a predicted classification. Figure 14a shows this tree and highlights five red nodes that we selected for visualization. They correspond to the metrics that define four classifications (blue). We start our analysis with a regular PCP and density rendering (Figure 14b). It shows that the dimension ${lip}$ is only binary. There seem to be negative correlations between ${alm2}$ and its neighbors, but it is difficult to map the resulting class to a specific influence of each previous dimension. The CF-PCP in Figure 14d uses 2D subspace clustering and shows additional information. The first impression is of mostly green color, which shows low certainty. This is due to a low degree of separation between the clusters. The second impression is of flow between clusters. The observable flow from the CF-PCP matches the tree structure from Figure 14a. Using FuzzyDBSCAN with dimensions ${mcg}$ , ${lip}$ , and ${gvh}$ yields a single cluster and noise each time. The classification tree confirms that they are not sufficient to infer any specific classes. Subspace ${gvh} \times {alm2}$ shows the first split into two actual clusters, just as the tree also arrives at its first class ${imU}$ . Class ${imS}$ is inferred next, and the CF-PCP also contains two clusters in ${alm2} \times {aac}$ . Combining the information from ${aac} \times {class}$ reveals four final clusters, which matches the classification count in the highlighted subtree.
+
+The scatter plots with color-coded clusters do not show this data flow between dimensions because the selected colors are not linked across the plots. Even the display of the fuzziness poses a problem: some points belong to multiple clusters. Plotting them multiple times at the same position only retains the color of their last label.
+
+### 7.2 NetPerf
+
+To help understand the characteristics of CF-PCPs, we also compare our technique with two clustering-oriented PCP approaches in Figure 2, using the NetPerf data set [49]: Illustrative Parallel Coordinates (IPC) [36] and an edge bundling layout (EBL) for interactive parallel coordinates [39]. In contrast to our proposed pairwise fuzzy method, the clustering in IPCs is calculated and rendered for all dimensions at once (see Figure 2c). Conversely, the EBL method clusters the data separately in each single dimension (see Figure 2a). Both approaches use color to encode clusters consistently over every dimension. Our selected sample visualizations show four dimensions and three clusters in the NetPerf data set. To increase comparability, we limit ourselves to the four dimensions used in the previously published visualizations of the same data. Dimension ordering is disabled, while the vertical axis order minimizes our metric on line crossings from Section 4.3.
+
+In Figure 2c, Illustrative Parallel Coordinates bundle PCP polylines based on the cluster they belong to. This reduces inter-cluster overdraw and emphasizes visual cluster separation. However, line distortion is applied at the expense of clarity of pairwise axis correlation, because the lines' direction no longer corresponds to the original angle in regular PCPs. Clustering in IPCs works across all visible dimensions and shows overlap between clusters, for instance, in the signal strength dimension. In contrast, EBL [39] reduces clutter in PCPs by bundling edges within pairs of one-dimensional clusters that are computed with a $k$ -means algorithm. Similarly to our approach, this method highlights subspace clusters, albeit one-dimensional ones, and allows viewers to follow the flow of bundles between dimensions. As such, the method avoids overdraw of clusters on axes. However, the one-dimensional approach simplifies the visualization to the detriment of details when it comes to diverging clusters. For example, edges in the top cluster of the throughput axis split into three different clusters in signal strength, but without further interaction, users cannot infer the distribution of data points that belong to the highest cluster in the signal strength dimension.
+
+Similarly to both alternative approaches, our cluster-flow layout shows three main clusters per axis pair (see Figure 2b). However, our visualization goes beyond the alternative techniques and is able to show overlap between clusters while allowing the viewer to trace complex cluster patterns. More specifically, our plot shows that low framerates are always associated with low throughput. As with a logical implication, the converse does not hold. Similarly to IPCs (Figure 2c), the CF-PCP shows that the main cluster of high throughput is associated with three clusters of high to low values of signal strength. Contrary to the broad bands in EBLs, fine details in CF-PCPs allow the viewer to see that the highest values of signal strength are always associated with the highest values of throughput. The large cluster at the top of the throughput dimension in the EBL of Figure 2a does not support this conclusion.
+
+
+
+Figure 15: Visualization of electric power production by primary energy sources in Germany. Clusters in this CF-PCP are vertically ordered by their mean value. Line colors encode certainty from fuzzy clustering (0-100%).
+
+A commonality with the EBL is the top cluster between signal strength and throughput, which forks into two different clusters when combining the data on signal strength and bandwidth: one with high, the other with low values. In the CF-PCP, we can quickly determine that there are many data points with high framerate, despite having only medium to low throughput and minimal signal strength and bandwidth. By color coding fuzziness, we are also able to see that there are uncertain assignments, e.g., the third cluster between framerate and throughput is completely fuzzy, and many of its data points are more likely to be noise or to belong to the second cluster. Overall, our implementation avoids the inconvenience of overlap from $n$-dimensional clustering and allows for detailed analysis by tracing individual lines and clusters across dimensions.
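
The certainty-to-color mapping used in these figures can be sketched in a few lines of Python. This is a hypothetical stand-in: the legend runs from 0 to 100% certainty, but the concrete colors are not given here, so both endpoint colors below are made-up placeholders.

```python
def certainty_to_rgb(certainty, fuzzy_rgb=(200, 200, 200), crisp_rgb=(31, 119, 180)):
    """Map a fuzzy-clustering certainty in [0, 1] to a line color.

    Linear interpolation between a washed-out color for fully fuzzy
    assignments and a saturated color for crisp ones. Both endpoint
    colors are placeholders, not the paper's actual palette.
    """
    t = max(0.0, min(1.0, certainty))  # clamp out-of-range memberships
    return tuple(round(f + t * (c - f)) for f, c in zip(fuzzy_rgb, crisp_rgb))
```

A data point with 50% certainty would then receive a color halfway along the ramp, mirroring the 0-100% legend of Figure 15.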
+
+### 7.3 Energy Production
+
+Lastly, we visualize a larger data set of electricity production by primary energy source [13, 44]. It contains 1750 data rows and lists a timestamp, 12 sources of primary energy, as well as international imports and exports as dimensions we can use in the CF-PCP. While the restriction to subspaces generally reduces the number of clusters, this is not the case with the uniformly distributed data in the time dimension: it yields 12 clusters. We removed the time dimension to get a cleaner and less cluttered plot for further analysis. We provide further visualizations of the original and reduced data set as supplemental material.
+
+Clustering between pairs of dimensions and separating them vertically can help with visual detection of dependencies in the data. From Figure 15, we can extract information between hard and brown coal. On the one hand, low energy production from the latter only occurs when hard coal also has lower values. On the other hand, power production on the left-hand side varies greatly, while brown coal is often at a high output level. Therefore, it seems that, between these two, brown coal is burned more constantly while hard coal is preferred for adjustments to changing levels in power production from unsteady sources. Even with density rendering, the high degree of overplotting would create an almost constant background between both axes and thus could not facilitate these observations.
+
+## 8 CONCLUSION AND FUTURE WORK
+
+We have presented a technique for cluster-centric visualization of high-dimensional data using parallel coordinates. The main idea of our technique is to deliberately duplicate axes for each cluster to show data flow between 2D subspaces. At the same time, this also creates an opportunity to display fuzzy clusters with parallel coordinates. We have described an algorithm to compute the cluster-flow layout, i.e., for ordering dimensions and axes. While the automatic optimization is algorithmically complex, manual interaction with the axes and dimensions is always a viable alternative for data exploration. We analyzed visual correspondence of data patterns and discussed the applicability of our technique using multiple examples. Our layout is an improvement over the traditional approach when clusters are not linearly separable over a single dimension or all dimensions together and the number of clusters in subspaces is small: in this case, we bundle lines over separate axis clones instead of plotting many overlapping lines on a single large axis.
+
+In future work, we want to thoroughly evaluate user performance in controlled studies. In a first step, CF-PCPs should be compared against regular PCPs and scatter plot matrices (SPLOM) [10, 18] separately. A second step would compare them against a combination of both, for example, in coordinated views. It might also be interesting to investigate whether the vertical position of the largest cluster influences user performance.
+
+Extensions of our work might look into alternative optimization targets for the layout, such as aesthetic aspects or faithfulness. Parallel coordinates and scatter plots have their strengths and weaknesses. We would like to integrate scatter plots into our layout, e.g., beneath or between duplicated axes. A progressive clustering and cluster-flow layout would also be of interest for the analysis of dynamic data with live updates. We used CF-PCPs with a reading direction, where each display dimension is used for clustering with its neighbor. Our method could be extended to use both neighbors. Another change could be to cluster all displayed dimensions with a common reference dimension. This would be similar to selecting a row or column from a scatter plot matrix and visualizing it with parallel coordinates. Considering the example with E. coli bacteria, a hierarchical approach would be very beneficial for small numbers of dimensions. Here, we would enforce a reading direction and progressively create subclusters that mimic the classification tree from Horton and Nakai's work [24]. Finally, we would like to expand our approach to the field of interactive model peeling in the context of regression and machine learning.
+
+## ACKNOWLEDGMENTS
+
+Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project ID 251654672 - TRR 161 (projects A01 and B01). We would like to thank Prof. Dr. Bruno Burger (Fraunhofer Institute for Solar Energy Systems ISE) for his cooperation in the analysis of energy production data.
+
+## REFERENCES
+
+[1] M. Ankerst, S. Berchtold, and D. A. Keim. Similarity clustering of dimensions for an enhanced visualization of multidimensional data. In IEEE Symposium on Information Visualization, pp. 52-60, 1998. doi: 10.1109/INFVIS.1998.729559
+
+[2] A. O. Artero, M. C. F. de Oliveira, and H. Levkowitz. Uncovering clusters in crowded parallel coordinates visualizations. In IEEE Symposium on Information Visualization, pp. 81-88, 2004. doi: 10.1109/INFVIS.2004.68
+
+[3] N. Boukhelifa, A. Bezerianos, T. Isenberg, and J.-D. Fekete. Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty. IEEE Transactions on Visualization and Computer Graphics, 18(12):2769-2778, 2012. doi: 10.1109/tvcg.2012.220
+
+[4] C. A. Brewer. Prediction of simultaneous contrast between map colors with hunt's model of color appearance. Color Research and Application, 21(3):221-235, 1996.
+
+[5] K. Brodlie, R. Allendes Osorio, and A. Lopes. A review of uncertainty in data visualization. In J. Dill, R. Earnshaw, D. Kasik, J. Vince, and P. C. Wong, eds., Expanding the Frontiers of Visual Analytics and Visualization, pp. 81-109. Springer, London, 2012. doi: 10.1007/978-1-4471-2804-5_6
+
+[6] C. Correa, Y.-H. Chan, and K.-L. Ma. A framework for uncertainty-aware visual analytics. In IEEE Symposium on Visual Analytics Science and Technology, pp. 51-58, 2009. doi: 10.1109/VAST.2009.5332611
+
+[7] A. Dasgupta, M. Chen, and R. Kosara. Conceptualizing visual uncertainty in parallel coordinates. Computer Graphics Forum, 31:1015-1024, 2012. doi: 10.1111/j.1467-8659.2012.03094.x
+
+[8] A. Dasgupta and R. Kosara. Pargnostics: screen-space metrics for parallel coordinates. IEEE Transactions on Visualization and Computer Graphics, 16(6):1017-1026, 2010. doi: 10.1109/TVCG.2010.184
+
+[9] H. Ding, M. Sharpnack, C. Wang, K. Huang, and R. Machiraju. Integrative cancer patient stratification via subspace merging. Bioinformatics, 35(10):1653-1659, 2019. doi: 10.1093/bioinformatics/bty866
+
+[10] J. W. Emerson, W. A. Green, B. Schloerke, J. Crowley, D. Cook, H. Hofmann, and H. Wickham. The generalized pairs plot. Journal of Computational and Graphical Statistics, 22(1):79-91, 2012. doi: 10.1080/10618600.2012.694762
+
+[11] D. Feng, L. Kwock, Y. Lee, R. M. Taylor, and R. M. T. Ii. Matching visual saliency to confidence in plots of uncertain data. IEEE Transactions on Visualization and Computer Graphics, 16(6):980-989, 2010. doi: 10.1109/TVCG.2010.176
+
+[12] B. J. Ferdosi and J. B. Roerdink. Visualizing high-dimensional structures by dimension ordering and filtering using subspace analysis. Computer Graphics Forum, 30(3):1121-1130, 2011. doi: 10.1111/j.1467-8659.2011.01961.x
+
+[13] Fraunhofer Institute for Solar Energy Systems ISE. Home - Energy Charts. https://energy-charts.de/, 2019. [Online; accessed 2019-09-26].
+
+[14] Y.-H. Fua, M. O. Ward, and E. A. Rundensteiner. Hierarchical parallel coordinates for exploration of large datasets. In IEEE Conference on Visualization, pp. 43-50, 1999.
+
+[15] J. Görtler, C. Schulz, D. Weiskopf, and O. Deussen. Bubble treemaps for uncertainty visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1):719-728, 2018. doi: 10.1109/TVCG.2017.2743959
+
+[16] X. Han. Cancer molecular pattern discovery by subspace consensus kernel classification, pp. 55-65, 2007. doi: 10.1142/9781860948732_0010
+
+[17] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, 1968. doi: 10.1109/TSSC.1968.300136
+
+[18] J. A. Hartigan. Printer graphics for clustering. Journal of Statistical Computation and Simulation, 4(3):187-213, 1975. doi: 10.1080/00949657508810123
+
+[19] J. Heinrich, Y. Luo, A. E. Kirkpatrick, and D. Weiskopf. Evaluation of a bundling technique for parallel coordinates. In International Conference on Information Visualization Theory and Applications (IVAPP), pp. 594-602, 2012.
+
+[20] J. Heinrich, Y. Luo, A. E. Kirkpatrick, H. Zhang, and D. Weiskopf. Evaluation of a bundling technique for parallel coordinates. arXiv preprint arXiv:1109.6073, 2011.
+
+[21] J. Heinrich and D. Weiskopf. Continuous parallel coordinates. IEEE Transactions on Visualization and Computer Graphics, 15(6):1531-1538, 2009. doi: 10.1109/TVCG.2009.131
+
+[22] J. Heinrich and D. Weiskopf. State of the art of parallel coordinates. In M. Sbert and L. Szirmay-Kalos, eds., Eurographics 2013 - State of the Art Reports, pp. 95-116. The Eurographics Association, 2013. doi: 10.2312/conf/eg2013/stars/095-116
+
+[23] D. Holten and J. J. van Wijk. Force-directed edge bundling for graph visualization. Computer Graphics Forum, 28(3):983-990, 2009. doi: 10.1111/j.1467-8659.2009.01450.x
+
+[24] P. Horton and K. Nakai. A probabilistic classification system for predicting the cellular localization sites of proteins. In International Conference on Intelligent Systems for Molecular Biology, vol. 4, pp. 109-115, 1996.
+
+[25] D. Ienco and G. Bordogna. Fuzzy extensions of the DBSCAN clustering algorithm. Soft Computing, 22(5):1719-1730, 2016. doi: 10.1007/s00500-016-2435-0
+
+[26] A. Inselberg. The plane with parallel coordinates. The Visual Computer, 1(2):69-91, 1985. doi: 10.1007/BF01898350
+
+[27] A. Inselberg. Parallel coordinates: visual multidimensional geometry and its applications. Springer-Verlag, New York, 2009.
+
+[28] A. Inselberg and B. Dimsdale. Parallel coordinates for visualizing multi-dimensional geometry. In Computer Graphics, pp. 25-44. Springer Japan, 1987. doi: 10.1007/978-4-431-68057-4_3
+
+[29] P. Jaccard. Étude comparative de la distribution florale dans une portion des alpes et des jura. Bulletin de la Société Sciences Nat, 37:547-579, 1901.
+
+[30] J. Johansson and M. D. Cooper. A screen space quality method for data abstraction. Computer Graphics Forum, 27(3):1039-1046, 2008. doi: 10.1111/j.1467-8659.2008.01240.x
+
+[31] J. Johansson, P. Ljung, M. Jern, and M. D. Cooper. Revealing structure within clustered parallel coordinates displays. In IEEE Symposium on Information Visualization, pp. 125-132, 2005. doi: 10.1109/INFVIS.2005.1532138
+
+[32] R. Kosara, F. Bendix, and H. Hauser. Parallel sets: interactive exploration and visual analysis of categorical data. IEEE Transactions on Visualization and Computer Graphics, 12(4):558-568, 2006. doi: 10.1109/TVCG.2006.76
+
+[33] A. Lhuillier, C. Hurter, and A. Telea. State of the art in edge and trail bundling techniques. Computer Graphics Forum, 36(3):619-645, 2017. doi: 10.1111/cgf.13213
+
+[34] D. Limberger, C. Fiedler, S. Hahn, M. Trapp, and J. Döllner. Evaluation of sketchiness as a visual variable for 2.5D treemaps. In International Conference on Information Visualisation (IV), pp. 183-189, 2016. doi: 10.1109/IV.2016.61
+
+[35] S. Liu, D. Maljovec, B. Wang, P. T. Bremer, and V. Pascucci. Visualizing high-dimensional data: advances in the past decade. IEEE Transactions on Visualization and Computer Graphics, 23(3):1249-1268, 2017. doi: 10.1109/TVCG.2016.2640960
+
+[36] K. T. McDonnell and K. Mueller. Illustrative parallel coordinates. Computer Graphics Forum, 27(3):1031-1038, 2008. doi: 10.1111/j.1467-8659.2008.01239.x
+
+[37] R. Netzel, J. Vuong, U. Engelke, S. O'Donoghue, D. Weiskopf, and J. Heinrich. Comparative eye-tracking evaluation of scatterplots and parallel coordinates. Visual Informatics, 1(2):118-131, 2017. doi: 10.1016/j.visinf.2017.11.001
+
+[38] L. M. K. Padilla, P. S. Quinan, M. D. Meyer, and S. H. Creem-Regehr. Evaluating the impact of binning 2D scalar fields. IEEE Transactions on Visualization and Computer Graphics, 23(1):431-440, 2017. doi: 10.1109/TVCG.2016.2599106
+
+[39] G. Palmas, M. Bachynskyi, A. Oulasvirta, H. P. Seidel, and T. Weinkauf. An edge-bundling layout for interactive parallel coordinates. In IEEE Pacific Visualization Symposium, pp. 57-64, 2014. doi: 10.1109/PacificVis.2014.40
+
+[40] A. T. Pang, C. M. Wittenbrink, and S. K. Lodha. Approaches to uncertainty visualization. The Visual Computer, 13(8):370-390, 1997. doi: 10.1007/s003710050111
+
+[41] W. Peng, M. O. Ward, and E. A. Rundensteiner. Clutter reduction in multi-dimensional data visualization using dimension reordering. In IEEE Symposium on Information Visualization, pp. 89-96, 2004. doi: 10.1109/INFVIS.2004.15
+
+[42] T. K. Porter and T. Duff. Compositing digital images. In Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 253-259, 1984. doi: 10.1145/800031.808606
+
+[43] J. Rit. Propagating temporal constraints for scheduling. In T. Kehler, ed., 5th National Conference on Artificial Intelligence. Volume 1: Science., pp. 383-388. Morgan Kaufmann, 1986.
+
+[44] N. Rodrigues, R. Netzel, K. R. Ullah, M. Burch, A. Schultz, B. Burger, and D. Weiskopf. Visualization of time series data with spatial context: communicating the energy production of power plants. In International Symposium on Visual Information Communication and Interaction, pp. 37-44, 2017. doi: 10.1145/3105971.3105982
+
+[45] A. Tatu, G. Albuquerque, M. Eisemann, J. Schneidewind, H. Theisel, M. Magnor, and D. Keim. Combining automated analysis and visualization techniques for effective exploration of high-dimensional data. In IEEE Symposium on Visual Analytics Science and Technology, pp. 59-66, 2009. doi: 10.1109/VAST.2009.5332628
+
+[46] S. van den Elzen and J. J. van Wijk. BaobabView: interactive construction and analysis of decision trees. In IEEE Conference on Visual Analytics Science and Technology, pp. 151-160, 2011. doi: 10.1109/VAST.2011.6102453
+
+[47] C. Vehlow, F. Beck, P. Auwärter, and D. Weiskopf. Visualizing the evolution of communities in dynamic graphs. Computer Graphics Forum, 34(1):277-288, 2015. doi: 10.1111/cgf.12512
+
+[48] J. Wang, X. Liu, H.-W. Shen, and G. Lin. Multi-resolution climate ensemble parameter analysis with nested parallel coordinates plots. IEEE Transactions on Visualization and Computer Graphics, 23(1):81-90, 2017. doi: 10.1109/tvcg.2016.2598830
+
+[49] M. O. Ward. XmdvTool Home Page: Downloads: Data Sets. http://davis.wpi.edu/xmdv/datasets.html, 2019. [Online; accessed 2019-03-31].
+
+[50] E. J. Wegman. Hyperdimensional data analysis using parallel coordinates. Journal of the American Statistical Association, 85(411):664-675, 1990. doi: 10.1080/01621459.1990.10474926
+
+[51] P. C. Wong and R. D. Bergeron. 30 years of multidimensional multivariate visualization. Scientific Visualization, Overviews, Methodologies, and Techniques, 2:3-33, 1994.
+
+[52] X. Zhao and A. Kaufman. Structure revealing techniques based on parallel coordinates plot. The Visual Computer, 28(6):541-551, 2012. doi: 10.1007/s00371-012-0713-0
+
+[53] H. Zhou, W. Cui, H. Qu, Y. Wu, X. Yuan, and W. Zhuo. Splatting the lines in parallel coordinates. Computer Graphics Forum, 28(3):759-766, 2009. doi: 10.1111/j.1467-8659.2009.01476.x
+
+[54] H. Zhou, X. Yuan, H. Qu, W. Cui, and B. Chen. Visual clustering in parallel coordinates. Computer Graphics Forum, 27(3):1047-1054, 2008. doi: 10.1111/j.1467-8659.2008.01241.x
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d1bb22f052f447e61610dcc39e56fc3aeafcd6ae
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/oVHjlwLkl-/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,283 @@
+§ CLUSTER-FLOW PARALLEL COORDINATES: TRACING CLUSTERS ACROSS SUBSPACES
+
+Nils Rodrigues* Christoph Schulz* Antoine Lhuillier† Daniel Weiskopf*
+
+Visualization Research Center (VISUS)
+
+University of Stuttgart, Germany
+
+
+Figure 1: Our Cluster-Flow Parallel Coordinates Plot (CF-PCP) combines advantages of regular Parallel Coordinates Plots (PCPs) and Scatter Plots (SPs) as shown here using 2,000 generated points in dimensions D1-D4. CF-PCPs are read from left to right: the data is grouped by pairwise dimensional clustering, i.e., stacked axes beneath $D_i$ show all clusters from subspace $D_{i-1} \times D_i$. CF-PCPs allow for salient illustration of clusters and traceability across multiple dimensions alike. Thus, we argue that our technique can reveal patterns that are difficult to perceive from a linked combination of SPs and traditional PCPs (cf. red and blue data points).
+
+§ ABSTRACT
+
+We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization application domains-Information visualization
+
+Graphics Interface Conference 2020
+
+Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+§ 1 INTRODUCTION
+
+The analysis of multivariate or multidimensional data is a longstanding research topic in visualization [51]. Nowadays, with multivariate data being ubiquitous, a good interplay of automated data analysis and visualization is very important for gaining insights. In the realm of data analysis, subspace clustering allows analysts to find cross-dimensional relationships between data points, leading to useful classification methods [9, 16]. On the other side of the spectrum stands an important class of visualization techniques: parallel coordinates plots (PCPs) [22, 27]. They play an important role in visualizing multivariate data as their core concept of parallel axes is easy to grasp and, unlike scatter plots, they scale for increasing dimensionality. Typically, they render each data point as a polyline, curve, or density field [21]. Existing techniques combine PCPs with either a single global cluster assignment for each data point [2] or with clusters in 1D data dimensions [39]. They then resort to edge bundling or color coding to show cluster memberships [22].
+
+In this work, we aim to combine subspace clustering with the visualization advantages of PCPs. To this end, we propose a novel approach that facilitates the visualization of clusters in 2D subspaces while still maintaining information about the characteristics of individual data elements in PCPs: the Cluster-Flow Parallel Coordinates Plot (CF-PCP). As depicted in Figure 1, our approach works on two visual levels inherent to the image. On the overview level, a coarse visualization allows users to trace and follow the evolution of subspace clusters across dimensions by duplicating axes for each cluster and stacking them vertically. On the detail level, it maintains the readability of correlation of data points and other data characteristics by ensuring that the incoming and outgoing links reflect the original information from regular PCPs, i.e., our approach keeps the original slopes of each data element. We demonstrate that CF-PCPs allow users to trace both hard and fuzzy subspace clusters across dimensions. Moreover, we propose new metrics to optimize dimension ordering in CF-PCPs based on subspace clusters and to reduce crossings between clusters. Our main contributions are
+
+*e-mail: firstname.lastname@visus.uni-stuttgart.de
+
+†e-mail: antoine.lhuillier@gmail.com
+
+
+Figure 2: Comparison of different clustering and PCP techniques using the NetPerf data set [49]. The edge-bundling layout (a) replaces lines entirely and only draws a single band between each pair of neighboring 1D clusters [39]. Our cluster-flow parallel coordinates (b) draw individual lines between 2D clusters of neighboring axes. Clusters are arranged vertically with our crossing minimization algorithm from Section 4.3. Illustrative parallel coordinates (c) emphasize clusters over all dimensions using colored lines and force-based bundling [36]. Figures (a) and (c) © 2014 IEEE. Reprinted, with permission, from [39].
+
+ * a new PCP with density rendering for subspace clustering that preserves the readability of correlations,
+
+ * an approach to visualizing uncertainty from fuzzy clustering,
+
+ * an A*-based algorithm for the optimal layout with respect to a novel set of metrics that reflect the compatibilities of clusters between dimensions, and
+
+ * a sample implementation of CF-PCP¹ using FuzzyDBSCAN² for subspace clustering.
+
+§ 2 RELATED WORK
+
+Multivariate or multidimensional visualization is a major and vibrant research area of the visualization community. Respective survey papers are available from Wong and Bergeron [51] and Liu et al. [35]. Our work addresses the visual mapping of multivariate data to PCPs; therefore, the discussion of related work focuses on parallel coordinates, in particular, in combination with clustering. Parallel coordinates for data analysis go back to seminal work by Inselberg [26, 28] and later by Wegman [50]. For a comprehensive presentation of PCPs, we refer to Inselberg's book [27]. Despite its popularity, the underlying geometry of a PCP coupled with a high number of data points can quickly lead to overdraw and thus visual clutter [22]. This makes it hard for users to explore and analyze patterns in the data set. To address these challenges, researchers have investigated cluster visualization and the saliency of underlying patterns.
+
+A first approach to cluster visualization in PCPs is to explicitly compute clusters in the data set and display them using different visual encodings. Inselberg [26] suggested drawing the envelope of the respective lines in parallel coordinates using the convex hull. Fua et al. [14] investigated rendering clusters with convex quadrilaterals resembling the axis-aligned bounding box of a cluster. More recently, Palmas et al. [39] proposed pre-computing 1D clusters for each dimension using a kernel density estimation approach and then linking neighboring axes using compact tubes in which the width encodes the number of data points in the cluster (see Figure 2a). They then used color coding to mark the clusters of a chosen dimension. Their technique produces a highly summarized and largely clutter-free visualization, reminiscent of a stream visualization such as baobab trees [46] as well as timelines by Vehlow et al. [47]. Although visually similar, these techniques differ from our approach as they do not keep the correlation details of data elements encoded in the PCP links, nor do they show the cluster flow between subspaces.
+
+Another approach changes the visual mapping of the lines to implicitly show clusters. Here, edge bundling [33] has been shown to be an effective technique that helps users find clusters and patterns within PCPs [20]. Illustrative parallel coordinates [36] bundle PCPs in image space by pre-clustering the data set (via $k$-means) and then rendering the lines as curves using B-splines (see Figure 2c). Zhou et al. [54] used a variant of force-directed edge bundling [23] to directly compute the clusters based on the patterns emerging from the bundling algorithm. Heinrich et al. [20] extended previous work [36, 54] by providing $C^1$-continuity between B-splines to emphasize end-to-end tracing. Our approach also changes the visual mapping of the links and reduces clutter. However, our overall composition of links differs because we duplicate axes and focus on keeping information about the correlation of data elements in the clusters.
+
+Another approach to improve PCPs is to change the order of dimensions. While ordering can also be applied to other visualization techniques, it is especially inherent to the construction of PCPs, where the order of axes directly affects the revealed patterns [50]. Pargnostics [8] uses metrics such as the number of crossings, the angle of crossings, or the parallelism between dimensions for ordering. Tatu et al. [45] presented a method to rank axes in parallel coordinates using features of the Hough transform. Ankerst et al. [1] proposed a pairwise dimension similarity based on Euclidean distance. Peng et al. [41] defined a clutter-based measure and used it in an A* algorithm to order the dimensions. Ferdosi and Roerdink [12] generalized the notion of dimensional ordering of axes by expanding the concept of a pairwise similarity measure to a subspace similarity measure using a predefined quality criterion. Later, Zhao and Kaufman [52] proposed several clustering techniques to optimize the ordering of axes in PCPs, e.g., a $k$-means or a spectral approach. Tatu et al. [45] also proposed subspace similarity measures based on dimension overlap and data topology. In general, all these methods focus on subspaces. Our approach differs from the previous ones as our duplication of axes emphasizes flow between clusters and requires the definition of new similarity measures to order both data dimensions and clusters.
+
+¹ https://github.com/NilsRodrigues/clusterflow-pcp
+
+² https://github.com/schulzch/fuzzy_dbscan
+
+
+Figure 3: Construction of CF-PCPs. Scatter plots are well suited to show clusters of multivariate data in 2D subspaces (a) but do not readily show cluster relations across more dimensions. We extend PCPs by creating an individually duplicated axis for each cluster to show high-level flow between clusters (b). Showing the underlying data points as individual lines increases the available level of detail (c). To maintain visual data patterns and the perception of correlations, we restore the original PCP's line angles near the axes (d).
+
+Our paper also addresses the problem of visualizing uncertainty [5, 6, 40] from fuzzy clustering. Techniques such as hatching and sketchiness, as specialized forms of depicting spatial uncertainty, have gained a lot of attention [3, 15, 34]. However, there are only a few works on uncertainty in the context of parallel coordinates. Dasgupta et al. [7] conceptualized a taxonomy of different types of visual uncertainty inherent to PCPs. Feng et al. [11] focused on showing uncertainty in the data by mapping confidence to saliency in order to reduce misinterpretation, relying on a density representation of uncertainty. We adopt color mapping to visualize the fuzziness of clustering in PCPs.
+
+To the best of our knowledge, there is no previous work that combines all mentioned topics. Kosara et al. [32] also visualize flow but for categorical data instead of clusters. Their portrait layout of PCPs also does not calculate an optimal order for dimensions or categories. Nested PCPs by Wang et al. [48] look very similar but have a global clustering instead of using subspaces. They are meant for ensemble visualization in conjunction with other plots and use global color mapping that only depends on a single dimension.
+
+§ 3 MODEL AND OVERVIEW
+
+We first provide an overview and outline intermediate steps of our technique (see Figure 3). We assume that we have a set of data points in a multivariate data set as input: $P = \{\vec{p}_i \in \mathbb{R}^n\}$. An algorithm assigns each of these points a degree of membership to clusters, resulting in tuples $(\vec{p}_i, m_{k,i})$. Here, $m_{k,i} \in [0,1]$ describes the degree to which data point $\vec{p}_i$ belongs to the cluster with index $k$. Hard clustering is a special case of this, with $m_{k,i} \in \{0,1\}$. In contrast, soft labeling allows for partial memberships. A single cluster is then defined as $c_k = \{(\vec{p}_i, m_{k,i}) \mid m_{k,i} > 0\}$, and $C = \{c_k\}$ comprises all clusters. We focus on subspace clustering because distance measures lose expressiveness as dimensionality increases, leading to superfluously fuzzy or inconceivable clusters. Hence, we compute clusters for each pair of dimensions $(\mathbb{R}_i, \mathbb{R}_j)$ as shown in Figure 3a, i.e., we look at sets of 2D subspaces. Our current implementation employs FuzzyDBSCAN [25], an extension of the classic and popular DBSCAN algorithm, for fuzzy clustering. However, our visualization is independent of the chosen clustering algorithm; it only assumes that clustering provides labels for each data point. The data that serves as source for clustering is shown beneath the axes.
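
To make the data model concrete, the following Python sketch produces fuzzy memberships $m_{k,i}$ for one 2D subspace and derives the hard special case. It is not FuzzyDBSCAN [25]: the inverse-distance weighting and the explicit `centers` parameter are simplifications introduced only to illustrate the tuple structure $(\vec{p}_i, m_{k,i})$.

```python
import math

def soft_memberships(points_2d, centers, sharpness=2.0):
    """Toy fuzzy assignment for one 2D subspace (R_i x R_j).

    Returns, for each point p_i, a list of degrees m_{k,i} in [0, 1],
    one per cluster k. Inverse-distance weighting is a stand-in for
    FuzzyDBSCAN; 'centers' is an illustrative parameter that a
    density-based algorithm would not need.
    """
    memberships = []
    for p in points_2d:
        w = [1.0 / (math.dist(p, c) ** sharpness + 1e-9) for c in centers]
        s = sum(w)
        memberships.append([wi / s for wi in w])
    return memberships

def harden(memberships):
    """Hard clustering as the special case m_{k,i} in {0, 1}."""
    hard = []
    for ms in memberships:
        k = max(range(len(ms)), key=ms.__getitem__)
        hard.append([1.0 if i == k else 0.0 for i in range(len(ms))])
    return hard
```

Note that this toy weighting normalizes memberships to sum to 1 per point, which the general model above does not require.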
+
+The goal of CF-PCPs is to show the information about subspace clusters and the actual data points alike. On a coarse level, they show the number of clusters in the subspaces and how data elements virtually flow from one cluster to another (see Figure 3b). Our approach to showing this flow is based on axis duplication: instead of a single (vertical) axis for a data dimension, we place clones of this axis on top of each other, i.e., at the same horizontal position. Each data cluster gets its own axis clone. Through this duplication, we can render the streams of data elements between clusters, which are visible even if one does not focus on individual lines in the CF-PCPs but just looks at the visualization as a whole. Section 4 describes the details of the axis duplication and how we can achieve an optimal layout of the data dimensions and cluster ordering.
+
+Figure 4: Illustration of two clusters, using different coordinate systems. The scatter plot (a) uses Cartesian coordinates to clearly show them as dots placed along two straight lines. The clusters are hard to distinguish in a regular PCP (b) without additional visual variables, e.g., color. Our cluster-flow layout duplicates PCP axes for each cluster to reduce clutter and increase readability (c).
+
+By rendering an individual line for each data point, we include further details (see Figure 3c) in a fine-grained level of our proposed visualization. We use a curve model (see Figure 3d), as described in Section 5, that allows us to infer correlations and other data characteristics for individual data elements, similar to regular PCPs. Furthermore, we present a density rendering and color mapping approach that allows us to visualize large data sets and uncertainty from fuzzy clustering (see Sections 5.2 and 5.3).
+
+§ 4 LAYOUT
+
+The key aspect of our layout of CF-PCPs is the duplication of axes (Section 4.1) because this serves as a basis to visually separate clusters and show the flow between clusters in the different subspaces. Each ordering of axes results in a unique flow pattern between clusters; therefore, we present a new method to optimize the horizontal axis order according to patterns in data flow (Section 4.2). Section 4.3 introduces a technique to vertically order the clusters on each data axis (y-axis).
+
+§ 4.1 AXIS DUPLICATION
+
+CF-PCPs extend traditional PCPs by using the vertical image space to show cluster assignments. Unlike any other PCP variant, we visualize each cluster on its own (local) axis. As Figure 4 shows, we create two duplicates of the axes to draw the two clusters and stack them vertically.
+
+Figure 5: Various reading directions with data points colored according to clusters in $B \times C$ . Without a reading direction, we obtain overlapping axes and discontinuous lines (a) because the 2D subspaces $A \times B$ and $B \times C$ do not have the same number of clusters. With a reading direction, axes show the clusters within the subspace of the current dimension and its neighbor to the left (b) or right (c).
+
+Axis duplication becomes necessary because we aim to show clusters in 2D subspaces, not just in 1D [39]. For the latter case of 1D clustering, we could just squeeze the data vertically to bundle the lines into clusters on each axis individually. However, this approach would only work if the clusters are already separable in a single dimension. Figure 4a shows an example with two clusters that are visible in a scatter plot and linearly separable in two dimensions but not along 1D axes. Here, traditional PCPs lead to intertwined visualizations of the two clusters, and even axis scaling could not separate them (Figure 4b). With axis duplication (Figure 4c), we now see two clearly distinguishable bundles of lines that correspond to the two clusters, i.e., clusters are easy to recognize. However, there is yet another important benefit: the PCP lines can now be inspected for each cluster separately and, thus, we can investigate correlations or other data characteristics independently within each cluster.
+
+Up to now, we only handled a pair of two data dimensions, but parallel coordinates support multivariate data. Figure 5 shows an example sequence with three dimensions. In subspace $A \times B$ , there are two clusters, so we create two copies. $B \times C$ contains three clusters, so we draw three copies. This results in overplotted axes for dimension $B$ and interrupted polylines for the data points (see Figure 5a). Therefore, we introduce a reading direction left-to-right (LTR) to create as many copies of axis $B$ as there are clusters in $A \times B$ . We duplicate axis $C$ as often as there are clusters in $B \times C$ (see Figure 5b). Since axis $A$ has no pair to the left, there is no explicit clustering and no duplication. While the opposite reading direction works accordingly (see Figure 5c), we chose a consistent LTR layout for all figures in this paper. Beneath the axes, CF-PCPs also show the two data dimensions that were used for clustering to help the viewer identify the reading direction.
+
+Cloning the axes and stacking them vertically increases the plot area and vertically shears the lines that represent the data points, especially for large numbers of clusters. We address both issues by scaling down cloned axes with a root function. Assuming the height ${h}_{1}$ of a single regular PCP axis and $n$ as the number of stacked axes, we get a total height of
+
+$$
+h\left( n\right) = n \cdot {h}_{1} \cdot {\left( 1/n\right) }^{r}, \tag{1}
+$$
+
+where $r \in \left\lbrack {0,1}\right\rbrack$ controls the root scaling. Selecting $r = 0$ will result in no downscaling at all, while $r = 1$ will keep the original height without any growth. Spaces between axes are determined with
+
+$$
+s\left( n\right) = {s}_{2} \cdot {h}_{1} \cdot {\left( 1/\left( n - 1\right) \right) }^{r}, \tag{2}
+$$
+
+where ${s}_{2}$ is the percentage of ${h}_{1}$ we want to use as space between two duplicates. The remaining space is then used for the actual axis clones. We chose $r = {0.8}$ and ${s}_{2} = {0.1}$ for figures in this paper to balance between growing height and readability.
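Equations (1) and (2) translate directly into code. A minimal sketch with the paper's choices $r = 0.8$ and ${s}_{2} = 0.1$ as defaults (the function names are ours):

```python
def total_height(n, h1, r=0.8):
    """Total height of n stacked axis clones, Eq. (1): n * h1 * (1/n)**r."""
    return n * h1 * (1.0 / n) ** r

def axis_spacing(n, h1, s2=0.1, r=0.8):
    """Space between adjacent clones, Eq. (2): s2 * h1 * (1/(n-1))**r."""
    return s2 * h1 * (1.0 / (n - 1)) ** r
```

With `r=0` the stack grows linearly (no downscaling at all); with `r=1` the total height stays at `h1`.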
+
+Axes in regular PCPs are normalized to only display the used range of each data dimension. Multiple labeled tick marks enable viewers to read values from data points at their intersections with the axes. As the number of clusters rises, axes in CF-PCPs shrink. Retaining the same number of labeled tick marks as in regular PCPs would add more overdraw to the already densely packed area around the axes. Therefore, we limit the tick marks to the maximum and minimum of the data range in each data dimension and progressively scale the label size according to the space between axes.
+
+Figure 6: Tree representation of our model for ordering of dimensions ${d}_{1}$ to ${d}_{4}$ . The blue numbers beneath the dimension names represent costs (left: individual; right: accumulated). Levels 0 and 1 have no costs because there are no pairs of clusters yet. The path ${d}_{1},{d}_{2},{d}_{4},{d}_{3}$ is optimal as it is at the deepest level and others are at least as expensive. Missing paths at levels 3 and 4 have not been expanded yet because their costs were never minimal.
+
+§ 4.2 DIMENSION ORDERING
+
+Just as with regular parallel coordinates, the visual patterns in a CF-PCP depend heavily on the order of data axes. For this reason, we propose a method for optimizing the order automatically. Previous work investigated various metrics, such as the number of crossings, angles between lines, and correlation strength, to optimize the order [8, 30]. At first glance, we could have simply reused existing algorithms or defined dimension ordering as an overlap-removal problem for the polylines in the PCP. However, CF-PCPs do not only show the underlying data points but also depict flow between subspace clusters on a coarse scale. For example, to calculate the costs of chaining two sets of clusters between display axes ${a}_{i}$ and ${a}_{i + 1}$ , we also need to know the previous axis ${a}_{i - 1}$ . This is due to our 2D subspace clustering and LTR reading direction: clusters shown at ${a}_{i}$ are calculated with data from dimensions ${d}_{i - 1} \times {d}_{i}$ . Only then can we calculate costs with the next axis ${a}_{i + 1}$ , which contains clusters from ${d}_{i} \times {d}_{i + 1}$ .
+
+As shown in Figure 6, we choose a tree $T$ as the primary data structure for our proposed order optimization. We start by creating the root node at level 0 and add a child for each data dimension at level 1. Then, we expand each leaf recursively by attaching a node for each unused data dimension. Each level corresponds to an index in the sequence of display dimensions. This way, for $d$ data dimensions we obtain a tree with $d + 1$ levels, where each node at level $l$ has $d - l$ children. The tree contains $d!$ different paths, corresponding to all permutations of the data dimensions. Costs from one set of clusters to the next depend only on three consecutive data dimensions. Thus, there are only $d \cdot \left( {d - 1}\right) \cdot \left( {d - 2}\right)$ different costs to compute, which can be cached and reused in the tree traversal.
+
+We apply the A* algorithm [17] to compute the shortest path in $T$ and implement it by lazily expanding a tree node only when it is the leaf with the lowest cumulative costs. The heuristic $d - \mathrm{level}$ , i.e., the number of levels that remain to be expanded, estimates the remaining costs. This has implications on the metric we use to define the distance between two sets of clusters: the minimum cost from a parent to a child has to be 1 for the heuristic to work. With this model, the A* algorithm is guaranteed to find an optimal solution. We combine lazy evaluation with caching of cost values to compute the A* optimization efficiently.
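The search described above can be sketched as a lazy A* over partial dimension orders. This is an illustrative reimplementation, not the authors' code; it assumes a cost function on three consecutive dimensions that never falls below 1, so the remaining-levels heuristic is admissible, and it treats the first two placements as free, matching the cost-free levels 0 and 1.

```python
import heapq
from functools import lru_cache

def order_dimensions(d, triple_cost):
    """Minimize the summed cost of placing dimension c after the pair (a, b).

    `triple_cost(a, b, c)` must be >= 1; only the d*(d-1)*(d-2) distinct
    triples are ever evaluated thanks to caching.
    """
    cost = lru_cache(maxsize=None)(triple_cost)
    # Queue entries: (g + h, g, partial order); h = remaining costed expansions.
    queue = [(d - 2, 0, (i,)) for i in range(d)]
    heapq.heapify(queue)
    while queue:
        _, g, path = heapq.heappop(queue)
        if len(path) == d:                 # first complete leaf popped is optimal
            return list(path), g
        for nxt in range(d):
            if nxt in path:
                continue
            step = cost(path[-2], path[-1], nxt) if len(path) >= 2 else 0
            h = d - max(len(path) + 1, 2)  # each future expansion costs >= 1
            heapq.heappush(queue, (g + step + h, g + step, path + (nxt,)))

# Toy cost favoring consecutive dimension indices (purely illustrative).
order, total = order_dimensions(4, lambda a, b, c: 1 + abs(c - b))
```

Nodes are expanded only when popped, so most of the $d!$ permutation tree is never materialized.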
+
+Figure 7: Simplified CF-PCP with cluster sets ${c}_{i}$ and ${c}_{j}$ . Top and bottom (black) never cross any other bundles. A red reference bundle only crosses lines that start below and end above (blue).
+
+Having decided on a basic optimization algorithm, we choose a measure for the costs of displaying clusters next to each other. CF-PCPs are targeted at showing how clusters evolve from one subspace to the other and how the data points flow from one to another. Therefore, we propose a metric that focuses on the similarities and differences of clusters. Let ${c}_{i}$ and ${c}_{j}$ be two sets of clusters, containing $n = \left| {c}_{i}\right|$ and $m = \left| {c}_{j}\right|$ clusters, respectively. In a first step, we generate a similarity matrix using the Jaccard indices [29]:
+
+$$
+{S}_{{c}_{i},{c}_{j}} = \left( \begin{matrix} J\left( {{c}_{i,1},{c}_{j,1}}\right) & \cdots & J\left( {{c}_{i,1},{c}_{j,m}}\right) \\ \vdots & \ddots & \vdots \\ J\left( {{c}_{i,n},{c}_{j,1}}\right) & \cdots & J\left( {{c}_{i,n},{c}_{j,m}}\right) \end{matrix}\right) . \tag{3}
+$$
+
+The best match of cluster sets would yield few elements with value 1 and many with 0. Bad matches have many data points changing between clusters, leading to a matrix with many values $< 1$ . Since we prefer to see dimension orders with many good matches, we subtract the matrix’s mean value from all its elements ${s}_{x,y}$ . We then square all results and sum them up to get a scalar similarity value between the sets of clusters:
+
+$$
+{\operatorname{similarity}}_{i,j} = \mathop{\sum }\limits_{\substack{{1 \leq x \leq n} \\ {1 \leq y \leq m} }}{\left\lbrack {s}_{x,y} - \operatorname{mean}\left( {S}_{{c}_{i},{c}_{j}}\right) \right\rbrack }^{2}. \tag{4}
+$$
+
+To obtain a cost function that fits the requirements of A* and our tree $T$ , we invert the similarity. However, ${\text{similarity}}_{i,j}$ can be 0 when all matrix entries have the same value or when we are comparing sets of size 1. Thus, we define the cost function as
+
+$$
+{\text{horizontalCost}}_{i,j} = 1 + {\left\lbrack {\text{similarity}}_{i,j} + 1\right\rbrack }^{-1} \geq 1 \tag{5}
+$$
+
+to avoid division by 0. Our proposed metric helps the optimization algorithm avoid placing dimensions next to each other when there is no cluster or only a single cluster between them.
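Equations (3) to (5) can be sketched as follows; for simplicity the Jaccard index is computed on hard (set-valued) cluster memberships, and the function names are ours:

```python
def jaccard(a, b):
    """Jaccard index of two clusters given as sets of point indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def horizontal_cost(clusters_i, clusters_j):
    """Cost of placing cluster set j next to cluster set i (Eqs. (3)-(5))."""
    S = [[jaccard(ci, cj) for cj in clusters_j] for ci in clusters_i]
    mean = sum(sum(row) for row in S) / (len(S) * len(S[0]))
    similarity = sum((s - mean) ** 2 for row in S for s in row)  # Eq. (4)
    return 1 + 1.0 / (similarity + 1)                            # Eq. (5), in (1, 2]
```

A crisp one-to-one matching (e.g., `[{1, 2}, {3, 4}]` against itself) yields a low cost of 1.5, while a completely mixed matching such as `[{1, 3}, {2, 4}]` yields the maximum cost of 2.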
+
+§ 4.3 CLUSTER ORDERING
+
+We separate the optimization of the horizontal dimension order and the vertical cluster order because CF-PCPs aim to primarily display the data flow between subspace clusters. The list of available clusters for display is controlled by the sequence of dimensions but not by their internal vertical arrangement. Therefore, the order of dimensions is our highest priority. Afterward, we still have flexibility in choosing the vertical arrangement along each data dimension. Our approach to optimizing the vertical sequence is similar to the dimension ordering: employ A* to find an optimal path from tree level 1 to a leaf at level $d$ . The main difference lies in the way we generate the children of each node. When there are $m$ clusters to be arranged, we generate all their permutations and attach them as children to the parent node, which itself represents one permutation of the clusters of the previous dimension.
+
+We adopted a metric based on the number of line crossings as a cost function, which is popular for ordering in PCPs [8], and adapted it for the display of data flow from one cluster to another. The comparison of Figures 4b and 4c shows that lines within cluster pairs move closer to each other. This is caused by the axis scaling from Section 4.1 and creates the impression of bundles. Intertwined lines within such a bundle do not interfere with tracing the data flow, but crossing bundles are difficult to follow visually. Therefore, we define our metric to penalize these inter-bundle crossings.
+
+Figure 8: Composite line geometry in CF-PCPs. In regular parallel coordinates (a), lines have an angle $\gamma$ , depending on the underlying data point's values in each display dimension. Lines in CF-PCPs (b) have segments with the same angle in a zone close to the axes (red lines on gray background). These are then connected by a cubic Hermite curve with tangents T1 and T2.
+
+A bundle between clusters contains all shared points, e.g., ${c}_{i,1} \cap {c}_{j,1}$ . We assume that each line has a certainty $l \in \left\lbrack {0,1}\right\rbrack$ that describes its (possibly fuzzy) cluster membership. Fuzzy lines will not be as visible as certain ones and contribute less to overall clutter. See Section 5.3 for more details on how the certainty affects line rendering. Our metric sums up the certainty within each bundle into $L$ and uses it as an adjusted line count. This new count is used to populate a matrix with values for all bundles between neighboring dimensions $i$ and $j$ :
+
+$$
+{B}_{i,j} = \left( \begin{matrix} L\left( {{c}_{i,1} \cap {c}_{j,1}}\right) & \cdots & L\left( {{c}_{i,1} \cap {c}_{j,m}}\right) \\ \vdots & \ddots & \vdots \\ L\left( {{c}_{i,n} \cap {c}_{j,1}}\right) & \cdots & L\left( {{c}_{i,n} \cap {c}_{j,m}}\right) \end{matrix}\right) . \tag{6}
+$$
+
+Our approach calculates the total number of crossings using a method by Rit [43]: bundles only cross if they start lower and end higher (see Figure 7). The opposite case (from higher to lower) is just a different view on the intersection of the same lines. The actual calculation is done by multiplying each element ${b}_{x,y}$ of ${B}_{i,j}$ with the grand sum of the submatrix to its lower left. The first column and the last row do not have a submatrix to the lower left, so we do not need to calculate crossings for them. The resulting values are written into a matrix ${I}_{i,j}$ and all its elements ${i}_{x,y}$ are summed up. The final cost for placing one sequence of clusters next to another is then
+
+$$
+{\text{verticalCost}}_{i,j} = 1 + \mathop{\sum }\limits_{\substack{{1 \leq x \leq n} \\ {1 \leq y \leq m} }}{i}_{x,y} \geq 1. \tag{7}
+$$
+
+We start with a minimal vertical cost of 1 to ensure that the metric fits the requirements set by the level-based heuristic for A*. The width of the primary tree structure $T$ grows much faster than in Section 4.2. Therefore, we recommend using the lazy A* algorithm for the optimization of up to 15 clusters per 2D subspace and reverting to a greedy approach for more complex problems. For example, sorting clusters by the mean value of their contained data points approximates the vertical line layout from PCPs whilst retaining the advantages of their cluster-flow counterparts.
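The weighted crossing count from Equations (6) and (7) can be sketched with the lower-left-submatrix rule described above (an illustrative reimplementation, not the authors' code):

```python
def vertical_cost(B):
    """Weighted inter-bundle crossings (Eqs. (6)-(7)).

    B[x][y] is the summed line certainty L of the bundle from left cluster x
    to right cluster y. Two bundles cross iff one starts lower and ends
    higher, i.e., its partner lies in the submatrix to the lower left.
    """
    n, m = len(B), len(B[0])
    crossings = 0.0
    for x in range(n - 1):        # last row has no submatrix below it
        for y in range(1, m):     # first column has none to its left
            lower_left = sum(B[x2][y2]
                             for x2 in range(x + 1, n) for y2 in range(y))
            crossings += B[x][y] * lower_left
    return 1 + crossings          # Eq. (7): always >= 1
```

Two parallel bundles cost 1 (no crossings); swapping them into an X pattern costs 2.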
+
+§ 5 RENDERING
+
+At this point, CF-PCPs exist as a model that does not completely specify how to display lines and encode uncertainty. To be able to render actual visual output, we now address these aspects.
+
+§ 5.1 CURVE GEOMETRY
+
+Traditional PCPs draw data points as straight lines between two neighboring dimensions. If we do the same in CF-PCPs, we get a relatively clean visualization without much clutter (Figure 9a). Unfortunately, the naive approach of axis duplication also changes the line slopes with respect to traditional PCPs. In fact, the slope depends no longer just on the data values but much more on the cluster membership. A related issue is that the angular resolution is reduced because lines become "compressed" (i.e., closer in angles) if they belong to the same cluster. Therefore, it becomes harder to recognize correlations. A recent eye-tracking study [37] showed that participants focus primarily on the area around the axes when reading PCPs. Another study with extreme bundling in PCPs [19, 20] showed that the perception of data characteristics and correlation is even possible when participants could not use the center parts between axes. These findings give us an opportunity to address the aforementioned problems: we replace straight lines by composite connections (see Figure 8). The key to our model is that we start and finish the connections with short straight line segments with the same slope as in original PCPs. This creates a zone in the very important focal area around the axes that retains the same information as in regular PCPs.
+
+Figure 9: Comparison of line geometry in CF-PCPs. Straight lines (a) make it hard to recognize correlations within clusters. Curved composite lines (b) preserve information on correlations by starting and ending with the same angles as in regular PCPs. Adjusting tangents used in the central Hermite splines separates lines that would cover and occlude each other completely.
+
+Figure 10: Fuzzy clustering provides a single data point with multiple soft labels in various clusters. Our technique draws lines between all possible pairs of these clusters. Their opacity is calculated by multiplying the point's labels in both connected clusters. This spreads each data point's constant total density of 1 over all possible connections.
+
+In the center region, in-between data axes, we connect the straight segments with a cubic Hermite curve. Choosing the tangents of the curve to be identical to the original slope guarantees a smooth transition between segments. The result of this approach is shown in Figure 9b. Instead of using the same factor to scale the Hermite curve’s tangents ${T1}$ and ${T2}$ , we vary it between data points. The relative index of each row in the source data set is added to a base factor for ${T1}$ and subtracted for ${T2}$ . This spreads the composite lines along the direction they would have had in a classic PCP. This is very helpful when there are multiple rows with the same values in both neighboring data dimensions: the wider the lines spread, the more rows share the same values. The effect can be observed in the curves between the dimensions Oil and Biomass in Figure 15. In regular PCPs, this information would be lost in overplotting or would require a density rendering technique.
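The composite geometry can be sketched as follows; the axis-zone length `seg`, the tangent scale, and the sampling density are illustrative parameters, not values from the paper:

```python
def composite_line(p0, p1, slope0, slope1, seg=0.1, tangent_scale=1.0, steps=20):
    """Polyline from p0 to p1: short straight ends with the original PCP
    slopes, joined by a cubic Hermite curve whose tangents share those slopes."""
    a = (p0[0] + seg, p0[1] + seg * slope0)       # end of left straight part
    b = (p1[0] - seg, p1[1] - seg * slope1)       # start of right straight part
    t1 = (tangent_scale, tangent_scale * slope0)  # tangent T1 at a
    t2 = (tangent_scale, tangent_scale * slope1)  # tangent T2 at b
    pts = [p0, a]
    for i in range(1, steps):
        t = i / steps
        h00 = 2 * t**3 - 3 * t**2 + 1             # cubic Hermite basis
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        pts.append((h00 * a[0] + h10 * t1[0] + h01 * b[0] + h11 * t2[0],
                    h00 * a[1] + h10 * t1[1] + h01 * b[1] + h11 * t2[1]))
    pts += [b, p1]
    return pts
```

Varying `tangent_scale` per data row is what spreads otherwise identical lines apart in the center region.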
+
+Figure 11: Visualizing uncertainty in the clustering results. The label weights are binned into 3 ranges. Each bin is separately rendered to a density field and then mapped to color (a)-(c). Finally, the results are sorted by uncertainty and alpha blended (d).
+
+§ 5.2 DENSITY RENDERING
+
+CF-PCPs are designed to work with large data sets that can incur the problem of overplotting. We address this issue by adopting the established method of splatting [53] the lines and showing their density [2, 31]. This is achieved by splitting the rendering process into separate passes. The first pass uses additive blending to compute an intermediate density field. Since the densities typically cover a large dynamic range, we apply a nonlinear transformation before we render the final visualization. In our implementation, we apply a logarithmic mapping together with a logistic function to guarantee user-specified extrema within $\left\lbrack {0,1}\right\rbrack$ . The result is then multiplied with the line color's alpha value. The effect is best demonstrated in Figure 1, where individual lines can be traced in areas of both high and low density. We discuss the choice of color in the following context of uncertainty visualization.
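A minimal sketch of such a density mapping, assuming a `log1p` compression and a logistic curve centered at a user-chosen midpoint; the exact transfer function and parameter names are our assumptions, not the paper's implementation:

```python
import math

def density_to_alpha(density, midpoint=1.0, k=1.0):
    """Map an additive line density (>= 0) into (0, 1): logarithmic
    compression of the dynamic range, then a logistic function whose
    steepness k and midpoint are user-specified."""
    x = math.log1p(density)
    return 1.0 / (1.0 + math.exp(-k * (x - math.log1p(midpoint))))
```

The returned value is then multiplied with the line color's alpha, so both sparse and dense regions stay traceable.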
+
+§ 5.3 UNCERTAINTY
+
+Fuzzy clustering involves soft labeling of individual data points. In our implementation, we distinguished between no clusters at all (black axes in Figure 11), noise (red), and actual clusters (blue). This gives viewers additional information about the underlying algorithm's success in clustering the source data. Going further, more levels of certainty could be visualized as long as the axis colors remain well distinguishable.
+
+Soft clustering can also select multiple labels for a single point, weakly assigning it to multiple clusters, instead of a single strong result. We treat these labels as probabilities and, by definition, the total probability that a point exists is 100%, i.e., the sum of all assigned soft labels has to be 1. The clustering technique does not supply information on flow between sets of fuzzy clusters in separate subspaces. Therefore, we cannot infer whether a point moved between specific pairs of neighboring clusters. The probability that it belongs to one cluster or the other is our only information. We take this into account by rendering a single source data point as multiple lines to obtain a faithful visual representation. When a data row is labeled more than once in a 2D subspace, we draw a line between all neighboring clusters it belongs to (see Figure 10). We determine the opacity of each line by multiplying the label values of the clusters it connects. Our method corresponds to providing a field with probable paths for the movement of data points and results in an invariant total opacity of 1 for each point. This property is compatible with the rendering technique from Section 5.2 because it does not affect the total density.
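The opacity rule can be sketched in a few lines; note how the product rule keeps the total opacity of each data point at 1 whenever both label sets sum to 1:

```python
def fuzzy_line_opacities(left_labels, right_labels):
    """One line per pair of connected clusters; opacity is the product of
    the point's soft labels on both sides. Because each label dict sums
    to 1, the opacities also sum to 1, keeping the total density invariant."""
    return {(k, l): mk * ml
            for k, mk in left_labels.items()
            for l, ml in right_labels.items()}

# A point split 0.7/0.3 on the left and 0.5/0.5 on the right yields four lines.
ops = fuzzy_line_opacities({0: 0.7, 1: 0.3}, {0: 0.5, 1: 0.5})
```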
+
+Figure 12: Inter-cluster patterns. Clustering in subspaces ${D1} \times {D2}$ and ${D2} \times {D3}$ gives the exact same result (a, b). Cluster ${C}_{1}$ splits into ${C}_{3}$ and ${C}_{4}$ in ${D3} \times {D4}$ (c). The displayed clusters in (d) have very low similarity with the results from the previous subspace.
+
+Up to this point, our approach only renders the probability that a data point flows from one cluster to the other. Color is a strong visual variable with many distinguishable values, but we have not used it for line rendering yet. Hence, CF-PCPs encode the numerical uncertainty from the clustering algorithm by mapping it to color. Each line between two axes is rendered according to its underlying data point's label in the target cluster (the right axis for reading direction LTR). This means that the color of lines for the same data row may change from one side of an axis to the other, leaving the positional attributes as hints for tracing the line paths. We choose a sequential color scale (from light blue to black) to map the label weights in our examples (see Figure 11). The gradient from light blue to black varies mainly in saturation and value, making it suitable even for viewers with the most common color vision deficiencies.
+
+CF-PCPs apply binning for good readability [38] because smooth color gradients can be problematic for reading exact values [4]. Another benefit of binning becomes evident when creating the density fields (see Section 5.2): additive blending would mix and distort the used colors. Furthermore, it would make multiple overlaid fuzzy lines look like a single certain one. To address these issues, our approach creates a separate density field for each bin in the uncertainty color map. As depicted in Figure 11, CF-PCPs convert them separately into regular images with the RGB values from the color map, while the alpha channel encodes the field's density. In the final render pass, these individual fields are sorted by fuzziness and alpha-blended with the over operator [42].
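The final compositing step can be sketched with the standard Porter-Duff over operator; the layer ordering convention (fuzziest bin first, i.e., drawn underneath) is our reading of the text:

```python
def over(src, dst):
    """Porter-Duff over: composite straight-alpha RGBA tuple src onto dst."""
    sa, da = src[3], dst[3]
    out_a = sa + da * (1 - sa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple((src[c] * sa + dst[c] * da * (1 - sa)) / out_a for c in range(3))
    return rgb + (out_a,)

def composite(layers):
    """Blend per-bin RGBA density layers back to front: the first layer in
    the list is drawn underneath, so pass the fuzziest bin first."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in layers:
        result = over(layer, result)
    return result
```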
+
+§ 6 VISUAL PATTERNS
+
+As discussed earlier, CF-PCPs show streams of 2D clusters across the subspaces of a data set while still displaying the original correlations between dimensions. As such, our visualization aims at providing two levels of granularity: overview and detail. The overview level shows similar 2D subspaces, clusters, and inter-cluster patterns, i.e., how data flows between clusters across different subspaces. The detail level shows intra-cluster patterns, i.e., what is the correlation within pairs of clusters from neighboring dimensions.
+
+Ordering: Our horizontal ordering, coupled with our A* approach, implies strong similarities of clusters in neighboring dimensions because we minimize the total dissimilarity according to our metrics. More specifically, this means that the two subspaces between three neighboring dimensions contain similar clusters.
+
+Clusters: The duplication of axes allows for easy identification of the number of clusters within each 2D subspace. In turn, this helps us identify data classes across different pairs of dimensions. Coupled with soft labeling, this makes it easy to recognize well-separated classes (with a high level of certainty) over fuzzy ones.
+
+Inter-Cluster Patterns: We created Figure 12 as an example with five dimensions that always have two clusters in their subspaces. In Figures 12a and 12b, the flow of data between clusters in ${D1} \times {D2}$ and ${D2} \times {D3}$ shows highly similar subspaces: each cluster on the left is associated with a unique cluster on the right. This one-to-one relationship denotes high or total similarity between subspaces. The situation is different in Figure 12c, where the CF-PCP shows that cluster ${C}_{1}$ from ${D2} \times {D3}$ splits into ${C}_{3}$ and ${C}_{4}$ in ${D3} \times {D4}$ . It would also be correct to say that ${C}_{4}$ splits into ${C}_{1}$ and ${C}_{2}$ , depending on the point of view. Finally, in Figure 12d, we see two very dissimilar subspaces. Here, both clusters ${C}_{3}$ and ${C}_{4}$ split and mix in ${D4} \times {D5}$ .
+
+Figure 13: Intra-cluster patterns. When CF-PCP axes are at the same height (a and c), patterns from positive and negative correlations look very similar to their counterparts in regular PCPs. The line geometry discussed in Section 5.1 preserves these patterns close to the axes, even if they are at different heights (b and d).
+
+More generally, if each cluster in the subspace of two dimensions is uniquely associated with exactly one cluster of the neighboring subspace (i.e., the similarity matrix in Equation (3) contains only ones and zeros), it means that they behave similarly. If, in turn, all clusters are linked with each other (i.e., the similarity matrix contains many values $< 1$ and $> 0$ ), this shows a completely dissimilar clustering behavior between the three dimensions that span the two subspaces. Additionally, a cluster with multiple incoming connections corresponds to a merge or a split, depending on the point of view. Overall, in CF-PCPs, the number of splits or merges between clusters conveys the connectedness of their two subspaces.
+
+Intra-Cluster Patterns: On top of showing the high-level information on the flow of data between clusters across subspaces, our visualization also maintains patterns known from traditional parallel coordinates. Since the duplication retains the entire value range of an axis, our cluster-flow layout shows the distribution of each cluster's data points in the original dimensions. As demonstrated by the clusters labeled ${D3}$ in Figure 1, we can see where the values are positioned with regard to the whole range: larger values are in the upper cluster, smaller ones in the lower. This approach also visualizes whether the data points move closer or further apart, e.g., the bundle from ${D3}$ spreads out in ${D4}$ .
+
+The examples in Figure 13 illustrate how CF-PCPs show data correlations. If two neighboring clusters are aligned (a and c), then positive and negative correlations look very similar to the classic PCP approach. In the case of non-aligned clusters (b and d), our line geometry from Section 5 ensures that the slopes near the axes remain identical to the ones in regular PCPs. Hence, our approach preserves the most important areas of PCPs [37] while adding information on clusters and data flow.
+
+Overall, the visual patterns of our approach can be seen in different places of the visualization: inter-cluster patterns are located in the entire space between two display dimensions, whereas the vicinity of duplicated axes reveals intra-cluster patterns.
+
+Figure 14: Classification of E. coli bacteria according to a tree (a) where nodes correspond to metrics and leaves to inferred classes [24]. All red metrics are necessary to distinguish the blue classes. These are shown as dimensions in the regular PCP (b). The scatter plots show color-coded fuzzy subspace clusters between the last three metrics (c). Our CF-PCP technique (d) combines aspects of both previous visualizations. Its line colors encode certainty from 0% to 100%.
+
+§ 7 EXAMPLES
+
+To demonstrate the applicability of CF-PCPs, we provide examples for typical real-world data sets and compare our results to previous approaches.
+
+§ 7.1 ESCHERICHIA COLI
+
+Horton and Nakai used machine learning to automatically predict localization sites of proteins [24]. Their results included a decision tree that uses multiple measured metrics for Escherichia coli bacteria in order to arrive at a predicted classification. Figure 14a shows this tree and highlights five red nodes that we selected for visualization. They correspond to the metrics that define four classifications (blue). We start our analysis with a regular PCP and density rendering (Figure 14b). It shows that the dimension ${lip}$ is only binary. There seem to be negative correlations between ${alm2}$ and its neighbors, but it is difficult to map the resulting class to a specific influence of each previous dimension. The CF-PCP in Figure 14d uses 2D subspace clustering and shows additional information. The first impression is of mostly green color, which shows low certainty. This is due to a low degree of separation between the clusters. The second impression is of flow between clusters. The observable flow from the CF-PCP matches the tree structure from Figure 14a. Using FuzzyDBSCAN with dimensions ${mcg}$ , ${lip}$ , and ${gvh}$ yields a single cluster and noise each time. The classification tree confirms that they are not sufficient to infer any specific classes. Subspace ${gvh} \times {alm2}$ shows the first split into two actual clusters, just as the tree also arrives at its first class ${imU}$ . Class ${imS}$ is inferred next and the CF-PCP also contains two clusters in ${alm2} \times {aac}$ . Combining the information from ${aac} \times {class}$ reveals four final clusters, which matches the classification count in the highlighted subtree.
+
+The scatter plots with color-coded clusters do not show this data flow between dimensions because the selected colors are not linked between the plots. Even the display of the fuzziness poses a problem: some points belong to multiple clusters. Plotting them multiple times at the same position only retains the color of their last label.
+
+## 7.2 NETPERF
+
+To help understand the characteristics of CF-PCPs, we also compare our technique with two clustering-oriented PCP approaches on the NetPerf data set [49] in Figure 2: Illustrative Parallel Coordinates (IPC) [36] and an edge bundling layout (EBL) for interactive parallel coordinates [39]. In contrast to our proposed pairwise fuzzy method, the clustering in IPCs is calculated and rendered for all dimensions at once (see Figure 2c). Conversely, the EBL method clusters the data separately in each individual dimension (see Figure 2a). Both approaches use color to encode clusters consistently over every dimension. Our selected sample visualizations show four dimensions and three clusters in the NetPerf data set. To increase comparability, we limit ourselves to the four dimensions used in the previously published visualizations of the same data. Dimension ordering is disabled, while the vertical axis order minimizes our line-crossing metric from Section 4.3.
+
+In Figure 2c, Illustrative Parallel Coordinates bundle PCP polylines based on the cluster they belong to. This reduces inter-cluster overdraw and emphasizes visual cluster separation. Line distortion is applied at the expense of clarity of pairwise axis correlation, because the direction of the lines no longer corresponds to the original angle in regular PCPs. Clustering in IPCs works across all visible dimensions and shows overlap between clusters, for instance, in the signal strength dimension. In contrast, EBL [39] reduces clutter in PCPs by bundling edges within pairs of one-dimensional clusters that are computed with a $k$-means algorithm. Similarly to our approach, this method highlights subspace clusters, albeit one-dimensional ones, and allows viewers to follow the flow of bundles between dimensions. As such, the method avoids overdraw of clusters on axes. However, the one-dimensional approach simplifies the visualization to the detriment of details when it comes to diverging clusters. For example, edges in the top cluster of the throughput axis split into three different clusters in signal strength, but without further interaction, users cannot infer the distribution of data points that belong to the highest cluster in the signal strength dimension.
+
+Similarly to both alternative approaches, our cluster-flow layout shows three main clusters per axis pair (see Figure 2b). However, our visualization goes beyond the alternative techniques and is able to show overlap between clusters while allowing the viewer to trace complex cluster patterns. More specifically, our plot shows that low framerates are always associated with low throughput. As with logical implication, the converse does not hold. Similarly to IPCs (Figure 2c), the CF-PCP shows that the main cluster of high throughput is associated with three clusters of high to low values of signal strength. Contrary to the broad bands in EBLs, fine details in CF-PCPs allow the viewer to see that the highest values of signal strength are always associated with the highest values of throughput. The large cluster at the top of the throughput dimension in the EBL of Figure 2a does not support this conclusion.
+
+Figure 15: Visualization of electric power production by primary energy sources in Germany. Clusters in this CF-PCP are vertically ordered by their mean value. Line colors encode certainty from fuzzy clustering, ranging from 0% to 100%.
+
+A commonality with the EBL is the top cluster between signal strength and throughput, which forks into two different clusters when combining the data on signal strength and bandwidth: one with high, the other with low values. In the CF-PCP, we can quickly determine that there are many data points with a high framerate, despite having only medium to low throughput and minimal signal strength and bandwidth. By color coding fuzziness, we are also able to see that there are uncertain assignments, e.g., the third cluster between framerate and throughput is completely fuzzy, and many of its data points are more likely to be noise or to belong to the second cluster. Overall, our implementation avoids the inconvenience of overlap from $n$-dimensional clustering and allows for detailed analysis by tracing individual lines and clusters across dimensions.
+
+## 7.3 ENERGY PRODUCTION
+
+Lastly, we visualize a larger data set of electricity production by primary energy source [13, 44]. It contains 1750 data rows and lists a timestamp, 12 sources of primary energy, as well as international imports and exports as dimensions we can use in the CF-PCP. While the restriction to subspaces generally reduces the number of clusters, this is not the case with the uniformly distributed data in the time dimension: it yields 12 clusters. We removed this dimension to get a cleaner and less cluttered plot for further analysis. We provide further visualizations of the original and reduced data set as supplemental material.
+
+Clustering between pairs of dimensions and separating them vertically can help with the visual detection of dependencies in the data. From Figure 15, we can extract information about the relation between hard and brown coal. On the one hand, low energy production from the latter only occurs when hard coal also has lower values. On the other hand, power production on the left-hand side varies greatly, while brown coal is often at a high output level. Therefore, it seems that between these two, brown coal is burned more constantly while hard coal is preferred for adjustments to changing levels in power production from unsteady sources. In a regular PCP, even with density rendering, the high degree of overplotting would create an almost constant background between both axes and thus could not facilitate these observations.
+
+## 8 CONCLUSION AND FUTURE WORK
+
+We have presented a technique for cluster-centric visualization of high-dimensional data using parallel coordinates. The main idea of our technique is to deliberately duplicate axes for each cluster to show data flow between 2D subspaces. At the same time, this also creates an opportunity to display fuzzy clusters with parallel coordinates. We have described an algorithm to compute the cluster-flow layout, i.e., for ordering dimensions and axes. While the automatic optimization is algorithmically complex, manual interaction with the axes and dimensions is always a viable alternative for data exploration. We analyzed visual correspondence of data patterns and discussed the applicability of our technique using multiple examples. Our layout is an improvement over the traditional approach when clusters are not linearly separable over a single dimension or all dimensions together and the number of clusters in subspaces is small: in this case, we bundle lines over separate axis clones instead of plotting many overlapping lines on a single large axis.
+
+In future work, we want to thoroughly evaluate user performance in controlled studies. In a first step, CF-PCPs should be compared against regular PCPs and scatter plot matrices (SPLOM) [10, 18] separately. A second step would compare them against a combination of both, for example, in coordinated views. It might also be interesting to investigate whether the vertical position of the largest cluster influences user performance.
+
+Extensions of our work might look into alternative optimization targets for the layout, such as aesthetic aspects or faithfulness. Parallel coordinates and scatter plots have their strengths and weaknesses. We would like to integrate scatter plots into our layout, e.g., beneath or between duplicated axes. A progressive clustering and cluster-flow layout would also be of interest for the analysis of dynamic data with live updates. We used CF-PCPs with a reading direction, where each display dimension is used for clustering with its neighbor. Our method could be extended to use both neighbors. Another change could be to cluster all displayed dimensions with a common reference dimension. This would be similar to selecting a row or column from a scatter plot matrix and visualizing it with parallel coordinates. Considering the example with E. coli bacteria, a hierarchical approach would be very beneficial for small numbers of dimensions. Here, we would enforce a reading direction and progressively create subclusters that mimic the classification tree from Horton and Nakai's work [24]. Finally, we would like to expand our approach to the field of interactive model peeling in the context of regression and machine learning.
+
+## ACKNOWLEDGMENTS
+
+Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project ID 251654672 - TRR 161 (projects A01 and B01). We would like to thank Prof. Dr. Bruno Burger (Fraunhofer Institute for Solar Energy Systems ISE) for his cooperation in the analysis of energy production data.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..19981c69a38cd46c87a13f97b8bcc63778df177a
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,323 @@
+# Generation of 3D Human Models and Animations Using Simple Sketches
+
+Alican Akman*
+
+Koç University
+
+Yusuf Sahillioğlu†
+
+Middle East Technical University
+
+T. Metin Sezgin ${}^{ \ddagger }$
+
+Koç University
+
+
+
+Figure 1: Our framework is capable of performing three tasks. (a) It can generate 3D models from given 2D stick figure sketches. (b) It can generate dynamic 3D models, i.e., animations, between given source and target stick figures. (c) It can further extrapolate the produced 3D model sequence by using the learned interpolation vector.
+
+## Abstract
+
+Generating 3D models from 2D images or sketches is a widely studied and important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model. Furthermore, our model learns the embedding space around these models. We demonstrate that our network can generate not only 3D models, but also 3D animations through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.
+
+Index Terms: Computing methodologies—Neural networks; Computing methodologies—Learning latent representations; Computing methodologies—Shape modeling
+
+## 1 INTRODUCTION
+
+3D content is still not as abundant as image and video data. One of the main reasons for this scarcity is the labor that goes into the creation process. Despite the increasing number of talented artists and automated tools, creating 3D content is clearly not as simple and quick as hitting the record button on a phone.
+
+3D content is, on the other hand, as important as image and video data, since it is used in many useful pipelines ranging from 3D printing to 3D gaming and filming.
+
+With these considerations in mind, we aim to make the important task of 3D content creation simpler and faster. To this end, we train a neural network on pairs of 2D stick figures and corresponding 3D models. The utilization of easy-to-sketch yet sufficiently expressive 2D stick figures is a unique feature of our system that makes it work properly even with a moderate amount of training data, e.g., 72 distinct poses of a human model. We focus on human models only, as they are the most prominent in 3D applications.
+
+Given 2D human stick figure sketches, our algorithm is able to produce visually appealing 3D point cloud models without requiring any other input such as a rigged template model. After a simple modification to the network, the system is also capable of producing dynamic 3D models, i.e., animations, between source and target stick figures.
+
+---
+
+*e-mail: alicanakman4@gmail.com
+
+${}^{ \dagger }$ e-mail: ys@ceng.metu.edu.tr
+
+${}^{ \ddagger }$ e-mail: mtsezgin@ku.edu.tr
+
+---
+
+## 2 RELATED WORK
+
+Thanks to their natural expressive power, sketches are a common mode of interaction for various graphics applications [30, 39].
+
+The majority of sketch-based 3D human modeling methods deal with re-posing a rigged input model under the guidance of user sketches. [15] performs this action by transforming imaginary lines running down a character's major bone chains, whereas [32] and [16] propose incremental schemes that pose characters one limb at a time. Methods that use 2D stick figures to pose characters benefit from user annotations [9], specific priors [26], and database queries [8, 45]. Bessmeltsev et al. [5] claim that the ambiguity problems of all these methods can be alleviated by contour-based gesture drawing. The deep regression network of [17] utilizes contour drawing to allow face creation in minutes. Another system, which takes one or more contour drawings as its input, uses deep convolutional neural networks to create a variety of 3D shapes [10]. We avoid the need for a rigged model in the input specification and merely require a user sketch, which, when fed into our network, quickly produces the 3D model in the specified pose. As the network is trained with the SCAPE models [3], our resulting 3D shape looks like the SCAPE actor, i.e., a 30-year-old fit man.
+
+There also exist sketch-based modeling methods for other specific objects such as hairs [13, 37] and plants [2], as well as general-purpose methods that are not restricted to a particular object class. These generic methods, consequently, may not perform as accurately as their object-specific counterparts for those objects, but still demonstrate impressive results. 3D free-form design by the pioneering Teddy system [19] is improved in FiberMesh [29] and SmoothSketch [21] by removing potential cusps and T-junctions with the addition of features such as topological shape reconstruction and hidden contour completion. The recent SymmSketch system [28] exploits a symmetry assumption to generate symmetric 3D free-form models from 2D sketches. To increase the quality of generated 3D models, [42] focuses on piecewise-smooth man-made shapes. Their deep neural network-based system infers a set of parametric surfaces that realize the drawing in 3D. Other solutions to the sketch-based generic 3D model creation problem depend on image guidance [43, 47], snapping one of the predefined primitives to the sketch by fitting its projection to the sketch lines [41], and controlled curvature variation patterns [25].
+
+3D model generation and editing have been extended to 3D scenes as well. Dating back to 1996 [48], this line of work generally indexes 3D model repositories by their 2D projections from multiple views and retrieves the elements that best match the 2D sketch query [23, 40]. Xu et al. [46] extend this idea further by jointly processing the sketched objects: e.g., while a single computer mouse sketch is not easy to recognize, other input sketches such as a computer keyboard may provide useful cues. Sketch-based user interfaces arise in 2D image generation as well [7, 12].
+
+Sketches also arise frequently in shape retrieval applications due to their simplicity and expressiveness. Our focus, the human stick figure sketch, has been used successfully in [35] to retrieve 3D human models from large databases. The prominent example in this domain [11] as well as the convolutional neural network based method [44] report good performance with gesture drawings when it comes to retrieving humans. These three methods, as well as many other sketch-based retrieval methods [24], are in general successful on retrieving non-human models as well.
+
+Although human body types under the same pose can be learned easily with moderate amounts of data through statistical shape modeling [1, 38], this approach requires much greater amounts of input data to learn plausible shape poses under various deformations [18, 33]. In addition to the data issue, this family of methods based on statistical analysis of human body shape operates directly on vertex positions, which brings the disadvantage that rotated body parts have a completely different representation. This issue is addressed with various heuristics, the most successful of which leads to the SMPL model [27] that enables 3D human extraction from 2D images [20, 31]. Our learning-based solution requires a moderate amount of training data, and also alleviates the rotated-part issue by simply populating the input data with 17 other rotated versions of each model.
+
+## 3 OVERVIEW
+
+We have two main objectives: (i) generating 3D models from a single sketched stick figure, and (ii) creating 3D animations between two 3D models generated from 2D source and target sketches. In addition, we present an application that allows interactive modeling using our algorithm.
+
+Our approach is powered by a Variational Autoencoder (VAE) network. We train this network with pairs of 3D and 2D points. The 3D points come from the SCAPE 3D human figure database, while the 2D points are obtained by projecting joint positions of these models onto a 2D plane. Hence, the correspondence information is preserved. Our neural network model ties the 2D and 3D representations through a latent space, which allows us to generate 3D point clouds from 2D stick figures.
+
+The latent space that ties the 2D and 3D representations also acts as a convenient lower dimensional embedding for interpolation and extrapolation. Given a set of target key frames in the form of 2D drawings, we can map them into the lower dimensional embedding space, and then interpolate between them to obtain a number of via points in the embedding space. These via points can then be mapped to 3D through the network to obtain a smooth 3D animation sequence. Furthermore, extrapolation allows extending the animation sequence beyond the target key frames.
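The interpolation and extrapolation step reduces to sampling points on the line through two latent embeddings. A minimal sketch (the function name and frame counts are illustrative, not from the paper):

```python
import numpy as np

def via_points(z_src, z_tgt, n_between=8, n_extra=2):
    """Sample the latent line through two embeddings: t in [0, 1]
    interpolates between the key frames, t > 1 extrapolates past the
    target. Each row would then be decoded to a 3D point cloud."""
    t_max = 1.0 + n_extra / n_between
    ts = np.linspace(0.0, t_max, n_between + n_extra + 1)
    return np.stack([(1.0 - t) * z_src + t * z_tgt for t in ts])
```

Because the VAE regularizes the latent distribution toward a Gaussian, points sampled along this line decode to plausible intermediate poses.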
+
+## 4 METHODOLOGY
+
+Our method aims to generate static and dynamic 3D human models from scratch, that is, we require only the 2D input sketch and no other data such as a rigged 3D model waiting to be re-posed. To make this possible, we learn a model that maps 2D stick figures to 3D models.
+
+### 4.1 Training Data Generation
+
+The original SCAPE dataset consists of 72 key 3D meshes of a human actor [3]. It also contains point-to-point correspondence information between these distinct model poses. We use a simple algorithm to extend this dataset by rotating the existing figures by different angles. First, we determine the axes and angles of rotation with respect to the original coordinate system shown in the wrapped figure. We ignore rotation about the $x$-axis, since stick figures are less likely to be drawn from this view. Next, we rotate the models about the $y$-axis and $z$-axis, in a range of -90 to 90 degrees at intervals of 30 degrees for the $y$-axis, and -45 to 45 degrees at intervals of 45 degrees for the $z$-axis. At the end of this process, we obtain 21 models per key model in the SCAPE dataset.
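This augmentation is easy to reproduce with rotation matrices over the stated angle grids; 7 $y$-angles times 3 $z$-angles gives the 21 poses. A sketch of the idea (the function names are ours; the paper only specifies the angle ranges and steps):

```python
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def augment(points):
    """Rotate an (N, 3) point cloud over the grid: y in -90..90 deg
    (step 30) and z in {-45, 0, 45} deg, i.e., 7 x 3 = 21 poses."""
    out = []
    for ay in np.deg2rad(np.arange(-90, 91, 30)):
        for az in np.deg2rad([-45.0, 0.0, 45.0]):
            out.append(points @ (rot_z(az) @ rot_y(ay)).T)
    return out
```

Applied to all 72 key meshes, this grid yields 72 x 21 = 1512 training models.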
+
+
+
+Since our network is trained with (2D joints, 3D model) pairs, we also extract 2D joints from a 3D model in a particular perspective. We designate 11 essential points that alone can describe a 3D human pose: the forehead, elbows, hands, neck, abdomen, knees, and feet. Since the dataset contains point-to-point correspondence information, we select these 3D points in a pilot mesh from the dataset. We project these joints onto a 2D camera plane (the $x$-$y$ plane in our case) across our entire dataset to create 2D joint projections. In order to be independent of the coordinate system, we represent these points with relative positions $\left( {{\Delta x},{\Delta y}}\right)$. We define a specific order over the 2D points, forming a sketching path with 17 points (some joints are visited twice, but in the reverse direction). This sketching path determines the order of the 2D points that form the input vector of our neural network. The input vector format also handles front/back ambiguity when generating 3D models. We set the first point in the sketching path as the origin, (0, 0), and then set each remaining point relative to the position of the preceding point.
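Converting the ordered sketch-path points to this relative representation is essentially a difference operation; a small sketch (the helper name is ours):

```python
import numpy as np

def to_relative(path_points):
    """Map ordered 2D sketch-path points to offsets (dx, dy) from the
    preceding point; the first point becomes the origin (0, 0)."""
    p = np.asarray(path_points, dtype=float)
    return np.diff(p, axis=0, prepend=p[:1])
```

A cumulative sum of the offsets recovers the original path up to translation, so the representation is independent of where on the canvas the figure was drawn.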
+
+
+
+Figure 2: Our neural network architecture. (a) Train Network: We train this network with (3D point cloud, 2D points of stick figure) pairs during training time. It consists of a VAE: Encoder3D and Decoder consecutively, and another external encoder: Encoder2D. We use regression loss from the output of Encoder2D to the mean vector of the VAE in addition to standard losses of VAE. (b) Test Network: We remove Encoder3D and reparameterization layer from our VAE and use Encoder2D-Decoder as our network in our experiments.
+
+
+
+Figure 3: Screenshots from our user interface. (a) 3D model generation mode. (b) 3D animation generation mode.
+
+### 4.2 Neural Network Architecture
+
+We build upon the work of Girdhar et al. [14] in designing our neural network architecture. Girdhar et al. aim to learn a vector representation that is generative, i.e., it can generate 3D models in the form of voxels, and predictable, i.e., it can be predicted from a 2D image. We utilize variational autoencoders rather than standard autoencoders to build our neural network, as shown in Figure 2. Unlike standard autoencoders, VAEs are generative networks whose latent distribution is regularized during training to be close to a standard Gaussian distribution. This property of VAEs ensures that their latent distribution has a meaningful organization, which allows us to generate novel 3D models by sampling from this distribution. In addition to generating novel 3D models, since our framework is capable of learning the vector space around these 3D models, it enables meaningful transitions between them and extrapolations beyond them.
+
+For our training network, we have two encoders and one decoder: Encoder3D, Encoder2D, and Decoder. Encoder3D and Decoder together serve as a VAE. Our VAE takes a 3D point cloud as input and reconstructs the same model as the output. While our VAE learns to reconstruct 3D models, it forces the latent distribution of the dataset to approximate a normal distribution, which makes the latent space interpolatable. Meanwhile, we use our Encoder2D to predict the latent vectors of corresponding 3D models from 2D points. In order to provide our latent distribution with similarity information between 3D models, we design this partial architecture for our neural network instead of using a VAE that directly generates 3D models from 2D sketches. Thus, our Encoder3D is capable of learning relations between 3D models, rather than 2D sketches, while creating a regularized latent distribution. With this method, we aim to explore the latent space better and generate more meaningful transitions between 3D models.
+
+## Encoder3D-Decoder VAE Network Architecture:
+
+Our VAE takes a ${12500} \times 3$ point cloud of a human 3D model as input. Encoder3D contains two fully connected layers, and its outputs are a mean vector and a deviation vector. We use ReLU as the activation in all internal layers. There is no activation in the output layers. Our Decoder takes the latent vector $z$ as input. It also consists of two fully connected layers with ReLU activations and one fully connected output layer with a tanh activation. It gives a reconstructed point cloud of the input 3D model as the output.
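In numpy, the shape of this encoder-decoder pair can be sketched as follows. The hidden-layer widths and toy dimensions are assumptions for illustration; the paper only fixes the input size (12500 x 3, i.e., 37500 values) and the 256D latent space.

```python
import numpy as np

rng = np.random.default_rng(1)

def fc(n_in, n_out):
    # a plain fully connected layer: (weights, bias)
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

def relu(t):
    return np.maximum(t, 0.0)

def encoder3d(x, hidden, head_mu, head_sd):
    # two ReLU fully connected layers, then linear heads for the
    # mean and deviation vectors (no activation on the outputs)
    h = x
    for W, b in hidden:
        h = relu(h @ W + b)
    return h @ head_mu[0] + head_mu[1], h @ head_sd[0] + head_sd[1]

def decoder(z, hidden, out):
    # two ReLU fully connected layers, then a tanh output layer
    h = z
    for W, b in hidden:
        h = relu(h @ W + b)
    return np.tanh(h @ out[0] + out[1])

# toy dimensions for illustration only
D_IN, D_H, D_Z = 60, 32, 8
enc_hidden = [fc(D_IN, D_H), fc(D_H, D_H)]
mu_head, sd_head = fc(D_H, D_Z), fc(D_H, D_Z)
dec_hidden = [fc(D_Z, D_H), fc(D_H, D_H)]
out_head = fc(D_H, D_IN)

x = rng.normal(size=(5, D_IN))             # a batch of flattened point clouds
mu, sd = encoder3d(x, enc_hidden, mu_head, sd_head)
x_hat = decoder(mu, dec_hidden, out_head)  # test time: decode the mean vector
```

The tanh output keeps reconstructed coordinates in [-1, 1], matching point clouds normalized to that range.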
+
+We train our VAE with the standard KL divergence and reconstruction losses. The total loss for our VAE is given in Equation 1.
+
+$$
+{L}_{\mathrm{{VAE}}} = \alpha \frac{1}{BN}\mathop{\sum }\limits_{{i = 1}}^{B}\mathop{\sum }\limits_{{j = 1}}^{N}{\left( {x}_{ij} - {\widehat{x}}_{ij}\right) }^{2} + {D}_{\mathrm{{KL}}}\left( {q\left( {z \mid x}\right) \parallel p\left( z\right) }\right) \tag{1}
+$$
+
+In Equation 1, $\alpha$ is a parameter to balance between the reconstruction loss and the KL divergence loss, $B$ is the training batch size, $N$ is the 1D dimension of the vectorized point cloud (37500 in our case), ${x}_{ij}$ is the $j$-th dimension of the $i$-th model in the training data batch, ${\widehat{x}}_{ij}$ is the $j$-th dimension of model $i$'s output from our VAE, $z$ is the reparameterized latent vector, $p\left( z\right)$ is the prior probability, $q\left( {z \mid x}\right)$ is the posterior probability, and ${D}_{\mathrm{{KL}}}$ is the KL divergence.
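Equation 1 can be written down directly. In this sketch we assume the deviation output is parameterized as a log-variance (a common VAE choice; the paper does not specify) and average the KL term over the batch:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, alpha=1e5):
    """Equation 1 with q(z|x) = N(mu, diag(exp(log_var))): alpha-weighted
    mean squared reconstruction error plus the closed-form KL divergence
    from the standard normal prior, averaged over the batch."""
    B, N = x.shape
    recon = alpha * np.sum((x - x_hat) ** 2) / (B * N)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var) / B
    return recon + kl
```

With a perfect reconstruction and a latent posterior equal to the prior (mu = 0, log_var = 0), both terms vanish and the loss is zero.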
+
+## Mapping 2D Sketch Points to Latent Vector Space:
+
+Our Encoder2D learns to predict the latent vectors of the 3D models that correspond to 2D sketches, as discussed. It takes ${11} \times 2$ points as input and maps them to a mean vector. It has the same structure as Encoder3D, except for the dimensions of its input and internal fully connected layers. At test time, we use Encoder2D and Decoder as a standard autoencoder: Decoder takes the mean vector output of Encoder2D as its input and generates a 3D point cloud as the output.
+
+We train our Encoder2D with a mean squared error loss to regress the 256D mean vector given by the pre-trained Encoder3D. The loss for our Encoder2D is given in Equation 2.
+
+$$
+{L}_{2\mathrm{D}} = \frac{1}{BZ}\mathop{\sum }\limits_{{i = 1}}^{B}\mathop{\sum }\limits_{{j = 1}}^{Z}{\left( {\mu }_{ij}^{1} - {\mu }_{ij}^{2}\right) }^{2} \tag{2}
+$$
+
+In Equation 2, $B$ is the training batch size, $Z$ is the dimension of the latent space (256 in our case), ${\mu }_{ij}^{1}$ is the $j$-th dimension of the mean vector produced by Encoder3D for the $i$-th model in the training batch, and ${\mu }_{ij}^{2}$ is the $j$-th dimension of the mean vector produced by Encoder2D for the $i$-th model in the training batch.
+
+### 4.3 Training Details
+
+We follow a three-stage process to train our network. (i) We train our variational autoencoder independently with the loss function in Equation (1). We run this stage for 300 epochs. (ii) We train our Encoder2D with the loss function in Equation (2) using the Encoder3D trained in (i). Specifically, we train our Encoder2D to regress the latent vector produced by the pre-trained Encoder3D for the input 3D model. We run this stage for 300 epochs. (iii) We use both losses jointly to finetune our framework. We run this stage for 200 epochs. It takes about two days to complete the whole training session.
+
+For the experiments throughout this paper, we set $\alpha = {10}^{5}$. We set the prior probability over latent variables as a standard normal distribution, $p\left( z\right) = \mathcal{N}\left( {z;0, I}\right)$. We set the learning rate to ${10}^{-3}$ and use the Adam optimizer for training.
+
+### 4.4 User Interface
+
+As explained in prior sections, our method can be used in a variety of applications in different fields, such as character generation and making quick animations. To support these applications, we propose a user interface with facilitative properties that help users apply our method effectively (Figure 3). Our user interface acts as an agent that handles the input-output communication between the user and our neural network. The user can choose whether to generate 3D models from 2D stick figure sketches or to create animations between source and target stick figure sketches. While it takes about one second to process a sketch input for the generation of the corresponding 3D model, this time extends to approximately five seconds for producing an animation between (interpolation) and beyond (extrapolation) the sketch inputs.
+
+Our user interface takes a sketch input from the user via an embedded 2D canvas (right part). Because users sketch along a predetermined path, the collected sketch can be transformed into a map of joint locations that serves as input to our neural network. The user interface then shows the 3D model output produced by the neural network on the embedded 3D canvas (left part). Although our output is a 3D point cloud, for better visualization, our user interface utilizes the mesh information that already exists in the SCAPE dataset. Generated point clouds are consequently combined with this information to display 3D models as surfaces with appropriate rendering mechanisms such as shading and lighting.
+
+Since we trained many neural networks before arriving at the best one with optimal parameters, our user interface showed two different 3D model outputs coming from two different neural networks for comparison during the development phase. The interface in Figure 3 belongs to our release mode, where only the promoted 3D output is displayed.
+
+We construct our user interface so that it is free of unnatural interaction tools such as buttons and menus. The generation process starts as soon as the last stroke is drawn, without forcing the user to hit a start button. We provide brief text that describes the canvas organization. To make the interaction more fluent, we add a simple "scribble" detector to recognize the undo operation.
+
+## 5 EXPERIMENTS AND RESULTS
+
+In this section, we evaluate our framework qualitatively and quantitatively on three tasks: (i) generating 3D models from 2D stick figure sketches, (ii) generating 3D animations between source and target stick figure sketches, and (iii) performing simple extrapolation beyond the stick figures.
+
+### 5.1 Framework Evaluation
+
+Standard Autoencoder Baseline. To quantitatively justify our preference for variational autoencoders, we design a standard autoencoder (AE) baseline with similar dimensions and activation functions (Figure 4). We train this network for 300 epochs with a Euclidean loss between the generated 3D models and the ground-truth ones. We compare the per-vertex reconstruction loss of our VAE network and the standard AE network on a validation set consisting of held-out 3D models. The results in Table 1 show that our VAE network outperforms the AE network, exploiting the latent space more efficiently to enhance the generation quality of novel 3D models.
+
+Latent Dimensions. We evaluate different latent dimensions for our VAE framework using the per-vertex reconstruction loss on the validation set. The results in Table 2 show that using 256 dimensions improves generation quality compared to lower dimensions, while higher dimensions lead to overfitting. We use 256 dimensions for the following experiments.
+
+
+
+Figure 4: Neural network architecture for standard autoencoder baseline.
+
+Table 1: Per-vertex reconstruction error on validation data with different network architectures. VAE represents our method.
+
+| Method | Z (Latent Dim.) | Mean $(\times 10^{-3})$ | Std $(\times 10^{-3})$ |
+| --- | --- | --- | --- |
+| AE | 256 | 2.996 | 1.568 |
+| VAE | 256 | 2.118 | 2.598 |
+
+Table 2: Per-vertex reconstruction error on validation data with different latent dimensions. 256 is our promoted latent space dimension.
+
+| Method | Z (Latent Dim.) | Mean $(\times 10^{-3})$ | Std $(\times 10^{-3})$ |
+| --- | --- | --- | --- |
+| VAE | 128 | 3.887 | 3.802 |
+| VAE | 256 | 2.118 | 2.598 |
+| VAE | 512 | 10.830 | 6.888 |
+
+### 5.2 Generating 3D models from 2D stick figure sketches
+
+To evaluate the generation ability, we feed 2D stick figure sketches, drawn in our user interface, as input to our framework. In Figure 5, we compare generation results for sample sketches with our VAE network and the standard AE baseline. The standard AE network generates models with anatomical flaws (collapsed arm in Figure 5) and deficient body orientations, whereas our VAE network produces plausible models of high quality.
+
+We also compare the generation ability of our framework with a recent study [20] that predicts 3D shape from 2D joints. While our framework outputs 3D point clouds, the method described in [20] fits a statistical body shape model, SMPL [6], to 2D joints. Their learned statistical model is a male with a different body shape than our male model, as shown in the second column of Figure 6. We take the visuals reported in their paper and draw the corresponding stick figures for a fair comparison. Despite being restricted to the poses reported in [20], our method compares favorably. The sitting pose, for instance, which is not quite captured by their statistical model, results in an inaccurate 3D sitting pose, while our 3D model sits successfully (Figure 6 - top row).
+
+We finally compare our 3D human model generation ability to the ground truth by feeding sketches that resemble 3 of the 72 SCAPE poses used during training. Two of these poses were observed during training at the same orientations as we draw them in the first two rows of Figure 7, while the last one is drawn with a new orientation that was not observed during training (Figure 7 - last row). Consequently, for the last row we align the generated model with the ground-truth model via ICP alignment [4] prior to the comparison. We observe almost perfect replications of the SCAPE poses in all cases, as depicted by the colored difference maps.
+
+### 5.3 Generation of Dynamic 3D Models - Interpolation
+
+We test the capability of our network to generate 3D animations between source and target stick figures. To accomplish this, our framework takes two stick figure sketches via our user interface. It then encodes the stick figures into the embedding space to extract their latent variables. We create a list of latent variables by linearly interpolating between the source and target latent variables, and feed this list to our decoder element by element to produce the desired 3D model sequence. In Figure 8 (a), we compare our results with direct linear interpolation between the source and target 3D model outputs. Our results show that interpolation in the embedding space avoids the anatomical errors which usually occur with direct linear interpolation. Further interpolation results can be found in the teaser image and the accompanying video.
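The interpolation procedure described above can be sketched as follows, assuming hypothetical latent vectors already produced by our encoder; in the actual system each interpolated vector would be decoded into a 3D point cloud frame.

```python
import numpy as np

def interpolate_latents(z_src, z_tgt, num_frames):
    """Linearly interpolate between source and target latent vectors.

    Returns num_frames latent vectors, including the endpoints, each of
    which would be fed to the decoder to produce one 3D model frame.
    """
    ts = np.linspace(0.0, 1.0, num_frames)
    return [(1.0 - t) * z_src + t * z_tgt for t in ts]

# Toy 4-dimensional latent vectors (the paper's latent space is 256-D).
z_src = np.array([0.0, 0.0, 0.0, 0.0])
z_tgt = np.array([1.0, 2.0, 3.0, 4.0])
frames = interpolate_latents(z_src, z_tgt, 5)
print(frames[2])  # midpoint: [0.5 1.  1.5 2. ]
```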
+
+### 5.4 Generation of Dynamic 3D models - Extrapolation
+
+Our results show that the interpolation vector in the embedding space reasonably represents the motion in real-world actions. Building on this idea, we exploit the learned interpolation vector to predict future movements. We show our extrapolation results in Figure 8 (b). The figure shows that the learned interpolation vector between two 3D shapes contains meaningful information about movement in 3D. Further extrapolation results can be found in the teaser image and the accompanying video.
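A minimal sketch of this extrapolation idea: the source-to-target interpolation vector is reused as a displacement beyond the target in latent space. Taking the full vector as the per-step displacement is our assumption; the step granularity is not specified in the text.

```python
import numpy as np

def extrapolate_latents(z_src, z_tgt, num_steps):
    """Extend motion beyond the target by reusing the learned
    interpolation vector (z_tgt - z_src) as a per-step displacement."""
    step = z_tgt - z_src  # learned direction of movement in latent space
    return [z_tgt + (k + 1) * step for k in range(num_steps)]

z_src = np.array([0.0, 1.0])
z_tgt = np.array([1.0, 3.0])
future = extrapolate_latents(z_src, z_tgt, 2)
print(future[0])  # [2. 5.]
print(future[1])  # [3. 7.]
```

Each extrapolated latent vector would then be decoded into a 3D model, exactly as in the interpolation case.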
+
+### 5.5 Timing
+
+We finish our evaluations with timing information. The closest work to our framework reports 10 minutes of production time for an amateur user to create 3D faces from 2D sketches [17]. In our system, on the other hand, it takes an amateur user 10 seconds of stick figure drawing and 1 second of algorithm processing to create 3D bodies from 2D sketches. There are two main reasons for this remarkable performance advantage: i) Human bodies can be naturally represented with easy-to-draw stick figures, whereas faces cannot; this simplicity and expressiveness make the learning easier and more efficient. ii) Our deep regression network is significantly less complex than the one employed in [17].
+
+
+
+Figure 5: Generated 3D models in front and side views for given sketches using our VAE network and standard AE network. Flaws are highlighted with red circles.
+
+
+
+Figure 6: Qualitative comparison of our method (columns 3 and 4) with [20] (columns 1 and 2).
+
+
+
+Figure 7: Generation results for (a) two sketches observed in training and (b) an unobserved sketch. The generated models (last column) are colored with respect to the distance map to the ground truth (first column). Distance values are normalized between 0 and 1.
+
+
+
+Figure 8: Qualitative comparison with linear interpolation. (a) Produced 3D model sequences for given sketches using our network are better than linear interpolation results. (b) Extrapolation results for 3D model sequences are given as red models.
+
+Our application runs on a PC with 8 GB RAM and an i7 2.80 GHz CPU. Training of our model was done on a Tesla K40m GPU and took about 2 days (800 epochs total).
+
+## 6 CONCLUSIONS
+
+In this paper, we presented a deep learning based framework that is capable of generating 3D models from 2D stick figure sketches and producing dynamic 3D models between (interpolation) and beyond (extrapolation) two given stick figure sketches. Unlike existing methods, our method requires neither a statistical body shape model nor a rigged 3D character model. We demonstrated that our framework not only gives reasonable results on generation, but also compares favorably with existing approaches. We further supported our framework with a well-designed user interface to make it practical for a variety of applications.
+
+## 7 LIMITATIONS
+
+The proposed system has several limitations that are listed as follows:
+
+- Training of our network depends on the 3D shapes present in the dataset. Our network cannot learn shapes vastly different from existing ones: it produces incompatible 3D models for sketch inputs that are not closely represented in the dataset. For example, our network does not correctly capture the arm orientation for the right model in Figure 9.
+
+- The system can only generate human shapes because of the content of the dataset.
+
+- The system can only produce articulated deformations. Although it can twist and bend the human body in a reasonable way, it cannot, for instance, stretch or resize a body part.
+
+- The system benefits from the one-to-one correspondence information of the dataset. Thus, the quality of results depends on this information.
+
+
+
+Figure 9: Failure cases of our framework. Our framework generates anatomically unnatural results if the input sketch has disproportionate body parts (left), or it is significantly different than the ones used during training (right).
+
+- Since our network takes its input in a specific order, our user interface constrains users to sketch in that order. Users cannot sketch stick figures in an arbitrary order.
+
+## 8 Future Work
+
+Potential future work directions that align with our proposed system are described as follows. The human stick figures used in this paper can be generalized to any other shape via skeletons; an automated skeleton extraction algorithm would enable further training of our network, which in turn would extend our solution to non-human objects. Voxelization of our input data would spare us from the one-to-one correspondence requirement, which in turn would enable our interpolation scheme to morph between different object classes that often lack this type of information, e.g., from cat to giraffe. Automatic one-to-one correspondence computation [22, 34, 36] can also be considered to avoid voxelization. The latent space can be exploited further to obtain a more sophisticated extrapolation algorithm than the basic one we introduced in this paper. Finally, new sketching cues can be designed and incorporated into our network to produce body types different from the one used during training, e.g., training with the fit SCAPE actor and producing an obese actress.
+
+## ACKNOWLEDGMENTS
+
+- This work was supported by TUBITAK under the project EEEAG-115E471.
+
+- This work was supported by European Commission ERA-NET Program, The IMOTION project.
+
+## REFERENCES
+
+[1] B. Allen, B. Curless, and Z. Popović. The space of human body shapes: reconstruction and parameterization from range scans. ACM Transactions on Graphics (TOG), 22(3):587-594, 2003.
+
+[2] F. Anastacio, M. C. Sousa, F. Samavati, and J. A. Jorge. Modeling plant structures using concept sketches. In Proceedings of the 4th international symposium on Non-photorealistic animation and rendering, pp. 105-113. ACM, 2006.
+
+[3] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. Scape: Shape completion and animation of people. ACM Trans. Graph., 24(3):408-416, July 2005. doi: 10.1145/1073204. 1073207
+
+[4] P. Besl and N. McKay. Method for registration of 3-d shapes. Sensor Fusion IV: Control Paradigms and Data Structures, 1611, 1992.
+
+[5] M. Bessmeltsev, N. Vining, and A. Sheffer. Gesture3d: posing 3d characters via gesture drawings. ACM Transactions on Graphics (TOG), 35(6):165, 2016.
+
+[6] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In Computer Vision- ECCV 2016, Lecture Notes in Computer Science. Springer International Publishing, Oct. 2016.
+
+[7] T. Chen, M.-M. Cheng, P. Tan, A. Shamir, and S.-M. Hu. Sketch2photo: Internet image montage. ACM Transactions on Graphics (TOG), 28(5):124, 2009.
+
+[8] M. G. Choi, K. Yang, T. Igarashi, J. Mitani, and J. Lee. Retrieval and visualization of human motion data via stick figures. In Computer Graphics Forum, vol. 31, pp. 2057-2065. Wiley Online Library, 2012.
+
+[9] J. Davis, M. Agrawala, E. Chuang, Z. Popović, and D. Salesin. A sketching interface for articulated figure animation. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 320-328. Eurographics Association, 2003.
+
+[10] J. Delanoy, M. Aubry, P. Isola, A. Efros, and A. Bousseau. 3d sketching using multi-view deep volumetric prediction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(21), may 2018.
+
+[11] M. Eitz, R. Richter, T. Boubekeur, K. Hildebrand, and M. Alexa. Sketch-based shape retrieval. ACM Trans. Graph., 31(4):31-1, 2012.
+
+[12] M. Eitz, R. Richter, K. Hildebrand, T. Boubekeur, and M. Alexa. Pho-tosketcher: interactive sketch-based image synthesis. IEEE Computer Graphics and Applications, 31(6):56-66, 2011.
+
+[13] H. Fu, Y. Wei, C.-L. Tai, and L. Quan. Sketching hairstyles. In Proceedings of the 4th Eurographics workshop on Sketch-based interfaces and modeling, pp. 31-36. ACM, 2007.
+
+[14] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta. Learning a predictable and generative vector representation for objects. CoRR, abs/1603.08637, 2016.
+
+[15] M. Guay, M.-P. Cani, and R. Ronfard. The line of action: an intuitive interface for expressive character posing. ACM Transactions on Graphics (TOG), 32(6):205, 2013.
+
+[16] F. Hahn, F. Mutzel, S. Coros, B. Thomaszewski, M. Nitti, M. Gross, and R. W. Sumner. Sketch abstractions for character posing. In Proceedings of the 14th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 185-191. ACM, 2015.
+
+[17] X. Han, C. Gao, and Y. Yu. Deepsketch2face: A deep learning based sketching system for 3D face and caricature modeling. ACM Transactions on Graphics, 36(4):126, 2017.
+
+[18] N. Hasler, C. Stoll, M. Sunkel, B. Rosenhahn, and H.-P. Seidel. A statistical model of human pose and body shape. Computer Graphics Forum, 28(2):337-346, 2009.
+
+[19] T. Igarashi, S. Matsuoka, and H. Tanaka. Teddy: a sketching interface for 3D freeform design. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pp. 409-416. ACM Press/Addison-Wesley Publishing Co., 1999.
+
+[20] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
+
+[21] O. A. Karpenko and J. F. Hughes. Smoothsketch: 3d free-form shapes from complex sketches. ACM Transactions on Graphics (TOG), 25(3):589-598, 2006.
+
+[22] V. G. Kim, Y. Lipman, and T. Funkhouser. Blended intrinsic maps. ACM Transactions on Graphics (TOG), 30(4):79, 2011.
+
+[23] J. Lee and T. A. Funkhouser. Sketch-based search and composition of 3d models. In SBM, pp. 97-104, 2008.
+
+[24] B. Li, Y. Lu, C. Li, A. Godil, T. Schreck, M. Aono, M. Burtscher, H. Fu, T. Furuya, H. Johan, et al. Shrec'14 track: Extended large scale sketch-based 3d shape retrieval. In Eurographics Workshop on 3D Object Retrieval, 2014.
+
+[25] C. Li, H. Pan, Y. Liu, X. Tong, A. Sheffer, and W. Wang. Bendsketch: modeling freeform surfaces through 2d sketching. ACM Transactions on Graphics, 36(4):125, 2017.
+
+[26] J. Lin, T. Igarashi, J. Mitani, M. Liao, and Y. He. A sketching interface for sitting pose design in the virtual environment. IEEE transactions on visualization and computer graphics, 18(11):1979-1991, 2012.
+
+[27] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. ACM Transactions on Graphics (TOG), 34(6):248, 2015.
+
+[28] Y. Miao, F. Hu, X. Zhang, J. Chen, and R. Pajarola. Symmsketch: Creating symmetric 3D free-form shapes from 2D sketches. Computational Visual Media, 1(1):3-16, 2015.
+
+[29] A. Nealen, T. Igarashi, O. Sorkine, and M. Alexa. Fibermesh: designing freeform surfaces with 3D curves. ACM Transactions on Graphics (TOG), 26(3):41, 2007.
+
+[30] L. Olsen, F. F. Samavati, M. C. Sousa, and J. A. Jorge. Sketch-based modeling: A survey. Computers & Graphics, 33(1):85-103, 2009.
+
+[31] M. Omran, C. Lassner, G. Pons-Moll, P. V. Gehler, and B. Schiele. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In 2018 International Conference on 3D Vision, 3DV 2018, Verona, Italy, September 5-8, 2018, pp. 484-494, 2018. doi: 10.1109/3DV.2018.00062
+
+[32] A. C. Öztireli, I. Baran, T. Popa, B. Dalstein, R. W. Sumner, and M. Gross. Differential blending for expressive sketch-based posing. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 155-164. ACM, 2013.
+
+[33] L. Pishchulin, S. Wuhrer, T. Helten, C. Theobalt, and B. Schiele. Building statistical shape spaces for 3D human modeling. Pattern Recognition, 67:276-286, 2017.
+
+[34] Y. Sahillioğlu. A genetic isometric shape correspondence algorithm with adaptive sampling. ACM Transactions on Graphics (TOG), 37(5):175, 2018.
+
+[35] Y. Sahillioğlu and M. Sezgin. Sketch-based articulated 3d shape retrieval. IEEE computer graphics and applications, 37(6):88-101, 2017.
+
+[36] Y. Sahillioğlu and Y. Yemez. Coarse-to-fine isometric shape correspondence by tracking symmetric flips. Computer Graphics Forum, 32(1):177-189, 2013.
+
+[37] S. Seki and T. Igarashi. Sketch-based 3d hair posing by contour drawings. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, p. 29. ACM, 2017.
+
+[38] H. Seo and N. Magnenat-Thalmann. An example-based approach to human body manipulation. Graphical Models, 66(1):1-23, 2004.
+
+[39] T. M. Sezgin, T. Stahovich, and R. Davis. Sketch based interfaces: early processing for sketch understanding. p. 22, 2006.
+
+[40] H. Shin and T. Igarashi. Magic canvas: interactive design of a 3-d scene prototype from freehand sketches. In Proceedings of Graphics Interface 2007, pp. 63-70. ACM, 2007.
+
+[41] A. Shtof, A. Agathos, Y. Gingold, A. Shamir, and D. Cohen-Or. Geose-mantic snapping for sketch-based modeling. Computer graphics forum, 32(2pt2):245-253, 2013.
+
+[42] D. Smirnov, M. Bessmeltsev, and J. Solomon. Deep sketch-based modeling of man-made shapes. ArXiv, abs/1906.12337, 2019.
+
+[43] S. Tsang, R. Balakrishnan, K. Singh, and A. Ranjan. A suggestive interface for image guided 3d sketching. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 591-598. ACM, 2004.
+
+[44] F. Wang, L. Kang, and Y. Li. Sketch-based 3d shape retrieval using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1875-1883, 2015.
+
+[45] X. K. Wei, J. Chai, et al. Intuitive interactive human-character posing with millions of example poses. IEEE Computer Graphics and Applications, 31(4):78-88, 2011.
+
+[46] K. Xu, K. Chen, H. Fu, W.-L. Sun, and S.-M. Hu. Sketch2scene: sketch-based co-retrieval and co-placement of 3D models. ACM Transactions on Graphics (TOG), 32(4):123, 2013.
+
+[47] C. Yang, D. Sharon, and M. van de Panne. Sketch-based modeling of parameterized objects. In SIGGRAPH Sketches, p. 89. Citeseer, 2005.
+
+[48] R. Zeleznik, K. Herndon, and J. Hughes. Sketch: an interface for sketching 3d scenes. In Proceedings of the SIGGRAPH, 1996.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..117e3e5d11e1e7c6ce1af308822871d0116f30c9
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/ozFu9KivuQ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,242 @@
+§ GENERATION OF 3D HUMAN MODELS AND ANIMATIONS USING SIMPLE SKETCHES
+
+Alican Akman*
+
+Koç University
+
+Yusuf Sahillioğlu†
+
+Middle East Technical University
+
+T. Metin Sezgin‡
+
+Koç University
+
+
+Figure 1: Our framework is capable of performing three tasks. (a) It can generate 3D models from given 2D stick figure sketches. (b) It can generate dynamic 3D models, i.e., animations, between given source and target stick figures. (c) It can further extrapolate the produced 3D model sequence by using the learned interpolation vector.
+
+§ ABSTRACT
+
+Generating 3D models from 2D images or sketches is a widely studied and important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model. Furthermore, our model learns the embedding space around these models. We demonstrate that our network can generate not only 3D models, but also 3D animations through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.
+
+Index Terms: Computing methodologies - Neural networks; Computing methodologies - Learning latent representations; Computing methodologies - Shape modeling
+
+§ 1 INTRODUCTION
+
+3D content is still not as abundant as image and video data. One of the main reasons for this scarcity is the labor that goes into the creation process. Despite the increasing number of talented artists and automated tools, creating 3D content is not as simple and quick as hitting the record button on a phone.
+
+3D content is, on the other hand, as important as image and video data, since it is used in many pipelines ranging from 3D printing to 3D gaming and filming.
+
+With these considerations in mind, we aim to make the important task of 3D content creation simpler and faster. To this end, we train a neural network on pairs of 2D stick figures and corresponding 3D models. The use of easy-to-sketch yet sufficiently expressive 2D stick figures is a unique feature that makes our system work properly even with a moderate amount of training data, e.g., 72 distinct poses of a human model. We focus on human models only, as they are the most prominent in 3D applications.
+
+Given 2D human stick figure sketches, our algorithm produces visually appealing 3D point cloud models without requiring any other input such as a rigged template model. After an easy and seamless tweak to the network, the system is also capable of producing dynamic 3D models, i.e., animations, between source and target stick figures.
+
+*e-mail: alicanakman4@gmail.com
+
+† e-mail: ys@ceng.metu.edu.tr
+
+‡ e-mail: mtsezgin@ku.edu.tr
+
+§ 2 RELATED WORK
+
+Thanks to their natural expressive power, sketches are a common mode of interaction for various graphics applications [30, 39].
+
+The majority of sketch-based 3D human modeling methods deal with re-posing a rigged input model under the guidance of user sketches. [15] performs this action by transforming imaginary lines running down a character's major bone chains, whereas [32] and [16] propose incremental schemes that pose characters one limb at a time. Methods that pose characters with 2D stick figures benefit from user annotations [9], specific priors [26], and database queries [8, 45]. Bessmeltsev et al. [5] claim that the ambiguity problems of all these methods can be alleviated by contour-based gesture drawing. The deep regression network of [17] utilizes contour drawing to allow face creation in minutes. Another system, which takes one or more contour drawings as input, uses deep convolutional neural networks to create a variety of 3D shapes [10]. We avoid the need for a rigged model in the input specification and merely require a user sketch, which, when fed into our network, quickly produces the 3D model in the specified pose. As the network is trained with the SCAPE models [3], our resulting 3D shape looks like the SCAPE actor, i.e., a 30-year-old fit man.
+
+There also exist sketch-based modeling methods for other specific objects such as hair [13, 37] and plants [2], as well as general-purpose methods that are not restricted to a particular object class. These generic methods, consequently, may not perform as accurately as their object-specific counterparts for those objects, but still demonstrate impressive results. 3D free-form design by the pioneering Teddy system [19] is improved in FiberMesh [29] and SmoothSketch [21] by removing potential cusps and T-junctions with the addition of features such as topological shape reconstruction and hidden contour completion. The recent SymmSketch system [28] exploits a symmetry assumption to generate symmetric 3D free-form models from 2D sketches. To increase the quality of generated 3D models, [42] focuses on piecewise-smooth man-made shapes; their deep neural network-based system infers a set of parametric surfaces that realize the drawing in 3D. Other solutions to the sketch-based generic 3D model creation problem depend on image guidance [43, 47], snapping one of the predefined primitives to the sketch by fitting its projection to the sketch lines [41], and controlled curvature variation patterns [25].
+
+3D model generation and editing have been extended to 3D scenes as well. Dating back to 1996 [48], this line of work generally indexes 3D model repositories by their 2D projections from multiple views and retrieves the elements that best match the 2D sketch query [23, 40]. Xu et al. [46] extend this idea further by jointly processing the sketched objects, e.g., while a single computer mouse sketch is not easy to recognize, other input sketches such as a computer keyboard may provide useful cues. Sketch-based user interfaces arise in 2D image generation as well [7, 12].
+
+Sketches also arise frequently in shape retrieval applications due to their simplicity and expressiveness. Our focus, the human stick figure sketch, has been used successfully in [35] to retrieve 3D human models from large databases. The prominent example in this domain [11] as well as the convolutional neural network based method [44] report good performance with gesture drawings when it comes to retrieving humans. These three methods, as well as many other sketch-based retrieval methods [24], are in general successful on retrieving non-human models as well.
+
+Although human body types under the same pose can be learned easily with moderate amounts of data through statistical shape modeling [1, 38], this approach requires much greater amounts of input data to learn plausible shape poses under various deformations [18, 33]. In addition to the data issue, this family of methods, which is based on statistical analysis of human body shape, operates directly on vertex positions, which brings the disadvantage that rotated body parts have a completely different representation. This issue is addressed with various heuristics, the most successful of which leads to the SMPL model [27] that enables 3D human extraction from 2D images [20, 31]. Our learning-based solution requires a moderate amount of training data, and also alleviates the rotated-part issue by simply populating the input data with 17 other rotated versions of each model.
+
+§ 3 OVERVIEW
+
+We have two main objectives: (i) Generating 3D models from a single sketched stick figure, (ii) creating 3D animations between two 3D models, generated from 2D source and target sketches. In addition, we present an application that allows interactive modeling using our algorithm.
+
+Our approach is powered by a Variational Autoencoder (VAE) network. We train this network with pairs of 3D and 2D points. The 3D points come from the SCAPE 3D human figure database, while the 2D points are obtained by projecting the joint positions of these models onto a 2D plane; hence the correspondence information is preserved. Our neural network model ties the 2D and 3D representations through a latent space, which allows us to generate 3D point clouds from 2D stick figures.
+
+The latent space that ties the 2D and 3D representations also acts as a convenient lower dimensional embedding for interpolation and extrapolation. Given a set of target key frames in the form of 2D drawings, we can map them into the lower dimensional embedding space, and then interpolate between them to obtain a number of via points in the embedding space. These via points can then be mapped to 3D through the network to obtain a smooth 3D animation sequence. Furthermore, extrapolation allows extending the animation sequence beyond the target key frames.
+
+§ 4 METHODOLOGY
+
+Our method aims to generate static and dynamic 3D human models from scratch, that is, we require only the 2D input sketch and no other data such as a rigged 3D model waiting to be re-posed. To make this possible, we learn a model that maps 2D stick figures to 3D models.
+
+§ 4.1 TRAINING DATA GENERATION
+
+The original SCAPE dataset consists of 72 key 3D meshes of a human actor [3]. It also contains point-to-point correspondence information between these distinct model poses. We use a simple algorithm to extend this dataset by rotating the existing figures by different angles. First, we determine the axes and angles of rotation with respect to the original coordinate system shown in the wrapped figure. We ignore rotation about the $x$-axis, since stick figures are unlikely to be drawn from this view. Next, we rotate the models about the $y$-axis and $z$-axis, in a range of -90 to 90 degrees at intervals of 30 degrees for the $y$-axis, and -45 to 45 degrees at intervals of 45 degrees for the $z$-axis. At the end of this process, we obtain 21 models per key model in the SCAPE dataset.
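This augmentation step can be sketched as below. The helper names and the order of composing the two rotations are our assumptions; the angle ranges (7 angles about $y$, 3 about $z$, giving 21 copies per key model) follow the text.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the y-axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(deg):
    """Rotation matrix about the z-axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def augment(vertices):
    """Rotate one model by every (y, z) angle pair: y in -90..90 step 30,
    z in -45..45 step 45, x ignored -> 7 x 3 = 21 rotated copies."""
    models = []
    for ay in range(-90, 91, 30):
        for az in range(-45, 46, 45):
            models.append(vertices @ (rot_z(az) @ rot_y(ay)).T)
    return models

model = np.random.rand(12500, 3)  # a SCAPE-like point cloud
print(len(augment(model)))  # 21
```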
+
+
+Since our network is trained with (2D joints, 3D model) pairs, we also extract 2D joints from each 3D model in a particular perspective. We designate 11 essential points that alone can describe a 3D human pose: forehead, elbows, hands, neck, abdomen, knees, and feet. Since the dataset contains point-to-point correspondence information, we select these 3D points once in a pilot mesh from the dataset. We project these joints onto a 2D camera plane (the $x$-$y$ plane in our case) across the entire dataset to create 2D joint projections. In order to be independent of the coordinate system, we represent these points with relative positions $(\Delta x, \Delta y)$. We determine a specific order of 2D points forming a sketching path with 17 points (some joints are visited twice but in reverse direction). This sketching path determines the order of 2D points that form the input vector of our neural network; the input vector format also handles front/back ambiguity while generating 3D models. We set the first point in the sketching path as the origin, (0, 0), and then set each remaining point relative to its preceding point.
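The relative-coordinate encoding can be sketched as follows. `to_relative_path` is a hypothetical helper name, and the 4-point path is a toy stand-in for the actual 17-point sketching path.

```python
import numpy as np

def to_relative_path(points):
    """Convert absolute 2D points along the sketching path into the
    relative representation: the first point becomes the origin (0, 0),
    and every later point is stored as (dx, dy) from its predecessor."""
    pts = np.asarray(points, dtype=float)
    return np.vstack([[0.0, 0.0], np.diff(pts, axis=0)])

# Hypothetical 4-point path (the real path visits the 11 joints along
# a 17-point route, some joints twice in reverse direction).
path = [(2.0, 3.0), (3.0, 3.0), (3.0, 1.0), (2.5, 0.0)]
rel = to_relative_path(path)
print(rel[1])  # [1. 0.]
```

A cumulative sum of the relative pairs recovers the original path up to translation, which is exactly the coordinate-system independence the representation is meant to provide.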
+
+
+Figure 2: Our neural network architecture. (a) Train Network: We train this network with (3D point cloud, 2D points of stick figure) pairs during training time. It consists of a VAE: Encoder3D and Decoder consecutively, and another external encoder: Encoder2D. We use regression loss from the output of Encoder2D to the mean vector of the VAE in addition to standard losses of VAE. (b) Test Network: We remove Encoder3D and reparameterization layer from our VAE and use Encoder2D-Decoder as our network in our experiments.
+
+
+Figure 3: Screenshots from our user interface. (a) 3D model generation mode. (b) 3D animation generation mode.
+
+§ 4.2 NEURAL NETWORK ARCHITECTURE
+
+We build upon the work of Girdhar et al. [14] in designing our neural network architecture. Girdhar et al. aim to learn a vector representation that is generative, i.e., it can generate 3D models in the form of voxels, and predictable, i.e., it can be predicted from a 2D image. We utilize variational autoencoders rather than standard autoencoders to build our neural network, as shown in Figure 2. Unlike standard autoencoders, VAEs are generative networks whose latent distribution is regularized during training to be close to a standard Gaussian distribution. This property ensures that the latent distribution has a meaningful organization, which allows us to generate novel 3D models by sampling from this distribution. In addition, since our framework learns the vector space around these 3D models, it enables meaningful transitions between them and extrapolations beyond them.
+
+Our training network has two encoders and one decoder: Encoder3D, Encoder2D, and Decoder. Encoder3D and Decoder together serve as a VAE. Our VAE takes a 3D point cloud as input and reconstructs the same model as output. While learning to reconstruct 3D models, the VAE forces the latent distribution of the dataset to approximate a normal distribution, which makes the latent space interpolatable. Meanwhile, we use Encoder2D to predict the latent vectors of corresponding 3D models from 2D points. In order to provide our latent distribution with similarity information between 3D models, we design this partial architecture instead of using a VAE that directly generates 3D models from 2D sketches. Thus, our Encoder3D learns relations between 3D models rather than 2D sketches while creating a regularized latent distribution. With this method, we aim to explore the latent space better and generate more meaningful transitions between 3D models.
+
+§ 4.1 ENCODER3D-DECODER VAE NETWORK ARCHITECTURE
+
+Our VAE takes a ${12500} \times 3$ point cloud of a human 3D model as input. Encoder3D contains two fully connected layers, and its outputs are a mean vector and a deviation vector. We use ReLU activations in all internal layers; there is no activation on the output layers. Our Decoder takes the latent vector $z$ as input. It consists of two fully connected layers with ReLU activations and one fully connected output layer with a tanh activation, and it outputs a reconstructed point cloud of the input 3D model.
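
A minimal NumPy sketch of the Encoder3D/Decoder forward pass under the dimensions above may help make the data flow concrete. The hidden-layer width (64) is an assumption purely for illustration, as the internal layer sizes are not reported, and random weights stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# 12500 x 3 point cloud flattened to 37500; 256-D latent space.
# HIDDEN = 64 is an assumed width for this sketch, not a reported value.
N_IN, HIDDEN, Z = 37500, 64, 256

def relu(x):
    return np.maximum(x, 0.0)

def dense(x, n_out):
    # Random weights stand in for trained parameters in this sketch.
    w = rng.normal(0.0, 0.01, size=(x.shape[-1], n_out))
    return x @ w

def encoder3d(x):
    # Two fully connected ReLU layers, then linear mean / deviation heads.
    h = relu(dense(relu(dense(x, HIDDEN)), HIDDEN))
    return dense(h, Z), dense(h, Z)          # mean, log-deviation

def decoder(z):
    # Two fully connected ReLU layers and a tanh output layer.
    h = relu(dense(relu(dense(z, HIDDEN)), HIDDEN))
    return np.tanh(dense(h, N_IN))

x = rng.normal(size=(2, N_IN))               # batch of 2 flattened point clouds
mu, log_sigma = encoder3d(x)
z = mu + np.exp(log_sigma) * rng.normal(size=mu.shape)   # reparameterization
x_hat = decoder(z)
```

The tanh output layer bounds each reconstructed coordinate to $[-1, 1]$, which presumes the training point clouds are normalized to that range.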
+
+We train our VAE with the standard KL divergence and reconstruction losses. The total loss for our VAE is given in Equation 1.
+
+$$
+{L}_{\mathrm{{VAE}}} = \alpha \frac{1}{BN}\mathop{\sum }\limits_{{i = 1}}^{B}\mathop{\sum }\limits_{{j = 1}}^{N}{\left( {x}_{ij} - {\widehat{x}}_{ij}\right) }^{2} + {D}_{\mathrm{{KL}}}\left( {q\left( {z \mid x}\right) \parallel p\left( z\right) }\right) \tag{1}
+$$
+
+In Equation 1, $\alpha$ is a parameter that balances the reconstruction loss and the KL divergence loss, $B$ is the training batch size, $N$ is the dimension of the vectorized point cloud (37500 in our case), ${x}_{ij}$ is the $j$-th dimension of the $i$-th model in the training batch, ${\widehat{x}}_{ij}$ is the $j$-th dimension of model $i$'s output from our VAE, $z$ is the reparameterized latent vector, $p\left( z\right)$ is the prior probability, $q\left( {z \mid x}\right)$ is the posterior probability, and ${D}_{\mathrm{{KL}}}$ is the KL divergence.
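
As a concrete check of Equation 1, here is a small NumPy implementation using the closed-form KL divergence between a diagonal Gaussian and the standard normal. Whether the KL term is summed or averaged over the batch is our assumption, as the text does not specify:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed form of D_KL(N(mu, sigma^2) || N(0, I)) for a diagonal
    # Gaussian posterior, summed over latent dimensions and the batch.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(x, x_hat, mu, log_var, alpha=1e5):
    # alpha * (1/BN) * sum_ij (x_ij - x_hat_ij)^2  ==  alpha * mean(...)
    recon = alpha * np.mean((x - x_hat) ** 2)
    return recon + kl_to_standard_normal(mu, log_var)
```

A perfect reconstruction whose posterior equals the prior ($\mu = 0$, $\sigma = 1$) gives zero loss, as expected.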
+
+§ 4.2 MAPPING 2D SKETCH POINTS TO LATENT VECTOR SPACE
+
+As discussed, our Encoder2D learns to predict the latent vectors of the 3D models that correspond to 2D sketches. It takes ${11} \times 2$ points as input and maps them to a mean vector. It has the same structure as Encoder3D except for the dimensions of its input and internal fully connected layers. At test time, we use Encoder2D and Decoder as a standard autoencoder: Decoder takes the mean vector output of Encoder2D as its input and generates a 3D point cloud as output.
+
+We train our Encoder2D with a mean squared error loss to regress the 256-dimensional mean vector given by the pre-trained Encoder3D. The loss for our Encoder2D is given in Equation 2.
+
+$$
+{L}_{2\mathrm{D}} = \frac{1}{BZ}\mathop{\sum }\limits_{{i = 1}}^{B}\mathop{\sum }\limits_{{j = 1}}^{Z}{\left( {\mu }_{ij}^{1} - {\mu }_{ij}^{2}\right) }^{2} \tag{2}
+$$
+
+In Equation 2, $B$ is the training batch size, $Z$ is the dimension of the latent space (256 in our case), ${\mu }_{ij}^{1}$ is the $j$-th dimension of the mean vector produced by Encoder3D for the $i$-th model in the training batch, and ${\mu }_{ij}^{2}$ is the $j$-th dimension of the mean vector produced by Encoder2D for the $i$-th model in the training batch.
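
Equation 2 is a plain mean squared error between the two mean vectors; in NumPy:

```python
import numpy as np

def encoder2d_loss(mu1, mu2):
    # (1/BZ) * sum_{i,j} (mu1_ij - mu2_ij)^2 over (B, Z) batches of
    # mean vectors from Encoder3D (mu1) and Encoder2D (mu2).
    return np.mean((mu1 - mu2) ** 2)
```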
+
+§ 4.3 TRAINING DETAILS
+
+We follow a three-stage process to train our network. (i) We train our variational autoencoder independently with the loss function in Equation (1), for 300 epochs. (ii) We train our Encoder2D with the loss function in Equation (2) using the Encoder3D trained in (i); specifically, Encoder2D is trained to regress the latent vector produced by the pre-trained Encoder3D for the input 3D model. We run this stage for 300 epochs. (iii) We use both losses jointly to fine-tune our framework, for 200 epochs. The whole training session takes about two days.
+
+For the experiments throughout this paper, we set $\alpha = {10}^{5}$. We set the prior over the latent variables to a standard normal distribution, $p\left( z\right) = \mathcal{N}\left( {z;0,I}\right)$, and the learning rate to ${10}^{-3}$. We use the Adam optimizer for training.
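
The schedule and hyperparameters above can be summarized in code (the stage names are ours, for illustration only):

```python
# Three-stage training schedule from Sec. 4.3 (stage names are illustrative).
STAGES = [
    ("vae_only", 300),        # (i)   L_VAE of Equation 1
    ("encoder2d", 300),       # (ii)  L_2D of Equation 2, Encoder3D pre-trained
    ("joint_finetune", 200),  # (iii) both losses jointly
]

ALPHA = 1e5            # balance between reconstruction and KL losses
LEARNING_RATE = 1e-3   # used with the Adam optimizer

def total_epochs(stages):
    return sum(epochs for _, epochs in stages)
```

The stages sum to 800 epochs, matching the total reported in the timing section.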
+
+§ 4.4 USER INTERFACE
+
+As explained in prior sections, our method can be used in a variety of applications in different fields, such as character generation and quick animation. To support these applications, we propose a user interface that lets users apply our method effectively (Figure 3). The interface acts as an agent that handles input-output communication between the user and our neural network. The user can choose whether to generate 3D models from 2D stick figure sketches or to create animations between source and target stick figure sketches. While it takes about one second to process a sketch input and generate the corresponding 3D model, this increases to approximately five seconds for producing animation between (interpolation) and beyond (extrapolation) sketch inputs.
+
+Our user interface takes a sketch input from the user via an embedded 2D canvas (right part). By requiring users to sketch along a predetermined path, the collected sketch is transformed into a map of joint locations that serves as input to our neural network. The interface then shows the 3D model output produced by the network on the embedded 3D canvas (left part). Although our output is a 3D point cloud, for better visualization the interface uses the mesh information that already exists in the SCAPE dataset: generated point clouds are combined with this information to display 3D models as surfaces with appropriate rendering mechanisms such as shading and lighting.
+
+Since we trained many neural networks before arriving at the best one with optimal parameters, our user interface showed two different 3D model outputs from two different networks for comparison during the development phase. The interface in Figure 3 corresponds to our release mode, where only the promoted 3D output is displayed.
+
+We designed the user interface to be free of extraneous interaction tools such as buttons and menus. The generation process starts as soon as the last stroke is drawn, without forcing the user to press a start button. We provide brief text that describes the canvas organization, and, to make interaction more fluent, we added a simple "scribble" detector to recognize the undo operation.
+
+§ 5 EXPERIMENTS AND RESULTS
+
+In this section, we evaluate our framework qualitatively and quantitatively on three tasks: (i) generating 3D models from 2D stick figure sketches; (ii) generating 3D animations between source and target stick figure sketches; and (iii) performing simple extrapolation beyond stick figures.
+
+§ 5.1 FRAMEWORK EVALUATION
+
+Standard Autoencoder Baseline. To quantitatively justify our choice of variational autoencoders, we designed a standard autoencoder (AE) baseline with similar dimensions and activation functions (Figure 4). We trained this network for 300 epochs with a Euclidean loss between generated 3D models and ground-truth ones. We compare the per-vertex reconstruction loss of our VAE network and the standard AE network on a validation set consisting of held-out 3D models. The results in Table 1 show that our VAE network outperforms the AE network, exploiting the latent space more efficiently to improve the generation quality of novel 3D models.
+
+Latent Dimensions. We evaluate different latent dimensions for our VAE framework using the per-vertex reconstruction loss on the validation set. The results in Table 2 show that 256 dimensions improve generation quality compared to lower dimensions, while higher dimensions lead to overfitting. We therefore use 256 dimensions in the following experiments.
+
+Figure 4: Neural network architecture for standard autoencoder baseline.
+
+Table 1: Per-vertex reconstruction error on validation data with different network architectures. VAE represents our method.
+
+| Method | Z (Latent Dim.) | Mean ($\times 10^{-3}$) | Std ($\times 10^{-3}$) |
+| --- | --- | --- | --- |
+| AE | 256 | 2.996 | 1.568 |
+| VAE | 256 | 2.118 | 2.598 |
+
+Table 2: Per-vertex reconstruction error on validation data with different latent dimensions. 256 is our promoted latent space dimension.
+
+| Method | Z (Latent Dim.) | Mean ($\times 10^{-3}$) | Std ($\times 10^{-3}$) |
+| --- | --- | --- | --- |
+| VAE | 128 | 3.887 | 3.802 |
+| VAE | 256 | 2.118 | 2.598 |
+| VAE | 512 | 10.830 | 6.888 |
+
+§ 5.2 GENERATING 3D MODELS FROM 2D STICK FIGURE SKETCHES
+
+To evaluate generation ability, we feed 2D stick figure sketches, drawn in our user interface, as input to our framework. In Figure 5, we compare generation results for sample sketches between our VAE network and the standard AE baseline. The standard AE network generates models with anatomical flaws (e.g., the collapsed arm in Figure 5) and deficient body orientations, whereas our VAE network produces high-quality, anatomically consistent models.
+
+We also compare the generation ability of our framework with a recent study [20] that predicts 3D shape from 2D joints. While our framework outputs 3D point clouds, the method described in [20] fits a statistical body shape model, SMPL [6], to the 2D joints. Their learned statistical model is a male body whose shape differs from our male model, as shown in the second column of Figure 6. We take the visuals reported in their paper and draw the corresponding stick figures for a fair comparison. Despite being restricted to the poses reported in [20], our method compares favorably. The sitting pose, for instance, is not well captured by their statistical model, resulting in an inaccurate 3D sitting pose, while our 3D model sits correctly (Figure 6, top row).
+
+Finally, we compare our 3D human model generation against the ground truth by feeding sketches that resemble 3 of the 72 SCAPE poses used during training. Two of these poses were observed during training at the same orientations as we draw them in the first two rows of Figure 7, while the last one is drawn with a new orientation that was not observed during training (Figure 7, last row). Consequently, we aligned the generated model with the ground-truth model via ICP [4] for the last row prior to comparison. We observe nearly perfect replications of the SCAPE poses in all cases, as depicted by the colored difference maps.
+
+§ 5.3 GENERATION OF DYNAMIC 3D MODELS - INTERPOLATION
+
+We test the ability of our network to generate 3D animations between source and target stick figures. The framework takes two stick figure sketches via our user interface and encodes them into the embedding space to extract their latent variables. We create a list of latent vectors by linearly interpolating between the source and target latent vectors, and feed this list element by element to our decoder to produce the desired 3D model sequence. In Figure 8 (a), we compare our results with direct linear interpolation between the source and target 3D model outputs. The results show that interpolation in the embedding space avoids the anatomical errors that commonly occur with direct linear interpolation. Further interpolation results can be found in the teaser image and the accompanying video.
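
The interpolation step itself is simple; a sketch in NumPy, where decoding each returned latent vector would yield one frame of the animation:

```python
import numpy as np

def interpolate_latents(z_src, z_tgt, n_steps=10):
    # Linearly interpolate between source and target latent vectors,
    # endpoints included; each element is decoded to one 3D frame.
    ts = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - t) * z_src + t * z_tgt for t in ts]
```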
+
+§ 5.4 GENERATION OF DYNAMIC 3D MODELS - EXTRAPOLATION
+
+Our results show that the interpolation vector in the embedding space reasonably represents the motion of real-world actions. Building on this idea, we exploit the learned interpolation vector to predict future movements. We show extrapolation results in Figure 8 (b): the learned interpolation vector between two 3D shapes carries meaningful information about movement in 3D. Further extrapolation results can be found in the teaser image and the accompanying video.
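
Our basic extrapolation continues along the learned interpolation vector past the target pose; a minimal sketch (the number of steps and step size here are illustrative assumptions):

```python
import numpy as np

def extrapolate_latents(z_src, z_tgt, n_steps=5, step=0.25):
    # Reuse the interpolation vector (z_tgt - z_src) to predict latent
    # vectors of future movement beyond the target pose.
    direction = z_tgt - z_src
    return [z_tgt + (k + 1) * step * direction for k in range(n_steps)]
```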
+
+§ 5.5 TIMING
+
+We conclude our evaluations with timing information. The closest work to our framework reports 10 minutes of production time for an amateur user to create 3D faces from 2D sketches [17]. In our system, by contrast, it takes an amateur user 10 seconds of stick figure drawing and 1 second of algorithm processing to create 3D bodies from 2D sketches. There are two main reasons for this performance advantage: (i) human bodies can be naturally represented with easy-to-draw stick figures, whereas faces cannot, and this simplicity and expressiveness makes learning easier and more efficient; and (ii) our deep regression network is significantly less complex than the one employed in [17].
+
+Figure 5: Generated 3D models in front and side views for given sketches using our VAE network and standard AE network. Flaws are highlighted with red circles.
+
+Figure 6: Qualitative comparison of our method (columns 3 and 4) with [20] (columns 1 and 2).
+
+Figure 7: Generation results for (a) two sketches observed in training and (b) an unobserved sketch. The generated models (last column) are colored with respect to the distance map to the ground truth (first column). Distance values are normalized between 0 and 1.
+
+Figure 8: Qualitative comparison with linear interpolation. (a) Produced 3D model sequences for given sketches using our network are better than linear interpolation results. (b) Extrapolation results for 3D model sequences are given as red models.
+
+Our application runs on a PC with 8 GB of RAM and an i7 2.80 GHz CPU. Training of our model was done on a Tesla K40m GPU and took about 2 days (800 epochs total).
+
+§ 6 CONCLUSIONS
+
+In this paper, we presented a deep learning based framework that is capable of generating $3\mathrm{D}$ models from $2\mathrm{D}$ stick figure sketches and producing dynamic 3D models between (interpolation) and beyond (extrapolation) two given stick figure sketches. Unlike existing methods, our method requires neither a statistical body shape model nor a rigged 3D character model. We demonstrated that our framework not only gives reasonable results on generation, but also compares favorably with existing approaches. We further supported our framework with a well-designed user interface to make it practical for a variety of applications.
+
+§ 7 LIMITATIONS
+
+The proposed system has several limitations that are listed as follows:
+
+ * Training of our network depends on the existing 3D shapes in the dataset. Our network cannot learn shapes vastly different from existing ones: it produces incompatible 3D models for sketch inputs that are not closely represented in the dataset. For example, our network does not correctly capture the arm orientation for the right model in Figure 9.
+
+ * The system can only generate human shapes because of the content of the dataset.
+
+ * The system can only produce articulated deformations. Although it can twist and bend the human body in a reasonable way, it cannot, for instance, stretch or resize a body part.
+
+ * The system benefits from the one-to-one correspondence information of the dataset. Thus, the quality of results depends on this information.
+
+Figure 9: Failure cases of our framework. Our framework generates anatomically unnatural results if the input sketch has disproportionate body parts (left), or it is significantly different than the ones used during training (right).
+
+ * Since our network takes its input in a specific order, our user interface constrains users to sketch in that order. Users cannot sketch stick figures in an arbitrary order.
+
+§ 8 FUTURE WORK
+
+Potential future work directions that align with our proposed system are as follows. The human stick figures used in this paper can be generalized to any other shape via skeletons; an automated skeleton extraction algorithm would enable further training of our network, extending our solution to non-human objects. Voxelizing our input data would remove the one-to-one correspondence requirement, which would in turn enable our interpolation scheme to morph between different object classes that often lack this information, e.g., from a cat to a giraffe. Automatic one-to-one correspondence computation [22],[34],[36] could also be considered to avoid voxelization. The latent space could be exploited further to obtain a more sophisticated extrapolation algorithm than the basic one introduced in this paper. Finally, new sketching cues could be designed and incorporated into our network to produce body types different from the one used during training, e.g., training with the fit SCAPE actor and producing an obese actress.
+
+§ ACKNOWLEDGMENTS
+
+ * This work was supported by TUBITAK under the project EEEAG-115E471.
+
+ * This work was supported by European Commission ERA-NET Program, The IMOTION project.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ec0b38dbd02e2bce58f15d99df86756f667645c7
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,391 @@
+# Learning Multiple Mappings: an Evaluation of Interference, Transfer, and Retention with Chorded Shortcut Buttons
+
+Carl Gutwin ${}^{ \dagger }$ , Carl Hofmeister ${}^{ \dagger }$ , David Ledo ${}^{ \ddagger }$ , and Alix Goguey*
+
+${}^{ \dagger }$ University of Saskatchewan
+
+${}^{ \ddagger }$ University of Calgary
+
+*Grenoble Alpes University
+
+## Abstract
+
+Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, there is still little known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of chorded buttons for augmenting input in mobile interactions.
+
+Keywords: Augmented interaction; modes; chording interfaces.
+
+Index Terms: H.5.m. Information interfaces and presentation (e.g., HCI)
+
+## 1 INTRODUCTION
+
+Mobile touchscreen devices such as smartphones, tablets, and smartwatches are now ubiquitous. The simplicity of touch-based interaction is one of the main reasons for their popularity, but touch interfaces have low expressiveness - they are limited in terms of the number of actions that the user can produce in a single input. As a result, touch interactions often involve additional actions to choose modes or to navigate menu hierarchies.
+
+These limitations of touch input can be addressed by adding new degrees of freedom to touch devices. For example, both Android and iOS devices have augmentations that allow the user to distinguish between scrolling and selecting: Android uses a timeout on the initial touch (i.e., a drag starts with either a short press or a long press), and some iOS devices use pressure-sensitive screens with different pressure levels to specify selection and scrolling [13]. Researchers have also proposed adding a wide variety of new degrees of freedom for touch devices - including multi-touch and bimanual input [16],[32],[42], external buttons and force sensors [44], back-of-device touch [3], sensors for pen state [26] or screen tilt [39],[43], and pressure sensors [8],[9].
+
+Studies have shown these additional degrees of freedom to be effective at increasing the expressive power of interaction with a mobile device. However, previous research has only looked at these new degrees of freedom in single contexts, and as a result we know little about how augmented input will work when it is used in multiple different applications: if an augmented input is mapped to a set of actions that are specific to one application, will there be interference when the same augmentations are mapped to a different set of actions in another application?
+
+To find out how multiple mappings for a new degree of freedom affect learning and usage for one type of augmentation, we carried out a study with a device that provides three buttons on the side of a smartphone case. The buttons can be chorded, giving seven inputs that can be used for discrete commands or transient modes. We developed three different mappings for these chording buttons in three different contexts: shortcuts for a launcher app, colour selections for a drawing app, and modes for a text-editing app. Our study looked at three issues: first, whether learning multiple mappings with the chorded buttons would interfere with learning or accuracy; second, whether people could transfer their learning from training to usage tasks that set the button commands into more complex and realistic activities; and third, whether memory of the multiple mappings would be retained over one week, without any intervening practice.
+
+Our evaluation results provide new insights into the use of chorded buttons as augmented input for mobile devices:
+
+- Learning multiple mappings did not reduce performance - people were able to learn all three mappings well, and actually learned the second and third mappings significantly faster than the first;
+
+- Multiple mappings did not reduce accuracy - people were as accurate on a memory test with three mappings as they were when learning the individual mappings;
+
+- Performance did transfer from training to more realistic usage tasks, although accuracy decreased slightly;
+
+- Retention after one week was initially poor (accuracy was half that of the first session), but performance quickly returned to near-expert levels.
+
+Our work provides two main contributions. First, we show that chorded button input is a successful way to provide a rich input vocabulary that can be used with multiple applications. Second, we provide empirical evidence that mapping augmented input to multiple contexts does not impair performance. Our results provide new evidence that augmented input can realistically increase the expressive power of interactions with mobile devices.
+
+---
+
+carl.gutwin@usask.ca,
+
+carl.hofmeister@usask.ca,
+
+david.ledo@ucalgary.ca
+
+alix.goguey@univ-grenoble-alpes.fr
+
+---
+
+## 2 RELATED WORK
+
+### 2.1 Increasing Interaction Expressiveness
+
+HCI researchers have looked at numerous ways of increasing the richness of interactions with computer systems and have proposed a variety of methods including new theories of interaction, new input devices and new combinations of existing devices, and new ways of organizing interaction. Several researchers have proposed new frameworks and theories of interaction that provide explanatory and generative power for augmented interactions. For example, several conceptual frameworks of input devices and capabilities exist (e.g., [7],[19],[22],[23]), and researchers have proposed new paradigms of interaction (e.g., for eyes-free ubiquitous computing [29] or for post-WIMP devices [4],[5],[24]) that can incorporate different types of augmentation. Cechanowicz and colleagues also created a framework specifically about augmented interactions [9]; they suggest several ways of adding to an interaction, such as adding states to a discrete degree of freedom, adding an entirely new degree of freedom, or "upgrading" a discrete degree of freedom to use continuous input.
+
+### 2.2 Chorded Text Input
+
+Chorded input for text entry has existed for many years (e.g., stenographic machines for court reporters, or Engelbart and English's one-hand keyboard in the NLS system [6]). Researchers have studied several issues in chorded text input, including performance, learning, and device design.
+
+A longitudinal study of training performance with the Twiddler one-handed keyboard [30] showed that users can learn chorded devices and can gain a high level of expertise. The study had 10 participants train for 20 sessions of 20 minutes each; results showed that by session eight, chording was faster than the multi-tap technique, and that by session 20, the mean typing speed was 26 words per minute. Five participants who continued the study to 25 hours of training had a mean typing speed of 47 words per minute [30]. Because this high level of performance requires substantial training time, researchers have also looked at ways of reducing training time for novices. For example, studies have investigated the effects of using different types of phrase sets in training [31], and the effects of feedback [39],[46].
+
+Several chording designs have been demonstrated for text entry on keypad-style mobile phones. The ChordTap system added external buttons to the phone case [44]; to type a letter, the dominant hand selects a number key on the phone (which represents up to four letters) and the non-dominant hand presses the chording keys to select a letter within the group. A study showed that the system was quickly learned, and outperformed multi-tap. A similar system used three of the keypad buttons to select the letter within the group, allowing chorded input without external buttons [33]. The TiltText prototype used the four directions of the phone's tilt sensor to choose a letter within the group [43].
+
+### 2.3 Chorded Input for Touch
+
+Other types of chording have also been seen in multi-touch devices, where combinations of fingers are used to indicate different states. Several researchers have looked at multi-touch input for menu selection. For example, finger-count menus use the number of fingers in two different areas of the touch surface to indicate a category (with the left hand) and an item within that menu (with the right hand) [2]. Two-handed marking menus [26] also divide the screen into left and right sections, with a stroke on the left side selecting the submenu and a stroke on the right selecting the item. Multitouch marking menus [27] combine these two approaches, using vision-based finger identification to increase the number of possible combinations. Each multi-finger chord indicates which menu is to be displayed, and the subsequent direction in which the touch points are moved indicates the item to be selected. HandMarks [41] is a bimanual technique that uses the left hand on the surface as a reference frame for selecting menu items with the right hand. FastTap uses chorded multitouch to switch to menu mode and simultaneously select an item from a grid menu [17].
+
+Other kinds of chording have also been investigated with touch devices. The BiTouch system was a general-purpose technique that allowed touches from the supporting hand to be used in conjunction with touches from the dominant hand [41]. Olafsdottir and Appert [32] developed a taxonomy of multi-touch gestures (including chords), and Ghomi and colleagues [12] developed a training technique for learning multi-touch chords. Finally, multi-finger input on a phone case was also shown by Wilson and Brewster [44], who developed a prototype with pressure sensors under each finger holding the phone; input could involve single fingers or combinations of fingers (with pressure level as an added DoF).
+
+### 2.4 Augmenting Touch with Other Degrees of Freedom
+
+Researchers have also developed touch devices and techniques that involve other types of additional input, including methods for combining pen input with touch [20], using vocal input [18], using the back of the device as well as the front [3], using tilt state with a directional swipe on the touch surface to create an input vocabulary [39], or using a phone's accelerometers to enhance touch and create both enhanced motion gestures (e.g., one-handed zooming by combining touch and tilt), and more expressive touch [21].
+
+### 2.5 Augmented Input for Mode Selection
+
+Enhanced input can also address issues with interface modes, which are often considered to be a cause of errors [35]. Modes can be persistent or "spring loaded" (also called quasimodes [35]); these are active only when the user maintains a physical action (e.g., holding down a key), and this kinesthetic feedback can help people remember that they are in a different mode [37].
+
+When interfaces involve persistent modes, several switching mechanisms have been proposed. For example, Li and colleagues [26] compared several mode-switch mechanisms for changing from inking to gesturing with a stylus: a pen button, a separate button in the non-dominant hand, a timeout, pen pressure, and the eraser end of the pen. They found that a button held in the other hand was fastest and most preferred, and that the timeout was slow and error prone [26]. Other researchers have explored implicit modes that do not require an explicit switch: for example, Chu and colleagues created pressure-sensitive "haptic conviction widgets" that allow either normal or forceful interaction to indicate different levels of confidence [9]. Similarly, some iOS devices use touch pressure to differentiate between actions such as selection and scrolling [13].
+
+Many techniques add new sensing capabilities to create the additional modes - for example, pressure sensors have also been used to enhance mouse input [8] and pen-based widgets [32]; three-state switches were added to a mouse to create pop-through buttons [46]; and height sensing was used to enable different actions in different height layers (e.g., the hover state of a pen [11], or the space above a digital table [38]). Other techniques use existing sensing that is currently unused in an interaction. For example, OrthoZoom exploits the unused horizontal dimension in a standard scrollbar to add zooming (by moving the pointer left or right) [1].
+
+Despite the work that has been carried out in this area, there is relatively little research on issues of interference, transfer, or retention for augmented interfaces - particularly with multiple mappings. The study below provides initial baseline information for these issues - but first, we describe the design and construction of the prototype that we used as the basis for our evaluation.
+
+## 3 CHORDING PHONE CASE PROTOTYPE
+
+In order to test learning, interference, and retention, we developed a prototype that adds three hardware buttons to a custom-printed phone case and makes the state of those buttons available to applications. This design was chosen because it would enable mobile use and provide a large number of states.
+
+### 3.1 Hardware
+
+We designed and 3D-printed a case for an Android Nexus 5 phone, with a compartment mounted on the back to hold the circuit boards from three Flic buttons (Bluetooth LE buttons made by Shortcut Labs). The Flic devices can be configured to perform various predetermined actions when pressed; Shortcut Labs also provides an Android API for using the buttons with custom software.
+
+We removed the PCBs containing the Bluetooth circuitry, and soldered new buttons to the PCBs (Figure 1). The new pushbuttons are momentary switches (i.e., they return to the "off" state when released) with 11 mm-diameter push surfaces and 5 mm travel. We tested several button styles and sizes, in order to find devices that were comfortable to push, that provided tactile feedback about the state of the press, and that were small enough to fit under three fingers. This design allows us to use the Flic Bluetooth events but with buttons that can be mounted closer together. The new buttons do not require any changes to our use of the API.
+
+The prototype is held as a normal phone with the left hand, with the index, middle, and ring fingers placed on the pushbuttons (Figure 2). The pushbuttons are stiff enough that these three fingers can also grip the phone without engaging the buttons; the fifth finger of the left hand can be placed comfortably on the phone case, adding stability when performing chorded button combinations. We also tested a four-button version, but it produced too many erroneous presses because users also needed to grip the phone with those fingers. Finally, we note that the button housing on our prototype was larger than would be required by a commercial device; we estimate that the hardware could easily be built into a housing that is only marginally larger than a typical phone case.
+
+
+
+Figure 1: Chording prototype. Left: button housing. Right: Flic Bluetooth PCBs (inset shows pushbutton).
+
+The prototype worked well in our study sessions. No participant complained of fatigue or difficulty, although we observed a few difficulties matching the timeout period, as described below. The phone case was easy to hold, and the button positions were adequate for the hand sizes of our participants. Pressing the buttons in chords did not appear to cause difficulty for any participant.
+
+### 3.2 Software and Chord Identification
+
+We wrote a simple wrapper library for Android to attach callback functions to the buttons through the Flic API. Android applications can poll the current combined state of the buttons through this library wrapper. Callback functions attached through the wrapper library are put on a short timer, allowing time for multiple buttons to be depressed before executing the callback. In all the applications we created, we assigned a single callback function to all the buttons; this function checks the state of all buttons and determines the appropriate behavior based on the combined state.
+
+Identifying chords represents an interpretation problem for any input system. When only individual buttons can be pressed, software can execute actions as soon as the signal has been received from any button. When chorded input is allowed, however, this method is insufficient, because users do not press all of the buttons of a chord at exactly the same time. Therefore, we implemented a 200 ms wait time (determined through informal testing) before processing input after an initial button signal - after this delay, the callback read the state of all buttons, and reported the combined pattern (i.e., a chord or a single press). Once an input is registered, all buttons must return to their "off" states before another input can be made.
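+
+The chord-identification logic described above can be sketched as a small state machine (an illustrative sketch, not the actual wrapper library; class and method names are ours):
+
+```python
+class ChordRecognizer:
+    """Debounces chorded input: after the first press, waits WAIT_MS,
+    then reports the combined button state as a single chord. After a
+    chord is reported, all buttons must return to "off" before another
+    input is accepted."""
+    WAIT_MS = 200  # wait time determined through informal testing
+
+    def __init__(self):
+        self.state = [False, False, False]  # current state of the 3 buttons
+        self.first_press_at = None          # time of first press in this chord
+        self.locked = False                 # chord reported; waiting for all-off
+
+    def on_button(self, t_ms, button, pressed):
+        """Feed a button up/down event (time in ms)."""
+        self.state[button] = pressed
+        if self.locked and not any(self.state):
+            self.locked = False  # all buttons released; accept new input
+        elif not self.locked and pressed and self.first_press_at is None:
+            self.first_press_at = t_ms
+
+    def tick(self, t_ms):
+        """Call periodically; returns the chord tuple once the wait elapses."""
+        if (not self.locked and self.first_press_at is not None
+                and t_ms - self.first_press_at >= self.WAIT_MS):
+            self.first_press_at = None
+            self.locked = True
+            return tuple(self.state)
+        return None
+```
+
+For example, pressing buttons 1 and 3 within the wait window is reported as the single chord `(True, False, True)` rather than two separate presses.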
+
+With three buttons, the user can specify eight states - but in our applications, we assume that there is a default state that corresponds to having no buttons pressed. This approach prevents the user from having to maintain pressure on the buttons during default operation.
+
+## 4 EVALUATION
+
+We carried out a study of our chording system to investigate our three main research questions:
+
+- Interference: does learning additional mappings with the same buttons reduce learning or accuracy?
+
+- Transfer: is performance maintained when users move from training to usage tasks that set the button commands into more realistic activities?
+
+- Retention: does memory of the command mappings persist over one week (without intervening practice)?
+
+We chose not to compare to a baseline (e.g., GUI-based commands) for two reasons: first, on many small-screen devices screen space is at a premium, and dedicating a part of the screen to interface components is often not a viable alternative; second, command structures stored in menus or ribbons (which do not take additional space) have been shown to be significantly slower than memory-based interfaces in several studies (e.g., [2, 17, 41]).
+
+To test whether learning multiple mappings interferes with learning rate or accuracy, we created a training application to teach three mappings to participants: seven application shortcuts (Apps), seven colors (Colors), and seven text-editing commands (Text) (Table 1). Participants learned the mappings one at a time, as this fits the way that users typically become expert with one application through frequent use, then become expert with another.
+
+To further test interference, after all mappings were learned we gave participants a memory test to determine whether they could remember individual commands from all of the mappings. This test corresponds to scenarios where users switch between applications and must remember different mappings at different times.
+
+To test whether the mappings learned in the training system would transfer, we asked participants to use two of the mappings in simulated usage tasks. Colors were used in a drawing program where participants were asked to draw shapes in a particular line color, and Text commands were used in a simple editor where participants were asked to manipulate text formatting.
+
+To test retention, we recruited a subset of participants to carry out the memory test and the usage tasks a second time, one week after the initial session. Participants were not told that they would have to remember the mappings, and did not practice during the intervening week.
+
+Table 1. Button patterns and mappings.
+
+| Buttons | Pattern | Color | Command | App |
+| --- | --- | --- | --- | --- |
+| 1 | ●○○ | Red | Copy | Contacts |
+| 2 | ○●○ | Green | Paste | Browser |
+| 3 | ○○● | Blue | Italic | Phone |
+| 1+2 | ●●○ | Yellow | Small font | Maps |
+| 1+3 | ●○● | Magenta | Bold | Camera |
+| 2+3 | ○●● | Cyan | Large | E-Mail |
+| 1+2+3 | ●●● | Black | Select | Calendar |
+| 0 | ○○○ | | | |
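+
+The three overloaded mappings in Table 1 amount to a lookup from the active context and the chord pattern to a command (patterns written here as 3-bit strings, button 1 first; a minimal sketch):
+
+```python
+# Per-context command tables following Table 1: the same chord pattern
+# means different things depending on which mapping is active.
+MAPPINGS = {
+    "Colors": {"100": "Red", "010": "Green", "001": "Blue",
+               "110": "Yellow", "101": "Magenta", "011": "Cyan",
+               "111": "Black"},
+    "Text":   {"100": "Copy", "010": "Paste", "001": "Italic",
+               "110": "Small font", "101": "Bold", "011": "Large",
+               "111": "Select"},
+    "Apps":   {"100": "Contacts", "010": "Browser", "001": "Phone",
+               "110": "Maps", "101": "Camera", "011": "E-Mail",
+               "111": "Calendar"},
+}
+
+def command_for(mapping, pattern):
+    """Resolve a chord in the active mapping; '000' is the default
+    (no-buttons) state and issues no command."""
+    if pattern == "000":
+        return None
+    return MAPPINGS[mapping][pattern]
+```
+
+This makes the overloading explicit: `"101"` resolves to Magenta in the Colors context but Bold in the Text context.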
+
+### 4.1 Part 1: Learning Phase
+
+The first part of the study had participants learn and practice the mappings over ten blocks of trials. The system displayed a target item on the screen, and asked the user to press the appropriate button combination for that item (see Figure 2). The system provided feedback about the user's selection (Figure 2, bottom of screen); when the user correctly selected the target item, the system played a short tone and moved on to the next item. Users could consult a dialog that displayed the entire current mapping but had to close the dialog to complete the trial. The system presented each item in the seven-item mapping twice per block (sampling without replacement), and continued for ten blocks. The same system was used for all three mappings, and recorded selection time as well as any incorrect selections (participants continued their attempts until they selected the correct item).
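+
+The block structure (each of the seven items presented twice per block, sampled without replacement within a block, for ten blocks) can be generated as follows (an illustrative sketch; function name is ours):
+
+```python
+import random
+
+def training_sequence(items, blocks=10, reps_per_block=2, seed=None):
+    """Build a trial sequence: each block contains every item
+    reps_per_block times in shuffled order (i.e., drawn without
+    replacement from the block's pool of item presentations)."""
+    rng = random.Random(seed)
+    sequence = []
+    for _ in range(blocks):
+        pool = list(items) * reps_per_block
+        rng.shuffle(pool)
+        sequence.append(pool)
+    return sequence
+```
+
+With the seven-item Colors mapping this yields ten blocks of fourteen trials, each color appearing exactly twice per block.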
+
+
+Figure 2: Training system showing Apps mapping (target at center of screen, selection feedback at bottom). Training for Color and Text mappings was similar.
+
+### 4.2 Part 2: Usage Tasks
+
+We created two applications (Drawing and TextEdit) to test usage of two mappings in larger and more complex activities.
+
+Drawing. The Drawing application (Figure 3) is a simple paint program that uses the chord buttons to control line color (see Table 1). The application treated the button input as a set of spring-loaded modes - that is, the drawing color was set based on the current state of the buttons, and was unset when the buttons were released. For example, to draw a red square as shown in Figure 3, users held down the first button with their left hand and drew the square with their right hand; when the button was released, the system returned to its default mode (where touch was interpreted as panning). If the user released the buttons in the middle of a stroke, the line color changed back to the default grey.
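+
+The spring-loaded behavior can be sketched by re-evaluating the color from the current button pattern at every touch sample, so that releasing the chord mid-stroke reverts the line to grey (a simplified illustration; names are ours):
+
+```python
+DEFAULT = "grey"  # default line color when no chord is held
+COLORS = {"100": "red", "010": "green", "001": "blue", "110": "yellow",
+          "101": "magenta", "011": "cyan", "111": "black"}
+
+def stroke_colors(samples):
+    """Spring-loaded mode: the color of each stroke segment is derived
+    from the button pattern at that moment, not latched at stroke start,
+    so releasing the buttons mid-stroke reverts to the default."""
+    return [COLORS.get(pattern, DEFAULT) for pattern in samples]
+```
+
+A stroke sampled while button 1 is held and then released mid-stroke produces `["red", "red", "grey"]`, matching the behavior described above.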
+
+For each task in the Drawing application, a message on the screen asked the participant to draw a shape in a particular color. Tasks were grouped into blocks of 14, with each color appearing twice per block. A task was judged to be complete when the participant drew at least one line with the correct color (we did not evaluate whether the shape was correct, but participants did not know this). Participants completed three blocks in total.
+
+TextEdit. The TextEdit application asked users to select lines of text and apply manipulations such as cutting and pasting the text, setting the style (bold or italic), and increasing or decreasing the font size. Each of these six manipulations was mapped to a button combination. The seventh action for this mapping was used for selection, implemented as a spring-loaded mode that was combined with a touch action. We mapped selection to the combination of all three buttons since selection had to be carried out frequently - and this combination was easy to remember and execute.
+
+
+Figure 3: Drawing Task
+
+For each TextEdit task, the lines on the screen told the user what manipulations to make to the text (see Figure 4). Each task asked the participant to select some text and then perform a manipulation. There were six manipulations in total, and we combined copy and paste into a single task, so there were five tasks. Tasks were repeated twice per block, and there were four blocks. Tasks were judged to be correct when the correct styling was applied; if the wrong formatting was applied, the user had to press an undo button to reset the text to its original form, and perform the task again.
+
+
+Figure 4: TextEdit task after selecting text.
+
+### 4.3 Part 3: Memory Test
+
+The third stage of the study was a memory test that had a similar interface to the learning system described above. The system gave prompts for each of the 21 commands in random order (Apps, Colors, and Text were mixed together, and sampled without replacement). Participants pressed the button combination for each prompt, but no feedback was given about what was selected, or whether their selection was correct or incorrect. Participants were only allowed to answer once per prompt, and after each response the system moved to the next item.
+
+### 4.4 Part 4: Retention
+
+To determine participants' retention of the mappings, after the study was over we recruited 8 of the 15 participants to return to the lab after one week to carry out the memory test and the usage tasks again (two blocks of each of the drawing and text tasks). Participants were not told during the first study that they would be asked to remember the mappings beyond the study; participants for the one-week follow-up were recruited after the initial data collection was complete. The usage and memory tests operated as described above.
+
+### 4.5 Procedure
+
+After completing an informed consent form and a demographics questionnaire, participants were shown the system and introduced to the use of the external buttons. Participants were randomly assigned to a mapping-order condition (counterbalanced using a Latin square), and then started the training tasks for their first mapping. Participants were told that both time and accuracy would be recorded but were encouraged to use their memory of the chords even if they were not completely sure. After the Color and Text mappings, participants also completed the usage tasks as described above (there was no usage task for the Apps mapping). After completing the learning and tasks with each mapping, participants filled out an effort questionnaire based on the NASA-TLX survey. After all mappings, participants completed the memory test.
+
+For the retention test, participants filled out a second consent form, then completed the memory test with no assistance or reminder of the mappings. They then carried out two blocks of each of the usage tasks (the Drawing and TextEdit apps had the same order as in the first study).
+
+### 4.6 Participants and Apparatus
+
+Fifteen participants were recruited from the local university community (8 women, 7 men; mean age 28.6). All participants were experienced with mobile devices (more than 30 min/day average use). All but one of the participants were right-handed, and the one left-handed participant stated that they were used to operating mobile devices in a right-handed fashion.
+
+The study used the chording prototype described above. Sessions were carried out with participants seated at a desk, holding the phone (and operating the chording buttons) with their left hands. The system recorded all performance data; questionnaire responses were entered on a separate PC.
+
+### 4.7 Design
+
+The main study used two 3 × 10 repeated-measures designs. The first looked at differences across mappings, and used factors Mapping (Apps, Colors, Text) and Block (1-10). The second looked at interference by analyzing differences by the position of the mapping in the overall sequence, and used factors Position (first, second, third) and Block (1-10). For the memory tests, we used a 21 × 3 × 7 design with several planned comparisons; factors were Item (the 21 items shown in Table 1), Pattern (the 7 button patterns shown in column 2 of Table 1), and Mapping (Apps, Colors, Text).
+
+Dependent variables were selection time, accuracy (the proportion of trials where the correct item was chosen on the first try), and errors.
+
+## 5 RESULTS
+
+No outliers were removed from the data. In the following analyses, significant ANOVA results report partial eta-squared (η²) as a measure of effect size (where .01 can be considered small, .06 medium, and > .14 large [11]). We organize the results below around the main issues under investigation: training performance when learning three different mappings, interference effects, transfer performance, and retention after one week.
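+
+Partial eta-squared is computed from the effect and error sums of squares reported by the ANOVA; as a reference sketch:
+
+```python
+def partial_eta_squared(ss_effect, ss_error):
+    """Partial eta-squared: SS_effect / (SS_effect + SS_error).
+    Conventional benchmarks: ~.01 small, ~.06 medium, > .14 large."""
+    return ss_effect / (ss_effect + ss_error)
+```
+
+For example, an effect sum of squares of 2.0 against an error sum of squares of 8.0 gives η² = 0.2, a large effect by these benchmarks.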
+
+### 5.1 Training: Learning Rate, Accuracy, and Effort
+
+Selection time. Overall, mean selection times for the three mappings were 2626 ms for Apps (s.d. 2186), 2271 ms for Colors (s.d. 2095), and 2405 ms for Text (s.d. 2193). A 3 × 10 (Mapping × Block) RM-ANOVA showed no effect of Mapping (F(2,28) = 1.06, p = 0.36). As Figure 5 shows, selection times decreased substantially across trial blocks; ANOVA showed a significant main effect of Block (F(9,126) = 42.3, p < 0.0001, η² = 0.40). There was no interaction (F(18,252) = 0.40, p = 0.98).
+
+Accuracy. Across all blocks, the proportion of trials where the correct item was chosen first was 0.83 for both Apps and Colors and 0.84 for Text (all s.d. 0.37). RM-ANOVA showed no effect of Mapping (F(2,28) = 0.019, p = 0.98), but again showed a significant effect of Block (F(9,126) = 4.42, p < 0.001, η² = 0.09), with no interaction (F(18,252) = 0.71, p = 0.78). Overall error rates (i.e., the total number of selections per trial, since participants continued to make selections until they got the correct answer) for all mappings were low: 0.25 errors per selection for Apps, 0.26 for Colors, and 0.24 for Text.
+
+
+Figure 5: Mean selection time (±s.e.), by block and mapping
+
+
+Figure 6: Mean accuracy (±s.e.) by block and mapping.
+
+During the sessions we identified a hardware-based source of error that reduced accuracy. The 200 ms timeout period in some cases caused errors when people held the buttons for the wrong period of time, when the Bluetooth buttons did not transmit a signal fast enough, or when people formed a chord in stages. This issue contributes to the accuracy rates shown above: our observations indicate that people had the button combinations correctly memorized but had occasional problems in producing the combination with the prototype. We believe that this difficulty can be fixed by adjusting our timeout values and by using an embedded microprocessor to read button states (to avoid Bluetooth delay).
+
+Perceived Effort. Responses to the TLX effort questionnaire are shown in Figure 7; overall, people felt that all of the mappings required relatively low effort. Friedman rank sum tests showed only one difference between mappings - people saw themselves as being less successful with the Apps mapping (χ² = 7, p = 0.030). In the end-of-session questionnaire, 12 participants stated that Colors were easiest to remember (e.g., one person stated "colours were easier to remember" and another said that "memorizing the colours felt the easiest").
+
+
+Figure 7: NASA-TLX responses (±s.e.), by mapping
+
+### 5.2 Interference 1: Effects of Learning New Mappings
+
+To determine whether learning a second and third mapping would be hindered because of the already-memorized mappings, we analysed the performance data based on whether the mapping was the first, second, or third to be learned. Figure 9 shows selection time over ten blocks for the first, second, and third mappings (the specific mapping in each position was counterbalanced).
+
+Selection time. A 3 × 10 RM-ANOVA looked for effects of position in the sequence on selection time. There was a significant main effect of Position (F(2,28) = 19.68, p < 0.0001, η² = 0.22), but as shown in Figure 8, the second and third mappings were actually faster than the first; follow-up t-tests with Bonferroni correction showed that these differences were significant (p < 0.01). The difference was most obvious in the early blocks, indicated by a significant interaction between Position and Block (F(18,252) = 4.63, p < 0.0001, η² = 0.14). These findings suggest that adding new mappings for the same buttons does not impair learning or performance for subsequent mappings.
+
+Accuracy. We carried out a similar 3 × 10 RM-ANOVA to look for effects on accuracy (Figure 9). As with selection time, performance was worse with the first mapping (accuracy 0.80) than with the second and third mappings (0.85 and 0.86). ANOVA showed a main effect of Position on accuracy (F(2,28) = 7.18, p = 0.003, η² = 0.072), but no Position × Block interaction (F(18,252) = 1.20, p = 0.051).
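+
+The Bonferroni correction used for the follow-up t-tests simply scales each p-value by the number of comparisons (a standard textbook sketch, not our analysis code):
+
+```python
+def bonferroni(p_values, alpha=0.05):
+    """Bonferroni correction: multiply each p-value by the number of
+    comparisons (capped at 1.0); a comparison is significant if its
+    adjusted p-value is below alpha."""
+    m = len(p_values)
+    adjusted = [min(1.0, p * m) for p in p_values]
+    significant = [p < alpha for p in adjusted]
+    return adjusted, significant
+```
+
+This controls the family-wise error rate across the pairwise position comparisons at the cost of some statistical power.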
+
+
+Figure 8: Mean selection time (±s.e.), by position and block.
+
+
+Figure 9: Mean accuracy (±s.e.), by position and block.
+
+### 5.3 Interference 2: Memory Test with All Mappings
+
+The third stage of the study was the memory test, in which participants selected each of the 21 commands from all three mappings, in random order. Participants answered once per item with no feedback. The overall accuracy was 0.87 (0.86 for Apps, 0.86 for Colors, and 0.89 for Text); see Figure 10. Note that this accuracy is higher than that seen with the individual mappings during the training sessions.
+
+To determine whether there were differences in accuracy for individual items, mappings, or button patterns, we carried out a 21 × 3 × 7 (Item × Mapping × Pattern) RM-ANOVA. We found no significant effects of any of these factors (for Item, F(20,240) = 1.55, p = 0.067; for Mapping, F(2,24) = 0.43, p = 0.65; for Pattern, F(6,12) = 0.0004, p = 0.99), and no interactions.
+
+Figure 10: Mean accuracy (±s.e.), by item and mapping. Button patterns shown in parentheses for each item.
+
+Figure 10 also shows that multi-finger chords are not substantially different from single-finger button presses. Accuracy was only slightly lower with the (101) and (011) patterns than the single-finger patterns, and the one three-finger pattern (111) had an accuracy above 90% for two of the three mappings.
+
+### 5.4 Transfer: Performance Transfer to Usage Tasks
+
+After learning the Color and Text mappings, participants carried out usage tasks in the TextEdit and Drawing applications. Accuracy results are summarized in Figure 11 (note that the text task had four blocks, and the drawing task had three blocks). Accuracy in the usage tasks ranged from 0.7 to 0.8 across the trial blocks - slightly lower than the 0.8-0.9 accuracy seen in the training stage of the study. It is possible that the additional mental requirements of the task (e.g., determining what to do, working with text, drawing lines) disrupted people's memory of the mappings - but the overall difference was small.
+
+
+Figure 11: Mean accuracy (±s.e.), by task and block.
+
+### 5.5 Retention: Performance After One Week
+
+The one-week follow-up asked eight participants to carry out the memory test and two blocks of each of the usage tasks, to determine whether participants' memory of the mappings had persisted without any intervening practice (or even any knowledge that they would be re-tested). Overall, the follow-up showed that accuracy decayed substantially over one week - but that participants quickly returned to their previous level of expertise once they started the usage tasks. In the memory test, overall accuracy dropped to 0.49 (0.43 for Apps, 0.50 for Colors, and 0.54 for Text), with some individual items as low as 10% accuracy. Only two items maintained accuracy above 0.85 - "Red" and "Copy".
+
+The two usage tasks (Drawing and Text editing) were carried out after the memory test, and in these tasks, participant accuracy recovered considerably. In the first task (immediately after the memory test), participants had an overall 0.60 accuracy in selection; and by the second block, performance rose to accuracy levels similar to the first study (for Drawing, 0.82; for Text, 0.70).
+
+This follow-up study is limited - it did not compare retention when learning only one mapping, so it is impossible to determine whether the decay arose because of the number of overloaded mappings learned in the first study. However, the study shows that retention is an important issue for designers of chorded memory-based techniques. A short training period (less than one hour for all three mappings) appears to be insufficient to ensure retention after one week with no intervening practice; however, in an ecological context users would likely use the chords more regularly. In addition, participants' memory of the mappings was restored after only a few minutes of use.
+
+## 6 Discussion
+
+Our study provides several main findings:
+
+- The training phase showed that people were able to learn all three mappings quickly (performance followed a power law), and were able to achieve 90% accuracy after training;
+
+- Overloading the buttons with three mappings did not cause any problems for participants - the second and third mappings were learned faster than the first, and there was no difference in performance across the position of the learned mappings;
+
+- People were able to successfully transfer their expertise from the training system to the usage tasks - although performance dropped by a small amount;
+
+- Performance in the memory test, which mixed all three mappings together, was very strong, with many of the items remembered at near 100% accuracy;
+
+- Retention over one week without any intervening practice was initially poor (about half the accuracy of the first memory test), but recovered quickly in the usage tasks to near the levels seen in the first sessions.
+
+In the following paragraphs we discuss the reasons for our results, and comment on how our findings can be generalized and used in the design of richer touch-based interactions.
+
+### 6.1 Reasons for results
+
+People's overall success in learning to map twenty-one total items to different button combinations is not particularly surprising - evidence from other domains such as chording keyboards suggests that with practice, humans can be very successful in this type of task. It is more interesting, however, that these 21 items were grouped into three overloaded sets that used the same button combinations - and we did not see any evidence of interference between the mappings. One reason for people's success in learning with multiple button mappings may be that the contexts of the three mappings were quite different, and there were few conceptual overlaps in the semantics of the different groups of items (e.g., colors and application shortcuts are quite different in the ways that they are used). However, there are likely many opportunities in mobile device use where this type of clean separation of semantics occurs - suggesting that overloading can be used to substantially increase the expressive power of limited input.
+
+People were also reasonably successful in using the learned commands in two usage tasks. This success shows that moving to more realistic tasks does not substantially disrupt memories built up during a training exercise - although it is likely that the added complexity of the tasks caused the reduction in accuracy compared to training. The overall difference between the training and usage environments was relatively small, however; more work is needed to examine transfer effects to real-world use.
+
+The relatively low accuracy of our system (between 80% and 90%) is a potential problem for real-world use. The error rate in our device may have been inflated due to the timeout issue described above; further work is needed to investigate ways of reducing this cause of error. We note, however, that there are situations in which techniques with imperfect accuracy can still be effective (such as interfaces for setting non-destructive parameters and states).
+
+Finally, the additional decay in memory of the mappings over one week may simply be an effect of the human memory system - our training period was short, and early studies on "forgetting curves" show approximately similar decay to what we observed. It is likely that in real-world settings, the frequency of mobile phone use would have provided intervening practice that would have maintained users' memory - but this issue requires further study.
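+
+Ebbinghaus-style forgetting curves are commonly modeled as exponential decay of retention with time; a hedged sketch (the stability parameter here is illustrative, not fitted to our data):
+
+```python
+import math
+
+def retention(t_days, stability):
+    """Exponential forgetting curve R = exp(-t / S), where the
+    stability S (larger with more practice) controls how slowly
+    memory strength decays."""
+    return math.exp(-t_days / stability)
+```
+
+With an illustrative stability of about 10 days, retention after one week is exp(-0.7) ≈ 0.50, roughly the drop we observed in the follow-up memory test.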
+
+### 6.2 Limitations and opportunities for future work
+
+The main limitations of our work are in the breadth and realism of our evaluations, and in the physical design of the prototype. First, although our work takes important steps towards ecological validity for augmented input, our study was still a controlled experiment. We designed the study to focus on real-world issues of interference, transfer, and retention, but the realism of our tasks was relatively low. Therefore, a critical area for further work is in testing our system with real tasks in real-world settings. The Flic software allows us to map button inputs to actions in real Android applications, so we plan to have people use the next version of the system over a longer time period and with their own applications.
+
+Second, it is clear that additional engineering work can be done to improve both the ergonomics and the performance of the prototype. The potential errors introduced by our 200 ms timeout are a problem that can likely be solved, but the timeout caused other problems as well - once participants were expert with the commands, some of them felt that holding the combination until the application registered the command slowed them down. Adjusting the timeout and ensuring that the system does not introduce additional errors is an important area for our future work. We also plan to experiment with different invocation mechanisms (e.g., selection on button release) and with the effects of providing feedback as the chord is being produced.
+
+An additional opportunity for future work that was identified by participants during the study is the potential use of external chorded buttons as an eyes-free input mechanism. The button interface allows people to change input modes without shifting their visual attention from the current site of work, and also allows changing tools without needing to move the finger doing the drawing (and without occluding the workspace with menus or toolbars).
+
+## 7 CONCLUSION
+
+Expressiveness is limited in mobile touch interfaces. Many researchers have devised new ways of augmenting these interactions, but there is still little understanding of issues of interference, transfer, and retention for augmented touch interactions, particularly those that use multiple mappings for different usage contexts. To provide information about these issues with one type of augmented system, we developed a phone case with three pushbuttons that can be chorded to provide seven input states. The external buttons can provide quick access to command shortcuts and transient modes, increasing the expressive power of interaction. We carried out a four-part study with the system, and found that people can successfully learn multiple mappings of chorded commands, and can maintain their expertise in more-complex usage tasks (although overall accuracy was low). Retention was also an important issue - accuracy dropped over one week, but was quickly restored after a short period of use. Our work provides new knowledge about the use of chorded input, and shows that adding simple input mechanisms such as chording buttons has promise as a way to augment mobile interactions.
+
+## ACKNOWLEDGMENTS
+
+Funding for this project was provided by the Natural Sciences and Engineering Research Council of Canada, and the Plant Phenotyping and Imaging Research Centre.
+
+## REFERENCES
+
+[1] Appert, C. and Fekete, J., 2006, OrthoZoom scroller: 1D multi-scale navigation. Proceedings of the SIGCHI conference on Human Factors in computing systems 21-30.
+
+[2] Bailly, G., Müller, J., and Lecolinet, E., 2012. Design and evaluation of finger-count interaction: Combining multitouch gestures and menus. International Journal of Human-Computer Studies, 70 (10), 673-689.
+
+[3] Baudisch, P. and Chu, G., 2009, Back-of-device interaction allows creating very small touch devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1923-1932).
+
+[4] Beaudouin-Lafon, M. Instrumental interaction: an interaction model for designing post-WIMP user interfaces. Proc. Human Factors in Computing Systems, 2000, 446-453.
+
+[5] Beaudouin-Lafon, M., 2004, Designing interaction, not interfaces. In Proceedings of the working conference on Advanced visual interfaces, 15-22.
+
+[6] Buxton, W. Chorded Keyboards. Retrieved from www.billbuxton.com/input06.ChordKeyboards.pdf
+
+[7] Buxton, W. 1983. Lexical and pragmatic considerations of input structures. ACM SIGGRAPH Computer Graphics, 17(1), 31-37.
+
+[8] Cechanowicz, J., Irani, P. and Subramanian, S., 2007, Augmenting the mouse with pressure sensitive input. In Proceedings of the SIGCHI conference on Human factors in computing systems, 1385-1394.
+
+[9] Cechanowicz, J. and Gutwin, C., 2009, August. Augmented interactions: A framework for adding expressive power to GUI widgets. IFIP Conference on Human-Computer Interaction, 878-891.
+
+[10] Chu, G., Moscovich, T. and Balakrishnan, R., 2009, Haptic conviction widgets. Proceedings of Graphics Interface 2009, 207-210.
+
+[11] Levine, T.R. and Hullett, C.R., 2002. Eta squared, partial eta squared, and misreporting of effect size in communication research. Human Communication Research, 28(4), 612-625.
+
+[12] Emilien Ghomi, Stéphane Huot, Olivier Bau, Michel Beaudouin-Lafon, and Wendy E. Mackay. 2013. Arpège: learning multitouch chord gestures vocabularies. In Proceedings of the 2013 ACM international conference on Interactive tabletops and surfaces (ITS '13). 209-218.
+
+[13] Alix Goguey, Sylvain Malacria, and Carl Gutwin. 2018. Improving Discoverability and Expert Performance in Force-Sensitive Text Selection for Touch Devices with Mode Gauges. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), paper 477.
+
+[14] Hinckley, K., Baudisch, P., Agrawala, M., and Balakrishnan, R. Hover widgets: using the tracking state to extend the capabilities of pen-operated devices. Proc. ACM Human Factors in Computing Systems, 861-870, 2006.
+
+[15] Guimbretiere, F., Baudisch, P., Sarin, R., Agrawala, M. and Cutrell, E., 2006, The springboard: multiple modes in one spring-loaded control. In Proceedings of the SIGCHI conference on Human Factors in computing systems, 181-190.
+
+[16] Guimbretière, F., and Nguyen, C., 2012. Bimanual marking menu for near surface interactions. In Proc. ACM Human Factors in Computing Systems (CHI 2012), 825-828.
+
+[17] Gutwin, C., Cockburn, A., Scarr, J., Malacria, S. and Olson, S.C., 2014, Faster command selection on tablets with FastTap. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2617-2626.
+
+[18] Harada, S., Saponas, T.S. and Landay, J.A., 2007, November. VoicePen: Augmenting pen input with simultaneous non-linguistic vocalization. In Proceedings of the 9th international conference on Multimodal interfaces, 178-185.
+
+[19] Hartson, H.R., Siochi, A.C. and Hix, D. 1990. The UAN: A user-oriented representation for direct manipulation interface designs. ACM Transactions on Information Systems (TOIS), 8(3), 181-203.
+
+[20] Hinckley, K., Yatani, K., Pahud, M., Coddington, N., Rodenhouse, J., Wilson, A., Benko, H. and Buxton, B., 2010, October. Pen + touch = new tools. Proceedings of the 23rd annual ACM symposium on User interface software and technology, 27-36.
+
+[21] Hinckley, K., and Song, H. Sensor synaesthesia: touch in motion, and motion in touch. Proc. ACM Human Factors in Computing Systems (2011), 801-810.
+
+[22] Hinckley, K., 2002, Input technologies and techniques. In The human-computer interaction handbook. Erlbaum, 2002, 151-168.
+
+[23] Jacob, R.J., Sibert, L.E., McFarlane, D.C. and Mullen Jr, M.P., 1994. Integrality and separability of input devices. ACM Transactions on Computer-Human Interaction (TOCHI), 1(1), 3-26.
+
+[24] Jacob, R.J., Girouard, A., Hirshfield, L.M., Horn, M.S., Shaer, O., Solovey, E.T. and Zigelbaum, J., 2008, April. Reality-based interaction: a framework for post-WIMP interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems, 201- 210.
+
+[25] Kabbash, P., Buxton, W. and Sellen, A., 1994. Two-handed input in a compound task. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 417-423.
+
+[26] Kin, K., Hartmann, B., and Agrawala, M., 2011. Two-handed marking menus for multitouch devices. ACM Transactions on Computer-Human Interaction, 18(3).
+
+[27] Lepinski, G. J., Grossman, T., & Fitzmaurice, G., 2010. The design and evaluation of multitouch marking menus. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2233- 2242.
+
+[28] Li, Y., Hinckley, K., Guan, Z. and Landay, J.A., 2005, Experimental analysis of mode switching techniques in pen-based user interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems, 461-470.
+
+[29] Lumsden, J., and Brewster, S. 2003. A paradigm shift: alternative interaction techniques for use with mobile & wearable devices. In Proceedings of the 2003 conference of the Centre for Advanced Studies on Collaborative research (CASCON '03), 197-210.
+
+[30] Lyons, K., Starner, T., & Gane, B., 2006. Experimental evaluations of the twiddler one-handed chording mobile keyboard. Human-Computer Interaction, 21(4), 343-392.
+
+[31] Lyons, K., Gane, B., Starner, T., & Catrambone, R., 2005. Improving Novice Performance on the Twiddler One-Handed Chording Keyboard. In Proceedings of the International Forum on Applied Wearable Computing, 145-159.
+
+[32] Halla Olafsdottir and Caroline Appert. 2014. Multi-touch gestures for discrete and continuous control. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (AVI'14), 177-184.
+
+[33] Patel, N., Clawson, J., & Starner, T., 2009. A model of two-thumb chording on a phone keypad. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, 8-17.
+
+[34] Ramos, G., Boulos, M. and Balakrishnan, R., 2004, Pressure widgets. Proceedings of the SIGCHI conference on Human factors in computing systems, 487-494.
+
+[35] Raskin, J., The humane interface: new directions for designing interactive systems. Addison-Wesley, 2000.
+
+[36] Saund, E. and Lank, E., 2003, November. Stylus input and editing without prior selection of mode. In Proceedings of the 16th annual ACM symposium on User interface software and technology, 213-216.
+
+[37] Abigail J. Sellen, Gordon P. Kurtenbach, and William A. S. Buxton. 1992. The prevention of mode errors through sensory feedback. Human-Computer Interaction 7, 2 (June 1992), 141-164.
+
+[38] Subramanian, S., Aliakseyeu, D. and Lucero, A., 2006, October. Multi-layer interaction for digital tables. In Proceedings of the 19th annual ACM symposium on User interface software and technology, 269-272.
+
+[39] Tarniceriu, A. D., Dillenbourg, P., & Rimoldi, B., 2013. The effect of feedback on chord typing. In Proceedings of The Seventh International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies.
+
+[40] Tsandilas, T., Appert, C., Bezerianos, A. and Bonnet, D., 2014, April. Coordination of tilt and touch in one- and two-handed use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2001-2004.
+
+[41] Md. Sami Uddin, Carl Gutwin, and Benjamin Lafreniere. 2016. HandMark Menus: Rapid Command Selection and Large Command Sets on Multi-Touch Displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16), 5836-5848.
+
+[42] Julie Wagner, Stéphane Huot, and Wendy Mackay. 2012. BiTouch and BiPad: designing bimanual interaction for hand-held tablets. Proceedings of SIGCHI Human Factors in Computing Systems, 2317- 2326.
+
+[43] Wigdor, D. and Balakrishnan, R., 2003, November. TiltText: using tilt for text input to mobile phones. In Proceedings of the 16th annual ACM symposium on User interface software and technology, 81-90.
+
+[44] Wigdor, D., & Balakrishnan, R., 2004. A comparison of consecutive and concurrent input text entry techniques for mobile phones. In Proceedings of the SIGCHI conference on Human factors in computing systems, 81-88.
+
+[45] Wilson, G., Brewster, S. and Halvey, M., 2013, April. Towards utilising one-handed multi-digit pressure input. In CHI'13 Extended Abstracts on Human Factors in Computing Systems 1317-1322.
+
+[46] Wu, F. G., & Shi, W. Z., 2018. The input efficiency of chord keyboards. International Journal of Occupational Safety and Ergonomics, 24(4), 638-645.
+
+[47] Zeleznik, R., Miller, T. and Forsberg, A., 2001, November. Pop through mouse button interactions. In Proceedings of the 14th annual ACM symposium on User interface software and technology, 195-196.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4596f0775472fa6ee7b9e4652c6daeb0c47b2450
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qAkUC5RKBD/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,319 @@
+§ LEARNING MULTIPLE MAPPINGS: AN EVALUATION OF INTERFERENCE, TRANSFER, AND RETENTION WITH CHORDED SHORTCUT BUTTONS
+
+Carl Gutwin ${}^{ \dagger }$ , Carl Hofmeister ${}^{ \dagger }$ , David Ledo ${}^{ \ddagger }$ , and Alix Goguey*
+
+${}^{ \dagger }$ University of Saskatchewan
+
+${}^{ \ddagger }$ University of Calgary
+
+*Grenoble Alpes University
+
+§ ABSTRACT
+
+Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, there is still little known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of chorded buttons for augmenting input in mobile interactions.
+
+Keywords: Augmented interaction; modes; chording interfaces.
+
+Index Terms: H.5.m. Information interfaces and presentation (e.g., HCI)
+
+§ 1 INTRODUCTION
+
+Mobile touchscreen devices such as smartphones, tablets, and smartwatches are now ubiquitous. The simplicity of touch-based interaction is one of the main reasons for their popularity, but touch interfaces have low expressiveness - they are limited in terms of the number of actions that the user can produce in a single input. As a result, touch interactions often involve additional actions to choose modes or to navigate menu hierarchies.
+
+These limitations on touch input can be addressed by adding new degrees of freedom to touch devices. For example, both Android and iOS devices have augmentations that allow the user to specify the difference between scrolling and selecting: Android uses a timeout on the initial touch (i.e., a drag starts with either a short press or a long press), and some iOS devices use pressure-sensitive screens that use different pressure levels to specify selection and scrolling [13]. Researchers have also proposed adding a wide variety of new degrees of freedom for touch devices - including multi-touch and bimanual input [16],[32],[42], external buttons and force sensors [44], back-of-device touch [3], sensors for pen state [14] or screen tilt [40],[43], and pressure sensors [8],[9].
+
+Studies have shown these additional degrees of freedom to be effective at increasing the expressive power of interaction with a mobile device. However, previous research has only looked at these new degrees of freedom in single contexts, and as a result we know little about how augmented input will work when it is used in multiple different applications: if an augmented input is mapped to a set of actions that are specific to one application, will there be interference when the same augmentations are mapped to a different set of actions in another application?
+
+To find out how multiple mappings for a new degree of freedom affect learning and usage for one type of augmentation, we carried out a study with a device that provides three buttons on the side of a smartphone case. The buttons can be chorded, giving seven inputs that can be used for discrete commands or transient modes. We developed three different mappings for these chording buttons for three different contexts: shortcuts for a launcher app, colour selections for a drawing app, and modes for a text-editing app. Our study looked at three issues: first, whether learning multiple mappings with the chorded buttons would interfere with learning or accuracy; second, whether people could transfer their learning from training to usage tasks that set the button commands into more complex and realistic activities; and third, whether memory of the multiple mappings would be retained over one week, without any intervening practice.
+
+Our evaluation results provide new insights into the use of chorded buttons as augmented input for mobile devices:
+
+ * Learning multiple mappings did not reduce performance - people were able to learn all three mappings well, and actually learned the second and third mappings significantly faster than the first;
+
+ * Multiple mappings did not reduce accuracy - people were as accurate on a memory test with three mappings as they were when learning the individual mappings;
+
+ * Performance did transfer from training to more realistic usage tasks, although accuracy decreased slightly;
+
+ * Retention after one week was initially poor (accuracy was half that of the first session), but performance quickly returned to near-expert levels.
+
+Our work provides two main contributions. First, we show that chorded button input is a successful way to provide a rich input vocabulary that can be used with multiple applications. Second, we provide empirical evidence that mapping augmented input to multiple contexts does not impair performance. Our results provide new evidence that augmented input can realistically increase the expressive power of interactions with mobile devices.
+
+carl.gutwin@usask.ca, carl.hofmeister@usask.ca, david.ledo@ucalgary.ca, alix.goguey@univ-grenoble-alpes.fr
+
+§ 2 RELATED WORK
+
+§ 2.1 INCREASING INTERACTION EXPRESSIVENESS
+
+HCI researchers have looked at numerous ways of increasing the richness of interactions with computer systems and have proposed a variety of methods including new theories of interaction, new input devices and new combinations of existing devices, and new ways of organizing interaction. Several researchers have proposed new frameworks and theories of interaction that provide explanatory and generative power for augmented interactions. For example, several conceptual frameworks of input devices and capabilities exist (e.g., [7],[19],[22],[23]), and researchers have proposed new paradigms of interaction (e.g., for eyes-free ubiquitous computing [29] or for post-WIMP devices [4],[5],[24]) that can incorporate different types of augmentation. Cechanowicz and colleagues also created a framework specifically about augmented interactions [9]; they suggest several ways of adding to an interaction, such as adding states to a discrete degree of freedom, adding an entirely new degree of freedom, or "upgrading" a discrete degree of freedom to use continuous input.
+
+§ 2.2 CHORDED TEXT INPUT
+
+Chorded input for text entry has existed for many years (e.g., stenographic machines for court reporters, or Engelbart and English's one-hand keyboard in the NLS system [6]). Researchers have studied several issues in chorded text input, including performance, learning, and device design.
+
+A longitudinal study of training performance with the Twiddler one-handed keyboard [30] showed that users can learn chorded devices and can gain a high level of expertise. The study had 10 participants train for 20 sessions of 20 minutes each; results showed that by session eight, chording was faster than the multi-tap technique, and that by session 20, the mean typing speed was 26 words per minute. Five participants who continued the study to 25 hours of training had a mean typing speed of 47 words per minute [30]. Because this high level of performance requires substantial training time, researchers have also looked at ways of reducing training time for novices. For example, studies have investigated the effects of using different types of phrase sets in training [31], and the effects of feedback [39],[46].
+
+Several chording designs have been demonstrated for text entry on keypad-style mobile phones. The ChordTap system added external buttons to the phone case [44]; to type a letter, the dominant hand selects a number key on the phone (which represents up to four letters) and the non-dominant hand presses the chording keys to select a letter within the group. A study showed that the system was quickly learned, and outperformed multi-tap. A similar system used three of the keypad buttons to select the letter within the group, allowing chorded input without external buttons [33]. The TiltText prototype used the four directions of the phone's tilt sensor to choose a letter within the group [43].
+
+§ 2.3 CHORDED INPUT FOR TOUCH
+
+Other types of chording have also been seen in multi-touch devices, where combinations of fingers are used to indicate different states. Several researchers have looked at multi-touch input for menu selection. For example, finger-count menus use the number of fingers in two different areas of the touch surface to indicate a category (with the left hand) and an item within that menu (with the right hand) [2]. Two-handed marking menus [26] also divide the screen into left and right sections, with a stroke on the left side selecting the submenu and a stroke on the right selecting the item. Multitouch marking menus [27] combine these two approaches, using vision-based finger identification to increase the number of possible combinations. Each multi-finger chord indicates which menu is to be displayed, and the subsequent direction in which the touch points are moved indicates the item to be selected. HandMarks [41] is a bimanual technique that uses the left hand on the surface as a reference frame for selecting menu items with the right hand. FastTap uses chorded multitouch to switch to menu mode and simultaneously select an item from a grid menu [17].
+
+Other kinds of chording have also been investigated with touch devices. The BiTouch system was a general-purpose technique that allowed touches from the supporting hand to be used in conjunction with touches from the dominant hand [42]. Olafsdottir and Appert [32] developed a taxonomy of multi-touch gestures (including chords), and Ghomi and colleagues [12] developed a training technique for learning multi-touch chords. Finally, multi-finger input on a phone case was also shown by Wilson and colleagues [45], who developed a prototype with pressure sensors under each finger holding the phone; input could involve single fingers or combinations of fingers (with pressure level as an added DoF).
+
+§ 2.4 AUGMENTING TOUCH WITH OTHER DEGREES OF FREEDOM
+
+Researchers have also developed touch devices and techniques that involve other types of additional input, including methods for combining pen input with touch [20], using vocal input [18], using the back of the device as well as the front [3], using tilt state with a directional swipe on the touch surface to create an input vocabulary [40], or using a phone's accelerometers to enhance touch and create both enhanced motion gestures (e.g., one-handed zooming by combining touch and tilt) and more expressive touch [21].
+
+§ 2.5 AUGMENTED INPUT FOR MODE SELECTION
+
+Enhanced input can also address issues with interface modes, which are often considered to be a cause of errors [35]. Modes can be persistent or "spring loaded" (also called quasimodes [35]); these are active only when the user maintains a physical action (e.g., holding down a key), and this kinesthetic feedback can help people remember that they are in a different mode [37].
+
+When interfaces involve persistent modes, several means for switching have been proposed. For example, Li and colleagues [28] compared several mode-switch mechanisms for changing from inking to gesturing with a stylus: a pen button, a separate button in the non-dominant hand, a timeout, pen pressure, and the eraser end of the pen. They found that a button held in the other hand was fastest and most preferred, and that the timeout was slow and error prone [28]. Other researchers have explored implicit modes that do not require an explicit switch: for example, Chu and colleagues created pressure-sensitive "haptic conviction widgets" that allow either normal or forceful interaction to indicate different levels of confidence [10]. Similarly, some iOS devices use touch pressure to differentiate between actions such as selection and scrolling [13].
+
+Many techniques add new sensing capabilities to create the additional modes - for example, pressure sensors have also been used to enhance mouse input [8] and pen-based widgets [34]; three-state switches were added to a mouse to create pop-through buttons [47]; and height sensing was used to enable different actions in different height layers (e.g., the hover state of a pen [14], or the space above a digital table [38]). Other techniques use existing sensing that is currently unused in an interaction. For example, OrthoZoom exploits the unused horizontal dimension in a standard scrollbar to add zooming (by moving the pointer left or right) [1].
+
+Despite the work that has been carried out in this area, there is relatively little research on issues of interference, transfer, or retention for augmented interfaces - particularly with multiple mappings. The study below provides initial baseline information for these issues - but first, we describe the design and construction of the prototype that we used as the basis for our evaluation.
+
+§ 3 CHORDING PHONE CASE PROTOTYPE
+
+In order to test learning, interference, and retention, we developed a prototype that adds three hardware buttons to a custom-printed phone case and makes the state of those buttons available to applications. This design was chosen because it would enable mobile use and provide a large number of states.
+
+§ 3.1 HARDWARE
+
+We designed and 3D-printed a case for an Android Nexus 5 phone, with a compartment mounted on the back to hold the circuit boards from three Flic buttons (Bluetooth LE buttons made by Shortcut Labs). The Flic devices can be configured to perform various predetermined actions when pressed; Shortcut Labs also provides an Android API for using the buttons with custom software.
+
+We removed the PCBs containing the Bluetooth circuitry, and soldered new buttons to the PCBs (Figure 1). The new pushbuttons are momentary switches (i.e., they return to the "off" state when released) with 11 mm-diameter push surfaces and 5 mm travel. We tested several button styles and sizes, in order to find devices that were comfortable to push, that provided tactile feedback about the state of the press, and that were small enough to fit under three fingers. This design allows us to use the Flic Bluetooth events but with buttons that can be mounted closer together. The new buttons do not require any changes to our use of the API.
+
+The prototype is held as a normal phone with the left hand, with the index, middle, and ring fingers placed on the pushbuttons (Figure 2). The pushbuttons are stiff enough that these three fingers can also grip the phone without engaging the buttons; the fifth finger of the left hand can be placed comfortably on the phone case, adding stability when performing chorded button combinations. We also tested a four-button version, but there were too many erroneous presses because of the user needing to grip the phone. Finally, we note that the button housing on our prototype was larger than would be required by a commercial device; we estimate that the hardware could easily be built into a housing that is only marginally larger than a typical phone case.
+
+
+Figure 1: Chording prototype. Left: button housing. Right: Flic Bluetooth PCBs (inset shows pushbutton).
+
+The prototype worked well in our study sessions. The phone case was easy to hold, and the button positions were adequate for the hand sizes of our participants. No participant complained of fatigue or difficulty, and pressing the buttons in chords did not appear to cause problems for anyone (although we observed a few difficulties matching the timeout period, as described below).
+
+§ 3.2 SOFTWARE AND CHORD IDENTIFICATION
+
+We wrote a simple wrapper library for Android to attach callback functions to the buttons through the Flic API. Android applications can poll the current combined state of the buttons through this library wrapper. Callback functions attached through the wrapper library are put on a short timer, allowing time for multiple buttons to be depressed before executing the callback. In all the applications we created, we assigned a single callback function to all the buttons; this function checks the state of all buttons and determines the appropriate behavior based on the combined state.
+
+Identifying chords represents an interpretation problem for any input system. When only individual buttons can be pressed, software can execute actions as soon as the signal has been received from any button. When chorded input is allowed, however, this method is insufficient, because users do not press all of the buttons of a chord at exactly the same time. Therefore, we implemented a 200 ms wait time (determined through informal testing) before processing input after an initial button signal - after this delay, the callback read the state of all buttons and reported the combined pattern (i.e., a chord or a single press). Once an input is registered, all buttons must return to their "off" states before another input can be registered.
+
+With three buttons, the user can specify eight states - but in our applications, we assume that there is a default state that corresponds to having no buttons pressed. This approach prevents the user from having to maintain pressure on the buttons during default operation.
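The chord-identification scheme described above (open a short window on the first press, read the combined button state when the window elapses, then require a full release before the next input) can be sketched as follows. This is a minimal Python illustration rather than the authors' Android wrapper library; all names and the polling structure are assumptions.

```python
import time

CHORD_DELAY = 0.200  # seconds; the paper's informally tuned wait time

class ChordReader:
    def __init__(self, on_chord, delay=CHORD_DELAY, clock=time.monotonic):
        self.state = [False, False, False]  # buttons 1..3, True = pressed
        self.on_chord = on_chord            # callback receiving the pattern tuple
        self.delay = delay
        self.clock = clock
        self._deadline = None               # end of the pending chord window
        self._locked = False                # True until all buttons are released

    def press(self, i):
        self.state[i] = True
        if not self._locked and self._deadline is None:
            self._deadline = self.clock() + self.delay  # open the wait window

    def release(self, i):
        self.state[i] = False
        if self._locked and not any(self.state):
            self._locked = False            # ready to accept the next input

    def tick(self):
        """Poll periodically; reports the chord once the window has elapsed."""
        if self._deadline is not None and self.clock() >= self._deadline:
            self._deadline = None
            pattern = tuple(self.state)     # e.g. (True, False, True) = chord 1+3
            if any(pattern):                # all-off is the default state, not an input
                self._locked = True
                self.on_chord(pattern)
```

With a simulated clock, pressing buttons 1 and 3 within the 200 ms window reports a single 1+3 chord rather than two separate presses, and nothing further is reported until both buttons return to "off".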
+
+§ 4 EVALUATION
+
+We carried out a study of our chording system to investigate our three main research questions:
+
+ * Interference: does learning additional mappings with the same buttons reduce learning or accuracy?
+
+ * Transfer: is performance maintained when users move from training to usage tasks that set the button commands into more realistic activities?
+
+ * Retention: does memory of the command mappings persist over one week (without intervening practice)?
+
+We chose not to compare to a baseline (e.g., GUI-based commands) for two reasons: first, in many small-screen devices screen space is at a premium, and dedicating a part of the screen to interface components is often not a viable alternative; second, command structures stored in menus or ribbons (which do not take additional space) have been shown to be significantly slower than memory-based interfaces in several studies (e.g., [2],[17],[41]).
+
+To test whether learning multiple mappings interferes with learning rate or accuracy, we created a training application to teach three mappings to participants: seven application shortcuts (Apps), seven colors (Colors), and seven text-editing commands (Text) (Table 1). Participants learned the mappings one at a time, as this fits the way that users typically become expert with one application through frequent use, then become expert with another.
+
+To further test interference, after all mappings were learned we gave participants a memory test to determine whether they could remember individual commands from all of the mappings. This test corresponds to scenarios where users switch between applications and must remember different mappings at different times.
+
+To test whether the mappings learned in the training system would transfer, we asked participants to use two of the mappings in simulated usage tasks. Colors were used in a drawing program where participants were asked to draw shapes in a particular line color, and Text commands were used in a simple editor where participants were asked to manipulate text formatting.
+
+To test retention, we recruited a subset of participants to carry out the memory test and the usage tasks a second time, one week after the initial session. Participants were not told that they would have to remember the mappings, and did not practice during the intervening week.
+
+Table 1. Button patterns and mappings.
+
+| Buttons | Pattern | Color | Command | App |
+| --- | --- | --- | --- | --- |
+| 1 | ●○○ | Red | Copy | Contacts |
+| 2 | ○●○ | Green | Paste | Browser |
+| 3 | ○○● | Blue | Italic | Phone |
+| 1+2 | ●●○ | Yellow | Small font | Maps |
+| 1+3 | ●○● | Magenta | Bold | Camera |
+| 2+3 | ○●● | Cyan | Large font | E-Mail |
+| 1+2+3 | ●●● | Black | Select | Calendar |
+| 0 | ○○○ | - | - | - |
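In software terms, each of the three mappings in Table 1 is simply a lookup table from the seven chord patterns to a command set. The sketch below shows this structure; it is illustrative only and not taken from the authors' code.

```python
# The seven chords of Table 1, written as (button1, button2, button3) states.
CHORDS = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),   # single buttons 1, 2, 3
    (1, 1, 0), (1, 0, 1), (0, 1, 1),   # two-button chords 1+2, 1+3, 2+3
    (1, 1, 1),                         # all three buttons
]

# One command set per usage context (the three mappings of the study).
COLORS = dict(zip(CHORDS, ["Red", "Green", "Blue", "Yellow", "Magenta", "Cyan", "Black"]))
TEXT = dict(zip(CHORDS, ["Copy", "Paste", "Italic", "Small font", "Bold", "Large font", "Select"]))
APPS = dict(zip(CHORDS, ["Contacts", "Browser", "Phone", "Maps", "Camera", "E-Mail", "Calendar"]))

def command_for(mapping, pattern):
    # (0, 0, 0) is the default state with no buttons pressed: no command.
    return mapping.get(pattern)
```

Because all three mappings share the same chord vocabulary, only the value lists differ between contexts - the property the study's interference questions probe.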
+
+§ 4.1 PART 1: LEARNING PHASE
+
+The first part of the study had participants learn and practice the mappings over ten blocks of trials. The system displayed a target item on the screen, and asked the user to press the appropriate button combination for that item (see Figure 2). The system provided feedback about the user's selection (Figure 2, bottom of screen); when the user correctly selected the target item, the system played a short tone and moved on to the next item. Users could consult a dialog that displayed the entire current mapping but had to close the dialog to complete the trial. The system presented each item in the seven-item mapping twice per block (sampling without replacement), and continued for ten blocks. The same system was used for all three mappings, and recorded selection time as well as any incorrect selections (participants continued their attempts until they selected the correct item).
+
+
+Figure 2: Training system showing Apps mapping (target at center of screen, selection feedback at bottom). Training for Color and Text mappings was similar.
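The block structure described above - each of the seven items presented twice per block, in random order (sampled without replacement), for ten blocks - amounts to shuffling two copies of the mapping per block. A sketch, with illustrative item names:

```python
import random

def training_schedule(items, reps=2, blocks=10, rng=random):
    """Build the trial order: each item appears `reps` times per block,
    shuffled within the block (i.e., sampled without replacement),
    repeated for `blocks` blocks."""
    schedule = []
    for _ in range(blocks):
        block = list(items) * reps
        rng.shuffle(block)   # within-block presentation order is random
        schedule.append(block)
    return schedule
```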
+
+§ 4.2 PART 2: USAGE TASKS
+
+We created two applications (Drawing and TextEdit) to test usage of two mappings in larger and more complex activities.
+
+Drawing. The Drawing application (Figure 3) is a simple paint program that uses the chord buttons to control line color (see Table 1). The application treated the button input as a set of spring-loaded modes - that is, the drawing color was set based on the current state of the buttons, and was unset when the buttons were released. For example, to draw a red square as shown in Figure 3, users held down the first button with their left hand and drew the square with their right hand; when the button was released, the system returned to its default mode (where touch was interpreted as panning). If the user released the buttons in the middle of a stroke, the line colour changed back to default grey.
+
+For each task in the Drawing application, a message on the screen asked the participant to draw a shape in a particular color. Tasks were grouped into blocks of 14, with each color appearing twice per block. A task was judged to be complete when the participant drew at least one line with the correct color (we did not evaluate whether the shape was correct, but participants did not know this). Participants completed three blocks in total.
+
+TextEdit. The TextEdit application asked users to select lines of text and apply manipulations such as cutting and pasting the text, setting the style (bold or italic), and increasing or decreasing the font size. Each of these six manipulations was mapped to a button combination. The seventh action for this mapping was used for selection, implemented as a spring-loaded mode that was combined with a touch action. We mapped selection to the combination of all three buttons since selection had to be carried out frequently - and this combination was easy to remember and execute.
+
+ < g r a p h i c s >
+
+Figure 3: Drawing Task
+
+For each TextEdit task, the lines on the screen told the user what manipulations to make to the text (see Figure 4). Each task asked the participant to select some text and then perform a manipulation. There were six manipulations in total, and we combined copy and paste into a single task, so there were five tasks. Tasks were repeated twice per block, and there were four blocks. Tasks were judged to be correct when the correct styling was applied; if the wrong formatting was applied, the user had to press an undo button to reset the text to its original form, and perform the task again.
+
+ < g r a p h i c s >
+
+Figure 4: TextEdit task after selecting text.
+
+§ 4.3 PART 3: MEMORY TEST
+
+The third stage of the study was a memory test that had a similar interface to the learning system described above. The system gave prompts for each of the 21 commands in random order (Apps, Colors, and Text were mixed together, and sampled without replacement). Participants pressed the button combination for each prompt, but no feedback was given about what was selected, or whether their selection was correct or incorrect. Participants were only allowed to answer once per prompt, and after each response the system moved to the next item.
+
+§ 4.4 PART 4: RETENTION
+
+To determine participants' retention of the mappings, after the study was over we recruited 8 of the 15 participants to return to the lab after one week to carry out the memory test and the usage tasks again (two blocks of each of the drawing and text tasks). Participants were not told during the first study that they would be asked to remember the mappings beyond the study; participants for the one-week follow-up were recruited after the initial data collection was complete. The usage and memory tests operated as described above.
+
+§ 4.5 PROCEDURE
+
+After completing an informed consent form and a demographics questionnaire, participants were shown the system and introduced to the use of the external buttons. Participants were randomly assigned to a mapping-order condition (counterbalanced using a Latin square), and then started the training tasks for their first mapping. Participants were told that both time and accuracy would be recorded but were encouraged to use their memory of the chords even if they were not completely sure. After the Color and Text mappings, participants also completed the usage tasks as described above (there was no usage task for the Apps mapping). After completing the learning and tasks with each mapping, participants filled out an effort questionnaire based on the NASA-TLX survey. After all mappings, participants completed the memory test.
+
+For the retention test, participants filled out a second consent form, then completed the memory test with no assistance or reminder of the mappings. They then carried out two blocks of each of the usage tasks (the Drawing and TextEdit apps had the same order as in the first study).
+
+§ 4.6 PARTICIPANTS AND APPARATUS
+
+Fifteen participants were recruited from the local university community (8 women, 7 men, mean age 28.6). All participants were experienced with mobile devices (more than 30 min/day average use). All but one of the participants were right-handed, and the one left-handed participant stated that they were used to operating mobile devices in a right-handed fashion.
+
+The study used the chording prototype described above. Sessions were carried out with participants seated at a desk, holding the phone (and operating the chording buttons) with their left hands. The system recorded all performance data; questionnaire responses were entered on a separate PC.
+
+§ 4.7 DESIGN
+
+The main study used two 3×10 repeated-measures designs. The first looked at differences across mappings, and used factors Mapping (Apps, Colors, Text) and Block (1-10). The second looked at interference by analyzing differences by the position of the mapping in the overall sequence, and used factors Position (first, second, third) and Block (1-10). For the memory tests, we used a 21×3×7 design with several planned comparisons; factors were Item (the 21 items shown in Table 1), Pattern (the 7 button patterns shown in column 2 of Table 1), and Mapping (Apps, Colors, Text).
+
+Dependent variables were selection time, accuracy (the proportion of trials where the correct item was chosen on the first try), and errors.
+
+§ 5 RESULTS
+
+No outliers were removed from the data. In the following analyses, significant ANOVA results report partial eta-squared (η²) as a measure of effect size (where .01 can be considered small, .06 medium, and > .14 large [11]). We organize the results below around the main issues under investigation: training performance when learning three different mappings, interference effects, transfer performance, and retention after one week.
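For reference, partial eta-squared relates to a reported F statistic through the standard identity η²p = (F · df_effect) / (F · df_effect + df_error). The sketch below is purely illustrative of that identity; the effect sizes reported in this section may have been computed directly from sums of squares with a different estimator, so the identity need not reproduce them exactly.

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta-squared from an F statistic and its degrees of freedom:
    SS_effect / (SS_effect + SS_error) = (F * df1) / (F * df1 + df2)."""
    return (f * df_effect) / (f * df_effect + df_error)
```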
+
+§ 5.1 TRAINING: LEARNING RATE, ACCURACY, AND EFFORT
+
+Selection time. Overall, mean selection times for the three mappings were 2626 ms for Apps (s.d. 2186), 2271 ms for Colors (s.d. 2095), and 2405 ms for Text (s.d. 2193). A 3×10 (Mapping × Block) RM-ANOVA showed no effect of Mapping (F2,28 = 1.06, p = 0.36). As Figure 5 shows, selection times decreased substantially across trial blocks; ANOVA showed a significant main effect of Block (F9,126 = 42.3, p < 0.0001, η² = 0.40). There was no interaction (F18,252 = 0.40, p = 0.98).
+
+Accuracy. Across all blocks, the proportion of trials where the correct item was chosen first was 0.83 for both Apps and Colors and 0.84 for Text (all s.d. 0.37). RM-ANOVA showed no effect of Mapping (F2,28 = 0.019, p = 0.98), but again showed a significant effect of Block (F9,126 = 4.42, p < 0.001, η² = 0.09), with no interaction (F18,252 = 0.71, p = 0.78). Overall error rates (i.e., the number of incorrect selections per trial, since participants continued to make selections until they got the correct answer) were low for all mappings: 0.25 errors per selection for Apps, 0.26 for Colors, and 0.24 for Text.
+
+ < g r a p h i c s >
+
+Figure 5: Mean selection time (±s.e.), by block and mapping
+
+ < g r a p h i c s >
+
+Figure 6: Mean accuracy (±s.e.) by block and mapping.
+
+During the sessions we identified a hardware-based source of error that reduced accuracy. The 200 ms timeout period in some cases caused errors when people held the buttons for the wrong period of time, when the Bluetooth buttons did not transmit a signal fast enough, or when people formed a chord in stages. This issue contributes to the accuracy rates shown above: our observations indicate that people had the button combinations correctly memorized but had occasional problems in producing the combinations with the prototype. We believe that this difficulty can be fixed by adjusting our timeout values and by using an embedded microprocessor to read button states (to avoid Bluetooth delay).
+
+Perceived Effort. Responses to the TLX effort questionnaire are shown in Figure 7; overall, people felt that all of the mappings required relatively low effort. Friedman rank sum tests showed only one difference between mappings - people saw themselves as being less successful with the Apps mapping (χ² = 7, p = 0.030). In the end-of-session questionnaire, 12 participants stated that Colors were easiest to remember (e.g., one person stated "colours were easier to remember" and another said that "memorizing the colours felt the easiest").
+
+ < g r a p h i c s >
+
+Figure 7: NASA-TLX responses (±s.e.), by mapping
+
+§ 5.2 INTERFERENCE 1: EFFECTS OF LEARNING NEW MAPPINGS
+
+To determine whether learning a second and third mapping would be hindered because of the already-memorized mappings, we analysed the performance data based on whether the mapping was the first, second, or third to be learned. Figure 9 shows selection time over ten blocks for the first, second, and third mappings (the specific mapping in each position was counterbalanced).
+
+Selection time. A 3×10 RM-ANOVA looked for effects of position in the sequence on selection time. We found a significant main effect of Position (F2,28 = 19.68, p < 0.0001, η² = 0.22), but as shown in Figure 8, the second and third mappings were actually faster than the first mapping; follow-up t-tests with Bonferroni correction showed that these differences were significant (p < 0.01). The difference was more obvious in the early blocks (indicated by a significant interaction between Position and Block, F18,252 = 4.63, p < 0.0001, η² = 0.14). These findings suggest that adding new mappings for the same buttons does not impair learning or performance for subsequent mappings.
+
+Accuracy. We carried out a similar 3×10 RM-ANOVA to look for effects on accuracy (Figure 9). As with selection time, performance was worse with the first mapping (accuracy 0.80) than the second and third mappings (0.85 and 0.86). ANOVA showed a main effect of Position on accuracy (F2,28 = 7.18, p = 0.003, η² = 0.072), but no Position × Block interaction (F18,252 = 1.20, p = 0.051).
+
+ < g r a p h i c s >
+
+Figure 8: Mean selection time (±s.e.), by position and block.
+
+ < g r a p h i c s >
+
+Figure 9: Mean accuracy (±s.e.), by position and block.
+
+§ 5.3 INTERFERENCE 2: MEMORY TEST WITH ALL MAPPINGS
+
+The third stage of the study was the memory test, in which participants selected each of the 21 commands from all three mappings, in random order. Participants answered once per item with no feedback. The overall accuracy was 0.87 (0.86 for Apps, 0.86 for Colors, and 0.89 for Text); see Figure 10. Note that this accuracy is higher than that seen with the individual mappings during the training sessions.
+
+To determine whether there were differences in accuracy for individual items, mappings, or button patterns, we carried out a 21×3×7 (Item × Mapping × Pattern) RM-ANOVA. We found no significant effects of any of these factors (for Item, F20,240 = 1.55, p = 0.067; for Mapping, F2,24 = 0.43, p = 0.65; for Pattern, F6,12 = 0.0004, p = 0.99), and no interactions.
+
+Figure 10: Accuracy in the memory test by item and mapping. Button patterns shown in parentheses for each item.
+
+Figure 10 also shows that multi-finger chords are not substantially different from single-finger button presses. Accuracy was only slightly lower with the (101) and (011) patterns than with the single-finger patterns, and the one three-finger pattern (111) had an accuracy above 90% for two of the three mappings.
+
+§ 5.4 TRANSFER: PERFORMANCE TRANSFER TO USAGE TASKS
+
+After learning the Color and Text mappings, participants carried out usage tasks in the TextEdit and Drawing applications. Accuracy results are summarized in Figure 11 (note that the text task had four blocks, and the drawing task had three blocks). Accuracy in the usage tasks ranged from 0.7 to 0.8 across the trial blocks - slightly lower than the 0.8-0.9 accuracy seen in the training stage of the study. It is possible that the additional mental requirements of the task (e.g., determining what to do, working with text, drawing lines) disrupted people's memory of the mappings - but the overall difference was small.
+
+ < g r a p h i c s >
+
+Figure 11: Mean accuracy (±s.e.), by task and block.
+
+§ 5.5 RETENTION: PERFORMANCE AFTER ONE WEEK
+
+The one-week follow-up asked eight participants to carry out the memory test and two blocks of each of the usage tasks, to determine whether participants' memory of the mappings had persisted without any intervening practice (or even any knowledge that they would be re-tested). Overall, the follow-up showed that accuracy decayed substantially over one week - but that participants quickly returned to their previous level of expertise once they started the usage tasks. In the memory test, overall accuracy dropped to 0.49 (0.43 for Apps, 0.50 for Colors, and 0.54 for Text), with some individual items as low as 10% accuracy. Only two items maintained accuracy above 0.85 - "Red" and "Copy".
+
+The two usage tasks (Drawing and Text editing) were carried out after the memory test, and in these tasks, participant accuracy recovered considerably. In the first task (immediately after the memory test), participants had an overall 0.60 accuracy in selection; and by the second block, performance rose to accuracy levels similar to the first study (for Drawing, 0.82; for Text, 0.70).
+
+This follow-up study is limited - it did not compare retention when learning only one mapping, so it is impossible to determine whether the decay arose because of the number of overloaded mappings learned in the first study. However, the study shows that retention is an important issue for designers of chorded memory-based techniques. A short training period (less than one hour for all three mappings) appears to be insufficient to ensure retention after one week with no intervening practice; however, in an ecological context users would likely use the chords more regularly. In addition, participants' memory of the mappings was restored after only a few minutes of use.
+
+§ 6 DISCUSSION
+
+Our study provides several main findings:
+
+ * The training phase showed that people were able to learn all three mappings quickly (performance followed a power law), and were able to achieve 90% accuracy after training;
+
+ * Overloading the buttons with three mappings did not cause any problems for participants - the second and third mappings were learned faster than the first, and there was no difference in performance across the position of the learned mappings;
+
+ * People were able to successfully transfer their expertise from the training system to the usage tasks - although performance dropped by a small amount;
+
+ * Performance in the memory test, which mixed all three mappings together, was very strong, with many of the items remembered at near 100% accuracy;
+
+ * Retention over one week without any intervening practice was initially poor (about half the accuracy of the first memory test), but recovered quickly in the usage tasks to near the levels seen in the first sessions.
+
+In the following paragraphs we discuss the reasons for our results, and comment on how our findings can be generalized and used in the design of richer touch-based interactions.
+
+§ 6.1 REASONS FOR RESULTS
+
+People's overall success in learning to map twenty-one total items to different button combinations is not particularly surprising - evidence from other domains such as chording keyboards suggests that with practice, humans can be very successful in this type of task. It is more interesting, however, that these 21 items were grouped into three overloaded sets that used the same button combinations - and we did not see any evidence of interference between the mappings. One reason for people's success in learning with multiple button mappings may be that the contexts of the three mappings were quite different, and there were few conceptual overlaps in the semantics of the different groups of items (e.g., colors and application shortcuts are quite different in the ways that they are used). However, there are likely many opportunities in mobile device use where this type of clean separation of semantics occurs - suggesting that overloading can be used to substantially increase the expressive power of limited input.
+
+People were also reasonably successful in using the learned commands in two usage tasks. This success shows that moving to more realistic tasks does not substantially disrupt memories built up during a training exercise - although it is likely that the added complexity of the tasks caused the reduction in accuracy compared to training. The overall difference between the training and usage environments was relatively small, however; more work is needed to examine transfer effects to real-world use.
+
+The relatively low accuracy of our system (between 80% and 90%) is a potential problem for real-world use. The error rate in our device may have been inflated due to the timeout issue described above; further work is needed to investigate ways of reducing this cause of error. We note, however, that there are situations in which techniques with imperfect accuracy can still be effective (such as interfaces for setting non-destructive parameters and states).
+
+Finally, the additional decay in memory of the mappings over one week may simply be an effect of the human memory system - our training period was short, and early studies on "forgetting curves" show approximately similar decay to what we observed. It is likely that in real-world settings, the frequency of mobile phone use would have provided intervening practice that would have maintained users' memory - but this issue requires further study.
+
+§ 6.2 LIMITATIONS AND OPPORTUNITIES FOR FUTURE WORK
+
+The main limitations of our work are in the breadth and realism of our evaluations, and in the physical design of the prototype. First, although our work takes important steps towards ecological validity for augmented input, our study was still a controlled experiment. We designed the study to focus on real-world issues of interference, transfer, and retention, but the realism of our tasks was relatively low. Therefore, a critical area for further work is testing our system with real tasks in real-world settings. The Flic software allows us to map button inputs to actions in real Android applications, so we plan to have people use the next version of the system over a longer time period and with their own applications.
+
+Second, it is clear that additional engineering work can be done to improve both the ergonomics and the performance of the prototype. The potential errors introduced by our 200 ms timeout are a problem that can likely be solved, but the timeout caused other problems as well - once participants were expert with the commands, some of them felt that holding the combination until the application registered the command slowed them down. Adjusting the timeout and ensuring that the system does not introduce additional errors is an important area for our future work. We also plan to experiment with different invocation mechanisms (e.g., selection on button release) and with the effects of providing feedback as the chord is being produced.
+
+An additional opportunity for future work that was identified by participants during the study is the potential use of external chorded buttons as an eyes-free input mechanism. The button interface allows people to change input modes without shifting their visual attention from the current site of work, and also allows changing tools without needing to move the finger doing the drawing (and without occluding the workspace with menus or toolbars).
+
+§ 7 CONCLUSION
+
+Expressiveness is limited in mobile touch interfaces. Many researchers have devised new ways of augmenting these interactions, but there is still little understanding of issues of interference, transfer, and retention for augmented touch interactions, particularly those that use multiple mappings for different usage contexts. To provide information about these issues with one type of augmented system, we developed a phone case with three pushbuttons that can be chorded to provide seven input states. The external buttons can provide quick access to command shortcuts and transient modes, increasing the expressive power of interaction. We carried out a four-part study with the system, and found that people can successfully learn multiple mappings of chorded commands, and can maintain their expertise in more-complex usage tasks (although overall accuracy was low). Retention was also an important issue - accuracy dropped over one week, but was quickly restored after a short period of use. Our work provides new knowledge about the use of chorded input, and shows that simple input mechanisms such as chording buttons have promise as a way to augment mobile interactions.
+
+§ ACKNOWLEDGMENTS
+
+Funding for this project was provided by the Natural Sciences and Engineering Research Council of Canada, and the Plant Phenotyping and Imaging Research Centre.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c46356e3c0f67735451bcd579cbc4357609f88bb
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,463 @@
+# Workflow Graphs: A Computational Model of Collective Task Strategies for 3D Design Software
+
+Minsuk Chang*
+
+School of Computing
+
+KAIST
+
+Ben Lafreniere†
+
+Autodesk Research
+
+Juho Kim‡
+
+School of Computing
+
+KAIST
+
+George Fitzmaurice§
+
+Autodesk Research
+
+Tovi Grossman¶
+
+University of Toronto
+
+## Abstract
+
+This paper introduces Workflow graphs, or W-graphs, which encode how the approaches taken by multiple users performing a fixed 3D design task converge and diverge from one another. The graph's nodes represent equivalent intermediate task states across users, and directed edges represent how a user moved between these states, inferred from screen recording videos, command log data, and task content history. The result is a data structure that captures alternative methods for performing sub-tasks (e.g., modeling the legs of a chair) and alternative strategies of the overall task. As a case study, we describe and exemplify a computational pipeline for building W-graphs using screen recordings, command logs, and 3D model snapshots from an instrumented version of the Tinkercad 3D modeling application, and present graphs built for two sample tasks. We also illustrate how W-graphs can facilitate novel user interfaces with scenarios in workflow feedback, on-demand task guidance, and instructor dashboards.
+
+Index Terms: Human-centered computing—Interactive systems and tools
+
+## 1 INTRODUCTION
+
+There are common situations in which many users of complex software perform the same task, such as designing a chair or table, bringing their unique set of skills and knowledge to bear on a set goal. For example, this occurs when multiple people perform the same tutorial, complete an assignment for a course, or work on sub-tasks that frequently occur in the context of a larger task, such as 3D modeling joints when designing furniture. It is also common for users to discuss and compare different methods of completing a single task in online communities for 3D modeling software (for an example of such discussion, see Figure 2). This raises an interesting possibility-what if the range of different methods for performing a task could be captured and represented as rich workflow recordings, as a way to help experienced users discover alternative methods and expand their workflow knowledge, or to assist novice users in learning advanced practices?
+
+In this research, we investigate how multiple demonstrations of a fixed task can be captured and represented in a workflow graph (W-graph) (Figure 1). The idea is to automatically discover the different means of accomplishing a goal from the interaction traces of multiple users, and to encode these in a graph representation. The graph thus represents diverse understanding of the task, opening up a range of possible applications. For example, the graph could be used to provide targeted suggestions of segments of the task for which alternative methods exist, or to synthesize the most efficient means of completing the task from the many demonstrations encoded in the graph. It could also be used to synthesize and populate tutorials tailored to particular users, for example by only showing methods that use tools known to that user.
+
+Figure 1: W-graphs encode multiple demonstrations of a fixed task, based on commonalities in the workflows employed by users. Nodes represent semantically similar states across demonstrations. Edges represent alternative workflows for sub-tasks. The width of edges represents the number of distinct workflows between two states.
+
+To investigate this approach, we instrumented Tinkercad¹, a 3D solid modeling application popular in the maker community, to gather screen recordings, command sequences, and changes to the CSG (constructive solid geometry) tree of the specific 3D model being built. The interaction traces for multiple users performing the same task are processed by an algorithm we developed, which combines them into a W-graph representing the collective actions of all users. Unlike past approaches to workflow modeling in this domain, which have focused on command sequence data (e.g., [38]), our approach additionally leverages the 3D model content being created by the user. This allows us to track the progress of the task in direct relation to changes in the content (i.e., the 3D model) to detect common stages of the task progression across multiple demonstrations. We use an autoencoder [34] to represent the 3D geometry information of each 3D model snapshot, which we found to be a robust and scalable method for detecting workflow-relevant changes in the geometry, as compared to metrics such as comparing CSG trees, 2D renders, and 3D meshes.
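One way to operationalize "common stages" from such latent representations is to group snapshots whose embeddings are mutually similar. The sketch below is our own illustration, not the paper's pipeline: it assumes snapshot embeddings have already been produced by some encoder, and uses an arbitrary cosine-similarity threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (sequences of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def equivalent_states(embeddings, threshold=0.95):
    """Greedily group snapshot indices whose embeddings are mutually similar;
    each group is a candidate shared task state across demonstrations."""
    groups = []
    for i, emb in enumerate(embeddings):
        for group in groups:
            if all(cosine_similarity(emb, embeddings[j]) >= threshold
                   for j in group):
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

A greedy grouping like this is order-dependent; a production pipeline would more likely use a proper clustering or alignment step, but the core idea of thresholded similarity in latent space is the same.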
+
+The result is a graph in which each directed path from the starting node to a terminal node represents a potential workflow for completing the task, and multiple edges between any two states represent alternative approaches for performing that segment of the task. The collected command log data and screen recordings associated with the edges of the graph can be processed to define metrics on paths (such as average workflow duration or number of unique commands used), and displayed as demonstration content in interfaces.
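The edge-and-metadata structure described above can be sketched as a directed multigraph; this is a hypothetical minimal version, with field names and summary metrics of our own choosing rather than the paper's.

```python
from collections import defaultdict

class WGraph:
    """Directed multigraph over task states; parallel edges between the same
    pair of states are alternative demonstrated workflows."""

    def __init__(self):
        # (src_state, dst_state) -> list of workflow records
        self.edges = defaultdict(list)

    def add_workflow(self, src, dst, duration_s, commands, video_uri=None):
        """Record one user's demonstration of the sub-task from src to dst."""
        self.edges[(src, dst)].append(
            {"duration": duration_s, "commands": commands, "video": video_uri}
        )

    def edge_stats(self, src, dst):
        """Summary metrics for one edge, of the kind that could label edges
        in a W-graph visualization."""
        workflows = self.edges[(src, dst)]
        durations = [w["duration"] for w in workflows]
        uniques = [len(set(w["commands"])) for w in workflows]
        return {
            "n_workflows": len(workflows),
            "avg_duration": sum(durations) / len(durations),
            "min_unique_commands": min(uniques),
            "max_unique_commands": max(uniques),
        }
```

For example, two demonstrations of modeling a chair's legs between the same pair of states would appear as two parallel edges, and edge_stats would summarize their durations and command vocabularies.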
+
+The main contributions of this paper are:
+
+---
+
+*e-mail: minsuk@kaist.ac.kr
+
+†e-mail: ben.lafreniere@gmail.com
+
+‡e-mail: juhokim@kaist.ac.kr
+
+§e-mail: george.fitzmaurice@autodesk.com
+
+¶e-mail: tovi@dgp.toronto.edu
+
+Graphics Interface Conference 2020
+
+28-29 May
+
+ACM to publish electronically.
+
+¹ https://tinkercad.com
+
+---
+
+Figure 2: Fifteen distinct suggestions on how to perform a 3D modeling task - from the largest Fusion 360 user community on Facebook
+
+- The concept of W-graphs, which represent the semantic structure of a task, based on demonstrations from multiple users
+
+- A computational pipeline for constructing W-graphs and a demonstration of the approach for sample tasks in Tinkercad
+
+- The description of possible applications enabled by W-graphs
+
+We begin with a review of prior work, then describe the W-graph construction approach at a conceptual level. Next, we present workflow graphs constructed for two sample tasks performed by Tinkercad users, and discuss three applications enabled by W-graphs - workflow feedback, on-demand task guidance, and instructor support. Finally, we present preliminary user feedback on a prototype of one of these applications, W-suggest, and conclude with a discussion of directions for future work.
+
+## 2 RELATED WORK
+
+This work expands prior research on software learning and workflow capture, mining organically-created instructional content, and supporting learning at scale.
+
+### 2.1 Software Learning and Workflow Capture
+
+Early HCI research recognized the challenges of learning software applications [7], and identified the benefits of minimalist and task-centric help resources [6]. More recently, Grossman et al. [21] identified five common classes of challenges that users face when learning feature-rich software applications: understanding how to perform a task, awareness of tools and features, locating tools and features, understanding how to use specific tools, and transitioning to efficient behaviors.
+
+Of the challenges listed above, the majority of existing work on assisting users to acquire alternative workflows has looked at how to promote the use of keyboard shortcuts and other expert interaction techniques [20, 30, 31, 36], with less attention on the adoption of more efficient workflows. Closer to the current work is CADament [29], a real-time multi-player game in which users compete to perform a 2D CAD task faster than one another. In the time between rounds of the game, the user is shown video of peers who are at a higher level of performance than they are, a feature which was found to prompt users to adopt more efficient methods. While CADament shares some similarity with the current work, its improvements were at the level of refining the use of individual commands, rather than understanding alternative multi-command workflows.
+
+Beyond systems explicitly designed to promote use of more efficient behaviors, a number of systems have been designed to capture workflows from users, which could then be made available to others. Photo Manipulation Tutorials by Demonstration [19] and MixT [10] enable users to perform a workflow, and automatically convert that demonstration into a tutorial that can be shared with other users. Meshflow [12] and Chronicle [22] continuously record the user as they work, capturing rich metadata and screen recordings, and then provide visualizations and interaction techniques for exploring that editing history. In contrast to these works, which capture individual demonstrations of a task, W-graphs captures demonstrations from multiple users, and then uses these to recommend alternate workflows. In this respect, the current work is somewhat similar to Community Enhanced Tutorials [28], which records video demonstrations of the actions performed on each step of an image-editing tutorial and provides these examples to subsequent users of the tutorial. However, W-graphs looks at a more general problem, where the task is not sub-divided into pre-defined steps, and users thus have much more freedom in how they complete the task.
+
+Summarizing the above, there has been relatively little work on software learning systems that capture alternative workflows, and we are unaware of any work that has tried to do so by building a representation that encompasses many different means of performing a fixed 3D modeling task.
+
+### 2.2 Mining and Summarizing Procedural Content
+
+A number of research projects have investigated how user-created procedural content can be analyzed or mined for useful information. RecipeScape [9] enables users to browse and analyze hundreds of cooking instructions for an individual dish by visually summarizing their structural patterns. Closer to our domain of interest, Delta [27] produces visual summaries of image editing workflows for Photoshop, and enables users to visually compare pairs of workflows. We take inspiration from the Delta system and this work's findings on how users compare workflows. That being said, our focus is on automatically building a data structure representing the many different ways that a task can be performed, rather than on how to best visualize or compare workflows.
+
Query-Feature Graphs [16] provide a mapping between high-level descriptions of user goals and the specific features of an interactive system relevant to achieving those goals, and are produced by combining a range of data sources, including search query logs, search engine results, and web page content. While this approach could be valuable for understanding the tasks performed in an application, and the commands relevant to those tasks, query-feature graphs do not in themselves provide a means of discovering alternative or improved workflows.
+
Several research projects have investigated how to model a user's context as they work in a software application with the goal of aiding the retrieval and use of procedural learning content, for example using command log data [32], interactions gathered through accessibility APIs across multiple applications [17], or coordinated web browser and application activities [15]. Along similar lines, Wang et al. [38] developed a set of recommender algorithms for software workflows, and demonstrated how they could be used to recommend community-generated videos for a 3D modeling tool. While the above works share our goal of providing users with relevant workflow information, their algorithms have focused on using the stream of actions being performed by the user, not the content that is being edited. Moreover, these techniques are not designed to capture the many different ways a fixed task can be performed, which limits their ability to recommend ways that a user can improve on the workflows they already use.
+
+### 2.3 Learning at Scale
+
A final area of related work concerns how technology can enable learning at scale, for example by helping a scarce pool of experts to efficiently teach many learners, or by enabling learners to help one another. As a recent example, CodeOpticon [23] enables a single tutor to monitor and chat with many remote students working on programming exercises, through a dashboard that shows each learner's code editor with real-time visualizations of text differences and highlighting of compilation errors.
+
+Most related to the current work are learnersourcing techniques, which harness the activities of learners to contribute to human computation workflows. This approach has been used to provide labeling of how-to videos [25], and to generate hints to learners by asking other learners to reflect on obstacles they have overcome [18]. The AXIS system [40] asks learners to provide explanations as they solve math problems, and uses machine learning to dynamically determine which explanations to present to future learners.
+
Along similar lines, Whitehill and Seltzer investigated the viability of crowdsourcing as a means of collecting video demonstrations of mathematical problem solving [39]. To analyze the diversity of problem-solving methods, the authors manually extracted the problem-solving steps from 17 videos to create a graph of different solution paths. W-graphs produce a similar artifact for the domain of software workflows, but with an automated approach for constructing the graphs.
+
+In summary, by capturing and representing the workflows employed by users with varying backgrounds and skill levels, we see W-graphs as a potentially valuable approach for scaling the learning and improvement of software workflows.
+
+## 3 WORKFLOW GRAPHS
+
The key problem that we address is that designers and researchers currently lack scalable approaches for analyzing and supporting user workflows. To develop such an approach, we need techniques that can map higher-level user intents (e.g., 3D modeling a mug) to strategy-level workflows (e.g., modeling the handle before the body) and user actions (the specific sequences of actions involved).
+
+We can broadly classify approaches for modeling user workflows derived from action sequences into bottom-up approaches and top-down approaches.
+
Bottom-up approaches record users' action sequences, and then attempt to infer the user's intent at a post-processing stage using unsupervised modeling techniques such as semantic segmentation, clustering, or topic modeling [2, 9]. A disadvantage of this approach is that the results can be difficult to present to users, because unsupervised modeling techniques do not produce human-readable labels. Meaningful labels could conceivably be added to the resulting clusters (e.g., using crowdsourcing techniques [11, 26, 37]), but this is a non-trivial problem under active research.
+
An alternative is a top-down approach, in which a small number of domain experts break down a task into meaningful units (e.g., subgoals [8]), and then users or crowdworkers use these pre-created units as labels for their own command log data, or that of other users. This approach also comes with disadvantages: users must perform the labeling, their interpretation of pre-defined labels can differ, and the overall breakdown of the task depends on the judgment of a few domain experts, limiting the scalability of the approach.
+
How, then, can we organize users' collective interaction data into a meaningful structure, while retaining the scalability of simply recording user action sequences without interrupting users to acquire labels?
+
+To investigate this possibility, we developed Workflow graphs (W-graphs), which synthesize many demonstrations of a fixed task (i.e., re-creating the same 3D model) such that the commonalities and differences between the approaches taken by users are encoded in the graph. To ensure the technique can scale, the goal is to automate the construction process, using recordings of demonstrations of the task as input (which may include screen recordings, command log data, content snapshots, etc.).
+
Formally, a W-graph is a directed graph $G = (V, A)$ consisting of the following components:
+
+### 3.1 Graph Vertices
+
+$$
V = \{\, v_i : 1 \leq i \leq N \,\}
+$$
+
The vertices of the graph represent semantically-meaningful states in the demonstrations, such as a sub-goal of the task. Ideally, we want these states to capture the points where a user has completed a given sub-task, and has yet to start the next sub-task. Detecting these states automatically from unlabeled demonstrations is a challenge, but the idea is to leverage the demonstrations of multiple users to discover common states that occur across their respective methods for completing the task. If a new demonstration is completely different from those already represented in the graph, it might not share any nodes with those already in the graph, apart from the start and final nodes, which are shared by all demonstrations.
+
Note that the appropriate criterion for judging which states from multiple demonstrations are semantically similar is ill-defined, and depends on the intended application of the W-graph. For example, a loose criterion could be used to construct a W-graph that indicates coarse differences between approaches for completing the task, while a stricter criterion could create a more complex graph that reveals finer differences between similar approaches. As we discuss in the next section, our algorithm allows the similarity threshold to be tuned based on the intended application.
+
+### 3.2 Graph Edges
+
+$$
A = \{\, (v_i, v_j, d_k, E_{i,j,k}) : v_i, v_j \in V \,\}
+$$
+
+$$
E_{i,j,k} = \{\, \text{event}_1, \text{event}_2, \text{event}_3, \ldots \,\}
+$$
+
+The directed edges of the graph represent workflows used by a user to move between semantically-similar states. There may be multiple directed edges between a given pair of states, if multiple demonstrations ${d}_{k}$ include a segment from state ${v}_{i}$ to ${v}_{j}$ .
+
Each directed edge is associated with a set of events $E_{i,j,k}$, which includes the timestamped interaction trace of events in demonstration $d_k$ performed in the segment between states $v_i$ and $v_j$. This trace of events could include timestamped command invocations, 3D model snapshots, or any other timestamped data gathered from the recorded demonstrations.
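The definitions above can be sketched as a small Python data structure. This is an illustrative sketch only: the `Event` fields, node identifiers, and method names are our own assumptions, not part of the formal definition.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    """One timestamped item from a recorded demonstration."""
    timestamp: float
    command: str           # e.g. a command invocation name
    snapshot_id: str = ""  # optional link to a 3D model snapshot

@dataclass
class WGraph:
    """Directed multigraph: vertices are shared semantic states; each
    edge key (v_i, v_j, d_k) carries the event trace E_{i,j,k} for that
    segment of demonstration d_k."""
    vertices: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)  # (v_i, v_j, d_k) -> [Event]

    def add_segment(self, v_i, v_j, demo_id, events):
        self.vertices.update({v_i, v_j})
        self.edges[(v_i, v_j, demo_id)] = list(events)

    def out_edges(self, v_i):
        """All (edge key, trace) pairs leaving state v_i."""
        return [(k, ev) for k, ev in self.edges.items() if k[0] == v_i]
```

Because edges are keyed by demonstration id, multiple demonstrations of the same segment coexist as parallel edges, matching the multigraph definition above.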
+
+### 3.3 Interaction Data
+
+The interaction trace data associated with edges enables a great deal of flexibility in how the W-graph is used. For example, this data could be used to retrieve snippets of screen recordings of the demonstrations associated with the segment of the task between two states, or it could be used to define metrics on the different workflows used for that segment of the task (e.g., the number of unique commands used, or the average time it takes to perform the workflow). As another example, analyzing the interaction traces along many different paths between states can reveal the average time for sub-tasks or the variance across users. Later in the paper, we present some example applications of W-graphs to illustrate the full flexibility of this data representation.
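As a concrete illustration, metrics like those mentioned above can be computed directly from an edge's trace. This is a minimal sketch; representing events as `(timestamp_in_seconds, command)` pairs is our assumption.

```python
def edge_metrics(events):
    """Compute simple workflow metrics over one edge's interaction
    trace, assumed to be (timestamp_in_seconds, command) pairs:
    the segment's duration and its number of unique commands."""
    times = [t for t, _ in events]
    return {
        "duration": max(times) - min(times),
        "unique_commands": len({cmd for _, cmd in events}),
    }
```

For example, `edge_metrics([(0, "box"), (4, "move"), (9, "move")])` yields a duration of 9 seconds and 2 unique commands; averaging such metrics over parallel edges gives the per-segment statistics described above.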
+
+## 4 PIPELINE FOR CONSTRUCTING W-GRAPHS
+
+In this section we describe the computational pipeline we have developed for constructing W-graphs. We start by discussing our instrumentation of Tinkercad and the data set we collected, then present the multi-step pipeline for processing the data and the similarity metric for identifying equivalent-intermediate states. The choice of a method for identifying equivalent-intermediate states is a key aspect of the pipeline, and we experimented with several alternative methods.
+
+### 4.1 Tinkercad Data Collection
+
We instrumented a customized version of Tinkercad to record timestamped command invocations and snapshots of the 3D model the user is working on after each command is executed (represented as a constructive solid geometry (CSG) tree with unique IDs for each object, to enable the association of model parts across multiple snapshots). To capture the instrumentation data, participants were asked to install Autodesk Screencast ${}^{2}$, a screen recording application that can associate command metadata with the timeline of recorded video data. Collectively, this allowed us to gather timestamp-aligned command invocation data, 3D model snapshots, and screen recordings of participants performing 3D modeling tasks. An example of a user-recorded screencast video can be seen in Figure 3.
+
+
+Figure 3: Screencast of a user demonstration, consisting of the (a) screen recording, (b) command sequences, and (c) 3D model snapshots
+
Using this approach, we collected user demonstrations for two tasks: modeling a mug and modeling a standing desk (Figure 4). These tasks were selected because they could be completed in under 30 minutes, and represent different levels of complexity. The mug task is relatively simple, requiring fewer operations and primitives, while the desk task can be complex and time-consuming if the user does not know particular Tinkercad tools, such as Align and Ruler. The Desk model also requires approximately twice as many primitives as the Mug model.
+
We recruited participants through UserTesting.com and an email to an internal mailing list at a large software company. 14 participants were recruited for the Mug task, and 11 participants were recruited for the Desk task, but we excluded participants who did not follow the instructions, or failed to upload their recordings in the final step. After applying these criteria, we had 8 participants for the mug task (6 male, 2 female, ages 27-48), and 6 participants for the standing desk task (5 male, 1 female, ages 21-43).
+
The result of the data collection procedure was 8 demonstrations for the Mug task, which took 26m:24s on average (SD = 10m:46s) and consisted of an average of 142 command invocations (SD = 101); and 6 demonstrations for the Desk task, which took 23m:23s on average (SD = 8m:20s) and consisted of an average of 223 command invocations (SD = 107).
+
+### 4.2 Workflow to Graph Construction
+
The W-graph construction pipeline consists of three steps: preprocessing, collapsing node sequences, and sequence merging.
+
+
+Figure 4: Models used for data collection - (a) Mug, (b) Desk
+
+#### 4.2.1 Step 1. Preprocessing
+
+To start, we collapse repeated or redundant commands in the sequence of events (both keystroke and clickstream data) for each demonstration. For example, multiple invocations of "arrow key presses" for moving an object are merged into one "object moved with keyboard" and multiple invocations of "panning viewpoint" are merged into "panning".
+
+Next, the sequence of events for each user is considered as a set of nodes (one node per event), with directed edges connecting each event in timestamped sequence (Figure 5a). The 3D model snapshot for each event is associated with the corresponding node, and the event data (including timestamped command invocations) is associated with the incoming edge to that node. Since each demonstration starts from a blank document and finishes with the completed 3D model, we add a START node with directed edges to the first node in each demonstration, and we merge the final nodes of each demonstration into an END node. At this point, each demonstration represents a distinct directed path from the START node to the END node (Figure 5b).
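Step 1 can be sketched as follows. This is a hedged illustration: the raw command names and the merge table are our own stand-ins, not Tinkercad's actual event vocabulary.

```python
from itertools import groupby

# Illustrative merge table: runs of these raw events collapse into a
# single higher-level event (names are hypothetical).
MERGE = {
    "arrow_key_press": "object_moved_with_keyboard",
    "pan_viewpoint": "panning",
}

def preprocess(raw_commands):
    """Collapse runs of repeated/redundant commands, then bracket the
    demonstration with the shared START and END nodes."""
    collapsed = [MERGE.get(cmd, cmd) for cmd, _ in groupby(raw_commands)]
    # collapse again in case two distinct raw events mapped to one label
    collapsed = [cmd for cmd, _ in groupby(collapsed)]
    return ["START"] + collapsed + ["END"]
```

Each element of the returned path would then become one node, with the corresponding event data attached to its incoming edge, as described above.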
+
+
+Figure 5: Illustration of how sequences get compressed and merged into a W-graph
+
+#### 4.2.2 Step 2. Collapsing Node Sequences
+
Next, the pipeline merges sequences of nodes with similar geometry along each path from START to END, by clustering the snapshots of 3D model geometry associated with the nodes along each path (Figure 5c). The metric we use for 3D model similarity is discussed at the end of this section. To identify sequences with similar geometry, we first apply the DBSCAN [13] algorithm to cluster the 3D model snapshots associated with each path. We then merge contiguous subsequences of nodes that were assigned to the same cluster, keeping the 3D model snapshot of the final state in the subsequence as the representation of that node. We selected DBSCAN because it does not require a pre-defined number of clusters, unlike alternative clustering algorithms such as K-Means. The hyperparameters of DBSCAN are tuned using the k-nearest-neighbor distance method, which is standard practice for this algorithm [4, 5, 35].
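The run-merging half of Step 2 can be sketched as below. We assume cluster labels have already been assigned per snapshot (e.g., by scikit-learn's `DBSCAN`, which uses -1 as its noise label); the snapshot identifiers are illustrative.

```python
def collapse_path(snapshots, labels):
    """Merge contiguous runs of path nodes whose snapshots fell into
    the same cluster, keeping the FINAL snapshot of each run as the
    node's representative state.  Noise points (label -1) never merge."""
    merged = []
    for snap, lab in zip(snapshots, labels):
        if merged and lab != -1 and merged[-1][1] == lab:
            merged[-1] = (snap, lab)  # extend the run: keep last state
        else:
            merged.append((snap, lab))
    return [snap for snap, _ in merged]
```

Keeping the last snapshot of each run reflects the pipeline's choice of representing a collapsed subsequence by its final state.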
+
+---
+
+${}^{2}$ https://knowledge.autodesk.com/community/screencast
+
+---
+
+#### 4.2.3 Step 3. Sequence Merging
+
Finally, the pipeline detects "equivalent-intermediate" nodes across the paths representing multiple demonstrations (Figure 5d). To do this, we compute the 3D model similarity metric for all pairs of nodes that are not associated with the same demonstration (i.e., we only consider pairs of nodes from different demonstrations). We then merge all nodes with a similarity value below a threshold $\varepsilon$ that we manually tuned. In our experience, varying $\varepsilon$ can yield graphs that capture more or less granularity in variations in the task, and it would be interesting to consider an interactive system in which users can select a granularity that is suited to their use of the W-graph.
+
+At this point, the W-graph construction is complete. As at the start of the pipeline, the directed edges from START to END collectively include all the events from the original demonstrations, but now certain edges contain multiple events (because the nodes between them have been collapsed), and some nodes are shared between multiple demonstrations.
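Step 3 amounts to merging all cross-demonstration node pairs whose snapshot distance falls below $\varepsilon$. One way to sketch this is with a union-find structure (our illustration; the paper does not specify the bookkeeping at this level of detail):

```python
def merge_equivalent_states(nodes, demo_of, dist, eps):
    """Union nodes from DIFFERENT demonstrations whose pairwise
    snapshot distance is below eps; returns node -> representative."""
    parent = {n: n for n in nodes}

    def find(n):  # find root, with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if demo_of[a] != demo_of[b] and dist(a, b) < eps:
                parent[find(a)] = find(b)
    return {n: find(n) for n in nodes}
```

Nodes sharing a representative would then be collapsed into one graph vertex, with their incoming and outgoing edges (and event traces) retargeted to it.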
+
### 4.3 Metrics for Detecting "Equivalent-Intermediate States"
+
+The most crucial part of the pipeline is determining the "similarity" between 3D model snapshots, as this is used to merge sequences of events in demonstrations, and to detect shared states across multiple demonstrations. We experimented with four different methods of computing similarity between 3D model snapshots, which we discuss below.
+
+#### 4.3.1 Comparing CSG trees
+
3D model snapshots are represented as CSG trees by Tinkercad, which consist of geometric primitives (e.g., cubes, cylinders, cones), combined together using Boolean operations (e.g., union, intersection, difference) in a hierarchical structure. A naive method of quantifying the difference between two snapshots would be to compare their respective trees directly, for example by trying to associate corresponding nodes, and then comparing the primitives or other characteristics of the tree. However, we quickly rejected this method because different procedures for modeling the same geometry can produce significantly different CSG trees. This makes the naive CSG comparison a poor method of judging similarity, given that we specifically want to identify states where a similar end-result was reached through distinct methods.
+
+#### 4.3.2 Comparing 2D Images of Rendered Geometry
+
+Inspired by prior work that has used visual summaries of code structure to understand the progress of students on programming problems [41], we next explored how visual renderings of the models could be used to facilitate comparison. We rendered the geometry of each 3D model snapshot from 20 different angles, and then compared the resulting images for pairs of models to quantify their difference. The appeal of this approach is that the method used to arrive at a model does not matter, so long as the resulting models look the same. However, we ultimately rejected this approach due to challenges with setting an appropriate threshold for judging two models as similar based on pixel differences between their renders.
+
+#### 4.3.3 Comparing 3D Meshes
+
Next, we experimented with using the Hausdorff distance [3], a commonly used mesh comparison metric, to compare the 3D meshes of pairs of 3D model snapshots. As with the comparison of rendered images, this method required extensive trial and error to set an appropriate threshold. However, the biggest drawback of this method was that the distances produced by the metric are in absolute terms, with the result that conceptually minor changes to a 3D model, such as adding a cube to the scene, can lead to huge changes in the distance metric. Ideally we would like to capture how "semantically" meaningful changes are, which is not always reflected in how much of the resulting mesh has been altered.
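For reference, the symmetric Hausdorff distance over point sets can be written in a few lines; the toy example below illustrates the absolute-scale drawback just described. This is a plain-Python sketch (production code would use an optimized implementation such as SciPy's `directed_hausdorff`):

```python
def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two 3D point sets
    (lists of (x, y, z) tuples)."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    def directed(A, B):
        # farthest that any point of A sits from its nearest point in B
        return max(min(d(p, q) for q in B) for p in A)

    return max(directed(P, Q), directed(Q, P))
```

Adding a single distant point, say one new primitive dropped far from the model, dominates the metric even though the edit is conceptually minor, which is exactly the failure mode noted above.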
+
+#### 4.3.4 Latent Space Embedding using Autoencoders
+
The final method we tried was to use an autoencoder to translate 3D point cloud data for each 3D model snapshot into a 512-dimensional vector. Autoencoders learn compact representations of input data by learning to encode a training set to a latent space of smaller dimension, from which it can be decoded back to the original data. We trained a latent model with a variation of PointNet [34] for encoding 3D point clouds to vectors, and PointSet Generation Network [14] for decoding vectors back to point clouds. The model was trained using the ShapeNet [43] dataset, which consists of 55 common object categories with about 51,300 unique 3D models. By using an additional clustering loss function [42], the resulting distributed representation captures the characteristics that matter for clustering tasks. One of the limitations of PointNet autoencoders is that current techniques cannot perform rotation-invariant comparisons of geometries. However, this fits nicely with our purpose, because rotating geometry does not affect semantic similarity for the 3D modeling tasks we are targeting.
+
Once trained, we can use the autoencoder to produce a 512-dimensional vector for each 3D model snapshot, and compare these using cosine distance to quantify the similarity between models. Overall, we found this to be the most effective method. Because it works on 3D point cloud data, it is not sensitive to how a model was produced, just its final geometry. Moreover, it required less tuning than comparing 2D images of rendered geometry or comparing 3D meshes, and in our experiments appeared to be more sensitive to semantically-meaningful changes to models.
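Once the embeddings are in hand, the comparison itself is straightforward, as this sketch shows. The 0.15 threshold is purely illustrative, standing in for the manually tuned $\varepsilon$.

```python
import math

def cosine_distance(u, v):
    """Cosine distance (1 - cosine similarity) between two embedding
    vectors, e.g. the 512-dimensional autoencoder outputs."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def equivalent(u, v, eps=0.15):  # eps: illustrative threshold only
    """Judge two snapshots as equivalent-intermediate states."""
    return cosine_distance(u, v) < eps
```

Note that cosine distance ignores vector magnitude, so it compares the direction of the embeddings rather than their scale.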
+
+### 4.4 Results
+
+As a preliminary evaluation of the pipeline, we examined the graphs constructed for the mug and standing desk tasks. The W-graph for the mug task is shown in Figure 6. From the graph, a few things can be observed. First, the high-level method followed by most users was to first construct the body of the mug (as seen in paths A-B-C, and A-C), and then build and add the handle. Examining the screen recordings, all three users on path A-B-C created the body by first adding a solid cylinder and then adding a cylindrical "hole" object ${}^{3}$ to hollow out the center of the solid cylinder (see Figure 7a). Two of the three users on path A-C followed a slightly different method, creating two solid cylinders first, and then converting one of them into a hole object (Figure 7b). It is encouraging that the pipeline was able to capture these two distinct methods.
+
+The remaining user on path A-C created a hole cylinder first, but ultimately deleted it and started again, following the same procedure as the users on path A-B-C. This highlights an interesting challenge in building W-graphs, which is how to handle backtracking or experimentation behavior (using commands such as Undo and Erase). We revisit this in the Discussion section at the end of the paper.
+
+The users on paths A-D-E-F and A-E-F followed a different approach from those discussed above. Both of these users started by creating a cylinder (as a hole in the case of A-D-E-F, and as a solid in the case of A-E-F), then built the handle, and finally cut out the center of the mug's body. The A-D-E-F user built the handle through the use of a solid box and a hole box (Figure 8a), but the A-E-F user used a creative method-creating a primitive in the shape of a letter ’B’, then cutting out part of it to create the handle (Figure 8b). Again, it is encouraging that the pipeline was able to separate these distinct methods.
+
+For the modeling of the handle, nodes F, G, and H capture the behavior of building the handle apart from the body of the mug, and then attaching it in states I and J. The E-F transition seems strange in Figure 6, but reviewing the screen recording, the user moved the handle away from the mug before cutting the hole in the body, perhaps to create some space to work.
+
+---
+
+${}^{3}$ Tinkercad shapes can be set as solid or as holes, which function like other shapes but cut out their volume when grouped with solid objects.
+
+---
+
+
+Figure 6: W-graph for the mug task. Edge labels indicate the number of demonstrations for each path. For nodes with multiple demonstrations, a rendering of the 3D model snapshot is shown for one of the demonstrations. A high-res version of this image is included in supplementary materials.
+
+
+Figure 7: Two distinct methods of creating the mug body: (a) Create a solid cylinder, create a cylindrical hole, and group them; (b) Create two solid cylinders, position them correctly, then convert one into a hole.
+
+
+Figure 8: Two methods of creating the handle: (a) Combine a solid box and a box-shaped hole; (b) Cut a letter 'B' shape into the handle using several box-shaped holes.
+
Overall, the pipeline appears to be effective in capturing the variety of methods used to create the body of the mug, and the edges of the graph captured a few distinct methods for creating the handle. An interesting observation is that the node identification algorithm did not capture any sub-steps involved in creating the handle. One possibility is that the methods used by different users were distinct enough that they did not have any equivalent-intermediate states until the handle was complete. Another possibility is that the autoencoder is not good at identifying similar states for models that are partially constructed (having been trained on ShapeNet, which consists of complete models). That said, this is not necessarily a problem, as the edges do capture multiple methods of constructing the handle.
+
+The W-graph for the standing desk task is shown in Figure 9. The graph is more complex than that for the mug task, reflecting the added complexity of creating the standing desk, but we do observe similarities in how the graph captures the task. In particular, we can see paths that reflect the different orders in which users created the three main parts of the desk (the top, the legs, and the privacy screen).
+
We also notice some early nodes with box shapes, which later diverge to become a desk top in some demonstrations and legs in others. These nodes, which represent a common geometric history for different final shapes, are interesting: they represent situations where the algorithm may correctly merge similar geometry, but doing so works counter to the goal of identifying workflows for completing sub-goals of the task, effectively breaking them up into several edges. A possible way to address this would be to modify the pipeline so that it takes into account the eventual final placement of each primitive, at the end of the task or several edges forward, in determining which nodes to merge.
+
+## 5 POTENTIAL APPLICATIONS OF W-GRAPHS
+
+This section presents three novel applications that are made possible by W-graphs: 1) W-Suggest, a workflow suggestion interface, 2) W-Guide, an on-demand 3D modeling help interface, and 3) W-Instruct, an instructor dashboard for analyzing workflows.
+
+
+
+Figure 9: W-graph for the standing desk task. Edge labels indicate the number of demonstrations for each path. For nodes with multiple demonstrations, a rendering of the 3D model snapshot is shown for one of the demonstrations. A high-res version of this image is included in supplementary materials.
+
+
+Figure 10: W-Suggest - A workflow suggestion interface mockup
+
+### 5.1 W-Suggest: Workflow Suggestion Interface
+
+By representing the structure of how to perform a task, W-graphs can serve as a back-end for applications that suggest alternate workflows to users.
+
To use the W-Suggest system (Figure 10), the user first records themselves performing a 3D modeling task, similar to the procedure performed by participants in the previous section. However, instead of integrating this new workflow recording into the W-graph, the system compares the workflow to the existing graph and suggests alternate workflows for portions of the task.
+
W-Suggest uses the following algorithm to make its suggestions. First, it performs Steps 1 and 2 of the W-graph construction pipeline on the user's recording of the task (i.e., preprocessing the events, and collapsing node sequences with similar geometry). Next, the 512-dimensional embedding vector for each remaining 3D model snapshot is computed using the same autoencoder used for the W-graph construction pipeline. The vectors for each of these nodes are then compared to those of the W-graph nodes along the shortest path from START to END (as measured by total command invocations) to detect matches, using the same threshold $\varepsilon$ used for graph construction. Finally, for each pair of matched nodes (one from the user, one from the shortest path in the W-graph), the edge originating at the user's node and the edge originating at the W-graph node are compared based on command invocations. Based on all of these comparisons, the algorithm selects the pair for which there is the biggest difference in command invocations between the user's demonstration and the demonstration from the W-graph. In effect, the idea is to identify segments of the user's task for which the W-graph includes a method that uses far fewer command invocations, which can then be suggested to the user.
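The final selection step of this algorithm can be sketched as follows. This is our paraphrase of the description above; the data shapes (dicts of per-node command counts and a list of matched node pairs) are assumptions.

```python
def pick_suggestion(user_cmds, graph_cmds, matches):
    """Given command counts for the edge leaving each matched node on
    the user's path (user_cmds) and on the W-graph's shortest path
    (graph_cmds), return the matched node pair with the largest
    command-count saving, i.e. the task segment where the W-graph
    offers a much shorter workflow, plus the saving itself."""
    best, best_saving = None, 0
    for u_node, g_node in matches:
        saving = user_cmds[u_node] - graph_cmds[g_node]
        if saving > best_saving:
            best, best_saving = (u_node, g_node), saving
    return best, best_saving
```

The edge traces attached to the selected W-graph node can then supply the screen-recording snippet shown as the suggested workflow.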
+
+### 5.2 W-Guide: On-Demand Task Guidance Interface
+
+W-graphs could also serve as a back-end for a W-Guide interface that presents contextually appropriate video content to users on-demand as they work in an application, extending approaches taken by systems such as Ambient Help [32] and Pause-and-Play [33] with peer demonstrations.
+
+
+Figure 11: W-Guide - An on-demand task guidance interface mockup.
+
+While working on a 3D modeling task in Tinkercad, the user could invoke W-Guide to see possible next steps displayed in a panel to the right of the editor (Figure 11). These videos are populated based on the strategies captured from other users and stored in the W-graph. Specifically, the panel recommends video demonstrations from other users matched to the current user's state, and proceeds to the next "equivalent-intermediate" state (i.e., one edge forward in the graph). Using a similar approach to W-Suggest, these can be provided with meaningful labels (e.g., "Shortest workflow", "Most popular workflow", etc.).
+
W-Guide could use the same algorithm as W-Suggest to construct a W-graph and populate its suggestions. The only difference is that it would attempt to match the user's current incomplete workflow to the graph. This is achievable because the $\varepsilon$ threshold for collapsing node sequences is flexible, allowing W-Guide to construct a W-graph from any point in the current user's workflow and populate demonstrations for next steps.
+
An exciting possibility opened up by W-Guide is that the system could dynamically elicit additional demonstrations from users in a targeted way (e.g., by popping up a message asking them to provide different demonstrations than those pre-populated in the panel). This could allow the system to take an active role in fleshing out a W-graph with diverse samples of methods.
+
+### 5.3 W-Instruct: Instructor Tool
+
Finally, we envision the W-Instruct system, in which W-graphs become a flexible and scalable tool for instructors to provide feedback to students, assess their work, and generate tutorials or other instructional materials on performing 3D modeling tasks.
+
W-Instruct (Figure 12) supports instructors in understanding the different methods used by their students to complete a task: by examining the graph, an instructor can see the approaches taken by students, rather than simply the final artifacts they produce. The grouping of multiple students' workflows could also be used as a means to provide feedback to a large number of learners at scale (e.g., in a MOOC setting). The instructor could also quickly identify shortcuts, crucial parts of the workflow to emphasize, or common mistakes by browsing the W-graph. As shown in Figure 12, edges can be highlighted to show the most common solutions, and the video demonstration corresponding to an edge can be viewed by hovering over a node in the graph.
+
+
+
+Figure 12: W-Instruct - An instructor tool mockup.
+
Along similar lines to W-Instruct, we see potential for W-graphs to support the generation of tutorials and other learning content, building on past work exploring approaches for generating tutorials by demonstration [10, 19]. For example, synthetic demonstrations of workflows could potentially be produced that combine the best segments of multiple demonstrations in the W-graph, creating a demonstration that is more personalized to the current user than any individual demonstration.
+
+## 6 User Feedback on W-Suggest
+
While the main focus of this work is on the computational approach for constructing W-graphs, we implemented the W-Suggest application as a preliminary demonstration of the feasibility of building applications on top of a constructed W-graph (Figure 13). The W-Suggest interface consists of a simplified representation of the user's workflow, with edges highlighted to indicate a part of the task for which the system is suggesting an improved workflow. Below this are two embedded video players, one showing the screen recording of the user's workflow for that part of the task, and the other showing a suggested workflow drawn from other users in the graph. Below the videos are some metrics on the two workflows, including duration, the distribution of commands used, and the specific sequences of commands used.
+
+To gain feedback on the prototype, we recruited 4 volunteers to perform one of the two tasks from the previous section (two for the mug task, two for the standing desk task) and presented them with the corresponding W-Suggest interface. We asked them to watch the two videos (one showing their own workflow, the other showing the suggested workflow) and then asked a few short questions about the interface. Specifically, we asked whether they felt it was useful to view the alternate demonstration, and why or why not. We also asked for their thoughts on the general utility of this type of workflow suggestion system, and what aspects of workflows they would like suggestions on for software they frequently use.
+
+Due to the small number of participants for these feedback sessions, they are best considered as providing preliminary feedback, and certainly not a rigorous evaluation. That being said, the feedback from participants was quite positive, with all participants agreeing it would be valuable to see alternative workflows. In particular, participants mentioned that it would be valuable to see common workflows, the fastest workflow, and workflows used by experts.
+
+Two participants mentioned that they learned something new about how to use Tinkercad from watching the alternate video, as in the following quote by P2 after seeing a use of the Ruler tool to align objects: *"Oh, you can adjust the things there [with the Ruler], that's useful. Oh, there's like an alignment thing, that seems really easy."*
+
+
+Figure 13: W-Suggest - The implemented interface.
+
+Likewise, P4 observed a use of the Workplane tool that he found valuable: *"It's assigning relative positions with it [the Workplane and Ruler]; I wanted to do something like that."*
+
+All participants agreed that efficiency is an important criterion when recommending alternative workflows. However, P1 and P2 noted that the best method to use in feature-rich software, or in other domains such as programming, can often depend on contextual factors. In particular, P1 noted that they might prepare a 3D model differently if it is intended to be 3D printed. This suggests that additional metadata on the users or the intended purpose for creating a model could be useful for making workflow recommendations.
+
+## 7 DISCUSSION, LIMITATIONS, AND FUTURE WORK
+
+Overall, the W-graphs produced for the mug and standing desk tasks are encouraging, and suggest that our pipeline is effective at capturing different high-level methods for modeling 3D objects. Testing the pipeline on these sample tasks also revealed a number of potential directions for improving the approach, including modeling backtracking behavior in demonstrations, and accounting for subtasks with common intermediate states. Finally, our user feedback sessions for W-Suggest showed enthusiasm for applications built on W-graphs, and revealed insights into criteria for what makes a good demonstration, including the importance of contextual factors.
+
+In this section we revisit the potential of modeling backtracking and experimentation, discuss the question of how many demonstrations are needed to build a useful W-graph, and suggest further refinements of the graph construction method. We then discuss how our approach could be generalized to build models of similar tasks.
+
+### 7.1 Backtracking and Experimentation
+
+In our current approach, Undo and Erase are treated the same as other commands. In some situations this may be appropriate, but at other times these commands may be used to backtrack, to recover from mistakes, or to try other workflows, and past work has shown their occurrence may indicate usability challenges [1]. It would be interesting to investigate whether these practices for using Undo and Erase could be detected and represented in a W-graph. This could take the form of edges that go back to previous states, creating directed cycles or self-loops in the graph. Applications built on top of a W-graph could also use the number of Undos as a metric for ranking paths through the graph (e.g., to identify instances of exploratory behavior), or as a filtering metric to cull the graph of such backtracking behavior.
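To make the ranking and filtering idea concrete, here is a minimal sketch of scoring paths by their backtracking commands. The path and edge structures (lists of dicts with a `commands` field) are hypothetical simplifications; a real W-graph edge would carry the full command log from the demonstration.

```python
# Sketch: ranking and culling W-graph paths by backtracking behavior.
# The edge structure here is a hypothetical simplification.

BACKTRACK_COMMANDS = {"undo", "erase"}

def backtrack_count(path):
    """Count Undo/Erase occurrences across all edges of a path."""
    return sum(
        cmd.lower() in BACKTRACK_COMMANDS
        for edge in path
        for cmd in edge["commands"]
    )

def rank_paths(paths):
    """Order candidate paths from least to most backtracking."""
    return sorted(paths, key=backtrack_count)

def cull_backtracking(paths, max_backtracks=0):
    """Filter out paths whose backtracking exceeds a threshold."""
    return [p for p in paths if backtrack_count(p) <= max_backtracks]

# Two demonstrations of the same sub-task:
tidy = [{"commands": ["cube", "resized", "moved"]}]
messy = [{"commands": ["cube", "moved", "undo", "cube", "erase"]}]

assert backtrack_count(messy) == 2
assert rank_paths([messy, tidy]) == [tidy, messy]
assert cull_backtracking([messy, tidy]) == [tidy]
```

An application could use the same score either way: ranking surfaces the least exploratory demonstration for recommendation, while culling removes backtracking-heavy traces before building downstream views.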
+
+### 7.2 Branching Factors and Graph Saturation
+
+A nice feature of W-graphs is that they can be built with only a few demonstrations. As the number of demonstrations grows, the graph can more fully capture the space of potential workflows for the task. However, it is likely that the graph will eventually reach a point at which it is saturated, beyond which additional workflows will contribute a diminishing number of additional methods. The number of demonstrations needed to reach saturation will likely vary from task to task, with more complex tasks requiring more demonstrations than simpler ones. Examining how the sum of the branching factors of all nodes in the graph changes with each added demonstration may give an indication of when the graph has reached saturation, as the number of branches is likely to stop growing once new methods are no longer being added.
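As a rough illustration of this saturation heuristic, the sketch below tracks the branching-factor sum as demonstrations are added. It assumes states have already been matched to shared node ids (the clustering step of the pipeline) and merges identical transitions; both are simplifications of the actual graph construction.

```python
# Sketch: watching graph saturation as demonstrations are added.
# Each demonstration is its sequence of pre-matched state ids;
# identical transitions are merged, so the branching-factor sum only
# grows when a demonstration introduces a genuinely new method.

def add_demo(graph, states):
    """Add one demonstration's transitions to an adjacency dict."""
    for a, b in zip(states, states[1:]):
        graph.setdefault(a, set()).add(b)
    return graph

def total_branching(graph):
    """Sum of branching factors (out-degrees) over all nodes."""
    return sum(len(succs) for succs in graph.values())

demos = [
    ["start", "body", "handle", "done"],  # method A
    ["start", "body", "handle", "done"],  # method A again: no new edges
    ["start", "handle", "body", "done"],  # a different sub-task ordering
]
graph = {}
curve = [total_branching(add_demo(graph, d)) for d in demos]
print(curve)  # [3, 3, 6]: the flat segment hints at saturation
```

A sustained plateau in this curve as further demonstrations arrive would suggest the graph has captured most of the methods in use for the task.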
+
+### 7.3 Scalability
+
+In one sense, the W-graph approach is scalable by design, as it relies on computational comparisons of 3D models rather than human interventions such as expert labeling or crowdsourcing. However, more work is needed to understand how the structure of W-graphs produced by our pipeline changes as the number of demonstrations in a graph grows. In particular, there is the question of how the parameters for identifying similar intermediate states may need to change in response to a growing number of workflows, in order to produce graphs at the right granularity for a given application, along with other issues that may come up when processing many demonstrations. On the application end, metrics could be developed to identify less-used but valuable traces contained in a graph with many demonstrations.
+
+### 7.4 Robustness Against Different Workflow Orders
+
+A potential limitation of our current approach is that it preserves the global order of sub-tasks, including those that could be performed in an arbitrary order (e.g., a user could start by modeling the legs or the top of a table). This could prevent it from grouping some variations of sub-tasks together if a given sub-task is performed first by some users and later by others. Preserving the global order of sub-tasks has some advantages: it reveals how users commonly sequence the sub-tasks that make up the overall task, and it can also reveal cases where sub-tasks benefit from being ordered in a certain way, as may occur when objects built in a preceding sub-task are used to help position or build objects in a subsequent sub-task. However, it would be interesting to look at approaches that post-process a W-graph to identify edges across the graph where the same sub-task is being performed (e.g., by looking for edges where similar changes to geometry are made, ignoring geometry that is not changing), to address this limitation and gain insights into sub-task order in the graph.
+
+### 7.5 Extension to Similar Tasks and Sub-Tasks
+
+Another interesting direction for future work is to consider how the W-graph approach could be extended to scenarios where the demonstrations used to produce the graph are not of the exact same task, but instead represent workflows for a class of similar tasks (e.g., modeling chairs). We believe the autoencoder approach we have adopted could be valuable for this, as it is less sensitive to variations in the model, and potentially able to capture semantic similarities between models of different objects within a class, but more research is required. Sub-goal labels provided by users or learners could be valuable here, building on approaches that have been used for how-to videos [25] and math problems [40]. Given a user's explanation of their process or different stages in the task, the graph construction algorithm would have access to natural language descriptions in addition to interaction traces and content snapshots, which could be used to group workflows across distinct but related tasks.
+
+Beyond refining our algorithms to work with similar tasks, it would be interesting to investigate how a large corpus of demonstrations could be mined to identify semantically-similar sub-tasks (which could be then turned into W-graphs). Multi-W-graphs could conceivably be developed that link together the nodes and edges of individual W-graphs, to represent similarities and relationships between the workflows used for different tasks. For example, nodes representing the legs of a desk, chair, or television stand could be linked across their respective graphs, and edges that represent workflows for creating certain effects (e.g., a particular curvature or geometry) could be linked as well. In the limit, one could imagine a set of linked graphs that collectively encode all the tasks commonly performed in a domain, and feed many downstream applications for workflow recommendation and improvement.
+
+### 7.6 Generalizing to Other Software and Domains
+
+Though we demonstrated our approach for 3D modeling software, the W-graph construction approach would be straightforward to extend to other software applications and domains. For many domains, such as 2D graphics or textual media, the technique could be generalized by simply substituting in an appropriate feature extraction mechanism for that domain. More challenging would be extending the approach to apply across a variety of software applications, perhaps by different software developers, where instrumentation to gather commands and content is not easy. To approach this, we could imagine using the screen recording data for the content, and accessibility APIs to gather the actions performed by users (an approach used in recent work [17]). Beyond fully-automated approaches, learnersourcing approaches [25] could be used to elicit sub-goals that have particular pedagogical value, and these peer-generated sub-goals could be turned into feedback for other learners in the system, using similar methods to those explored in other applications [24].
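To illustrate how little of the pipeline needs to change across domains, the sketch below swaps in a toy feature extractor for textual content while reusing the same kind of similarity test. The extractor and the distance threshold are hypothetical stand-ins, not parts of the system described here.

```python
# Sketch: generalizing the pipeline by swapping the feature extractor.
# `extract_text_features` is a hypothetical stand-in for a real encoder
# (the 3D pipeline used an autoencoder over model snapshots); the
# distance threshold is likewise illustrative.

def extract_text_features(snapshot: str):
    """Toy bag-of-characters embedding for a textual-media domain."""
    return [snapshot.count(ch) for ch in "abcdefghijklmnopqrstuvwxyz"]

def states_equivalent(a, b, threshold=2):
    """Same kind of similarity test used for 3D snapshots, applied to
    whatever features the domain's extractor produces (L1 distance)."""
    return sum(abs(x - y) for x, y in zip(a, b)) <= threshold

draft1 = extract_text_features("hello world")
draft2 = extract_text_features("hello worlds")
assert states_equivalent(draft1, draft2)  # near-identical drafts merge
assert not states_equivalent(
    extract_text_features("hello world"),
    extract_text_features("completely different text"),
)
```

Only the extractor is domain-specific; the state matching and graph construction on top of it are unchanged.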
+
+## 8 CONCLUSION
+
+This work has contributed a conceptual approach for representing the different means by which a fixed goal can be achieved in feature-rich software, based on recordings of user demonstrations, and has demonstrated a scalable pipeline for constructing such a representation for 3D modeling software. It has also presented a range of applications that could leverage this representation to support users in improving their skill sets over time. Overall, we see this work as a first step toward enabling a new generation of help and learning systems for feature-rich software, powered by data-driven models of tasks and workflows.
+
+## 9 ACKNOWLEDGEMENTS
+
+Thanks to Autodesk Research for all their support, and in particular to Aditya Sanghi and Kaveh Hassani, who provided invaluable advice and guidance on techniques for comparing 3D models. Thanks also to our study participants for their valuable feedback.
+
+## References
+
+[1] D. Akers, R. Jeffries, M. Simpson, and T. Winograd. Backtracking Events As Indicators of Usability Problems in Creation-Oriented Applications. ACM Trans. Comput.-Hum. Interact., 19(2):16:1-16:40, July 2012. doi: 10.1145/2240156.2240164
+
+[2] P. André, A. Kittur, and S. P. Dow. Crowd synthesis: Extracting categories and clusters from complex data. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing, pp. 989-998. ACM, 2014.
+
+[3] N. Aspert, D. Santa-Cruz, and T. Ebrahimi. Mesh: Measuring errors between surfaces using the hausdorff distance. In Proceedings. IEEE International Conference on Multimedia and Expo, vol. 1, pp. 705-708. IEEE, 2002.
+
+[4] D. Birant and A. Kut. St-dbscan: An algorithm for clustering spatial-temporal data. Data & Knowledge Engineering, 60(1):208-221, 2007.
+
+[5] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander. LOF: Identifying density-based local outliers. In ACM SIGMOD Record, vol. 29, pp. 93-104. ACM, 2000.
+
+[6] J. M. Carroll. The Nurnberg funnel: designing minimalist instruction for practical computer skill. MIT Press, 1990.
+
+[7] J. M. Carroll and M. B. Rosson. Paradox of the active user. In Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, pp. 80-111. MIT Press, 1987.
+
+[8] R. Catrambone. The subgoal learning model: Creating better examples so that students can solve novel problems. Journal of Experimental Psychology: General, 127(4):355, 1998.
+
+[9] M. Chang, L. V. Guillain, H. Jung, V. M. Hare, J. Kim, and M. Agrawala. Recipescape: An interactive tool for analyzing cooking instructions at scale. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 451. ACM, 2018.
+
+[10] P.-Y. Chi, S. Ahn, A. Ren, M. Dontcheva, W. Li, and B. Hartmann. MixT: Automatic generation of step-by-step mixed media tutorials. In Proceedings of the 25th annual ACM symposium on User interface software and technology, UIST '12, pp. 93-102. ACM, 2012.
+
+[11] L. B. Chilton, G. Little, D. Edge, D. S. Weld, and J. A. Landay. Cascade: Crowdsourcing taxonomy creation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1999-2008. ACM, 2013.
+
+[12] J. D. Denning, W. B. Kerr, and F. Pellacini. MeshFlow: Interactive visualization of mesh construction sequences. ACM Trans. Graph., 30(4):66:1-66:8, July 2011. doi: 10.1145/2010324.1964961
+
+[13] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, vol. 96, pp. 226-231, 1996.
+
+[14] H. Fan, H. Su, and L. J. Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 605-613, 2017.
+
+[15] A. Fourney, B. Lafreniere, P. K. Chilana, and M. Terry. InterTwine: Creating interapplication information scent to support coordinated use of software. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, 10 pages. ACM, 2014.
+
+[16] A. Fourney, R. Mann, and M. Terry. Query-feature graphs: Bridging user vocabulary and system functionality. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pp. 207-216. ACM, New York, NY, USA, 2011. doi: 10. 1145/2047196.2047224
+
+[17] C. A. Fraser, T. J. Ngoon, M. Dontcheva, and S. Klemmer. Replay: Contextually presenting learning videos across software applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2019.
+
+[18] E. L. Glassman, A. Lin, C. J. Cai, and R. C. Miller. Learnersourcing Personalized Hints. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, pp. 1626-1636. ACM, New York, NY, USA, 2016. doi: 10.1145/ 2818048.2820011
+
+[19] F. Grabler, M. Agrawala, W. Li, M. Dontcheva, and T. Igarashi. Generating photo manipulation tutorials by demonstration. ACM Trans. Graph., 28(3):66:1-66:9, July 2009. doi: 10.1145/1531326.1531372
+
+[20] T. Grossman, P. Dragicevic, and R. Balakrishnan. Strategies for Accelerating On-line Learning of Hotkeys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 1591-1600. ACM, New York, NY, USA, 2007. doi: 10.1145/1240624.1240865
+
+[21] T. Grossman, G. Fitzmaurice, and R. Attar. A survey of software learnability: Metrics, methodologies and guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 649-658. ACM, New York, NY, USA, 2009. doi: 10. 1145/1518701.1518803
+
+[22] T. Grossman, J. Matejka, and G. Fitzmaurice. Chronicle: Capture, exploration, and playback of document workflow histories. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, UIST '10, pp. 143-152. ACM, New York, NY, USA, 2010. doi: 10.1145/1866029.1866054
+
+[23] P. J. Guo. Codeopticon: Real-Time, One-To-Many Human Tutoring for Computer Programming. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST '15, pp. 599-608. ACM, New York, NY, USA, 2015. doi: 10.1145/2807442. 2807469
+
+[24] H. Jin, M. Chang, and J. Kim. Solvedeep: A system for supporting subgoal learning in online math problem solving. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 2019.
+
+[25] J. Kim. Learnersourcing: Improving Learning with Collective Learner Activity. PhD thesis, Massachusetts Institute of Technology, 2015.
+
+[26] J. Kim, P. T. Nguyen, S. Weir, P. J. Guo, R. C. Miller, and K. Z. Gajos. Crowdsourcing step-by-step information extraction to enhance existing how-to videos. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 4017-4026. ACM, 2014.
+
+[27] N. Kong, T. Grossman, B. Hartmann, M. Agrawala, and G. Fitzmaurice. Delta: a tool for representing and comparing workflows. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 1027-1036. ACM, New York, NY, USA, 2012. doi: 10. 1145/2208516.2208549
+
+[28] B. Lafreniere, T. Grossman, and G. Fitzmaurice. Community enhanced tutorials: improving tutorials with multiple demonstrations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1779-1788. ACM, 2013.
+
+[29] W. Li, T. Grossman, and G. Fitzmaurice. CADament: A Gamified Multiplayer Software Tutorial System. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, pp. 3369-3378. ACM, New York, NY, USA, 2014. event-place: Toronto, Ontario, Canada. doi: 10.1145/2556288.2556954
+
+[30] W. Li, Y. Zhang, and G. Fitzmaurice. Tutorialplan: automated tutorial generation from cad drawings. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
+
+[31] S. Malacria, J. Scarr, A. Cockburn, C. Gutwin, and T. Grossman. Skillometers: Reflective Widgets That Motivate and Help Users to Improve Performance. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, pp. 321-330. ACM, New York, NY, USA, 2013. doi: 10.1145/2501988.2501996
+
+[32] J. Matejka, T. Grossman, and G. Fitzmaurice. Ambient help. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 2751-2760. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1979349
+
+[33] S. Pongnumkul, M. Dontcheva, W. Li, J. Wang, L. Bourdev, S. Avidan, and M. F. Cohen. Pause-and-play: Automatically linking screencast video tutorials with applications. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pp. 135-144. ACM, New York, NY, USA, 2011. doi: 10.1145/ 2047196.2047213
+
+[34] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. arXiv preprint arXiv:1612.00593, 2016.
+
+[35] J. Sander, M. Ester, H.-P. Kriegel, and X. Xu. Density-based clustering in spatial databases: The algorithm gdbscan and its applications. Data mining and knowledge discovery, 2(2):169-194, 1998.
+
+[36] J. Scarr, A. Cockburn, C. Gutwin, and P. Quinn. Dips and Ceilings: Understanding and Supporting Transitions to Expertise in User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 2741-2750. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1979348
+
+[37] Y. Sun, A. Singla, D. Fox, and A. Krause. Building hierarchies of concepts via crowdsourcing. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
+
+[38] X. Wang, B. J. Lafreniere, and T. Grossman. Leveraging community-generated videos and command logs to classify and recommend software workflows. In CHI, 2018.
+
+[39] J. Whitehill and M. Seltzer. A crowdsourcing approach to collecting tutorial videos: Toward personalized learning-at-scale. In Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale, pp. 157-160. ACM, 2017.
+
+[40] J. J. Williams, J. Kim, A. Rafferty, S. Maldonado, K. Z. Gajos, W. S. Lasecki, and N. Heffernan. AXIS: Generating Explanations at Scale with Learnersourcing and Machine Learning. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale, L@S '16, pp. 379-388. ACM, New York, NY, USA, 2016. doi: 10.1145/2876034. 2876042
+
+[41] L. Yan, N. McKeown, and C. Piech. The PyramidSnapshot challenge: Understanding student process from visual output of programs. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE '19. ACM, New York, NY, USA, 2019. doi: 10.1145/3287324.3287386
+
+[42] B. Yang, X. Fu, N. D. Sidiropoulos, and M. Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 3861-3870. JMLR.org, 2017.
+
+[43] L. Yi, V. G. Kim, D. Ceylan, I.-C. Shen, M. Yan, H. Su, C. Lu, Q. Huang, A. Sheffer, and L. Guibas. A scalable active framework for region annotation in 3d shape collections. SIGGRAPH Asia, 2016.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..754dfcad9dd67db2039b46c4bb73725f606424ac
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/qXEzq5agzIN/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,357 @@
+§ WORKFLOW GRAPHS: A COMPUTATIONAL MODEL OF COLLECTIVE TASK STRATEGIES FOR 3D DESIGN SOFTWARE
+
+Minsuk Chang*
+
+School of Computing
+
+KAIST
+
+Ben Lafreniere†
+
+Autodesk Research
+
+Juho Kim‡
+
+School of Computing
+
+KAIST
+
+George Fitzmaurice§
+
+Autodesk Research
+
+Tovi Grossman¶
+
+University of Toronto
+
+§ ABSTRACT
+
+This paper introduces Workflow graphs, or W-graphs, which encode how the approaches taken by multiple users performing a fixed 3D design task converge and diverge from one another. The graph's nodes represent equivalent intermediate task states across users, and directed edges represent how a user moved between these states, inferred from screen recording videos, command log data, and task content history. The result is a data structure that captures alternative methods for performing sub-tasks (e.g., modeling the legs of a chair) and alternative strategies of the overall task. As a case study, we describe and exemplify a computational pipeline for building W-graphs using screen recordings, command logs, and 3D model snapshots from an instrumented version of the Tinkercad 3D modeling application, and present graphs built for two sample tasks. We also illustrate how W-graphs can facilitate novel user interfaces with scenarios in workflow feedback, on-demand task guidance, and instructor dashboards.
+
+Index Terms: Human-centered computing; Interactive systems and tools
+
+§ 1 INTRODUCTION
+
+There are common situations in which many users of complex software perform the same task, such as designing a chair or table, bringing their unique set of skills and knowledge to bear on a set goal. For example, this occurs when multiple people perform the same tutorial, complete an assignment for a course, or work on sub-tasks that frequently occur in the context of a larger task, such as 3D modeling joints when designing furniture. It is also common for users to discuss and compare different methods of completing a single task in online communities for 3D modeling software (for an example of such a discussion, see Figure 2). This raises an interesting possibility: what if the range of different methods for performing a task could be captured and represented as rich workflow recordings, as a way to help experienced users discover alternative methods and expand their workflow knowledge, or to assist novice users in learning advanced practices?
+
+In this research, we investigate how multiple demonstrations of a fixed task can be captured and represented in a workflow graph (W-graph) (Figure 1). The idea is to automatically discover the different means of accomplishing a goal from the interaction traces of multiple users, and to encode these in a graph representation. The graph thus represents diverse understanding of the task, opening up a range of possible applications. For example, the graph could be used to provide targeted suggestions of segments of the task for which alternative methods exist, or to synthesize the most efficient means of completing the task from the many demonstrations encoded in the graph. It could also be used to synthesize and populate tutorials tailored to particular users, for example by only showing methods that use tools known to that user.
+
+
+Figure 1: W-graphs encode multiple demonstrations of a fixed task, based on commonalities in the workflows employed by users. Nodes represent semantically similar states across demonstrations. Edges represent alternative workflows for sub-tasks. The width of edges represents the number of distinct workflows between two states.
+
+To investigate this approach, we instrumented Tinkercad¹, a 3D solid modeling application popular in the maker community, to gather screen recordings, command sequences, and changes to the CSG (constructive solid geometry) tree of the specific 3D model being built. The interaction traces for multiple users performing the same task are processed by an algorithm we developed, which combines them into a W-graph representing the collective actions of all users. Unlike past approaches to workflow modeling in this domain, which have focused on command sequence data (e.g., [38]), our approach additionally leverages the 3D model content being created by the user. This allows us to track the progress of the task in direct relation to changes in the content (i.e., the 3D model) to detect common stages of the task progression across multiple demonstrations. We use an autoencoder [34] to represent the 3D geometry information of each 3D model snapshot, which we found to be a robust and scalable method for detecting workflow-relevant changes in the geometry, as compared to metrics such as comparing CSG trees, 2D renders, and 3D meshes.
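The state-matching step this enables can be sketched as follows, assuming each snapshot has already been encoded to a fixed-length embedding by the autoencoder. The greedy grouping and the distance threshold are illustrative simplifications; the actual pipeline's clustering method and parameters are not reproduced here.

```python
from math import dist

# Sketch of the state-matching step, assuming each 3D model snapshot
# has already been encoded to a fixed-length vector by the autoencoder.
# The greedy grouping and the 0.5 threshold are illustrative only.

def match_states(embeddings, threshold=0.5):
    """Group snapshots whose embeddings lie within `threshold` of a
    group's first member; each group becomes one W-graph node."""
    groups = []   # list of (representative embedding, member indices)
    labels = []
    for i, e in enumerate(embeddings):
        for gid, (rep, members) in enumerate(groups):
            if dist(e, rep) < threshold:
                members.append(i)
                labels.append(gid)
                break
        else:
            groups.append((e, [i]))
            labels.append(len(groups) - 1)
    return labels

snapshots = [(0.0, 0.0), (0.1, 0.0), (3.0, 3.0)]
print(match_states(snapshots))  # [0, 0, 1]: first two share a state
```

Snapshots assigned the same label would be merged into a single node, so demonstrations passing through geometrically similar intermediate states converge in the graph.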
+
+The result is a graph in which each directed edge from the starting node to a terminal node represents a potential workflow for completing the task, and multiple edges between any two states represent alternative approaches for performing that segment of the task. The collected command log data and screen recordings associated with the edges of the graph can be processed to define metrics on paths (such as average workflow duration or number of unique commands used), and displayed as demonstration content in interfaces.
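As a minimal sketch of such path metrics, the snippet below computes duration and command statistics over a path. The `duration_s` and `commands` field names are hypothetical stand-ins for metadata derived from the command logs and screen recordings.

```python
# Sketch of per-path metrics computed from edge metadata. Field names
# are illustrative, not the system's actual data format.

def path_metrics(path):
    """Summarize one start-to-terminal path through a W-graph."""
    durations = [edge["duration_s"] for edge in path]
    commands = [cmd for edge in path for cmd in edge["commands"]]
    return {
        "total_duration_s": sum(durations),
        "mean_edge_duration_s": sum(durations) / len(durations),
        "unique_commands": sorted(set(commands)),
    }

path = [
    {"duration_s": 12.0, "commands": ["cube", "resized"]},
    {"duration_s": 30.0, "commands": ["moved", "cube"]},
]
m = path_metrics(path)
assert m["total_duration_s"] == 42.0
assert m["unique_commands"] == ["cube", "moved", "resized"]
```

Interfaces like W-Suggest could surface these per-path summaries alongside the corresponding video segments.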
+
+The main contributions of this paper are:
+
+*e-mail: minsuk@kaist.ac.kr
+
+† e-mail: ben.lafreniere@gmail.com
+
+‡ e-mail: juhokim@kaist.ac.kr
+
+§ e-mail: george.fitzmaurice@autodesk.com
+
+¶ e-mail: tovi@dgp.toronto.edu
+
+Graphics Interface Conference 2020
+
+28-29 May
+
+ACM to publish electronically.
+
+¹ https://tinkercad.com
+
+
+Figure 2: Fifteen distinct suggestions on how to perform a 3D modeling task - from the largest Fusion 360 user community on Facebook
+
+ * The concept of W-graphs, which represent the semantic structure of a task, based on demonstrations from multiple users
+
+ * A computational pipeline for constructing W-graphs and a demonstration of the approach for sample tasks in Tinkercad
+
+ * The description of possible applications enabled by W-graphs
+
+We begin with a review of prior work, then describe the W-graph construction approach at a conceptual level. Next, we present workflow graphs constructed for two sample tasks performed by Tinkercad users, and discuss three applications enabled by W-graphs: workflow feedback, on-demand task guidance, and instructor support. Finally, we present preliminary user feedback on a prototype of one of these applications, W-Suggest, and conclude with a discussion of directions for future work.
+
+§ 2 RELATED WORK
+
+This work expands prior research on software learning and workflow capture, mining organically-created instructional content, and supporting learning at scale.
+
+§ 2.1 SOFTWARE LEARNING AND WORKFLOW CAPTURE
+
+Early HCI research recognized the challenges of learning software applications [7], and identified the benefits of minimalist and task-centric help-resources [6]. More recently, Grossman et al. [21] identified five common classes of challenges that users face when learning feature-rich software applications: understanding how to perform a task, awareness of tools and features, locating tools and features, understanding how to use specific tools, and transitioning to efficient behaviors.
+
+Of the challenges listed above, the majority of existing work on assisting users to acquire alternative workflows has looked at how to promote the use of keyboard shortcuts and other expert interaction techniques [20, 30, 31, 36], with less attention on the adoption of more efficient workflows. Closer to the current work is CADament [29], a real-time multi-player game in which users compete to perform a 2D CAD task faster than one another. In the time between rounds of the game, the user is shown video of peers who are at a higher level of performance than they are, a feature which was found to prompt users to adopt more efficient methods. While CADament shares some similarity with the current work, its improvements were at the level of refining the use of individual commands, rather than understanding alternative multi-command workflows.
+
+Beyond systems explicitly designed to promote use of more efficient behaviors, a number of systems have been designed to capture workflows from users, which could then be made available to others. Photo Manipulation Tutorials by Demonstration [19] and MixT [10] enable users to perform a workflow and automatically convert that demonstration into a tutorial that can be shared with other users. MeshFlow [12] and Chronicle [22] continuously record the user as they work, capturing rich metadata and screen recordings, and then provide visualizations and interaction techniques for exploring that editing history. In contrast to these works, which capture individual demonstrations of a task, W-graphs capture demonstrations from multiple users, and then use these to recommend alternate workflows. In this respect, the current work is somewhat similar to Community Enhanced Tutorials [28], which records video demonstrations of the actions performed on each step of an image-editing tutorial and provides these examples to subsequent users of the tutorial. However, W-graphs address a more general problem, where the task is not sub-divided into pre-defined steps, and users thus have much more freedom in how they complete the task.
+
+Summarizing the above, there has been relatively little work on software learning systems that capture alternative workflows, and we are unaware of any work that has tried to do so by building a representation that encompasses many different means of performing a fixed 3D modeling task.
+
+§ 2.2 MINING AND SUMMARIZING PROCEDURAL CONTENT
+
+A number of research projects have investigated how user-created procedural content can be analyzed or mined for useful information. RecipeScape [9] enables users to browse and analyze hundreds of cooking instructions for an individual dish by visually summarizing their structural patterns. Closer to our domain of interest, Delta [27] produces visual summaries of image editing workflows for Photoshop, and enables users to visually compare pairs of workflows. We take inspiration from the Delta system and this work's findings on how users compare workflows. That being said, our focus is on automatically building a data structure representing the many different ways that a task can be performed, rather than on how to best visualize or compare workflows.
+
Query-Feature Graphs [16] provide a mapping between high-level descriptions of user goals and the specific features of an interactive system relevant to achieving those goals, and are produced by combining a range of data sources, including search query logs, search engine results, and web page content. While this approach could be valuable for understanding the tasks performed in an application, and the commands related to those tasks, query-feature graphs do not in themselves provide a means of discovering alternative or improved workflows.
+
+Several research projects have investigated how to model a user's context as they work in a software application with the goal of aiding the retrieval and use of procedural learning content, for example using command log data [32], interactions gathered through accessibility APIs across multiple applications [17], or coordinated web browser and application activities [15]. Along similar lines, Wang et al. [38] developed a set of recommender algorithms for software workflows, and demonstrated how they could be used to recommend community-generated videos for a 3D modeling tool. While the above works share our goal of providing users with relevant work-flow information, their algorithms have focused on using the stream of actions being performed by the user, not the content that is being edited. Moreover, these techniques are not designed to capture the many different ways a fixed task can be performed, which limits their ability to recommend ways that a user can improve on the workflows they already use.
+
+§ 2.3 LEARNING AT SCALE
+
+A final area of related work concerns how technology can enable learning at scale, for example by helping a scarce pool of experts to efficiently teach many learners, or by enabling learners to help one another. As a recent example, CodeOpticon [23] enables a single tutor to monitor and chat with many remote students working on programming exercises through a dashboard that shows each learner's code editor, and provides real-time text differences in visualizations and highlighting of compilation errors.
+
+Most related to the current work are learnersourcing techniques, which harness the activities of learners to contribute to human computation workflows. This approach has been used to provide labeling of how-to videos [25], and to generate hints to learners by asking other learners to reflect on obstacles they have overcome [18]. The AXIS system [40] asks learners to provide explanations as they solve math problems, and uses machine learning to dynamically determine which explanations to present to future learners.
+
Along similar lines, Whitehill and Seltzer investigated the viability of crowdsourcing as a means of collecting video demonstrations of mathematical problem solving [39]. To analyze the diversity of problem-solving methods, the authors manually extracted the problem-solving steps from 17 videos to create a graph of different solution paths. W-graphs produce a similar artifact for the domain of software workflows, with an automated approach for constructing the graphs.
+
+In summary, by capturing and representing the workflows employed by users with varying backgrounds and skill levels, we see W-graphs as a potentially valuable approach for scaling the learning and improvement of software workflows.
+
+§ 3 WORKFLOW GRAPHS
+
The key problem that we address is that designers and researchers currently lack scalable approaches for analyzing and supporting user workflows. To develop such an approach, we need techniques that can map high-level user intents (e.g., 3D modeling a mug) to strategy-level workflows (e.g., modeling the handle before the body) and to user actions (the specific sequences of actions involved).
+
+We can broadly classify approaches for modeling user workflows derived from action sequences into bottom-up approaches and top-down approaches.
+
Bottom-up approaches record users' action sequences, and then attempt to infer the user's intent at a post-processing stage using unsupervised modeling techniques such as semantic segmentation, clustering, or topic modeling [2, 9]. A disadvantage of this approach is that the results can be difficult to present to users, because the results of unsupervised modeling techniques are not human-readable labels. Meaningful labels could conceivably be added to the resulting clusters (e.g., using crowdsourcing techniques [11, 26, 37]), but this is a non-trivial problem under active research.
+
An alternative is a top-down approach, in which a small number of domain experts break down a task into meaningful units (e.g., subgoals [8]), and then users or crowdworkers use these pre-created units as labels for their own command log data, or that of other users. This approach also comes with disadvantages: users must perform the labeling, their interpretations of pre-defined labels can differ, and the overall breakdown of the task depends on the judgment of a few domain experts, limiting the scalability of the approach.
+
How, then, can we organize users' collective interaction data into a meaningful structure, while retaining the scalability of simply recording user action sequences without interrupting users to acquire any labels?
+
+To investigate this possibility, we developed Workflow graphs (W-graphs), which synthesize many demonstrations of a fixed task (i.e., re-creating the same 3D model) such that the commonalities and differences between the approaches taken by users are encoded in the graph. To ensure the technique can scale, the goal is to automate the construction process, using recordings of demonstrations of the task as input (which may include screen recordings, command log data, content snapshots, etc.).
+
+Formally, a W-graph is a directed graph $G = \left( {V,A}\right)$ which consists of the following components:
+
+§ 3.1 GRAPH VERTICES
+
+$$
+V = \left\{ {{v}_{i};1 \leq i \leq N}\right\}
+$$
+
The vertices of the graph represent semantically meaningful states in the demonstrations, which can be thought of as sub-goals of the task; ideally, they capture the points where a user has completed one sub-task and has yet to start the next. Detecting these states automatically from unlabeled demonstrations is a challenge, but the idea is to leverage the demonstrations of multiple users to discover common states that occur across their respective methods for completing the task. If a new demonstration is completely different from those already represented in the graph, it might not share any nodes with those already in the graph, apart from the start and final nodes, which are shared by all demonstrations.
+
Note that the appropriate criterion for judging which states from multiple demonstrations are semantically similar is ill-defined, and depends on the intended application of the W-graph. For example, a loose criterion could be used to construct a W-graph that indicates coarse differences between approaches for completing the task, while a stricter criterion could create a more complex graph that reveals finer differences between similar approaches. As we discuss in the next section, our algorithm allows the threshold for similarity to be tuned based on the intended application.
+
+§ 3.2 GRAPH EDGES
+
+$$
A = \left\{ {\left( {{v}_{i},{v}_{j},{d}_{k},{E}_{i,j,k}}\right) ;{v}_{i},{v}_{j} \in V}\right\}
+$$
+
+$$
+{E}_{i,j,k} = \left\{ {{\text{ event }}_{1},{\text{ event }}_{2},{\text{ event }}_{3},\ldots }\right\}
+$$
+
+The directed edges of the graph represent workflows used by a user to move between semantically-similar states. There may be multiple directed edges between a given pair of states, if multiple demonstrations ${d}_{k}$ include a segment from state ${v}_{i}$ to ${v}_{j}$ .
+
Each directed edge is associated with a set of events ${E}_{i,j,k}$, which comprise the timestamped interaction trace of events in demonstration ${d}_{k}$ performed in the segment between states ${v}_{i}$ and ${v}_{j}$. This trace of events could include timestamped command invocations, 3D model snapshots, or any other timestamped data that was gathered from the recorded demonstrations.
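The formal definition above maps onto a small data structure. The following is a minimal sketch in Python; all class and field names here are our own hypothetical choices for illustration, not from the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    timestamp: float   # seconds from the start of the demonstration d_k
    command: str       # e.g. "create_cylinder", "group"

@dataclass
class Edge:
    src: int           # index of state v_i in `vertices`
    dst: int           # index of state v_j
    demo_id: int       # which demonstration d_k this segment came from
    events: list       # E_{i,j,k}: the timestamped interaction trace

@dataclass
class WGraph:
    vertices: list = field(default_factory=list)  # snapshot per state v_i
    edges: list = field(default_factory=list)

    def out_edges(self, v):
        """All workflow segments leaving state v (possibly several per state pair)."""
        return [e for e in self.edges if e.src == v]

# Example: two demonstrations sharing the START (0) and END (1) states,
# giving two parallel edges between the same pair of states.
g = WGraph(vertices=["blank_document", "mug_done"])
g.edges.append(Edge(0, 1, demo_id=0, events=[Event(1.2, "create_cylinder")]))
g.edges.append(Edge(0, 1, demo_id=1, events=[Event(0.8, "create_box")]))
```

Note that parallel edges between the same state pair are kept distinct per demonstration, which is what lets later applications compare alternate workflows for the same segment.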
+
+§ 3.3 INTERACTION DATA
+
+The interaction trace data associated with edges enables a great deal of flexibility in how the W-graph is used. For example, this data could be used to retrieve snippets of screen recordings of the demonstrations associated with the segment of the task between two states, or it could be used to define metrics on the different workflows used for that segment of the task (e.g., the number of unique commands used, or the average time it takes to perform the workflow). As another example, analyzing the interaction traces along many different paths between states can reveal the average time for sub-tasks or the variance across users. Later in the paper, we present some example applications of W-graphs to illustrate the full flexibility of this data representation.
+
+§ 4 PIPELINE FOR CONSTRUCTING W-GRAPHS
+
+In this section we describe the computational pipeline we have developed for constructing W-graphs. We start by discussing our instrumentation of Tinkercad and the data set we collected, then present the multi-step pipeline for processing the data and the similarity metric for identifying equivalent-intermediate states. The choice of a method for identifying equivalent-intermediate states is a key aspect of the pipeline, and we experimented with several alternative methods.
+
+§ 4.1 TINKERCAD DATA COLLECTION
+
We instrumented a customized version of Tinkercad to record timestamped command invocations and snapshots of the 3D model the user is working on after each command is executed (represented as a constructive solid geometry (CSG) tree with unique IDs for each object, to enable the association of model parts across multiple snapshots). To capture the instrumentation data, participants were asked to install Autodesk Screencast², a screen recording application that can associate command metadata with the timeline of recorded video data. Collectively, this allowed us to gather timestamp-aligned command invocation data, 3D model snapshots, and screen recordings of participants performing 3D modeling tasks. An example of a user-recorded screencast video can be seen in Figure 3.
+
+
+Figure 3: Screencast of a user demonstration, consisting of the (a) screen recording, (b) command sequences, and (c) 3D model snapshots
+
+Using this approach, we collected user demonstrations for two tasks-modeling a mug and modeling a standing desk (Figure 4). These tasks were selected because they could be completed in under 30 minutes, and represent different levels of complexity. The mug task is relatively simple, requiring fewer operations and primitives, while the desk task can be complex and time consuming if the user does not have knowledge of particular Tinkercad tools, such as the Align and Ruler. The Desk model also requires approximately twice as many primitives as the Mug model.
+
We recruited participants through UserTesting.com and an email to an internal mailing list at a large software company. 14 participants were recruited for the Mug task and 11 for the Desk task, but we excluded participants who did not follow the instructions or failed to upload their recordings in the final step. After applying these criteria, we had 8 participants for the mug task (6 male, 2 female, ages 27-48), and 6 participants for the standing desk task (5 male, 1 female, ages 21-43).
+
The result of the data collection procedure was 8 demonstrations for the Mug task, which took 26m:24s on average (SD = 10m:46s) and consisted of an average of 142 command invocations (SD = 101); and 6 demonstrations for the Desk task, which took 23m:23s on average (SD = 8m:20s) and consisted of an average of 223 command invocations (SD = 107).
+
+§ 4.2 WORKFLOW TO GRAPH CONSTRUCTION
+
The W-graph construction pipeline consists of three steps: preprocessing, collapsing node sequences, and sequence merging.
+
+
+Figure 4: Models used for data collection - (a) Mug, (b) Desk
+
+§ 4.2.1 STEP 1. PREPROCESSING
+
+To start, we collapse repeated or redundant commands in the sequence of events (both keystroke and clickstream data) for each demonstration. For example, multiple invocations of "arrow key presses" for moving an object are merged into one "object moved with keyboard" and multiple invocations of "panning viewpoint" are merged into "panning".
+
+Next, the sequence of events for each user is considered as a set of nodes (one node per event), with directed edges connecting each event in timestamped sequence (Figure 5a). The 3D model snapshot for each event is associated with the corresponding node, and the event data (including timestamped command invocations) is associated with the incoming edge to that node. Since each demonstration starts from a blank document and finishes with the completed 3D model, we add a START node with directed edges to the first node in each demonstration, and we merge the final nodes of each demonstration into an END node. At this point, each demonstration represents a distinct directed path from the START node to the END node (Figure 5b).
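As a rough illustration of the command-collapsing part of this preprocessing step, the sketch below merges consecutive runs of mergeable low-level commands into single higher-level events; the command names and the `MERGE_AS` mapping are illustrative assumptions, not the authors' actual rule set:

```python
from itertools import groupby

# Hypothetical mapping from repeated low-level commands to the single
# higher-level event they collapse into (illustrative, see hedging above).
MERGE_AS = {
    "arrow_key_press": "object_moved_with_keyboard",
    "pan_viewport": "panning",
}

def preprocess(events):
    """Collapse consecutive runs of mergeable commands into one event each."""
    out = []
    # groupby groups consecutive equal commands into runs
    for cmd, run in groupby(events):
        run = list(run)
        if cmd in MERGE_AS:
            out.append(MERGE_AS[cmd])   # whole run becomes one event
        else:
            out.extend(run)             # non-mergeable commands kept as-is
    return out

events = ["create_cylinder", "arrow_key_press", "arrow_key_press",
          "arrow_key_press", "group", "pan_viewport", "pan_viewport"]
print(preprocess(events))
# -> ['create_cylinder', 'object_moved_with_keyboard', 'group', 'panning']
```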
+
+
+Figure 5: Illustration of how sequences get compressed and merged into a W-graph
+
+§ 4.2.2 STEP 2. COLLAPSING NODE SEQUENCES
+
Next, the pipeline merges sequences of nodes with similar geometry along each path from START to END, by clustering the snapshots of 3D model geometry associated with the nodes along each path (Figure 5c). The metric we use for 3D model similarity is discussed at the end of this section. To identify sequences with similar geometry, we first apply the DBSCAN [13] algorithm to cluster the 3D model snapshots associated with each path. We then merge contiguous subsequences of nodes that were assigned to the same cluster, keeping the 3D model snapshot of the final state in the subsequence as the representation of that node. We selected DBSCAN because, unlike alternative clustering algorithms such as K-Means, it does not require a pre-defined number of clusters. The hyperparameters of DBSCAN are tuned using the k-nearest neighbor distance method, which is standard practice for this algorithm [4, 5, 35].
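The merge that follows clustering can be sketched as below, assuming DBSCAN has already assigned a cluster label to each snapshot along a path; the labels here are made up for illustration, and real labels would come from clustering under the similarity metric of Section 4.3:

```python
def collapse_path(snapshots, labels):
    """Merge contiguous runs of nodes sharing a cluster label.

    snapshots[i] is the 3D model snapshot at node i along one path;
    labels[i] is its (hypothetical) DBSCAN cluster label. Each contiguous
    run collapses to one node, keeping the run's *final* snapshot.
    """
    merged = []
    for i, snap in enumerate(snapshots):
        at_end = i + 1 == len(labels)
        if at_end or labels[i + 1] != labels[i]:
            merged.append(snap)   # end of a contiguous run: keep final state
    return merged

snapshots = ["s0", "s1", "s2", "s3", "s4"]
labels    = [ 0,    0,    1,    1,    2 ]
print(collapse_path(snapshots, labels))   # -> ['s1', 's3', 's4']
```

Keeping the final snapshot of each run matches the intuition that the last state in a run of similar geometry best represents the completed sub-goal.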
+
² https://knowledge.autodesk.com/community/screencast
+
+§ 4.2.3 STEP 3. SEQUENCE MERGING
+
Finally, the pipeline detects "equivalent-intermediate" nodes across the paths representing multiple demonstrations (Figure 5d). To do this, we compute the 3D model similarity metric for all pairs of nodes that are not associated with the same demonstration (i.e., we only consider pairs of nodes from different demonstrations). We then merge all nodes with a similarity value below a threshold $\varepsilon$ that we manually tuned. In our experience, varying $\varepsilon$ can yield graphs that capture more or less granularity in variations in the task, and it would be interesting to consider an interactive system in which users can select a granularity that is suited to their use of the W-graph.
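One simple way to realize this cross-demonstration merge is a union-find pass over all node pairs, as in the hypothetical sketch below; `dist` stands in for the paper's snapshot similarity metric, and the toy one-dimensional "embeddings" are purely illustrative:

```python
def merge_equivalent(nodes, demo_of, dist, eps):
    """Union-find merge of cross-demonstration nodes with dist < eps.

    Returns a merged-group id per node; nodes sharing an id are treated
    as the same equivalent-intermediate state.
    """
    parent = list(range(len(nodes)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            # Only merge nodes from *different* demonstrations, as in Step 3
            if demo_of[i] != demo_of[j] and dist(nodes[i], nodes[j]) < eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(nodes))]

# Toy example: nodes 0 and 2 come from different demonstrations and their
# "embeddings" are within eps of each other, so they merge into one state.
nodes   = [0.0, 5.0, 0.1, 9.0]
demo_of = [0,   0,   1,   1]
groups = merge_equivalent(nodes, demo_of, lambda a, b: abs(a - b), eps=0.5)
print(groups[0] == groups[2])   # -> True
```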
+
+At this point, the W-graph construction is complete. As at the start of the pipeline, the directed edges from START to END collectively include all the events from the original demonstrations, but now certain edges contain multiple events (because the nodes between them have been collapsed), and some nodes are shared between multiple demonstrations.
+
§ 4.3 METRICS FOR DETECTING "EQUIVALENT-INTERMEDIATE STATES"
+
+The most crucial part of the pipeline is determining the "similarity" between 3D model snapshots, as this is used to merge sequences of events in demonstrations, and to detect shared states across multiple demonstrations. We experimented with four different methods of computing similarity between 3D model snapshots, which we discuss below.
+
+§ 4.3.1 COMPARING CSG TREES
+
+3D model snapshots are represented as CSG trees by Tinkercad, which consist of geometric primitives (e.g., cubes, cylinders, cones), combined together using Boolean operations (e.g., union, intersection, difference) in a hierarchical structure. A naive method of quantifying the difference between two snapshots would be to compare their respective trees directly, for example by trying to associate corresponding nodes, and then comparing the primitives or other characteristics of the tree. However, we quickly rejected this method because different procedures for modeling the same geometry can produce significantly different CSG trees. This makes the naive CSG comparison a poor method of judging similarity, where we specifically want to identify states where a similar end-result was reached through distinct methods.
+
+§ 4.3.2 COMPARING 2D IMAGES OF RENDERED GEOMETRY
+
+Inspired by prior work that has used visual summaries of code structure to understand the progress of students on programming problems [41], we next explored how visual renderings of the models could be used to facilitate comparison. We rendered the geometry of each 3D model snapshot from 20 different angles, and then compared the resulting images for pairs of models to quantify their difference. The appeal of this approach is that the method used to arrive at a model does not matter, so long as the resulting models look the same. However, we ultimately rejected this approach due to challenges with setting an appropriate threshold for judging two models as similar based on pixel differences between their renders.
+
+§ 4.3.3 COMPARING 3D MESHES
+
Next, we experimented with using the Hausdorff distance [3], a commonly used mesh comparison metric, to compare the 3D meshes of pairs of 3D model snapshots. As with the comparison of rendered images, this method required extensive trial and error to set an appropriate threshold. However, the biggest drawback of this method was that the distances produced by the metric are in absolute terms, with the result that conceptually minor changes to a 3D model, such as adding a cube to the scene, can lead to huge changes in the distance metric. Ideally, we would like to capture how "semantically" meaningful changes are, which is not always reflected in how much of the resulting mesh has been altered.
+
+§ 4.3.4 LATENT SPACE EMBEDDING USING AUTOENCODERS
+
The final method we tried was to use an autoencoder to translate 3D point cloud data for each 3D model snapshot into a 512-dimensional vector. Autoencoders learn compact representations of input data by learning to encode a training set to a latent space of smaller dimension, from which the original data can be decoded. We trained a latent model with a variation of PointNet [34] for encoding 3D point clouds to vectors, and Point Set Generation Network [14] for decoding vectors back to point clouds. The model was trained using the ShapeNet [43] dataset, which consists of 55 common object categories with about 51,300 unique 3D models. By using an additional clustering loss function [42], the resulting distributed representation captures the characteristics that matter for clustering tasks. One limitation of PointNet autoencoders is that current techniques cannot perform rotation-invariant comparisons of geometries. However, this fits nicely with our purpose, because rotating geometry does not affect semantic similarity for the 3D modeling tasks we are targeting.
+
Once trained, we can use the autoencoder to produce a 512-dimensional vector for each 3D model snapshot, and compare these using cosine distance to quantify the similarity between models. Overall, we found this to be the most effective method. Because it works on 3D point cloud data, it is not sensitive to how a model was produced, just its final geometry. Moreover, it required less tuning than comparing 2D images of rendered geometry or comparing 3D meshes, and in our experiments appeared to be more sensitive to semantically meaningful changes to models.
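For concreteness, the comparison step reduces to computing cosine distance between embedding vectors, as in this minimal sketch (toy 3-dimensional vectors stand in for the 512-dimensional embeddings):

```python
import math

def cosine_distance(a, b):
    """1 - cos(angle between a and b); 0 means same direction, 1 orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

u = [1.0, 0.0, 2.0]
v = [2.0, 0.0, 4.0]   # same direction as u -> distance near 0
w = [0.0, 3.0, 0.0]   # orthogonal to u    -> distance 1
print(abs(round(cosine_distance(u, v), 6)))   # -> 0.0
print(round(cosine_distance(u, w), 6))        # -> 1.0
```

A practical advantage of cosine distance here is that it ignores vector magnitude, so the same threshold can be reused across snapshots of different scales.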
+
+§ 4.4 RESULTS
+
As a preliminary evaluation of the pipeline, we examined the graphs constructed for the mug and standing desk tasks. The W-graph for the mug task is shown in Figure 6. From the graph, a few things can be observed. First, the high-level method followed by most users was to first construct the body of the mug (as seen in paths A-B-C and A-C), and then build and add the handle. Examining the screen recordings, all three users on path A-B-C created the body by first adding a solid cylinder and then adding a cylindrical "hole" object³ to hollow out the center of the solid cylinder (see Figure 7a). Two of the three users on path A-C followed a slightly different method, creating two solid cylinders first, and then converting one of them into a hole object (Figure 7b). It is encouraging that the pipeline was able to capture these two distinct methods.
+
+The remaining user on path A-C created a hole cylinder first, but ultimately deleted it and started again, following the same procedure as the users on path A-B-C. This highlights an interesting challenge in building W-graphs, which is how to handle backtracking or experimentation behavior (using commands such as Undo and Erase). We revisit this in the Discussion section at the end of the paper.
+
The users on paths A-D-E-F and A-E-F followed a different approach from those discussed above. Both of these users started by creating a cylinder (as a hole in the case of A-D-E-F, and as a solid in the case of A-E-F), then built the handle, and finally cut out the center of the mug's body. The A-D-E-F user built the handle through the use of a solid box and a hole box (Figure 8a), but the A-E-F user used a creative method: creating a primitive in the shape of a letter 'B', then cutting out part of it to create the handle (Figure 8b). Again, it is encouraging that the pipeline was able to separate these distinct methods.
+
For the modeling of the handle, nodes F, G, and H capture the behavior of building the handle apart from the body of the mug, and then attaching it in states I and J. The E-F transition seems strange in Figure 6, but on reviewing the screen recording, we found that the user moved the handle away from the mug before cutting the hole in the body, perhaps to create some space to work.
+
³ Tinkercad shapes can be set as solid or as holes, which function like other shapes but cut out their volume when grouped with solid objects.
+
+
+Figure 6: W-graph for the mug task. Edge labels indicate the number of demonstrations for each path. For nodes with multiple demonstrations, a rendering of the 3D model snapshot is shown for one of the demonstrations. A high-res version of this image is included in supplementary materials.
+
+
+Figure 7: Two distinct methods of creating the mug body: (a) Create a solid cylinder, create a cylindrical hole, and group them; (b) Create two solid cylinders, position them correctly, then convert one into a hole.
+
+
+Figure 8: Two methods of creating the handle: (a) Combine a solid box and a box-shaped hole; (b) Cut a letter 'B' shape into the handle using several box-shaped holes.
+
Overall, the pipeline appears to be effective in capturing the variety of methods used to create the body of the mug, and the edges of the graph captured a few distinct methods for creating the handle. An interesting observation is that the node identification algorithm did not capture any sub-steps involved in creating the handle. One possibility is that the methods used by different users were distinct enough that they did not have any equivalent-intermediate states until the handle was complete. Another possibility is that the autoencoder is not good at identifying similar states for models that are partially constructed (having been trained on ShapeNet, which consists of complete models). That said, this is not necessarily a problem, as the edges do capture multiple methods of constructing the handle.
+
+The W-graph for the standing desk task is shown in Figure 9. The graph is more complex than that for the mug task, reflecting the added complexity of creating the standing desk, but we do observe similarities in how the graph captures the task. In particular, we can see paths that reflect the different orders in which users created the three main parts of the desk (the top, the legs, and the privacy screen).
+
We also notice some early nodes with box shapes, which later diverge to become the desk top in some demonstrations and the legs in others. These nodes, which represent a common geometric history for different final shapes, are interesting because they represent situations where the algorithm may correctly merge similar geometry, but doing so works counter to the goal of identifying workflows for completing sub-goals of the task, effectively breaking them up into several edges. A possible way to address this would be to modify the pipeline so it takes into account the eventual final placement of each primitive at the end of the task, or several edges forward, in determining which nodes to merge.
+
+§ 5 POTENTIAL APPLICATIONS OF W-GRAPHS
+
+This section presents three novel applications that are made possible by W-graphs: 1) W-Suggest, a workflow suggestion interface, 2) W-Guide, an on-demand 3D modeling help interface, and 3) W-Instruct, an instructor dashboard for analyzing workflows.
+
+
+Figure 9: W-graph for the standing desk task. Edge labels indicate the number of demonstrations for each path. For nodes with multiple demonstrations, a rendering of the 3D model snapshot is shown for one of the demonstrations. A high-res version of this image is included in supplementary materials.
+
+
+Figure 10: W-Suggest - A workflow suggestion interface mockup
+
+§ 5.1 W-SUGGEST: WORKFLOW SUGGESTION INTERFACE
+
+By representing the structure of how to perform a task, W-graphs can serve as a back-end for applications that suggest alternate workflows to users.
+
To use the W-Suggest system (Figure 10), the user first records themselves performing a 3D modeling task, similar to the procedure performed by participants in the previous section. However, instead of integrating this new workflow recording into the W-graph, the system compares the workflow to the existing graph and suggests alternate workflows for portions of the task.
+
W-Suggest uses the following algorithm to make its suggestions. First, it performs Steps 1 and 2 of the W-graph construction pipeline on the user's recording of the task (i.e., preprocessing the events and collapsing node sequences with similar geometry). Next, the 512-dimensional embedding vector for each remaining 3D model snapshot is computed using the same autoencoder used for the W-graph construction pipeline. The vectors for each of these nodes are then compared to those of the W-graph nodes along the shortest path from START to END (as measured by total command invocations) to detect matches, using the same threshold $\varepsilon$ used for graph construction. Finally, for each pair of matched nodes (one from the user, one from the shortest path in the W-graph), the edge originating at the user's node and the edge originating at the W-graph node are compared based on command invocations. Based on all of these comparisons, the algorithm selects the pair for which there is the biggest difference in command invocations between the user's demonstration and the demonstration from the W-graph. In effect, the idea is to identify segments of the user's task for which the W-graph includes a method that uses far fewer command invocations, which can then be suggested to the user.
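The final selection step of this algorithm can be sketched as follows; the `matches` triples (state label, user command count, graph command count) and the helper name are hypothetical simplifications of the matching machinery described above:

```python
def pick_suggestion(matches):
    """Select the matched segment with the largest command-count saving.

    matches: list of (state, user_cmd_count, graph_cmd_count) triples,
    one per pair of matched nodes. Returns the state label of the segment
    to suggest, or None if no graph workflow beats the user's own.
    """
    best = max(matches, key=lambda m: m[1] - m[2])
    state, user_n, graph_n = best
    return state if user_n > graph_n else None   # suggest only if it helps

# The user's "body" segment took 40 commands where a demonstration in the
# W-graph needed only 12, so that segment is picked for the suggestion.
matches = [("body", 40, 12), ("handle", 15, 14), ("group", 5, 5)]
print(pick_suggestion(matches))   # -> body
```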
+
+§ 5.2 W-GUIDE: ON-DEMAND TASK GUIDANCE INTERFACE
+
+W-graphs could also serve as a back-end for a W-Guide interface that presents contextually appropriate video content to users on-demand as they work in an application, extending approaches taken by systems such as Ambient Help [32] and Pause-and-Play [33] with peer demonstrations.
+
+
+Figure 11: W-Guide - An on-demand task guidance interface mockup.
+
+While working on a 3D modeling task in Tinkercad, the user could invoke W-Guide to see possible next steps displayed in a panel to the right of the editor (Figure 11). These videos are populated based on the strategies captured from other users and stored in the W-graph. Specifically, the panel recommends video demonstrations from other users matched to the current user's state, and proceeds to the next "equivalent-intermediate" state (i.e., one edge forward in the graph). Using a similar approach to W-Suggest, these can be provided with meaningful labels (e.g., "Shortest workflow", "Most popular workflow", etc.).
+
W-Guide could use the same algorithm as W-Suggest to construct a W-graph and populate its suggestions. The only difference is that it would attempt to match the user's current incomplete workflow to the graph. This is achievable because the $\varepsilon$ threshold for collapsing node sequences is flexible, allowing W-Guide to construct a W-graph from any point in the current user's workflow and populate demonstrations for next steps.
+
An exciting possibility opened up by W-Guide is that the system could dynamically elicit additional demonstrations from users in a targeted way (e.g., by popping up a message asking them to provide demonstrations different from those pre-populated in the panel). This could allow the system to take an active role in fleshing out a W-graph with diverse samples of methods.
+
+§ 5.3 W-INSTRUCT: INSTRUCTOR TOOL
+
Finally, we envision the W-Instruct system, in which W-graphs become a flexible and scalable tool for instructors to provide feedback to students, assess their work, and generate tutorials or other instructional materials on performing 3D modeling tasks.
+
W-Instruct (Figure 12) supports instructors in understanding the different methods used by their students to complete a task: by examining the graph, an instructor can see the approaches taken by students, rather than simply the final artifacts they produce. The grouping of multiple students' workflows could also be used as a means to provide feedback to a large number of learners at scale (e.g., in a MOOC setting). Also, by browsing the W-graph, the instructor could quickly identify shortcuts, crucial parts of the workflow to emphasize, or common mistakes. As shown in Figure 12, edges can be highlighted to show the most common solutions, and the video demonstration corresponding to an edge can be viewed by hovering over a node in the graph.
+
+
+Figure 12: W-Instruct - An instructor tool mockup.
+
+Along similar lines to W-Instruct, we see potential for W-graphs to support the generation of tutorials and other learning content, building on past work exploring approaches for generating tutorials by demonstration [10, 19]. For example, synthetic demonstrations of workflows could potentially be produced that combine the best segments of multiple demonstrations in the W-graph, creating a demonstration that is more personalized to the current user than any individual demonstration.
+
+§ 6 USER FEEDBACK ON W-SUGGEST
+
+While the main focus of this work is on the computational approach for constructing W-graphs, we implemented the W-Suggest application as a preliminary demonstration of the feasibility of building applications on top of a constructed W-graph (Figure 13). The W-Suggest interface consists of a simplified representation of the user's workflow, with edges highlighted to indicate a part of the task for which the system is suggesting an improved workflow. Below this are two embedded video players, one showing the screen recording of the user's workflow for that part of the task, and the other showing a suggested workflow drawn from other users in the graph. Beneath the videos are some metrics on the two workflows, including duration, the distribution of commands used, and the specific sequences of commands used.
+
+To gain some feedback on the prototype, we recruited 4 volunteers to perform one of the two tasks from the previous section (two for the mug task, two for the standing desk task) and presented them with their W-Suggest interface. We asked them to watch the two videos (one showing their workflow, the other showing the suggested workflow) and then asked a few short questions about the interface. Specifically, we asked whether they felt it was useful to view the alternate demonstration, and why or why not. We also asked their thoughts on the general utility of this type of workflow suggestion system, and what aspects of workflows they would like suggestions on for software they frequently use.
+
+Due to the small number of participants for these feedback sessions, they are best considered as providing preliminary feedback, and certainly not a rigorous evaluation. That being said, the feedback from participants was quite positive, with all participants agreeing it would be valuable to see alternative workflows. In particular, participants mentioned that it would be valuable to see common workflows, the fastest workflow, and workflows used by experts.
+
+Two participants mentioned that they learned something new about how to use Tinkercad from watching the alternate video, as in the following quote by P2 after seeing a use of the Ruler tool to align objects: "Oh, you can adjust the things there [with the Ruler], that's useful. Oh, there's like an alignment thing, that seems really easy."
+
+
+Figure 13: W-Suggest - The implemented interface.
+
+Likewise, P4 observed a use of the Workplane tool that he found valuable: "It's assigning relative positions with it [the Workplane and Ruler]. I wanted to do something like that."
+
+All participants agreed that efficiency is an important criterion when recommending alternative workflows. However, P1 and P2 noted that the best method to use in feature-rich software, or other domains such as programming, can often depend on contextual factors. In particular, P1 noted that they might prepare a 3D model differently if it is intended to be 3D printed. This suggests that additional metadata on the users or the intended purpose for creating a model could be useful for making workflow recommendations.
+
+§ 7 DISCUSSION, LIMITATIONS, AND FUTURE WORK
+
+Overall, the W-graphs produced for the mug and standing desk tasks are encouraging, and suggest that our pipeline is effective at capturing different high-level methods for modeling 3D objects. Testing the pipeline on these sample tasks also revealed a number of potential directions for improving the approach, including modeling backtracking behavior in demonstrations, and accounting for subtasks with common intermediate states. Finally, our user feedback sessions for W-Suggest showed enthusiasm for applications built on W-graphs, and revealed insights into criteria for what makes a good demonstration, including the importance of contextual factors.
+
+In this section we revisit the potential of modeling backtracking and experimentation, discuss the question of how many demonstrations are needed to build a useful W-graph, and suggest further refinements of the graph construction method. We then discuss how our approach could be generalized to building models of similar tasks.
+
+§ 7.1 BACKTRACKING AND EXPERIMENTATION
+
+In our current approach, Undo and Erase are treated the same as other commands. In some situations this may be appropriate, but at other times these commands may be used to backtrack, to recover from mistakes, or to try other workflows, and past work has shown their occurrence may indicate usability challenges [1]. It would be interesting to investigate whether these practices for using Undo and Erase could be detected and represented in a W-graph. This could take the form of edges that go back to previous states, creating directed cycles or self-loops in the graph. Applications built on top of a W-graph could also use the number of Undos as a metric for ranking paths through the graph (e.g., to identify instances of exploratory behavior), or as a filtering metric to cull the graph of such backtracking behavior.
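As a rough sketch of the ranking and filtering ideas, the snippet below counts Undo/Erase commands along a path of W-graph edges and culls paths that exceed a budget. The `commands` field on each edge is a hypothetical representation of the command sequence for that workflow segment, not a structure from the paper's implementation.

```python
def backtracking_score(path_edges):
    """Count Undo/Erase commands along a path through a W-graph.

    Each edge is a dict carrying the command sequence of its workflow
    segment under a hypothetical `commands` key.  High scores suggest
    exploratory or error-prone demonstrations.
    """
    BACKTRACK = {"Undo", "Erase"}
    return sum(1 for edge in path_edges
                 for cmd in edge["commands"] if cmd in BACKTRACK)

def cull_backtracking(paths, max_score=0):
    """Keep only paths whose backtracking score is within a budget."""
    return [p for p in paths if backtracking_score(p) <= max_score]
```

An application could invert the same score to surface exploratory behavior instead of hiding it, depending on whether backtracking is treated as noise or as a signal of experimentation.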
+
+§ 7.2 BRANCHING FACTORS AND GRAPH SATURATION
+
+A nice feature of W-graphs is that they can be built with only a few demonstrations. As the number of demonstrations grows, the graph can more fully capture the space of potential workflows for the task. However, it is likely that the graph will eventually reach a point at which it is saturated, beyond which additional workflows will contribute a diminishing number of additional methods. The number of demonstrations needed to reach saturation will likely vary task by task, with more complex tasks requiring more demonstrations than simpler tasks. Examining how the sum of the branching factor over all nodes in the graph changes with each added demonstration may give an indication of when the graph has reached saturation, as the number of branches is likely to stop growing once new methods are no longer being added.
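The saturation heuristic can be illustrated in a few lines: rebuild the graph with increasing numbers of demonstrations and track the total branching factor (sum of out-degrees). The `build_graph` callable stands in for the paper's construction pipeline, and the adjacency-dict representation is an assumption made for illustration.

```python
def total_branching_factor(wgraph):
    """Sum of out-degrees over all nodes in an adjacency dict."""
    return sum(len(children) for children in wgraph.values())

def saturation_curve(demonstrations, build_graph):
    """Total branching factor after each added demonstration.

    `build_graph` is a stand-in for the W-graph construction
    pipeline: it takes a list of demonstrations and returns an
    adjacency dict.  A plateau in the returned curve suggests the
    graph has saturated.
    """
    curve = []
    for k in range(1, len(demonstrations) + 1):
        curve.append(total_branching_factor(build_graph(demonstrations[:k])))
    return curve
```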
+
+§ 7.3 SCALABILITY
+
+In one sense, the W-graph approach is scalable by design, as it relies on computational comparisons of 3D models rather than human interventions such as expert labeling or crowdsourcing. However, more work is needed to understand how the structure of W-graphs produced by our pipeline changes as the number of demonstrations in a graph grows. In particular, there is the question of how the parameters for identifying similar intermediate states may need to change in response to a growing number of workflows in order to produce graphs at the right granularity for a given application, along with other issues that may arise when processing many demonstrations. On the application end, metrics could be developed to identify less-used but valuable traces contained in a graph with many demonstrations.
+
+§ 7.4 ROBUSTNESS AGAINST DIFFERENT WORKFLOW ORDERS
+
+A potential limitation of our current approach is that it preserves the global order of sub-tasks, including those that could be performed in an arbitrary order (e.g., a user could start by modeling the legs or the top of a table). This could prevent it from grouping some variations of sub-tasks together if a given sub-task is performed first by some users and later by others. Preserving the global order of sub-tasks has some advantages: it reveals how users commonly sequence the sub-tasks that make up the overall task, and it can also reveal cases where sub-tasks benefit from being ordered in a certain way, as may occur when objects built in a preceding sub-task are used to help position or build objects in a subsequent sub-task. However, it would be interesting to explore approaches that post-process a W-graph to identify edges across the graph where the same sub-task is being performed (e.g., by looking for edges where similar changes to geometry are made, ignoring geometry that is not changing), both to address this limitation and to gain insights into sub-task order in the graph.
+
+§ 7.5 EXTENSION TO SIMILAR TASKS AND SUB-TASKS
+
+Another interesting direction for future work is to consider how the W-graph approach could be extended to scenarios where the demonstrations used to produce the graph are not of the exact same task, but instead represent workflows for a class of similar tasks (e.g., modeling chairs). We believe the autoencoder approach we have adopted could be valuable for this, as it is less sensitive to variations in the model and potentially able to capture semantic similarities between models of different objects within a class, though more research is required. Sub-goal labels provided by users or learners could be valuable here, building on approaches that have been used for how-to videos [25] and math problems [40]. Given a user's explanation of their process or of different stages in the task, the graph construction algorithm would have access to natural language descriptions in addition to interaction traces and content snapshots, which could be used to group workflows across distinct but related tasks.
+
+Beyond refining our algorithms to work with similar tasks, it would be interesting to investigate how a large corpus of demonstrations could be mined to identify semantically similar sub-tasks (which could then be turned into W-graphs). Multi-W-graphs could conceivably be developed that link together the nodes and edges of individual W-graphs, to represent similarities and relationships between the workflows used for different tasks. For example, nodes representing the legs of a desk, chair, or television stand could be linked across their respective graphs, and edges that represent workflows for creating certain effects (e.g., a particular curvature or geometry) could be linked as well. In the limit, one could imagine a set of linked graphs that collectively encode all the tasks commonly performed in a domain, and feed many downstream applications for workflow recommendation and improvement.
+
+§ 7.6 GENERALIZING TO OTHER SOFTWARE AND DOMAINS
+
+Though we demonstrated our approach for 3D modeling software, the W-graph construction approach would be straightforward to extend to other software applications and domains. For many domains, such as 2D graphics or textual media, the technique could be generalized by simply substituting in an appropriate feature-extraction mechanism for that domain. More challenging would be extending the approach across a variety of software applications, perhaps by different software developers, where instrumentation to gather commands and content is not easy. To approach this, we could imagine using screen-recording data for the content, and accessibility APIs to gather the actions performed by users (an approach used in recent work [17]). Beyond fully automated approaches, learnersourcing approaches [25] could be used to elicit sub-goals that have particular pedagogical value, and these peer-generated sub-goals could be turned into feedback for other learners in the system, using methods similar to those explored in other applications [24].
+
+§ 8 CONCLUSION
+
+This work has contributed a conceptual approach for representing the different means by which a fixed goal can be achieved in feature-rich software, based on recordings of user demonstrations, and has demonstrated a scalable pipeline for constructing such a representation for 3D modeling software. It has also presented a range of applications that could leverage this representation to support users in improving their skill sets over time. Overall, we see this work as a first step toward enabling a new generation of help and learning systems for feature-rich software, powered by data-driven models of tasks and workflows.
+
+§ 9 ACKNOWLEDGEMENTS
+
+Thanks to Autodesk Research for all their support, and in particular to Aditya Sanghi and Kaveh Hassani, who provided invaluable advice and guidance on techniques for comparing 3D models. Thanks also to our study participants for their valuable feedback.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..20ab1a1d0ecb0afa6667b9b6439aecd8feb8fd91
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,465 @@
+# QCue: Queries and Cues for Computer-Facilitated Mind-Mapping
+
+Ting-Ju Chen*
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Sai Ganesh Subramanian†
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Vinayak R. Krishnamurthy‡
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+## Abstract
+
+We introduce a novel workflow, QCue, for providing textual stimulation during mind-mapping. Mind-mapping is a powerful tool whose intent is to allow one to externalize ideas and their relationships surrounding a central problem. The key challenge in mind-mapping is the difficulty in balancing the exploration of different aspects of the problem (breadth) with a detailed exploration of each of those aspects (depth). Our idea behind QCue is based on two mechanisms: (1) computer-generated automatic cues to stimulate the user to explore the breadth of topics based on the temporal and topological evolution of a mind-map, and (2) user-elicited queries for helping the user explore the depth of a given topic. We present a two-phase study wherein the first phase provided insights that led to the development of our workflow for stimulating the user through cues and queries. In the second phase, we present a between-subjects evaluation comparing QCue with a digital mind-mapping workflow without computer intervention. Finally, we present an expert rater evaluation of the mind-maps created by users, in conjunction with user feedback.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+## 1 INTRODUCTION
+
+Mind-maps are widely used for quick visual externalization of one's mental model around a central idea or problem. The underlying principle behind mind-mapping is to provide a means for associative thinking so as to foster the development of concepts that both explore different aspects around a given problem (breadth) and explore each of those aspects in a detail-oriented manner (depth) [49]. The ideas in a mind-map spread out in a hierarchical/tree-like manner [35], which allows for the integration of diverse knowledge elements into a coherent pattern [8] to enable critical thinking and learning through making synaptic connections and divergent exploration [41, 56, 77, 78]. As a result, mind-maps are uniquely suitable for problem understanding/exploration prior to design conceptualization [8].
+
+Problem exploration is critical in helping designers develop new perspectives and driving the search for solutions within the iterative process of identifying features/needs and re-framing the scope [53]. Generally, it requires a combination of two distinct and often conflicting modes of thinking: (1) logical, analytical, and detail-oriented, and (2) lateral, systems-level, and breadth-oriented [40]. Most current efforts in computer-facilitated exploratory tasks focus exclusively on one of these cognitive mechanisms. As a result, there is currently a limited understanding of how this breadth-depth conflict can be addressed. Maintaining the balance between the breadth and depth of exploration can be challenging, especially for first-time users. For atypical and open-ended problem statements (commonplace in design problems), this issue is further pronounced, ultimately leading to creative inhibition and a lack of engagement.
+
+Effective and quick thinking is closely tied to the individual's imagination and ability to create associations between various information chunks [44]. Incidentally, this is also a skill that takes time to develop and manifest in novices. We draw from existing works [17, 22, 27, 29, 38, 60, 74, 79] that emphasize stimulating reflection during exploration tasks. Quayle et al. [60] and Wetzstein et al. [74] indicate that the act of responding to questions can create several avenues for designers to reflect on their assumptions and expand their field of view about a given idea. Adler et al. [3] found that asking questions during a sketching activity keeps participants engaged and reflecting on ambiguities. In fact, asking one question in turn raises a variety of other questions, thereby bringing out more ideas from the user's mind [17]. Goldschmidt [29] further demonstrated that exposing designers to text can lead to higher originality during idea generation.
+
+Our approach is informed by the notion of reflection-in-design [60, 74], which takes an almost Socratic approach to reasoning about the design problem space through question-based verbalization. The premise is that the cognitive processes underlying mind-mapping can be enriched to enable an iterative cycle between exploration, inquiry, and reflection. We apply this reasoning in a digital setting where the user has access to vast knowledge databases. Our key idea is to explore two different ways in which such textual stimuli can be provided. The first is a simple mechanism for query expansion (i.e., asking for suggestions); the second is a means for responding to computer-generated stimuli (i.e., answering questions). Based on this, we present a workflow for mind-mapping wherein the user, while adding and connecting concepts (exploration), can also query a semantic database to explore related concepts (inquiry) and build upon those concepts by answering questions posed by the mind-mapping tool itself (reflection). Our approach is powered by ConceptNet [66], a semantic network that contains a graph-based representation with nodes representing real-world concepts as natural language phrases (e.g., bring to a potluck, provide comfort, etc.) and edges representing semantic relationships. Using related entries for a given concept, as well as the types of relationships, our work investigates methods for textual stimulation during mind-mapping.
+
+### 1.1 Contributions
+
+We make three contributions. First, we present a novel workflow, QCue, that uses the relational ontology offered by ConceptNet [66] to create mechanisms for cognitive stimulus through automated questioning with idea expansion and proactive user queries. Second, we present an adaptive algorithm to facilitate the breadth-depth balance in a mind-mapping task. This algorithm analyzes the temporal and topological evolution of a given mind-map and generates questions (cues) for the user to respond to. Finally, to showcase the reflection-in-design approach, we conduct a between-subjects user study and present a comparative evaluation of QCue with a digital mind-mapping workflow without our algorithm (henceforth referred to as traditional mind-mapping, or TMM). The inter-rater analysis of user-generated mind-maps and the user feedback demonstrate the efficacy of our approach and also reveal new directions for future digital mind-mapping tools.
+
+---
+
+*e-mail: carol0712@tamu.edu
+
+†e-mail: sai3097ganesh@tamu.edu
+
+‡e-mail: vinayak@tamu.edu
+
+---
+
+## 2 Related Works
+
+### 2.1 Problem Exploration in Design
+
+Problem exploration is the process that leads to the discovery of opportunities and insights that drive the innovation of products, services, and systems [18]. Silver et al. [31] underscore the importance of problem-based learning for students to identify what they need to learn in order to solve a problem. Most current methods in early design are generally focused on increasing the probability of coming up with creative solutions by promoting divergent thinking. For instance, brainstorming specifically focuses on the quantity of ideas without judgment [6, 48, 57]. There are many other popular techniques, such as SCAMPER [51], C-Sketch [64], and the morphological matrix [80], that support the formation of new concepts through modification and re-interpretation of rough initial ideas. This, however, also leads to design fixation toward a specific and narrow set of concepts, thereby curtailing the exploration process. In contrast, mind-mapping is a flexible technique that can help investigate a problem from multiple points of view. In this paper, we use mind-mapping as a means for problem exploration, which has been proven to be useful for reflection, communication, and synthesis during idea generation [33, 50]. The structure of mind-maps thus facilitates a wide range of activities, from note-taking to information integration [20], by highlighting the relationships between various concepts and the organization of topic-oriented flow of thoughts [55, 61].
+
+### 2.2 Computer-Based Cognitive Support
+
+There have been significant efforts to engage and facilitate one's critical thinking and learning using digital workflows through pictorial stimuli [30, 32, 71], heuristic-based feedback generation [72], text-mining [46, 65, 67], and speech-based interfaces [19]. Some works [13, 26] have also used gamification as a means to engage the user in the idea generation process. Specifically in engineering design and systems engineering, there are a number of computer systems that support users' creativity during design conceptualization [4, 58, 62, 70]. These are, however, targeted toward highly technical and domain-specific contexts.
+
+While there are works [2, 24, 43, 73] that have explored the possibility of automatically generating mind-maps from speech and text, little is known about how additional computer support will affect the process of creating mind-maps. Works that consider computer support in mind-mapping [7, 25] have evaluated numerous existing mind-mapping software applications and found that pen-and-paper and digital mind-mapping differ in speed and efficiency, analyzing various factors such as the user's intent, ethnography, and the nature of collaboration. Of particular relevance are works by Kerne's group on curation [34, 36, 47] and web semantics [59, 76] for information-based ideation. While these works are not particularly aimed at mind-mapping as a mode of exploration, they share our premise of using information to support free-form visual exploration of ideas.
+
+Recent work by Chen et al. [12] studies collaboration in mind-mapping and offers some insight into how mind-maps evolve during collaboration. They further proposed a computer-as-a-partner approach [13], where they demonstrate human-AI partnership by posing mind-mapping as a two-player game in which the human and the AI (an intelligent agent) take turns adding ideas to a mind-map. While an exciting prospect, we note that there is currently little information regarding how intelligent systems could be used to augment the user's cognitive capabilities for free-form mind-mapping without constraining the process. Recent work by Koch et al. [38] proposed cooperative contextual bandits (CCB), which provide cognitive support in the form of suggestions (visual materials) and explanations (questions to justify the categories of designers' selections from a search engine) to users during mood board design tasks. While CCB treats questions as a means to justify designers' focus and adapt the system accordingly, we emphasize the associative-thinking capability brought by questions formed with semantic relations.
+
+### 2.3 Digital Mind-Mapping
+
+Several digital tools [75] have been proposed to facilitate the mind-mapping activity. However, to our knowledge, these tools contribute little in the way of computer-supported cognitive assistance and idea generation during such thinking and learning processes. They focus on making the operations of constructing maps easier by providing features such as quickly expanding the conceptual domain through web search, linking concepts to on-line resources via URLs (uniform resource locators), and interactive map construction. Even though these tools have demonstrated advantages over traditional mind-mapping tasks [25], mind-map creators can still find the task challenging for several reasons: inability to recall concepts related to a given problem, inherent ambiguity in the central problem, and difficulty in building relationships between different concepts [5, 68]. These difficulties often result in unbalanced idea exploration, producing mind-maps that are either too broad or too detail-oriented. In this work, we aim to investigate computational mechanisms to address this issue.
+
+## 3 Phase I: Preliminary Study
+
+Our first step was to investigate the effect of query expansion (the process of reformulating a given query to improve the retrieval of information) and to observe how users react to conditions where suggestions are actively provided during mind-mapping. For this, we implemented a preliminary interface to record the usage of suggestions retrieved from ConceptNet [42] and conducted a preliminary study using this interface.
+
+### 3.1 Query-Expansion Interface
+
+The idea behind our interface is query expansion enabled by ConceptNet. In comparison to content-retrieval analysis (Wiki) or lexical-semantic databases such as WordNet [52], ConceptNet allows for leveraging a vast organization of related concepts based on a diverse set of relations, resulting in a broader scope of queries. Using this feature of ConceptNet, we developed a simple web-based tool for query-expansion mind-mapping (QEM; Figures 1 and 2) wherein users could add nodes (words/phrases) and link them together to create a map. For every new word or phrase, we used the top 50 query results as suggestions that users could use as alternatives or additional nodes in the map. Our hypothesis was that ConceptNet suggestions would help users create richer mind-maps in comparison to pen-and-paper mind-mapping.
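For illustration, the helper below extracts candidate suggestion phrases from a ConceptNet-style response. It assumes edges shaped like those returned by ConceptNet's public REST API (`start`/`end` endpoints with `label` fields), but parses a pre-fetched dict rather than issuing a live request; the exact fields QEM used are not specified in the paper.

```python
def related_terms(conceptnet_response, limit=50):
    """Extract suggestion phrases from a ConceptNet-style API response.

    `conceptnet_response` is a dict in the shape of ConceptNet's REST
    API output, whose `edges` carry `start`/`end` concepts with `label`
    fields (an assumption for this sketch).  Returns up to `limit`
    unique labels in encounter order.
    """
    terms = []
    for edge in conceptnet_response.get("edges", []):
        for endpoint in (edge.get("start", {}), edge.get("end", {})):
            label = endpoint.get("label")
            if label and label not in terms:
                terms.append(label)
    return terms[:limit]
```

In a live tool, the response dict would come from an HTTP GET against the ConceptNet API for the queried word or phrase, and the returned labels would populate the suggestion list.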
+
+### 3.2 Evaluation Tasks
+
+We designed our tasks for (a) comparing pen-paper mind-mapping and QEMs with respect to user performance, preference, and completion time and (b) to explore how the addition of query-based search affects the spanning of ideas in a typical mind-map creation task. Each participant was asked to create two mind-maps, one for each of the following problem statements:
+
+- Discuss the problem of different forms of pollution, and suggest solutions to minimize them: This problem statement was kept generic and conclusive, and something that would be typically familiar to the target participants, to compare the creation modalities for simple problem statements.
+
+- Modes of human transportation in the year 2118: The intent behind this open-ended problem statement was to encourage users to explore a variety of ideas through both modalities, and observe the utility of query based mind map tools for such problem statements.
+
+
+
+Figure 1: Screenshot of user interface of QEM
+
+
+
+Figure 2: Illustration of QEM workflow ("MM" stands for "mind-mapping")
+
+The topics for the problem statements were selected to provide users with familiar domains while also leaving scope for encouraging new ideas from the participants.
+
+### 3.3 Participants
+
+We recruited 18 students (10 male, 8 female) from engineering majors, between 18 and 32 years of age. Of these, 6 participants were familiar with the concept of mind-maps (with a self-reported score of 4 on a scale of 10). We conducted a between-subjects study, where 9 participants created mind-maps for a given problem statement using QEM (Figure 1), and the remaining 9 used the provided pen and paper.
+
+### 3.4 Procedure
+
+The total time taken for the experiment varied between 30 and 35 minutes. Participants in the QEM group were first introduced to the interface and encouraged to explore it. Subsequently, the participants created the mind-map for the assigned problem. They were allowed a maximum of 10 minutes per problem statement. Finally, on completion, each participant answered a series of questions regarding the ease of use, intuitiveness, and effectiveness of the assigned mind-map creation modality.
+
+### 3.5 Key Findings
+
+#### 3.5.1 User Feedback
+
+We did not find consensus regarding self-reported satisfaction with the mind-maps created by participants in pen-and-paper mind-mapping. Moreover, while pen-and-paper participants agreed that the time for map creation was sufficient, nearly 50% did not agree that they were able to span their ideas properly. On the other hand, 90% of QEM participants reported that they were satisfied with their resulting mind-maps. Over 80% of the QEM participants agreed that they were able to easily search for related words and ideas and add them to the mind-map. In the post-study survey, QEM users suggested adding features such as randomizing the order of words searched for, the ability to query multiple phrases at the same time, and the ability to search for images. One participant mentioned: "The interface wasn't able to do the query if put a pair of words together or search for somebody's name viz. Elon Musk".
+
+#### 3.5.2 Users' Over-dependency on Query-Expansion
+
+As compared to pen-and-paper mind-mapping, we observed two main limitations in our query-expansion workflow. First, the addition of a new idea required querying the word, as we did not allow direct addition of nodes in the mind-map (Figure 2). While we had implemented this to simplify the interactions, it resulted in a break in the user's flow of thought, further inhibiting diversity (especially when the digital tool is able to search across domains and provide a large database for exploration). Second, we observed that users relied heavily on search and query results rather than externalizing their personal views on a subject. Users simply continued searching for the right keyword instead of adding more ideas to the map. This also increased the overall time taken for creating maps using query expansion. Users reported this as well, with statements such as: "I relied a lot on the search results the interface gave me" and "I did not brainstorm a lot while creating the mind map, I spent a lot of time in finding proper terms in the search results to put onto the mind map".
+
+
+
+Figure 3: Illustration of three mechanisms for creating a new node. (a) The user double-clicks a node and enters text to create a new child node. (b) Single right-clicking an existing node allows the user to use top suggestions from ConceptNet to create a new child node. (c) The user double-clicks a cue node to respond, adding a new node. Yellow shading denotes a selected node; red shading denotes a newly created node.
+
+## 4 Phase II: QCUE
+
+Motivated by our preliminary study, the design goal behind QCue is to strike a balance between the idea expansion workflow and cognitive support during digital mind-mapping. We aim to provide computer support in a manner that stimulates the user to think in new directions without intruding on the user's own line of thinking. The algorithm for generating computer support in QCue was developed based on the evolution of the structure of the user-generated map over time, balancing the breadth and depth of exploration.
+
+### 4.1 Workflow Design
+
+QCue was designed primarily to support divergent idea exploration in ideation processes. This requires an interface that allows simple yet fast interactions of the kind that are natural in a traditional pen-paper setting. We formulate the process of mind-mapping as an iterative two-mode sequence: generating as many ideas as possible on a topic (breadth-first exploration), and choosing a smaller subset to refine and detail (depth-first exploration). We further assume our mind-maps to be strictly acyclic graphs (trees). The design of our workflow is based on the following guiding principles:
+
+- In the initial phases of mind-mapping, asking questions helps users externalize their assumptions regarding the topic and stimulates indirect relationships across concepts (latent relations).
+
+- For exploring ideas in depth during later stages, suggesting alternatives to the user helps maintain the rate of idea addition. Here, questions can further help the user look for appropriate suggestions.
+
+### 4.2 Idea Expansion Workflow
+
+We provided the following interactions to users for creating a mind-map using QCue:
+
+- Direct user input: This is the default mode of adding ideas to the map, wherein users simply double-click on an existing node $n_i$ to add content for its child node $n_j$ using an input dialog box in the editor workspace. A link is created automatically between $n_i$ and $n_j$ (Figure 3(a)). This offers users minimal manipulation in the construction of a tree-type structure.
+
+- Asking for suggestions: In situations where a user is unclear about a given direction of exploration from a node in the mind-map, the user can explicitly query ConceptNet with the concerned node (by right-clicking on it). Subsequently, we extract the top 10 related concepts (words and phrases) from ConceptNet and allow users to add any related concept they see fit. Users can continuously explore and expand their search (by right-clicking on any existing node) and add the results of the query (Figure 3(b)).
+
+- Responding to cues: QCue evaluates the nodes in the map and detects nodes that need further exploration. Once a node is identified, QCue automatically generates and adds a question as a cue to the user. The user can react to this cue node (double-click) and choose to either answer, ignore, or delete it. Once a valid (non-empty) answer is recorded, the interface replaces the clicked node with the answer (Figure 3(c)).
+
+- Breadth-vs-depth exploration: Two sliders are provided on the QCue interface to allow adjustment of the exploratory directions guided by the cues (Figure 4(a)). Specifically, users can use the sliders to control whether newly generated cues are positioned breadth- or depth-first at any time during mind-mapping.
+
+### 4.3 Cue Generation Rationale
+
+There are three aspects we considered in designing our cue-generation mechanism. Given the current state of a mind-map, our challenge was to determine (1) where to generate a cue (which nodes in the mind-map need exploration), (2) when a cue should be generated (so as to provide a meaningful but non-intrusive intervention), and (3) what to ask the user (the actual content of the cue). To find out where and when to add cues, we draw from recent work by Chen et al. [13] that explored several algorithms for computer-generated ideas. One of their algorithmic iterations, of particular interest to us, uses the temporal and topological evolution of the mind-map to determine which nodes to target. However, this approach is rendered weak in their work because they modeled the mind-mapping process as a sequential game in which each player (human and computer) takes turns. In our case it is a powerful idea, since the human and the intelligent agent (AI) are not bound to sequential activity: both work asynchronously. This also reflects our core idea of using the computer as a facilitator rather than a collaborator. Based on these observations, we designed our algorithm to utilize the topological and temporal evolution of a given mind-map to determine the potential nodes where we want the user to explore further. For this, we use a strategy similar to the one proposed by Chen et al. [13], with two penalty terms based on the time elapsed since a node was added to the mind-map and its relative topological position (or lineage) with respect to the central problem.
+
+Tesnière [69] notes that continuous thoughts can only be expressed with built connections. Tesnière was originally describing this idea in the context of linguistic syntax: the mind perceives words not in isolation (as they appear in a dictionary) but in the context of other words in a sentence, and it is the sentence that provides the connection between its constituent words. This is our driving guideline for composing the content of a cue. Specifically, we observe that the basic issue faced by users is not the inability to create individual concepts but the difficulty in contextualizing broad categories or topics that link specific concepts. Here, we draw from works that identify semantic relations/connections between concepts to build human-like computer systems [54] and perform design synthesis [39]. We further note that the most important characteristic of mind-maps is their linked structure, which allows users to associate and understand a group of concepts in a short amount of time. Therefore, our strategy for generating cue content is simply to make use of the semantic relationship types already provided in ConceptNet. Our rationale is that providing relationships instead of concept instances will assist the user in two ways: (1) help them think broadly about the problem, thereby assisting them in generating a much higher number of instances, and (2) keep a continuous flow of thoughts throughout the creation process. Specifically, we developed our approach using 25 of the relationship categories along with the weighted assertions from ConceptNet. Note that we did not take all relations from ConceptNet (34 in total) because some, such as RelatedTo, EtymologicallyDerivedFrom, and ExternalURL, may be too ambiguous to users. The algorithm is detailed in the following sections.
+
+
+
+Figure 4: Illustration of the cue generation algorithm using weighted relations retrieved through the ConceptNet Open Data API [66]. Yellow shading denotes a computer-selected potential node; red shading denotes a computer-generated cue. This algorithm is executed at regular intervals of 2 seconds. The user interface of QCue is illustrated in (c).
+
+#### 4.3.1 Time Penalty
+
+Time penalty ($T$) is a measure of the inactivity of a given node in the map. It is defined in terms of the time elapsed since the node's last activity (being linked to a parent or gaining a child). For a newly added node, the time penalty is initialized to 1 and reduced by a constant value $c$ at regular intervals of 2 seconds. The value of $c$ was determined experimentally (see Section 4.4 for details). Once the value reaches 0, it remains 0 thereafter. Therefore, at any given instance, the time penalty ranges from 0 to 1. A default threshold for the time penalty was set and is adjustable by users via the provided slider on the QCue interface. Users can drive breadth-first exploration over recently visited ideas by increasing the threshold value. Given the initial condition $T(n_i) = 1.0$, we compute the time penalty of any node $n_i \in N_M$ at every interval $\Delta t$ as $T(n_i) \leftarrow \max(T(n_i) - c, 0)$.
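As a minimal sketch in the interface's own language, the per-tick update can be written as below; the node shape (`{ label, T }`) is an illustrative assumption, while the constant and the 2-second interval follow Section 4.4:

```javascript
// Time-penalty decay, run once per 2-second tick for every node in the map.
const C = 0.08; // decay constant c (Section 4.4)

function tickTimePenalty(node) {
  // T starts at 1.0 on creation and decays toward 0, never below it.
  node.T = Math.max(node.T - C, 0);
  return node.T;
}

// Example: a freshly added node after one 2-second tick.
const fresh = { label: "travel", T: 1.0 };
tickTimePenalty(fresh); // fresh.T is now 0.92
```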
+
+#### 4.3.2 Lineage Penalty
+
+Lineage penalty ($L$) is a measure of the relative depth of nodes in a given mind-map. It is defined via the normalized child count of a given node. Each node has a lineage weight $x_i$ that is initialized to 0 upon addition and incremented by 1 for every child node added ($x_i \leftarrow$ number of children of $n_i$). To compute the lineage penalty for every node, these weights are normalized to the range 0 to 1 and subtracted from one: $L(n_i) = 1 - x_i / \max_j(x_j)$. Therefore, the lineage penalty is 1 for leaf nodes, 0 for the root node, and between 0 and 1 for the others. QCue's support based on this penalty can steer exploration toward leaf nodes. Note that we give equal importance to all nodes at a given depth of the mind-map. The goal is to determine where to generate a cue based on the evolving topology of the map (an acyclic directed graph).
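A sketch of this computation, assuming each node simply keeps a list of its children (field names are ours):

```javascript
// Lineage penalty L(n) = 1 - x_n / max(x), where x_n is the number of
// children of node n. Leaf nodes get 1; the most-branched node gets 0.
function lineagePenalties(nodes) {
  // Guard against the degenerate all-leaves map to avoid division by zero.
  const maxChildren = Math.max(1, ...nodes.map(n => n.children.length));
  return nodes.map(n => 1 - n.children.length / maxChildren);
}

// Example: a root with 3 children, an inner node with 1, and a leaf.
lineagePenalties([
  { children: ["a", "b", "c"] },
  { children: ["d"] },
  { children: [] },
]); // → [0, ~0.67, 1]
```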
+
+#### 4.3.3 Cue Generation using ConceptNet
+
+Given any state of a mind-map, three primary algorithmic steps are needed to generate cues in the form of questions using the ConceptNet semantic network. First, QCue scouts out good locations (nodes) at which to facilitate exploration using the two penalties. Subsequently, the spotted nodes are queried against ConceptNet to retrieve the corresponding weighted relations for content determination. Finally, based on the determined content, QCue generates a cue node to guide the user and help expand the idea space during mind-map creation.
+
+- Scouting: For every node in the current state of a mind-map, we compute its time penalty and lineage penalty. Then, based on the currently adjusted thresholds $(x_t, x_l)$, where $x_t$ and $x_l$ denote the thresholds for the time and lineage penalties respectively, QCue spots potential nodes $N_E$ for exploration. Specifically, if $T(n_i) < x_t$ or $L(n_i) < x_l$, then $N_E \leftarrow N_E \cup \{n_i\}$ (Figure 4(a)). If no node is within the thresholds, all nodes in the current mind-map are considered potential nodes.
+
+- Content determination: In this step, we query the spotted nodes $N_E$ against ConceptNet. A list of query results containing weighted relations is retrieved for each potential node (Figure 4(b)). In order to find the node with the maximum associative potential, we subdivide each list categorically based on the 25 relationship types provided by ConceptNet. Subsequently, we select the subdivision with the highest sum of relation weights (weights provided by ConceptNet) and use it as the basis for a new cue's content (Figure 4(b)). Note that once a subdivision has been used to generate a cue node, it is removed from the future selection pool. For example, TypeOf cannot be selected again to generate a cue node for travel (Figure 4(c)).
+
+
+
+Figure 5: Screenshot of the user interface of QCue.
+
+- Cue generation: Using the subdivision selected during content determination, QCue formulates a new cue based on fixed templates (Figure 4(c)). To avoid repetition of cues during mind-map creation, we construct at least three templates (combinations of query + verb + relationship type) for each relationship category provided by ConceptNet. Example cues for the query knife and the relationship type CapableOf are: "What can knife do?", "What is knife capable of doing?", and "Which task is knife capable of performing?".
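The scouting and cue-generation steps above can be sketched as follows. Only the scouting rule and the CapableOf templates come from the description above; the node shape, threshold fallback handling, and template storage are illustrative assumptions:

```javascript
// Scouting: collect nodes whose time or lineage penalty is below the
// current thresholds; fall back to all nodes if none qualify.
function scout(nodes, xt, xl) {
  const spotted = nodes.filter(n => n.T < xt || n.L < xl);
  return spotted.length > 0 ? spotted : nodes;
}

// Cue generation: at least three fixed templates per relationship type
// (query + verb + relationship); only CapableOf is shown here.
const TEMPLATES = {
  CapableOf: [
    q => `What can ${q} do?`,
    q => `What is ${q} capable of doing?`,
    q => `Which task is ${q} capable of performing?`,
  ],
};

function makeCue(query, relation, variant) {
  const t = TEMPLATES[relation];
  return t[variant % t.length](query);
}

makeCue("knife", "CapableOf", 0); // → "What can knife do?"
```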
+
+### 4.4 Implementation
+
+Our QCue interface is a JavaScript web application that runs entirely in the browser using NodeJS and D3JS (Figure 5). We incorporated the JSON-LD (Linked Data) API offered by ConceptNet into our interface. The nodes of ConceptNet are words and phrases of natural language. Each node contains an edge list holding relations such as UsedFor stored in rel, the corresponding weight stored in weight, and human-readable labels stored in start and end. As the user queries a word or phrase in natural language (as one node), we search all the relations in this node (filtered to English) and extract the non-repeated human-readable labels.
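A sketch of this extraction step over a ConceptNet JSON-LD response: the `edges` shape (`rel.label`, `weight`, `start`/`end` labels and languages) follows the public API, while the filtering and ordering details are our paraphrase of the step described above:

```javascript
// Extract non-repeated, English, human-readable concepts related to a
// queried label, highest-weight first, capped at the top 10.
function extractRelations(data, queryLabel) {
  const seen = new Set();
  const results = [];
  // Visit highest-weight edges first so duplicates keep their best relation.
  const edges = [...data.edges].sort((a, b) => b.weight - a.weight);
  for (const edge of edges) {
    // Keep English edges only, as in the filtering step above.
    if (edge.start.language !== "en" || edge.end.language !== "en") continue;
    // The related concept is whichever endpoint is not the query itself.
    const concept =
      edge.start.label === queryLabel ? edge.end.label : edge.start.label;
    if (seen.has(concept)) continue; // drop repeated labels
    seen.add(concept);
    results.push({ concept, relation: edge.rel.label, weight: edge.weight });
  }
  return results.slice(0, 10); // top 10, as in the query workflow
}
```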
+
+On the QCue interface, users can spatially organize ideas in the mind-map by dragging ideas, connected by force-directed links, around the editor workspace. The force-directed layout produces an aesthetically pleasing graph while maintaining comprehensibility, even with large datasets. Users can also adjust the sliders to shape their exploration preference toward either wider or deeper maps. QCue employs a listener function to run the cue generation algorithm at fixed intervals of 2 seconds. We also developed a web-based interface for TMM, which is essentially the same as QCue but without any computer support (cues and queries).
+
+- Data format and storage: Each mind-map is stored in a local folder under a distinct user ID. To store the structure of a mind-map, we defined a JavaScript prototype containing nodes, links, timestamps, and other appearance data (e.g., color, size, font). A mind-map can be regenerated by importing the file data into QCue. Videos of the mind-mapping sessions are also stored within the respective folders for further analysis.
+
+- Choice of penalty and threshold: To find an appropriate default value for the constant $c$ in the time penalty and for the thresholds of the two penalties, we conducted several pilot studies (Section 5.1) to observe how people mind-map in a regular setting (TMM) and how people get acquainted with QCue. The final assignments are $c = 0.08$ and $x_t = x_l = 0.6$ at $t = 0$.
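The stored mind-map prototype described above might look like the following rough sketch, serialized to JSON so a session can be saved and re-imported; all field names beyond nodes, links, and timestamps are our assumptions:

```javascript
// Minimal mind-map store: nodes, links, timestamps, and appearance data.
class MindMap {
  constructor(userId) {
    this.userId = userId;
    this.nodes = []; // { id, label, createdAt, color, size }
    this.links = []; // { source, target }
  }
  addNode(label, parentId = null) {
    const node = {
      id: this.nodes.length,
      label,
      createdAt: Date.now(),
      color: "#e8e8e8", // default appearance, adjustable per node
      size: 12,
    };
    this.nodes.push(node);
    if (parentId !== null) this.links.push({ source: parentId, target: node.id });
    return node.id;
  }
  serialize() { return JSON.stringify(this); }
  // Regenerate a mind-map from previously exported file data.
  static load(json) { return Object.assign(new MindMap(), JSON.parse(json)); }
}
```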
+
+## 5 Evaluation Methodology
+
+### 5.1 Pilot Study
+
+We conducted a pilot study with 12 participants with the intention to (1) observe how users react to the cue-query workflow, (2) determine ideas and problem statements that could serve as our evaluation tasks, and (3) determine appropriate initial parameters (such as the lineage and time thresholds). In order to observe users' thinking processes while creating a mind-map, we designed four different problem statements, namely pollution, toys in the future, camping underwater, and wedding on a space station. We encouraged the users to explore the basic idea, causes, effects, and potential solutions of the given problem statement.
+
+Participants were both surprised by and interested in topics such as wedding on a space station and underwater camping. Specifically, for open-ended topics, they indicated a need for time to prepare themselves before beginning the mind-mapping task. For topics such as pollution and toys in the future, they showed an immediate inclination to start the session. Since we wanted to test the robustness of our algorithm with respect to the given topic, we decided to conduct the user study with two topics of opposite extremes: pollution (T1), a seemingly familiar topic, and underwater camping (T2), a more open-ended topic that is uncommon to think about.
+
+| Condition | Structure (1-4) | Exploratory (1-4) | Communication (1-4) | Extent of coverage (1-4) | Quantity (raw) | Variety (0-1) | Novelty (0-1) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| TMM T1 | 2.29 | 2.42 | 2.38 | 2.25 | 31 | 0.5 | 0.125 |
+| TMM T2 | 2.54 | 2.5 | 2.25 | 2.5 | 34 | 0.48 | 0.12 |
+| QCue T1 | 3.29 | 3.29 | 2.75 | 2.79 | 38 | 0.66 | 0.19 |
+| QCue T2 | 2.63 | 2.54 | 2.29 | 2.58 | 41 | 0.61 | 0.17 |
+| Average TMM | 2.42 | 2.46 | 2.31 | 2.38 | 32.5 | 0.49 | 0.12 |
+| Average QCue | 2.96 | 2.92 | 2.52 | 2.69 | 39.5 | 0.63 | 0.18 |
+
+Figure 6: Table of average ratings for each metric across four conditions: TMM and QCue, each with T1 and T2. Ratings are on a scale of 1 to 4: 1 – Poor, 2 – Average, 3 – Good, 4 – Excellent.
+
+### 5.2 Participants
+
+For the user study, we recruited 24 undergraduate and graduate students from across a university campus. Our participants came from engineering, architecture, and science backgrounds and were within the age range of 19-30 years. Six (6) participants had prior experience with creating mind-maps. For those who had no experience with mind-mapping, we prepared a short presentation about the general spirit and principles of the technique and provided an additional 5 to 10 minutes to practice. We conducted a between-subjects study to minimize learning effects across conditions: 12 participants created mind-maps for a given topic using TMM, and the remaining 12 used QCue.
+
+### 5.3 Tasks
+
+In total, across the two experimental conditions, the 24 participants created 48 mind-maps, one for each central topic. The total time taken during the experiment varied between 30 and 40 minutes, and the order of the two central topics was randomized across participants. After describing the setup and the purpose of the study, we described the features of the assigned interface and demonstrated its usage. For each participant and mind-mapping task, we recorded a video of the task, the completion time, and the time-stamped ideas generated for each mind-map. Each participant performed the following tasks:
+
+- Practice: To familiarize themselves with the interactions of the assigned interface, the participants were given a brief demonstration of the software and its functions. They were allowed to practice with the interface for 5 to 10 minutes, with guidance when required.
+
+- Mind-mapping with T1 & T2: Participants were asked to create a mind-map using the assigned interface. The duration of each mind-mapping session was 10 minutes per central topic. Participants were encouraged to explore the central topic as fully as they could. The workspace was cleared after the completion of each mind-map.
+
+- Questionnaire: Finally, each participant answered a series of questions regarding their exploration of the central topic before and after the creation of each mind-map, and their perception of the interface in terms of ease of use, intuitiveness, and assistance. We also conducted post-study interviews to collect open-ended feedback regarding the experience.
+
+### 5.4 Metrics
+
+Mind-maps recorded during the study were de-identified. The designed metrics assessed all ideas generated in each mind-map based on four primary aspects: quantity, quality, novelty, and variety [45, 63]. The quantity metric is directly measured as the total number of nodes in a given mind-map. The variety of each mind-map is given by the number of idea categories that raters find in the mind-map, and the novelty score is a measure of how unique the ideas represented in a given mind-map are [12, 45]. For a fair assessment of the quality of mind-maps for both central topics, we adapted the mind-map assessment rubric [1, 12], and the raters evaluated the mind-maps based on four major criteria: structure, exploratory, communication, and extent of coverage$^1$. These metrics are commonly used to evaluate ideation success in open-ended design tasks [10].
+
+Here, we would like to point to similar metrics that have been used in the HCI literature on creativity support. For instance, Kerne's elemental metrics [37] for information-based ideation (IBI) are adapted from Shah's metrics [63]. While the metrics we chose have been used in previous mind-mapping studies, they also have connections with the creativity support index (CSI) [9, 14] and ideational fluency [37] (for example, holistic IBI metrics are similar to our "structure" metric, and our post-study questions are functionally similar to a CSI tailored for mind-mapping).
+
+### 5.5 Raters
+
+Two raters were recruited to assess the mind-maps created by the users. These raters were senior designers in the mechanical engineering design domain, with multiple design experiences from their coursework and research. The raters were unaware of the study design and tasks and were not furnished with information related to the general study hypotheses. The 48 mind-maps created across both interfaces were presented to each rater in a randomized order. The raters evaluated every mind-map on a scale of 1-4 for each criterion discussed above. Each mind-map was then assigned a score out of a total of 16 points, which was further used to compare its quality with respect to the other mind-maps.
+
+For a given central topic, the evaluation depends on the knowledge of the raters and their interpretation of what the metrics mean. In our study, the two raters independently performed subjective ratings of every idea/concept in a mind-map. This evaluation technique has the advantage of capturing aspects of creative work that are subjectively recognized by raters but are difficult to define objectively. After the independent evaluations, the ratings from the two raters were checked for consensus.
+
+---
+
+$^1$ Please refer to the literature for a detailed explanation of these metrics.
+
+---
+
+
+
+Figure 7: General trends in how users generated ideas for different topics (T1 and T2) during TMM and QCue. Each bar represents the average count of total nodes in the given time frame (per 1 minute).
+
+## 6 Results
+
+### 6.1 Ratings for User-Generated Mind-Maps
+
+For metrics admitting integer values (structure, exploratory, communication, and extent of coverage), we calculated Cohen's kappa to ensure inter-rater agreement. The Cohen's kappa value was found to be in the range 0.4-0.6, showing a moderate level of inter-rater agreement [15]. For metrics admitting real/scalar values (variety and novelty), we calculated Pearson's correlation coefficient to find the degree of correlation between the raters' ratings. This correlation coefficient was found to be close to 0.8, which indicates an acceptable range of agreement [15].
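For reference, Cohen's kappa on two raters' categorical scores can be computed as below. This is the standard formula $(p_o - p_e)/(1 - p_e)$, not code from the study:

```javascript
// Cohen's kappa: p_o is the observed agreement rate, p_e the agreement
// expected by chance given each rater's marginal category frequencies.
function cohensKappa(r1, r2) {
  const n = r1.length;
  const po = r1.filter((v, i) => v === r2[i]).length / n;
  let pe = 0;
  for (const c of new Set([...r1, ...r2])) {
    pe += (r1.filter(v => v === c).length / n) *
          (r2.filter(v => v === c).length / n);
  }
  return (po - pe) / (1 - pe);
}

// Example: substantial but imperfect agreement on 1-4 ratings.
cohensKappa([3, 2, 3, 4, 1, 2], [3, 2, 2, 4, 1, 3]); // → 7/13 ≈ 0.54
```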
+
+Overall, the ratings for QCue were higher than for TMM across all metrics (Figure 6). A two-way ANOVA was conducted with two factors of comparison: (1) the choice of topic (pollution or underwater camping) and (2) the choice of interface (QCue or TMM). Although the data for certain metrics were non-normal, we proceeded with ANOVA since it is resistant to moderate deviations from normality. The mean ratings for structure were higher for QCue (2.96) in comparison to TMM (2.42, p-value 0.007). Similarly, the mean score for the exploratory metric is higher for QCue (2.92) than for TMM (2.46, p-value 0.008). This suggests that the mind-maps created using QCue were relatively more balanced (in depth and breadth) and more comprehensively explored. Further, we recorded a better variety score for QCue (0.63) relative to TMM (0.49, p-value 0.009). Finally, we also recorded a larger number of nodes added in QCue (39.5) relative to TMM (32.5, p-value 0.048). These observations indicate that the cue-query mechanism assisted the users in (1) exploring diverse aspects of the given topic and (2) making non-obvious relationships across ideas.
+
+We also carried out a main effect analysis (one-way ANOVA) between pollution and underwater camping independently for TMM and QCue. While the difference in outcome was not pronounced for TMM, a significant difference was found across topics in the structure ($p = 0.01$) and exploratory ($p = 0.002$) metrics for QCue. This suggests that the QCue workflow is dependent on the type of central topic explored. The overall ratings are higher in QCue for pollution (for example, in Figure 6 the mean structure value increases from 2.29 to 3.29 for pollution).
+
+
+
+Figure 8: Comparison of trends in how QCue users generated ideas for T1 and T2 using the three modes (direct user input, cue node response, and query), stacked one above the other. The frequencies are averaged across the 12 users.
+
+### 6.2 Temporal Trend for Node Addition
+
+In general, the rate of node addition decreased over time in the TMM workflow regardless of the topic. For QCue, the node addition rate was comparatively steady, indicating that cues and queries helped sustain user engagement in exploration even during the later stages of the tasks (Figure 7).
+
+While there are three modes for node addition in QCue, as expected, the number of cues and queries used depended on users' familiarity with the central topic. Overall, we observed that users tended to ask for queries in the first few minutes of mind-mapping and proceeded to use cue nodes in the middle stages of the given time (Figure 8). For pollution, the number of answered cue nodes increases with time; specifically, users made use of cues between the 5- and 6-minute marks. For underwater camping, we noticed an increasing number of cue nodes being answered around the 2-minute and the 6- to 7-minute marks. This indicates two primary uses of cues. First, when users have exhausted their prior knowledge of the topic and reach an impasse during the middle stages of mind-mapping (the 5- to 7-minute mark in our case), cues help them reflect on existing concepts and discover new relationships to generate further ideas. Second, for open-ended problems such as underwater camping, cues helped users explore different directions around the central idea in the beginning, which impacted the exploration of ideas in the later stages of the task. On the other hand, we were surprised to find that the percentage of nodes added through the query mode was lower than through the cue mode. This suggests that users were generally more engaged when actively involved in the cycle of exploration and reflection driven by cues than when receiving direct answers provided by queries.
+
+### 6.3 User Feedback: Cue vs Query
+
+To help us evaluate the effectiveness of our algorithm, the participants filled out a questionnaire after the creation of each mind-map. We also encouraged the participants to give open-ended feedback to support their ratings.
+
+
+
+Figure 9: Two user-created mind-maps with underwater camping (T2) as the central topic, using (a) TMM and (b) QCue. Each label represents the timestamp of node addition and the type of addition.
+
+Responses were mixed when users were asked whether the cues were useful in the process of mind-mapping. Around 60% of the users agreed that the cues helped them develop new lines of thought at the right time. One user stated, "Questions (or cues) were helpful at the point when you get fixated. They give you other dimensions/ideas to expand your thought". The remaining users stated that they did not find the cues helpful because they already had ideas on how to develop the mind-map: "I felt like the questions (or cues) would make me lose my train of thought". Users who found it difficult to add to existing ideas in the mind-map used the cues and queries extensively to build and visualize new dimensions of the central idea. These users felt that the cues helped them reach unexplored avenues: "I started with a particular topic, and ended at a completely unrelated topic. It enabled me to push my creativity limits further".
+
+Regarding the usage of queries, over 80% of users agreed that queries were useful regardless of the topic. For underwater camping, the 20% of users who disagreed suggested that the system should include queries more closely linked to the context of the central idea. Specifically, one user stated: "Some suggestions (or queries) under certain context might not be straight forward".
+
+What is interesting to note here is that while we received mixed responses on the cues and overwhelmingly positive responses on the queries, we also recorded a higher number of user interactions with cues than with queries. The likely explanation for this seeming contradiction is that it is easier to answer a cue than to look for a suggestion that fits the user's need at a given instant. Second, querying for a suggestion would also mean that the user was already clear about what they wanted to add; this clarity ultimately resulted in users adding the node manually instead. Therefore, we believe that the users tacitly inclined toward answering the cues generated by our system.
+
+### 6.4 User Feedback: QCue as a Workflow
+
+In comparison to TMM, users of QCue performed more consistently during the creation of mind-maps: the frequency of generating new nodes was comparatively steady throughout the process. As one user stated: "the questions helped me to create new chain of thoughts. I might not have the answer for the question (or cues) directly, but it provided new aspects to the given idea. Especially for underwater camping". One user with negligible experience in brainstorming shared her excitement: "I was fully engaged in the creation process. I was expecting questions from all different angles". We also found that QCue users kept generating new directions of ideas with respect to the central topic even after the initial creation phase, whereas TMM users tended to focus on a fixed number of directions (Figure 9). This indicates a capability of QCue: problems co-evolved with the development of the idea space during the mind-mapping process.
+
+## 7 Discussions
+
+### 7.1 Limitations
+
+There are two main limitations in this work. First, a majority of the recruited users had little to no experience in mind-mapping. While this allowed us to demonstrate the capability of QCue in guiding novices to explore problem spaces, we believe that including expert users in future studies can help us (1) understand how differently they perform using this workflow and (2) lead to a richer discussion on how expertise can be transferred to our system toward better facilitation. Second, one of the key challenges we faced was the lack of a robust methodology for determining the effect of cue-based stimuli during mind-mapping (how users may have been influenced by cues and queries without explicitly using them to add nodes). While we characterize this effect by the number of cues answered and the number of suggestions assimilated directly into the mind-map, we believe a deeper qualitative study of the mind-mapping process can reveal valuable insights. We plan to conduct such an analysis as our immediate next step.
+
+### 7.2 Cue & Query Formulation
+
+One of the challenges we faced in our implementation of cue generation was the grammatically and semantically effective formulation of the questions themselves. Recently, Gilon et al. [28] demonstrated a design-by-analogy workflow using ConceptNet, noting the lack of domain specificity as an issue. In this regard, there is scope for further investigation of natural language processing methods as well as new databases for constructing cues in specific domains such as engineering design. More importantly, users frequently asked for context-dependent queries. For problems such as underwater camping, this is a challenging task that may require advances in artificial intelligence approaches for generating suggestions and cues based on real-time synthesis of ideas from the information retrieved from a knowledge database. We did a preliminary exploration in this direction using a Markov chain based question generation method [16]. However, the cues generated were not well phrased, indicating the need to study other generative language models [23].
+
+### 7.3 Cue Representation
+
+The rationale behind providing cues comes from being able to stimulate the user to generate and add ideas. We believe there is a richer space of representations, both textual and graphical, that can potentially enhance cognitive stimulation, particularly for open-ended problems. For instance, textual stimuli can be produced through simple unsolicited suggestions from ConceptNet (for example: "concept?") or through advanced mechanisms based on higher-level contextual interpretation (e.g., questioning based on second-order neighbors in the ConceptNet graph). From a graphical perspective, the use of visual content databases such as ShapeNet [11] and ImageNet [21] may lead to novel ways of providing stimuli to users. Several avenues remain to be investigated in terms of colors, images, arrows, and dimensions that reflect personal interest and individuality [8].
+
+## 8 Conclusion
+
+Our intention in this research was to augment users' capability to discover more about a given problem during mind-mapping. To this end, we introduced and investigated a new digital workflow (QCue) that provides cues to users based on the current state of the mind-map and also allows them to query for suggestions. While our experiments demonstrated the potential of such mechanisms in stimulating idea exploration, the fundamental take-away is that such stimulation requires a balancing act between intervening in the user's own line of thought with computer-generated cues and providing suggestions in response to the user's queries. Furthermore, our work shows the impact of computer-facilitated textual stimuli, particularly for those with little practice in brainstorming-type tasks. We believe that QCue is only a step toward a much richer set of research directions in the domain of intelligent cognitive assistants.
+
+## References
+
+[1] Scoring rubric for mind maps. https://www.claytonschools.net/cms/lib/MO@1000419/Centricity/Domain/206/Mind%20Map%20Rubric.pdf, 2008.
+
+[2] M. Abdeen, R. El-Sahan, A. Ismaeil, S. El-Harouny, M. Shalaby, and M. C. E. Yagoub. Direct automatic generation of mind maps from text with m2gen. In 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), pp. 95-99, Sept 2009. doi: 10.1109/TIC-STH.2009.5444360
+
+[3] A. Adler and R. Davis. Speech and sketching: An empirical study of multimodal interaction. In Proceedings of the 4th Eurographics workshop on Sketch-based interfaces and modeling, pp. 83-90. ACM, 2007.
+
+[4] O. Barros. A pragmatic approach to computer assisted system building. pp. 613-622 vol.3, 02 1990. doi: 10.1109/HICSS. 1990.205399
+
+[5] A. Bleakley. Resource-based learning activities: Information literacy for high school students. American Library Association, 1994.
+
+[6] H. Breuer, M. Hewing, and F. Steinhoff. Divergent innovation: Fostering and managing the fuzzy front end of innovation. In PICMET'09- 2009 Portland International Conference on Management of Engineering & Technology, pp. 754-761. IEEE, 2009.
+
+[7] S. Buisine, G. Besacier, M. Najm, A. Aoussat, and F. Vernier. Computer-supported creativity: Evaluation of a tabletop mind-map application. In Proceedings of the 7th International Conference on Engineering Psychology and Cognitive Ergonomics, EPCE'07, pp. 22-31. Springer-Verlag, Berlin, Heidelberg, 2007.
+
+[8] T. Buzan. The ultimate book of mind maps: unlock your creativity, boost your memory, change your life. HarperCollins UK, 2006.
+
+[9] E. A. Carroll, C. Latulipe, R. Fung, and M. Terry. Creativity factor evaluation: towards a standardized survey metric for creativity support. In Proceedings of the seventh ACM conference on Creativity and cognition, pp. 127-136. ACM, 2009.
+
+[10] J. Chan, K. Fu, C. Schunn, J. Cagan, K. Wood, and K. Kotovsky. On the benefits and pitfalls of analogies for innovative design: Ideation performance based on analogical distance, commonness, and modality of examples. Journal of mechanical design, 133(8):081004, 2011.
+
+[11] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. Shapenet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015.
+
+[12] T.-J. Chen, R. R. Mohanty, M. A. H. Rodriguez, and V. R. Krishna-murthy. Collaborative mind-mapping: A study of patterns, strategies, and evolution of maps created by peer-pairs. In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019.
+
+[13] T.-J. Chen, S. G. Subramanian, and V. R. Krishnamurthy. Mini-map: Mixed-initiative mind-mapping via contextual query expansion. In AIAA Scitech 2019 Forum, p. 2347, 2019.
+
+[14] E. Cherry and C. Latulipe. Quantifying the creativity support of digital tools through the creativity support index. ACM Transactions on Computer-Human Interaction (TOCHI), 21(4):21, 2014.
+
+[15] D. Clark-Carter. Doing Quantitative Psychological Research: From Design to Report. Taylor & Francis, Inc., UK., 1997.
+
+[16] J. M. Conroy and D. P. O'Leary. Text summarization via hidden Markov models. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01, pp. 406-407. ACM, New York, NY, USA, 2001. doi: 10.1145/383952.384042
+
+[17] M. Copeland. Socratic circles: Fostering critical and creative thinking in middle and high school. Stenhouse Publishers, 2005.
+
+[18] N. Cross. Engineering design methods: strategies for product design. Wiley, 2008.
+
+[19] A. V. D'Antoni and G. P. Zipp. Applications of the mind map learning technique in chiropractic education: A pilot study and literature review. Journal of Chiropractic Humanities, 13:2-11, 2006.
+
+[20] A. V. D'Antoni, G. P. Zipp, and V. G. Olson. Interrater reliability of the mind map assessment rubric in a cohort of medical students. BMC Medical Education, 9(1):19, 2009.
+
+[21] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
+
+[22] A. R. Dennis, R. K. Minas, and A. P. Bhagwatwar. Sparking creativity: Improving electronic brainstorming with individual cognitive priming. Journal of Management Information Systems, 29(4):195-216, 2013. doi: 10.2753/MIS0742-1222290407
+
+[23] X. Du, J. Shao, and C. Cardie. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106, 2017.
+
+[24] M. Elhoseiny and A. Elgammal. Text to multi-level mindmaps. Multimedia Tools and Applications, 75(8):4217-4244, Apr 2016. doi: 10.1007/s11042-015-2467-y
+
+[25] H. Faste and H. Lin. The untapped promise of digital mind maps. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 1017-1026. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208548
+
+[26] H. Furukawa, T. Yuizono, and A. Sakai. A design of distributed brainstorming support tool with gamification elements. In 2016 11th International Conference on Knowledge, Information and Creativity Support Systems (KICSS), pp. 1-6. IEEE, 2016.
+
+[27] M. Ghajargar and M. Wiberg. Thinking with interactive artifacts: Reflection as a concept in design outcomes. Design Issues, 34(2):48- 63, April 2018. doi: 10.1162/DESI_a_00485
+
+[28] K. Gilon, J. Chan, F. Y. Ng, H. Lifshitz-Assaf, A. Kittur, and D. Shahaf. Analogy mining for specific design needs. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 121:1-121:11. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173695
+
+[29] G. Goldschmidt and A. L. Sever. Inspiring design ideas with texts. Design Studies, 32(2):139 - 155, 2011. doi: 10.1016/j.destud.2010.09. 006
+
+[30] J. Han, F. Shi, L. Chen, and P. R. Childs. The combinator-a computer-based tool for creative idea generation based on a simulation approach. Design Science, 4, 2018.
+
+[31] C. Hmelo-Silver. Problem-based learning: What and how do students learn? Educational Psychology Review, 16:235-266, 01 2004. doi: 10.1023/B:EDPR.0000034022.16470.f3
+
+[32] B. Indurkhya. On the Role of Computers in Creativity-Support Systems, pp. 213-227. 02 2016. doi: 10.1007/978-3-319-19090-7_17
+
+[33] S. G. Isaksen, K. B. Dorval, and D. J. Treffinger. Creative approaches to problem solving: A framework for change. Kendall Hunt Publishing Company, 2000.
+
+[34] A. Jain, N. Lupfer, Y. Qu, R. Linder, A. Kerne, and S. M. Smith. Evaluating TweetBubble with Ideation Metrics of Exploratory Browsing. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition - C&C '15, pp. 53-62. ACM Press, New York, New York, USA, 2015. doi: 10.1145/2757226.2757239
+
+[35] D. H. Jonassen, K. Beissner, and M. Yacci. Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge. Psychology Press, 1993.
+
+[36] A. Kerne, A. Billingsley, N. Lupfer, R. Linder, Y. Qu, A. Valdez, A. Jain, K. Keith, M. Carrasco, and J. Vanegas. Strategies of Free-Form Web Curation. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition - C&C '17, pp. 380-392. ACM Press, New York, New York, USA, 2017. doi: 10.1145/3059454.3059471
+
+[37] A. Kerne, A. M. Webb, S. M. Smith, R. Linder, N. Lupfer, Y. Qu, J. Moeller, and S. Damaraju. Using metrics of curation to evaluate information-based ideation. ACM Trans. Comput.-Hum. Interact., 21(3):14:1-14:48, June 2014. doi: 10.1145/2591677
+
+[38] J. Koch, A. Lucero, L. Hegemann, and A. Oulasvirta. May ai?: Design ideation with cooperative contextual bandits. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 633:1-633:12. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300863
+
+[39] J. Kolko. Abductive thinking and sensemaking: The drivers of design synthesis. Design issues, 26(1):15-28, 2010.
+
+[40] J. Kolko. The divisiveness of design thinking. interactions, 25(3):28- 34, 2018.
+
+[41] P. Kommers and J. Lanzing. Students' concept mapping for hypermedia design: Navigation through world wide web (www) space and self-assessment. J. Interact. Learn. Res., 8(3-4):421-455, Dec. 1997.
+
+[42] A. Kotov and C. Zhai. Tapping into knowledge base for concept feedback: Leveraging conceptnet to improve search results for difficult queries. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, WSDM '12, pp. 403-412. ACM, New York, NY, USA, 2012. doi: 10.1145/2124295.2124344
+
+[43] R. Kudelić, M. Konecki, and M. Maleković. Mind map generator software model with text mining algorithm. In Proceedings of the ITI 2011, 33rd International Conference on Information Technology Interfaces, pp. 487-494, June 2011.
+
+[44] B. M. Kudrowitz. Haha and aha!: Creativity, idea generation, improvisational humor, and product design. PhD thesis, Massachusetts Institute of Technology, 2010.
+
+[45] J. S. Linsey, E. Clauss, T. Kurtoglu, J. Murphy, K. Wood, and A. Markman. An experimental study of group idea generation techniques: understanding the roles of idea representation and viewing methods. Journal of Mechanical Design, 133(3):031008, 2011.
+
+[46] J. Luo, S. Sarica, and K. L. Wood. Computer-aided design ideation using innogps. In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers Digital Collection.
+
+[47] N. Lupfer, A. Kerne, A. M. Webb, and R. Linder. Patterns of Free-form Curation. In Proceedings of the 2016 ACM on Multimedia Conference - MM '16, pp. 12-21. ACM Press, New York, New York, USA, 2016. doi: 10.1145/2964284.2964303
+
+[48] E. M. Kalargiros and M. Manning. Divergent thinking and brainstorming in perspective: Implications for organization change and innovation. Research in Organizational Change and Development, 23:293-327, 06 2015. doi: 10.1108/S0897-301620150000023007
+
+[49] C. P. Malycha and G. W. Maier. The random-map technique: Enhancing mind-mapping with a conceptual combination technique to foster creative potential. Creativity Research Journal, 29(2):114-124, 2017.
+
+[50] K. S. Marshall, R. Crawford, and D. Jensen. Analogy seeded mind-maps: A comparison of verbal and pictorial representation of analogies in the concept generation process. In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016.
+
+[51] M. Michalko. Thinkertoys: A handbook of creative-thinking techniques. Ten Speed Press, 2010.
+
+[52] G. A. Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995.
+
+[53] J. K. Murray, J. A. Studer, S. R. Daly, S. McKilligan, and C. M. Seifert. Design by taking perspectives: How engineers explore problems. Journal of Engineering Education, 2019.
+
+[54] V. Nastase, P. Nakov, D. O. Seaghdha, and S. Szpakowicz. Semantic relations between nominals. Synthesis lectures on human language technologies, 6(1):1-119, 2013.
+
+[55] J. C. Nesbit and O. O. Adesope. Learning with concept and knowledge maps: A meta-analysis. Review of educational research, 76(3):413-448, 2006.
+
+[56] R. O'Connell. Mind mapping for critical thinking. pp. 354-386, 01 2014. doi: 10.4018/978-1-4666-5816-5.ch014
+
+[57] A. F. Osborne. Applied imagination. New York: Scribner, 1963.
+
+[58] B. Pak, O. Onder Ozener, and E. Arzu. Utilizing customizable generative design tools in digital design studio: Xp-gen experimental form generator. international journal of architectural computing, 4:21-33, 12 2006. doi: 10.1260/147807706779398962
+
+[59] Y. Qu, A. Kerne, N. Lupfer, R. Linder, and A. Jain. Metadata type system. In Proceedings of the 2014 ACM SIGCHI symposium on Engineering interactive computing systems - EICS '14, pp. 107-116. ACM Press, New York, New York, USA, 2014. doi: 10.1145/2607023.2607030
+
+[60] M. Quayle and D. Paterson. Techniques for encouraging reflection in design. Journal of Architectural Education (1984-), 42(2):30-42, 1989.
+
+[61] M. A. Ruiz-Primo, S. E. Schultz, M. Li, and R. J. Shavelson. Comparison of the reliability and validity of scores from two concept-mapping techniques. Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching, 38(2):260-278, 2001.
+
+[62] S. Sarica, J. Luo, and K. L. Wood. Technet: Technology semantic network based on patent data. Expert Systems with Applications, 142:112995, 2020. doi: 10.1016/j.eswa.2019.112995
+
+[63] J. J. Shah, S. M. Smith, and N. Vargas-Hernandez. Metrics for measuring ideation effectiveness. Design studies, 24(2):111-134, 2003.
+
+[64] J. J. Shah, N. Vargas-Hernandez, J. D. Summers, and S. Kulkarni. Collaborative sketching (c-sketch)-an idea generation technique for engineering design. The Journal of Creative Behavior, 35(3):168-198, 2001.
+
+[65] P. Siangliulue, J. Chan, S. P. Dow, and K. Z. Gajos. Ideahound: Improving large-scale collaborative ideation with crowd-powered real-time semantic modeling. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, p. 609-624. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2984511.2984578
+
+[66] R. Speer, J. Chin, and C. Havasi. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pp. 4444-4451. AAAI Press, 2017.
+
+[67] X. Tang, Y. Liu, and W. Zhang. Computerized support for idea generation during knowledge creating process. pp. 437-443, 09 2005. doi: 10.1007/11554028_61
+
+[68] S.-O. Tergan. Digital Concept Maps for Managing Knowledge and Information, pp. 185-204. Springer Berlin Heidelberg, Berlin, Heidelberg, 2005. doi: 10.1007/11510154_10
+
+[69] L. Tesnière. Éléments de syntaxe structurale (1959). Paris: Klincksieck, 1965.
+
+[70] R. Vroom, E. van Breemen, and W. van der Vegte. Developing a conceptual design engineering toolbox and its tools. Acta Polytechnica, 44:2004, 01 2004.
+
+[71] H. Wang, D. Cosley, and S. R. Fussell. Idea expander: Supporting group brainstorming with conversationally triggered visual thinking stimuli. pp. 103-106, 01 2010. doi: 10.1145/1718918.1718938
+
+[72] H.-C. Wang, T.-Y. Li, C. P. Rosé, C.-C. Huang, and C.-Y. Chang. Vibrant: A brainstorming agent for computer supported creative problem solving. In International Conference on Intelligent Tutoring Systems, pp. 787-789. Springer, 2006.
+
+[73] Y. Wei, H. Guo, J. Wei, and Z. Su. Revealing semantic structures of texts: Multi-grained framework for automatic mind-map generation. 2019.
+
+[74] A. Wetzstein and W. Hacker. Reflective verbalization improves solutions-the effects of question-based reflection in design problem solving. Applied Cognitive Psychology, 18(2):145-156, 2004.
+
+[75] Wikipedia contributors. List of mind-mapping software - Wikipedia, the free encyclopedia, 2019. [Online; accessed 20-December-2019].
+
+[76] J. Wilkins, J. Järvi, A. Jain, G. Kejriwal, A. Kerne, and V. Gumudavelly. EvolutionWorks. pp. 213-230. Springer, Cham, 2015. doi: 10.1007/978-3-319-22723-8
+
+[77] C. L. Willis and S. L. Miertschin. Mind tools for enhancing thinking and learning skills. In Proceedings of the 6th Conference on Information Technology Education, SIGITE '05, pp. 249-254. ACM, New York, NY, USA, 2005. doi: 10.1145/1095714.1095772
+
+[78] C. L. Willis and S. L. Miertschin. Mind maps as active learning tools. J. Comput. Sci. Coll., 21(4):266-272, Apr. 2006.
+
+[79] A. R. Zolfaghari, D. Fathi, and M. Hashemi. Role of creative questioning in the process of learning and teaching. Procedia-Social and Behavioral Sciences, 30:2079-2082, 2011.
+
+[80] F. Zwicky. Discovery, invention, research through the morphological approach. 1969.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..71565d59de2316766302e5d858f020ab41e9892d
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/r5vnRRwrgX/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,309 @@
+§ QCUE: QUERIES AND CUES FOR COMPUTER-FACILITATED MIND-MAPPING
+
+Ting-Ju Chen*
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Sai Ganesh Subramanian†
+
+J. Mike Walker ’66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+Vinayak R. Krishnamurthy‡
+
+J. Mike Walker '66 Department
+
+of Mechanical Engineering
+
+Texas A&M University
+
+§ ABSTRACT
+
+We introduce a novel workflow, QCue, for providing textual stimulation during mind-mapping. Mind-mapping is a powerful tool whose intent is to allow one to externalize ideas and their relationships surrounding a central problem. The key challenge in mind-mapping is the difficulty in balancing the exploration of different aspects of the problem (breadth) with a detailed exploration of each of those aspects (depth). Our idea behind QCue is based on two mechanisms: (1) computer-generated automatic cues to stimulate the user to explore the breadth of topics based on the temporal and topological evolution of a mind-map and (2) user-elicited queries for helping the user explore the depth for a given topic. We present a two-phase study wherein the first phase provided insights that led to the development of our workflow for stimulating the user through cues and queries. In the second phase, we present a between-subjects evaluation comparing QCue with a digital mind-mapping workflow without computer intervention. Finally, we present an expert rater evaluation of the mind-maps created by users in conjunction with user feedback.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+§ 1 INTRODUCTION
+
+Mind-maps are widely used for quick visual externalization of one's mental model around a central idea or problem. The underlying principle behind mind-mapping is to provide a means for associative thinking so as to foster the development of concepts that both explore different aspects around a given problem (breadth) and explore each of those aspects in a detail-oriented manner (depth) [49]. The ideas in a mind-map spread out in a hierarchical, tree-like manner [35], which allows for the integration of diverse knowledge elements into a coherent pattern [8] to enable critical thinking and learning through making synaptic connections and divergent exploration [41, 56, 77, 78]. As a result, mind-maps are uniquely suitable for problem understanding and exploration prior to design conceptualization [8].
+
+Problem exploration is critical in helping designers develop new perspectives and in driving the search for solutions within the iterative process of identifying features and needs and re-framing the scope [53]. Generally, it requires a combination of two distinct and often conflicting modes of thinking: (1) logical, analytical, and detail-oriented, and (2) lateral, systems-level, and breadth-oriented [40]. Most current efforts in computer-facilitated exploratory tasks focus exclusively on one of these cognitive mechanisms. As a result, there is currently a limited understanding of how this breadth-depth conflict can be addressed. Maintaining the balance between the breadth and depth of exploration can be challenging, especially for first-time users. For atypical and open-ended problem statements (commonplace in design problems), this issue is further pronounced, ultimately leading to creative inhibition and lack of engagement.
+
+Effective and quick thinking is closely tied to the individual's imagination and ability to create associations between various information chunks [44]. Incidentally, this is also a skill that takes time to develop and manifest in novices. We draw from existing works [17, 22, 27, 29, 38, 60, 74, 79] that emphasize stimulating reflection during exploration tasks. Quayle et al. [60] and Wetzstein et al. [74] indicate that the act of responding to questions can create several avenues for designers to reflect on their assumptions and expand their field of view about a given idea. Adler et al. [3] found that asking questions during a sketching activity keeps participants engaged and reflecting on ambiguities. In fact, asking one question in turn raises a variety of other questions, thereby bringing out more ideas from the user's mind [17]. Goldschmidt [29] further demonstrated that exposing designers to text can lead to higher originality during idea generation.
+
+Our approach is informed by the notion of reflection-in-design [60, 74], which takes an almost Socratic approach to reasoning about the design problem space through question-based verbalization. The premise is that the cognitive processes underlying mind-mapping can be enriched to enable an iterative cycle between exploration, inquiry, and reflection. We apply this reasoning in a digital setting where the user has access to vast knowledge databases. Our key idea is to explore two different ways in which such textual stimuli can be provided: first, through a simple mechanism for query expansion (i.e., asking for suggestions), and second, through means for responding to computer-generated stimuli (i.e., answering questions). Based on this, we present a workflow for mind-mapping wherein the user, while adding and connecting concepts (exploration), can also query a semantic database to explore related concepts (inquiry) and build upon those concepts by answering questions posed by the mind-mapping tool itself (reflection). Our approach is powered by ConceptNet [66], a semantic network that contains a graph-based representation with nodes representing real-world concepts as natural language phrases (e.g. bring to a potluck, provide comfort, etc.) and edges representing semantic relationships. Using entries related to a given concept, as well as the types of those relationships, our work investigates methods for textual stimulation for mind-mapping.
+
+§ 1.1 CONTRIBUTIONS
+
+We make three contributions. First, we present a novel workflow, QCue, that uses the relational ontology offered by ConceptNet [66] to create mechanisms for cognitive stimulus through automated questioning with idea expansion and proactive user query. Second, we present an adaptive algorithm to facilitate the breadth-depth balance in a mind-mapping task. This algorithm analyzes the temporal and topological evolution of a given mind-map and generates questions (cues) for the user to respond to. Finally, to showcase the reflection-in-design approach, we conduct a between-subjects user study and present a comparative evaluation of QCue with a digital mind-mapping workflow without our algorithm (henceforth referred to as traditional mind-mapping, or TMM). The inter-rater analysis of user-generated mind-maps and the user feedback demonstrate the efficacy of our approach and also reveal new directions for future digital mind-mapping tools.
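
As a rough illustration of what analyzing the topological state of a map could involve, the sketch below compares the deepest branch of the current map with the fan-out at the root to bias cue selection toward breadth or depth. The example map, the 1.5 factor, and the heuristic itself are our assumptions for illustration, not the adaptive algorithm contributed by the paper.

```python
# Mind-map stored as child -> parent links; "problem" is the root node.
# The map contents and the 1.5 factor are illustrative assumptions.
PARENT = {
    "shelter": "problem", "tent": "shelter", "sleeping bag": "tent",
    "food": "problem", "canned food": "food",
}

def depth(links, node):
    """Number of hops from a node up to the root."""
    d = 0
    while node in links:
        node = links[node]
        d += 1
    return d

def suggest_cue_type(links):
    """Deep, narrow maps get breadth cues; wide, shallow maps get depth cues."""
    max_depth = max(depth(links, n) for n in links)
    root_fanout = sum(1 for p in links.values() if p == "problem")
    return "breadth" if max_depth >= 1.5 * root_fanout else "depth"

print(suggest_cue_type(PARENT))  # prints "breadth" for this deep, narrow map
```

A real implementation would also fold in the temporal signal mentioned above (e.g., how recently each branch was extended), which this purely topological sketch omits.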
+
+*e-mail: carol0712@tamu.edu
+
+† e-mail: sai3097ganesh@tamu.edu
+
+‡ e-mail: vinayak@tamu.edu
+
+§ 2 RELATED WORKS
+
+§ 2.1 PROBLEM EXPLORATION IN DESIGN
+
+Problem exploration is the process that leads to the discovery of opportunities and insights that drive the innovation of products, services, and systems [18]. Hmelo-Silver [31] underscores the importance of problem-based learning for students to identify what they need to learn in order to solve a problem. Most current methods in early design are generally focused on increasing the probability of coming up with creative solutions by promoting divergent thinking. For instance, brainstorming specifically focuses on the quantity of ideas without judgment [6, 48, 57]. There are many other popular techniques, such as SCAMPER [51], C-Sketch [64], and the morphological matrix [80], that support the formation of new concepts through modification and re-interpretation of rough initial ideas. This, however, also leads to design fixation toward a specific and narrow set of concepts, thereby curtailing the exploration process. In contrast, mind-mapping is a flexible technique that can help investigate a problem from multiple points of view. In this paper, we use mind-mapping as a means for problem exploration, which has been proven useful for reflection, communication, and synthesis during idea generation [33, 50]. The structure of mind-maps thus facilitates a wide range of activities, from note-taking to information integration [20], by highlighting the relationships between various concepts and the organization of a topic-oriented flow of thoughts [55, 61].
+
+§ 2.2 COMPUTER-BASED COGNITIVE SUPPORT
+
+There have been significant efforts to engage and facilitate one's critical thinking and learning through digital workflows using pictorial stimuli [30, 32, 71], heuristic-based feedback generation [72], text mining [46, 65, 67], and speech-based interfaces [19]. Some works [13, 26] have also used gamification as a means to engage the user in the idea generation process. Specifically in engineering design and systems engineering, there are a number of computer systems that support the user's creativity during design conceptualization [4, 58, 62, 70]. These are, however, targeted toward highly technical and domain-specific contexts.
+
+While there are works [2, 24, 43, 73] that have explored the possibility of automatic generation of mind-maps from speech and text, little is known about how additional computer support affects the process of creating mind-maps. Works that consider computer support in mind-mapping [7, 25] have evaluated numerous existing mind-mapping software applications and found that pen-and-paper and digital mind-mapping differ in speed and efficiency depending on factors such as the user's intent, ethnography, and the nature of collaboration. Of particular relevance are works by Kerne's group on curation [34, 36, 47] and web semantics [59, 76] for information-based ideation. While these works are not particularly aimed at mind-mapping as a mode of exploration, they share our premise of using information to support free-form visual exploration of ideas.
+
+Recent work by Chen et al. [12] studies collaboration in mind-mapping and offers some insight into how mind-maps evolve during collaboration. They further proposed a computer-as-partner approach [13], demonstrating human-AI partnership by posing mind-mapping as a two-player game in which the human and the AI (an intelligent agent) take turns adding ideas to a mind-map. While an exciting prospect, we note that there is currently little information regarding how intelligent systems could be used for augmenting the user's cognitive capabilities for free-form mind-mapping without constraining the process. Recent work by Koch et al. [38] proposed cooperative contextual bandits (CCB) that provide cognitive support in the form of suggestions (visual materials) and explanations (questions to justify the categories of designers' selections from a search engine) to users during mood board design tasks. While CCB treats questions as a means to justify designers' focus and adapt the system accordingly, we emphasize the associative-thinking capability brought by questions formed with semantic relations.
+
+§ 2.3 DIGITAL MIND-MAPPING
+
+Several digital tools [75] have been proposed to facilitate the mind-mapping activity. However, to our knowledge, these tools contribute little in terms of computer-supported cognitive assistance and idea generation during such thinking and learning processes. They focus on making the operations of constructing maps easier by providing features such as quickly expanding the conceptual domain through web search, linking concepts to online resources via URLs (uniform resource locators), and interactive map construction. Even though these tools have demonstrated advantages over traditional mind-mapping tasks [25], mind-map creators can still find the task challenging for several reasons: inability to recall concepts related to a given problem, inherent ambiguity in the central problem, and difficulty in building relationships between different concepts [5, 68]. These difficulties often result in unbalanced idea exploration, yielding mind-maps that are either too broad or too detail-oriented. In this work, we aim to investigate computational mechanisms to address this issue.
+
+§ 3 PHASE I: PRELIMINARY STUDY
+
+Our first step was to investigate the effect of query expansion (the process of reformulating a given query to improve the retrieval of information) and to observe how users react to conditions where suggestions are actively provided during mind-mapping. For this, we implemented a preliminary interface to record the usage of suggestions retrieved from ConceptNet [42] and conducted a preliminary study using this interface.
+
+§ 3.1 QUERY-EXPANSION INTERFACE
+
+The idea behind our interface is based on query expansion enabled by ConceptNet. In comparison to content-retrieval analysis (Wiki) or lexical-semantic databases such as WordNet [52], ConceptNet allows for leveraging a vast organization of related concepts based on a diverse set of relations, resulting in a broader scope of queries. Using this feature of ConceptNet, we developed a simple web-based tool for query-expansion mind-mapping (QEM, Figure 1, Figure 2) wherein users could add nodes (words/phrases) and link them together to create a map. For every new word or phrase, we used the top 50 query results as suggestions that users could use as alternatives or additional nodes in the map. Our hypothesis was that ConceptNet suggestions would help users create richer mind-maps in comparison to pen-and-paper mind-mapping.
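
The query-expansion step can be sketched against the public ConceptNet 5 REST API (api.conceptnet.io). The endpoint and the edge fields used below follow the documented API, but the de-duplication and weight-based ordering are our simplification of "top 50 query results", not necessarily what QEM implemented.

```python
import json
from urllib.request import urlopen

# Public ConceptNet 5 node-lookup endpoint; multi-word terms use underscores.
API = "http://api.conceptnet.io/c/en/{}?limit=50"

def fetch_edges(term):
    """Query ConceptNet for a term (requires network access)."""
    with urlopen(API.format(term.replace(" ", "_"))) as resp:
        return json.load(resp)["edges"]

def suggestions_from_edges(edges, term):
    """Labels of related concepts, heaviest-weighted first, excluding the query term."""
    seen, out = set(), []
    for e in sorted(edges, key=lambda e: -e.get("weight", 0)):
        for side in ("start", "end"):
            label = e[side]["label"]
            if label.lower() != term.lower() and label not in seen:
                seen.add(label)
                out.append(label)
    return out[:50]
```

With network access, `suggestions_from_edges(fetch_edges("pollution"), "pollution")` would return up to 50 labels of concepts ConceptNet relates to "pollution", the kind of list QEM surfaced as suggestions.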
+
+§ 3.2 EVALUATION TASKS
+
+We designed our tasks for (a) comparing pen-paper mind-mapping and QEMs with respect to user performance, preference, and completion time and (b) to explore how the addition of query-based search affects the spanning of ideas in a typical mind-map creation task. Each participant was asked to create two mind-maps, one for each of the following problem statements:
+
+ * Discuss the problem of different forms of pollution, and suggest solutions to minimize them: This problem statement was kept generic and conclusive, and typically familiar to the target participants, in order to compare the creation modalities for simple problem statements.
+
+ * Modes of human transportation in the year 2118: The intent behind this open-ended problem statement was to encourage users to explore a variety of ideas through both modalities, and to observe the utility of query-based mind-map tools for such problem statements.
+
+
+Figure 1: Screenshot of the QEM user interface
+
+
+Figure 2: Illustration of QEM workflow ("MM" stands for "mind-mapping")
+
+The topics for the problem statements were selected to provide users with familiar domains while also leaving scope for encouraging new ideas from the participants.
+
+§ 3.3 PARTICIPANTS
+
+We recruited 18 students (10 male, 8 female) from engineering majors, between 18 and 32 years of age. Of these, 6 participants were familiar with the concept of mind-maps (with a self-reported score of 4 on a scale of 10). We conducted a between-subjects study, where 9 participants created mind-maps for a given problem statement using QEM (Figure 1), and the remaining 9 used the provided pen and paper.
+
+§ 3.4 PROCEDURE
+
+The total time taken for the experiment varied between 30 and 35 minutes. Participants in the QEM group were first introduced to the interface and encouraged to explore it. Subsequently, the participants created the mind-map for the assigned problem. They were allowed a maximum of 10 minutes per problem statement. Finally, on completion, each participant answered a series of questions on the ease of use, intuitiveness, and effectiveness of the assigned mind-map creation modality.
+
+§ 3.5 KEY FINDINGS
+
+§ 3.5.1 USER FEEDBACK
+
+We did not find consensus regarding self-reported satisfaction with the mind-maps created by participants in pen-and-paper mind-mapping. Moreover, while pen-and-paper participants agreed that the time for map creation was sufficient, nearly 50% did not agree that they were able to span their ideas properly. On the other hand, 90% of QEM participants reported that they were satisfied with their resulting mind-maps. Over 80% of the QEM participants agreed that they could easily search for related words and ideas and add them to the mind-map. In the post-study survey, QEM users suggested adding features such as randomizing the order of words searched for, the ability to query multiple phrases at the same time, and the ability to search for images. One participant mentioned: "The interface wasn't able to do the query if put a pair of words together or search for somebody's name viz. Elon Musk".
+
+§ 3.5.2 USERS' OVER-DEPENDENCY ON QUERY-EXPANSION
+
+As compared to pen-and-paper mind-mapping, we observed two main limitations in our query-expansion workflow. First, the addition of a new idea required querying the word, as we did not allow direct addition of nodes in the mind-map (Figure 2). While we had implemented this to simplify the interactions, it resulted in a break in the user's flow of thought, further inhibiting diversity (even though the digital tool can search across domains and provides a large database for exploration). Second, we observed that users relied heavily on search and query results rather than externalizing their personal views on a subject. Users simply continued searching for the right keyword instead of adding more ideas to the map. This also increased the overall time taken for creating maps using query expansion. Users reported this themselves, with statements such as: "I relied a lot on the search results the interface gave me" and "I did not brainstorm a lot while creating the mind map, I spent a lot of time in finding proper terms in the search results to put onto the mind map".
+
+
+Figure 3: Illustration of three mechanisms to create a new node. (a) The user double-clicks a node and enters text to create a new child node. (b) Right-clicking an existing node allows the user to use top suggestions from ConceptNet to create a new child node. (c) The user double-clicks a cue node to respond to it, adding a new node. Yellow shading denotes the selected node; red shading denotes the newly created node.
+
+§ 4 PHASE II: QCUE
+
+Motivated by our preliminary study, the design goal behind QCue is to strike a balance between the idea expansion workflow and cognitive support during digital mind-mapping. We aim to provide computer support in a manner that stimulates the user to think in new directions but does not intrude on the user's own line of thinking. The algorithm for generating computer support in QCue is based on the evolution of the structure of the user-generated map over time, balancing the breadth and depth of exploration.
+
+§ 4.1 WORKFLOW DESIGN
+
+QCue was designed primarily to support divergent idea exploration in ideation processes. This requires an interface that allows for simple yet fast interactions, typically natural in a traditional pen-and-paper setting. We formulate the process of mind-mapping as an iterative two-mode sequence: generating as many ideas as possible on a topic (breadth-first exploration), and choosing a smaller subset to refine and detail (depth-first exploration). We further assume our mind-maps to be strictly acyclic graphs (trees). The design of our workflow is based on the following guiding principles:
+
+ * In the initial phases of mind-mapping, asking questions can help users externalize their assumptions regarding the topic and stimulate indirect relationships across concepts (latent relations).
+
+ * For exploring ideas in depth during later stages, suggesting alternatives to the user helps maintain the rate of idea addition. Here, questions can further help the user look for appropriate suggestions.
+
+§ 4.2 IDEA EXPANSION WORKFLOW
+
+We provided the following interactions to users for creating a mind-map using QCue:
+
+ * Direct user input: This is the default mode of adding ideas to the map, wherein users simply double-click on an existing node ($n_i$) to add content for its child node ($n_j$) using an input dialog box in the editor workspace. A link is created automatically between $n_i$ and $n_j$ (Figure 3(a)). This offers users minimal manipulation in the construction of a tree-type structure.
+
+ * Asking for suggestions: In situations where a user is unclear about a given direction of exploration from a node in the mind-map, the user can explicitly query ConceptNet with the concerned node (right-click on a node to be queried). Subsequently, we extract top 10 related concepts (words and phrases) from ConceptNet and allow users to add any related concept they see fit. Users can continuously explore and expand their search (right-clicking on any existing node) and add the result of the query (Figure 3(b)).
+
+ * Responding to cues: QCue evaluates the nodes in the map and detects nodes that need further exploration. Once a node is identified, QCue automatically generates and adds a question as a cue to the user. The user can react to this cue node (double-click) and choose to either answer, ignore, or delete it. Once a valid (non-empty) answer is recorded, the interface replaces the clicked node with the answer (Figure 3(c)).
+
+ * Breadth-vs-depth exploration: Two sliders are provided on the QCue interface to allow adjustment of the exploratory directions guided by the cues (Figure 4(a)). Specifically, users can use the sliders at any time during mind-mapping to steer newly generated cues toward either breadth- or depth-first exploration.
+
+§ 4.3 CUE GENERATION RATIONALE
+
+There are three aspects that we considered in designing our cue-generation mechanism. Given the current state of a mind-map, our challenge was to determine (1) where to generate a cue (which nodes in the mind-map need exploration), (2) when a cue should be generated (so as to provide a meaningful but non-intrusive intervention), and (3) what to ask the user (the actual content of the cue). To determine where and when to add cues, we draw from recent work by Chen et al. [13] that explored several algorithms for computer-generated ideas. One of their algorithmic iterations, of particular interest to us, uses the temporal and topological evolution of the mind-map to determine which nodes to target. However, this approach is rendered weak in their work because they modeled the mind-mapping process as a sequential game in which each player (human and computer) takes turns. In our case, however, this is a powerful idea since the human and the intelligent agent (AI) are not bound by sequential activity: both work asynchronously. This also reflects our core idea of using the computer as a facilitator rather than a collaborator. Based on these observations, we designed our algorithm to utilize the topological and temporal evolution of a given mind-map to determine the potential nodes where we want the user to explore further. For this, we use a strategy similar to the one proposed by Chen et al. [13], with two penalty terms based on the time elapsed since a node was added to the mind-map and its relative topological position (or lineage) with respect to the central problem.
+
+Tesnière [69] notes that continuous thoughts can only be expressed through built connections. Tesnière was originally describing this idea in the context of linguistic syntax: the mind perceives words not in isolation (as they appear in a dictionary) but in the context of other words in a sentence; it is the sentence that provides the connection between its constituent words. This is our driving guideline for composing the content of a cue. Specifically, we observe that the basic issue faced by users is not the inability to create individual concepts but the difficulty in contextualizing broad categories or topics that link specific concepts. Here, we draw from works that identify semantic relations/connections between concepts to build human-like computer systems [54] and perform design synthesis [39]. We further note that the most important characteristic of mind-maps is their linked structure, which allows users to associate and understand a group of concepts in a short amount of time. Therefore, our strategy for generating cue content is to simply make use of the semantic relationship types already provided in ConceptNet. Our rationale is that providing relationships instead of concept instances will assist the user in two ways: (1) help them think broadly about the problem, thereby assisting them in generating a much higher number of instances, and (2) keep a continuous flow of thoughts throughout the creation process. Specifically, we developed our approach using the 25 relationship categories along with the weighted assertions from ConceptNet. Note that we did not take all relations from ConceptNet (34 in total) because some, such as RelatedTo, EtymologicallyDerivedFrom, and ExternalURL, may be too ambiguous to users. The algorithm is detailed in the following sections.
+
+
+Figure 4: Illustration of the cue generation algorithm using weighted relations retrieved through the ConceptNet Open Data API [66]. Yellow shading denotes a computer-selected potential node; red shading denotes a computer-generated cue. This algorithm is executed at regular intervals of 2 seconds. The user interface of QCue is illustrated in (c).
+
+§ 4.3.1 TIME PENALTY
+
+Time penalty ($T$) is a measure of the inactivity of a given node in the map. It is defined in terms of the time elapsed since the last activity (linked to a parent or added a child). For a newly added node, the time penalty is initialized to 1 and reduced by a constant value ($c$) at regular intervals of 2 seconds. The value of $c$ was determined experimentally (see Section 4.4 for details). Once the value reaches 0, it remains at 0 thereafter. Therefore, at any given instance, the time penalty ranges from 0 to 1. A default threshold for the time penalty was set and is adjustable by users via the provided slider on the QCue interface. Users can perform breadth-first exploration on recently visited ideas by increasing the threshold value. Given the initial condition $T(n_i) = 1.0$, we compute the time penalty of any node $n_i \in N_M$ at every interval $\Delta t$ as $T(n_i) \rightarrow \max(T(n_i) - c, 0)$.
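+
+The decay rule above can be sketched as follows (our own minimal illustration, not the authors' implementation; the constant $c = 0.08$ and the 2-second tick are taken from Section 4.4):
+
```python
# Sketch of the time-penalty decay (illustrative, not the authors' code).
# Each node starts at T = 1.0; every 2-second tick reduces T by a constant
# c, clamped at 0. Assumed value c = 0.08 (Section 4.4).

C = 0.08  # decay constant applied per 2-second interval

def init_time_penalty() -> float:
    """Penalty assigned when a node is added or linked (activity resets it)."""
    return 1.0

def tick(T: float, c: float = C) -> float:
    """One 2-second interval: T -> max(T - c, 0)."""
    return max(T - c, 0.0)

# A node left untouched long enough bottoms out at 0 and stays there.
T = init_time_penalty()
for _ in range(25):  # 25 ticks ~= 50 seconds of inactivity
    T = tick(T)
print(T)  # 0.0
```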
+
+§ 4.3.2 LINEAGE PENALTY
+
+Lineage penalty ($L$) is a measure of the relative depth of nodes in a given mind-map. It is defined via the normalized child count of a given node. Each node has a lineage weight ($x_i$) that equals 0 upon addition. For every child node added, this weight is increased by 1 ($x_i \leftarrow$ number of children of $n_i$). To compute the lineage penalty for every node, these weights are normalized (to range from 0 to 1) and subtracted from one: $L(n_i) = 1 - x_i / \max_j(x_j)$. Therefore, the lineage penalty is 1 for leaf nodes, 0 for the root node, and ranges between 0 and 1 for the others. QCue's support based on this penalty can steer exploration toward leaf nodes. Note that we give equal importance to all nodes at a given depth of the mind-map. The goal is to determine where to generate a cue based on the evolving topology of the map (an acyclic directed graph).
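+
+The normalization above can be sketched as follows (our own illustration; the paper does not publish code, and the function name is ours):
+
```python
# Sketch of the lineage penalty (illustrative, not the authors' code).
# L(n_i) = 1 - x_i / max_j(x_j), where x_i is the child count of node n_i.

def lineage_penalties(child_counts: dict) -> dict:
    """Map each node id to its lineage penalty in [0, 1]."""
    x_max = max(child_counts.values())
    if x_max == 0:  # single-node map: nothing to normalize yet
        return {n: 1.0 for n in child_counts}
    return {n: 1.0 - x / x_max for n, x in child_counts.items()}

# Root with 3 children, one child with 1 grandchild, and three leaves:
counts = {"root": 3, "a": 1, "b": 0, "c": 0, "a1": 0}
L = lineage_penalties(counts)
print(L["root"], L["b"])  # 0.0 1.0 -> leaves attract cues, the root does not
```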
+
+§ 4.3.3 CUE GENERATION USING CONCEPTNET
+
+Given any state of a mind-map, three primary algorithmic steps are needed to generate cues in the form of questions using the ConceptNet semantic network. First, QCue scouts out a good location (node) to facilitate exploration using the two penalties. Subsequently, the spotted nodes are queried from ConceptNet to retrieve corresponding weighted relations for content determination. Finally, based on the determined content, QCue generates a cue node to guide the user and help expand the idea space during mind-map creation.
+
+ * Scouting: For every node in the current state of a mind-map, we compute its time penalty and lineage penalty. Then, based on the currently adjusted thresholds ($x_t$, $x_l$), where $x_t$ and $x_l$ denote the thresholds for time and lineage penalty respectively, QCue spots potential nodes ($N_E$) for exploration. Specifically, if $T(n_i) < x_t$ or $L(n_i) < x_l$ then $N_E \leftarrow N_E \cup \{n_i\}$ (Figure 4(a)). If no node is within the thresholds, all nodes in the current mind-map are considered potential nodes.
+
+ * Content determination: In this step, we query each of the spotted nodes ($N_E$) from ConceptNet, retrieving a list of query results containing weighted relations for each potential node (Figure 4(b)). In order to find the node with the maximum associative potential, we subdivide each list categorically based on the 25 relationship types provided by ConceptNet. Subsequently, we select the subdivision with the highest sum of relation weights (weights provided by ConceptNet) and use it as the basis for a new cue's content (Figure 4(b)). Note that once a subdivision has been used to generate a cue node, it is removed from the future selection pool. For example, TypeOf cannot be selected again to generate a cue node for travel (Figure 4(c)).
+
+Figure 5: Screenshot of the QCue user interface
+
+ * Cue generation: Using the subdivision selected during content determination, QCue formulates a new cue based on fixed templates (Figure 4(c)). To avoid repetition of cues during mind-map creation, we construct at least three templates (combinations of query + verb + relationship type) for each relationship category provided by ConceptNet. Example cues based on the query knife and the relationship type CapableOf are: "What can knife do?", "What is knife capable of doing?" and "Which task is knife capable of performing?".
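+
+The three steps can be sketched end to end (a simplified reconstruction under our own assumptions: penalties are precomputed, ConceptNet results arrive as (relation, weight) pairs, and the templates shown are representative rather than the authors' exact strings):
+
```python
import random

# Simplified sketch of the scouting -> content determination -> cue
# generation pipeline (our own reconstruction, not the authors' code).

def scout(nodes, T, L, x_t=0.6, x_l=0.6):
    """Spot nodes whose time or lineage penalty falls below its threshold."""
    spotted = [n for n in nodes if T[n] < x_t or L[n] < x_l]
    return spotted or list(nodes)  # fall back to all nodes if none qualify

def pick_relation(edges, used):
    """Choose the relation category with the highest summed weight."""
    totals = {}
    for rel, weight in edges:
        if rel not in used:  # each category is consumed only once per node
            totals[rel] = totals.get(rel, 0.0) + weight
    return max(totals, key=totals.get) if totals else None

# Representative templates (assumed; the paper uses >= 3 per relation type).
TEMPLATES = {
    "CapableOf": ["What can {q} do?", "What is {q} capable of doing?",
                  "Which task is {q} capable of performing?"],
}

def make_cue(query, rel):
    """Fill a randomly chosen template to avoid repetitive cues."""
    return random.choice(TEMPLATES[rel]).format(q=query)

# Example: 'knife' was spotted; ConceptNet returned weighted relations.
edges = [("CapableOf", 2.0), ("CapableOf", 1.5), ("UsedFor", 3.0)]
rel = pick_relation(edges, used={"UsedFor"})  # UsedFor already consumed
print(make_cue("knife", rel))                 # e.g. "What can knife do?"
```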
+
+§ 4.4 IMPLEMENTATION
+
+Our QCue interface is a JavaScript web application that runs entirely in the browser, built with NodeJS and D3JS (Figure 5). We incorporated the JSON-LD API (Linked Data structure) offered by ConceptNet into our interface. The nodes of ConceptNet are words and phrases of natural language. Each node contains an edge list holding all of its relations (such as UsedFor), stored in rel with the corresponding weight stored in weight, and human-readable labels stored in start and end. When the user queries a word or phrase in natural language (as one node), we search all the relations of this node (filtered to English) and extract the non-repetitive human-readable labels.
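+
+The label extraction described above can be sketched as follows (our own reconstruction; the sample response is abbreviated and hypothetical, though the edge fields rel, weight, start, and end follow the description above):
+
```python
import json

# Sketch of the ConceptNet edge extraction (our reconstruction, not the
# authors' code). A JSON-LD response carries an "edges" list; each edge
# stores the relation in "rel", its "weight", and human-readable
# "start"/"end" labels. The sample below is abbreviated and hypothetical.

sample = json.loads("""{
  "edges": [
    {"rel": {"label": "UsedFor"}, "weight": 4.47,
     "start": {"label": "a knife", "language": "en"},
     "end": {"label": "cutting", "language": "en"}},
    {"rel": {"label": "UsedFor"}, "weight": 1.0,
     "start": {"label": "couteau", "language": "fr"},
     "end": {"label": "couper", "language": "fr"}},
    {"rel": {"label": "CapableOf"}, "weight": 2.0,
     "start": {"label": "a knife", "language": "en"},
     "end": {"label": "cutting", "language": "en"}}
  ]
}""")

def extract_labels(response, query="knife"):
    """Collect unique English end-labels, skipping the queried term itself."""
    seen, labels = set(), []
    for edge in response["edges"]:
        if edge["end"]["language"] != "en":
            continue  # filter results to English
        label = edge["end"]["label"]
        if label not in seen and query not in label:
            seen.add(label)
            labels.append(label)
    return labels

print(extract_labels(sample))  # ['cutting']
```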
+
+On the QCue interface, users can spatially organize ideas in the mind-map by dragging ideas, with force-links, around the editor workspace. This force-directed layout produces an aesthetically pleasing graph while maintaining comprehensibility, even with large datasets. Users can also adjust the sliders to shape their exploration preference toward either wider or deeper exploration. QCue employs a listener function to run the cue generation algorithm at fixed intervals of 2 seconds. We also developed a web-based interface for TMM, which is essentially the same as QCue but without any computer support (cues and queries).
+
+ * Data format and storage: Each mind-map is stored in a local folder with a distinct user ID. To store the structure of a mind-map, we defined a JavaScript prototype containing nodes, links, timestamps, and other appearance data (e.g., color, size, font). We can regenerate a mind-map by importing the file data into QCue. Videos of the mind-mapping sessions are also stored within the respective folders for further analysis.
+
+ * Choice of penalty and threshold: To find an appropriate default value for the constant $c$ in the time penalty and the thresholds for the two penalties, we conducted several pilot studies (Section 5.1) to observe how people mind-map in a regular setting (TMM) and how people get acquainted with QCue. The final assignments are $c = 0.08$ and $x_t = x_l = 0.6$ at $t = 0$.
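+
+The stored mind-map format described above might look as follows (a hypothetical sketch; the paper only states that nodes, links, timestamps, and appearance data are serialized, so all field names here are our assumptions):
+
```python
import json

# Hypothetical sketch of the stored mind-map structure (field names are
# our assumptions; the paper only lists nodes, links, timestamps, and
# appearance data such as color, size, and font).

mind_map = {
    "userId": "P07",
    "nodes": [
        {"id": 0, "label": "underwater camping", "timestamp": 0.0,
         "color": "#ffd700", "size": 18, "font": "sans-serif"},
        {"id": 1, "label": "oxygen supply", "timestamp": 42.3,
         "color": "#ff6347", "size": 14, "font": "sans-serif"},
    ],
    "links": [{"source": 0, "target": 1}],
}

# Round-trip through JSON, as a tool would when regenerating a saved map.
restored = json.loads(json.dumps(mind_map))
print(len(restored["nodes"]), restored["links"][0]["target"])  # 2 1
```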
+
+§ 5 EVALUATION METHODOLOGY
+
+§ 5.1 PILOT STUDY
+
+We conducted a pilot study with 12 participants with the intention to (1) observe how users react to the cue-query workflow, (2) determine ideas and problem statements that could serve as our evaluation tasks, and (3) determine appropriate initial parameters (such as the lineage and time thresholds). In order to observe users' thinking processes while creating a mind-map, we designed four different problem statements, namely pollution, toys in the future, camping underwater, and wedding on a space station. We encouraged the users to explore the basic idea, causes, effects, and potential solutions of the given problem statement.
+
+Participants were both surprised by and interested in topics such as wedding on a space station and underwater camping. Specifically, for open-ended topics, they indicated a need for time to prepare themselves before beginning the mind-mapping task. For topics such as pollution and toys in the future, they showed an immediate inclination toward starting the session. Since we wanted to test the robustness of our algorithm with respect to the given topic, we decided to conduct the user study with two topics of opposite extremes: pollution (T1), a seemingly familiar topic, and underwater camping (T2), a more open-ended topic that is uncommon to think about.
+
+| Condition | Structure (1-4) | Exploratory (1-4) | Communication (1-4) | Extent of coverage (1-4) | Quantity (raw) | Variety (0-1) | Novelty (0-1) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| TMM T1 | 2.29 | 2.42 | 2.38 | 2.25 | 31 | 0.5 | 0.125 |
+| TMM T2 | 2.54 | 2.5 | 2.25 | 2.5 | 34 | 0.48 | 0.12 |
+| QCue T1 | 3.29 | 3.29 | 2.75 | 2.79 | 38 | 0.66 | 0.19 |
+| QCue T2 | 2.63 | 2.54 | 2.29 | 2.58 | 41 | 0.61 | 0.17 |
+| Average TMM | 2.42 | 2.46 | 2.31 | 2.38 | 32.5 | 0.49 | 0.12 |
+| Average QCue | 2.96 | 2.92 | 2.52 | 2.69 | 39.5 | 0.63 | 0.18 |
+
+Figure 6: Table of average ratings for each metric across four conditions: TMM and QCue, each with T1 and T2. On a scale of 1 to 4: 1 – Poor, 2 – Average, 3 – Good, 4 – Excellent.
+
+§ 5.2 PARTICIPANTS
+
+For the user study, we recruited 24 undergraduate and graduate students from across a university campus. Our participants came from engineering, architecture, and science backgrounds and were within the age range of 19-30 years. Six participants had prior experience with creating mind-maps. For those who had no experience with mind-mapping, we prepared a short presentation about the general spirit and principles of the technique, and provided an additional 5 to 10 minutes of practice. We conducted a between-subjects study to minimize learning effects across conditions, where 12 participants created mind-maps for a given topic using TMM, and the remaining 12 using QCue.
+
+§ 5.3 TASKS
+
+In total, across the two experimental conditions, the 24 participants created 48 mind-maps: one per central topic per participant. The total time taken for the experiment varied between 30 and 40 minutes, and the order of the two central topics was randomized across participants. After describing the setup and the purpose of the study, we described the features of the assigned interface and practically demonstrated its usage. For each participant and mind-mapping task, we recorded a video of the task, the completion time, and the time-stamped ideas generated by the user for each mind-map. Each participant performed the following tasks:
+
+ * Practice: To familiarize themselves with the interactions of the assigned interface, the participants were given a brief demonstration of the software and its functions. They were allowed to practice with the interface for 5 to 10 minutes, with guidance when required.
+
+ * Mind-mapping with T1 & T2: Participants were asked to create a mind-map using the assigned interface. The duration of the mind-mapping session was 10 minutes for each central topic. Participants were encouraged to explore the central topic as fully as they could. The workspace was cleared after completion of each mind-map.
+
+ * Questionnaire: Finally, each participant answered a series of questions regarding their exploration of the central topic before and after the creation of each mind-map, and their perception of the assigned interface in terms of ease of use, intuitiveness, and assistance. We also conducted post-study interviews to collect open-ended feedback regarding the experience.
+
+§ 5.4 METRICS
+
+Mind-maps recorded during the study were de-identified. The designed metrics assessed all ideas generated in each mind-map based on four primary aspects: quantity, quality, novelty, and variety [45, 63]. The quantity metric is directly measured as the total number of nodes in a given mind-map. The variety of each mind-map is given by the number of idea categories that raters find in the mind-map, and the novelty score is a measure of how unique the ideas represented in a given mind-map are [12, 45]. For a fair assessment of the quality of mind-maps for both central topics, we adapted the mind-map assessment rubric [1, 12], and the raters evaluated the mind-maps based on four major criteria: structure, exploratory, communication, and extent of coverage¹. These metrics are commonly used to evaluate ideation success in open-ended design tasks [10].
+
+Here, we would like to point out similar metrics that have been used in the HCI literature on creativity support. For instance, Kerne's elemental metrics [37] for information-based ideation (IBI) are adapted from Shah's metrics [63]. While the metrics we chose have been used in previous mind-mapping studies, they also connect with the creativity-support index (CSI) [9, 14] and ideational fluency [37] (for example, the holistic IBI metrics are similar to our "structure" metric, and our post-study questions are functionally similar to a CSI tailored for mind-mapping).
+
+§ 5.5 RATERS
+
+Two raters were recruited to assess the mind-maps created by the users. The raters were senior designers in the mechanical engineering design domain with extensive design experience from their coursework and research. The raters were unaware of the study design and tasks, and were not furnished with information related to the general study hypotheses. The 48 mind-maps created across both interfaces were presented to each rater in randomized order. The raters evaluated every mind-map on a scale of 1-4 for each criterion discussed above. Every mind-map was then assigned a score out of a total of 16 points, which was used to compare its quality with respect to the other mind-maps.
+
+For a given central topic, the evaluation depends on the raters' knowledge and their interpretation of what the metrics mean. In our study, the two raters independently performed subjective ratings of every idea/concept in a mind-map. This evaluation technique has the advantage of capturing aspects of creative work that are subjectively recognized by raters but are difficult to define objectively. After the independent evaluations, the ratings from the two raters were checked for consensus.
+
+¹ Please refer to the literature for a detailed explanation of these metrics.
+
+
+Figure 7: General trends in how users generated ideas for different topics (T1 and T2) during TMM and QCue. Each bar represents the average count of total nodes added in the given time frame (per 1 minute).
+
+§ 6 RESULTS
+
+§ 6.1 RATINGS FOR USER-GENERATED MIND-MAPS
+
+For metrics admitting integer values (structure, exploratory, communication, and extent of coverage), we calculated Cohen's kappa to assess inter-rater agreement. The Cohen's kappa values fell in the range 0.4 to 0.6, showing a moderate level of inter-rater agreement [15]. For metrics admitting real/scalar values (variety and novelty), we calculated the Pearson correlation coefficient to find the degree of correlation between the raters' ratings. This correlation coefficient was found to be close to 0.8, which indicates an acceptable level of agreement [15].
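+
+Both agreement statistics can be computed as in the following sketch (generic formulas with made-up ratings, not the study data):
+
```python
from collections import Counter
from math import sqrt

# Generic sketch of the two agreement measures used above (illustrative
# ratings only, not the study data).

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n**2     # chance agreement
    return (p_o - p_e) / (1 - p_e)

def pearson_r(x, y):
    """Pearson correlation coefficient for scalar ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Illustrative integer ratings (e.g. structure, 1-4) and scalar ratings:
rater1 = [3, 2, 4, 3, 2, 3]
rater2 = [3, 2, 3, 3, 2, 4]
print(round(cohens_kappa(rater1, rater2), 2))
print(round(pearson_r([0.1, 0.2, 0.5, 0.3], [0.15, 0.25, 0.45, 0.35]), 2))
```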
+
+Overall, the ratings for QCue were relatively higher than those for TMM across all metrics (Figure 6). A two-way ANOVA was conducted with two factors of comparison: (1) the choice of topic (pollution or underwater camping) and (2) the choice of interface (QCue or TMM). Although the data for certain metrics were non-normal, we proceeded with ANOVA since it is robust to moderate deviations from normality. The mean rating for structure was higher for QCue (2.96) in comparison to TMM (2.42, p-value 0.007). Similarly, the mean score for the exploratory metric was also higher for QCue (2.92) with respect to TMM (2.46, p-value 0.008). This suggests that the mind-maps created using QCue were relatively more balanced (in depth and breadth) and more comprehensively explored. Further, we recorded a better variety score in QCue (0.63) relative to TMM (0.49, p-value 0.009). Finally, we also recorded a larger number of nodes added in QCue (39.5) relative to TMM (32.5, p-value 0.048). These observations indicate that the cue-query mechanism assisted the users in (1) exploring diverse aspects of the given topic and (2) making non-obvious relationships across ideas.
+
+We also carried out a main effect analysis (one-way ANOVA) between pollution and underwater camping independently for TMM and QCue. While the difference in outcomes was not pronounced in TMM, a significant difference was found across topics in the structure (p = 0.01) and exploratory (p = 0.002) metrics for QCue. This suggests that the QCue workflow is dependent on the type of central topic explored. The overall ratings are higher in QCue for pollution (for example, in Figure 6, the mean structure value increases from 2.29 to 3.29 for pollution).
+
+
+Figure 8: Comparison of trends in how QCue users generated ideas for T1 and T2 using the three modes (direct user input, cue node response, and query), stacked one above the other. The frequencies are averaged across the 12 users.
+
+§ 6.2 TEMPORAL TREND FOR NODE ADDITION
+
+In general, the rate of node addition decreased over time in the TMM workflow regardless of the topic. For QCue, the node addition rate was comparatively steady, indicating that cues and queries helped sustain user engagement in exploration even during the later stages of the tasks (Figure 7).
+
+While there are three modes of node addition in QCue, as expected, the number of cues and queries used depended on users' familiarity with the central topic of the task. Overall, we observed that users tended to ask for queries in the first few minutes of mind-mapping and proceeded to use cue nodes in the middle stages of the given time (Figure 8). For pollution, the number of answered cue nodes increased with time; specifically, users appreciated cues between the 5- and 6-minute marks. For underwater camping, we noticed an increasing number of cue nodes answered around the 2-minute and the 6-to-7-minute marks. This indicates two primary usages of cues. First, when users have exhausted their prior knowledge of the topic and reach an impasse during the middle stages of mind-mapping (the 5-to-7-minute mark in our case), cues help them reflect on the existing concepts and discover new relationships to generate further ideas. Second, for open-ended problems such as underwater camping, cues helped users explore different directions around the central idea in the beginning, which impacted the exploration of ideas in the later stages of the task. On the other hand, surprisingly, we found that the percentage of nodes added via the query mode was lower than via the cue mode. This suggests that users were generally more engaged when actively involved in the cue-driven cycle of exploration and reflection than when receiving direct answers provided by queries.
+
+§ 6.3 USER FEEDBACK: CUE VS QUERY
+
+To help us evaluate the effectiveness of our algorithm, the participants filled out a questionnaire after creating each mind-map. We also encouraged the participants to give open-ended feedback to support their ratings.
+
+
+Figure 9: Two user-created mind-maps with underwater camping (T2) as the central topic using (a) TMM and (b) QCue. Each label represents the timestamp of node addition and the type of addition.
+
+Users gave mixed responses when asked whether the cues were useful in the process of mind-mapping. Around 60% of the users agreed that the cues helped them develop new lines of thought at the right time. One user stated, "Questions (or cues) were helpful at the point when you get fixated. They give you other dimensions/ideas to expand your thought". The remaining users stated that they did not find the cues helpful because they already had ideas on how to develop the mind-map: "I felt like the questions (or cues) would make me lose my train of thought". Users who found it difficult to add to existing ideas in the mind-map used the cues and queries extensively to build and visualize new dimensions of the central idea. These users felt that the cues helped them reach unexplored avenues: "I started with a particular topic, and ended at a completely unrelated topic. It enabled me to push my creativity limits further".
+
+Regarding the usage of queries, over 80% of users agreed that queries were useful regardless of the topic. For underwater camping, the 20% of users who disagreed suggested that the system should include queries more closely linked to the context of the central idea. Specifically, one user stated: "Some suggestions (or queries) under certain context might not be straight forward".
+
+What is interesting to note here is that while we received mixed responses on the cues and largely positive responses on the queries, we also recorded a higher number of user interactions with cues than with queries. The likely explanation for this seeming contradiction is that it is easier to answer a cue than to look for a suggestion that fits the user's need at a given instance. Second, querying a suggestion would also mean that the user was already clear about what they wanted to add; this clarity ultimately resulted in users directly adding the node manually. Therefore, we believe that users tacitly inclined toward answering the cues generated by our system.
+
+§ 6.4 USER FEEDBACK: QCUE AS A WORKFLOW
+
+In comparison to TMM, users who used ${QCue}$ performed more consistently during the creation of mind-maps: the frequency of generating new nodes was comparatively steady throughout the process. As one user stated: "the questions helped me to create new chain of thoughts. I might not have the answer for the question (or cues) directly, but it provided new aspects to the given idea. Especially for underwater camping". One user with negligible experience in brainstorming shared her excitement: "I was fully engaged in the creation process. I was expecting questions from all different angles". We also found that ${QCue}$ users kept generating new directions of ideas with respect to the central topic even after the initial creation phase, whereas TMM users tended to focus on a fixed number of directions (Figure 9). This indicates a key capability of ${QCue}$ : problems co-evolved with the development of the idea space during the mind-mapping process.
+
+§ 7 DISCUSSIONS
+
+§ 7.1 LIMITATIONS
+
+There are two main limitations in this work. First, a majority of the recruited users had little to no experience in mind-mapping. While this allowed us to demonstrate the capability of ${QCue}$ in guiding novices to explore problem spaces, we believe that including expert users in future studies can help us (1) understand how differently they perform using this workflow and (2) lead to a richer discussion on how expertise can be transferred to our system toward better facilitation. Second, one of the key challenges we faced was the lack of a robust methodology for determining the effect of cue-based stimuli during mind-mapping (how users may have used cues and queries without explicitly using them to add nodes). While we characterize this effect on the basis of the number of cues answered and the number of suggestions assimilated directly into the mind-map, we believe that a deeper qualitative study of the mind-mapping process can reveal valuable insights. We plan to conduct such an analysis as our immediate next step.
+
+§ 7.2 CUE & QUERY FORMULATION
+
+One of the challenges we faced in our implementation of cue generation was grammatically and semantically effective formulation of the questions themselves. Recently, Gilon et al. [28] demonstrated a design-by-analogy workflow using ConceptNet, noting the lack of domain specificity to be an issue. In this regard, there is scope for further investigation of natural language processing methods, as well as new databases, for constructing cues in specific domains such as engineering design. More importantly, users frequently asked for context-dependent queries. For problems such as underwater camping, this is a challenging task that may need advances in artificial intelligence approaches for generating suggestions and cues based on real-time synthesis of ideas from the information retrieved from a knowledge database. We did a preliminary exploration in this direction using a Markov-chain-based question generation method [16]. However, the cues generated were not well phrased, indicating the need for further studies of other generative language models [23].
+
+§ 7.3 CUE REPRESENTATION
+
+The rationale behind providing cues is to stimulate the user to generate and add ideas. We believe there is a richer space of representations, both textual and graphical, that can potentially enhance cognitive stimulation, particularly for open-ended problems. For instance, textual stimuli can be produced through simple unsolicited suggestions from ConceptNet (e.g., "concept?") or through advanced mechanisms based on higher-level contextual interpretation (e.g., questioning based on second-order neighbors in the ConceptNet graph). From a graphical perspective, the use of visual content databases such as ShapeNet [11] and ImageNet [21] may lead to novel ways of providing stimuli to users. There are several avenues to investigate in terms of colors, images, arrows, and dimensions to reflect personal interest and individuality [8].
+
+§ 8 CONCLUSION
+
+Our intention in this research was to augment users' capability to discover more about a given problem during mind-mapping. To this end, we introduced and investigated a new digital workflow (QCue) that provides cues to users based on the current state of the mind-map and also allows them to query suggestions. While our experiments demonstrated the potential of such mechanisms in stimulating idea exploration, the fundamental take-away is that such stimulation requires a balancing act between interrupting the user's own line of thought with computer-generated cues and providing suggestions in response to the user's queries. Furthermore, our work shows the impact of computer-facilitated textual stimuli, particularly for those with little practice in brainstorming-type tasks. We believe that ${QCue}$ is only a step toward a much richer set of research directions in the domain of intelligent cognitive assistants.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1307c55429a046ceb97c40241f16d59e12dce3e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,463 @@
+# AnimationPak: Packing Elements with Scripted Animations
+
+Reza Adhitya Saputra*
+
+University of Waterloo
+
+Craig S. Kaplan†
+
+University of Waterloo
+
+Paul Asente ${}^{ \ddagger }$
+
+Adobe Research
+
+
+
+Figure 1: (a) Input animated elements, each with its own animation: swimming penguins, swimming sharks and fish, Pac-Man fish that open or close their mouths, and rotating stars. (b-e) four selected frames from an animated packing.
+
+## Abstract
+
+We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.
+
+Index Terms: I.3.3 [Computing Methodologies]: Computer Graphics-Picture/Image Generation; I.3.m [Computing Methodologies]: Computer Graphics—Animation;
+
+## 1 INTRODUCTION
+
+A decorative packing is a composition created by arranging two-dimensional shapes called elements within a larger region called a container. Packings are popular in graphic design, and are used frequently in advertising and product packaging.
+
+At a high level, packings can communicate a relationship between a whole and the parts that make it up. Consider for example the logo of the 2018 SIGGRAPH conference, shown inset. The 2018 logo surrounds the main logo of the SIGGRAPH organization with a ring of small icons depicting computer graphics themes.
+
+
+
+At a lower level, packings must be attractive compositions, which balance the shapes of the elements with the empty space between them, known as the negative space. In particular, negative space should be distributed as evenly as possible, leading to roughly constant-width "grout" between elements.
+
+Recently, Saputra et al. presented RepulsionPak [31], a deformation-driven packing method inspired by physical simulation techniques. In RepulsionPak, small elements are placed within a fixed container shape. As they grow, they interact with each other and the container boundary, inducing forces that translate, rotate, and deform elements. The motion and deformation of the elements allows them to achieve a physical equilibrium with an even distribution of negative space.
+
+Inspired by RepulsionPak, we investigate a physics-based packing method for elements with scripted animations. An element can have an animated deformation, such as a bird flapping its wings or a fish flicking its tail. It can also have an animated transformation, giving a changing position, size, and orientation within the container. Our goal is producing an animated packing, with elements playing out their animations while simultaneously filling the container shape evenly. A successful animated packing should balance among the evenness of the negative space, the preservation of element shapes, and the comprehensibility of their scripted animations.
+
+In our technique, called AnimationPak, we consider an animated element to be a geometric extrusion along a time axis, a three-dimensional object that we call a "spacetime element". We use a three-dimensional physical simulation similar to RepulsionPak to pack spacetime elements into a volume created by extruding a static container shape. The animated packing emerges from this three-dimensional volume by rendering cross sections perpendicular to the time axis. Our time axis behaves differently from a third spatial dimension. Although the cross sections of a spacetime element can drift from their original positions on the time axis, they must remain ordered monotonically. Furthermore, each individual cross section must remain flat in time, so that all of its 2D points occur simultaneously.
+
+Animated packings are a largely unexplored style of motion graphics, presumably because of the difficulty of creating an animated packing by hand. We were not able to find any motivating examples created by artists. There is also very little past research on animated packings; we discuss the work that does exist in the next section.
+
+## 2 RELATED WORK
+
+Packings and mosaics: Researchers have explored many approaches to creating 2D packings and simulated mosaics, including using Centroidal Area Voronoi Diagrams (CAVDs) to position elements [15, 16, 33], spectral approaches to create even negative space [10], energy minimization [23], and shape descriptors [25]. Several approaches have been proposed to extend 2D packing methods to the challenges of placing elements on the surfaces of 3D objects [6, 7, 19, 37].
+
+---
+
+*e-mail: radhitya@uwaterloo.ca
+
+${}^{ \dagger }$ e-mail: csk@uwaterloo.ca
+
+${}^{ \ddagger }$ e-mail: asente@adobe.com
+
+Graphics Interface Conference 2020
+
+28-29 May
+
+Copyright held by authors. Permission granted to
+
+CHCCS/SCDHM to publish in print and digital form, and
+
+ACM to publish electronically.
+
+---
+
+Approaches that work with a smaller library of elements but allow them to deform are particularly relevant to AnimationPak. Xu and Kaplan [36] and Zou et al. [38] developed packing methods that construct calligrams inside containers by allowing significant deformation of letterforms. Saputra et al. presented FLOWPAK [32], which deformed long, thin elements along user-defined vector fields. RepulsionPak [30, 31] deformed elements using mass-spring systems and repulsion forces to create compatibilities between element boundaries.
+
+Animated packings and tilings: Animosaics by Smith et al. [33] constructed animations in which static elements without scripted animations follow the motion of an animated container. Elements are placed using CAVDs, and advected frame-to-frame using a choice of methods motivated by Gestalt grouping principles. As the container's area changes, elements are added and removed as needed, while attempting to maximize overall temporal coherence. Dalal et al. [10] showed how the spectral approach they introduced for 2D packings could be extended to pack animated elements in a static container. Like us, they recast the problem in terms of three-dimensional spacetime; they compute optimal element placement using discrete samples over time and orientation. However, their spacetime elements have fixed shapes and are made to fit together using only translation and rotation, limiting their ability to consume the container's negative space.
+
+Liu and Veksler created animated decorative mosaics from video input [26]. Their technique combines vision-based motion segmentation with a packing step similar to Animosaics. Kang et al. [21] extracted edges from video and then oriented rectangular tesserae relative to edge directions.
+
+Kaplan [22] explored animations of simple tilings of the plane from copies of a single shape. Elements in a tiling fit together by construction, and therefore always consume all the negative space in the animation.
+
+3D packings: AnimationPak fills a 3D container with 3D elements, and is therefore related to other work on constructing freeform 3D packings. Gal et al. [13] presented a method for constructing 3D collages reminiscent of portrait paintings by Arcimboldo. They filled a 3D container with overlapping 3D elements using a greedy approach and a partial shape matching algorithm. Marco [1] decomposed a 3D model into parts that pack tightly into a small build volume, allowing it to be 3D printed with less waste material and packed into a smaller box. Ma et al. [28] developed a heuristic method to create 3D packings that are overlap free. Other work has experimented with example-based packing of 3D volumes [27], or optimized placement based on user interaction [18].
+
+Derived animations: AnimationPak falls into the category of systems that create a derived animation based on some input animation. This problem, which requires preserving the visual character of the input, is a longstanding one in computer graphics research. Spacetime constraints [9, 34] allow an animator to specify an object's constraints and goals, and then calculate the object's trajectory via spacetime optimization. Motion warping [35] is a method that deforms an existing motion curve to meet user-specified constraints. Gleicher [14] developed a motion path editing method that allows users to modify the traveling path of a walking character. Bruderlin and Williams [4] used signal processing techniques to modify motion curves. Carra et al. [5] presented a timeslice grammar to procedurally animate a large number of objects.
+
+Previous work has also investigated geometric deformation of animations. Edmond et al. [17] encoded spatial joint relationships using tetrahedral meshes, and applied as-rigid-as-possible shape deformation to the mesh to retarget animation to new characters. Choi et al. [8] developed a method to deform character motion to allow characters to navigate tight passages. Masaki [29] developed a motion editing tool that deformed 3D lattice proxies of a character's joints. Dalstein et al. [11] presented a data structure to animate vector graphics with complex topological changes. Kim et al. [24] explored a packing algorithm to avoid collisions in a crowd of moving characters. They defined a motion patch containing temporal trajectories of interacting characters, and arranged deformed patches to prevent collisions between characters.
+
+## 3 ANIMATED ELEMENTS
+
+The input to AnimationPak is a library of animated elements and a fixed container shape. AnimationPak currently supports two kinds of animation: the user can animate the shape of each individual element and can also give elements trajectories that animate their position within the container. This section explains how we animate the element shapes using as-rigid-as-possible deformation, and then construct spacetime-extruded objects that form the basis of our packing algorithm. These elements animate "in place": they change shape without translating. The next section describes how these elements can be given transformation trajectories within the container. The size and orientation of an element can be animated either way: they can be specified as an animation of the element's shape, or they can be part of the transformation trajectory.
+
+### 3.1 Spacetime Extrusion
+
+Each element begins life as a static shape defined using vector paths. Following RepulsionPak, we construct a discrete geometric proxy of the element that will interact with other proxies in a physical simulation. The construction of this proxy for a single shape is shown in Fig. 2, and the individual steps are explained in greater detail below.
+
+In order to produce a packing with an even distribution of negative space, we first offset the shape’s paths by a distance ${\Delta s}$ , leaving the shape surrounded by a channel of negative space (Fig. 2a). In our system we scale the shape to fit a unit square and set ${\Delta s} = {0.04}$ .
+
+Next, we place evenly-spaced samples around the outer boundary of the offset path and construct a Delaunay triangulation of the samples (Fig. 2b). As in RepulsionPak, we will later treat the edges of the triangulation as springs, allowing the element to deform in response to forces in the simulation. We also follow RepulsionPak by adding extra edges to prevent folding or self-overlaps during simulation (Fig. 2c). First, if two triangles ${ABC}$ and ${BCD}$ share edge ${BC}$ , then we add a shear edge connecting $A$ and $D$ . Second, we triangulate the negative space inside the convex hull of the original Delaunay triangulation, and create new negative space edges corresponding to the newly created triangulation edges. These negative space edges are used exclusively for internal bracing. The element's concavities can still be occupied by its neighbours.
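+The shear-edge rule above can be sketched in a few lines. The following is our own illustrative helper (not the paper's code): for every interior edge shared by two triangles ${ABC}$ and ${BCD}$ , it emits the edge connecting the opposite vertices $A$ and $D$ .
+
+```python
+from collections import defaultdict
+
+def shear_edges(triangles):
+    """Given triangles as vertex-index triples, return the shear edges:
+    for each interior edge shared by two triangles, the pair of vertices
+    opposite that edge."""
+    by_edge = defaultdict(list)  # undirected edge -> opposite vertices
+    for a, b, c in triangles:
+        for edge, opposite in (((a, b), c), ((b, c), a), ((a, c), b)):
+            by_edge[tuple(sorted(edge))].append(opposite)
+    # Edge BC shared by ABC and BCD yields shear edge AD.
+    return sorted(tuple(sorted(opps))
+                  for opps in by_edge.values() if len(opps) == 2)
+```
+
+Boundary edges belong to a single triangle and therefore contribute no shear edge.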
+
+We refer to the augmented triangulation shown in Fig. 2c as a slice. The entire spacetime packing process operates on slices. However, we will eventually need to compute deformed copies of the element's original vector paths when rendering a final animation (Sect. 6). To that end, we re-express all path information relative to the slice triangulation: every path control point is represented using barycentric coordinates within one triangle.
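+Re-expressing a control point relative to a triangle uses standard barycentric coordinates; a minimal sketch (function and variable names are ours):
+
+```python
+def barycentric(p, a, b, c):
+    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c),
+    solved from p = u*a + v*b + w*c with u + v + w = 1."""
+    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
+    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
+    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
+    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
+    return u, v, 1.0 - u - v
+```
+
+When the slice deforms during simulation, the same coordinates applied to the deformed triangle recover the deformed control point.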
+
+To extend the element into the time dimension, we now position evenly-spaced copies of the slice along the time axis. Assuming that the animation will run over the time interval $\left\lbrack {0,1}\right\rbrack$ , we choose a number of slices ${n}_{s}$ and place slices $\left\{ {{s}_{1},\ldots ,{s}_{{n}_{s}}}\right\}$ , with slice ${s}_{i}$ being placed at time $\left( {i - 1}\right) /\left( {{n}_{s} - 1}\right)$ . Higher temporal resolution will produce a smoother final animation at the expense of more computation. In our examples, we set ${n}_{s} = {100}$ . Fig. 2d shows a set of time slices, with ${n}_{s} = 5$ for visualization purposes.
+
+To complete the construction of a spacetime element without animation, we stitch the slices together into a single 3D object. Let ${s}_{j}$ and ${s}_{j + 1}$ be consecutive slices constructed above. The outer boundaries of the element triangulations are congruent polygons offset in the time axis. We stitch the two polygons together using a new set of time edges: if ${AB}$ is an edge on the boundary of ${s}_{j}$ and ${CD}$ is the corresponding edge on the boundary of ${s}_{j + 1}$ , then we add time edges ${AC},{AD}$ , and ${BC}$ . During simulation, time edges will transmit forces backwards and forwards in time, maintaining temporal coherence by smoothing out deformation and transformations. Fig. 2e shows time edges for ${n}_{s} = 5$ .
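+The slice placement and stitching rule above can be sketched as follows. This is our own illustration, labelling vertices as (slice, boundary index) pairs for a boundary ring of $m$ vertices:
+
+```python
+def stitch_slices(m, n_s):
+    """Place n_s slices at times (i - 1)/(n_s - 1) and stitch consecutive
+    slices: boundary edge AB on slice j and its copy CD on slice j + 1
+    receive time edges AC, AD, and BC."""
+    times = [i / (n_s - 1) for i in range(n_s)]
+    edges = []
+    for j in range(n_s - 1):
+        for k in range(m):                           # m boundary vertices per slice
+            a, b = (j, k), (j, (k + 1) % m)          # edge AB on slice j
+            c, d = (j + 1, k), (j + 1, (k + 1) % m)  # matching edge CD on slice j + 1
+            edges += [(a, c), (a, d), (b, c)]
+    return times, edges
+```
+
+Each pair of consecutive boundary rings thus contributes three time edges per boundary edge, triangulating the envelope between them.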
+
+
+
+Figure 2: The creation of a discretized spacetime element. (a) A 2D element shape offset by ${\Delta s}$ . (b) A single triangle mesh slice. (c) Shear edges (red) and negative space edges (dashed blue). (d) A set of five slices placed along the time axis. (e) The vertices on the boundaries of the slices are joined by time edges. The black edges in (e) define a triangle mesh called the envelope of the element. In practice we use a larger number of slices in (d) and (e).
+
+
+
+Figure 3: A spacetime element with a scripted animation.
+
+### 3.2 Animation
+
+The 3D information constructed above is a parallel extrusion of a slice along the time axis, representing a shape with no scripted animation. We created a simple interactive application for adding animation to spacetime elements, inspired by as-rigid-as-possible shape manipulation [20]. The artist first designates a subset of the slices as keyframes. They can then interactively manipulate any triangulation vertex of a keyframe slice. Any vertex that has been positioned manually has its entire trajectory through the animation computed using spline interpolation. Then, at any other slice, the positions of all other vertices can be interpolated using the as-rigid-as-possible technique. The result is a smoothly animated spacetime volume like the one visualized in Fig. 3.
+
+Unlike data-driven packing methods like PAD [25], methods that allow distortions do not require a large library of distinct elements to generate successful packings. The results in this paper all use fewer than ten input elements, and some use only one. The physical simulation induces deformation to enhance the compatibility of nearby shapes in the final animation.
+
+## 4 INITIAL CONFIGURATION
+
+We begin the packing process by constructing a 3D spacetime volume for the container by extruding its static shape in the time direction. The container is permitted to have internal holes, which are also extruded. The resulting volume is scaled to fit a unit cube. We also shrink each of the spacetime elements, in the spatial dimensions only, to 5-10% of its original size. These shrunken elements are thin enough that we can place them in the container without overlaps.
+
+
+
+Figure 4: A 2D illustration of a guided element. Slices are depicted as black lines and slice vertices as black dots. A spring connects the centermost vertex $\mathbf{x}$ of a slice $s$ to a target point $p$ . (a) The initial shape of a guided element is a polygonal extrusion. (b) The spacetime element deforms but the springs pull it back towards the target points.
+
+The artist can optionally specify trajectories for a subset of the elements, which we call guided elements. A guided element attempts to pass through a sequence of fixed target points in the container, imbuing the animation with a degree of intention and narrative structure. To define a guided element, we designate the triangulation vertex closest to its centroid to be the anchor point for the element. The artist then chooses a set of spacetime target points ${\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{n}$ , with ${\mathbf{p}}_{i} = \left( {{x}_{i},{y}_{i},{t}_{i}}\right)$ , that the anchor should pass through during the animation. In our interface, the artist uses a slider to choose the time ${t}_{i}$ for a target point, and clicks in the container to specify the spatial position $\left( {{x}_{i},{y}_{i}}\right)$ . The artist can also optionally specify scale and orientation at the target points. We require ${t}_{1} = 0$ and ${t}_{n} = 1$ , fixing the initial and final positions of the guided element. We then linearly interpolate the anchor position for each slice based on the target points, and translate the slice so that its anchor lies at the desired position. The red extrusions in Fig. 5a are guided elements.
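+The per-slice anchor interpolation can be sketched as below (a simplified 2D helper under our own naming; targets are $\left( {x, y, t}\right)$ triples with ${t}_{1} = 0$ and ${t}_{n} = 1$ ):
+
+```python
+def anchor_position(targets, t):
+    """Linearly interpolate a guided element's anchor position at time t
+    from target points (x, y, t_i)."""
+    targets = sorted(targets, key=lambda p: p[2])
+    for (x0, y0, t0), (x1, y1, t1) in zip(targets, targets[1:]):
+        if t0 <= t <= t1:
+            w = (t - t0) / (t1 - t0)
+            return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))
+    raise ValueError("t outside [0, 1]")
+```
+
+Each slice at time $t$ is then translated so that its anchor vertex lies at the interpolated position.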
+
+If the artist wishes to create a looping animation, the $\left( {{x}_{i},{y}_{i}}\right)$ position for target points ${\mathbf{p}}_{1}$ and ${\mathbf{p}}_{n}$ must match up, either for a single guided element or across elements. In Fig. 5 the two guided elements form a connected loop; $\left( {{x}_{1},{y}_{1}}\right)$ for each one matches $\left( {{x}_{n},{y}_{n}}\right)$ for the other.
+
+In this initial configuration, the guided elements abruptly change direction at target points. However, because the slices are connected by springs, the trajectories will smooth out as the simulation runs. Also, the simulation is not constrained to reach each target position exactly. Instead, we attach the anchor to the target using a target-point spring that attempts to draw the element towards it while balancing against the other physical forces in play (Fig. 5b). The strength of these springs determines how closely the element will follow the trajectory.
+
+
+
+Figure 5: The simulation process. (a) Initial placement of shrunken spacetime elements inside a static 2D disc, extruded into a cylindrical spacetime domain. Guided elements are shown in red and unguided elements in blue. (b) A physics simulation causes the spacetime elements to bend. They also grow gradually. (c) The spacetime elements occupy the container space. (d) The simulation stops when elements do not have sufficient negative space in which to grow, or have reached their target sizes.
+
+
+
+Figure 6: Repulsion forces applied to a vertex $\mathbf{x}$ , allowing the element to deform and move away from a neighbouring element.
+
+We then seed the container with an initial packing of non-guided spacetime elements. We generate points within the container at random, using blue-noise sampling [2] to prevent points from being too close together, and assign a spacetime element to each seed point, selecting elements randomly from the input library. Depending upon the desired effect, we either randomize their orientations or give them preferred orientations. We reject any candidate seed point that would cause an unguided element's volume to intersect a guided element's volume.
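+Blue-noise seeding of this kind can be approximated with simple dart throwing (our own sketch; the paper cites [2] for its actual sampler): accept a candidate point only if it keeps a minimum distance $r$ from all previously accepted points.
+
+```python
+import random
+
+def dart_throw(n, r, tries=20000, seed=0):
+    """Sample up to n points in the unit square, each at least r from
+    every other, by naive rejection."""
+    rng = random.Random(seed)
+    pts = []
+    for _ in range(tries):
+        if len(pts) == n:
+            break
+        p = (rng.random(), rng.random())
+        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in pts):
+            pts.append(p)
+    return pts
+```
+
+In the full system, candidates that would place an unguided element's volume inside a guided element's volume are rejected as well.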
+
+Finally we shrink each element, guided and unguided, uniformly in the spatial dimension towards its centroid. These shrunken elements are guaranteed not to intersect one another; as the simulation runs, they will grow and consume the container's negative space, while avoiding collisions. The blue extrusions in Fig. 5a show an initial placement of spacetime elements.
+
+## 5 SIMULATION
+
+We now perform a physics simulation on the spacetime elements and the container. Elements are subject to a number of forces that cause them to simultaneously grow, deform, and repel each other (Fig. 5). Our physics simulation is very similar to that of RepulsionPak [30] — with the exception of the new temporal force, all our forces are the spacetime analogues of the ones used there. In Sect. 5.2 we introduce some new hard constraints that must be applied after every time step.
+
+Note that we must distinguish two notions of time in this simulation. We use $t$ to refer to the time axis of our spacetime volume, which will become the time dimension of the final animation, and ${t}_{\text{sim }}$ to refer to the time domain in which the simulation is taking place.
+
+Repulsion Forces allow elements to push away vertices of neighbouring elements, inducing deformations and transformations that lead to an even distribution of elements within the container (Fig. 6). We compute the repulsion force ${\mathbf{F}}_{\mathrm{{rpl}}}$ on a vertex $\mathbf{x}$ located on a slice boundary as:
+
+$$
+{\mathbf{F}}_{\mathrm{{rpl}}} = {k}_{\mathrm{{rpl}}}\mathop{\sum }\limits_{{i = 1}}^{n}\frac{\mathbf{u}}{\parallel \mathbf{u}\parallel }\frac{1}{\epsilon + \parallel \mathbf{u}{\parallel }^{2}} \tag{1}
+$$
+
+where
+
+${k}_{\mathrm{{rpl}}}$ is the relative strength of ${\mathbf{F}}_{\mathrm{{rpl}}}$ . We set ${k}_{\mathrm{{rpl}}} = {10}$ ;
+
+$n$ is the number of nearest points to $\mathbf{x}$ ;
+
+${\mathbf{x}}_{\mathbf{i}}$ is the $i$ -th closest point on the neighboring element surfaces;
+
+$\mathbf{u} = \mathbf{x} - {\mathbf{x}}_{\mathbf{i}}$ ; and
+
+$\epsilon$ is a soft parameter to avoid instability when $\parallel \mathbf{u}\parallel$ is small. We set $\epsilon = 1$ .
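+In code, Eq. (1) amounts to the following (a 2D sketch with our own names; the real simulation sums over spacetime neighbours located via the collision grid):
+
+```python
+import math
+
+def repulsion_force(x, neighbors, k_rpl=10.0, eps=1.0):
+    """Eq. (1): sum over the nearest neighbouring-surface points x_i of
+    the unit vector u/||u|| scaled by 1/(eps + ||u||^2), where u = x - x_i."""
+    fx = fy = 0.0
+    for xi in neighbors:
+        ux, uy = x[0] - xi[0], x[1] - xi[1]
+        norm = math.hypot(ux, uy)
+        scale = k_rpl / (norm * (eps + norm * norm))
+        fx += ux * scale
+        fy += uy * scale
+    return fx, fy
+```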
+
+Since the simulation operates in the spacetime domain, vertex $\mathbf{x}$ accumulates repulsion forces from points at various time positions. To locate the nearest points on neighbouring elements, we use a collision grid data structure, described in greater detail in Sect. 5.1.
+
+Edge Forces allow elements to deform in response to repulsion forces. The edges defined in Sect. 3 are used here as springs. Like RepulsionPak, we use a non-physical quadratic spring force. Let ${\mathbf{x}}_{\mathbf{a}}$ and ${\mathbf{x}}_{\mathbf{b}}$ be vertices connected by a spring. Each vertex experiences an edge force ${\mathbf{F}}_{\text{edg }}$ of
+
+$$
+{\mathbf{F}}_{\text{edg }} = {k}_{\text{edg }}\frac{\mathbf{u}}{\parallel \mathbf{u}\parallel }s{\left( \parallel \mathbf{u}\parallel - \ell \right) }^{2} \tag{2}
+$$
+
+where
+
+${k}_{\text{edg }}$ is the relative strength of ${\mathbf{F}}_{\text{edg }}$ . Different classes of spring will have different ${k}_{\text{edg }}$ values;
+
+$\mathbf{u} = {\mathbf{x}}_{\mathbf{b}} - {\mathbf{x}}_{\mathbf{a}};$
+
+$\ell$ is the rest length of the spring; and
+
+$s$ is +1 or -1, according to whether $\left( {\parallel \mathbf{u}\parallel - \ell }\right)$ is positive or negative.
+
+We have five types of springs, with stiffness constants that can be set independently. In our implementation we set ${k}_{\text{edg }}$ to 0.01 for time springs, 0.1 for negative-space springs, and 10 for edge springs, shear springs, and target point springs.
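+A direct transcription of Eq. (2) for the force on ${\mathbf{x}}_{\mathbf{a}}$ , under our own naming:
+
+```python
+import math
+
+def edge_force(xa, xb, rest, k_edg=10.0):
+    """Eq. (2): quadratic spring force on vertex xa from its spring to xb.
+    The sign s makes stretched springs pull and compressed springs push."""
+    ux, uy = xb[0] - xa[0], xb[1] - xa[1]
+    norm = math.hypot(ux, uy)
+    d = norm - rest                      # ||u|| - rest length
+    s = 1.0 if d > 0 else -1.0
+    mag = k_edg * s * d * d              # non-physical quadratic magnitude
+    return (mag * ux / norm, mag * uy / norm)
+```
+
+The quadratic term makes the restoring force grow faster than a linear (Hookean) spring as the deviation from rest length increases.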
+
+Overlap forces resolve a vertex penetrating a neighboring spacetime element. Overlaps can occur later in the simulation when negative space is limited. Once we detect a penetration, we temporarily disable the repulsion force on vertex $\mathbf{x}$ , and apply an overlap force ${\mathbf{F}}_{\text{ovr }}$ to push it out:
+
+$$
+{\mathbf{F}}_{\text{ovr }} = {k}_{\text{ovr }}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {{\mathbf{p}}_{i} - \mathbf{x}}\right) \tag{3}
+$$
+
+
+
+Figure 7: An illustration of the temporal force. The vertices in slice ${s}_{i}$ are drawn back towards time $t$ .
+
+where
+
+${k}_{\text{ovr }}$ is the relative strength of ${\mathbf{F}}_{\text{ovr }}$ . We set ${k}_{\text{ovr }} = 5$ ;
+
+$n$ is the number of slice triangles that have $\mathbf{x}$ as a vertex; and
+
+${\mathbf{p}}_{\mathbf{i}}$ is the centroid of the $i$ -th slice triangle incident on $\mathbf{x}$ .
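+Eq. (3) is a straightforward sum towards the incident triangle centroids; a minimal 2D sketch (our own names):
+
+```python
+def overlap_force(x, centroids, k_ovr=5.0):
+    """Eq. (3): push a penetrating vertex x towards the centroids p_i of
+    the slice triangles incident on it."""
+    fx = sum(k_ovr * (px - x[0]) for px, _ in centroids)
+    fy = sum(k_ovr * (py - x[1]) for _, py in centroids)
+    return fx, fy
+```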
+
+Boundary forces keep vertices inside the container. If an element vertex $\mathbf{x}$ is outside the container, the boundary force ${\mathbf{F}}_{\text{bdr }}$ moves it towards the closest point on the container's boundary by an amount proportional to the distance to the boundary:
+
+$$
+{\mathbf{F}}_{\mathrm{{bdr}}} = {k}_{\mathrm{{bdr}}}\left( {{\mathbf{p}}_{\mathbf{b}} - \mathbf{x}}\right) \tag{4}
+$$
+
+where
+
+${k}_{\text{bdr }}$ is the relative strength of ${\mathbf{F}}_{\text{bdr }}$ . We set ${k}_{\text{bdr }} = 5$ ; and
+
+${\mathbf{p}}_{\mathbf{b}}$ is the closest point on the target container to $\mathbf{x}$ .
+
+Torsional forces allow an element's slices to be given preferred orientations, to which they attempt to return. Consider a vertex $\mathbf{x}$ of a slice, and let ${\mathbf{c}}_{r}$ be the slice’s centre of mass in its undeformed state. We define the rest orientation of $\mathbf{x}$ as the orientation of the vector ${\mathbf{u}}_{r} = \mathbf{x} - {\mathbf{c}}_{r}$ . During simulation we compute the current centre of mass $\mathbf{c}$ of the slice and let $\mathbf{u} = \mathbf{x} - \mathbf{c}$ . Then the torsional force ${\mathbf{F}}_{\text{tor }}$ is
+
+$$
+{\mathbf{F}}_{\text{tor }} = \left\{ \begin{array}{ll} {k}_{\text{tor }}{\mathbf{u}}^{ \bot }, & \text{ if }\theta > 0 \\ - {k}_{\text{tor }}{\mathbf{u}}^{ \bot }, & \text{ if }\theta < 0 \end{array}\right. \tag{5}
+$$
+
+where
+
+${k}_{\text{tor }}$ is the relative strength of ${\mathbf{F}}_{\text{tor }}$ . We set ${k}_{\text{tor }} = {0.1}$ ;
+
+$\theta$ is the signed angle between ${\mathbf{u}}_{r}$ and $\mathbf{u}$ ; and
+
+${\mathbf{u}}^{ \bot }$ is a unit vector rotated ${90}^{ \circ }$ counterclockwise relative to $\mathbf{u}$ .
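Eq. (5) leaves the direction convention for $\theta$ implicit; in the sketch below we take $\theta$ as the signed angle from the current $\mathbf{u}$ to the rest orientation ${\mathbf{u}}_{r}$, which makes the force restoring. The 2D layout and names are our assumptions.

```python
import math

def torsional_force(x, c_rest, c_cur, k_tor=0.1):
    """Torsional force of Eq. (5) on a 2D slice vertex x.

    u_r = x - c_rest is the rest orientation, u = x - c_cur the current
    one; the force k_tor * u_perp rotates the vertex back towards u_r.
    """
    ur = (x[0] - c_rest[0], x[1] - c_rest[1])
    u = (x[0] - c_cur[0], x[1] - c_cur[1])
    theta = math.atan2(u[0] * ur[1] - u[1] * ur[0],   # signed angle u -> u_r
                       u[0] * ur[0] + u[1] * ur[1])
    norm = math.hypot(*u)
    u_perp = (-u[1] / norm, u[0] / norm)              # u rotated 90 deg CCW
    sign = 1.0 if theta > 0 else (-1.0 if theta < 0 else 0.0)
    return (sign * k_tor * u_perp[0], sign * k_tor * u_perp[1])
```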
+
+Temporal forces prevent slices from drifting too far from their original positions along the time axis (Fig. 7), which could cause unexpected accelerations and decelerations in the final animation. For every vertex, we compute the temporal force ${\mathbf{F}}_{\text{tmp }}$ as
+
+$$
+{\mathbf{F}}_{\mathrm{{tmp}}} = {k}_{\mathrm{{tmp}}}{\mathbf{u}}^{t}\left( {t - {t}^{\prime }}\right) \tag{6}
+$$
+
+where
+
+${k}_{\mathrm{{tmp}}}$ is the relative strength of ${\mathbf{F}}_{\mathrm{{tmp}}}$ . We set ${k}_{\mathrm{{tmp}}} = 1$ ;
+
+$t$ is the initial time of the slice to which the vertex belongs;
+
+${t}^{\prime }$ is the current time value of the vertex; and
+
+${\mathbf{u}}^{t} = \left( {0,0,1}\right)$ .
+
+## Computing total force and numerical integration
+
+The total force on a vertex is the sum of all of the individual forces described above:
+
+$$
+{\mathbf{F}}_{\text{total }} = {\mathbf{F}}_{\mathrm{{rpl}}} + {\mathbf{F}}_{\text{edg }} + {\mathbf{F}}_{\mathrm{{bdr}}} + {\mathbf{F}}_{\text{ovr }} + {\mathbf{F}}_{\text{tor }} + {\mathbf{F}}_{\text{tmp }} \tag{7}
+$$
+
+
+
+Figure 8: (a) The triangles that connect consecutive slices define the envelope of the element. The midpoints of these triangles are stored in a collision grid. (b) A 2D visualization of the region of collision grid cells around a query point $\mathbf{x}$ in which repulsion and overlap forces will be computed. In the central blue region, we check overlaps and compute exact repulsion forces relative to closest points on triangles of neighbouring elements; in the peripheral red region we do not compute overlaps, and repulsion forces are approximated using triangle midpoints only.
+
+We use explicit Euler integration to simulate the motions of the mesh vertices under the forces described above. Every vertex has a position and a velocity vector; in every iteration, we update velocities using forces, and update positions using velocities. These updates are scaled by a time step $\Delta {t}_{\text{sim }}$ that we set to 0.01. We cap velocities at $10\Delta {t}_{\text{sim }}$ to dissipate extra energy from the simulation.
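The integration step can be sketched as follows; unit vertex masses and our own in-place data layout are assumed:

```python
import math

def euler_step(positions, velocities, forces, dt=0.01, cap_scale=10.0):
    """One explicit Euler step: update velocities from forces, clamp
    speeds at 10*dt to dissipate excess energy, then update positions.
    Mutates `positions` and `velocities` in place (unit masses)."""
    v_cap = cap_scale * dt
    for x, v, f in zip(positions, velocities, forces):
        for d in range(3):
            v[d] += f[d] * dt
        speed = math.sqrt(sum(c * c for c in v))
        if speed > v_cap:                 # velocity cap
            for d in range(3):
                v[d] *= v_cap / speed
        for d in range(3):
            x[d] += v[d] * dt
```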
+
+### 5.1 Spatial Queries
+
+Repulsion and overlap forces rely on being able to find points on neighbouring elements that are close to a given query vertex. To find these points, we use each element's envelope, a triangle mesh implied by the construction in Sect. 3. Each triangle of the envelope is made from two time edges and one edge of a slice boundary, as shown in Fig. 8a. Given a query vertex $\mathbf{x}$ , we need to find nearby envelope triangles that belong to other elements.
+
+To accelerate this computation, we first find and store the centroids of every element's envelope triangles in a uniformly subdivided 3D grid that surrounds the spacetime volume of the animation. In using this data structure, we make two simplifying assumptions: first, that because envelope triangles are small, their centroids are adequate for finding triangles near a given query point; and second, that the repulsion force from a more distant triangle is well approximated by a force from its centroid.
+
+Given a query vertex $\mathbf{x}$ , we first find all envelope triangle centroids in nearby grid cells that belong to other elements. For each centroid, we use a method described by Ericson [12] to find the point on its triangle closest to $\mathbf{x}$ and include that point in the list of points in Eq. (1). These nearby triangles will also be used to test for interpenetration of elements. We then find centroids in more distant grid cells, and add those centroids directly to the Eq. (1) list, skipping the closest point computation. In our system we set the cell size to 0.04, giving a ${25} \times {25} \times {25}$ grid around the simulation volume. A query point's nearby grid cells are the 27 cells making up a $3 \times 3 \times 3$ block around the cell containing the point; the more distant cells are the 98 that make up the outer shell of the $5 \times 5 \times 5$ block around that (Fig. 8).
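The grid lookup described above can be sketched as follows; the cell indexing and function names are our assumptions, not the paper's code:

```python
def cell_of(p, cell_size=0.04, n=25):
    """Map a point in the normalized spacetime volume to its grid cell."""
    return tuple(min(int(c / cell_size), n - 1) for c in p)

def query_cells(cell, n=25):
    """Split the cells around `cell` into the 3x3x3 'near' block, where
    exact closest points and overlaps are computed, and the outer shell
    of the 5x5x5 block, where centroids alone approximate repulsion."""
    near, far = [], []
    cx, cy, cz = cell
    for dx in range(-2, 3):
        for dy in range(-2, 3):
            for dz in range(-2, 3):
                c = (cx + dx, cy + dy, cz + dz)
                if not all(0 <= v < n for v in c):
                    continue             # clipped at the grid boundary
                if max(abs(dx), abs(dy), abs(dz)) <= 1:
                    near.append(c)       # exact closest-point + overlap tests
                else:
                    far.append(c)        # centroid-only repulsion
    return near, far
```

For an interior cell this yields the 27 near cells and 98 far cells mentioned in the text; near the grid boundary both lists are clipped.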
+
+### 5.2 Slice Constraints
+
+There are three hard geometric constraints on the configuration of slices, which must be enforced throughout the simulation. Each of the following constraints is reapplied after each physical simulation step described above.
+
+
+
+Figure 9: a) End-to-end constraint: slice ${s}_{1}$ and ${s}_{n}$ , located at $t = 0$ and $t = 1$ , should never change their $t$ positions but can change their $x, y$ positions. b) Simultaneity constraint: all vertices on the same slice should have the same $t$ position. c) Loop constraint with a single element: the $x, y$ positions for ${s}_{1}$ and ${s}_{n}$ must match. d) Loop constraint with two elements: the $x, y$ position for ${s}_{1}$ for one element matches the $x, y$ position for ${s}_{n}$ of the other.
+
+1. End-to-end constraint: A spacetime element must be present for the full length of the animation from $t = 0$ to $t = 1$ . After every simulation step, every vertex belonging to an element's first slice has its $t$ value set to 0, and every vertex of the last slice has its $t$ value set to 1 (Fig. 9a).
+
+2. Simultaneity constraint: During simulation, the vertices of a slice can drift away from each other in time, which could lead to rendering artifacts in the animation. After every simulation step, we compute the average $t$ value of all vertices belonging to each slice other than the first and last slices, and snap all the slice’s vertices to that $t$ value (Fig. 9b).
+
+3. Loop constraint: AnimationPak optionally supports looping animations. When looping is enabled, we must ensure that the $t = 0$ and $t = 1$ planes of the spacetime container are identical. The $t = 1$ slice of every element ${e}_{1}$ must then coincide with the $t = 0$ slice of some element ${e}_{2}$ . We can have ${e}_{1} = {e}_{2}$ (Fig. 9c), but more general loops are possible in which the elements arrive at a permutation of their original configuration (Fig. 9d). We require only that there is a one-to-one correspondence between the vertices of the $t = 1$ slice of ${e}_{1}$ and the $t = 0$ slice of ${e}_{2}$ . If ${\mathbf{p}}_{1} = \left( {{x}_{1},{y}_{1},1}\right) \in {e}_{1}$ and ${\mathbf{p}}_{2} = \left( {{x}_{2},{y}_{2},0}\right) \in {e}_{2}$ are in correspondence, then after every simulation step we move ${\mathbf{p}}_{1}$ to $\left( {\frac{{x}_{1} + {x}_{2}}{2},\frac{{y}_{1} + {y}_{2}}{2},1}\right)$ and ${\mathbf{p}}_{2}$ to $\left( {\frac{{x}_{1} + {x}_{2}}{2},\frac{{y}_{1} + {y}_{2}}{2},0}\right)$ .
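The three constraints can be re-applied after each simulation step roughly as follows. This is a sketch with our own data layout: each slice is a list of mutable `[x, y, t]` vertices, and corresponding loop slices list their vertices in matching order.

```python
def enforce_constraints(elements, looping_pairs=()):
    """Re-apply the three hard slice constraints of Sect. 5.2.

    `elements` maps an element name to its list of slices;
    `looping_pairs` holds (e1, e2) pairs whose t=1 and t=0 slices
    must coincide."""
    for slices in elements.values():
        # 1. End-to-end: pin the first slice to t=0 and the last to t=1.
        for v in slices[0]:
            v[2] = 0.0
        for v in slices[-1]:
            v[2] = 1.0
        # 2. Simultaneity: snap each interior slice to its mean t value.
        for s in slices[1:-1]:
            t_avg = sum(v[2] for v in s) / len(s)
            for v in s:
                v[2] = t_avg
    # 3. Loop: average matched vertices of e1's t=1 and e2's t=0 slices.
    for e1, e2 in looping_pairs:
        for v1, v2 in zip(elements[e1][-1], elements[e2][0]):
            mx = (v1[0] + v2[0]) / 2.0
            my = (v1[1] + v2[1]) / 2.0
            v1[0], v2[0] = mx, mx
            v1[1], v2[1] = my, my
```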
+
+
+
+Figure 10: A spacetime element shown (a) shrunken at the beginning of the simulation, and (b) grown later in the simulation. (c) When two elements overlap somewhere along their lengths, they are temporarily prohibited from growing there.
+
+### 5.3 Element Growth and Stopping Criteria
+
+We begin the spacetime packing process with all element slices scaled down in $x$ and $y$ , guaranteeing that elements do not overlap. As the simulation progresses we gradually grow the slices, consuming the negative space around them (Fig. 10a, b). A perfect packing would fill the spacetime container completely with the elements. Because each element wraps the underlying animated shape with a narrow channel of negative space, this would yield an even distribution of shapes in the resulting animation. For real-world elements, the goal of minimizing deformation of irregular element shapes will lead to imperfect packings with additional pockets of negative space.
+
+Element growth: We induce elements to grow spatially by gradually increasing the rest lengths of their springs. The initial rest length of each spring is determined by the vertex positions in the shrunken version of the spacetime element constructed in Sect. 4. We allow an element's slices to grow independently of each other, which complicates the calculation of new rest lengths for time springs. Therefore, we create a duplicate of every shrunken spacetime element in the container, with a straight extrusion for unguided elements, and a polygonal extrusion for guided elements. This duplicate is not part of the simulation; it serves as a reference. Every element slice maintains a current scaling factor $g$ . When we wish to grow the slice, we increase its $g$ value. We can compute new rest lengths for all springs by scaling every slice of the reference element by a factor of $g$ relative to the slice’s centroid, and measuring distances between the scaled vertex positions. These new rest lengths are then used as the $\ell$ values in Equation 2.
+
+Every element slice has its $g$ value initialized to 1. After every simulation step, if none of the slice's vertices were found to overlap other elements we increase that slice’s $g$ by $0.001\Delta {t}_{\text{sim }}$ , where $\Delta {t}_{\text{sim }}$ is the simulation time step. If any overlaps are found, then that slice's growth is instead paused to allow overlap and repulsion forces to give it more room to grow in later iterations. This approach can cause elements to fluctuate in size during the course of an animation, as slices compete for shifting negative space (Fig. 10).
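The per-slice growth rule amounts to the following (names and list layout are ours):

```python
def update_growth(slice_g, slice_overlapping, dt_sim=0.01, rate=0.001):
    """Grow each slice's scale factor g by rate * dt_sim, unless any of
    its vertices currently overlaps another element, in which case the
    slice's growth is paused for this step (Sect. 5.3)."""
    return [g if overlapping else g + rate * dt_sim
            for g, overlapping in zip(slice_g, slice_overlapping)]
```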
+
+Stopping Criteria: We halt the simulation when the space between neighbouring elements drops below a threshold. When calculating repulsion forces, we find the distance from every slice vertex to the closest point in a neighbouring element. The minimum of these distances over all vertices in an element slice determines that slice's closest distance to neighbouring elements. We halt the simulation when the maximum per-slice distance falls below 0.006 (relative to a normalized container size of 1). That is, we stop when every slice is touching (or nearly touching) at least one other element.
+
+In some cases it can be useful to stop early based on cumulative element growth. In that case, we set a separate threshold for the slice scaling factors $g$ described above, and stop when the $g$ values of all slices exceed that threshold.
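The distance-based stopping test can be sketched as follows (a minimal illustration; the nested-list input layout is our assumption):

```python
def should_stop(per_vertex_dists, threshold=0.006):
    """Stopping test of Sect. 5.3. `per_vertex_dists` holds, for each
    slice, the distances from its vertices to the closest points on
    neighbouring elements. We stop when the maximum per-slice minimum
    falls below the threshold, i.e. when every slice is touching (or
    nearly touching) at least one other element."""
    per_slice = [min(dists) for dists in per_vertex_dists]
    return max(per_slice) < threshold
```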
+
+## 6 RENDERING
+
+The result of the simulation described above is a packing of spacetime elements within a spacetime container. We can render an animation frame-by-frame by cutting through this volume at evenly spaced $t$ values from $t = 0$ to $t = 1$ . For our results, we typically render 500-frame animations.
+
+During simulation, a given spacetime element's slices may drift from their original creation times. However, time springs keep the sequence monotonic, and the simultaneity constraint ensures that every slice is fixed to one $t$ value. To render this element at an arbitrary frame time ${t}_{f} \in \left\lbrack {0,1}\right\rbrack$ , we find the two consecutive slices whose time values bound the interval containing ${t}_{f}$ and linearly interpolate the vertex positions of the triangulations at those two slices to obtain a new triangulation at ${t}_{f}$ . We can then compute a deformed copy of the original element paths by "replaying" the barycentric coordinates computed in Sect. 3 relative to the displaced triangulation vertices. We repeat this process for every spacetime element to obtain a rendering of the frame at ${t}_{f}$ .
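The per-frame slice interpolation, minus the barycentric replay of the element paths, can be sketched as follows (names and tuple layout are ours):

```python
def render_slice(slice_times, slice_verts, t_f):
    """Interpolate a triangulation at frame time t_f from the two
    consecutive slices bounding it (Sect. 6). `slice_times` must be
    monotonic over [0, 1]; `slice_verts[i]` lists the (x, y) vertices
    of slice i in corresponding order across slices."""
    for i in range(len(slice_times) - 1):
        t0, t1 = slice_times[i], slice_times[i + 1]
        if t0 <= t_f <= t1:
            w = (t_f - t0) / (t1 - t0) if t1 > t0 else 0.0
            return [((1 - w) * x0 + w * x1, (1 - w) * y0 + w * y1)
                    for (x0, y0), (x1, y1) in zip(slice_verts[i],
                                                  slice_verts[i + 1])]
    raise ValueError("t_f outside the element's time range")
```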
+
+This interpolation process can occasionally lead to small artifacts in the animation. A rendered frame can fall between the discretely sampled slices for two elements at an intermediate time where physical forces were not computed explicitly. It is therefore possible for neighbouring elements to overlap briefly during such intervals.
+
+## 7 IMPLEMENTATION AND RESULTS
+
+The core AnimationPak algorithm consists of a C++ program that reads in text files describing the spacetime elements and the container, and outputs raster images of animation frames.
+
+Large parts of AnimationPak can benefit from parallelism. In our implementation we update the cells of the collision grid (Sect. 5.1) in parallel by distributing them across a pool of threads. When the updated collision grid is ready, we distribute the spacetime elements over threads. We calculate forces, perform numerical integration, and apply the end-to-end and simultaneity constraints for each element in parallel. We must process any loop constraints afterwards, as they can affect vertices in two separate elements.
+
+We created the results in this paper using a Windows PC with a ${3.60}\mathrm{{GHz}}$ Intel i7-4790 processor and ${16}\mathrm{{GB}}$ of RAM. We used a pool of eight threads, corresponding to the number of logical CPU cores. Table 1 shows statistics for our results. Each packing has tens of thousands of vertices and hundreds of thousands of springs, and requires about an hour to complete. We enable the loop constraint in all results. The paper shows selected frames from the results; see the accompanying videos for full animations.
+
+Fig. 1 is an animation of aquatic fauna featuring two penguins as guided elements. During one loop the penguins move clockwise around the container, swapping positions at the top and the bottom. Each ends at the other's starting point, demonstrating a loop constraint between distinct elements. All elements are animated, as shown in Fig. 1a. Note the coupling between the Pac-Man fish's mouth and the shark's tail on the left side of the second and fourth frames.
+
+A snake chases a bird around an annular container in Fig. 11, demonstrating a container with a hole and giving a simple example of the narrative potential of animated packings. Fig. 12 animates the giraffe-to-penguin illusion shown as a static packing in Repulsion-Pak. This example uses torsional forces to control slice orientations.
+
+Fig. 13 offers a direct comparison between packings computed using Centroidal Area Voronoi Diagrams (CAVD) [33], the spectral approach [10], and AnimationPak. These packings use stars that rotate and pulsate. For each method we show the initial frame $\left( {t = 0}\right)$ and the halfway point $\left( {t = {0.5}}\right)$ . The CAVD approach produces a satisfactory, albeit loosely coupled, packing for the first frame, but because the algorithm was not intended to work on animated elements, the evenness of the packing quickly degrades in later frames. The spectral approach is much better than CAVD, but its animated elements still have fixed spacetime shapes and can only translate and rotate to improve their fit. Repulsion forces and deformation allow AnimationPak to achieve a tighter packing that persists across the animation, including gear-like meshing of oppositely-rotating stars.
+
+Fig. 14a is a static packing of a lion created by an artist and used as an example in FLOWPAK [32]. In Fig. 14b, we reproduce it with animated elements for the mane. The orientations of elements follow a vector field inside the container, and are maintained during the animation by torsional forces. We simulate only half of the packing and reflect it to create the other half. The facial features were added manually in a post-processing step.
+
+Fig. 15 compares a static 2D packing created by RepulsionPak with a frame from an animated packing created by AnimationPak. The extra negative space in AnimationPak comes partly from the trade-off between temporal coherence and tight packing, and partly from the lack of secondary elements, which were used in a second pass in RepulsionPak to fill pockets of negative space.
+
+Fig. 16 emphasizes the trade-off between temporal coherence and evenness of negative space by creating two animations with different time spring stiffnesses. In (a), the time springs are 100 times stronger than in (b). The resulting packing has larger pockets of negative space, but the accompanying video shows that the animation is smoother. The packing in (b) is tighter, but the elements must move frantically to maintain that tightness.
+
+Fig. 17 is a failed attempt to animate a "blender". The packing has a beam that rotates clockwise and a number of small unguided circles. In a standard physics simulation we might expect the beam to push the circles around the container, giving each one a helical spacetime trajectory. Instead, as elements grow, repulsion forces cause circles to explore the container boundary, where they discover the lower-energy solution of slipping past the edge of the beam as it sweeps past. If we extend the beam to the full diameter of the container, consecutive slices simply teleport across the beam, hiding the moment of overlap in the brief time interval where physical forces were not computed. AnimationPak is not directly comparable to a 3D physics simulation; it is better suited to improving the packing quality of an animation that has already been blocked out at a high level.
+
+## 8 CONCLUSION AND FUTURE WORK
+
+We introduced AnimationPak, a system for generating animated packings by filling a static container with animated elements. Every animated 2D element is represented by an extruded spacetime tube. We discretize elements into triangle mesh slices connected by time edges, and deform element shapes and animations using a spacetime physical simulation. The result is a temporally coherent 2D animation of elements that attempt both to perform their scripted motions and to consume the negative space of the container. We show a variety of results where 2D elements move around inside the container.
+
+We see a number of opportunities for improvements and extensions to AnimationPak:
+
+- Because we use linear interpolation to synthesize an element's shape between slices, we require elements not to undergo changes in topology. More sophisticated representations of vector shapes, such as that of Dalstein et al. [11], could support interpolations between slices with complex topological changes. We would also need to synthesize a watertight envelope around the animating element in order to compute overlap and repulsion forces.
+
+
+
+Figure 11: A snake chasing a bird through a packing of animals. The snake and bird are both guided elements that move clockwise around the annular container.
+
+
+
+Figure 12: Penguins turning into giraffes. The penguins animate by rotating in place. Torsional forces are used to preserve element orientations. Frames are taken at $t = 0, t = {0.125}, t = {0.25}, t = {0.375}$ , and $t = {0.5}$ .
+
+
+
+Figure 13: A comparison of (a) Centroidal Area Voronoi Diagrams (CAVDs) [33], (b) spectral packing [10], and (c) AnimationPak. We show two frames for each method, taken at $t = 0$ and $t = {0.5}$ . The CAVD packing starts with evenly distributed elements but the packing degrades as the animation progresses. The spectral approach improves upon CAVD with better consistency, but still leaves significant pockets of negative space. The AnimationPak packing has less negative space, and distributes it more evenly.
+
+
+
+Figure 14: (a) A static packing made by an artist, taken from StockUnlimited. (b) The first frame from an AnimationPak packing. (c) The input animated elements and the container shape with a vector field. Torsional forces keep elements oriented in the direction of the vector field. We simulate half of the lion's mane and render the other half using a reflection, and add the facial features by hand.
+
+Figure 15: (a) A static packing created with RepulsionPak. (b) The first frame of a comparable AnimationPak packing. The input spacetime elements are shown on the right. The AnimationPak packing has more negative space because we must trade off between temporal coherence and packing density.
+
+Table 1: Data and statistics for the results in the paper. The table shows the number of elements, the number of vertices, the number of springs, the number of envelope triangles, and the running time of the simulation in hours, minutes, and seconds.
+
+| Packing | Elements | Vertices | Springs | Triangles | Time |
+| --- | --- | --- | --- | --- | --- |
+| Aquatic animals (Fig. 1) | 37 | 97,800 | 623,634 | 106,000 | 01:06:35 |
+| Snake and bird (Fig. 11) | 37 | 58,700 | 370,571 | 58,700 | 01:01:32 |
+| Penguin to giraffe (Fig. 12) | 33 | 124,300 | 824,164 | 143,000 | 01:19:50 |
+| Heart stars (Fig. 13c) | 26 | 85,200 | 598,218 | 85,800 | 00:23:08 |
+| Animals (Fig. 15b) | 34 | 69,600 | 444,337 | 69,800 | 01:00:19 |
+| Lion (Fig. 14b) | 16 | 39,400 | 236,086 | 41,800 | 00:41:56 |
+
+
+
+Figure 16: (a) One frame from Fig. 1. (b) The same packing with time springs that are $1\%$ as stiff. Reducing the stiffness of time springs leads to a more even packing with less negative space, but the animated elements must move frantically to preserve packing density. The spacetime trajectories of the highlighted fish in (a) and (b) are shown in (c). The orange fish in (b) exhibits more high frequency fluctuation in its position.
+
+- We would like to improve the performance of the physical simulation. One option may be to increase the resolution of element meshes progressively during simulation. Early in the process, elements are small and distant from each other, so lower-resolution meshes may suffice for computing repulsion forces.
+
+- As noted in Sect. 6 and Fig. 17, our discrete simulation can miss element overlaps that occur between slices. A more robust continuous collision detection (CCD) algorithm such as that of Brochu et al. [3] could help us find all collisions between the envelopes of spacetime elements.
+
+- In RepulsionPak [30], an additional pass with small secondary elements had a significant positive effect on the distribution of negative space in the final packing. It may be possible to identify stretches of unused spacetime that can be filled opportunistically with additional elements. The challenge would be to locate tubes of empty space that run the full duration of the animation, always of sufficient diameter to accommodate an added element.
+
+- Like the spectral method [10], and unlike Animosaics [33], AnimationPak can pack animated elements into a static container. We would like to extend our work to also handle animated containers. This extension would certainly affect the initial element placement, which would need to ensure that elements are placed fully inside the spacetime volume of the container. It could also lead to undesirable scaling of elements if the container area changes too much. It would be interesting to investigate whether we could adapt to changes in area by adding and removing elements unobtrusively during the animation, in the style of Animosaics.
+
+
+
+Figure 17: A failure case for AnimationPak, consisting of a rotating beam and a number of small circles. Instead of being dragged around by the beam, the circles dodge it entirely by sneaking through the gap between the beam and the container. The red circle demonstrates one such maneuver.
+
+- AnimationPak implements forces and constraints geared towards spacetime animation, but many of the same ideas could be adapted to develop a deformation-driven method for packing purely spatial 3D objects into a 3D container. We would like to evaluate the expressivity and visual quality of deformation-driven 3D packings in comparison to other 3D packing techniques.
+
+- Our physical simulation relies in several places on our method of constructing and animating spacetime elements. Our time edges make use of the one-to-one correspondence between boundary vertices of adjacent slices in order to construct a mesh surface that bounds each element. We also make direct use of that correspondence when rendering, to interpolate new triangulations between existing slices. We would like AnimationPak to be more agnostic about the method used to create animated elements. Given a "generic" animated element, we can easily compute independent triangulated slices, but we would need robust algorithms to join them into an extrusion and interpolate within that extrusion later.
+
+- Saputra et al. [30] previously studied a set of measurements inspired by spatial statistics for evaluating the evenness of the distribution of negative space in a static packing. While their measurements extend naturally to three purely spatial dimensions, it is not clear whether they can be adapted to our spacetime context. We would like to investigate spatial statistics for the quality of animated packings that correlate with human perceptual judgments.
+
+- There are many examples of static two-dimensional packings created by artists, which can serve as inspiration for an algorithm like RepulsionPak. We were unable to find an equivalent set of animated examples, probably because they would be difficult and time-consuming to create by hand. We would like to engage with artists to understand the aesthetic value and limitations of AnimationPak.
+
+## ACKNOWLEDGMENTS
+
+We thank the reviewers for helpful feedback. Thanks to Danny Kaufman for discussions about spacetime optimizations and physics simulations. This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and through a generous gift from Adobe.
+
+## REFERENCES
+
+[1] M. Attene. Shapes in a box: Disassembling 3D objects for efficient packing and fabrication. Computer Graphics Forum, 34(8):64-76, 2015. doi: 10.1111/cgf.12608
+
+[2] R. Bridson. Fast Poisson disk sampling in arbitrary dimensions. In ACM SIGGRAPH 2007 Sketches, SIGGRAPH '07, pp. 22-es. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1278780.1278807
+
+[3] T. Brochu, E. Edwards, and R. Bridson. Efficient geometrically exact continuous collision detection. ACM Trans. Graph., 31(4), July 2012. doi: 10.1145/2185520.2185592
+
+[4] A. Bruderlin and L. Williams. Motion signal processing. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '95, pp. 97-104. Association for Computing Machinery, New York, NY, USA, 1995. doi: 10.1145/218380.218421
+
+[5] E. Carra, C. Santoni, and F. Pellacini. gMotion: A spatio-temporal grammar for the procedural generation of motion graphics. In Proceedings of Graphics Interface 2018, GI 2018, pp. 100-107. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2018. doi: 10.20380/GI2018.14
+
+[6] W. Chen, Y. Ma, S. Lefebvre, S. Xin, J. Martínez, and W. Wang. Fabricable tile decors. ACM Trans. Graph., 36(6):175:1-175:15, Nov. 2017. doi: 10.1145/3130800.3130817
+
+[7] W. Chen, X. Zhang, S. Xin, Y. Xia, S. Lefebvre, and W. Wang. Synthesis of filigrees for digital fabrication. ACM Trans. Graph., 35(4):98:1- 98:13, July 2016. doi: 10.1145/2897824.2925911
+
+[8] M. G. Choi, M. Kim, K. L. Hyun, and J. Lee. Deformable motion: Squeezing into cluttered environments. Computer Graphics Forum, 30(2):445-453, 2011. doi: 10.1111/j.1467-8659.2011.01889.x
+
+[9] M. F. Cohen. Interactive spacetime control for animation. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '92, pp. 293-302. Association for Computing Machinery, New York, NY, USA, 1992. doi: 10.1145/133994.134083
+
+[10] K. Dalal, A. W. Klein, Y. Liu, and K. Smith. A spectral approach to NPR packing. In Proceedings of the 4th International Symposium on Non-photorealistic Animation and Rendering, NPAR '06, pp. 71-78. ACM, New York, NY, USA, 2006. doi: 10.1145/1124728.1124741
+
+[11] B. Dalstein, R. Ronfard, and M. van de Panne. Vector graphics animation with time-varying topology. ACM Trans. Graph., 34(4), July 2015.
+
+[12] C. Ericson. Chapter 5: Basic primitive tests. In C. Ericson, ed., Real-Time Collision Detection, The Morgan Kaufmann Series in Interactive 3D Technology, pp. 125-233. Morgan Kaufmann, San Francisco, 2005. doi: 10.1016/B978-1-55860-732-3.50010-3
+
+[13] R. Gal, O. Sorkine, T. Popa, A. Sheffer, and D. Cohen-Or. 3D collage: Expressive non-realistic modeling. In Proceedings of the 5th International Symposium on Non-photorealistic Animation and Rendering, NPAR '07, pp. 7-14. ACM, New York, NY, USA, 2007. doi: 10.1145/1274871.1274873
+
+[14] M. Gleicher. Motion path editing. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, I3D '01, pp. 195-202. Association for Computing Machinery, New York, NY, USA, 2001. doi: 10.1145/364338.364400
+
+[15] A. Hausner. Simulating decorative mosaics. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, pp. 573-580. Association for Computing Machinery, New York, NY, USA, 2001. doi: 10.1145/383259.383327
+
+[16] S. Hiller, H. Hellwig, and O. Deussen. Beyond stippling-methods for distributing objects on the plane. Computer Graphics Forum, 22(3):515-522, 2003. doi: 10.1111/1467-8659.00699
+
+[17] E. S. L. Ho, T. Komura, and C.-L. Tai. Spatial relationship preserving character motion adaptation. ACM Trans. Graph., 29(4), July 2010. doi: 10.1145/1778765.1778770
+
+[18] C.-Y. Hsu, L.-Y. Wei, L. You, and J. J. Zhang. Autocomplete element fields. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. Association for Computing Machinery, New York, NY, USA, 2020.
+
+[19] W. Hu, Z. Chen, H. Pan, Y. Yu, E. Grinspun, and W. Wang. Surface mosaic synthesis with irregular tiles. IEEE Transactions on Visualization and Computer Graphics, 22(3):1302-1313, March 2016. doi: 10.1109/TVCG.2015.2498620
+
+[20] T. Igarashi, T. Moscovich, and J. F. Hughes. As-rigid-as-possible shape manipulation. ACM Trans. Graph., 24(3):1134-1141, July 2005. doi: 10.1145/1073204.1073323
+
+[21] D. Kang, Y.-J. Ohn, M.-H. Han, and K.-H. Yoon. Animation for ancient tile mosaics. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, NPAR '11, pp. 157-166. ACM, New York, NY, USA, 2011. doi: 10.1145/ 2024676.2024701
+
+[22] C. S. Kaplan. Animated isohedral tilings. In Proceedings of Bridges 2019: Mathematics, Art, Music, Architecture, Education, Culture, pp. 99-106. Tessellations Publishing, Phoenix, Arizona, 2019. Available online at http://archive.bridgesmathart.org/2019/ bridges2019-99.pdf.
+
+[23] J. Kim and F. Pellacini. Jigsaw image mosaics. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '02, pp. 657-664. Association for Computing Machinery, New York, NY, USA, 2002. doi: 10.1145/566570.566633
+
+[24] M. Kim, Y. Hwang, K. Hyun, and J. Lee. Tiling motion patches. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '12, pp. 117-126. Eurographics Association, Goslar, DEU, 2012.
+
+[25] K. C. Kwan, L. T. Sinn, C. Han, T.-T. Wong, and C.-W. Fu. Pyramid of arclength descriptor for generating collage of shapes. ACM Trans. Graph., 35(6):229:1-229:12, Nov. 2016. doi: 10.1145/2980179. 2980234
+
+[26] Y. Liu and O. Veksler. Animated classic mosaics from video. In Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II, ISVC '09, pp. 1085-1096. Springer-Verlag, Berlin, Heidelberg, 2009. doi: 10.1007/978-3-642-10520-3_104
+
+[27] C. Ma, L.-Y. Wei, and X. Tong. Discrete element textures. ACM Trans. Graph., 30(4), July 2011. doi: 10.1145/2010324.1964957
+
+[28] Y. Ma, Z. Chen, W. Hu, and W. Wang. Packing irregular objects in 3D space via hybrid optimization. Computer Graphics Forum, 37(5):49-59, 2018. doi: 10.1111/cgf.13490
+
+[29] M. Oshita. Lattice-guided human motion deformation for collision avoidance. In Proceedings of the Tenth International Conference on Motion in Games, MIG '17. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3136457.3136475
+
+[30] R. A. Saputra, C. S. Kaplan, and P. Asente. RepulsionPak: Deformation-driven element packing with repulsion forces. In Proceedings of the 44th Graphics Interface Conference, GI '18. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2018.
+
+[31] R. A. Saputra, C. S. Kaplan, and P. Asente. Improved deformation-driven element packing with RepulsionPak. IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2019. doi: 10.1109/TVCG.2019.2950235
+
+[32] R. A. Saputra, C. S. Kaplan, P. Asente, and R. Měch. FLOWPAK: Flow-based ornamental element packing. In Proceedings of the 43rd Graphics Interface Conference, GI '17, pp. 8-15. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2017. doi: 10.20380/GI2017.02
+
+[33] K. Smith, Y. Liu, and A. Klein. Animosaics. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '05, pp. 201-208. ACM, New York, NY, USA, 2005. doi: 10.1145/1073368.1073397
+
+[34] A. Witkin and M. Kass. Spacetime constraints. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '88, pp. 159-168. Association for Computing Machinery, New York, NY, USA, 1988. doi: 10.1145/54852.378507
+
+[35] A. Witkin and Z. Popović. Motion warping. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '95, pp. 105-108. Association for Computing Machinery, New York, NY, USA, 1995. doi: 10.1145/218380.218422
+
+[36] J. Xu and C. S. Kaplan. Calligraphic packing. In Proceedings of Graphics Interface 2007, GI '07, pp. 43-50. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1268517.1268527
+
+[37] J. Zehnder, S. Coros, and B. Thomaszewski. Designing structurally-sound ornamental curve networks. ACM Trans. Graph., 35(4), July 2016. doi: 10.1145/2897824.2925888
+
+[38] C. Zou, J. Cao, W. Ranaweera, I. Alhashim, P. Tan, A. Sheffer, and H. Zhang. Legible compact calligrams. ACM Trans. Graph., 35(4), July 2016. doi: 10.1145/2897824.2925887
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..07aeed49ed27f9df30a0cf72583ab7b7410ebd04
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/sr89orrDo-o/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,401 @@
+§ ANIMATIONPAK: PACKING ELEMENTS WITH SCRIPTED ANIMATIONS
+
+Reza Adhitya Saputra*
+
+University of Waterloo
+
+Craig S. Kaplan†
+
+University of Waterloo
+
+Paul Asente ${}^{ \ddagger }$
+
+Adobe Research
+
+Figure 1: (a) Input animated elements, each with its own animation: swimming penguins, swimming sharks and fish, Pac-Man fish that open or close their mouths, and rotating stars. (b-e) four selected frames from an animated packing.
+
+§ ABSTRACT
+
+We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.
+
+Index Terms: I.3.3 [Computing Methodologies]: Computer Graphics—Picture/Image Generation; I.3.m [Computing Methodologies]: Computer Graphics—Animation
+
+§ 1 INTRODUCTION
+
+A decorative packing is a composition created by arranging two-dimensional shapes called elements within a larger region called a container. Packings are popular in graphic design, and are used frequently in advertising and product packaging.
+
+At a high level, packings can communicate a relationship between a whole and the parts that make it up. Consider for example the logo of the 2018 SIGGRAPH conference, shown inset. The 2018 logo surrounds the main logo of the SIGGRAPH organization with a ring of small icons depicting computer graphics themes.
+
+At a lower level, packings must be attractive compositions, which balance the shapes of the elements with the empty space between them, known as the negative space. In particular, negative space should be distributed as evenly as possible, leading to roughly constant-width "grout" between elements.
+
+Recently, Saputra et al. presented RepulsionPak [31], a deformation-driven packing method inspired by physical simulation techniques. In RepulsionPak, small elements are placed within a fixed container shape. As they grow, they interact with each other and the container boundary, inducing forces that translate, rotate, and deform elements. The motion and deformation of the elements allows them to achieve a physical equilibrium with an even distribution of negative space.
+
+Inspired by RepulsionPak, we investigate a physics-based packing method for elements with scripted animations. An element can have an animated deformation, such as a bird flapping its wings or a fish flicking its tail. It can also have an animated transformation, giving a changing position, size, and orientation within the container. Our goal is to produce an animated packing, with elements playing out their animations while simultaneously filling the container shape evenly. A successful animated packing should balance among the evenness of the negative space, the preservation of element shapes, and the comprehensibility of their scripted animations.
+
+In our technique, called AnimationPak, we consider an animated element to be a geometric extrusion along a time axis, a three-dimensional object that we call a "spacetime element". We use a three-dimensional physical simulation similar to RepulsionPak to pack spacetime elements into a volume created by extruding a static container shape. The animated packing emerges from this three-dimensional volume by rendering cross sections perpendicular to the time axis. Our time axis behaves differently from a third spatial dimension. Although the cross sections of a spacetime element can drift from their original positions on the time axis, they must remain ordered monotonically. Furthermore, each individual cross section must remain flat in time, so that all of its 2D points occur simultaneously.
+
+Animated packings are a largely unexplored style of motion graphics, presumably because of the difficulty of creating an animated packing by hand. We were not able to find any motivating examples created by artists. There is also very little past research on animated packings; we discuss the work that does exist in the next section.
+
+§ 2 RELATED WORK
+
+Packings and mosaics: Researchers have explored many approaches to creating 2D packings and simulated mosaics, including using Centroidal Area Voronoi Diagrams (CAVDs) to position elements [15, 16, 33], spectral approaches to create even negative space [10], energy minimization [23], and shape descriptors [25]. Several approaches extend 2D packing methods to meet the challenges of placing elements on the surfaces of 3D objects [6, 7, 19, 37].
+
+*e-mail: radhitya@uwaterloo.ca
+
+${}^{ \dagger }$ e-mail: csk@uwaterloo.ca
+
+${}^{ \ddagger }$ e-mail: asente@adobe.com
+
+Graphics Interface Conference 2020, 28-29 May. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+Approaches that work with a smaller library of elements but allow them to deform are particularly relevant to AnimationPak. Xu and Kaplan [36] and Zou et al. [38] developed packing methods that construct calligrams inside containers by allowing significant deformation of letterforms. Saputra et al. presented FLOWPAK [32], which deformed long, thin elements along user-defined vector fields. RepulsionPak [30, 31] deformed elements using mass-spring systems and repulsion forces to create compatibilities between element boundaries.
+
+Animated packings and tilings: Animosaics by Smith et al. [33] constructed animations in which static elements without scripted animations follow the motion of an animated container. Elements are placed using CAVDs, and advected frame-to-frame using a choice of methods motivated by Gestalt grouping principles. As the container's area changes, elements are added and removed as needed, while attempting to maximize overall temporal coherence. Dalal et al. [10] showed how the spectral approach they introduced for 2D packings could be extended to pack animated elements in a static container. Like us, they recast the problem in terms of three-dimensional spacetime; they compute optimal element placement using discrete samples over time and orientation. However, their spacetime elements have fixed shapes and are made to fit together using only translation and rotation, limiting their ability to consume the container's negative space.
+
+Liu and Veksler created animated decorative mosaics from video input [26]. Their technique combines vision-based motion segmentation with a packing step similar to Animosaics. Kang et al. [21] extracted edges from video and then oriented rectangular tesserae relative to edge directions.
+
+Kaplan [22] explored animations of simple tilings of the plane from copies of a single shape. Elements in a tiling fit together by construction, and therefore always consume all the negative space in the animation.
+
+3D packings: AnimationPak fills a 3D container with 3D elements, and is therefore related to other work on constructing freeform 3D packings. Gal et al. [13] presented a method for constructing 3D collages reminiscent of portrait paintings by Arcimboldo. They filled a 3D container with overlapping 3D elements using a greedy approach and a partial shape matching algorithm. Marco [1] decomposed a 3D model into parts that pack tightly into a small build volume, allowing it to be 3D printed with less waste material and packed into a smaller box. Ma et al. [28] developed a heuristic method to create 3D packings that are overlap free. Other work has experimented with example-based packing of 3D volumes [27], or optimized placement based on user interaction [18].
+
+Derived animations: AnimationPak falls into the category of systems that create a derived animation based on some input animation. This problem, which requires preserving the visual character of the input, is a longstanding one in computer graphics research. Spacetime constraints [9, 34] allow an animator to specify an object's constraints and goals, and then calculate the object's trajectory via spacetime optimization. Motion warping [35] is a method that deforms an existing motion curve to meet user-specified constraints. Gleicher [14] developed a motion path editing method that allows users to modify the travel path of a walking character. Bruderlin and Williams [4] used signal processing techniques to modify motion curves. Carra et al. [5] presented a timeslice grammar to procedurally animate a large number of objects.
+
+Previous work has also investigated geometric deformation of animations. Edmond et al. [17] encoded spatial joint relationships using tetrahedral meshes, and applied as-rigid-as-possible shape deformation to the mesh to retarget animation to new characters. Choi et al. [8] developed a method to deform character motion to allow characters to navigate tight passages. Oshita [29] developed a motion editing tool that deformed 3D lattice proxies of a character's joints. Dalstein et al. [11] presented a data structure to animate vector graphics with complex topological changes. Kim et al. [24] explored a packing algorithm to avoid collisions in a crowd of moving characters. They defined a motion patch containing temporal trajectories of interacting characters, and arranged deformed patches to prevent collisions between characters.
+
+§ 3 ANIMATED ELEMENTS
+
+The input to AnimationPak is a library of animated elements and a fixed container shape. AnimationPak currently supports two kinds of animation: the user can animate the shape of each individual element and can also give elements trajectories that animate their position within the container. This section explains how we animate the element shapes using as-rigid-as-possible deformation, and then construct spacetime-extruded objects that form the basis of our packing algorithm. These elements animate "in place": they change shape without translating. The next section describes how these elements can be given transformation trajectories within the container. The size and orientation of an element can be animated in either way: they can be specified as part of the animation of the element's shape, or as part of the transformation trajectory.
+
+§ 3.1 SPACETIME EXTRUSION
+
+Each element begins life as a static shape defined using vector paths. Following RepulsionPak, we construct a discrete geometric proxy of the element that will interact with other proxies in a physical simulation. The construction of this proxy for a single shape is shown in Fig. 2, and the individual steps are explained in greater detail below.
+
+In order to produce a packing with an even distribution of negative space, we first offset the shape’s paths by a distance ${\Delta s}$ , leaving the shape surrounded by a channel of negative space (Fig. 2a). In our system we scale the shape to fit a unit square and set ${\Delta s} = {0.04}$ .
+
+Next, we place evenly-spaced samples around the outer boundary of the offset path and construct a Delaunay triangulation of the samples (Fig. 2b). As in RepulsionPak, we will later treat the edges of the triangulation as springs, allowing the element to deform in response to forces in the simulation. We also follow RepulsionPak by adding extra edges to prevent folding or self-overlaps during simulation (Fig. 2c). First, if two triangles ${ABC}$ and ${BCD}$ share edge ${BC}$ , then we add a shear edge connecting $A$ and $D$ . Second, we triangulate the negative space inside the convex hull of the original Delaunay triangulation, and create new negative space edges corresponding to the newly created triangulation edges. These negative space edges are used exclusively for internal bracing. The element's concavities can still be occupied by its neighbours.
+
+We refer to the augmented triangulation shown in Fig. 2c as a slice. The entire spacetime packing process operates on slices. However, we will eventually need to compute deformed copies of the element's original vector paths when rendering a final animation (Sect. 6). To that end, we re-express all path information relative to the slice triangulation: every path control point is represented using barycentric coordinates within one triangle.
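As a concrete sketch of this re-expression, the barycentric coordinates of a 2D control point within one slice triangle can be computed as follows (a hypothetical helper, not the paper's code):

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates (l1, l2, l3) of 2D point p in triangle abc,
    # so a path control point can be stored relative to one triangle of
    # the slice triangulation and recovered after deformation.
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return (l1, l2, 1.0 - l1 - l2)
```

Applying the stored coordinates to the deformed triangle's vertices reconstructs the control point in the deformed slice.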
+
+To extend the element into the time dimension, we now position evenly-spaced copies of the slice along the time axis. Assuming that the animation will run over the time interval $\left\lbrack {0,1}\right\rbrack$ , we choose a number of slices ${n}_{s}$ and place slices $\left\{ {{s}_{1},\ldots ,{s}_{{n}_{s}}}\right\}$ , with slice ${s}_{i}$ being placed at time $\left( {i - 1}\right) /\left( {{n}_{s} - 1}\right)$ . Higher temporal resolution will produce a smoother final animation at the expense of more computation. In our examples, we set ${n}_{s} = {100}$ . Fig. 2d shows a set of time slices, with ${n}_{s} = 5$ for visualization purposes.
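This placement amounts to uniformly sampling the interval [0, 1]; a minimal sketch (hypothetical helper name):

```python
def slice_times(n_s):
    # Slice s_i is placed at time (i - 1) / (n_s - 1) for i = 1..n_s,
    # so n_s evenly spaced slices span the animation interval [0, 1].
    return [(i - 1) / (n_s - 1) for i in range(1, n_s + 1)]
```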
+
+To complete the construction of a spacetime element without animation, we stitch the slices together into a single 3D object. Let ${s}_{j}$ and ${s}_{j + 1}$ be consecutive slices constructed above. The outer boundaries of the element triangulations are congruent polygons offset in the time axis. We stitch the two polygons together using a new set of time edges: if ${AB}$ is an edge on the boundary of ${s}_{j}$ and ${CD}$ is the corresponding edge on the boundary of ${s}_{j + 1}$ , then we add time edges ${AC},{AD}$ , and ${BC}$ . During simulation, time edges will transmit forces backwards and forwards in time, maintaining temporal coherence by smoothing out deformation and transformations. Fig. 2e shows time edges for ${n}_{s} = 5$ .
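The stitching rule can be sketched as follows; the `(slice, vertex)` pair encoding is an assumption, since the paper does not specify a data layout:

```python
def time_edges(boundary, n_s):
    # boundary: vertex indices around a slice's outer polygon, in order.
    # Vertex k of slice j is encoded as the pair (j, k).
    # For boundary edge AB in slice j and the corresponding edge CD in
    # slice j + 1, add time edges AC, AD, and BC.
    m = len(boundary)
    edges = []
    for j in range(n_s - 1):
        for k in range(m):
            a = (j, boundary[k])
            b = (j, boundary[(k + 1) % m])
            c = (j + 1, boundary[k])
            d = (j + 1, boundary[(k + 1) % m])
            edges += [(a, c), (a, d), (b, c)]
    return edges
```

Each consecutive slice pair contributes three time edges per boundary edge.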
+
+Figure 2: The creation of a discretized spacetime element. (a) A 2D element shape offset by ${\Delta s}$ . (b) A single triangle mesh slice. (c) Shear edges (red) and negative space edges (dashed blue). (d) A set of five slices placed along the time axis. (e) The vertices on the boundaries of the slices are joined by time edges. The black edges in (e) define a triangle mesh called the envelope of the element. In practice we use a larger number of slices in (d) and (e).
+
+Figure 3: A spacetime element with a scripted animation.
+
+§ 3.2 ANIMATION
+
+The 3D information constructed above is a parallel extrusion of a slice along the time axis, representing a shape with no scripted animation. We created a simple interactive application for adding animation to spacetime elements, inspired by as-rigid-as-possible shape manipulation [20]. The artist first designates a subset of the slices as keyframes. They can then interactively manipulate any triangulation vertex of a keyframe slice. Any vertex that has been positioned manually has its entire trajectory through the animation computed using spline interpolation. Then, at any other slice, the positions of all other vertices can be interpolated using the as-rigid-as-possible technique. The result is a smoothly animated spacetime volume like the one visualized in Fig. 3.
+
+Unlike data-driven packing methods like PAD [25], methods that allow distortions do not require a large library of distinct elements to generate successful packings. The results in this paper all use fewer than ten input elements, and some use only one. The physical simulation induces deformation to enhance the compatibility of nearby shapes in the final animation.
+
+§ 4 INITIAL CONFIGURATION
+
+We begin the packing process by constructing a 3D spacetime volume for the container by extruding its static shape in the time direction. The container is permitted to have internal holes, which are also extruded. The resulting volume is scaled to fit a unit cube. We also shrink each of the spacetime elements, in the spatial dimensions only, to 5-10% of its original size. These shrunken elements are thin enough that we can place them in the container without overlaps.
+
+Figure 4: A 2D illustration of a guided element. Slices are depicted as black lines and slice vertices as black dots. A spring connects the centermost vertex $\mathbf{x}$ of a slice $s$ to a target point $p$ . (a) The initial shape of a guided element is a polygonal extrusion. (b) The spacetime element deforms but the springs pull it back towards the target points.
+
+The artist can optionally specify trajectories for a subset of the elements, which we call guided elements. A guided element attempts to pass through a sequence of fixed target points in the container, imbuing the animation with a degree of intention and narrative structure. To define a guided element, we designate the triangulation vertex closest to its centroid to be the anchor point for the element. The artist then chooses a set of spacetime target points ${\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{n}$ , with ${\mathbf{p}}_{i} = \left( {{x}_{i},{y}_{i},{t}_{i}}\right)$ , that the anchor should pass through during the animation. In our interface, the artist uses a slider to choose the time ${t}_{i}$ for a target point, and clicks in the container to specify the spatial position $\left( {{x}_{i},{y}_{i}}\right)$ . The artist can also optionally specify scale and orientation at the target points. We require ${t}_{1} = 0$ and ${t}_{n} = 1$ , fixing the initial and final positions of the guided element. We then linearly interpolate the anchor position for each slice based on the target points, and translate the slice so that its anchor lies at the desired position. The red extrusions in Fig. 5a are guided elements.
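The per-slice anchor interpolation can be sketched as follows (hypothetical helper; optional scale and orientation targets omitted):

```python
def anchor_position(targets, t):
    # targets: spacetime target points (x_i, y_i, t_i), sorted by t_i,
    # with t_1 = 0 and t_n = 1. Returns the linearly interpolated (x, y)
    # anchor position for a slice at time t.
    for (x0, y0, t0), (x1, y1, t1) in zip(targets, targets[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))
    raise ValueError("t must lie in [0, 1]")
```

Each slice is then translated so its anchor vertex lies at the interpolated position.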
+
+If the artist wishes to create a looping animation, the $\left( {{x}_{i},{y}_{i}}\right)$ position for target points ${\mathbf{p}}_{1}$ and ${\mathbf{p}}_{n}$ must match up, either for a single guided element or across elements. In Fig. 5 the two guided elements form a connected loop; $\left( {{x}_{1},{y}_{1}}\right)$ for each one matches $\left( {{x}_{n},{y}_{n}}\right)$ for the other.
+
+In this initial configuration, the guided elements abruptly change direction at target points. However, because the slices are connected by springs, the trajectories will smooth out as the simulation runs. Also, the simulation is not constrained to reach each target position exactly. Instead, we attach the anchor to the target using a target-point spring that attempts to draw the element towards it while balancing against the other physical forces in play (Fig. 4b). The strength of these springs determines how closely the element will follow the trajectory.
+
+Figure 5: The simulation process. (a) Initial placement of shrunken spacetime elements inside a static 2D disc, extruded into a cylindrical spacetime domain. Guided elements are shown in red and unguided elements in blue. (b) A physics simulation causes the spacetime elements to bend. They also grow gradually. (c) The spacetime elements occupy the container space. (d) The simulation stops when elements do not have sufficient negative space in which to grow, or have reached their target sizes.
+
+Figure 6: Repulsion forces applied to a vertex $\mathbf{x}$ , allowing the element to deform and move away from a neighbouring element.
+
+We then seed the container with an initial packing of non-guided spacetime elements. We generate points within the container at random, using blue-noise sampling [2] to prevent points from being too close together, and assign a spacetime element to each seed point, selecting elements randomly from the input library. Depending upon the desired effect, we either randomize their orientations or give them preferred orientations. We reject any candidate seed point that would cause an unguided element's volume to intersect a guided element's volume.
+
+Finally we shrink each element, guided and unguided, uniformly in the spatial dimension towards its centroid. These shrunken elements are guaranteed not to intersect one another; as the simulation runs, they will grow and consume the container's negative space, while avoiding collisions. The blue extrusions in Fig. 5a show an initial placement of spacetime elements.
+
+§ 5 SIMULATION
+
+We now perform a physics simulation on the spacetime elements and the container. Elements are subject to a number of forces that cause them to simultaneously grow, deform, and repel each other (Fig. 5). Our physics simulation is very similar to that of RepulsionPak [30]: with the exception of the new temporal force, all our forces are the spacetime analogues of the ones used there. In Sect. 5.2 we introduce some new hard constraints that must be applied after every time step.
+
+Note that we must distinguish two notions of time in this simulation. We use $t$ to refer to the time axis of our spacetime volume, which will become the time dimension of the final animation, and ${t}_{\text{ sim }}$ to refer to the time domain in which the simulation is taking place.
+
+Repulsion Forces allow elements to push away vertices of neighbouring elements, inducing deformations and transformations that lead to an even distribution of elements within the container (Fig. 6). We compute the repulsion force ${\mathbf{F}}_{\mathrm{{rpl}}}$ on a vertex $\mathbf{x}$ located on a slice boundary as:
+
+$$
+{\mathbf{F}}_{\mathrm{{rpl}}} = {k}_{\mathrm{{rpl}}}\mathop{\sum }\limits_{{i = 1}}^{n}\frac{\mathbf{u}}{\parallel \mathbf{u}\parallel }\frac{1}{\epsilon + \parallel \mathbf{u}{\parallel }^{2}} \tag{1}
+$$
+
+where
+
+${k}_{\mathrm{{rpl}}}$ is the relative strength of ${\mathbf{F}}_{\mathrm{{rpl}}}$ . We set ${k}_{\mathrm{{rpl}}} = {10}$ ;
+
+$n$ is the number of nearest points to $\mathbf{x}$ ;
+
+${\mathbf{x}}_{\mathbf{i}}$ is the $i$ -th closest point on the neighboring element surfaces;
+
+$\mathbf{u} = \mathbf{x} - {\mathbf{x}}_{\mathbf{i}}$ ; and
+
+$\epsilon$ is a soft parameter to avoid instability when $\parallel \mathbf{u}\parallel$ is small. We set $\epsilon = 1$ .
+
+Since the simulation operates in the spacetime domain, vertex $\mathbf{x}$ accumulates repulsion forces from points at various time positions. To locate the nearest points on neighbouring elements, we use a collision grid data structure, described in greater detail in Sect. 5.1.
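A direct reading of Eq. (1) can be sketched as follows; the list of nearest points is assumed to come from the collision grid of Sect. 5.1:

```python
import math

def repulsion_force(x, nearest, k_rpl=10.0, eps=1.0):
    # Eq. (1): F_rpl = k_rpl * sum_i (u / ||u||) * 1 / (eps + ||u||^2),
    # where u = x - x_i and x_i is the i-th nearest point on neighbouring
    # element surfaces. Points are 3D spacetime tuples.
    f = [0.0, 0.0, 0.0]
    for xi in nearest:
        u = [a - b for a, b in zip(x, xi)]
        n = math.sqrt(sum(c * c for c in u))
        if n == 0.0:
            continue  # coincident point: direction undefined, skip it
        s = 1.0 / (n * (eps + n * n))
        for j in range(3):
            f[j] += u[j] * s
    return tuple(k_rpl * c for c in f)
```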
+
+Edge Forces allow elements to deform in response to repulsion forces. The edges defined in Sect. 3 are used here as springs. Like RepulsionPak, we use a non-physical quadratic spring force. Let ${\mathbf{x}}_{\mathbf{a}}$ and ${\mathbf{x}}_{\mathbf{b}}$ be vertices connected by a spring. Each vertex experiences an edge force ${\mathbf{F}}_{\text{ edg }}$ of
+
+$$
+{\mathbf{F}}_{\text{ edg }} = {k}_{\text{ edg }}\frac{\mathbf{u}}{\parallel \mathbf{u}\parallel }s{\left( \parallel \mathbf{u}\parallel - \ell \right) }^{2} \tag{2}
+$$
+
+where
+
+${k}_{\text{ edg }}$ is the relative strength of ${\mathbf{F}}_{\text{ edg }}$ . Different classes of spring will have different ${k}_{\text{ edg }}$ values;
+
+$\mathbf{u} = {\mathbf{x}}_{\mathbf{b}} - {\mathbf{x}}_{\mathbf{a}};$
+
+$\ell$ is the rest length of the spring; and
+
+$s$ is +1 or -1, according to whether $\left( {\parallel \mathbf{u}\parallel - \ell }\right)$ is positive or negative.
+
+We have five types of springs, with stiffness constants that can be set independently. In our implementation we set ${k}_{\text{ edg }}$ to 0.01 for time springs, 0.1 for negative-space springs, and 10 for edge springs, shear springs, and target point springs.
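Eq. (2) can be implemented per spring as follows (hypothetical helper; tuples stand in for 3D spacetime vertices):

```python
import math

def edge_force(xa, xb, rest, k_edg):
    # Eq. (2): non-physical quadratic spring force on vertex xa, directed
    # along u = xb - xa, with sign s = +1 if the spring is stretched
    # (||u|| > rest length) and -1 if compressed.
    u = [b - a for a, b in zip(xa, xb)]
    n = math.sqrt(sum(c * c for c in u))
    s = 1.0 if n >= rest else -1.0
    mag = k_edg * s * (n - rest) ** 2
    return tuple(mag * c / n for c in u)
```

The same routine serves all five spring classes; only `k_edg` and the rest length differ.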
+
+Overlap forces resolve cases where a vertex penetrates a neighbouring spacetime element. Overlaps can occur later in the simulation, when negative space is limited. Once we detect a penetration, we temporarily disable the repulsion force on vertex $\mathbf{x}$ , and apply an overlap force ${\mathbf{F}}_{\text{ ovr }}$ to push it out:
+
+$$
+{\mathbf{F}}_{\text{ ovr }} = {k}_{\text{ ovr }}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {{\mathbf{p}}_{i} - \mathbf{x}}\right) \tag{3}
+$$
+
+Figure 7: An illustration of the temporal force. The vertices in slice ${s}_{i}$ are drawn back towards time $t$ .
+
+where
+
+${k}_{\text{ ovr }}$ is the relative strength of ${\mathbf{F}}_{\text{ ovr }}$ . We set ${k}_{\text{ ovr }} = 5$ ;
+
+$n$ is the number of slice triangles that have $\mathbf{x}$ as a vertex; and
+
+${\mathbf{p}}_{\mathbf{i}}$ is the centroid of the $i$ -th slice triangle incident on $\mathbf{x}$ .
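Eq. (3) reduces to a sum of offsets towards incident-triangle centroids; a minimal sketch (hypothetical helper):

```python
def overlap_force(x, centroids, k_ovr=5.0):
    # Eq. (3): F_ovr = k_ovr * sum_i (p_i - x), where p_i is the centroid
    # of the i-th slice triangle incident on the penetrating vertex x.
    return tuple(k_ovr * sum(p[j] - x[j] for p in centroids)
                 for j in range(3))
```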
+
+Boundary forces keep vertices inside the container. If an element vertex $\mathbf{x}$ is outside the container, the boundary force ${\mathbf{F}}_{\text{ bdr }}$ moves it towards the closest point on the container's boundary by an amount proportional to the distance to the boundary:
+
+$$
+{\mathbf{F}}_{\mathrm{{bdr}}} = {k}_{\mathrm{{bdr}}}\left( {{\mathbf{p}}_{\mathbf{b}} - \mathbf{x}}\right) \tag{4}
+$$
+
+where
+
+${k}_{\text{ bdr }}$ is the relative strength of ${\mathbf{F}}_{\text{ bdr }}$ . We set ${k}_{\text{ bdr }} = 5$ ; and
+
+${\mathbf{p}}_{\mathbf{b}}$ is the closest point on the target container to $\mathbf{x}$ .
+
+Torsional forces allow an element's slices to be given preferred orientations, to which they attempt to return. Consider a vertex $\mathbf{x}$ of a slice, and let ${\mathbf{c}}_{r}$ be the slice's center of mass in its undeformed state. We define the rest orientation of $\mathbf{x}$ as the orientation of the vector ${\mathbf{u}}_{r} = \mathbf{x} - {\mathbf{c}}_{r}$ . During simulation we compute the current centre of mass $\mathbf{c}$ of the slice and let $\mathbf{u} = \mathbf{x} - \mathbf{c}$ . Then the torsional force ${\mathbf{F}}_{\text{ tor }}$ is
+
+$$
+{\mathbf{F}}_{\text{ tor }} = \left\{ \begin{array}{ll} {k}_{\text{ tor }}{\mathbf{u}}^{ \bot }, & \text{ if }\theta > 0 \\ - {k}_{\text{ tor }}{\mathbf{u}}^{ \bot }, & \text{ if }\theta < 0 \end{array}\right. \tag{5}
+$$
+
+where
+
+${k}_{\text{ tor }}$ is the relative strength of ${\mathbf{F}}_{\text{ tor }}$ . We set ${k}_{\text{ tor }} = {0.1}$ ;
+
+$\theta$ is the signed angle between ${\mathbf{u}}_{r}$ and $\mathbf{u}$ ; and
+
+${\mathbf{u}}^{ \bot }$ is a unit vector rotated ${90}^{ \circ }$ counterclockwise relative to $\mathbf{u}$ .
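One reading of Eq. (5), restricted to a slice plane, can be sketched as follows; taking $\theta$ as the signed angle from $\mathbf{u}$ to ${\mathbf{u}}_{r}$ so the force rotates the vertex back towards its rest orientation is our assumption, since the paper does not spell out the sign convention:

```python
import math

def torsional_force(x, c, x_rest, c_rest, k_tor=0.1):
    # Eq. (5): u_r = x_rest - c_rest is the rest orientation of the vertex,
    # u = x - c its current orientation. theta is the signed angle from u
    # to u_r; the force is +/- k_tor times u rotated 90 degrees CCW.
    ur = (x_rest[0] - c_rest[0], x_rest[1] - c_rest[1])
    u = (x[0] - c[0], x[1] - c[1])
    theta = math.atan2(u[0] * ur[1] - u[1] * ur[0],
                       u[0] * ur[0] + u[1] * ur[1])
    n = math.hypot(u[0], u[1])
    perp = (-u[1] / n, u[0] / n)  # u rotated 90 degrees CCW, unit length
    if theta == 0.0:
        return (0.0, 0.0)
    sign = 1.0 if theta > 0 else -1.0
    return (sign * k_tor * perp[0], sign * k_tor * perp[1])
```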
+
+Temporal forces prevent slices from drifting too far from their original positions along the time axis (Fig. 7), which could cause unexpected accelerations and decelerations in the final animation. For every vertex, we compute the temporal force ${\mathbf{F}}_{\text{ tmp }}$ as
+
+$$
+{\mathbf{F}}_{\mathrm{{tmp}}} = {k}_{\mathrm{{tmp}}}{\mathbf{u}}^{t}\left( {t - {t}^{\prime }}\right) \tag{6}
+$$
+
+where
+
+${k}_{\mathrm{{tmp}}}$ is the relative strength of ${\mathbf{F}}_{\mathrm{{tmp}}}$ . We set ${k}_{\mathrm{{tmp}}} = 1$ ;
+
+$t$ is the initial time of the slice to which the vertex belongs;
+
+${t}^{\prime }$ is the current time value of the vertex; and
+
+${\mathbf{u}}^{t} = \left( {0,0,1}\right)$ .
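Eq. (6) is a one-line computation; a minimal sketch (hypothetical helper):

```python
def temporal_force(t0, t_now, k_tmp=1.0):
    # Eq. (6): F_tmp = k_tmp * u_t * (t - t'), where u_t = (0, 0, 1),
    # t (here t0) is the initial time of the vertex's slice, and
    # t' (here t_now) is the vertex's current time value.
    return (0.0, 0.0, k_tmp * (t0 - t_now))
```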
+
+Computing total force and numerical integration: The total force on a vertex is the sum of all of the individual forces described above:
+
+$$
+{\mathbf{F}}_{\text{ total }} = {\mathbf{F}}_{\mathrm{{rpl}}} + {\mathbf{F}}_{\text{ edg }} + {\mathbf{F}}_{\mathrm{{bdr}}} + {\mathbf{F}}_{\text{ ovr }} + {\mathbf{F}}_{\text{ tor }} + {\mathbf{F}}_{\text{ tmp }} \tag{7}
+$$
+
+Figure 8: (a) The triangles that connect consecutive slices define the envelope of the element. The midpoints of these triangles are stored in a collision grid. (b) A 2D visualization of the region of collision grid cells around a query point $\mathbf{x}$ in which repulsion and overlap forces will be computed. In the central blue region, we check overlaps and compute exact repulsion forces relative to closest points on triangles of neighbouring elements; in the peripheral red region we do not compute overlaps, and repulsion forces are approximated using triangle midpoints only.
+
+We use explicit Euler integration to simulate the motions of the mesh vertices under the forces described above. Every vertex has a position and a velocity vector; in every iteration, we update velocities using forces, and update positions using velocities. These updates are scaled by a time step $\Delta {t}_{\text{ sim }}$ that we set to 0.01. We cap velocities at $10\Delta {t}_{\text{ sim }}$ to dissipate extra energy from the simulation.
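The integration step can be sketched as follows (unit vertex mass is an assumption; the paper does not state masses):

```python
import math

def euler_step(pos, vel, force, dt=0.01, cap_factor=10.0):
    # One explicit Euler iteration: update velocity from force, cap its
    # magnitude at 10 * dt to dissipate extra energy, then update
    # position from the (possibly capped) velocity.
    vel = [v + f * dt for v, f in zip(vel, force)]
    speed = math.sqrt(sum(v * v for v in vel))
    cap = cap_factor * dt
    if speed > cap:
        vel = [v * cap / speed for v in vel]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel
```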
+
+§ 5.1 SPATIAL QUERIES
+
+Repulsion and overlap forces rely on being able to find points on neighbouring elements that are close to a given query vertex. To find these points, we use each element's envelope, a triangle mesh implied by the construction in Sect. 3. Each triangle of the envelope is made from two time edges and one edge of a slice boundary, as shown in Fig. 8a. Given a query vertex $\mathbf{x}$ , we need to find nearby envelope triangles that belong to other elements.
+
+To accelerate this computation, we first find and store the centroids of every element's envelope triangles in a uniformly subdivided 3D grid that surrounds the spacetime volume of the animation. In using this data structure, we make two simplifying assumptions; first, that because envelope triangles are small, their centroids are adequate for finding triangles near a given query point; and second, that the repulsion force from a more distant triangle is well approximated by a force from its centroid.
+
+Given a query vertex $\mathbf{x}$ , we first find all envelope triangle centroids in nearby grid cells that belong to other elements. For each centroid, we use a method described by Ericson [12] to find the point on its triangle closest to $\mathbf{x}$ and include that point in the list of points in Eq. (1). These nearby triangles will also be used to test for interpenetration of elements. We then find centroids in more distant grid cells, and add those centroids directly to the Eq. (1) list, skipping the closest point computation. In our system we set the cell size to 0.04, giving a ${25} \times {25} \times {25}$ grid around the simulation volume. A query point's nearby grid cells are the 27 cells making up a $3 \times 3 \times 3$ block around the cell containing the point; the more distant cells are the 98 that make up the outer shell of the $5 \times 5 \times 5$ block around that (Fig. 8).
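The two rings of cells around a query point can be enumerated directly from the grid geometry; a minimal sketch under the paper's stated cell size (helper names are ours):

```python
from itertools import product

CELL_SIZE = 0.04  # gives a 25 x 25 x 25 grid over the unit spacetime volume

def cell_of(p, cell_size=CELL_SIZE):
    """Grid cell containing a point in the normalized [0, 1]^3 volume."""
    return tuple(int(c / cell_size) for c in p)

def near_and_distant_cells(cell):
    """Near cells: the 3x3x3 block (27 cells) around the query cell, where
    exact closest-point repulsion and overlap tests are done. Distant cells:
    the outer shell of the 5x5x5 block (125 - 27 = 98 cells), where triangle
    centroids approximate the repulsion sources."""
    cx, cy, cz = cell
    block5 = {(cx + dx, cy + dy, cz + dz)
              for dx, dy, dz in product(range(-2, 3), repeat=3)}
    near = {(cx + dx, cy + dy, cz + dz)
            for dx, dy, dz in product(range(-1, 2), repeat=3)}
    return near, block5 - near
```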
+
+§ 5.2 SLICE CONSTRAINTS
+
+There are three hard geometric constraints on the configuration of slices, which must be enforced throughout the simulation. Each of the following constraints is reapplied after each physical simulation step described above.
+
+
+Figure 9: (a) End-to-end constraint: slices ${s}_{1}$ and ${s}_{n}$ , located at $t = 0$ and $t = 1$ , should never change their $t$ positions but can change their $x,y$ positions. (b) Simultaneity constraint: all vertices on the same slice should have the same $t$ position. (c) Loop constraint with a single element: the $x,y$ positions for ${s}_{1}$ and ${s}_{n}$ must match. (d) Loop constraint with two elements: the $x,y$ position of ${s}_{1}$ for one element matches the $x,y$ position of ${s}_{n}$ of the other.
+
+1. End-to-end constraint: A spacetime element must be present for the full length of the animation from $t = 0$ to $t = 1$ . After every simulation step, every vertex belonging to an element's first slice has its $t$ value set to 0, and every vertex of the last slice has its $t$ value set to 1 (Fig. 9a).
+
+2. Simultaneity constraint: During simulation, the vertices of a slice can drift away from each other in time, which could lead to rendering artifacts in the animation. After every simulation step, we compute the average $t$ value of all vertices belonging to each slice other than the first and last slices, and snap all the slice’s vertices to that $t$ value (Fig. 9b).
+
+3. Loop constraint: AnimationPak optionally supports looping animations. When looping is enabled, we must ensure that the $t = 0$ and $t = 1$ planes of the spacetime container are identical. The $t = 1$ slice of every element ${e}_{1}$ must then coincide with the $t = 0$ slice of some element ${e}_{2}$ . We can have ${e}_{1} = {e}_{2}$ (Fig. 9c), but more general loops are possible in which the elements arrive at a permutation of their original configuration (Fig. 9d). We require only that there is a one-to-one correspondence between the vertices of the $t = 1$ slice of ${e}_{1}$ and the $t = 0$ slice of ${e}_{2}$ . If ${\mathbf{p}}_{1} = \left( {{x}_{1},{y}_{1},1}\right) \in {e}_{1}$ and ${\mathbf{p}}_{2} = \left( {{x}_{2},{y}_{2},0}\right) \in {e}_{2}$ are in correspondence, then after every simulation step we move ${\mathbf{p}}_{1}$ to $\left( {\frac{{x}_{1} + {x}_{2}}{2},\frac{{y}_{1} + {y}_{2}}{2},1}\right)$ and ${\mathbf{p}}_{2}$ to $\left( {\frac{{x}_{1} + {x}_{2}}{2},\frac{{y}_{1} + {y}_{2}}{2},0}\right)$ .
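The three constraints are simple projections applied after each simulation step. A minimal sketch, with each slice stored as an (n, 3) NumPy array of (x, y, t) vertices (the data layout and names are our assumptions):

```python
import numpy as np

def apply_slice_constraints(slices, looping=False, loop_partner=None):
    """Project an element's slices (each an (n, 3) array of (x, y, t)
    vertices, mutated in place) back onto the hard constraints."""
    slices[0][:, 2] = 0.0       # end-to-end: first slice pinned to t = 0
    slices[-1][:, 2] = 1.0      # end-to-end: last slice pinned to t = 1
    for s in slices[1:-1]:      # simultaneity: snap interior slices to mean t
        s[:, 2] = s[:, 2].mean()
    if looping and loop_partner is not None:
        # loop: average the matching (x, y) of this element's t = 1 slice
        # with the partner element's t = 0 slice (the partner may be the
        # same element, as in Fig. 9c)
        avg = 0.5 * (slices[-1][:, :2] + loop_partner[0][:, :2])
        slices[-1][:, :2] = avg
        loop_partner[0][:, :2] = avg
    return slices
```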
+
+
+Figure 10: A spacetime element shown (a) shrunken at the beginning of the simulation, and (b) grown later in the simulation. (c) When two elements overlap somewhere along their lengths, they are temporarily prohibited from growing there.
+
+§ 5.3 ELEMENT GROWTH AND STOPPING CRITERIA
+
+We begin the spacetime packing process with all element slices scaled down in $x$ and $y$ , guaranteeing that elements do not overlap. As the simulation progresses we gradually grow the slices, consuming the negative space around them (Fig. 10a, b). A perfect packing would fill the spacetime container completely with the elements. Because each element wraps the underlying animated shape with a narrow channel of negative space, this would yield an even distribution of shapes in the resulting animation. For real-world elements, the goal of minimizing deformation of irregular element shapes will lead to imperfect packings with additional pockets of negative space.
+
+Element growth: We induce elements to grow spatially by gradually increasing the rest lengths of their springs. The initial rest length of each spring is determined by the vertex positions in the shrunken version of the spacetime element constructed in Sect. 4. We allow an element's slices to grow independently of each other, which complicates the calculation of new rest lengths for time springs. Therefore, we create a duplicate of every shrunken spacetime element in the container, with a straight extrusion for unguided elements, and a polygonal extrusion for guided elements. This duplicate is not part of the simulation; it serves as a reference. Every element slice maintains a current scaling factor $g$ . When we wish to grow the slice, we increase its $g$ value. We can compute new rest lengths for all springs by scaling every slice of the reference element by a factor of $g$ relative to the slice’s centroid, and measuring distances between the scaled vertex positions. These new rest lengths are then used as the $\ell$ values in Equation 2.
+
+Every element slice has its $g$ value initialized to 1. After every simulation step, if none of the slice's vertices were found to overlap other elements, we increase that slice's $g$ by ${0.001\Delta }{t}_{\text{sim}}$ , where $\Delta {t}_{\text{sim}}$ is the simulation time step. If any overlaps are found, then that slice's growth is instead paused to allow overlap and repulsion forces to give it more room to grow in later iterations. This approach can cause elements to fluctuate in size during the course of an animation, as slices compete for shifting negative space (Fig. 10).
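Growth bookkeeping then reduces to per-slice scale factors plus rest lengths measured on the scaled reference element; a sketch (names and data layout are illustrative):

```python
import numpy as np

def grow_slices(g_values, overlapped, dt_sim=0.01, rate=0.001):
    """Increase each slice's scale factor g by rate * dt_sim, pausing
    growth for any slice whose vertices overlapped a neighbour."""
    return [g if hit else g + rate * dt_sim
            for g, hit in zip(g_values, overlapped)]

def spring_rest_length(ref_a, ref_b, centroid_a, centroid_b, g_a, g_b):
    """New rest length for a spring: scale each endpoint of the reference
    element by its slice's g about that slice's centroid, then measure the
    distance; these become the l values of Eq. (2)."""
    pa = np.asarray(centroid_a) + g_a * (np.asarray(ref_a) - np.asarray(centroid_a))
    pb = np.asarray(centroid_b) + g_b * (np.asarray(ref_b) - np.asarray(centroid_b))
    return float(np.linalg.norm(pa - pb))
```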
+
+Stopping Criteria: We halt the simulation when the space between neighbouring elements drops below a threshold. When calculating repulsion forces, we find the distance from every slice vertex to the closest point in a neighbouring element. The minimum of these distances over all vertices in an element slice determines that slice's closest distance to neighbouring elements. We halt the simulation when the maximum per-slice distance falls below 0.006 (relative to a normalized container size of 1 ). That is, we stop when every slice is touching (or nearly touching) at least one other element.
+
+In some cases it can be useful to stop early based on cumulative element growth. In that case, we set a separate threshold for the slice scaling factors $g$ described above, and stop when the $g$ values of all slices exceed that threshold.
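Both stopping rules are one-liners over per-slice quantities; a sketch with the thresholds given above:

```python
def should_stop(per_slice_closest_dist, gap_threshold=0.006):
    """Halt when every slice is touching (or nearly touching) a neighbour:
    the largest per-slice closest-neighbour distance is below the threshold
    (container size normalized to 1)."""
    return max(per_slice_closest_dist) < gap_threshold

def stop_on_growth(g_values, g_threshold):
    """Optional early stop based on cumulative growth: all slice scale
    factors g exceed a separate threshold."""
    return all(g > g_threshold for g in g_values)
```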
+
+§ 6 RENDERING
+
+The result of the simulation described above is a packing of spacetime elements within a spacetime container. We can render an animation frame-by-frame by cutting through this volume at evenly spaced $t$ values from $t = 0$ to $t = 1$ . For our results, we typically render 500-frame animations.
+
+During simulation, a given spacetime element's slices may drift from their original creation times. However, time springs keep the sequence monotonic, and the simultaneity constraint ensures that every slice is fixed to one $t$ value. To render this element at an arbitrary frame time ${t}_{f} \in \left\lbrack {0,1}\right\rbrack$ , we find the two consecutive slices whose time values bound the interval containing ${t}_{f}$ and linearly interpolate the vertex positions of the triangulations at those two slices to obtain a new triangulation at ${t}_{f}$ . We can then compute a deformed copy of the original element paths by "replaying" the barycentric coordinates computed in Sect. 3 relative to the displaced triangulation vertices. We repeat this process for every spacetime element to obtain a rendering of the frame at ${t}_{f}$ .
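The per-element interpolation step can be sketched as follows, assuming slice times are kept sorted (monotonicity is maintained by the time springs); names are ours:

```python
import numpy as np

def interpolate_slice(slice_times, slice_verts, t_f):
    """Find the two consecutive slices whose time values bound t_f and
    linearly interpolate their vertex positions."""
    i = int(np.searchsorted(slice_times, t_f, side="right")) - 1
    i = min(max(i, 0), len(slice_times) - 2)
    t0, t1 = slice_times[i], slice_times[i + 1]
    w = 0.0 if t1 == t0 else (t_f - t0) / (t1 - t0)
    return (1 - w) * slice_verts[i] + w * slice_verts[i + 1]
```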
+
+This interpolation process can occasionally lead to small artifacts in the animation. A rendered frame can fall between the discretely sampled slices for two elements at an intermediate time where physical forces were not computed explicitly. It is therefore possible for neighbouring elements to overlap briefly during such intervals.
+
+§ 7 IMPLEMENTATION AND RESULTS
+
+The core AnimationPak algorithm consists of a C++ program that reads in text files describing the spacetime elements and the container, and outputs raster images of animation frames.
+
+Large parts of AnimationPak can benefit from parallelism. In our implementation we update the cells of the collision grid (Sect. 5.1) in parallel by distributing them across a pool of threads. When the updated collision grid is ready, we distribute the spacetime elements over threads. We calculate forces, perform numerical integration, and apply the end-to-end and simultaneity constraints for each element in parallel. We must process any loop constraints afterwards, as they can affect vertices in two separate elements.
+
+We created the results in this paper using a Windows PC with a ${3.60}\mathrm{{GHz}}$ Intel i7-4790 processor and ${16}\mathrm{{GB}}$ of RAM. We used a pool of eight threads, corresponding to the number of logical CPU cores. Table 1 shows statistics for our results. Each packing has tens of thousands of vertices and hundreds of thousands of springs, and requires about an hour to complete. We enable the loop constraint in all results. The paper shows selected frames from the results; see the accompanying videos for full animations.
+
+Fig. 1 is an animation of aquatic fauna featuring two penguins as guided elements. During one loop the penguins move clockwise around the container, swapping positions at the top and the bottom. Each ends at the other's starting point, demonstrating a loop constraint between distinct elements. All elements are animated, as shown in Fig. 1a. Note the coupling between the Pac-Man fish's mouth and the shark's tail on the left side of the second and fourth frames.
+
+A snake chases a bird around an annular container in Fig. 11, demonstrating a container with a hole and giving a simple example of the narrative potential of animated packings. Fig. 12 animates the giraffe-to-penguin illusion shown as a static packing in Repulsion-Pak. This example uses torsional forces to control slice orientations.
+
+Fig. 13 offers a direct comparison between packings computed using Centroidal Area Voronoi Diagrams (CAVD) [33], the spectral approach [10], and AnimationPak. These packings use stars that rotate and pulsate. For each method we show the initial frame $\left( {t = 0}\right)$ and the halfway point $\left( {t = {0.5}}\right)$ . The CAVD approach produces a satisfactory, albeit loosely coupled, packing for the first frame, but because the algorithm was not intended to work on animated elements, the evenness of the packing quickly degrades in later frames. The spectral approach improves on CAVD, but its animated elements still have fixed spacetime shapes and can only translate and rotate to improve their fit. Repulsion forces and deformation allow AnimationPak to achieve a tighter packing that persists across the animation, including gear-like meshing of oppositely rotating stars.
+
+Fig. 14a is a static packing of a lion created by an artist and used as an example in FLOWPAK [32]. In Fig. 14b, we reproduce it with animated elements for the mane. The orientations of elements follow a vector field inside the container, and are maintained during the animation by torsional forces. We simulate only half of the packing and reflect it to create the other half. The facial features were added manually in a post-processing step.
+
+Fig. 15 compares a static 2D packing created by RepulsionPak with a frame from an animated packing created by AnimationPak. The extra negative space in AnimationPak comes partly from the trade-off between temporal coherence and tight packing, and partly from the lack of secondary elements, which were used in a second pass in RepulsionPak to fill pockets of negative space.
+
+Fig. 16 emphasizes the trade-off between temporal coherence and evenness of negative space by creating two animations with different time springs stiffness. In (a), the time springs are 100 times stronger than in (b). The resulting packing has larger pockets of negative space, but the accompanying video shows that the animation is smoother. The packing in (b) is tighter, but the elements must move frantically to maintain that tightness.
+
+Fig. 17 is a failed attempt to animate a "blender". The packing has a beam that rotates clockwise and a number of small unguided circles. In a standard physics simulation we might expect the beam to push the circles around the container, giving each one a helical spacetime trajectory. Instead, as elements grow, repulsion forces cause circles to explore the container boundary, where they discover the lower-energy solution of slipping past the edge of the beam as it sweeps past. If we extend the beam to the full diameter of the container, consecutive slices simply teleport across the beam, hiding the moment of overlap in the brief time interval where physical forces were not computed. AnimationPak is not directly comparable to a 3D physics simulation; it is better suited to improving the packing quality of an animation that has already been blocked out at a high level.
+
+§ 8 CONCLUSION AND FUTURE WORK
+
+We introduced AnimationPak, a system for generating animated packings by filling a static container with animated elements. Every animated 2D element is represented by an extruded spacetime tube. We discretize elements into triangle mesh slices connected by time edges, and deform element shapes and animations using a spacetime physical simulation. The result is a temporally coherent 2D animation of elements that attempt both to perform their scripted motions and to consume the negative space of the container. We show a variety of results where 2D elements move around inside the container.
+
+We see a number of opportunities for improvements and extensions to AnimationPak:
+
+ * Because we use linear interpolation to synthesize an element's shape between slices, we require elements not to undergo changes in topology. More sophisticated representations of vector shapes, such as that of Dalstein et al. [11], could support interpolations between slices with complex topological changes. We would also need to synthesize a watertight envelope around the animating element in order to compute overlap and repulsion forces.
+
+
+Figure 11: A snake chasing a bird through a packing of animals. The snake and bird are both guided elements that move clockwise around the annular container.
+
+
+Figure 12: Penguins turning into giraffes. The penguins animate by rotating in place. Torsional forces are used to preserve element orientations. Frames are taken at $t = 0,t = {0.125},t = {0.25},t = {0.375}$ , and $t = {0.5}$ .
+
+
+Figure 13: A comparison of (a) Centroidal Area Voronoi Diagrams (CAVDs) [33], (b) spectral packing [10], and (c) AnimationPak. We show two frames for each method, taken at $t = 0$ and $t = {0.5}$ . The CAVD packing starts with evenly distributed elements but the packing degrades as the animation progresses. The spectral approach improves upon CAVD with better consistency, but still leaves significant pockets of negative space. The AnimationPak packing has less negative space that is more even.
+
+
+Figure 14: (a) A static packing made by an artist, taken from StockUnlimited. (b) The first frame from an AnimationPak packing. (c) The input animated elements and the container shape with a vector field. Torsional forces keep elements oriented in the direction of the vector field. We simulate half of the lion's mane and render the other half using a reflection, and add the facial features by hand.
+
+Figure 15: (a) A static packing created with RepulsionPak. (b) The first frame of a comparable AnimationPak packing. The input spacetime elements are shown on the right. The AnimationPak packing has more negative space because we must trade off between temporal coherence and packing density.
+
+Table 1: Data and statistics for the results in the paper. The table shows the number of elements, the number of vertices, the number of springs, the number of envelope triangles, and the running time of the simulation in hours, minutes, and seconds.
+
+| Packing | Elements | Vertices | Springs | Triangles | Time |
+| --- | --- | --- | --- | --- | --- |
+| Aquatic animals (Fig. 1) | 37 | 97,800 | 623,634 | 106,000 | 01:06:35 |
+| Snake and bird (Fig. 11) | 37 | 58,700 | 370,571 | 58,700 | 01:01:32 |
+| Penguin to giraffe (Fig. 12) | 33 | 124,300 | 824,164 | 143,000 | 01:19:50 |
+| Heart stars (Fig. 13c) | 26 | 85,200 | 598,218 | 85,800 | 00:23:08 |
+| Animals (Fig. 15b) | 34 | 69,600 | 444,337 | 69,800 | 01:00:19 |
+| Lion (Fig. 14b) | 16 | 39,400 | 236,086 | 41,800 | 00:41:56 |
+
+
+Figure 16: (a) One frame from Fig. 1. (b) The same packing with time springs that are $1\%$ as stiff. Reducing the stiffness of time springs leads to a more even packing with less negative space, but the animated elements must move frantically to preserve packing density. The spacetime trajectories of the highlighted fish in (a) and (b) are shown in (c). The orange fish in (b) exhibits more high frequency fluctuation in its position.
+
+ * We would like to improve the performance of the physical simulation. One option may be to increase the resolution of element meshes progressively during simulation. Early in the process, elements are small and distant from each other, so lower-resolution meshes may suffice for computing repulsion forces.
+
+ * As noted in Sect. 6 and Fig. 17, our discrete simulation can miss element overlaps that occur between slices. A more robust continuous collision detection (CCD) algorithm such as that of Brochu et al. [3] could help us find all collisions between the envelopes of spacetime elements.
+
+ * In RepulsionPak [30], an additional pass with small secondary elements had a significant positive effect on the distribution of negative space in the final packing. It may be possible to identify stretches of unused spacetime that can be filled opportunistically with additional elements. The challenge would be to locate tubes of empty space that run the full duration of the animation, always of sufficient diameter to accommodate an added element.
+
+ * Like the spectral method [10], and unlike Animosaics [33], AnimationPak can pack animated elements into a static container. We would like to extend our work to also handle animated containers. This extension would certainly affect the initial element placement, which would need to ensure that elements are placed fully inside the spacetime volume of the container. It could also lead to undesirable scaling of elements if the container area changes too much. It would be interesting to investigate whether we could adapt to changes in area by adding and removing elements unobtrusively during the animation, in the style of Animosaics.
+
+
+Figure 17: A failure case for AnimationPak, consisting of a rotating beam and a number of small circles. Instead of being dragged around by the beam, the circles dodge it entirely by sneaking through the gap between the beam and the container. The red circle demonstrates one such maneuver.
+
+ * AnimationPak implements forces and constraints geared towards spacetime animation, but many of the same ideas could be adapted to develop a deformation-driven method for packing purely spatial 3D objects into a 3D container. We would like to evaluate the expressivity and visual quality of deformation-driven 3D packings in comparison to other 3D packing techniques.
+
+ * Our physical simulation relies in several places on our method of constructing and animating spacetime elements. Our time edges make use of the one-to-one correspondence between boundary vertices of adjacent slices in order to construct a mesh surface that bounds each element. We also make direct use of that correspondence when rendering, to interpolate new triangulations between existing slices. We would like AnimationPak to be more agnostic about the method used to create animated elements. Given a "generic" animated element, we can easily compute independent triangulated slices, but we would need robust algorithms to join them into an extrusion and interpolate within that extrusion later.
+
+ * Saputra et al. [30] previously studied a set of measurements inspired by spatial statistics for evaluating the evenness of the distribution of negative space in a static packing. While their measurements extend naturally to three purely spatial dimensions, it is not clear whether they can be adapted to our spacetime context. We would like to investigate spatial statistics for the quality of animated packings that correlate with human perceptual judgments.
+
+ * There are many examples of static two-dimensional packings created by artists, which can serve as inspiration for an algorithm like RepulsionPak. We were unable to find an equivalent set of animated examples, probably because they would be difficult and time-consuming to create by hand. We would like to engage with artists to understand the aesthetic value and limitations of AnimationPak.
+
+§ ACKNOWLEDGMENTS
+
+We thank the reviewers for their helpful feedback. Thanks to Danny Kaufman for discussions about spacetime optimizations and physics simulations. This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and through a generous gift from Adobe.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1abe0cb08ee2acc520cbb1950186e6e843169d6
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,487 @@
+# Selection Performance Using a Scaled Virtual Stylus Cursor in VR
+
+Seyed Amir Ahmad Didehkhorshid*
+
+Carleton University, Ottawa, Canada
+
+Robert J. Teather**
+
+Carleton University, Ottawa, Canada
+
+## Abstract
+
+We propose a surface warping technique we call warped virtual surfaces (WVS). WVS is similar to applying CD gain to a mouse cursor on a screen, and is used with traditionally $1 : 1$ input devices, in our case a tablet and stylus, for use with VR head-mounted displays (HMDs). WVS allows users to interact with arbitrarily large virtual panels in VR while getting the benefits of passive haptic feedback from a fixed-size physical panel. To determine the extent to which WVS affects user performance, we conducted an experiment with 24 participants using a Fitts' law reciprocal tapping task to compare different scale factors. Results indicate there was a significant difference in movement time for large scale factors. However, for throughput (ranging from 3.35-3.47 bps) and error rate (ranging from 3.6-5.4%), our analysis did not find a significant difference between scale factors. Using non-inferiority statistical testing (a form of equivalence testing), we show that performance in terms of throughput and error rate for large scale factors is no worse than a 1-to-1 mapping. Our results suggest WVS is a promising way of providing large tactile surfaces in VR using small physical surfaces, with little impact on user performance.
+
+Index Terms: Human-centered computing - Human computer interaction (HCI) - Interaction techniques - Pointing; Human-centered computing - Human computer interaction (HCI) - Interaction paradigms - Virtual reality
+
+## 1 INTRODUCTION
+
+There has been a recent surge in demand for virtual reality (VR) in entertainment, education, and design applications. Current VR hardware is self-contained, wireless, lighter, and offers high visual fidelity. Development tools (e.g., Unity3D) are also becoming more accessible. These factors have created the perfect storm, paving the way for a new wave of innovations and creativity in immersive VR technologies. Despite these advances, there remain challenging research problems to be solved. General-purpose haptics is among these big problems. Past research has shown that haptic feedback significantly increases the quality of a VR experience [25, 31, 34, 51]. However, designing interaction techniques that support realistic haptic feedback in VR is still problematic.
+
+Past studies have demonstrated the effectiveness of planar surfaces as a semi-general purpose prop in VR. In particular, the use of tablets to provide tactile surfaces in VR has been extensively studied [11, 18, 44, 53, 63, 68]. The Personal Interaction Panel (PIP) [63], Lindeman et al.'s HARP system [44], the Virtual Notepad by Poupyrev et al. [53], and Worlds In Miniature (WIM) [61] all used tracked panels for VR interaction. Other studies used a tablet and stylus for text input in VR [11].
+
+Graphics Interface Conference 2020, 28-29 May. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+Several researchers have investigated the use of redirection and retargeting techniques for VR interaction. This relatively new class of perceptual illusion-based interaction techniques includes redirected walking (RDW) [55], haptic retargeting [6], and redirected touch [38]. Except for Yang et al.'s VRGrabber technique, which used retargeting with grabbing tools [71], all other techniques apply warping or redirection to the entire body (e.g., with RDW) or to a body part such as the hand or fingers. None of the proposed interaction techniques so far has been applied with a planar surface, despite the well-known benefits of using such surfaces in VR. Although there are studies on bimanual retargeting [27, 50] or unimanual redirected touching that used planar surfaces in their experiments [38-40], none studied the planar surface as an input method. Large displays are known to offer performance benefits for spatial tasks, spatial knowledge, and navigation [7, 64, 65]. Hence, we propose to use space warping to extend the virtual interaction panel surface in VR.
+
+We propose a technique we call Warped Virtual Surfaces (WVS). WVS combines the ideas behind RDW in expanding the tracking space, with haptic retargeting to use perceptual illusions and body warping. With WVS, users can interact with an arbitrarily large tactile virtual surface in HMD-based VR. Haptic feedback is provided by a fixed-size tracked physical tablet that uses a stylus for input. The technique applies a scale factor (SF) to move the virtual cursor beyond the tablet's active tracking area, making the stylus behave like an indirect input device, similar to a mouse and akin to changing the control-display (CD) gain [5, 48].
+
+Since users cannot see their physical hands or the stylus when using HMDs, they do not notice the decoupling between the physical stylus and virtual cursor positions. In non-VR setups, this could be distracting and confusing. Different SFs cause the cursor to move further with less physical movement on the tablet, much like how CD gain works for a mouse cursor. However, the user perceives the virtual contact on the surface in VR and thus receives appropriate tactile cues from physically touching the tablet with the stylus. WVS creates the illusion of an arbitrarily large tactile surface in VR while keeping the users' motor space consistent during interaction on the tablet. WVS can theoretically be employed with any other tablet-based VR technique, offering tactile feedback with a virtually larger tablet than what is available.
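The paper does not give an explicit formula for the SF mapping; one plausible minimal sketch applies a constant scale factor to the stylus displacement from the tablet center (all names here are illustrative assumptions, not the authors' implementation):

```python
def warped_cursor(stylus_xy, tablet_center, scale_factor):
    """Map a physical stylus position on the tablet to a virtual cursor
    position by scaling its displacement from the tablet center, akin to a
    constant CD gain."""
    dx = stylus_xy[0] - tablet_center[0]
    dy = stylus_xy[1] - tablet_center[1]
    return (tablet_center[0] + scale_factor * dx,
            tablet_center[1] + scale_factor * dy)
```

With `scale_factor = 1` this reduces to the usual 1:1 tablet mapping; larger factors let the cursor reach a virtual panel larger than the physical tablet.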
+
+We conducted an experiment to determine how far we can push this illusion and expand the interaction surface without affecting user performance. To evaluate user performance, we employed a Fitts' law reciprocal tapping task [24, 66]. To our knowledge, our experiment is the first to investigate the effects of surface warping using a stylus on user performance in selection tasks, and to apply CD gain with a tablet and stylus input modality in VR.
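For reference, the throughput figures reported in the abstract follow the standard Fitts' law formulation; a sketch using the Shannon index of difficulty (effective-width corrections, as in ISO 9241-9, are omitted here):

```python
import math

def throughput(amplitude, width, movement_time):
    """Fitts' throughput in bits per second:
    ID = log2(A / W + 1), TP = ID / MT."""
    index_of_difficulty = math.log2(amplitude / width + 1)
    return index_of_difficulty / movement_time
```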
+
+## 2 RELATED WORK
+
+### 2.1 Visual Illusions
+
+Studies on the human brain reveal the dominance of vision over other senses when there are sensory conflicts [9, 16, 21, 26, 28, 57, 60]. For instance, Gibson showed that a flat surface is perceived as curved while wearing distortion glasses and moving hands on a straight line [26]. VR and HCI researchers took advantage of this visual dominance to enhance selection in VR [4, 52, 54, 69], yielding several novel interaction techniques for HMD-based VR.
+
+Many such techniques rely on the inaccuracy inherent to our proprioception and vestibular senses [9]. Burns et al. provided evidence of visual dominance over proprioception in a series of studies and found that people tend to believe their hand is where they see it [13-15]. Klatzky et al. showed that vestibular cues are also dominated by vision [37]. These ideas have been applied in various HCI contexts. For example, Zenner et al. used an internal weight shifting mechanism in a passive haptic proxy to enhance virtual object length and thickness perception with Shifty [73]. Similarly, Krekhov et al. used weight perception illusions in a self-transforming controller to enhance VR player experience [42]. McClelland et al. introduced the Haptobend, which used a bendable device to support different objects with simple geometry such as tubes and flat surfaces with a single physical prop [51].
+
+---
+
+* amir.didehkhorshid@carleton.ca
+
+** rob.teather@carleton.ca
+
+---
+
+Table 1. Summary of key studies on perceptual illusion techniques in VR.
+
+| 1st Author/Year | Description | Evaluation Method | Redirection on | Main Findings | Notes |
| --- | --- | --- | --- | --- | --- |
| Kohli/2010 [38, 39, 40] | Redirected touch under warped spaces. | Fitts' law | Fingers | Discrepant conditions are no worse than 1-to-1 mapping. | Also evaluated performance & adaptation under warped spaces. |
| Azmandian/2016 [6] | Repurposing a passive haptic prop using body, world & hybrid space warping. | Block stacking | Hands | Hybrid warping scored the highest presence score. | Recommended encouraging slow hand movements & taking advantage of visual dominance. |
| Carvalheiro/2016 [16] | Haptic interaction system for VR & evaluating user awareness regarding the space warping. | Touch & moving objects | Hands | Average accuracy error of 7 mm; users adapt to distortion & are less sensitive to negative distortions. | 7 participants did not detect distortions, even though they were told about it in the second phase of experiments. |
| Murillo/2017 [52] | Multi-object retargeting optimized for upper limb interactions in VR. | Target selection | Hands | Improved ergonomics with no loss of performance or sense of control. | Used tetrahedrons to partition the physical and virtual world for multi-object retargeting. |
| Cheng/2017 [19] | Sparse haptic proxy using gaze and hand movement for target prediction. | Target acquisition | Hands | Retargeting of up to 15° similar to no retargeting; 45° received lower but above-natural ratings. | Predicted desired targets 2 seconds before participants touched, with 97.5% accuracy. |
| Han/2018 [28] | Static translation offsets vs. dynamic interpolations for redirected reach. | Reaching for objects | Hands | The translational technique performed better, with more robustness in larger mismatches. | Horizontal offsets up to 76 cm applied when reaching for a virtual target were tolerable. |
| Yang/2018 [71] | Virtual grabbing tool with ungrounded haptic retargeting. | Using controller for precise object grabbing | Controller | Travelling distance difference between the visual and the physical chopstick needs to be in the range (-1.48, 1.95) cm. | Control/display ratio needs to be between 0.71 and 1.77; better performance with ungrounded haptic retargeting. |
| Feuchtner/2018 [23] | Slow shift of user's virtual hand to reduce strain of in-air interaction. | Pursuit tracking | Hands | Vertical hand shift of 65 cm reduced fatigue and maintained body ownership. | Vertical shift decreases performance by 4%; gradual shifts are preferable. |
| Matthews/2019 [50] | Bimanual haptic retargeting with interface, body & combined warps. | Pressing virtual buttons | Hands/Bimanual | Faster response time for combined warp; increased error in body warp. | Same time and error between bimanual and unimanual retargeting, but needs a more statistically powerful study. |
+
+Vision is not always dominant. In the case of conflicts, the brain weights sensory signals based on their reliability [30, 32], and there are thresholds on the dominance of vision. Some studies used the just-noticeable difference (JND) threshold methodology to quantify mismatch thresholds [13, 36, 43, 49], while others employed two-alternative forced choice (2AFC) [72]. Interestingly, some studies have shown that force direction and the curvature of real props can influence the mismatch thresholds [8, 56, 72].
+
+### 2.2 Perceptual Illusions in VR
+
+Table 1 summarizes key studies on perceptual illusions in VR. Haptic retargeting, introduced by Azmandian et al. [6], partially solved a major limitation of using physical props for tactile feedback by mapping one physical object to multiple virtual ones. The technique operates by redirecting the user's hand towards the physical prop as they reach for different virtual items at various locations [6]. This and similar techniques work through perceptual illusions and the dominance of vision over other senses [9, 26, 37, 49, 57, 60, 62]. Likely the most well-known example is redirected walking (RDW), first proposed by Razzaque et al. [55]. RDW enables users to walk along a seemingly infinite straight path in HMD VR; in reality, RDW users walk in circles within a limited tracking space while perceiving themselves as walking in straight lines.
+
+Kohli et al. were among the first to propose redirected touching in a VR setting [38]. In a series of experiments, Kohli et al. examined the effects of warping virtual spaces on user performance, as well as adaptation and training under warped spaces [39, 40]. They reported that while training under real conditions seemed more productive, participants performed much better after adapting to the discrepancies between vision and proprioception [40]. Indeed, they report that participants had to readapt to the real world after adapting to the warped virtual space [40].
+
+Azmandian et al. took the idea of redirected touching further and introduced haptic retargeting, which added dynamic mapping of the whole hand rather than just the fingers [6]. The technique leverages visual dominance to repurpose a single passive haptic prop for various virtual objects. This produced a higher sense of presence among participants, in line with past findings on the benefits of haptics in VR [34]. Their technique is limited by the shape of the physical prop, and by the requirement that the target position be known prior to selection. To overcome the targeting limitations, Murillo et al. proposed a multi-object retargeting technique that partitions both the virtual and physical spaces using tetrahedrons to allow open-ended hand movements while retargeting [52]. Haptic retargeting can also be applied to bimanual interactions [50]. Matthews et al. suggested that the technique could also be applied to wearable interfaces, i.e., on the user's wrist or arm [50].
+
+Several other studies employed similar techniques. For example, Cheng et al. explored the applications and limits of hand redirection using geometric primitives with touch feedback in a VE, while predicting the desired targets using hand movements and gaze direction [19]. Feuchtner et al. proposed the Ownershift interaction technique to ease over-the-head interaction in VR while wearing an HMD [23]. Ownershift does not require a mental recalibration phase, since the initial 1:1 mapping allows initial ballistic movements toward the targets [23]. Abtahi et al. used visuo-haptic illusions in tandem with shape displays [1]. They were able to increase the perceived resolution of shape displays for a VR user by applying scales of less than 1.8×, redirecting sloped lines with angles of less than 40° onto a horizontal line.
+
+To summarize, despite the well-known advantages of large displays and planar surfaces in VR, very few studies have used warping techniques with planar input devices. Our proposed technique and present study aim to fill this gap.
+
+### 2.3 Fitts' Law and Scale
+
+Fitts' law predicts selection time as a function of target size and distance [24]. The model is given as:
+
+$$
+MT = a + b \cdot ID, \quad \text{where} \quad ID = \log_2\left(\frac{A}{W} + 1\right) \tag{3}
+$$
+
+where $MT$ is movement time, and $a$ and $b$ are empirically derived via linear regression. $ID$ is the index of difficulty, representing overall selection difficulty, based on $A$, the amplitude (i.e., distance) between targets, and $W$, the target width. As seen in Equation (3), increasing $A$ or decreasing $W$ increases $ID$, yielding a harder task.
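As a concrete illustration, Equation (3) can be sketched in a few lines of Python. The regression coefficients `a` and `b` below are illustrative placeholders, not values from this study:

```python
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

def predict_movement_time(a: float, b: float, amplitude: float, width: float) -> float:
    """MT = a + b * ID; a (intercept) and b (slope) come from linear regression."""
    return a + b * index_of_difficulty(amplitude, width)

# The easiest and hardest A/W pairs used in this study (in pixels):
easiest = index_of_difficulty(300, 250)   # ~1.1 bits
hardest = index_of_difficulty(2300, 100)  # ~4.6 bits
```

With the same hypothetical coefficients, the hardest pair always yields a longer predicted movement time than the easiest, as the model requires.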
+
+Throughput is recommended by the ISO 9241-9 standard as a primary metric for pointing device comparison, rather than movement time or error rate alone. Throughput incorporates speed and accuracy into a single score and is unaffected by speed-accuracy tradeoffs [46]; in contrast, movement speed and accuracy vary with participant differences. Throughput thus gives a more realistic idea of overall user performance than movement time or error rate. Our study employs throughput for consistency with other studies [35, 66, 67]. Throughput is given as:
+
+$$
+TP = \frac{ID_e}{MT}, \quad \text{where} \quad ID_e = \log_2\left(\frac{A_e}{W_e} + 1\right) \tag{4}
+$$
+
+$ID_e$ is the effective index of difficulty: the difficulty of the task users actually performed, rather than the one they were presented with. Effective amplitude, $A_e$, is the mean movement distance between targets for a particular condition. Effective width, $W_e$, is:
+
+$$
+W_e = 4.133 \cdot SD_x \tag{5}
+$$
+
+where $SD_x$ is the standard deviation of selection endpoints projected onto the vector between the two targets (i.e., the task axis). It incorporates the variability in selection coordinates and is multiplied by 4.133, yielding $\pm 2.066$ standard deviations from the mean. This effectively resizes targets so that 96% of selections hit the target, normalizing the experimental error rate to 4% and facilitating comparison between studies with varying error rates [45, 59].
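A minimal sketch of the effective-measure computations in Equations (4) and (5), assuming selection endpoint deviations have already been projected onto the task axis (function names and data are illustrative, not from our software):

```python
import math
import statistics

def effective_width(deviations):
    """Eq. (5): W_e = 4.133 * SD of endpoint deviations along the task axis."""
    return 4.133 * statistics.stdev(deviations)

def throughput(deviations, effective_amplitude, mean_mt_seconds):
    """Eq. (4): TP = ID_e / MT, with ID_e = log2(A_e / W_e + 1).
    Mean movement time is given in seconds so TP is in bits per second."""
    id_e = math.log2(effective_amplitude / effective_width(deviations) + 1)
    return id_e / mean_mt_seconds
```

Note that wide misses inflate $W_e$ (lowering $ID_e$ and thus $TP$), while tightly clustered selections shrink it, which is how the adjustment folds accuracy into the throughput score.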
+
+Both visual and motor scale have previously been studied by HCI scholars in non-VR contexts, often via Fitts' law studies. Factors involved in evaluating scale include the physical dimensions of the device screen, the pixel density of the screen, and the distance between the user and the display [2, 12, 17, 33, 41, 70]. Browning et al. found that physical screen dimensions affected target acquisition performance negatively, especially for smaller screens [12]. Chapuis et al. also report that target acquisition for small targets suffered, indicating that selection performance is affected by movement scale rather than visual scale [17]. Accot et al. isolated movement scaling by using identical display conditions while systematically varying the trackpad size [2]. Using this setup with the steering task [3], they found a "U-shaped" performance curve, meaning that the smallest and largest trackpad sizes had the worst performance; they attributed this to the limits of human motor precision [2]. Kovacs et al. studied screen size independent of motor precision; their findings suggested that human movement planning ability is affected by screen size [41]. Hourcade et al. showed that increasing the distance between the user and the screen, which causes targets to appear smaller due to perspective, also affects accuracy and speed negatively [33].
+
+## 3 Warped Virtual Surfaces
+
+With WVS, users perceive themselves interacting with an arbitrarily sized virtual surface that is potentially much larger than the physical tablet. The actual interaction space is always the same (i.e., the physical tracking area of the tablet). We rescale the plane representing the virtual screen in VR and render targets at locations that would fall outside the bounds of the physical tablet's tracking area. In other words, with WVS, users can select and "feel" targets that are beyond the extents of the tablet's physical dimensions.
+
+Tablet drivers typically provide the stylus tip position on the tablet relative to the top or bottom left corner (i.e., the coordinate origin of the tracking area), with $x$ and $y$ values ranging from 0 to 1. In our case, the origin was the bottom left corner. The coordinates are calculated by taking the physical distance vector from the stylus tip to the origin and dividing its $x$ and $y$ values by the respective physical width and height of the tablet's active tracking area. We calculate this as the real cursor position ($C_{Rp}$), the point where the stylus tip is physically touching the tablet:
+
+$$
+C_{Rp} = \left(1/\text{width}, 1/\text{height}\right) \times \text{dist}(\text{stylusTip}, W_O) \tag{1}
+$$
+
+Similar to haptic retargeting, we use a warping origin ($W_O$) [6] for scaling. For WVS, the origin is the centre of the physical tablet's rectangular tracking area. We chose the centre of the tablet as the warping origin because it is the point around which the virtual panel grows in size. $W_O$ is also the only point on the tablet surface that, regardless of the SF, remains in its original 1-to-1 mapped position; in contrast, the virtual tablet corner points are subject to scaling. Therefore, we chose $W_O$ as the origin point for both $C_{Rp}$ and the virtual warped cursor position ($C_{Wp}$), the position of the cursor the user sees on the screen panel in VR. We thus shift the coordinate system origin of $C_{Rp}$ from the bottom left corner of the tablet to $W_O$ by rescaling the output range of the $C_{Rp}$ points to range from -0.5 to 0.5 instead of 0 to 1. This results in the centre of the tablet tracking surface being represented as (0, 0) instead of (0.5, 0.5). We track the stylus tip with the tablet's built-in digitizer and apply the SF only when the stylus is within tracking range, ensuring that warping is limited to the tablet's surface.
+
+At $W_O$, $C_{Rp}$ and $C_{Wp}$ align. Warping the tablet's surface causes $C_{Wp}$ to move ahead of $C_{Rp}$ as the user moves the stylus further away from $W_O$. This is similar to the effect of CD gain, where a small movement of the physical mouse translates to a large on-screen movement of the mouse cursor. We use a similar idea to extend cursor reach on the tablet in VR with WVS: a larger SF causes $C_{Wp}$ to speed up, much like a high CD gain. The further the stylus moves from $W_O$, the greater the decoupling between $C_{Rp}$ and $C_{Wp}$ (see Figure 1). $C_{Rp}$ values, ranging from -0.5 to 0.5, are multiplied by a scale factor to yield $C_{Wp}$, which is where we render the cursor in VR:
+
+$$
+C_{Wp} = \text{ScaleFactor} \times C_{Rp} \tag{2}
+$$
+
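Equations (1) and (2), together with the origin shift described above, can be sketched as follows (coordinates in millimetres; function and parameter names are illustrative, not from our implementation):

```python
def real_cursor_position(stylus_x_mm, stylus_y_mm, width_mm, height_mm):
    """Eq. (1) plus the origin shift: normalize the stylus tip position to
    [0, 1] x [0, 1], then recentre on the warping origin W_O (the tablet
    centre), giving coordinates in [-0.5, 0.5] x [-0.5, 0.5]."""
    return (stylus_x_mm / width_mm - 0.5, stylus_y_mm / height_mm - 0.5)

def warped_cursor_position(c_rp, scale_factor):
    """Eq. (2): C_Wp = ScaleFactor * C_Rp, the cursor position rendered in VR."""
    return (scale_factor * c_rp[0], scale_factor * c_rp[1])

# At W_O the real and warped cursors align regardless of SF; at a corner of
# the 254 mm x 152.4 mm active area, an SF of 2.0 doubles the virtual offset.
centre = real_cursor_position(127.0, 76.2, 254.0, 152.4)   # (0.0, 0.0)
corner = real_cursor_position(254.0, 152.4, 254.0, 152.4)  # (0.5, 0.5)
```

This also makes the CD-gain analogy concrete: the physical stylus stays within ±0.5 of the centre, while the rendered cursor can reach ±0.5 × SF.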
+
+
+Figure 1: Visual representation of the WVS system.
+
+## 4 METHODOLOGY
+
+We conducted a Fitts' law experiment comparing several SFs to a 1-to-1 mapping control condition. Our objective was to determine whether the scaling applied by our warped virtual surface technique influenced user performance. We used a set of pre-selected amplitude and width pairs, rather than fully crossing a selection of amplitudes and widths. This ensured that all combinations of $A$ and $W$ were reachable in the 1-to-1 mapping condition; a sufficiently large $W$ could yield targets cut off by the edge of the virtual tablet screen, which would not be reachable by the cursor without warping. We thus carefully chose our amplitude and width pairs so that they covered a wide range of $ID$s while keeping all targets physically reachable under 1-to-1 mapping.
+
+### 4.1 Hypothesis
+
+Past studies suggest that although visual and motor scale affect performance differently, small scales and sizes impact user performance negatively for both [10, 12, 17, 20, 33, 48, 70]. Most similar to our work, Blanch et al.'s study revealed that pointing task performance is governed by motor space rather than visual space [10].
+
+Thus, we hypothesize that movement time (MT), error rate, target entry count, and most importantly, throughput (TP) would be unaffected by varying SFs, since participants are still selecting the same physical locations on the tablet's surface. In other words, we hypothesize that selection performance will be the same regardless of the influence of warping. We show this using a non-inferiority statistical analysis [58] (explained in Section 5.1).
+
+### 4.2 Participants
+
+We recruited 24 participants (11 female, aged 19 to 64, $\mu = 26.5$, $SD = 10.5$). Three were left-handed, and one was ambidextrous but chose to complete the experiment using their right hand. We also surveyed their experience with VR and games: 62.5% reported having no VR experience at all, 37.50% reported having a little VR experience, and 4.20% a moderate amount. In terms of gaming experience, 37.50% reported having no 3D first-person game experience, 20.80% reported having a little, 29.20% a moderate amount, and 12.50% a lot of experience. All participants had normal or corrected-to-normal stereo vision, assessed by questioning before entering VR.
+
+### 4.3 Apparatus
+
+#### 4.3.1 Hardware
+
+We used a PC with an Intel Core i7 processor and an NVIDIA GeForce GTX 1080 graphics card. We used the HTC Vive VR platform, which includes an HMD with 1080 × 1200 pixel (per eye) resolution, 90 Hz refresh rate, and a 110° field of view. The tablet was an XP-PEN STAR 06 wireless drawing tablet. Its dimensions were 354 mm × 220 mm × 9.9 mm, with a 254 mm × 152.4 mm active area and 5080 LPI resolution. The tablet includes a stylus with a barrel button and a tip switch to support activation upon pressing it against the tablet surface. The 2D location of the stylus tip is tracked along the surface by the built-in electromagnetic digitizer. We affixed a Vive tracker to the top-right corner of the tablet using Velcro tape; see Figure 2.
+
+
+
+Figure 2: Overlay of Fitts' law task on the tablet. Orange circles depict the physical target location, while blue circles depict the targets the user saw in VR. Gradient arrows illustrate the surface warping effect and how the virtual surface grows in size in all directions.
+
+#### 4.3.2 Software
+
+We developed our software using Unity3D 2019.2 and C# on MS Windows 10. Figure 3 depicts the participants' view in VR during a selection task. We also included several space-themed assets from the Unity store. These were not seen during the task but were visible during breaks to help entertain participants between trials. We used a modified version of the source code provided by Hansen et al. [29] to develop our software.
+
+
+
+Figure 3: Fitts' law task as seen on the tablet in VR.
+
+Windows 10 recognizes the tablet as a human interface device (HID), which causes tablet input to be mapped to the mouse cursor. We instead used a custom LibUSB driver that allowed direct access to the tablet data from Unity. The library provides stylus coordinates on the tablet surface and whether the stylus is touching the tablet surface or hovering above it within an approximately 1 cm range.
+
+The software polled the Vive tracker to map a virtual interaction panel to the physical tablet's active tracking area, co-locating their centres. The virtual panel in VR had a resolution of 4000 × 2400 and, under 1-to-1 mapping, the same size as the physical tracking area on the tablet. When scaling was applied, the virtual tablet panel size was multiplied by the SF value. The tablet stylus was used to interact with the tablet. We could not find a reliable and suitable solution for tracking the stylus or hands externally; hence tracking was limited to the tip of the stylus, within close range of the tablet surface, by the tablet's digitizer. Due to this limitation, we did not render a model of the stylus or hands. However, when the stylus was in range of the tablet, we displayed a virtual star-shaped cursor, with a dot hotspot in the centre, at the stylus tip. Input ("click") events were detected when the stylus was pressed against the tablet. The virtual cursor was used for selection, and its position was calculated as described in Section 3.
+
+The virtual tablet sat on a table (see Figure 4). Targets changed colour while the cursor hovered over them, showing which would be selected if the tip switch was pressed. Upon successfully selecting a target, an auditory "click" sound was played, and the experiment moved to the next target for selection. In case of an error, a distinct "beep" sound indicated the selection error, and the experiment moved to the next target in the current sequence.
+
+
+
+Figure 4: The virtual table where participants sat during the study.
+
+### 4.4 Procedure
+
+Overall, the experiment took about one hour, with participants in VR for around 45 minutes. Before starting, participants provided informed consent and completed a demographic questionnaire. The main experiment was divided up into eight blocks (one per SF). Each block consisted of ten sequences, one for each of the 10 IDs. In each sequence, participants were presented with 15 targets. Participants had to successfully select at least 50% of the targets to move on to the next sequence. Participants were given an (at least) 30-second break between each block. During the break, they could remove the HMD if desired. To begin each sequence, participants had to select the first target to begin the timer. This selection in each sequence was thus not logged as it just started the sequence. The target would be selected again as the last target in the sequence.
+
+There was no training session before starting the experiment. The task involved selecting circular targets, as is common in Fitts' law experiments (see Figure 2). The task required selecting the purple target as quickly and accurately as possible. If participants missed the target, the system recorded an error and moved on to the next target. If participants exceeded a 50% error rate, they were asked to redo the sequence; the data logging system recorded this. After completing the experiment, the participants exited the VE and completed a post-questionnaire where they commented on their experience using the VR tablet prototype. They were then debriefed and compensated $10 CAD.
+
+### 4.5 Design
+
+Our experiment employed a within-subjects design with two independent variables, Scale Factor and ID (index of difficulty):
+
+Scale Factor (SF): 1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4.
+
+ID: 1.1, 1.5, 1.8, 2.1, 2.3, 2.7, 3.5, 3.8, 4.0, 4.6. The ID values were generated from the following 10 combinations of $A$ and $W$ (in pixels):
+
+| ID | 1.1 | 1.5 | 1.8 | 2.1 | 2.3 | 2.7 | 3.5 | 3.8 | 4.0 | 4.6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $A$ | 300 | 450 | 1300 | 1600 | 800 | 1100 | 1000 | 2000 | 2250 | 2300 |
| $W$ | 250 | 250 | 500 | 500 | 200 | 200 | 100 | 150 | 150 | 100 |
+
+The SFs were applied to both the cursor position and the virtual panel size in VR, as described in Section 3. IDs were calculated according to Equation (3) using an SF of 1 (i.e., 1-to-1 mapping). SF ordering was counterbalanced via a balanced Latin square. Within each SF, $ID$ order was randomized, with one $ID$ per sequence (i.e., circle) of 15 targets.
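The counterbalancing above can be sketched with a standard balanced Latin square construction for even $n$ (illustrative code, not our experiment software):

```python
def balanced_latin_square(n):
    """Balanced Latin square for even n: each condition appears once in each
    position, and each condition directly precedes every other exactly once."""
    def first_row(k):
        # First-row pattern: 0, 1, n-1, 2, n-2, 3, ...
        return (k + 1) // 2 if k % 2 else (n - k // 2) % n
    return [[(i + first_row(k)) % n for k in range(n)] for i in range(n)]

scale_factors = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4]
orders = balanced_latin_square(len(scale_factors))
# With 24 participants and 8 rows, participant p would follow row p % 8:
participant_0_order = [scale_factors[c] for c in orders[0]]
```

For even $n$ this construction balances first-order carryover effects: every SF follows every other SF equally often across the 8 row orderings.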
+
+Our dependent variables included:
+
+- Movement time: average selection time, in milliseconds.
+
+- Error rate: average proportion of targets missed (percentage).
+
+- Throughput (in bits per second, bps): calculated based on the ISO 9241-9 standard, using Equation (4).
+
+- Target entry count: number of times the cursor entered a target before selection; representative of control problems [47].
+
+Like others [22, 45, 46, 59, 67], we argue that throughput gives a better idea of selection performance than either movement time or error rate. The accuracy adjustment used to derive throughput incorporates speed and accuracy together, making throughput constant regardless of participant biases towards speed or accuracy. It thus better facilitates comparison between studies and is more representative of performance than speed or accuracy alone [46]. We use it as our primary dependent variable, like other studies.
+
+In total, each participant completed 8 SFs × 10 IDs × 15 trials (individual selections), for 1200 selections. Our analysis is based on 24 participants × 1200 trials = 28,800 selections in total.
+
+## 5 RESULTS
+
+We used repeated-measures ANOVA on movement time, error rate, throughput, and target entries to detect significant differences due to SF. We did not analyze $ID$, as it is expected to yield performance differences. As detailed below, we found significant main effects of SF only for $MT$ and target entries. We did not find significant differences in error rate or, most importantly, throughput. Horizontal bars (●...●) indicate pairwise significant differences between conditions with Bonferroni adjustments.
+
+We note here that while standard null-hypothesis statistical testing can determine if two conditions are significantly different, our objective was to determine if WVS is no worse than the 1-to-1 mapping (i.e., an SF of 1). This would suggest that it has minimal impact on user performance and is thus a viable technique for virtually extending tablet surfaces. However, standard null-hypothesis statistical tests (e.g., ANOVA) do not determine if two conditions are statistically the same or non-inferior to one another. Hence, we instead conducted non-inferiority testing for $TP$ and error rate [58].
+
+### 5.1 Non-inferiority Statistical Analysis
+
+Non-inferiority testing is a form of equivalence testing that shows if a condition is statistically no worse than another. It requires defining an indifference zone, i.e., the maximum allowed difference between two conditions to be considered non-inferior based on the context of the study [58]. With the indifference zone defined, we next analyze the mean difference between the conditions and the 1-tailed 95% confidence interval of that difference. Finally, we check if the mean difference score and the 1-tailed 95% confidence interval fall within the extents of the indifference zone. If so, then the two conditions are deemed to be no worse than each other, i.e., equivalent [58]. Although this form of analysis is rare in HCI, it has been used before in the context of VR Fitts’ law experiments [39].
+
+For throughput, we used the same indifference zone (1 bps) as Kohli et al. [39]. For error rate, they used the smallest unit of error, i.e., one target miss in a sequence. We used the same threshold, in our case one miss in fifteen targets, for a 6.66% indifference zone.
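The procedure above can be sketched as follows (a simplified illustration: the critical $t$ value must be supplied externally, e.g. roughly 1.714 for a one-tailed 95% CI with df = 23, and higher scores are assumed better, as for throughput):

```python
import statistics
from math import sqrt

def non_inferior(baseline, variant, margin, t_crit):
    """One-sided non-inferiority check on paired scores.
    The variant is non-inferior if the lower bound of the one-tailed CI of
    the mean paired difference (variant - baseline) exceeds -margin."""
    diffs = [v - b for b, v in zip(baseline, variant)]
    mean_diff = statistics.mean(diffs)
    std_err = statistics.stdev(diffs) / sqrt(len(diffs))
    lower_bound = mean_diff - t_crit * std_err
    return lower_bound > -margin, lower_bound
```

For a metric where lower is better (e.g., error rate), the same check applies to negated scores, which mirrors comparing the upper CI bound against the indifference zone.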
+
+### 5.2 Throughput
+
+RM-ANOVA on throughput revealed no significant difference for scale factor ($F_{4.21, 96.98} = 0.92$, ns). Mean $TP$ was fairly consistent across all scale factors. See Figure 5.
+
+
+
+Figure 5: Mean TP for each SF value. Error bars show 95% CI.
+
+To determine if throughput is statistically consistent across SFs, we next conducted a non-inferiority test. Using the aforementioned indifference zone of 1 bps, the mean difference between each compared SF and the lower bound of the one-tailed confidence interval should be greater than -1 bps to be considered non-inferior. Table 3 shows the results of the non-inferiority test for pairwise comparisons (with Bonferroni corrections) between the 1-to-1 mapping and all other SFs. Based on this analysis, no SF has worse $TP$ than 1-to-1 mapping (i.e., they are all considered non-inferior). Overall, this result indicates that $TP$ is not affected by SF, in line with our main hypothesis and suggesting our WVS technique has minimal impact on user performance.
+
+Table 3: Mean $TP$ differences and non-inferiority test results.
+
+| SF Pairs | Mean Diff. | 1-tailed 95% CI | SD Error | Non-inferiority Comparison |
| --- | --- | --- | --- | --- |
| 1-1.2 | -0.077 | > -0.313 | 0.067 | -0.313 > -1.0 |
| 1-1.4 | -0.059 | > -0.333 | 0.078 | -0.333 > -1.0 |
| 1-1.6 | -0.073 | > -0.426 | 0.100 | -0.426 > -1.0 |
| 1-1.8 | -0.115 | > -0.393 | 0.079 | -0.393 > -1.0 |
| 1-2.0 | -0.108 | > -0.410 | 0.086 | -0.410 > -1.0 |
| 1-2.2 | 0.013 | > -0.260 | 0.077 | -0.260 > -1.0 |
| 1-2.4 | 0.019 | > -0.194 | 0.060 | -0.194 > -1.0 |
+
+### 5.3 Error Rate
+
+We found no significant difference in error rate between SFs using RM-ANOVA ($F_{3.98, 91.56} = 2.07$, $p > .05$). As seen in Figure 6, error rates are also reasonably consistent across the eight scale factors.
+
+
+
+Figure 6: Error rate for each SF condition. Error bars show 95% CI.
+
+As with TP, we used a non-inferiority test on error rate to determine if each SF yielded error rates no worse than 1-to-1 mapping. Using the aforementioned indifference zone limit of 6.66%, each SF must have an error rate difference no higher than 6.66% compared to the SF of 1 (i.e., 1-to-1 mapping). The results of this analysis, with Bonferroni corrections, are seen in Table 4. No SF offered a worse error rate than the SF of 1. Results indicate that the error rate was also unaffected by SF value, also in line with our hypothesis, meaning target miss rates were constant regardless of SF.
+
+Table 4: Mean error rate differences and non-inferiority results.
+
+| SF Pairs | Mean Diff. | 1-tailed 95% CI | SD Error | Non-inferiority Comparison |
| --- | --- | --- | --- | --- |
| 1-1.2 | 0.278 | < 1.708 | 0.405 | 1.708 < 6.66 |
| 1-1.4 | -0.250 | < 1.671 | 0.544 | 1.671 < 6.66 |
| 1-1.6 | 0.111 | < 2.572 | 0.697 | 2.572 < 6.66 |
| 1-1.8 | -1.139 | < 1.636 | 0.786 | 1.636 < 6.66 |
| 1-2.0 | -0.806 | < 1.213 | 0.572 | 1.213 < 6.66 |
| 1-2.2 | -0.639 | < 1.595 | 0.633 | 1.595 < 6.66 |
| 1-2.4 | -1.417 | < 0.189 | 0.455 | 0.189 < 6.66 |
+
+### 5.4 Movement Time
+
+RM-ANOVA revealed significant differences in movement time. We also note that, as Kohli et al. suggest [39], it is not clear what a reasonable indifference zone for movement time should be.
+
+Mauchly's test revealed that the assumption of sphericity was violated ($\chi^2(27) = 51.33$, $p = .004$), so we applied the Greenhouse-Geisser correction ($\varepsilon = .56$). There was a significant main effect of scale factor on movement time ($F_{3.98, 91.65} = 13.92$, $p < .001$, $\eta_p^2 = .37$, power $= 1.00$, $\alpha = .05$). Post hoc results showing pairwise differences and mean movement times are seen in Figure 7. Results indicate that higher SF values yielded higher mean movement times, suggesting participants moved more slowly with higher SFs. Our hypothesis was thus not supported for movement time.
+
+
+
+Figure 7: Mean MT for each SF. Error bars show 95% CI.
+
+### 5.5 Target Entry Count
+
+As seen in Figure 8, higher SFs yielded slightly higher target entry counts, suggesting participants had more difficulty getting the cursor into the target before selection. The assumption of sphericity was not violated, so results were analyzed using RM-ANOVA as usual. There was a significant main effect of SF on target entry count ($F_{7, 161} = 17.41$, $p < .001$, $\eta_p^2 = .43$, power $= 1.00$, $\alpha = .05$). Higher SFs resulted in more target re-entries, on average, before a correct selection. Our hypothesis was not supported for target entry count.
+
+
+
+Figure 8: Mean target entry count for SFs. Error bars show 95% CI.
+
+Several participants mentioned having difficulty selecting the smallest targets. As a result, we also analyzed whether target entry count was affected by target size using RM-ANOVA. According to Mauchly's test, the assumption of sphericity was not violated. We found a significant main effect of target width on target entry count ($F_{4, 92} = 13.52$, $p < .001$, $\eta_p^2 = .37$, power $= 1.00$, $\alpha = .05$). Figure 9 depicts the mean target entry count for different target widths. Our analysis did not find a significant interaction effect between SF and target width ($F_{28, 644} = 1.33$, $p > .05$). Results suggest smaller targets were harder to hit upon initial entry and on average required more re-entries to select.
+
+
+
+Figure 9: Mean target entry count across the target width. Error bars show 95% CI.
+
+Figure 10: Device Assessment Questionnaire results (general comfort; neck, shoulder, arm, wrist, and finger fatigue; operation speed; accurate pointing; physical effort; mental effort; operation smoothness; actuation force). Label numbers indicate the percentage of participants choosing each answer.
+
+### 5.6 Fitts' Law Analysis
+
+Fitts' law is commonly used as a predictive model of movement time. We performed a linear regression of $MT$ onto $ID$. Figure 11 depicts the relationship between $MT$ and $ID$.
+
+
+
+Figure 11: Linear regression of $MT$ on $ID$, for both the presented $ID$ and the scaled $ID$ (applying the scale factor to $A$ when calculating $ID$).
+
+As is often the case in Fitts' law studies, there is a strong linear relationship between $MT$ and $ID$. We performed linear regression for both conventional $ID$ (i.e., the 10 IDs listed in Section 4.5) and scaled $ID$. Scaled $ID$ was calculated by applying the SF value to target amplitudes when computing $ID$. Applying the scale factor in this way is more representative of the task participants perceived themselves to be performing. Interestingly, the Fitts' law regression using scaled $ID$ yielded a better-fitting model.
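As a sketch of this analysis, the presented and scaled regressions differ only in whether the SF multiplies the amplitude $A$ before computing $ID = \log_2(A/W + 1)$. The amplitude, width, and movement-time values below are hypothetical placeholders, not the study's data:

```python
import numpy as np

def fitts_id(A, W):
    # Shannon formulation of the index of difficulty, in bits
    return np.log2(np.asarray(A) / np.asarray(W) + 1)

# Hypothetical per-condition means: amplitudes (mm), widths (mm), MTs (s)
A = np.array([200.0, 200.0, 300.0, 300.0])
W = np.array([20.0, 40.0, 20.0, 40.0])
MT = np.array([0.85, 0.70, 0.95, 0.80])
SF = 1.6  # example scale factor

id_presented = fitts_id(A, W)
id_scaled = fitts_id(SF * A, W)  # scale factor applied to amplitude

for ID in (id_presented, id_scaled):
    slope, intercept = np.polyfit(ID, MT, 1)  # MT = intercept + slope * ID
    r = np.corrcoef(ID, MT)[0, 1]
    print(f"MT = {intercept:.3f} + {slope:.3f} * ID, R^2 = {r**2:.3f}")
```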
+
+### 5.7 Effective Width Analysis
+
+To further explore why throughput was constant despite increasing movement time across scale factors, we also analyzed effective width. Note that throughput is based on effective width, which reflects the magnitude of errors rather than the error rate. This explains how throughput can stay constant across SFs even while movement time significantly increases with SF (and while error rate is also constant). Misses farther from the target push effective width upward, while selections closer to the target centre lower it. Thus, we examined how mean $W_e$ changed under different SF conditions. Based on Figure 12, $W_e$ appears to decrease with higher SFs. This indicates that participants were making more accurate selections with higher SFs, likely yielding the higher movement times noted above.
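A sketch of the effective-width computation, following the common ISO 9241-9 convention $W_e = 4.133\,\sigma$ over selection deviations along the task axis (function and variable names are ours, for illustration):

```python
import numpy as np

def effective_throughput(A, MT, endpoints_dx):
    """Effective-width throughput, per common ISO 9241-9 conventions.

    A: target amplitude; MT: mean movement time (s);
    endpoints_dx: selection deviations from the target centre along the
    task axis. A tighter endpoint spread lowers We and raises throughput.
    """
    We = 4.133 * np.std(endpoints_dx, ddof=1)  # 4.133 covers ~96% of hits
    IDe = np.log2(A / We + 1)                  # effective index of difficulty
    return We, IDe / MT                        # throughput in bits/s
```

This makes the trade-off above concrete: if higher SFs slow movement slightly ($MT$ up) but tighten the endpoint distribution ($W_e$ down, so $ID_e$ up), throughput can remain flat.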
+
+### 5.8 Post-Questionnaire
+
+We used the device assessment questionnaire from ISO 9241-9 [22] to evaluate the experience of using a tablet and stylus with WVS.
+
+
+
+Figure 12: $W_e$ for each SF. Error bars show 95% CI.
+
+Participants rated each item from 1 (lowest) to 5 (highest); see Figure 10. We did not compare results across different SFs, since participants were unlikely to notice differences between SFs, and doing so would have required them to complete eight lengthy questionnaires instead of one.
+
+## 6 Discussion
+
+Results indicate that WVS had significant effects only on $MT$ and target entry count. Notably, the largest SFs were significantly different. For error rate and $TP$, non-inferiority testing indicated that WVS is no worse than the 1-to-1 mapping. Target entry count was affected by target size, which was unsurprising but highlights some difficulty in accurately selecting the smallest targets.
+
+### 6.1 General Discussion
+
+Our results are in line with past work and suffer from some of the same limitations [10, 39]. The most important result from our study is that throughput is relatively stable when using WVS, especially for modest scale factors, e.g., 1.2, 1.4 and 1.6. One explanation could be that the same muscles are used across all selections under different SFs. Pointing performance is known to be affected by the muscle used to reach the target [74]. This is a promising finding, as it suggests that WVS can be applied in tablet-based VR to provide a larger virtual tactile proxy than is otherwise available. Similarly, as indicated in our results, movement time is significantly (if only slightly) worse with higher SFs, especially at 2.2 and 2.4.
+
+Overall, our results show lower throughput with a tracked stylus compared to redirected touching [39]. Lower throughput and error rate and higher movement time in our study are potentially due to the different warping techniques we used. Other factors that could contribute to this difference are the different hardware setup, for instance, input using a stylus rather than fingers, and the position/orientation of the tablet in our study. Notably, our throughput scores - regardless of scale factor - are in line with previous work using a 3D tracked stylus, which is a closer comparison point anyway [67]. Also, the tablet placed on the table caused participants to experience some neck fatigue, as indicated in the post-questionnaire results (see Figure 10) and participant comments. 54.2% of the participants reported high neck fatigue. Our participants also noted it was hard for them to select the smaller targets. One commented, "There were some times where selecting the smaller circles was difficult." Such comments were not unexpected and are supported by the significant differences found in our analysis, as shown in Figure 9. One other contributor to this difficulty could be the limited screen resolution in the Vive HMD.
+
+Higher scale factors yielded slightly higher movement times and target entries than lower scale factors. A potential reason for this is the increase in virtual cursor movement speed caused by scaling. This increase in cursor speed would make fast, accurate movement more challenging, particularly in precisely selecting targets. This kind of effect has been noted before as a "U-shaped" curve for coarse/fine positioning times under different CD gain levels [2]. Also, since users were not able to see the stylus in VR, they likely moved more slowly to keep track of the cursor.
+
+On the other hand, as seen in Figure 5, throughput is almost flat across scale factors. Throughput characterizes the speed/accuracy trade-off in selection tasks. For throughput to be flat across scale factors, in light of increasing movement times, accuracy must have been better with higher scale factors. In our error rate analysis, we found non-inferiority between the 1-to-1 mapping and all other scale factors, suggesting error rates were at least not worse with higher scale factors. However, effective width (from which throughput is derived) is not based on error rate, but rather on the distribution of selection coordinates. In other words, the distance of the selection coordinates to the target centre influences $W_e$. Participants may miss targets at about the same rate, but miss "closer" to the target (which yields lower $W_e$). Alternatively, they may hit closer to the centre of the target (which also yields lower $W_e$). This is confirmed in our $W_e$ analysis (Section 5.7): $W_e$ became smaller with higher scale factors, which is why throughput was constant regardless of SF. With higher SF, the cursor moved faster. Participants likely slowed their operation speed slightly to compensate for the higher cursor speed, reflected in higher $MT$ for higher scale factors (Figure 7). By compensating (i.e., slowing down), participants were more readily able to precisely select targets (yielding lower-magnitude misses, or selections closer to the target centre), resulting in lower $W_e$ and thus maintaining throughput.
+
+Based on our observations, most target misses were due to loss of stylus tracking: participants moved their hands closer to regain tracking and accidentally touched the tracking surface. Some participants commented on this; one reported: "my errors were false selection during dragging my hand to the desired point." Also, since we were warping the virtual space, moving the stylus even slightly could cause the virtual cursor to move outside smaller targets (increasing target entry count). Participants held the stylus at an acute angle relative to the tablet surface rather than perpendicular to it; reaching for and selecting smaller targets from the hovering state could thus cause the warped virtual cursor to fall outside the target even with slight movements in either direction. Participants also held the stylus differently, i.e., in different positions and with different grips.
+
+Participants found WVS easy to use, despite 41.7% finding accurate pointing difficult. One participant reported: "Overall was easy to select the targets." Another participant mentioned that "...would use again. Was more usable when the in-world representation of the tablet was larger, but it was still easy to select small targets on the small display." Half of our 24 participants reported the device to be very easy to use (Figure 10).
+
+Based on comments, participants liked that they could use a larger touch surface in VR despite arm, wrist and finger fatigue, as indicated in Figure 10. One participant commented: "I really like the idea of using smaller physical screens to choose on larger area in VR, hope it will become common input option for VR." Only three out of 24 participants reported they did not notice any change in their cursor movement speed while in VR. One person mentioned: "I was able to observe the warping but not able to compare it to earlier trials. It was a smooth experience."
+
+### 6.2 Limitations
+
+The main limitation of our study is the indifference zones used for the non-inferiority analysis. More studies are required to determine valid indifference zones for performance in Fitts' law studies. In the presented work, we used the same indifference zones as Kohli et al. [39] for the sake of consistency and to facilitate comparison. As mentioned in their work, some previous studies have found significant differences between conditions within the chosen indifference zones. Although we demonstrated non-inferiority, different indifference zones or statistical analysis could yield different results. Another limitation is that our hardware setup did not support 6DOF stylus tracking. We believe our findings can still be useful and can help VR researchers and system designers.
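A minimal sketch of the paired non-inferiority test used in such analyses, stated as a one-sided confidence bound: a condition is non-inferior to the control if the upper bound on the mean performance difference stays below the indifference margin. The margin and critical t value in the example below are placeholders, not the study's exact indifference zones:

```python
import numpy as np
from math import sqrt

def noninferior(control, treatment, margin, t_crit):
    """Paired non-inferiority test for a "lower is better" measure
    (e.g. error rate). Non-inferior if the one-sided upper confidence
    bound on the mean difference (treatment - control) is below `margin`.

    t_crit: one-sided critical t value for n-1 df
    (e.g. about 1.714 for n = 24 at alpha = .05).
    """
    d = np.asarray(treatment) - np.asarray(control)
    upper = d.mean() + t_crit * d.std(ddof=1) / sqrt(len(d))
    return bool(upper < margin)
```

Note that a different margin can flip the conclusion, which is exactly the sensitivity discussed above.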
+
+## 7 CONCLUSIONS
+
+We introduced Warped Virtual Surfaces, a technique that scales input space with a tracked tablet, yielding larger virtual tablets than are physically available. We evaluated the effects of surface warping on task performance using a tablet and stylus in VR. In terms of $TP$ and error rate, WVS yielded consistent performance regardless of SF. Non-inferiority statistical tests showed that $TP$ and error rate were statistically similar between all tested SFs and the "control" condition, i.e., the 1-to-1 mapping. However, for movement time and target entry count, we found small but significant differences, particularly for larger SFs, in line with previous work [10, 39].
+
+Our proposed method can be used by artists and designers interested in immersive workflows, or for VR design sessions. Our approach uses cheap, readily available hardware, enabling users with a fixed-size physical panel or drawing tablet to get a bigger virtual panel without extra hardware or performance cost. WVS could be useful with small, lightweight arm-mounted touchscreens to facilitate tactile interaction with 3D menus, or in applications similar to PIP and WIM [61, 63]. WVS can also complement other tablet- and stylus-based interaction techniques for VR, such as the HARP system [44] and the Virtual Notepad [53]. In-air drawing applications could also benefit from WVS, as could interaction techniques like snappable panels or surfaces. WVS could potentially help with fatigue, but further experiments are needed to determine to what extent. Other haptic devices with limited interaction space, like the Phantom, could also benefit from WVS by expanding their virtual reach.
+
+We conclude that our technique shows promise as a method to virtually extend physical surfaces in VR. Results suggest minimal performance impact of WVS: TP was flat across all SFs, and despite small differences in $MT$, users appear to have compensated with a slight accuracy improvement, yielding constant TP.
+
+Future work on Warped Virtual Surfaces will involve a follow-up study across multiple scale factors and multiple tablet sizes. We will use a subset of scale factors from the current study, with physically smaller tablets than the one used in this study. We will also employ a 3D-tracked stylus (e.g., Logitech VR Ink), scaling 3D movement rather than just planar movement.
+
+## ACKNOWLEDGEMENTS
+
+We would like to thank Kyle Johnsen and A. J. Tuttle for sharing their tablet driver source code. We also thank our participants and the researchers whose work inspired this research. This research was supported by NSERC.
+
+## REFERENCES
+
+[1] P. Abtahi and S. Follmer, "Visuo-Haptic Illusions for Improving the Perceived Performance of Shape Displays," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2018, pp. 1-13.
+
+[2] J. Accot and S. Zhai, "Scale effects in steering law tasks," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2001, pp. 1-8.
+
+[3] J. Accot and S. Zhai, "Beyond Fitts' law: models for trajectory-based HCI tasks," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 1997, pp. 295-302.
+
+[4] M. Achibet, A. Girard, A. Talvas, M. Marchal, and A. Lecuyer, "Elastic-Arm: Human-scale passive haptic feedback for augmenting interaction and perception in virtual environments," in Proceedings of IEEE Conference on Virtual Reality, 2015, pp. 63-68.
+
+[5] L. Y. Arnaut and J. S. Greenstein, "Optimizing the Touch Tablet: The Effects of Control-Display Gain and Method of Cursor Control," Human Factors, vol. 28, no. 6, pp. 717-726, 1986.
+
+[6] M. Azmandian, M. Hancock, H. Benko, E. Ofek, and A. D. Wilson, "Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2016, pp. 1968-1979.
+
+[7] J. Z. Bakdash, J. S. Augustyn, and D. R. Proffit, "Large displays enhance spatial knowledge of a virtual environment," in Proceedings of the ACM Conference on Human Factors in Computing System-APGV, 2006, pp. 59-62.
+
+[8] F. Barbagli, K. Salisbury, C. Ho, C. Spence, and H. Z. Tan, "Haptic discrimination of force direction and the influence of visual information," ACM Transactions on Applied Perception, vol. 3, no. 2, pp. 125-135, 2006.
+
+[9] R. J. van Beers, A. C. Sittig, and J. J. Denier van der Gon, "The precision of proprioceptive position sense," Experimental Brain Research, vol. 122, no. 4, pp. 367-377, 1998.
+
+[10] R. Blanch, Y. Guiard, and M. Beaudouin-Lafon, "Semantic pointing," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2004, pp. 519-526.
+
+[11] D. A. Bowman, C. J. Rhoton, and M. S. Pinho, "Text Input Techniques for Immersive Virtual Environments: An Empirical Comparison," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 46, no. 26, pp. 2154-2158, 2002.
+
+[12] G. Browning and R. J. Teather, "Screen scaling: Effects of screen scale on moving target selection," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2014, pp. 2053-2058.
+
+[13] E. Burns, S. Razzaque, A. T. Panter, M. C. Whitton, M. R. McCallus, and F. P. Brooks, "The Hand is Slower than the Eye: A Quantitative Exploration of Visual Dominance over Proprioception," in Proceedings of the IEEE Conference on Virtual Reality, 2005, pp. 3- 10.
+
+[14] E. Burns, S. Razzaque, A. T. Panter, M. C. Whitton, M. R. McCallus, and F. P. Brooks, "The Hand Is More Easily Fooled than the Eye: Users Are More Sensitive to Visual Interpenetration than to Visual-Proprioceptive Discrepancy," Presence, vol. 15, no. 1, 2006.
+
+[15] E. Burns, S. Razzaque, M. C. Whitton, and F. P. Brooks, "MACBETH: The avatar which I see before me and its movement toward my hand," in Proceedings of the IEEE Conference on Virtual Reality, 2007, pp. 295-296.
+
+[16] C. Carvalheiro, R. Nóbrega, H. da Silva, and R. Rodrigues, "User Redirection and Direct Haptics in Virtual Environments," in Proceedings of the ACM Conference on Human Factors in Computing Systems-Multimedia, 2016, pp. 1146-1155.
+
+[17] O. Chapuis and P. Dragicevic, "Effects of motor scale, visual scale, and quantization on small target acquisition difficulty," ACM Transactions on Computer-Human Interaction, vol. 18, no. 3, pp. 1-32, 2011.
+
+[18] T.-T. Chen, C.-H. Hsu, C.-H. Chung, Y.-S. Wang, and S. V. Babu, "iVRNote: Design, Creation and Evaluation of an Interactive Note-Taking Interface for Study and Reflection in VR Learning Environments," in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, 2019, pp. 172-180.
+
+[19] L.-P. Cheng, E. Ofek, C. Holz, H. Benko, and A. D. Wilson, "Sparse Haptic Proxy: Touch Feedback in Virtual Environments Using a General Passive Prop," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2017, pp. 3718-3728.
+
+[20] A. Cockburn and P. Brock, "Human on-line response to visual and motor target expansion," in Proceedings of Graphics Interface Conference-GI, 2006, pp. 81-87.
+
+[21] A. M. Colman, A dictionary of psychology. Oxford University Press, USA, 2015.
+
+[22] S. A. Douglas, A. E. Kirkpatrick, and I. Scott MacKenzie, "Testing pointing device performance and user assessment with the ISO 9241, Part 9 standard," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 1999, pp. 215-222.
+
+[23] T. Feuchtner and J. Müller, "Ownershift: Facilitating Overhead Interaction in Virtual Reality with an Ownership-Preserving Hand Space Shift," in Proceedings of the ACM Conference on Human Factors in Computing Systems-UIST, 2018, pp. 31-43.
+
+[24] P. M. Fitts, "The information capacity of the human motor system in controlling the amplitude of movement," Experimental Psychology, vol. 47, no. 6, pp. 381-391, 1954.
+
+[25] A. Franzluebbers and K. Johnsen, "Performance Benefits of High-Fidelity Passive Haptic Feedback in Virtual Reality Training," in Proceedings of the ACM Conference on Human Factors in Computing Systems-SUI, 2018, pp. 16-24.
+
+[26] J. J. Gibson, "Adaptation, after-effect and contrast in the perception of curved lines," Experimental Psychology, vol. 16, no. 1, pp. 1-31, 1933.
+
+[27] E. J. Gonzalez and S. Follmer, "Investigating the detection of bimanual haptic retargeting in virtual reality," in Proceedings of the ACM Conference on Human Factors in Computing Systems-VRST, 2019, pp. 1-5.
+
+[28] D. T. Han, M. Suhail, and E. D. Ragan, "Evaluating remapped physical reach for hand interactions with passive haptics in virtual reality," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 4, pp. 1467-1476, 2018.
+
+[29] J. P. Hansen, V. Rajanna, I. S. MacKenzie, and P. Bækgaard, "A Fitts' Law Study of Click and Dwell Interaction by Gaze, Head and Mouse with a Head-Mounted Display," in Proceedings of the Workshop on Communication by Gaze Interaction, 2018, pp. 1-5.
+
+[30] H. B. Helbig and M. O. Ernst, "Optimal integration of shape information from vision and touch," Experimental Brain Research, vol. 179, no. 4, pp. 595-606, 2007.
+
+[31] H. G. Hoffman, "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments," in Proceedings of IEEE Conference on Virtual Reality, 1998, pp. 59-63.
+
+[32] N. P. Holmes and C. Spence, "Visual bias of unseen hand position with a mirror: spatial and temporal factors," Experimental Brain Research, vol. 166, pp. 489-497, 2005.
+
+[33] J. P. Hourcade and N. E. Bullock-Rest, "How small can you go? Analyzing the effect of visual angle in pointing tasks," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2012, pp. 213-216.
+
+[34] B. E. Insko, "Passive Haptics Significantly Enhances Virtual Environments," University of North Carolina at Chapel Hill, 2001.
+
+[35] International Organization for Standardization. ISO 9241-9:2000, "Ergonomic requirements for office work with visual display terminals (VDTs)-Part 9: Requirements for non-keyboard input devices"
+
+[36] I. Kitahara, M. Nakahara, and Y. Ohta, "Sensory Properties in Fusion of Visual/Haptic Stimuli Using Mixed Reality," in Advances in Haptics, 2010, pp. 565-583.
+
+[37] R. L. Klatzky, J. M. Loomis, A. C. Beall, S. S. Chance, and R. G. Golledge, "Spatial Updating of Self-Position and Orientation During Real, Imagined, and Virtual Locomotion," Psychological Science, vol. 9, no. 4, pp. 293-298, 1998.
+
+[38] L. Kohli, "Redirected touching: Warping space to remap passive haptics," in Proceedings of the IEEE Conference on 3D User Interfaces, 2010, pp. 129-130.
+
+[39] L. Kohli, M. C. Whitton, and F. P. Brooks, "Redirected touching: The effect of warping space on task performance," in Proceedings of the IEEE Conference on 3D User Interfaces, 2012, pp. 105-112.
+
+[40] L. Kohli, M. C. Whitton, and F. P. Brooks, "Redirected Touching: Training and adaptation in warped virtual spaces," in Proceedings of the IEEE Conference on 3D User Interfaces, 2013, pp. 79-86.
+
+[41] A. Kovacs, J. Buchanan, and C. Shea, "Perceptual influences on Fitts' law," Experimental Brain Research, vol. 190, pp. 99-103, 2008.
+
+[42] A. Krekhov, K. Emmerich, P. Bergmann, S. Cmentowski, and J. Krüger, "Self-Transforming Controllers for Virtual Reality First Person Shooters," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI PLAY, 2017, pp. 517-529.
+
+[43] Y. Lee, I. Jang, and D. Lee, "Enlarging just noticeable differences of visual-proprioceptive conflict in VR using haptic feedback," in Proceedings of IEEE Conference on World Haptics, 2015, pp. 19-24.
+
+[44] R. W. Lindeman, J. L. Sibert, and J. K. Hahn, "Towards Usable VR: An Empirical Study of User Interfaces for Immersive Virtual Environments," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 1999, pp. 64-71.
+
+[45] I. S. MacKenzie, "Fitts' Law as a Research and Design Tool in Human-Computer Interaction," Human-Computer Interaction, vol. 7, no. 1, pp. 91-139, 1992.
+
+[46] I. S. MacKenzie and P. Isokoski, "Fitts' throughput and the speed-accuracy tradeoff," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2008, pp. 1633-1636.
+
+[47] I. S. MacKenzie, T. Kauppinen, and M. Silfverberg, "Accuracy measures for evaluating computer pointing devices," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2001, pp. 9-16.
+
+[48] I. S. MacKenzie and S. Riddersma, "Effects of output display and control-display gain on human performance in interactive systems," Behaviour & Information Technology, vol. 13, no. 5, pp. 328-337, 1994.
+
+[49] Y. Matsuoka, S. J. Allin, and R. L. Klatzky, "The tolerance for visual feedback distortions in a virtual environment," Physiology and Behavior, vol. 77, no. 4-5, pp. 651-655, 2002.
+
+[50] B. J. Matthews, B. H. Thomas, S. Von Itzstein, and R. T. Smith, "Remapped Physical-Virtual Interfaces with Bimanual Haptic Retargeting," in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, 2019, pp. 19-27.
+
+[51] J. C. McClelland, R. J. Teather, and A. Girouard, "Haptobend: shape-changing passive haptic feedback in virtual reality," in Proceedings of the ACM Conference on Human Factors in Computing Systems-SUI, 2017, pp. 82-90.
+
+[52] R. A. Montano Murillo, S. Subramanian, and D. Martinez Plasencia, "Erg-O: Ergonomic Optimization of Immersive Virtual Environments," in Proceedings of the ACM Conference on Human Factors in Computing Systems-UIST, 2017, pp. 759-771.
+
+[53] I. Poupyrev, N. Tomokazu, and S. Weghorst, "Virtual Notepad: handwriting in immersive VR," in Proceedings of the IEEE Conference on Virtual Reality, 1998, pp. 126-132.
+
+[54] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa, "The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR," in Proceedings of the ACM Conference on Human Factors in Computing Systems-UIST, 1996, pp. 79-80.
+
+[55] S. Razzaque, Z. Kohn, and M. C. Whitton, "Redirected Walking," University of North Carolina at Chapel Hill, 2001.
+
+[56] G. Robles-De-La-Torre and V. Hayward, "Force can overcome object geometry in the perception of shape through active touch," Nature, vol. 412, no. 6845, pp. 445-448, 2001.
+
+[57] I. Rock and J. Victor, "Vision and touch: An experimentally created conflict between the two senses," Science, vol. 143, no. 3606, pp. 594-596, 1964.
+
+[58] S. Wellek, Testing Statistical Hypotheses of Equivalence and Noninferiority, 2nd ed. Chapman and Hall/CRC, 2010.
+
+[59] R. W. Soukoreff and I. S. MacKenzie, "Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts' law research in HCI," International Journal of Human-Computer Studies, vol. 61, pp. 751-789, 2004.
+
+[60] M. A. Srinivasan, Beauregard, and Brock, "The impact of visual information on the haptic perception of stiffness in virtual environments," Proceedings of ASME Dynamic Systems and Control Div., vol. 58, pp. 555-559, 1996.
+
+[61] R. Stoakley, M. J. Conway, and R. Pausch, "Virtual Reality on a WIM: Interactive Worlds in Miniature," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 1995, pp. 265-272.
+
+[62] M. Suhail, S. P. Sargunam, D. T. Han, and E. D. Ragan, "Redirected reach in virtual reality: Enabling natural hand interaction at multiple virtual locations with passive haptics," in Proceedings of IEEE Conference on 3D User Interfaces, 2017, pp. 245-246.
+
+[63] Z. Szalavári and M. Gervautz, "The personal interaction panel - A two-handed interface for augmented reality," Computer Graphics Forum, vol. 16, no. 3, 1997.
+
+[64] D. S. Tan, D. Gergle, P. G. Scupelli, and R. Pausch, "Physically large displays improve path integration in 3D virtual navigation tasks," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2004, pp. 439-446.
+
+[65] D. S. Tan, D. Gergle, P. Scupelli, and R. Pausch, "Physically large displays improve performance on spatial tasks," ACM Transactions on Computer-Human Interaction, vol. 13, no. 1, pp. 71-99, 2006.
+
+[66] R. J. Teather, D. Natapov, and M. Jenkin, "Evaluating haptic feedback in virtual environments using ISO 9241-9," in Proceedings of IEEE Conference on Virtual Reality, 2010, pp. 307-308.
+
+[67] R. J. Teather and W. Stuerzlinger, "Pointing at 3D targets in a stereo head-tracked virtual environment," in IEEE Conference on 3D User Interfaces, 2011, pp. 87-94.
+
+[68] A. J. Tuttle, S. Savadatti, and K. Johnsen, "Facilitating Collaborative Engineering Analysis Problem Solving in Immersive Virtual Reality," in Proceedings of the American Society for Engineering Education Conference, 2019.
+
+[69] D. Valkov, A. Giesler, and K. H. Hinrichs, "Imperceptible depth shifts for touch interaction with stereoscopic objects," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 2014, pp. 227-236.
+
+[70] Y. Wang, C. Yu, Y. Qin, D. Li, and Y. Shi, "Exploring the effect of display size on pointing performance," in Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, 2013, pp. 389-392.
+
+[71] J. Yang, H. Horii, A. Thayer, and R. Ballagas, "VR Grabbers: Ungrounded Haptic Retargeting for Precision Grabbing Tools," in Proceedings of the ACM Conference on Human Factors in Computing Systems-UIST, 2018, pp. 889-899.
+
+[72] A. Zenner and A. Kruger, "Estimating Detection Thresholds for Desktop-Scale Hand Redirection in Virtual Reality," in Proceedings of IEEE Conference on Virtual Reality and 3D User Interfaces, 2019, pp. 47-55.
+
+[73] A. Zenner and A. Kruger, "Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality," IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 4, pp. 1285-1294, 2017.
+
+[74] S. Zhai, P. Milgram, and W. Buxton, "The influence of muscle groups on performance of multiple degree-of-freedom input," in Proceedings of the ACM Conference on Human Factors in Computing Systems-CHI, 1996, pp. 308-315.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7d95780489633a4e9376299da4738a036eab2379
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/w5OS2As0_M/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,416 @@
+§ SELECTION PERFORMANCE USING A SCALED VIRTUAL STYLUS CURSOR IN VR
+
+Seyed Amir Ahmad Didehkhorshid*
+
+Carleton University, Ottawa, Canada
+
+Robert J. Teather**
+
+Carleton University, Ottawa, Canada
+
+§ ABSTRACT
+
+We propose a surface warping technique we call Warped Virtual Surfaces (WVS). WVS is similar to applying CD gain to a mouse cursor on a screen, and is used with traditionally $1:1$ input devices, in our case a tablet and stylus, for use with VR head-mounted displays (HMDs). WVS allows users to interact with arbitrarily large virtual panels in VR while getting the benefits of passive haptic feedback from a fixed-size physical panel. To determine the extent to which WVS affects user performance, we conducted an experiment with 24 participants using a Fitts' law reciprocal tapping task to compare different scale factors. Results indicate a significant difference in movement time for large scale factors. However, for throughput (ranging from 3.35 - 3.47 bps) and error rate (ranging from 3.6 - 5.4%), our analysis did not find a significant difference between scale factors. Using non-inferiority statistical testing (a form of equivalence testing), we show that performance in terms of throughput and error rate for large scale factors is no worse than a 1-to-1 mapping. Our results suggest WVS is a promising way of providing large tactile surfaces in VR using small physical surfaces, with little impact on user performance.
+
+Index Terms: Human-centered computing $\sim$ Human-computer interaction (HCI) $\sim$ Interaction techniques $\sim$ Pointing; Human-centered computing $\sim$ Human-computer interaction (HCI) $\sim$ Interaction paradigms $\sim$ Virtual reality
+
+§ 1 INTRODUCTION
+
+There has been a recent surge in demand for virtual reality (VR) in entertainment, education, and design applications. Current VR hardware is self-contained, wireless, lighter, and offers high visual fidelity. Development tools (e.g., Unity3D) are also becoming more accessible. These factors have created the perfect storm, paving the way for a new wave of innovations and creativity in immersive VR technologies. Despite these advances, there remain challenging research problems to be solved. General-purpose haptics is among these big problems. Past research has shown that haptic feedback significantly increases the quality of a VR experience [25, 31, 34, 51]. However, designing interaction techniques that support realistic haptic feedback in VR is still problematic.
+
+Past studies have demonstrated the effectiveness of planar surfaces as a semi-general purpose prop in VR. In particular, the use of tablets to provide tactile surfaces in VR has been extensively studied [11, 18, 44, 53, 63, 68]. The Personal Interaction Panel (PIP) [63], Lindeman et al.'s HARP system [44], the Virtual Notepad by Poupyrev et al. [53], and Worlds In Miniature (WIM) [61] all used tracked panels for VR interaction. Other studies used a tablet and stylus for text input in VR [11].
+
+Several researchers have investigated the use of redirection and retargeting techniques for VR interaction. This relatively new class of perceptual illusion-based interaction techniques includes redirected walking (RDW) [55], haptic retargeting [6], and redirected touch [38]. Except for Yang et al.'s VRGrabber technique, which used retargeting with grabbing tools [71], all other techniques apply warping or redirection to the entire body (e.g., with RDW) or to a body part such as the hand or fingers. None of the proposed interaction techniques has so far been applied to a planar surface, despite the well-known benefits of using these in VR. Although there are studies on bimanual retargeting [27, 50] or unimanual redirected touching that used planar surfaces in their experiments [38-40], none studied the planar surface as an input method. Large displays are known to offer performance benefits for spatial tasks, spatial knowledge, and navigation [7, 64, 65]. Hence, we propose to use space warping to extend the virtual interaction panel surface in VR.
+
+Graphics Interface Conference 2020, 28-29 May. Copyright held by authors. Permission granted to CHCCS/SCDHM to publish in print and digital form, and ACM to publish electronically.
+
+We propose a technique we call Warped Virtual Surfaces (WVS). WVS combines the ideas behind RDW in expanding the tracking space, with haptic retargeting to use perceptual illusions and body warping. With WVS, users can interact with an arbitrarily large tactile virtual surface in HMD-based VR. Haptic feedback is provided by a fixed-size tracked physical tablet that uses a stylus for input. The technique applies a scale factor (SF) to move the virtual cursor beyond the tablet's active tracking area, making the stylus behave like an indirect input device, similar to a mouse and akin to changing the control-display (CD) gain [5, 48].
+
+Since users cannot see their physical hands or the stylus when using HMDs, they do not notice the decoupling between the physical stylus and virtual cursor positions. In non-VR setups, this could be distracting and confusing. Different SFs cause the cursor to move further with less physical movement on the tablet, much like how CD gain works for a mouse cursor. However, the user perceives the virtual contact on the surface in VR and thus receives appropriate tactile cues from physically touching the tablet with the stylus. WVS creates the illusion of an arbitrarily large tactile surface in VR while keeping the users' motor space consistent during interaction on the tablet. WVS can theoretically be employed with any other tablet-based VR technique, offering tactile feedback with a virtually larger tablet than what is available.
+
+We conducted an experiment to determine how far we can push this illusion and expand the interaction surface without affecting user performance. To evaluate user performance, we employed a Fitts' law reciprocal tapping task [24, 66]. To our knowledge, our experiment is the first to investigate the effects of surface warping using a stylus on user performance in selection tasks, and to apply CD gain with a tablet and stylus input modality in VR.
+
+## 2 Related Work
+
+### 2.1 Visual Illusions
+
+Studies on the human brain reveal the dominance of vision over other senses when there are sensory conflicts [9, 16, 21, 26, 28, 57, 60]. For instance, Gibson showed that a flat surface is perceived as curved when wearing distortion glasses while moving a hand along a straight line [26]. VR and HCI researchers took advantage of this visual dominance to enhance selection in VR [4, 52, 54, 69], yielding several novel interaction techniques for HMD-based VR.
+
+Many such techniques rely on the inaccuracy inherent to our proprioceptive and vestibular senses [9]. Burns et al. provided evidence of visual dominance over proprioception in a series of studies and found that people tend to believe their hand is where they see it [13-15]. Klatzky et al. showed that vestibular cues are also dominated by vision [37]. These ideas have been applied in various HCI contexts. For example, Zenner et al. used an internal weight-shifting mechanism in a passive haptic proxy, Shifty, to enhance the perception of virtual object length and thickness [73]. Similarly, Krekhov et al. used weight perception illusions in a self-transforming controller to enhance VR player experience [42]. McClelland et al. introduced Haptobend, a bendable device that approximates objects with simple geometry, such as tubes and flat surfaces, with a single physical prop [51].
+
+* amir.didehkhorshid@carleton.ca
+
+** rob.teather@carleton.ca
+
+Table 1. Summary of key studies on perceptual illusion techniques in VR.
+
+| 1st Author/Year | Description | Evaluation Method | Redirection on | Main Findings | Notes |
+| --- | --- | --- | --- | --- | --- |
+| Kohli/2010 [38, 39, 40] | Redirected touch under warped spaces. | Fitts' law | Fingers | Discrepant conditions are no worse than 1-to-1 mapping. | Also evaluated performance & adaptation under warped spaces. |
+| Azmandian/2016 [6] | Repurposing a passive haptic prop using body, world & hybrid space warping. | Block stacking | Hands | Hybrid warping scored the highest presence score. | Recommended encouraging slow hand movements & taking advantage of visual dominance. |
+| Carvalheiro/2016 [16] | Haptic interaction system for VR & evaluating user awareness regarding the space warping. | Touch & moving objects | Hands | Average accuracy error of 7 mm; users adapt to distortion & are less sensitive to negative distortions. | 7 participants did not detect distortions, even though they were told about it in the second phase of experiments. |
+| Murillo/2017 [52] | Multi-object retargeting optimized for upper limb interactions in VR. | Target selection | Hands | Improved ergonomics with no loss of performance or sense of control. | Used tetrahedrons to partition the physical and virtual world for multi-object retargeting. |
+| Cheng/2017 [19] | Sparse haptic proxy using gaze and hand movement for target prediction. | Target acquisition | Hands | Retargeting of up to 15° rated similar to no retargeting; 45° received lower but above-natural ratings. | Predicted desired targets 2 seconds before participants touched, with 97.5% accuracy. |
+| Han/2018 [28] | Static translation offsets vs. dynamic interpolations for redirected reach. | Reaching for objects | Hands | The translational technique performed better, with more robustness to larger mismatches. | Horizontal offsets up to 76 cm applied when reaching for a virtual target were tolerable. |
+| Yang/2018 [71] | Virtual grabbing tool with ungrounded haptic retargeting. | Using controller for precise object grabbing | Controller | Travelling distance difference between the visual and the physical chopstick needs to be in the range (-1.48, 1.95) cm. | Control/display ratio needs to be between 0.71 and 1.77; better performance with ungrounded haptic retargeting. |
+| Feuchtner/2018 [23] | Slow shift of user's virtual hand to reduce strain of in-air interaction. | Pursuit tracking | Hands | Vertical shift of hand by 65 cm reduced fatigue, maintained body ownership. | Vertical shift decreases performance by 4%; gradual shifts are preferable. |
+| Matthews/2019 [50] | Bimanual haptic retargeting with interface, body & combined warps. | Pressing virtual buttons | Hands/Bimanual | Faster response time for combined warp, increased error in body warp. | Same time and error between bimanual and unimanual retargeting, but needs a more statistically powerful study. |
+
+Vision is not always dominant. In the case of conflicts, sensory signals are weighted in the brain based on their reliability [30, 32]. There are thresholds on the dominance of vision. Some studies used the just noticeable difference (JND) threshold methodology to quantify mismatch thresholds [13, 36, 43, 49], while others employed two-alternative forced-choice (2AFC) designs [72]. Interestingly, some studies have shown that force direction and the curvature of real props can influence mismatch thresholds [8, 56, 72].
+
+### 2.2 Perceptual Illusions in VR
+
+Table 1 summarizes key studies on perceptual illusions in VR. Haptic retargeting, introduced by Azmandian et al. [6], partially solved a major limitation of using physical props for tactile feedback by mapping one physical object to multiple virtual ones. The technique operates by redirecting the user's hand towards the physical prop when they reach for different virtual items at various locations [6]. This and similar techniques work through perceptual illusions and the dominance of vision over other senses [9, 26, 37, 49, 57, 60, 62]. Likely the most well-known example is redirected walking, first proposed by Razzaque et al. [55]. RDW enables users to walk along a seemingly infinite straight path in HMD VR. In reality, RDW users walk in circles within a limited tracking space, but perceive themselves as walking in straight lines.
+
+Kohli et al. were among the first to propose redirected touching in a VR setting [38]. In a series of experiments, Kohli et al. examined the effects of warping virtual spaces on user performance, as well as adaptation and training under warped spaces [39, 40]. They reported that while training under real conditions seemed more productive, participants performed much better after adapting to discrepancies between vision and proprioception [40]. Indeed, they report that participants had to readapt to the real world after adapting to the warped virtual space [40].
+
+Azmandian et al. took the idea of redirected touching further and introduced haptic retargeting, which added dynamic mapping of the whole hand rather than just the fingers [6]. The technique leverages visual dominance to repurpose a single passive haptic prop for various virtual objects. This produced a higher sense of presence among participants, in line with past findings on the benefits of haptics in VR [34]. Their technique is limited by the shape of the physical prop, and the target position must be known prior to selection. To overcome the targeting limitations, Murillo et al. proposed a multi-object retargeting technique that partitions both virtual and physical spaces using tetrahedrons to allow open-ended hand movements while retargeting [52]. Haptic retargeting can also be applied to bimanual interactions [50]. Matthews et al. suggested that the technique could also be applied to wearable interfaces, i.e., on the user's wrist or arm [50].
+
+Several other studies employed similar techniques. For example, Cheng et al. explored the applications and limits of hand redirection using geometric primitives with touch feedback in a VE, while predicting the desired targets using hand movements and gaze direction [19]. Feuchtner et al. proposed the Ownershift interaction technique to ease over-the-head interaction in VR while wearing an HMD [23]. Ownershift does not require a mental recalibration phase, since the initial 1:1 mapping allows initial ballistic movements toward the targets [23]. Abtahi et al. utilized visuo-haptic illusions in tandem with shape displays [1]. They were able to increase the perceived resolution of shape displays for a VR user by applying scales of less than 1.8×, redirecting sloped lines with angles of less than 40 degrees onto a horizontal line.
+
+To summarize, despite the well-known advantages of large displays and planar surfaces in VR, very few studies have used warping techniques with planar input devices. Our proposed technique and present study aim to fill this gap.
+
+### 2.3 Fitts' Law and Scale
+
+Fitts' law predicts selection time as a function of target size and distance [24]. The model is given as:
+
+$$
+MT = a + b \cdot ID \quad \text{where} \quad ID = \log_2\left(\frac{A}{W} + 1\right) \tag{3}
+$$
+
+where ${MT}$ is movement time, and $a$ and $b$ are empirically derived via linear regression. ${ID}$ is the index of difficulty, the overall selection difficulty, based on $A$ , the amplitude (i.e., distance) between targets, and $W$ , the target width. As seen in Equation (3), increasing $A$ or decreasing $W$ increases ${ID}$ , yielding a harder task.
+
+Throughput is recommended by the ISO 9241-9 standard as a primary metric for pointing device comparison, rather than movement time or error rate alone. Throughput incorporates speed and accuracy into a single score and is unaffected by speed-accuracy tradeoffs [46]. In contrast, movement speed and accuracy vary due to participant differences. Throughput thus gives a more realistic idea of overall user performance than movement time or error rate. Our study employs throughput for consistency with other studies [35, 66, 67]. Throughput is given as:
+
+$$
+TP = \frac{ID_e}{MT} \quad \text{where} \quad ID_e = \log_2\left(\frac{A_e}{W_e} + 1\right) \tag{4}
+$$
+
+$ID_e$ is the effective index of difficulty, and gives the difficulty of the task users actually performed, rather than the one they were presented with. Effective amplitude, $A_e$, is the mean movement distance between targets for a particular condition. Effective width, $W_e$, is:
+
+$$
+W_e = 4.133 \cdot SD_x \tag{5}
+$$
+
+where $SD_x$ is the standard deviation of selection endpoints projected onto the vector between the two targets (i.e., the task axis). It incorporates the variability in selection coordinates and is multiplied by 4.133, yielding $\pm 2.066$ standard deviations from the mean. This effectively resizes targets so that 96% of selections hit the target, normalizing the experimental error rate to 4% and facilitating comparison between studies with varying error rates [45, 59].
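+
+The throughput calculation in Equations (4) and (5) can be sketched as follows. This is a minimal sketch with function and variable names of our own; it is not the authors' analysis code.
+
+```python
+import math
+import statistics
+
+def effective_throughput(endpoints, amplitudes, times_ms):
+    """ISO 9241-9 throughput (bps) for one condition.
+
+    endpoints  : selection coordinates projected onto the task axis,
+                 relative to the target centre
+    amplitudes : actual movement distance of each trial
+    times_ms   : movement time of each trial, in milliseconds
+    """
+    w_e = 4.133 * statistics.stdev(endpoints)   # effective width (Eq. 5)
+    a_e = statistics.mean(amplitudes)           # effective amplitude
+    id_e = math.log2(a_e / w_e + 1)             # effective ID (Eq. 4)
+    mt = statistics.mean(times_ms) / 1000.0     # mean movement time, seconds
+    return id_e / mt
+```
+
+Because $A_e$ and $W_e$ are computed from what participants actually did, the same function applies regardless of the nominal $A$ and $W$ a condition presented.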
+
+Both visual and motor scale have been previously studied by HCI scholars in non-VR contexts, often using Fitts' law studies. Factors involved in evaluating scale include the physical dimensions of the device screen, the pixel density of the screen, and the distance between the user and the display [2, 12, 17, 33, 41, 70]. Browning et al. found that physical screen dimensions affected target acquisition performance negatively, especially for smaller screens [12]. Chapuis et al. also report that target acquisition for small targets suffered, indicating that selection performance is affected by movement scale rather than visual scale [17]. Accot et al. isolated movement scaling by using identical display conditions while systematically varying the trackpad size [2]. They used this setup with the steering task [3] and found a "U-shaped" performance curve, meaning that small and large trackpad sizes had the worst performance. They concluded that this was a result of human motor precision [2]. Kovacs et al. studied screen size independent of motor precision. Their findings suggested that human movement planning ability is affected by screen size [41]. Hourcade et al. showed that increasing the distance between the user and the screen, which causes the targets to scale due to perspective, also negatively affects accuracy and speed [33].
+
+## 3 Warped Virtual Surfaces
+
+With WVS, users perceive themselves interacting with an arbitrarily sized virtual surface that is potentially much larger than the physical tablet. The actual interaction space is always the same (i.e., the physical tracking area of the tablet). We rescale the plane representing a virtual screen in VR and render targets at locations that would fall outside the bounds of the physical tablet's tracking area. In other words, with WVS, users can select and "feel" targets that are beyond the extents of the tablet's physical dimensions.
+
+Tablet drivers typically provide the stylus tip position on the tablet relative to the top or bottom left corner (i.e., the coordinate origin of the tracking area), with $x$ and $y$ values ranging from 0 to 1. In our case, the origin was the bottom left corner. The coordinates are calculated from the physical distance of the stylus tip to the origin, dividing the $x$ and $y$ components of that distance vector by the respective physical width and height of the tablet's active tracking area. We call this the real cursor position $(C_{Rp})$, the point where the stylus tip is physically touching the tablet:
+
+$$
+C_{Rp} = \left( \frac{1}{\text{width}}, \frac{1}{\text{height}} \right) \times \operatorname{dist}(\text{stylusTip}, W_O) \tag{1}
+$$
+
+Similar to haptic retargeting, we use a warping origin $(W_O)$ [6] for scaling. For WVS, the origin is the centre of the physical tablet's rectangular tracking area. We chose the centre of the tablet as the warping origin because it is the point around which the virtual panel grows in size. $W_O$ is also the only point on the tablet surface that, regardless of the SF, remains in its original 1-to-1 mapping position. In contrast, the virtual tablet corner points are subject to scaling. Therefore, we chose $W_O$ as the origin point for both $C_{Rp}$ and the virtual warped cursor position $(C_{Wp})$, the position of the cursor the user sees on the screen panel in VR. We thus shift the coordinate system origin of $C_{Rp}$ from the bottom left corner of the tablet to $W_O$ by rescaling the output range of the $C_{Rp}$ points to run from -0.5 to 0.5 instead of 0 to 1. This results in the centre of the tablet tracking surface being represented as $(0, 0)$ instead of $(0.5, 0.5)$. We track the stylus tip with the tablet's built-in digitizer and apply the SF only when the stylus is within tracking range, ensuring that warping is limited to the tablet's surface.
+
+At $W_O$, $C_{Rp}$ and $C_{Wp}$ align. Warping the tablet's surface causes $C_{Wp}$ to move ahead of $C_{Rp}$ as the user moves the stylus further away from $W_O$. This is similar to the effect of CD gain, where a small movement of the physical mouse translates to a large on-screen movement of the mouse cursor. We use a similar idea to extend cursor reach on the tablet in VR with WVS. A larger SF causes $C_{Wp}$ to speed up, much like a high CD gain. The further the stylus moves from $W_O$, the greater the decoupling between $C_{Rp}$ and $C_{Wp}$ (see Figure 1). $C_{Rp}$ values, ranging from -0.5 to 0.5, are multiplied by a scale factor to yield $C_{Wp}$, which is where we render the cursor in VR:
+
+$$
+C_{Wp} = \text{ScaleFactor} \times C_{Rp} \tag{2}
+$$
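+
+The mapping described above can be sketched in code as follows. This is a minimal sketch under our own naming, not the authors' implementation; the function names are hypothetical, and the default dimensions are the tablet's active area reported in Section 4.3.1.
+
+```python
+# Sketch of the WVS cursor mapping. Names are illustrative.
+
+def real_cursor_position(stylus_mm, active_w_mm=254.0, active_h_mm=152.4):
+    """Normalize the stylus tip position (mm from the bottom-left origin)
+    to [-0.5, 0.5] centred on the warping origin W_O; equivalent to the
+    component-wise scaling of Equation (1)."""
+    x = stylus_mm[0] / active_w_mm - 0.5
+    y = stylus_mm[1] / active_h_mm - 0.5
+    return (x, y)
+
+def warped_cursor_position(c_rp, scale_factor):
+    """Scale about W_O to get the virtual cursor position (Equation (2)).
+    Applied only while the stylus is within the digitizer's tracking range."""
+    return (scale_factor * c_rp[0], scale_factor * c_rp[1])
+```
+
+At the tablet centre the two positions coincide regardless of the scale factor, while a stylus at the edge of the tracking area maps to a virtual cursor ScaleFactor times further from the centre.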
+
+
+Figure 1: Visual representation of the WVS system.
+
+## 4 Methodology
+
+We conducted a Fitts' law experiment comparing several SFs to a 1-to-1 mapping "control condition." Our objective was to determine whether the application of scaling in our warped virtual surface technique influenced user performance. We used a set of pre-selected amplitude and width pairs, rather than fully crossing a selection of amplitudes and widths. This ensured that all combinations of $A$ and $W$ were reachable in the 1-to-1 mapping condition. Otherwise, a sufficiently large $W$ could yield targets cut off at the edge of the virtual tablet screen, which would not be reachable by the cursor without warping. We thus carefully chose our amplitude and width pairs so that they would cover a wide range of $ID$s while still having physically reachable targets under 1-to-1 mapping.
+
+### 4.1 Hypothesis
+
+Past studies suggest that although visual and motor scale affect performance differently, small scales and sizes impact user performance negatively for both [10, 12, 17, 20, 33, 48, 70]. Most similar to our work, Blanch et al.'s study revealed that pointing task performance is governed by motor space rather than visual space [10].
+
+Thus, we hypothesize that movement time (MT), error rate, target entry count, and most importantly, throughput (TP) would be unaffected by varying SFs, since participants are still selecting the same physical locations on the tablet's surface. In other words, we hypothesize that selection performance will be the same regardless of the influence of warping. We show this using a non-inferiority statistical analysis [58] (explained in Section 5.1).
+
+### 4.2 Participants
+
+We recruited 24 participants (11 female, aged 19 to 64, $\mu = 26.5$, $SD = 10.5$). Three were left-handed, and one was ambidextrous but chose to complete the experiment using their right hand. We also surveyed their experience with VR and games: 62.5% reported having no VR experience at all, 37.5% a little VR experience, and 4.2% a moderate amount. In terms of gaming experience, 37.5% reported having no 3D first-person game experience, 20.8% a little, 29.2% a moderate amount, and 12.5% a lot. All participants had normal or corrected-to-normal stereo vision, assessed by questioning before entering VR.
+
+### 4.3 Apparatus
+
+#### 4.3.1 Hardware
+
+We used a PC with an Intel Core i7 processor and an NVIDIA GeForce GTX 1080 graphics card. We used the HTC Vive VR platform, which includes an HMD with 1080 × 1200 pixel (per eye) resolution, 90 Hz refresh rate, and 110° field of view. The tablet was an XP-PEN STAR 06 wireless drawing tablet. Its dimensions were 354 mm × 220 mm × 9.9 mm, with a 254 mm × 152.4 mm active area and 5080 LPI resolution. The tablet includes a stylus with a barrel button and a tip switch that activates upon pressing it against the tablet surface. The 2D location of the stylus tip is tracked along the surface by the built-in electromagnetic digitizer. We affixed a Vive tracker to the top-right corner of the tablet using Velcro tape (see Figure 2).
+
+
+Figure 2: Overlay of Fitts' law task on the tablet. Orange circles depict the physical target location, while blue circles depict the targets the user saw in VR. Gradient arrows illustrate the surface warping effect and how the virtual surface grows in size in all directions.
+
+#### 4.3.2 Software
+
+We developed our software using Unity3D 2019.2 and C# on MS Windows 10. Figure 3 depicts the participants' view in VR during a selection task. We also included several space-themed assets from the Unity store. These were not seen during the task but were visible during breaks to help entertain participants between trials. We used a modified version of the source code provided by Hansen et al. [29] to develop our software.
+
+
+Figure 3: Fitts' law task as seen on the tablet in VR.
+
+Windows 10 recognizes the tablet as a human interface device, which causes tablet input to be mapped to the mouse cursor. We instead used a custom LibUSB driver that allowed direct access to the tablet data in Unity. The library provides stylus coordinates on the tablet surface and whether the stylus was touching the tablet surface or hovering above it within an approximately 1 cm range.
+
+The software polled the Vive tracker to map a virtual interaction panel to the physical tablet's active tracking area, co-locating their centres. The virtual panel in VR had a resolution of 4000 × 2400 and, in 1-to-1 mapping, the same size as the physical tracking area on the tablet. When scaling was applied, the virtual tablet panel size was multiplied by the SF value. The tablet stylus was used for interaction. We could not find a reliable and suitable solution to track the stylus or hands externally; hence, tracking was limited to the tip of the stylus close to the tablet surface via the tablet's digitizer. Due to this limitation, we did not render a model of the stylus or hands. However, when the stylus was in range of the tablet, we displayed a virtual star-shaped cursor, with a dot hotspot in the centre, at the stylus tip. Pressing the stylus against the tablet generated input ("click") events. The virtual cursor was used for selection, and its position was calculated as described in Section 3.
+
+The virtual tablet sat on a table (see Figure 4). While the cursor hovered over a target, the target changed colour to show that it would be selected if the tip switch was pressed. Upon successfully selecting a target, an auditory "click" sound was played, and the experiment moved to the next target. In case of an error, a distinct "beep" sound indicated the selection error, and the experiment moved to the next target in the current sequence.
+
+
+Figure 4: The virtual table where participants sat during the study.
+
+### 4.4 Procedure
+
+Overall, the experiment took about one hour, with participants in VR for around 45 minutes. Before starting, participants provided informed consent and completed a demographic questionnaire. The main experiment was divided into eight blocks (one per SF). Each block consisted of ten sequences, one for each of the 10 IDs. In each sequence, participants were presented with 15 targets. Participants had to successfully select at least 50% of the targets to move on to the next sequence. Participants were given a break of at least 30 seconds between blocks, during which they could remove the HMD if desired. To begin each sequence, participants had to select the first target, which started the timer. This first selection was thus not logged; the same target was selected again as the last target in the sequence.
+
+There was no training session before the experiment. The task involved selecting circular targets, as is common in Fitts' law experiments (see Figure 2). Participants were required to select the purple target as quickly and accurately as possible. If they missed the target, the system recorded an error and moved on to the next target. If a participant exceeded a 50% error rate, they were asked to redo the sequence, which the data logging system recorded. After completing the experiment, participants exited the VE and completed a post-questionnaire in which they commented on their experience using the VR tablet prototype. They were then debriefed and compensated $10 CAD.
+
+### 4.5 Design
+
+Our experiment employed a within-subjects design with two independent variables, Scale Factor and ID (index of difficulty):
+
+Scale Factor (SF): 1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4.
+
+ID: 1.1, 1.5, 1.8, 2.1, 2.3, 2.7, 3.5, 3.8, 4.0, 4.6. The ID values were generated from the following 10 combinations of $A$ and $W$ (in pixels):
+
+| ID | 1.1 | 1.5 | 1.8 | 2.1 | 2.3 | 2.7 | 3.5 | 3.8 | 4.0 | 4.6 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| $A$ | 300 | 450 | 1300 | 1600 | 800 | 1100 | 1000 | 2000 | 2250 | 2300 |
+| $W$ | 250 | 250 | 500 | 500 | 200 | 200 | 100 | 150 | 150 | 100 |
+
+The SFs were applied to both the cursor position and virtual panel size in VR, as described in Section 3. IDs were calculated according to Equation (3) using an SF of 1 (i.e., 1-to-1 mapping). SF ordering was counterbalanced via a balanced Latin square. Within each SF, $ID$ order was randomized, with one $ID$ per sequence (i.e., circle) of 15 targets.
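+
+As a consistency check, the nominal IDs above follow from Equation (3); the sketch below recomputes them from the $A$/$W$ pairs (variable names are our own):
+
+```python
+import math
+
+# A/W pairs (in pixels) from the design table
+pairs = [(300, 250), (450, 250), (1300, 500), (1600, 500), (800, 200),
+         (1100, 200), (1000, 100), (2000, 150), (2250, 150), (2300, 100)]
+
+# Nominal IDs via Equation (3), rounded to one decimal place
+ids = [round(math.log2(a / w + 1), 1) for a, w in pairs]
+```
+
+Rounded to one decimal, these reproduce the ID levels 1.1 through 4.6 listed above.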
+
+Our dependent variables included:
+
+ * Movement time: average selection time, in milliseconds.
+
+ * Error rate: average proportion of targets missed (percentage).
+
+ * Throughput (in bits per second, bps): calculated based on the ISO 9241-9 standard, using Equation (4).
+
+ * Target entry count: number of times the cursor entered a target before selection; representative of control problems [47].
+
+Like others [22, 45, 46, 59, 67], we argue that throughput gives a better idea of selection performance than either movement time or error rate. The accuracy adjustment used to derive throughput incorporates speed and accuracy together, making throughput constant regardless of participant biases towards speed or accuracy. It thus better facilitates comparison between studies and is more representative of performance than speed or accuracy alone [46]. We use it as our primary dependent variable, similar to other studies.
+
+In total, each participant completed 8 SFs × 10 IDs × 15 trials (individual selections), for 1200 selections. Our analysis is thus based on 24 participants × 1200 trials = 28,800 selections.
+
+## 5 Results
+
+We used repeated-measures ANOVA on movement time, error rate, throughput, and target entries to detect significant differences due to SF. We did not analyze $ID$, as it is expected to yield performance differences. As detailed below, we found significant main effects of SF only for MT and target entries. We did not find significant differences in error rate or, most importantly, throughput. Horizontal bars (●...●) indicate pairwise significant differences between conditions with Bonferroni adjustments.
+
+We note here that while standard null-hypothesis statistical testing will determine if two conditions are significantly different, our objective was to determine if WVS is not worse than the one-to-one mapping (i.e., SF of 1). This would suggest that it has minimal impact on user performance and is thus a viable technique for virtually extending tablet surfaces. However, standard null-hypothesis statistical tests (e.g., ANOVA) do not determine if two conditions are statistically the same or non-inferior compared to one another. Hence, we instead conducted non-inferiority testing for ${TP}$ and error rate [58].
+
+### 5.1 Non-Inferiority Statistical Analysis
+
+Non-inferiority testing is a form of equivalence testing that shows if a condition is statistically no worse than another. It requires defining an indifference zone, i.e., the maximum allowed difference between two conditions to be considered non-inferior based on the context of the study [58]. With the indifference zone defined, we next analyze the mean difference between the conditions and the 1-tailed 95% confidence interval of that difference. Finally, we check if the mean difference score and the 1-tailed 95% confidence interval fall within the extents of the indifference zone. If so, then the two conditions are deemed to be no worse than each other, i.e., equivalent [58]. Although this form of analysis is rare in HCI, it has been used before in the context of VR Fitts’ law experiments [39].
+
+For throughput, we used the same indifference zone (1 bps) as Kohli et al. [39]. For error rate, they used the smallest unit of error, i.e., one target miss in a sequence. We used the same threshold, in our case one miss in fifteen targets, for a 6.66% indifference zone.
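+
+The pairwise check described above can be sketched as follows. This is a minimal sketch with hypothetical names; it uses a normal-approximation critical value and omits the Bonferroni correction applied in our analysis.
+
+```python
+from math import sqrt
+from statistics import mean, stdev, NormalDist
+
+def non_inferior(diffs, margin=-1.0, alpha=0.05):
+    """Paired one-tailed non-inferiority check.
+
+    diffs  : per-participant differences (comparison SF minus SF 1)
+    margin : indifference-zone bound (e.g., -1 bps for throughput)
+
+    Returns (mean difference, lower one-tailed CI bound, non-inferior?).
+    With n = 24 participants, a t critical value (df = 23) would be
+    slightly more conservative than the normal approximation used here.
+    """
+    n = len(diffs)
+    m = mean(diffs)
+    se = stdev(diffs) / sqrt(n)            # standard error of the mean
+    z = NormalDist().inv_cdf(1 - alpha)    # ~1.645 for alpha = .05
+    lower = m - z * se                     # one-tailed 95% lower bound
+    return m, lower, lower > margin
+```
+
+For error rate, the analogous check compares the upper one-tailed bound against the +6.66% margin instead.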
+
+### 5.2 Throughput
+
+RM-ANOVA on throughput revealed no significant difference for scale factor ($F_{4.21, 96.98} = 0.92$, ns). Mean TP was fairly consistent across all scale factors (see Figure 5).
+
+
+Figure 5: Mean TP for each SF value. Error bars show 95% CI.
+
+To determine if throughput is statistically consistent across SFs, we next conducted a non-inferiority test. Using the aforementioned indifference zone of 1 bps, the mean difference between each compared SF and the lower bound of the one-tailed confidence interval should be greater than -1 bps to be considered non-inferior. Table 3 shows the results of the non-inferiority test for pairwise comparisons (with Bonferroni corrections) between the 1-to-1 mapping and all other SFs. Based on this analysis, no SF has worse TP than 1-to-1 mapping (i.e., they are all considered non-inferior). Overall, this result indicates that TP is not affected by SF, in line with our main hypothesis and suggesting our WVS technique has minimal impact on user performance.
+
+Table 3: Mean ${TP}$ differences and non-inferiority test results.
+
+| SF Pairs | Mean Diff. | 1-tailed 95% CI | SD Error | Non-inferiority Comparison |
+| --- | --- | --- | --- | --- |
+| 1-1.2 | -0.077 | > -0.313 | 0.067 | -0.313 > -1.0 |
+| 1-1.4 | -0.059 | > -0.333 | 0.078 | -0.333 > -1.0 |
+| 1-1.6 | -0.073 | > -0.426 | 0.100 | -0.426 > -1.0 |
+| 1-1.8 | -0.115 | > -0.393 | 0.079 | -0.393 > -1.0 |
+| 1-2.0 | -0.108 | > -0.410 | 0.086 | -0.410 > -1.0 |
+| 1-2.2 | 0.013 | > -0.260 | 0.077 | -0.260 > -1.0 |
+| 1-2.4 | 0.019 | > -0.194 | 0.060 | -0.194 > -1.0 |
+
+### 5.3 Error Rate
+
+We found no significant difference in error rate across SFs using RM-ANOVA ($F_{3.98, 91.56} = 2.07$, $p > .05$). As Figure 6 shows, error rates too are reasonably consistent across the eight scale factors.
+
+
+Figure 6: Error rate for each SF condition. Error bars show 95% CI.
+
+As with TP, we used a non-inferiority test on error rate to determine if each SF yielded error rates no worse than 1-to-1 mapping. Using the aforementioned indifference zone limit of 6.66%, each SF must have an error rate difference no higher than 6.66% compared to the SF of 1 (i.e., 1-to-1 mapping). The results of this analysis, with Bonferroni corrections, are seen in Table 4. No SF offered a worse error rate than the SF of 1. The results indicate that error rate was also unaffected by SF value, in line with our hypothesis, meaning target miss rates were constant regardless of SF.
+
+Table 4: Mean error rate differences and non-inferiority results.
+
+| SF Pair | Mean Diff. | 1-tailed 95% CI | Std. Error | Non-inferiority Comparison |
+| --- | --- | --- | --- | --- |
+| 1-1.2 | 0.278 | < 1.708 | 0.405 | 1.708 < 6.66 |
+| 1-1.4 | -0.25 | < 1.671 | 0.544 | 1.671 < 6.66 |
+| 1-1.6 | 0.111 | < 2.572 | 0.697 | 2.572 < 6.66 |
+| 1-1.8 | -1.139 | < 1.636 | 0.786 | 1.636 < 6.66 |
+| 1-2.0 | -0.806 | < 1.213 | 0.572 | 1.213 < 6.66 |
+| 1-2.2 | -0.639 | < 1.595 | 0.633 | 1.595 < 6.66 |
+| 1-2.4 | -1.417 | < 0.189 | 0.455 | 0.189 < 6.66 |
+
+§ 5.4 MOVEMENT TIME
+
+RM-ANOVA revealed significant differences in movement time. We also note that, as suggested by Kohli et al. [39], it is unclear what a reasonable indifference zone for movement time would be.
+
+Mauchly's test revealed that the assumption of sphericity was violated $\left( {{\chi }^{2}\left( {27}\right) = {51.33},p = {.004}}\right)$, so we applied the Greenhouse-Geisser correction $\left( {\varepsilon = {.56}}\right)$. There was a significant main effect of scale factor on movement time $\left( {F}_{{3.98},{91.65}} = {13.92}, p < {.001}, {\eta }_{p}^{2} = {.37}\text{, power} = {1.00}\left( \alpha = {.05}\right) \right)$. Post hoc results showing pairwise differences and mean movement times are presented in Figure 7. Higher SF values resulted in higher mean movement times, suggesting participants moved more slowly with higher SFs. Our hypothesis did not hold for movement time.
+
+
+Figure 7: Mean MT for each SF. Error bars show 95% CI.
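The Greenhouse-Geisser correction scales the ANOVA degrees of freedom by an epsilon estimated from the covariance of the repeated measures. A minimal sketch of that estimate follows (the movement-time matrix is hypothetical, not the study's data):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for an (n_subjects x k_conditions) matrix.
    eps = 1 means sphericity holds; the RM-ANOVA dfs are multiplied by eps."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)          # k x k covariance across conditions
    C = np.eye(k) - np.ones((k, k)) / k     # centering matrix
    Sc = C @ S @ C                          # double-centred covariance
    # (sum of eigenvalues)^2 / ((k - 1) * sum of squared eigenvalues)
    return np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc ** 2))

# hypothetical movement-time matrix: 24 participants x 8 scale factors
rng = np.random.default_rng(0)
mt = rng.normal(loc=0.9, scale=0.1, size=(24, 8))
eps = gg_epsilon(mt)
print(f"epsilon = {eps:.2f}")  # multiply dfs (7, 161) by eps before the F test
```

Epsilon is bounded between 1/(k-1) and 1, so for eight conditions a corrected df of 3.98 corresponds to an epsilon near .56, as reported above.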
+
+§ 5.5 TARGET ENTRY COUNT
+
+As seen in Figure 8, higher SFs yield slightly higher target entry counts, suggesting participants had more difficulty getting the cursor into the target before selection. The assumption of sphericity was not violated, so results were analyzed using uncorrected RM-ANOVA. There was a significant main effect of SF on target entry count $\left( {F}_{7,{161}} = {17.41}, p < {.001}, {\eta }_{p}^{2} = {.43}\text{, power} = {1.00}\left( \alpha = {.05}\right) \right)$. Higher SFs resulted in more target re-entries, on average, before a correct selection. Our hypothesis did not hold for target entry count.
+
+
+Figure 8: Mean target entry count for SFs. Error bars show 95% CI.
+
+Several participants mentioned having difficulty selecting the smallest targets. As a result, we also analyzed whether the target entry count was affected by the target size using RM-ANOVA. According to Mauchly's test, the assumption of sphericity was not violated. We found a significant main effect of target width on target entry count $\left( {F}_{4,{92}} = {13.52}, p < {.001}, {\eta }_{p}^{2} = {.37}\text{, power} = {1.00}\left( \alpha = {.05}\right) \right)$. Figure 9 depicts the mean target entry count for different target widths. Our analysis did not find a significant interaction effect between SF and target width $\left( {F}_{{28},{644}} = {1.33}, p > {.05}\right)$. Results suggest smaller targets were harder to hit upon initial entry and required, on average, more re-entries to select.
+
+
+Figure 9: Mean target entry count across the target width. Error bars show 95% CI.
+
+Figure 10: Device Assessment Questionnaire results (general comfort; neck, shoulder, arm, wrist and finger fatigue; operation speed; accurate pointing; physical effort; mental effort; operation smoothness; actuation force). Label numbers indicate the percentage of participants choosing each answer.
+
+§ 5.6 FITTS' LAW ANALYSIS
+
+Fitts' law is commonly used as a predictive model of movement time. We performed a linear regression of ${MT}$ onto ${ID}$ . Figure 11 depicts the relationship between ${MT}$ and the ${ID}$ .
+
+
+Figure 11: Linear regression of ${MT}$ on ${ID}$ for both presented ${ID}$ and scaled ${ID}$ (applying the scale factor to $A$ when calculating ${ID}$ ).
+
+As is often the case in Fitts' law studies, there is a strong linear relationship between ${MT}$ and ${ID}$. We performed linear regressions for both the conventional ${ID}$ (i.e., the 10 IDs listed in Section 4.5) and the scaled ${ID}$, calculated by applying the SF value to target amplitudes before computing ${ID}$. Applying the scale factor in this way is more representative of the task participants perceived themselves to be performing. Interestingly, the Fitts' law regression using scaled ${ID}$ yielded a better-fitting model.
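The two regressions can be sketched as follows. The amplitude, width and movement-time values below are hypothetical placeholders (the study's 10 A×W combinations are listed in Section 4.5), and the Shannon ID formulation is assumed:

```python
import numpy as np

def fitts_id(A, W):
    """Shannon formulation of the index of difficulty, in bits."""
    return np.log2(A / W + 1)

# hypothetical condition means: amplitude A (mm), width W (mm), MT (s)
A = np.array([40.0, 40.0, 80.0, 80.0, 120.0])
W = np.array([4.0, 8.0, 4.0, 8.0, 8.0])
MT = np.array([0.92, 0.74, 1.10, 0.90, 1.02])
SF = 1.6  # example scale factor applied to A for the "scaled ID" variant

fits = {}
for label, ID in [("presented", fitts_id(A, W)),
                  ("scaled", fitts_id(SF * A, W))]:
    slope, intercept = np.polyfit(ID, MT, 1)   # MT = intercept + slope * ID
    r2 = np.corrcoef(ID, MT)[0, 1] ** 2
    fits[label] = (intercept, slope, r2)
    print(f"{label}: MT = {intercept:.3f} + {slope:.3f} * ID, R^2 = {r2:.3f}")
```

Scaling A by the SF shifts every ID upward without reordering the conditions, which is why both regressions remain strongly linear while their intercepts and slopes differ.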
+
+§ 5.7 EFFECTIVE WIDTH ANALYSIS
+
+To further explore why throughput remained constant despite movement time increasing across scale factors, we also analyzed effective width. Throughput is based on effective width, which reflects the magnitude of errors rather than the error rate. This explains how throughput can stay constant across SFs even though movement time significantly increases with SF (and while error rate is also constant). Misses farther from the target push effective width upward, while selections closer to the target centre lower it. Thus, we examined how mean ${W}_{e}$ changed under different SF conditions. Based on Figure 12, ${W}_{e}$ appears to decrease with higher SFs. This indicates that participants made more accurate selections with higher SFs, likely yielding the higher movement times noted above.
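A sketch of how effective width feeds into throughput (ISO 9241-9 style) illustrates this compensation; the deviation samples and condition values are hypothetical:

```python
import numpy as np

def throughput(amplitude, deviations, mt):
    """ISO 9241-9 effective throughput for one condition.
    `deviations`: signed distances of the selection points from the
    target centre along the movement axis; `mt`: mean movement time (s)."""
    We = 4.133 * np.std(deviations, ddof=1)  # effective width
    IDe = np.log2(amplitude / We + 1)        # effective index of difficulty
    return IDe / mt                          # bits per second

# hypothetical selections: a higher SF gives slower movements (larger mt)
# but tighter selections (smaller spread), leaving throughput similar
rng = np.random.default_rng(2)
tp_low_sf = throughput(80.0, rng.normal(0.0, 3.0, 200), mt=0.85)
tp_high_sf = throughput(80.0, rng.normal(0.0, 2.2, 200), mt=0.98)
print(round(tp_low_sf, 2), round(tp_high_sf, 2))
```

Because the tighter spread raises the effective ID by roughly the same factor that the longer movement time lowers the rate, the two throughput values come out close, mirroring the flat TP curve across SFs.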
+
+§ 5.8 POST-QUESTIONNAIRE
+
+We used the device assessment questionnaire from ISO 9241-9 [22] to evaluate the experience of using a tablet and stylus with WVS.
+
+
+Figure 12: ${W}_{e}$ for each SF. Error bars show 95% CI.
+
+Participants rated each phrase from 1 (lowest) to 5 (highest); see Figure 10. We did not compare results across different SFs, since participants were unlikely to notice differences between the SFs, and this would have required them to complete 8 lengthy questionnaires instead of just one.
+
+§ 6 DISCUSSION
+
+Results indicate that WVS had significant effects only on ${MT}$ and target entry count, with differences concentrated at the largest SFs. For error rate and ${TP}$, non-inferiority testing indicated that WVS is no worse than 1-to-1 mapping. Target entry count was affected by target size, which was unsurprising, but highlights some difficulty in accurately selecting the smallest targets.
+
+§ 6.1 GENERAL DISCUSSION
+
+Our results are in line with past work and suffer from some of the same limitations $\left\lbrack {{10},{39}}\right\rbrack$ . The most important result from our study is that throughput is relatively stable when using WVS, especially for modest scale factors, e.g., 1.2, 1.4 and 1.6. One explanation could be that the same muscles are used across all selections under different SFs. Pointing performance is known to be affected by the muscle used to reach the target [74]. This is a promising finding, as it suggests that WVS can be applied in tablet-based VR to provide a larger virtual tactile proxy than is otherwise available. Similarly, as indicated in our results, movement time is significantly (if only slightly) worse with higher SFs, especially at 2.2 and 2.4.
+
+Overall, our results show lower throughput with a tracked stylus compared to redirected touching [39]. Lower throughput and error rate and higher movement time in our study are potentially due to the different warping techniques we used. Other factors that could contribute to this difference are the different hardware setup, for instance, input using a stylus rather than fingers, and the position/orientation of the tablet in our study. Notably, our throughput scores - regardless of scale factor - are in line with previous work using a 3D tracked stylus, which is a closer comparison point anyway [67]. Also, the tablet placed on the table caused participants to experience some neck fatigue, as indicated in the post-questionnaire results (see Figure 10) and participant comments. 54.2% of the participants reported high neck fatigue. Our participants also noted it was hard for them to select the smaller targets. One commented, "There were some times where selecting the smaller circles was difficult." Such comments were not unexpected and are supported by the significant differences found in our analysis, as shown in Figure 9. One other contributor to this difficulty could be the limited screen resolution in the Vive HMD.
+
+Higher scale factors yielded slightly higher movement times and target entries than lower scale factors. A potential reason for this is the increase in virtual cursor movement speed caused by scaling. This increase in cursor speed would make fast, accurate movement more challenging, particularly in precisely selecting targets. This kind of effect has been noted before as a "U-shaped" curve for coarse/fine positioning times under different CD gain levels [2]. Also, since users were not able to see the stylus in VR, they likely moved more slowly to keep track of the cursor.
+
+On the other hand, as seen in Figure 5, throughput is almost flat across scale factors. Throughput characterizes the speed/accuracy trade-off in selection tasks. For throughput to be flat across scale factor, and in light of increasing movement times, accuracy must have been better with higher scale factors. In our error rate analysis, we found non-inferiority between 1-to-1 mapping and all other scale factors, suggesting error rates were at least not worse with higher scale factors. However, effective width (from which throughput is derived) is not based on error rate, but rather on the distribution of selection coordinates. In other words, the distance of the selection coordinates to the target centre influences ${W}_{e}$ . Participants may miss targets at about the same rate, but miss "closer" to the target (which yields lower ${W}_{e}$ ). Alternatively, they may hit closer to the centre of the target (which also yields lower $\left. {W}_{e}\right)$ . This is confirmed in our ${W}_{e}$ analysis (Section 5.7) - ${W}_{e}$ became smaller with higher scale factors, which is why throughput was constant regardless of the SF. With higher SF, the cursor moved faster. Participants likely slowed their operation speed slightly to compensate for the higher cursor speed. This is reflected in higher ${MT}$ for higher scale factors (Figure 7). By compensating (i.e., slowing down), participants were more readily able to precisely select targets (yielding lower magnitude misses, or selections closer to the target centre), resulting in lower ${W}_{e}$ and higher throughput.
+
+Based on our observations, most target misses were due to loss of stylus tracking, with participants moving their hand closer to regain tracking and accidentally touching the tracking surface. Some participants also commented on this. One participant reported: "my errors were false selection during dragging my hand to the desired point." Also, since we were warping the virtual space, moving the stylus even slightly could cause the virtual cursor to move outside smaller targets (increasing target entry count). Participants held the stylus at an acute angle relative to the tablet surface, instead of perpendicular to it; reaching and selecting smaller targets from the hovering state could thus cause the warped virtual cursor to fall outside the target even with slight movements in either direction. Participants also held the stylus differently, i.e., in different positions and with different gestures.
+
+Participants found WVS easy to use, despite 41.7% finding accurate pointing difficult. One participant reported: "Overall was easy to select the targets." Another participant mentioned that "...would use again. Was more usable when the in-world representation of the tablet was larger, but it was still easy to select small targets on the small display." Half of our 24 participants reported the device to be very easy to use (Figure 10).
+
+Based on comments, participants liked that they could use a larger touch surface in VR despite arm, wrist and finger fatigue, as indicated in Figure 10. One participant commented: "I really like the idea of using smaller physical screens to choose on larger area in VR, hope it will become common input option for VR." Only three out of 24 participants reported they did not notice any change in their cursor movement speed while in VR. One person mentioned: "I was able to observe the warping but not able to compare it to earlier trials. It was a smooth experience."
+
+§ 6.2 LIMITATIONS
+
+The main limitation of our study is the indifference zones used for the non-inferiority analysis. More studies are required to determine valid indifference zones for performance in Fitts' law studies. In the presented work, we used the same indifference zones as Kohli et al. [39] for the sake of consistency and to facilitate comparison. As mentioned in their work, some previous studies have found significant differences between conditions within the chosen indifference zones. Although we demonstrated non-inferiority, different indifference zones or statistical analysis could yield different results. Another limitation is that our hardware setup did not support 6DOF stylus tracking. We believe our findings can still be useful and can help VR researchers and system designers.
+
+§ 7 CONCLUSIONS
+
+We introduced Warped Virtual Surfaces, a technique to scale input space with a tracked tablet, yielding a larger virtual tablet than the one physically available. We evaluated the effects of surface warping on task performance using a tablet and stylus in VR. In terms of TP and error rate, WVS yielded consistent performance regardless of SF. Non-inferiority statistical tests showed that ${TP}$ and error rate were statistically similar between all tested SFs and the "control" condition, i.e., the 1-to-1 mapping. However, for movement time and target entry count, we found small but significant differences, particularly for larger SFs, in line with previous work [10, 39].
+
+Our proposed method can be used by artists and designers interested in immersive workflows or in VR design sessions. Our approach uses cheap, affordable hardware, and enables users with a fixed-size physical panel or drawing tablet to get a bigger virtual panel without extra hardware or performance cost. WVS could be useful with small, lightweight arm-mounted touchscreens to facilitate tactile interaction with 3D menus, or in applications similar to PIP and WIM [61, 63]. WVS can also complement other tablet- and stylus-based interaction techniques for VR, such as the HARP system [44] or the Virtual Notepad [53]. In-air drawing applications could also benefit from WVS, as could interaction techniques like snappable panels or surfaces. WVS could potentially help with fatigue, but further experiments are needed to determine to what extent. Other haptic devices with limited interaction space, like the Phantom, could potentially also benefit from WVS by expanding their virtual reach.
+
+We conclude that our technique shows promise as a method to virtually extend physical surfaces in VR. Results suggest minimal performance impact of WVS: TP was flat across all SFs. Despite small differences in ${MT}$, users appear to have compensated with a slight accuracy improvement, yielding constant TP.
+
+Future work on Warped Virtual Surfaces will involve a follow-up study across multiple scale factors and multiple tablet sizes. We will use a subset of scale factors from the current study, with physically smaller tablets than the one used in this study. We will also employ a 3D tracked stylus (e.g., Logitech VR Ink), scaling 3D movement rather than just planar movement.
+
+§ ACKNOWLEDGEMENTS
+
+We would like to thank Kyle Johnsen and A. J. Tuttle for sharing their tablet driver source code. We also thank our participants and the researchers whose work inspired this research. This research was supported by NSERC.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..95db6218bab9ca5c8064d9d470d3ff9d6960b531
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,385 @@
+# Interactive Shape Based Brushing Technique for Trail Sets
+
+Almoctar Hassoumi *
+
+École Nationale de l'Aviation Civile
+
+María-Jesús Lobo†
+
+École Nationale de l'Aviation Civile
+
+Gabriel Jarry ${}^{ \ddagger }$
+
+École Nationale de l'Aviation Civile
+
+Vsevolod Peysakhovich ${}^{§}$
+
+ISAE SUPAERO
+
+Christophe Hurter ${}^{¶}$
+
+École Nationale de l'Aviation Civile
+
+## Abstract
+
+Brushing techniques have a long history, with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and overlap. Existing techniques rely on trial and error combined with many view modifications such as panning, zooming, and selection refinements. For moving object analysis, recorded positions are connected into line segments forming trajectories, creating further occlusion and overplotting. As a solution for selection in cluttered views, this paper investigates a novel brushing technique which relies not only on the actual brushing location but also on the shape of the brushed area. The process can be described as follows. Firstly, the user brushes the region where trajectories of interest are visible (standard brushing technique). Secondly, the shape of the brushed area is used to select similar items. Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories. This brushing technique encompasses two types of comparison metrics: piecewise Pearson correlation and a similarity measurement based on information geometry. To show the efficiency of this novel brushing method, we apply it to concrete scenarios with datasets from air traffic control, eye tracking, and GPS trajectories.
+
+Index Terms: Human-centered computing - Interaction design - Interaction design theory, concepts and paradigms; Human-centered computing - Visualization - Visualization application domains - Visual analytics; Human-centered computing - Interaction design - Systems and tools for interaction design; Information systems - Information retrieval - Information retrieval query processing - Query intent
+
+## 1 INTRODUCTION
+
+Brushing techniques [9], which are part of the standard InfoVis pipeline for data visualization and exploration [12], already have a long history. They are now standard interaction techniques in visualization systems [57] and toolkits [10,50]. Such techniques help to visually select items of interest with interactive paradigms (e.g., lasso, boxes, brush) in a view. When the user visually detects a relevant pattern (e.g., a specific curve or a trend), the brushing technique can then be applied to select it. While this selection can be seamlessly performed, the user may still face issues when the view becomes cluttered with many tangled items. In such dense visualizations, existing brushing techniques also select items in the vicinity of the target and thus capture part of the clutter (see Fig. 1). To address this issue, the user can adjust the brushing parameters
+
+
+
+Figure 1: This figure shows the main rationale behind our shape-based brushing technique. A) Unselected trail set, where the user wishes to select the only curved trail. B) Standard brushing technique, where brushing the curved trail also selects every other trail that touches the brushing area. C) Our method compares the brushing input with the brushed trajectories, and only the trajectories similar in shape are selected.
+
+by changing the brush size or the selection box locations. However, this may take time and require many iterations or trials. This paper proposes a novel brushing technique that filters trajectories by taking into account the shape of the brush in addition to the brushed area. This dual input is provided at the same time and opens novel opportunities for brushing techniques. The cornerstone of such a technique is a shape comparison algorithm. This algorithm must provide a numerical similarity measurement that is ordered (low values for unrelated shapes, high values for correlated shapes), continuous (no steps in the computed metric) and meaningful, so that the user can at least partially understand the logic behind the similarity measurement. Thus, to build such a dual filtering technique, the following design requirements (DR) must be fulfilled:
+
+- DR1: The technique enables users to select occluded trajectories in dense or cluttered view.
+
+- DR2: The shape comparison metric is flexible with continuous, ordered and meaningful values.
+
+- DR3: The technique enables incremental selection refinement.
+
+- DR4: The technique is interactive.
+
+Taking into account the identified requirements (DR1-DR4), this paper presents a novel shape-based brushing tool. To the best of our knowledge, such a combination of brushing and shape comparison techniques has not yet been explored in trajectory analysis and this paper fills this gap. In the remainder of the paper, the following subjects are presented. First, previous works in the domain of brushing and shape comparison are provided. Second, the brushing pipeline is detailed with explanations of the comparison metric data processing. Next, the use of such a technique through different use-cases is demonstrated. The brushing technique is discussed in terms of usefulness, specific application and possible limitations. Finally, the paper concludes with a summary of our contribution and provides future research directions.
+
+---
+
+*e-mail: almoctar.hassoumi-assoumana@isae.fr
+
+${}^{ \dagger }$ e-mail: maria-jesus.lobo@ign.fr
+
+${}^{ \ddagger }$ e-mail: gabriel.jarry@enac.fr
+
+§e-mail: vsevolod.peysakhovich@isae.fr
+
+${}^{¶}$ e-mail: christophe.hurter@enac.fr
+
+---
+
+## 2 RELATED WORK
+
+There are various domain-specific techniques targeting trail exploration and analysis. In this section, we explore three major components of selection techniques for trail-set exploration and analysis relevant to our work: brushing, query-by-content, and similarity measurement.
+
+### 2.1 Brushing in Trajectory Visualization
+
+Trail-set exploration relies on pattern discovery [13] where relevant trails need to be selected for further analysis. Brushing is a selection technique for information visualization, where the user interactively highlights a subset of the data by defining an area of interest. This technique has been shown to be a powerful and generic interaction technique for information retrieval [9]. The selection can be further refined using interactive filtering techniques [23]. The approach presented in this paper is based on dynamic queries [53] and direct manipulation [36, 52].
+
+Spatio-temporal data, and trajectory data in particular, are complex to visualize because of their 3D and time-varying nature. Due to this, several systems and frameworks have been especially designed to visualize them $\left\lbrack {5,6,8,{32},{33},{51}}\right\rbrack$. Most of these systems include selection techniques based on brushing, and some of them enable further query refinement through boolean operations [32,33].
+
+These techniques do not take into account the shape of the trails, so selecting a specific one with a particular shape requires many manipulations and iterations to fine-tune the selection.
+
+### 2.2 Query-by-Content
+
+While this paper attempts to suggest a shape-based brushing technique for trail sets, researchers have explored shape-based selection techniques in different contexts, both using arbitrary shapes and sketch-based queries.
+
+Sketch-based querying presents several advantages over traditional selection [18]. It has been used for volumetric datasets [45] and neural pathway selection [1]. This last work is the closest to the current study. However, the authors presented a domain-specific application and based their algorithm on the Euclidean distance. This is not a robust metric for similarity detection, since it is hard to provide a value that indicates high similarity, and it varies greatly according to the domain and the data considered. In addition, this metric supports neither direction and orientation matching nor the combination of brushing with filtering.
+
+In addition, user-sketched pattern matching plays an important role in searching for and localizing time-series patterns of interest $\left\lbrack {{29},{43}}\right\rbrack$. For example, Holz and Feiner [29] defined a relaxed selection technique in which users draw a query to select the relevant part of a displayed time-series. Correll et al. [18] propose a sketch-based query system that retrieves time-series using dynamic time warping, mean square error or the Hough transform. They present all matches individually in small multiples, arranged according to the similarity measurement. These techniques, like the one proposed here, take advantage of sketches to manipulate data. However, they are designed for querying rather than selecting, and specifically for 2D data. Other approaches use boxes and spheres to specify regions of interest $\left\lbrack {2,{28},{44}}\right\rbrack$, and the desired trails are obtained if they intersect these regions. However, many parameters must be changed by analysts to achieve even a single selection: the regions of interest must be re-scaled appropriately, then re-positioned back and forth multiple times for each operation. Additionally, the many selection box modifications required to refine the selection hinder selection efficiency [2].
+
+### 2.3 Similarity measures
+
+Given a set of trajectories, we are interested in retrieving the subset of trajectories most similar to a user-sketched query. Common approaches include selecting the k-nearest neighbors (KNN) based on the Euclidean distance (ED) or on elastic matching metrics (e.g., Dynamic Time Warping, DTW). Affinity cues have also been used to group objects; for example, objects of identical color are given a high similarity coefficient for the color affinity [40].
+
+The Euclidean distance is the simplest to calculate but, unlike mathematical similarity measurements [59], which are usually bounded between 0 and 1 or -1 and 1, ED is unbounded and task-specific. A number of works have suggested transforming the raw data into a lower-dimensional representation (e.g., SAX [39,41], PAA [38,42]). However, these require adjusting many abstract, dataset-dependent parameters, which reduces their flexibility. Lindlbauer presented global and local proximity of 2D sketches. The second measure is used for similarity detection where an object is contained within another one, and is not relevant to this work. While the first measure refers to the distance between two objects (mostly circles and lines), there is no guarantee that the approach generalizes to large datasets such as eye tracking, GPS or aircraft trajectories. In contrast, Dynamic Time Warping has been considered the best measurement [20] in various applications [47] to select shapes by matching their representation [24]. It has been used in gesture recognition [58], eye movements [3,26] and shapes [61]. An overview of existing metrics is available in [47].
+
+The k-Nearest Neighbor (KNN) approach has also long been studied for trail similarity detection [16,54]. However, with this metric, two trails may appear to match well (i.e., have a small difference measure as above) even if they have very different shapes. Other measurements for calculating trajectory segment similarity are the Minimum Bounding Rectangles (MBR) [34] and the Fréchet Distance [19], which leverage the perpendicular distance, the parallel distance and the angular distance to compute the distance between two trajectories.
+
+In order to address the aforementioned issues, we propose and investigate two different approaches. The first is based on directly calculating the correlations on the $x$-axis and $y$-axis independently between the shape of the brush and the trails (Section 3.1.1). The second (Section 3.1.2) is based on the geometrical information of the trails, i.e., the trails are transformed into a new space (using the eigenvectors of the covariance matrix) that is more suitable for similarity detection. Our approach leverages the potential of these two metrics to foster efficient shape-based brushing in large, cluttered datasets, allowing detailed motif discovery to be performed interactively.
+
+## 3 INTERACTION PIPELINE
+
+This section presents the interactive pipeline (Fig. 2) which fulfills the identified design requirements (DR1-DR4). As for any interactive system, user input plays the main role and will operate at every stage of the data processing. First, the input data (i.e. trail set) is given. Next, the user inputs a brush where the pipeline extracts the brushed items, the brush area and its shape. Then, two comparison metrics are computed between every brushed item and the shape of the brush (similarity measurement). A binning process serves to filter the data which is then presented to the user. The user can then refine the brush area and choose another comparison metric until the desired items are selected.
+
+
+
+Figure 2: This figure shows the interaction pipeline. The pipeline extracts the brushed items but also the shape of the brush, which is then compared with the brushed items (metrics stage). Brushed items are stored in bins displayed as small multiples, where the user can interactively refine the selection (i.e., binning and filtering stage). Finally, the user can adjust the selection with additional brushing interactions. Note that both the PC and FPCA metrics can be used for the binning process, but separately: each small multiple comes from one metric exclusively.
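The binning and filtering stage described above can be sketched as follows; the helper names, the toy per-axis Pearson metric and the bin edges are illustrative assumptions, not the system's actual implementation:

```python
import numpy as np

def bin_brushed_trails(trails, brush_shape, brushed_ids, metric, n_bins=4):
    """Binning-stage sketch: score each brushed trail against the brush
    shape with a metric in [-1, 1], then bucket the scores
    (one bin per small multiple in the UI)."""
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    bins = [[] for _ in range(n_bins)]
    for i in brushed_ids:
        score = metric(brush_shape, trails[i])
        b = int(np.searchsorted(edges, score, side="right")) - 1
        bins[min(max(b, 0), n_bins - 1)].append((i, score))
    return bins

def pearson2d(a, b):
    """Toy metric: mean of the per-axis Pearson correlations."""
    return (np.corrcoef(a[:, 0], b[:, 0])[0, 1] +
            np.corrcoef(a[:, 1], b[:, 1])[0, 1]) / 2

t = np.linspace(0.0, 1.0, 30)
trails = [np.column_stack([t, t]),        # straight diagonal
          np.column_stack([t, t ** 3]),   # curved, same direction
          np.column_stack([t, 1 - t])]    # opposite vertical trend
brush = np.column_stack([t, t ** 2])      # curved brush shape
bins = bin_brushed_trails(trails, brush, [0, 1, 2], pearson2d)
for b, members in enumerate(bins):
    print(b, [i for i, _ in members])
```

Trails whose vertical trend opposes the brush land in a lower-scoring bin than the two that follow it, which is the behaviour the small multiples expose for interactive refinement.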
+
+### 3.1 Metrics
+
+As previously detailed in the related work section, many comparison metrics exist. While our pipeline can use any metric that fulfills design requirement DR2 (continuous, ordered and meaningful comparison), the presented pipeline contains only two complementary algorithms: Pearson correlation (PC) and FPCA. The first compares shapes through the correlation of their representative vertices, while the latter compares curvature. As shown in Fig. 3, each metric produces different results, and the user can choose either depending on the type of filtering to be performed. During the initial development of this technique, we first considered using the Euclidean distance (ED) and DTW, but we quickly observed their limitations and argue that PC and FPCA are more suitable for trajectory datasets. First, PC values are easier to threshold: a PC value $> {0.8}$ provides a clear indication of the similarity of two shapes. Moreover, to accurately discriminate between complex trajectories, we need to go beyond the performance of ED. Furthermore, the direction of trajectories, while essential for our brushing technique, is not supported by the ED and DTW similarity measures. Another disadvantage of ED is its domain- and task-specific threshold, which can vary drastically depending on the context; PC, on the other hand, uses the same threshold independently of the dataset type. The two following sections detail the two proposed algorithms.
+
+#### 3.1.1 Pearson's Correlation (PC)
+
+Pearson's Correlation (PC) is a statistical tool that measures the correlation between two datasets and produces a continuous measurement in $\left\lbrack {-1,1}\right\rbrack$, with 1 indicating a high degree of similarity and -1 an anti-correlation indicating an opposite trend [35]. This metric is well suited (DR2) for measuring dataset similarity (i.e., similarity between trajectory points) [17, 55].
+
+Pearson’s Correlation ${PC}$ between two trails ${T}_{i}$ and ${T}_{j}$ on the $x$-axis can be defined as follows:
+
+$$
+{r}_{x} = \frac{\operatorname{COV}\left( {{T}_{{i}_{x}},{T}_{{j}_{x}}}\right) }{{\sigma }_{{T}_{{i}_{x}}}{\sigma }_{{T}_{{j}_{x}}}},\;\operatorname{COV}\left( {{T}_{{i}_{x}},{T}_{{j}_{x}}}\right) = E\left\lbrack {\left( {{T}_{{i}_{x}} - \overline{{T}_{{i}_{x}}}}\right) \left( {{T}_{{j}_{x}} - \overline{{T}_{{j}_{x}}}}\right) }\right\rbrack \tag{1}
+$$
+
+Where $\overline{{T}_{{i}_{x}}}$ and $\overline{{T}_{{j}_{x}}}$ are the means, $E$ the expectation and ${\sigma }_{{T}_{{i}_{x}}},{\sigma }_{{T}_{{j}_{x}}}$ the standard deviations. For two-dimensional points, the correlation is computed on both the $x$-axis and the $y$-axis.
+
+This metric is invariant to point translation and trajectory scale, but it does not take into account the order of points along a trajectory. Therefore, the pipeline also considers the FPCA metric, which is more appropriate to trajectory shape but does not capture negative correlation.
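As a minimal illustration of Eq. 1 applied to trails, the sketch below computes the per-axis correlation between two trails in plain Python. The function names are illustrative, and returning the two per-axis values separately is our reading of the text (the paper only states that the correlation is computed on both axes). It also demonstrates the invariance to translation and uniform scale mentioned above:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length value sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def trail_similarity(ti, tj):
    """Per-axis correlation (r_x, r_y) between two trails given as lists
    of (x, y) points, already resampled to the same number of vertices."""
    xs_i, ys_i = zip(*ti)
    xs_j, ys_j = zip(*tj)
    return pearson(xs_i, xs_j), pearson(ys_i, ys_j)

# A translated and uniformly scaled copy of a trail is perfectly
# correlated; the reversed trail is anti-correlated on the x-axis.
trail = [(i / 49.0, (i / 49.0) ** 2) for i in range(50)]
scaled = [(3 * x + 10, 3 * y - 5) for x, y in trail]
reversed_trail = trail[::-1]

print(trail_similarity(trail, scaled))          # ≈ (1.0, 1.0)
print(trail_similarity(trail, reversed_trail))  # r_x ≈ -1.0
```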
+
+
+
+Figure 3: This figure shows an example of the two metrics used for shape comparison. The user brushed around curve 3 and thus also selected curves 1 and 2. Thanks to the Pearson computation, the associated small multiples show that only curve 2 is correlated to the shape of the brush. Curve 3 is anti-correlated since it goes in the opposite direction to the shape of the brush. The FPCA computation does not take the direction into account but rather the curvature similarity. As such, only curve 3 is considered highly similar to the brush shape input.
+
+#### 3.1.2 Functional Principal Component Analysis
+
+Functional Data Analysis is a well-known information geometry approach [48] that captures the statistical properties of multivariate data functions, such as curves modeled as points in an infinite-dimensional space (usually the ${L}^{2}$ space of square integrable functions [48]). Functional Principal Component Analysis (FPCA) computes the data variability around the mean curve of a cluster while estimating the Karhunen-Loeve expansion scores. A simple analogy can be drawn with the Principal Component Analysis (PCA) algorithm: where PCA computes eigenvectors and their eigenvalues, FPCA performs the same operations with eigenfunctions (piecewise splines) and their principal component scores to model the statistical properties of a considered cluster [30]:
+
+$$
+\Gamma \left( {t,\omega }\right) = \bar{\gamma } + \mathop{\sum }\limits_{{j = 1}}^{{+\infty }}{b}_{j}\left( \omega \right) {\phi }_{j}\left( t\right) \tag{2}
+$$
+
+where ${b}_{j}$ are real-valued random variables called principal component scores, and ${\phi }_{j}$ are the principal component functions, which obey:
+
+$$
+{\int }_{0}^{1}\widehat{H}\left( {s, t}\right) {\phi }_{j}\left( s\right) {ds} = {\lambda }_{j}{\phi }_{j}\left( t\right) \tag{3}
+$$
+
+${\phi }_{j}$ are the (vector-valued) eigenfunctions of the covariance operator with eigenvalues ${\lambda }_{j}$. We refer the interested reader to the work of Hurter et al. [30] for a discrete implementation. With this model, knowing the mean curve $\bar{\gamma }$ and the principal component functions ${\phi }_{j}$, a group of curves can be described and reconstructed (inverse FPCA) with the matrix of the principal component scores ${b}_{j}$ of each curve. Usually, a finite vector (with fixed dimension $d$) of ${b}_{j}$ scores is selected such that the explained variance exceeds a defined percentile.
+
+
+
+Figure 4: Illustration of the FPCA metric: trails are transformed into 2D points using FPCA, and the closest (most similar) points are then measured in the principal component space.
+
+To compute a continuous and meaningful metric (DR2), the metric computation uses the first two principal components (PC1, PC2) to define the representative point of a considered trajectory. The metric is then computed as the Euclidean distance between the shape of the brush and each brushed trajectory in the Cartesian scatterplot PC1/PC2 (Fig. 4). Each distance is then normalized to $\left\lbrack {0,1}\right\rbrack$, with 1 corresponding to the largest difference in shape between the brush shape and the corresponding trajectory.
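The projection-and-normalization step can be sketched with ordinary PCA on flattened, equally resampled curves as a discrete stand-in for FPCA (an assumption on our part; the paper uses a proper discrete FPCA [30]). All function names are illustrative:

```python
import numpy as np

def pca_scores(curves, n_components=2):
    # Flatten each (n, 2) resampled curve into one vector, centre the
    # set around its mean curve, and project onto the first principal
    # directions; the scores stand in for the FPCA b_j coefficients.
    X = np.asarray(curves, dtype=float).reshape(len(curves), -1)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

def shape_dissimilarity(brush, trails):
    # Euclidean distance in the PC1/PC2 plane between the brush shape
    # and each brushed trail, normalized so that 1 corresponds to the
    # largest shape difference within the brushed set.
    scores = pca_scores([brush] + list(trails))
    d = np.linalg.norm(scores[1:] - scores[0], axis=1)
    return d / d.max() if d.max() > 0 else d

t = np.linspace(0.0, 1.0, 50)
line   = np.column_stack([t, t])                 # the brush shape
near   = np.column_stack([t, t + 0.02])          # near-identical trail
zigzag = np.column_stack([t, 0.5 * np.sin(10 * np.pi * t)])
print(shape_dissimilarity(line, [near, zigzag])) # near ≈ 0, zigzag = 1
```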
+
+### 3.2 Binning and small multiple filtering
+
+Taking into account the computed comparison metrics, the pipeline stores the resulting values into bins. Items can then be sorted continuously from the least similar to the most similar ones. Although the Pearson measurements lie in $\left\lbrack {-1,1}\right\rbrack$ and the FPCA values in $\left\lbrack {0,1}\right\rbrack$, the binning process operates in the same way for both. Each bin is then used to visually show the trajectories it contains through small multiples (we use 5 small multiples, which gives a good compromise between visualization compactness and trajectory visibility). The user can then interactively filter the selected items (DR4) with a range slider on top of the small multiple visualizations. The user is thus able to decide whether to remove uncorrelated items or refine the correlated ones with a more restricted criterion (DR3).
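The binning step can be sketched as an equal-count partition of the similarity-sorted items, one bin per small multiple (function name and the returned bin layout are illustrative):

```python
def bin_by_similarity(items, similarities, n_bins=5):
    """Sort items from least to most similar and split them into
    equally populated bins, one per small multiple. Each bin records
    the similarity interval it covers, since equally sized bins do
    not span equal value ranges."""
    order = sorted(range(len(items)), key=lambda i: similarities[i])
    size = -(-len(items) // n_bins)          # ceiling division
    bins = []
    for start in range(0, len(order), size):
        chunk = order[start:start + size]
        bins.append({
            "items": [items[i] for i in chunk],
            "range": (similarities[chunk[0]], similarities[chunk[-1]]),
        })
    return bins

# Ten items with similarity scores, split into 5 equally populated bins
# from the least similar to the most similar.
bins = bin_by_similarity(list("ABCDEFGHIJ"),
                         [0.9, 0.1, 0.5, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.0])
print([b["items"] for b in bins])   # least-similar bin first
```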
+
+## 4 INTERACTION PARADIGM BY EXAMPLE
+
+This technique is designed to enable flexible and rapid brushing of trajectories, by both the location and the shape of the brush. The technique's interaction paradigm is now described and illustrated in a scenario where an air traffic management expert studies the flight data depicted in Fig. 6.
+
+### 4.1 Scenario Introduction
+
+Aircraft trajectories can be visually represented as connected line segments that form a path on a map. Given the flight level (altitude) of the aircraft, the trajectories can be presented in 3D and visualized by varying their appearance [7] or changing their representation to basic geometry types [11]. Since these visualizations consider a large number of trajectories that compete for the visual space, they often present occlusion and visual clutter issues, rendering exploration difficult. Edge bundling techniques [37] have been used to reduce clutter and occlusion, but they come at the cost of distorting the trajectory shapes, which might not always be desirable.
+
+Analysts need to explore these kinds of datasets in order to perform diverse tasks. Some of these tasks compare expected aircraft trajectories with the actual trajectories; others detect unexpected patterns and perform traffic analysis in complex areas with dense traffic [7, 32]. To this end, various trajectory properties such as aircraft direction, flight level and shape are examined. However, most systems only support selection techniques that rely on start and end points, or predefined regions. We argue that the interactive shape brush technique would be helpful for these kinds of tasks, as they require the visual inspection of the data, the detection of specific patterns and then their selection for further examination. As these specific patterns might differ from the rest of the data precisely because of their shape, a technique that enables their selection through this characteristic makes their manipulation easier, as detailed in the example scenario. We consider a dataset that includes 4320 aircraft trajectories of variable lengths from one day of flight traffic over the French airspace.
+
+### 4.2 Brushing
+
+We define the trail $T$ as a set of real-valued consecutive points $T = \left\lbrack {\left( T{x}_{1}, T{y}_{1}\right) }^{\top },{\left( T{x}_{2}, T{y}_{2}\right) }^{\top },\ldots ,{\left( T{x}_{n}, T{y}_{n}\right) }^{\top }\right\rbrack$ where $n$ is the number of points and ${\left( T{x}_{i}, T{y}_{i}\right) }^{\top }$ corresponds to the $i$-th coordinate of the trail. Fig. 6 depicts an example of 4133 trails (aircraft in French airspace). The brush Shape consists of a set of real-valued consecutive points $S = \left\lbrack {\left( S{x}_{1}, S{y}_{1}\right) }^{\top },{\left( S{x}_{2}, S{y}_{2}\right) }^{\top },\ldots ,{\left( S{x}_{m}, S{y}_{m}\right) }^{\top }\right\rbrack$ where $m$ is the number of points. Note that while the length $n$ of each trail is fixed, the length $m$ of the Shape depends on the length of the user brush. The Shape is smoothed using a 1€ filter [14] and then resampled to facilitate the trail comparison. The similarity metrics are then applied to subsequences of each trail of approximately the same length as the brush Shape. In order to do this, each trail is first resampled so that each pair of consecutive vertices on the trail has the same distance ${l}_{\text{vertices }}$ [22].
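The resampling step just mentioned can be sketched as a constant-arc-length walk along the polyline (a sketch; the function and variable names are ours):

```python
from math import hypot

def resample(trail, step):
    """Resample a polyline so that consecutive vertices are exactly
    `step` apart along the path (the paper's l_vertices)."""
    out = [trail[0]]
    dist_to_next = step                 # arc length until the next sample
    for (x0, y0), (x1, y1) in zip(trail, trail[1:]):
        seg = hypot(x1 - x0, y1 - y0)   # length of this segment
        pos = 0.0                       # progress along this segment
        while seg - pos >= dist_to_next:
            pos += dist_to_next
            t = pos / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist_to_next = step
        dist_to_next -= seg - pos       # carry leftover into next segment
    return out

# An L-shaped trail of total length 7 resampled at unit spacing yields
# 8 vertices, each 1.0 apart along the path.
pts = resample([(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)], 1.0)
print(pts)
```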
+
+The user starts by exploring the data using pan and zoom operations. They are interested in the trajectories from the south-east of France to Paris. The user can choose whether they are looking for a subsequence match or an exact match. A subsequence match involves the detection of trajectories having a subsequence locally similar to the Shape. An exact match also takes into account the lengths of the trajectory and the Shape in order to select a trajectory, i.e., its length must be approximately similar to the length of the Shape (general measurements). This option is especially useful to select a trajectory by its start and end points (e.g., finding trajectories taking off from an airport A and landing at an airport B). Exact matching is supported by analyzing the lengths of the trail and the Shape before applying the similarity metric algorithm. The analyst in the scenario activates the subsequence match, where the Pearson's Correlation metric is selected by default, and starts brushing in the vicinity of the target trajectories, following the trajectory shape with the mouse. This defines both (1) the brush region and (2) the brush shape, which also captures the brush direction. Once the brushing has been completed, the selected trajectories are highlighted in green, as depicted in Fig. 5-(b).
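Subsequence matching can be sketched as a window the length of the Shape sliding along the trail, keeping the best per-axis correlation (combining the two axes with a minimum is our own assumption; the paper does not specify how the axes are combined):

```python
from math import sqrt

def _corr(a, b):
    # Pearson correlation; 0.0 when one sequence has zero variance.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sqrt(sum((x - ma) ** 2 for x in a))
    db = sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_subsequence_match(trail, shape):
    """Slide a window the length of the brush Shape along the trail and
    keep the best correlation; an exact match would instead compare the
    whole-trail length to the Shape length first."""
    m = len(shape)
    sx = [p[0] for p in shape]
    sy = [p[1] for p in shape]
    best = -1.0
    for start in range(len(trail) - m + 1):
        w = trail[start:start + m]
        r = min(_corr([p[0] for p in w], sx), _corr([p[1] for p in w], sy))
        best = max(best, r)
    return best

# A trail with a flat part, a diagonal part and another flat part:
# an ascending diagonal Shape matches the middle subsequence, while
# the reversed (descending) Shape does not, since direction matters.
trail = ([(float(i), 0.0) for i in range(10)]
         + [(10.0 + i, float(i)) for i in range(10)]
         + [(20.0 + i, 9.0) for i in range(10)])
shape = [(float(i), float(i)) for i in range(10)]
shape_rev = [(float(i), float(9 - i)) for i in range(10)]
print(best_subsequence_match(trail, shape))      # ≈ 1.0
print(best_subsequence_match(trail, shape_rev))  # much lower
```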
+
+
+
+Figure 5: (a) The user brushes the trajectories in order to select those from the south-east of France to Paris. (b) They select the most correlated value to take into account the direction of the trails.
+
+### 4.3 Small Multiples and User filtering
+
+The similarity calculation between the Shape and the brushed region produces a similarity value for each trail contained in the region, and the trails are distributed in small multiples as detailed in Section 3. Once the user has brushed, they can adjust the selection by selecting one of the bins displayed in the small multiples and using its range slider. The range slider position controls the similarity level, and its size determines the number of trajectories selected at each slider position: the smallest size selects one trajectory at a time. The range slider size and position are adjusted by direct manipulation using the mouse. This enables fine control over the final selection and makes the algorithm thresholding easier to understand, as the user controls both the granularity of the exploration and the chosen similarity level. As the bins are equally sized, the distribution of the similarity might not be linear across the small multiples. This makes navigation easier since the trajectory distribution in the small multiples is continuous. However, this also entails that not every bin corresponds to the same similarity value interval. To keep this information available to the user, a colored heatmap (from red to green) displays the actual distribution, as depicted in Fig. 3.
+
+In the current scenario, the expert, as they wish to select only the flights to Paris and not from Paris, selects the trajectories that are correlated with the original brush, as the correlation takes into account the brush direction. These trajectories are on the right side of the small multiple, highlighted in green as depicted in Fig. 5-(b).
+
+The expert is then interested in exploring the flights that land on the north landing strip but that are not coming from the east. For this, they perform a new shape brush that considers only the previously selected trajectories, to identify the planes that do come from the east, distinguishable by the "C" shape in their trajectories, as depicted in Fig. 7. To select the geometry precisely, the expert changes to the FPCA metric using the keyboard shortcut. In this case, the small multiples arrange the trajectories from less similar to more similar. This entails that the small multiples based on FPCA also enable the selection of all the trajectories that do not match the specified Shape but are contained in the brushing region. As all trajectories passing through the north landing strip are contained in the brushing region, the most similar trajectories correspond to the ones that have a "C" shape in the same orientation as the Shape, and thus come from the east. The least similar ones are those that interest the analyst, so they can select them by choosing the most dissimilar small multiple, as depicted in Fig. 7-(b).
+
+
+
+Figure 6: One day's aircraft trajectories in French airspace, including taxiing, taking-off, cruise, final approach and landing phases. Selecting specific trajectories using the standard brushing technique will yield inaccurate results due to the large number of trajectories, occlusions, and closeness in the spatial representation.
+
+
+
+Figure 7: (a) The user filters the trajectories that land on the north runway in Paris by brushing following the "C" shape. This retrieves the flights that come from the east. (b) They change the selection on the small multiples, to select all the dissimilar Shapes, resulting in the trajectories that land on the north landing strip but that do not come from the east.
+
+## 5 USE CASES
+
+We argue that there is a strong demand for targeted brushing to select motifs in datasets. In various domains, including aircraft trajectories, eye tracking, GPS trajectories or brain fiber analysis, there is a substantial need to discover hidden motifs in large datasets. Undoubtedly, retrieving desired trails in such datasets helps analysts to focus on the most interesting parts of the data. The system was built using C# and OpenTK on a 64-bit ${}^{1}$ XPS 15 Dell laptop. Although both PC and FPCA provide different but valuable results, the running performance was 10 times faster with PC compared to FPCA.
+
+The technique was first tested informally with experts from the aerospace domain with more than 10 years of experience in trajectory analysis. While the collected feedback was largely positive, we observed some limitations regarding the misunderstanding of our filtering parameters. Given the novelty of our interaction technique, users needed a short training period to better understand the semantics of our small multiples interface. Nevertheless, experts found our technique interesting and useful, since it provides a good initial selection result without any parameter adjustment.
+
+Because the presented technique is not designed to replace standard brushing but rather to complement it, we extend the informal user study with an evaluation based on real use cases. We argue that these use cases show how our technique facilitates trajectory selection in dense areas, where standard brushing would require multiple user actions (panning, zooming, brushing).
+
+### 5.1 Eye-tracking Data
+
+Eye-tracking technologies are gaining popularity for analyzing human behaviour in visualization analysis, human factors, human-computer interaction, neuroscience, psychology and training. The principle consists of finding the likely objects of interest by tracking the movements of the user's eyes [4]. Using a camera, the pupil center position is detected and the gaze, i.e., the point in the scene the user is fixating on, is computed using a prior calibration procedure [15, 25, 27, 49]. The gaze data therefore consist of sampled trails representing the movements of the user's eye gaze while completing a given task.
+
+Two important types of recorded movements characterize eye behaviour: fixations and saccades [21]. Fixations are the eye positions the user fixates for a certain amount of time; in other words, they describe the locations that captured the attention of the user. The saccades connect the different fixations, i.e., they represent the rapid movements of the eye from one location to another. The combination of these eye movements is called the scanpath (Fig. 8A). The scanpath is subject to overplotting, a challenge that may be addressed through precise brushing techniques to select specific trails. Usually, fixation events are studied to create an attention map which shows the salient elements in the scene; the salient elements are located at high-density fixation areas. However, the temporal connections of the different fixations provide additional information: the saccades enable the links between the fixations to be maintained and the temporal meaning of the eye movement to be held. Discovering patterns in the raw scanpath data is difficult since, in contrast to aircraft trajectories, eye movements are sparser and less regular (Fig. 8). To address this, different kinds of visualizations for scanpaths have been proposed in the literature. For example, edge bundling techniques [31] minimize the visual clutter of large and occluded graphs. However, these techniques either alter trail properties such as shape and geometric information, or are otherwise computationally expensive, which makes them unsuitable for precise exploration and mining of large trail datasets. Moreover, it is possible to animate eye movements in order to gain insight into the different fixations and saccades. However, given the large datasets of eye movements retrieved from lengthy experiments containing thousands of saccades, this approach is unnecessarily time-consuming and expensive.
+
+Therefore, we next describe how this study's approach supports proper and more efficient motif discovery on such eye-tracking datasets. The tested dataset is adapted from Peysakhovich et al. [46], where a continuous recording of eye movement in a cockpit was performed. The gaze data was recorded at ${50}\mathrm{\;{Hz}}$ . Sequential points located in a square of ${20} \times {20}$ pixels and separated by at least 200 ms were stored as a fixation event and replaced by their average in order to reduce noise coming from the microsaccades and the tracking device.
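The preprocessing just described is essentially a dispersion-based fixation filter. A minimal sketch, assuming gaze samples as (t, x, y) tuples with t in seconds (the names and the exact run-splitting policy are ours, not from the paper):

```python
def detect_fixations(samples, max_disp=20.0, min_dur=0.2):
    """Collapse runs of gaze samples that stay inside a max_disp x
    max_disp square for at least min_dur seconds into one fixation at
    their average position. Returns (fixations, reduced), where
    `reduced` replaces each fixation run by its averaged point."""
    fixations, reduced, run = [], [], []

    def flush():
        # A run long enough in time becomes one averaged fixation;
        # shorter runs are kept as raw samples.
        if run and run[-1][0] - run[0][0] >= min_dur:
            cx = sum(p[1] for p in run) / len(run)
            cy = sum(p[2] for p in run) / len(run)
            fix = (run[0][0], cx, cy)
            fixations.append(fix)
            reduced.append(fix)
        else:
            reduced.extend(run)

    for s in samples:
        run.append(s)
        xs = [p[1] for p in run]
        ys = [p[2] for p in run]
        if max(xs) - min(xs) > max_disp or max(ys) - min(ys) > max_disp:
            run.pop()          # the new sample left the square
            flush()
            run = [s]          # start a new run at the new sample
    flush()
    return fixations, reduced

# 50 Hz samples: ~0.3 s near (100, 100), then ~0.3 s near (400, 300).
samples = ([(i * 0.02, 100 + (i % 3), 100.0) for i in range(16)]
           + [(i * 0.02, 400.0, 300 + (i % 2)) for i in range(16, 32)])
fix, red = detect_fixations(samples)
print(fix)   # two fixations, near (100, 100) and (400, 300)
```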
+
+To illustrate, we consider a domain expert who wishes to explore the movements of a pilot's eyes in a cockpit. When performing a task, the pilot scans the different instruments in the cockpit, focuses more on certain instruments or interacts with them. In this context especially, the order of pilot attention is important, since checking a parameter on one instrument may give an indication of the information displayed on another instrument. For example, the priority of the Primary Flight Display (PFD) instrument compared to the Flight Control Unit (FCU) differs between the cruise phase and the final landing approach [46]. As an example of analysis, the user wishes to explore the movement of the eye from the Primary Flight Display (PFD) to the Navigation Display (ND). Selecting these scanpaths using traditional brushing techniques would be challenging because of the clutter and would introduce additional accidental selections. Therefore, they brush these scanpaths using a shape that extends from the PFD to the ND, applying the Pearson metric to consider the direction. Fig. 8(a) depicts the brushed eye movements that correspond to the most correlated trails in the small multiple. There are several saccades between those two devices, and this is in line with the fact that saccadic movements between the PFD and the ND are typically caused by parameter checking routines.
+
+However, when the user changes the selection and brushes the scanpath between the ND and the FCU, it is surprising to see that there is only one saccade between them. Brushing now with a shape that goes between the PFD and the FCU (Fig. 8-(c)) also reveals only one scanpath. This is difficult to visualize in the raw data or using the standard brushing technique. A final Shape, searching for an eye movement from the PFD to the LAI passing by the FCU, results in only one saccade (Fig. 8-(d)). To determine the meaning of this behavior, the tool also enables the expert to exploit a continuous transition to increase the visibility and gain insight on when these saccadic movements occurred (temporal view). The user can change the visual mapping from the (x, y) gaze location to the (time, y) temporal view. This smooth transition avoids abrupt changes to the visualization [33] (Fig. 9).
+
+### 5.2 GPS Data
+
+GPS trajectories consist of sequential spatial locations recorded by a measurement instrument. Subjects such as people, wheeled vehicles, transportation modes and devices may be tracked by analyzing the spatial positions provided by these instruments, and analysts may need to explore and analyze the different paths followed by the users. Advances in position-acquisition and ubiquitous devices have produced extremely large location datasets, which capture the mobility of different moving targets such as autonomous vehicles, pedestrians, natural phenomena, etc. The prevalence of these datasets calls for novel approaches to discover information and mine the data [62].
+
+---
+
+${}^{1}$ Intel(R) Core(TM) i7-4712HQ CPU @ 2.30 GHz (2301 MHz), 4 cores, 8 threads
+
+---
+
+
+
+Figure 8: (a) Selected eye movements between the PFD and ND, (b) Selected eye movements in the vicinity of the PFD, (c) Saccades between the ND and the FCU, (d) Eye movement from the PFD to the LAI passing by the FCU.
+
+
+
+Figure 9: This figure shows the animated transition between the $\mathrm{X}/\mathrm{Y}$ gaze view and the temporal view. This helps to detect how the selected eye movements occurred over time.
+
+Traditionally, researchers analyse GPS logs by defining a distance function (e.g., ${KNN}$) between two trajectories and then applying expensive processing algorithms to address the similarity detection. For example, they first convert the trajectories into a set of road segments by leveraging map-matching algorithms; afterwards, the relationship between trajectories is managed using indexing structures [56, 62]. Using the data provided by Zheng et al. [63], we seek to investigate the different routes followed by users in Beijing. The data consist of GPS trajectories collected for the Geolife project by 182 users over a period of more than five years (from April 2007 to August 2012) [63]. Each trajectory point is represented by 3D latitude, longitude and altitude values. A range of users' outdoor movements was recorded, including life routines such as travelling to work, sports, shopping, etc.
+
+As the quantity of GPS data becomes increasingly large and complex, proper brushing is challenging. Using bounding boxes somewhat alleviates this difficulty by anchoring the regions of interest at their major corners. However, many boxes must be placed carefully for one single selection. The boxes can help analysts to select all the trajectories that pass through a specific location, but they do not simplify the analysis of overlapping and directional trajectories. This study's approach intuitively supports path differentiation for overlapping trajectories and takes direction into account. For example, we are interested in answering questions about the activities people perform and their sequential order [63]. For this dataset, the authors were interested in finding event sequences that could inform tourists. The shape-based brushing could serve as a tool to further explore their results. For example, if they find an interesting classical sequence that passes through locations A and B, they can further explore whether this sequence corresponds to a larger sequence and what other locations are visited before or after. A first brushing and refinement using the FPCA metric and small multiples enables them to select all the trajectories that include a precise event sequence passing through a set of locations, as depicted in Fig. 10. A second brushing using the Pearson metric enables further explorations that also take into account the direction of the trajectories. By switching between the correlated trajectories and the anti-correlated ones, the user can gain insight into the visitation order of the selected locations.
+
+
+
+Figure 10: GPS locations of pedestrians (black). Selection of three different trajectories containing three different event sequences from [63] (green).
+
+## 6 Discussion
+
+The proposed brushing technique leverages existing methods with the novel usage of the shape of the brush as an additional filtering parameter. The interaction pipeline shows different data processing steps in which the comparison algorithm between the brushed items and the shape of the brush plays a central role. While the presented pipeline contains two specific and complementary comparison metrics, another one can be used as long as it fulfills the continuity and metric semantic requirements (DR2). There are indeed many standard approaches (ED, DTW, discrete Fréchet distance) that are largely used by the community and could extend our technique when faced with different datasets. Furthermore, the contribution of this paper is a novel shape-based brushing technique and not simply a shape similarity measure. In our work, we found two reasonable similarity measures that fulfill our shape-based brushing method: the FPCA distance comparison provides an accurate curve similarity measurement, while the Pearson metric provides a complementary criterion with the direction of the trajectory.
+
+In terms of visualization, the binning process provides a valuable overview of the ordering of the trajectory shapes. This important step eases the filtering and adjustment of the selected items. It is important to mention that this filtering operates in a continuous manner: trajectories are added or removed one by one when adjusting the filtering parameter. This helps to fine-tune the selected items with accurate filtering parameters. The presented scenario shows how small multiple interaction can provide flexibility. This is especially the case when the user brushes specific trajectories that are then removed by setting the compatibility metric to uncorrelated; this operation performs a brush removal. The proposed filtering method can also consider other types of binning and allows different possible representations (i.e. various visual mapping solutions).
+
+This paper illustrates the shape-based brushing technique with three application domains (air traffic, eye tracking, GPS data), but it can be extended to any moving object dataset. However, our evaluation is limited by the number of studied application domains. Furthermore, although various users and practitioners participated in the design of the technique and assessed the simplicity and intuitiveness of the method, we did not conduct a more formal evaluation. The shape-based brush is aimed at complementing the traditional brush, and in no way do we argue that it is more efficient or effective than the original technique for all cases. The scenarios are examples of how this technique enables the selection of trails that would otherwise be difficult to manipulate, and how the usage of the brush area and its shape to perform comparison opens novel brushing perspectives. We believe they provide strong evidence of the potential of such a technique.
+
+The technique also presents limitations in its selection flexibility, as it is not yet possible to combine selections. Many extensions can be applied to the last step of the pipeline to support this. This step mainly addresses DR4, where the selection can be refined thanks to user inputs. As such, multiple selections can be envisaged and finally composed. Boolean operations can be considered with the standard And, Or, Not. While this composition is easy to model, it remains difficult for an end user to master the operations when there are more than two subset operations [33, 60]. As a solution, Hurter et al. proposed an implicit item composition with a simple drag-and-drop technique [33]. The pipeline can be extended with the same paradigm, where a placeholder can store filtered items that are then composed to produce the final result. The user can then refine the selection by adding, removing or merging multiple selections.
+
+## 7 CONCLUSION
+
+In this paper, a novel sketch-based brushing technique for trail selection was proposed and investigated. This approach facilitates user selection in occluded and cluttered data visualizations, where the selection is performed on a standard brush basis while taking into account the shape of the brush area as a filtering tool. This brushing tool works as follows. First, the user brushes the trajectory of interest, trying to follow its shape as closely as possible. The system then pre-selects every trajectory that touches the brush area. Next, the algorithm computes a distance between the shape of every brushed trajectory and the shape of the brushed area. Comparison scores are then sorted, and the system displays visual bins presenting trajectories from the lowest scores (unrelated or dissimilar trajectories) to the highest scores (highly correlated or similar trajectories). The user can then adjust a filtering parameter to refine the actual selection to trajectories that touch the brushed area and have a suitable correlation with the shape of the brushed area. The cornerstone of this shape-based technique is the shape comparison method. Therefore, we chose two algorithms which provide enough flexibility to adjust the set of selected trajectories: one relies on functional decomposition analysis, which ensures a shape curvature comparison, while the other ensures an accurate geometry-based comparison (the Pearson algorithm). To validate the efficiency of this method, we showed three examples of usage with various types of trail datasets.
+
+This work can be extended in many directions. We can first extend it to additional application domains and other types of datasets, such as car or animal movements or any type of time-varying data. We can also consider other types of input beyond the mouse pointer. Virtual Reality data exploration, with the so-called immersive analytics domain, offers a relevant extension that will be investigated in the near future. Finally, we can also consider adding machine learning to help users brush relevant trajectories. For instance, in a very dense area, where the relevant trajectories or even parts of the trajectories are not visible due to occlusion, additional visual processing may be useful to guide the user during the brushing process.
+
+## REFERENCES
+
+[1] D. Akers. Cinch: A cooperatively designed marking interface for 3d pathway selection. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, UIST '06, pp. 33-42. ACM, New York, NY, USA, 2006. doi: 10.1145/1166253.1166260
+
+[2] D. Akers, A. Sherbondy, R. Mackenzie, R. Dougherty, and B. Wandell. Exploration of the brain's white matter pathways with dynamic queries. In Proceedings of the Conference on Visualization '04, VIS '04, pp. 377-384. IEEE Computer Society, Washington, DC, USA, 2004. doi: 10.1109/VISUAL.2004.30
+
+[3] H. Almoctar, P. Irani, V. Peysakhovich, and C. Hurter. Path word: A multimodal password entry method for ad-hoc authentication based on digits' shape and smooth pursuit eye movements. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, ICMI '18, pp. 268-277. ACM, New York, NY, USA, 2018. doi: 10.1145/3242969.3243008
+
+[5] G. Andrienko and N. Andrienko. A general framework for using aggregation in visual exploration of movement data. The Cartographic Journal, 47(1):22-40, 2010. doi: 10.1179/000870409X12525737905042
+
+[6] G. Andrienko, N. Andrienko, and M. Heurich. An event-based conceptual model for context-aware movement analysis. International Journal of Geographical Information Science, 25(9):1347-1370, 2011. doi: 10.1080/13658816.2011.556120
+
+[7] N. Andrienko, G. Andrienko, J. M. C. Garcia, and D. Scarlatti. Analysis of flight variability: a systematic approach. IEEE Transactions on Visualization and Computer Graphics, Aug. 2018. doi: 10.1109/TVCG.2018.2864811
+
+[8] B. Bach, P. Dragicevic, D. Archambault, C. Hurter, and S. Carpendale. A descriptive framework for temporal data visualizations based on generalized space-time cubes. Computer Graphics Forum, 36(6):36-61, 2017. doi: 10.1111/cgf.12804
+
+[9] R. A. Becker and W. S. Cleveland. Brushing scatterplots. Technometrics, 29(2):127-142, May 1987. doi: 10.2307/1269768
+
+[10] M. Bostock, V. Ogievetsky, and J. Heer. D3 data-driven documents. IEEE Transactions on Visualization and Computer Graphics, 17(12):2301-2309, Dec. 2011. doi: 10.1109/TVCG.2011.185
+
+[11] S. Buschmann, M. Trapp, and J. Döllner. Animated visualization of spatial-temporal trajectory data for air-traffic analysis. The Visual Computer, 32(3):371-381, Mar 2016. doi: 10.1007/s00371-015-1185-9
+
+[12] S. K. Card, J. D. Mackinlay, and B. Shneiderman, eds. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999.
+
+[14] G. Casiez, N. Roussel, and D. Vogel. 1€ filter: A simple speed-based low-pass filter for noisy input in interactive systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 2527-2530. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208639
+
+[15] F. M. Celebi, E. S. Kim, Q. Wang, C. A. Wall, and F. Shic. A smooth pursuit calibration technique. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '14, pp. 377-378. ACM, New York, NY, USA, 2014. doi: 10.1145/2578153.2583042
+
+[16] Z. Chen, H. T. Shen, X. Zhou, Y. Zheng, and X. Xie. Searching trajectories by locations: An efficiency study. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, SIGMOD '10, pp. 255-266. ACM, New York, NY, USA, 2010. doi: 10.1145/1807167.1807197
+
+[17] C. Clarke, A. Bellino, A. Esteves, and H. Gellersen. Remote control by body movement in synchrony with orbiting widgets: An evaluation of tracematch. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 1(3):45:1-45:22, Sept. 2017. doi: 10.1145/3130910
+
+[18] M. Correll and M. Gleicher. The semantics of sketch: Flexibility in visual query systems for time series data. In 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 131-140, Oct 2016. doi: 10.1109/VAST.2016.7883519
+
+[19] T. Devogele, L. Etienne, M. Esnault, and F. Lardy. Optimized discrete Fréchet distance between trajectories. In Proceedings of the 6th ACM SIGSPATIAL Workshop on Analytics for Big Geospatial Data, BigSpatial '17, pp. 11-19. ACM, New York, NY, USA, 2017. doi: 10.1145/3150919.3150924
+
+[20] H. Ding, G. Trajcevski, P. Scheuermann, X. Wang, and E. Keogh. Querying and mining of time series data: Experimental comparison of representations and distance measures. Proc. VLDB Endow., 1(2):1542-1552, Aug. 2008. doi: 10.14778/1454159.1454226
+
+[21] A. Duchowski. Eye tracking methodology: Theory and practice, vol. 373. Springer, 2007.
+
+[22] M. H. Everts, E. Begue, H. Bekker, J. B. T. M. Roerdink, and T. Isenberg. Exploration of the brain's white matter structure through visual abstraction and multi-scale local fiber tract contraction. IEEE Transactions on Visualization and Computer Graphics, 21(7):808-821, 2015. doi: 10.1109/TVCG.2015.2403323
+
+[23] G. Andrienko, N. Andrienko, P. Bak, D. Keim, and S. Wrobel. Visual Analytics of Movement. Springer, Berlin, Heidelberg, 2013. doi: 10.1007/978-3-642-37583-5
+
+[24] A. Hassoumi and C. Hurter. Eye gesture in a mixed reality environment. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP, pp. 183-187. INSTICC, SciTePress, 2019. doi: 10.5220/0007684001830187
+
+[25] A. Hassoumi, V. Peysakhovich, and C. Hurter. Uncertainty visualization of gaze estimation to support operator-controlled calibration. Journal of Eye Movement Research, 10(5), 2018. doi: 10.16910/jemr.10.5.6
+
+[26] A. Hassoumi, V. Peysakhovich, and C. Hurter. Eyeflow: Pursuit interactions using an unmodified camera. In Proceedings of the 11th ACM Symposium on Eye Tracking Research Applications, ETRA '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3314111.3319820
+
+[27] A. Hassoumi, V. Peysakhovich, and C. Hurter. Improving eye-tracking calibration accuracy using symbolic regression. PLOS ONE, 14(3):1- 22, 03 2019. doi: 10.1371/journal.pone.0213675
+
+[28] H. Hauser, F. Ledermann, and H. Doleisch. Angular brushing of extended parallel coordinates. In IEEE Symposium on Information Visualization, 2002. INFOVIS 2002., pp. 127-130, Oct 2002. doi: 10.1109/INFVIS.2002.1173157
+
+[29] C. Holz and S. Feiner. Relaxed selection techniques for querying time-series graphs. In Proceedings of the 22Nd Annual ACM Symposium on User Interface Software and Technology, UIST '09, pp. 213-222. ACM, New York, NY, USA, 2009. doi: 10.1145/1622176.1622217
+
+[30] C. Hurter, S. Puechmorel, F. Nicol, and A. Telea. Functional decomposition for bundled simplification of trail sets. IEEE Transactions on Visualization and Computer Graphics, 24(1):500-510, Jan 2018. doi: 10.1109/TVCG.2017.2744338
+
+[32] C. Hurter, N. H. Riche, S. M. Drucker, M. Cordeil, R. Alligier, and R. Vuillemot. Fiberclay: Sculpting three dimensional trajectories to reveal structural insights. IEEE Transactions on Visualization and Computer Graphics, 25(1):704-714, Jan 2019. doi: 10.1109/TVCG.2018.2865191
+
+[33] C. Hurter, B. Tissoires, and S. Conversy. Fromdady: Spreading aircraft trajectories across views to support iterative queries. IEEE Transactions on Visualization and Computer Graphics, 15(6):1017-1024, Nov. 2009. doi: 10.1109/TVCG.2009.145
+
+[34] H. Jeung, M. L. Yiu, and C. S. Jensen. Trajectory Pattern Mining, pp. 143-177. Springer New York, New York, NY, 2011. doi: 10.1007/978-1-4614-1629-6_5
+
+[35] K. Pearson. Note on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 1895. doi: 10.1098/rspl.1895.0041
+
+[36] J. F. Kruiger, A. Hassoumi, H.-J. Schulz, A. Telea, and C. Hurter. Multidimensional data exploration by explicitly controlled animation. Informatics, 4(3), 2017. doi: 10.3390/informatics4030026
+
+[37] A. Lhuillier, C. Hurter, and A. Telea. State of the art in edge and trail bundling techniques. Computer Graphics Forum, 2017. doi: 10.1111/cgf.13213
+
+[38] J. Lin, E. Keogh, S. Lonardi, and B. Chiu. A symbolic representation of time series, with implications for streaming algorithms. In Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, DMKD '03, pp. 2-11. ACM, New York, NY, USA, 2003. doi: 10.1145/882082.882086
+
+[39] J. Lin, E. Keogh, L. Wei, and S. Lonardi. Experiencing sax: A novel symbolic representation of time series. Data Min. Knowl. Discov., 15(2):107-144, Oct. 2007. doi: 10.1007/s10618-007-0064-z
+
+[40] D. Lindlbauer, M. Haller, M. Hancock, S. D. Scott, and W. Stuerzlinger. Perceptual grouping: Selection assistance for digital sketching. In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces, ITS '13, pp. 51-60. ACM, New York, NY, USA, 2013. doi: 10.1145/2512349.2512801
+
+[41] W. Luo, H. Tan, H. Mao, and L. M. Ni. Efficient similarity joins on massive high-dimensional datasets using mapreduce. In 2012 IEEE 13th International Conference on Mobile Data Management, pp. 1-10, July 2012. doi: 10.1109/MDM.2012.25
+
+[42] Y. Ma, X. Meng, and S. Wang. Parallel similarity joins on massive high-dimensional data using mapreduce. Concurr. Comput. : Pract. Exper., 28(1):166-183, Jan. 2016. doi: 10.1002/cpe.3663
+
+[43] P. K. Muthumanickam, K. Vrotsou, M. Cooper, and J. Johansson. Shape grammar extraction for efficient query-by-sketch pattern matching in long time series. In 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 121-130, Oct 2016. doi: 10.1109/VAST.2016.7883518
+
+[44] M. Nielsen, N. Elmqvist, and K. Grønbæk. Scribble query: Fluid touch brushing for multivariate data visualization. In Proceedings of the 28th Australian Conference on Computer-Human Interaction, OzCHI '16, pp. 381-390. ACM, New York, NY, USA, 2016. doi: 10.1145/3010915.3010951
+
+[45] S. Owada, F. Nielsen, and T. Igarashi. Volume catcher. In Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, I3D '05, pp. 111-116. ACM, New York, NY, USA, 2005. doi: 10.1145/1053427.1053445
+
+[46] V. Peysakhovich, C. Hurter, and A. Telea. Attribute-driven edge bundling for general graphs with applications in trail analysis. In 2015 IEEE Pacific Visualization Symposium (PacificVis), pp. 39-46, April 2015. doi: 10.1109/PACIFICVIS.2015.7156354
+
+[47] T. Rakthanmanon, B. Campana, A. Mueen, G. Batista, B. Westover, Q. Zhu, J. Zakaria, and E. Keogh. Searching and mining trillions of time series subsequences under dynamic time warping. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, pp. 262-270. ACM, New York, NY, USA, 2012. doi: 10.1145/2339530.2339576
+
+[48] J. Ramsay and B. Silverman. Functional Data Analysis. Springer Series in Statistics. Springer, 2005. doi: 10.1007/b98888
+
+[49] T. Santini, W. Fuhl, and E. Kasneci. Calibme: Fast and unsupervised eye tracker calibration for gaze-based pervasive human-computer interaction. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 2594-2605. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025950
+
+[50] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-lite: A grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341-350, Jan. 2017. doi: 10.1109/TVCG.2016.2599030
+
+[51] R. Scheepens, C. Hurter, H. Van De Wetering, and J. J. Van Wijk. Visualization, selection, and analysis of traffic flows. IEEE Transactions on Visualization and Computer Graphics, 22(1):379-388, Jan 2016. doi: 10.1109/TVCG.2015.2467112
+
+[52] B. Shneiderman. Direct manipulation: A step beyond programming languages. In Human-Computer Interaction, pp. 461-467. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1987.
+
+[53] B. Shneiderman. Dynamic queries for visual information seeking. IEEE Softw., 11(6):70-77, Nov. 1994. doi: 10.1109/52.329404
+
+[54] Y. Tao, D. Papadias, and Q. Shen. Continuous nearest neighbor search. In Proceedings of the 28th International Conference on Very Large Data Bases, VLDB '02, pp. 287-298. VLDB Endowment, 2002.
+
+[55] M. Vidal, A. Bulling, and H. Gellersen. Pursuits: spontaneous interaction with displays based on smooth pursuit eye movement and moving targets. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, pp. 439-448. ACM, 2013.
+
+[56] Y. Wang, Y. Zheng, and Y. Xue. Travel time estimation of a path using sparse trajectories. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pp. 25-34. ACM, New York, NY, USA, 2014. doi: 10.1145/2623330.2623656
+
+[57] M. O. Ward. Xmdvtool: Integrating multiple methods for visualizing multivariate data. In Proceedings of the Conference on Visualization '94, VIS '94, pp. 326-333. IEEE Computer Society Press, Los Alamitos, CA, USA, 1994.
+
+[58] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, pp. 159-168. ACM, New York, NY, USA, 2007. doi: 10.1145/1294211.1294238
+
+[59] C. M. Yeh, Y. Zhu, L. Ulanova, N. Begum, Y. Ding, H. A. Dau, D. F. Silva, A. Mueen, and E. Keogh. Matrix profile I: All pairs similarity joins for time series: A unifying view that includes motifs, discords and shapelets. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 1317-1322, Dec 2016. doi: 10.1109/ICDM.2016.0179
+
+[60] D. Young and B. Shneiderman. A graphical filter/flow representation of boolean queries: A prototype implementation and evaluation. J. Am. Soc. Inf. Sci., 44(6):327-339, July 1993. doi: 10.1002/(SICI)1097-4571(199307)44:6<327::AID-ASI3>3.0.CO;2-J
+
+[61] J. Zhao and L. Itti. shapedtw: Shape dynamic time warping. Pattern Recognition, 74:171-184, 2018. doi: 10.1016/j.patcog.2017.09.020
+
+[62] Y. Zheng. Trajectory data mining: An overview. ACM Transactions on Intelligent Systems and Technology, Sept. 2015.
+
+[63] Y. Zheng, L. Zhang, X. Xie, and W.-Y. Ma. Mining interesting locations and travel sequences from gps trajectories. In Proceedings of the 18th International Conference on World Wide Web, WWW '09, pp. 791-800. ACM, New York, NY, USA, 2009. doi: 10.1145/1526709.1526816
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6cdfeb5f7bf99ebeee15203791dd30a256cb4bc1
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2020/Graphics_Interface 2020 Conference/yaeJLwvTr/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,239 @@
+§ INTERACTIVE SHAPE-BASED BRUSHING TECHNIQUE FOR TRAIL SETS
+
+Almoctar Hassoumi *
+
+École Nationale de l'Aviation Civile
+
+María-Jesús Lobo†
+
+École Nationale de l'Aviation Civile
+
+Gabriel Jarry ${}^{ \ddagger }$
+
+École Nationale de l'Aviation Civile
+
+Vsevolod Peysakhovich ${}^{§}$
+
+ISAE SUPAERO
+
+Christophe Hurter ${}^{ \P }$
+
+École Nationale de l'Aviation Civile
+
+§ ABSTRACT
+
+Brushing techniques have a long history, with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and overlap. Existing techniques rely on trial and error combined with many view modifications such as panning, zooming, and selection refinements. For moving object analysis, recorded positions are connected into line segments forming trajectories, thus creating more occlusion and overplotting. As a solution for selection in cluttered views, this paper investigates a novel brushing technique which relies not only on the actual brushing location but also on the shape of the brushed area. The process can be described as follows. Firstly, the user brushes the region where trajectories of interest are visible (standard brushing technique). Secondly, the shape of the brushed area is used to select similar items. Thirdly, the user adjusts the degree of similarity to filter out the requested trajectories. This brushing technique encompasses two types of comparison metrics: the piecewise Pearson correlation and a similarity measurement based on information geometry. To show the efficiency of this novel brushing method, we apply it to concrete scenarios with datasets from air traffic control, eye tracking, and GPS trajectories.
+
+Index Terms: Human-centered computing-Interaction design-Interaction design theory, concepts and paradigms; Human-centered computing-Visualization-Visualization application domains-Visual analytics; Human-centered computing-Interaction design-Systems and tools for interaction design; Information systems-Information retrieval-Query intent
+
+§ 1 INTRODUCTION
+
+Brushing techniques [9], which are part of the standard InfoVis pipeline for data visualization and exploration [12], already have a long history. They are now standard interaction techniques in visualization systems [57] and toolkits [10, 50]. Such techniques help users visually select items of interest through interactive paradigms (e.g. lasso, boxes, brush) in a view. When the user visually detects a relevant pattern (e.g. a specific curve or a trend), the brushing technique can be applied to select it. While this selection can be performed seamlessly, the user may still face issues when the view becomes cluttered with many tangled items. In such dense visualizations, existing brushing techniques also select items in the vicinity of the target and thus capture part of the clutter (see Fig. 1). To address this issue, the user can adjust the brushing parameters
+
+Figure 1: This figure shows the main rationale for our shape-based brushing technique. A) Unselected trail set, where the user wishes to select the only curved trail. B) Standard brushing technique, where brushing the curved trail also selects every other trail that touches the brushing area. C) Our method compares the shape of the brush with each brushed trajectory, and only the trajectories similar in shape are selected.
+
+by changing the brush size or the selection box locations. However, this may take time and require many iterations or trials. This paper proposes a novel brushing technique that filters trajectories by taking into account the shape of the brush in addition to the brushed area. This dual input is provided at the same time and opens novel opportunities for brushing techniques. The cornerstone of such a technique is a shape comparison algorithm. This algorithm must provide a numerical similarity measurement that is ordered (low values for unrelated shapes, high values for correlated shapes), continuous (no steps in the computed metric) and with a semantics such that the user can at least partially understand the logic behind the similarity measurement. Thus, to build such a dual filtering technique, the following design requirements (DR) must be fulfilled:
+
+ * DR1: The technique enables users to select occluded trajectories in dense or cluttered views.
+
+ * DR2: The shape comparison metric is flexible with continuous, ordered and meaningful values.
+
+ * DR3: The technique enables incremental selection refinement.
+
+ * DR4: The technique is interactive.
+
+Taking into account the identified requirements (DR1-DR4), this paper presents a novel shape-based brushing tool. To the best of our knowledge, such a combination of brushing and shape comparison techniques has not yet been explored in trajectory analysis, and this paper fills this gap. The remainder of the paper is organized as follows. First, previous works in the domains of brushing and shape comparison are reviewed. Second, the brushing pipeline is detailed with explanations of the comparison metrics and data processing. Next, the use of the technique is demonstrated through different use cases. The brushing technique is then discussed in terms of usefulness, specific applications and possible limitations. Finally, the paper concludes with a summary of our contribution and provides future research directions.
+
+*e-mail: almoctar.hassoumi-assoumana@isae.fr
+
+${}^{ \dagger }$ e-mail: maria-jesus.lobo@ign.fr
+
+${}^{ \ddagger }$ e-mail: gabriel.jarry@enac.fr
+
+§e-mail: vsevolod.peysakhovich@isae.fr
+
+${}^{ \P }$ e-mail: christophe.hurter@enac.fr
+
+§ 2 RELATED WORK
+
+There are various domain-specific techniques targeting trail exploration and analysis. In this section, we explore three major components of selection techniques for trail-set exploration and analysis relevant to our work: brushing, query-by-content, and similarity measurement.
+
+§ 2.1 BRUSHING IN TRAJECTORY VISUALIZATION
+
+Trail-set exploration relies on pattern discovery [13] where relevant trails need to be selected for further analysis. Brushing is a selection technique for information visualization, where the user interactively highlights a subset of the data by defining an area of interest. This technique has been shown to be a powerful and generic interaction technique for information retrieval [9]. The selection can be further refined using interactive filtering techniques [23]. The approach presented in this paper is based on dynamic queries [53] and direct manipulation [36, 52].
+
+Spatio-temporal visualizations, and trajectory visualizations in particular, are complex because of their 3D and time-varying nature. For this reason, several systems and frameworks have been designed especially to visualize them [5, 6, 8, 32, 33, 51]. Most of these systems include selection techniques based on brushing, and some of them enable further query refinement through boolean operations [32, 33].
+
+These techniques do not take into account the shape of the trails, so selecting a specific one with a particular shape requires many manipulations and iterations to fine-tune the selection.
+
+§ 2.2 QUERY-BY-CONTENT
+
+While this paper attempts to suggest a shape-based brushing technique for trail sets, researchers have explored shape-based selection techniques in different contexts, both using arbitrary shapes and sketch-based queries.
+
+Sketch-based querying presents several advantages over traditional selection [18]. It has been used for volumetric datasets [45] and neural pathway selection [1]. This last work is the closest to the current study. However, the authors presented a domain-specific application and based their algorithm on the Euclidean distance. This is not a robust metric for similarity detection, since it is hard to define a value that indicates high similarity, and such a value varies greatly with the domain and the data considered. In addition, this metric does not support direction and orientation matching, nor the combination of brushing with filtering.
+
+In addition, user-sketched pattern matching plays an important role in searching for and localizing time-series patterns of interest [29, 43]. For example, Holz and Feiner [29] defined a relaxed selection technique in which the user draws a query to select the relevant part of a displayed time series. Correll et al. [18] propose a sketch-based query system to retrieve time series using dynamic time warping, mean square error, or the Hough transform. They present all matches individually in small multiples, arranged according to the similarity measurement. These techniques, like the one proposed here, also take advantage of sketches to manipulate data. However, they are designed for querying rather than selecting, and specifically for 2D data. Other approaches use boxes and spheres to specify the regions of interest [2, 28, 44], and the desired trails are obtained if they intersect these regions. However, many parameters must be changed by the analyst in order to achieve even a single selection: the regions of interest must be re-scaled appropriately and re-positioned back and forth multiple times for each operation. Additionally, the many selection box modifications required to refine the selection hinder selection efficiency [2].
+
+§ 2.3 SIMILARITY MEASURES
+
+Given a set of trajectories, we are interested in retrieving the subset of trajectories most similar to a user-sketched query. Common approaches include selecting the k nearest neighbors (KNN) based on the Euclidean distance (ED), or using elastic matching metrics (e.g., Dynamic Time Warping, DTW). Affinity cues have also been used to group objects; for example, objects of identical color are given a high similarity coefficient for the color affinity [40].
+
+The Euclidean distance is the simplest to calculate but, unlike mathematical similarity measurements [59], which are usually bounded between 0 and 1 or between -1 and 1, ED is unbounded and task-specific. A number of works have suggested transforming the raw data into a lower-dimensional representation (e.g., SAX [39, 41], PAA [38, 42]). However, these require adjusting many abstract, dataset-dependent parameters, which reduces their flexibility. Lindlbauer et al. [40] presented global and local proximity of 2D sketches. The second measure is used for similarity detection where an object is contained within another one, and is not relevant to this work. While the first measure refers to the distance between two objects (mostly circles and lines), there is no guarantee that the approach generalizes to large datasets such as eye tracking, GPS, or aircraft trajectories. In contrast, Dynamic Time Warping has been considered the best measurement [20] in various applications [47] to select shapes by matching their representation [24]. It has been used in gesture recognition [58], eye movements [3, 26], and shapes [61]. An overview of existing metrics is available [47].
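To ground the discussion of elastic metrics above, here is a minimal textbook DTW implementation (an illustrative sketch of the classic dynamic programming formulation, not the paper's code; the name `dtw_distance` is ours). Note that the returned cost is unbounded and scale-dependent, which is precisely what makes a universal similarity threshold hard to choose:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two 1-D sequences.

    Unlike a bounded correlation score, the result is an unbounded,
    scale-dependent cost, so a usable threshold is task-specific.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))        # 0.0
print(dtw_distance([0, 0, 1, 2, 3], [0, 1, 2, 3, 3]))  # 0.0 (warping absorbs the shift)
```

The second pair has a non-zero pointwise Euclidean distance, yet DTW reports a cost of 0: warping is tolerant of local time shifts, but it still provides no direction information and no natural bounded range.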
+
+The k-nearest-neighbor (KNN) approach has also long been studied for trail similarity detection [16, 54]. However, with this metric, two trails may yield a seemingly good match (i.e., a small difference measure as above) even if they have very different shapes. Other measurements for trajectory segment similarity are the Minimum Bounding Rectangles (MBR) [34] and the Fréchet distance [19], which leverage the perpendicular, parallel and angular distances to compute the distance between two trajectories.
+
+In order to address the aforementioned issues, we propose and investigate two different approaches. The first is based on directly calculating the correlations between the shape of the brush and the trails on the $x$-axis and $y$-axis independently (section 3.1.1). The second (section 3.1.2) is based on the geometrical information of the trails, i.e., the trails are transformed into a new space (using the eigenvectors of the covariance matrix) which is more suitable for similarity detection. This paper's approach leverages the potential of these two metrics to foster efficient shape-based brushing of large cluttered datasets. As such, it allows detailed motif discovery to be performed interactively.
+
+§ 3 INTERACTION PIPELINE
+
+This section presents the interactive pipeline (Fig. 2), which fulfills the identified design requirements (DR1-DR4). As for any interactive system, user input plays the main role and operates at every stage of the data processing. First, the input data (i.e. the trail set) is given. Next, the user inputs a brush, from which the pipeline extracts the brushed items, the brush area and its shape. Then, a comparison metric is computed between every brushed item and the shape of the brush (similarity measurement). A binning process serves to filter the data, which is then presented to the user. The user can then refine the brush area and choose another comparison metric until the desired items are selected.
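The stages above can be sketched end to end in a few lines. Everything here is an illustrative assumption rather than the paper's implementation: the function names, the 32-point arc-length resampling, and the radius-based brush test are ours, and the similarity metric is passed in as a `score` function returning values in [-1, 1]:

```python
import numpy as np

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(u, t, pts[:, k]) for k in range(pts.shape[1])])

def touches(trail_pts, brush_pts, radius):
    """Pre-selection stage: a trail is brushed if any of its points
    falls within the brush radius of the brushed stroke."""
    d = np.linalg.norm(trail_pts[:, None, :] - brush_pts[None, :, :], axis=2)
    return bool((d < radius).any())

def brush_pipeline(trails, brush, radius, score, threshold):
    """Brush -> pre-select -> score against the brush shape -> filter."""
    brush_pts = resample(brush)
    brushed = [t for t in trails if touches(resample(t), brush_pts, radius)]
    scored = sorted(((score(resample(t), brush_pts), t) for t in brushed),
                    key=lambda st: st[0])  # low = dissimilar, high = similar
    return [t for s, t in scored if s >= threshold]
```

A call such as `brush_pipeline(trails, stroke, 0.5, my_metric, 0.2)` then returns only the brushed trails whose shape sufficiently matches the stroke; the sorted scores are what the binning stage would display.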
+
+Figure 2: This figure shows the interaction pipeline. The pipeline extracts the brushed items as well as the shape of the brush, which is then compared with the brushed items (metrics stage). Then, brushed items are stored in bins displayed as small multiples, where the user can interactively refine the selection (binning and filtering stage). Finally, the user can adjust the selection with additional brushing interactions. Note that both the PC and FPCA metrics can be used for the binning process, but separately: each small multiple comes from one metric exclusively.
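The binning and filtering stages of the pipeline can be sketched as follows (a minimal illustration; the bin count and the [-1, 1] score range are our assumptions, matching the Pearson case, and the function names are ours):

```python
import numpy as np

def bin_by_similarity(scores, n_bins=8, lo=-1.0, hi=1.0):
    """Binning stage: distribute item indices into n_bins bins ordered
    from dissimilar (lowest scores) to similar (highest scores)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    bins = [[] for _ in range(n_bins)]
    for idx, s in enumerate(scores):
        b = int(np.searchsorted(edges, s, side="right")) - 1
        bins[min(max(b, 0), n_bins - 1)].append(idx)  # clamp scores at the range ends
    return bins

def filter_selection(scores, threshold):
    """Filtering stage: the user-adjusted parameter keeps only items
    whose similarity to the brush shape exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]
```

Each bin corresponds to one small multiple; moving the filtering slider simply changes `threshold` and re-runs `filter_selection`, which keeps the refinement interactive (DR3, DR4).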
+
+§ 3.1 METRICS
+
+As previously detailed in the related work section, many comparison metrics exist. While our pipeline can use any metric that fulfills design requirement DR2 (continuous, ordered and meaningful comparison), the presented pipeline contains only two complementary algorithms: Pearson and FPCA. The first focuses on shape comparison through the correlation between representative vertices, while the latter focuses on curvature comparison. As shown in Fig. 3, each metric produces different results, and the user can use either of them depending on the type of filtering to be performed. During the initial development of this technique, we first considered using the Euclidean distance (ED) and DTW, but we rapidly observed their limitations, and we argue that PC and FPCA are more suitable for trajectory datasets. First, PC values are easier to threshold: a PC value > 0.8 provides a clear indication of the similarity of two shapes. Moreover, to accurately discriminate between complex trajectories, we need to go beyond the performance of ED. Furthermore, the direction of the trajectories, while essential for our brushing technique, is not supported by the ED and DTW similarity measures. Another disadvantage of ED is its domain- and task-specific threshold, which can vary drastically depending on the context. PC, on the other hand, uses the same threshold independently of the type of dataset. The two following sections detail the two proposed algorithms.
+
+§ 3.1.1 PEARSON'S CORRELATION (PC)
+
+Pearson's Correlation (PC) is a statistical tool that measures the correlation between two datasets and produces a continuous measurement in $[-1, 1]$, with 1 indicating a high degree of similarity and -1 an anti-correlation indicating an opposite trend [35]. This metric is well suited (DR2) to measuring the similarity of datasets (i.e. of trajectory points) [17, 55].
+
+Pearson’s Correlation ${PC}$ between two trails ${T}_{i}$ and ${T}_{j}$ on the $x - {axis}$ can be defined as follows:
+
+$$
+{r}_{x} = \frac{\operatorname{COV}\left( {{T}_{{i}_{x}},{T}_{{j}_{x}}}\right) }{{\sigma }_{{T}_{{i}_{x}}}{\sigma }_{{T}_{{j}_{x}}}},\;\operatorname{COV}\left( {{T}_{{i}_{x}},{T}_{{j}_{x}}}\right) = E\left\lbrack {\left( {{T}_{{i}_{x}} - \overline{{T}_{{i}_{x}}}}\right) \left( {{T}_{{j}_{x}} - \overline{{T}_{{j}_{x}}}}\right) }\right\rbrack \tag{1}
+$$
+
+Where $\overline{{T}_{{i}_{x}}}$ and $\overline{{T}_{{j}_{x}}}$ are the means, $E$ the expectation, and ${\sigma }_{{T}_{{i}_{x}}},{\sigma }_{{T}_{{j}_{x}}}$ the standard deviations. For two-dimensional points, the correlation is computed on both the $x$-axis and the $y$-axis.
+
+This metric is invariant to point translation and trajectory scale, but it does not take into account the order of points along a trajectory. Therefore, the pipeline also considers the FPCA metric, which is more appropriate to trajectory shape but does not take negative correlation into account.
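+The per-axis computation of Eq. 1 can be sketched as follows. Combining the two axis scores by averaging is our assumption, as the paper only states that the correlation is computed on both axes:

```python
import numpy as np

def pearson_trail_similarity(ti: np.ndarray, tj: np.ndarray) -> float:
    """Pearson's Correlation between two trails of equal length.

    ti, tj: arrays of shape (n, 2) holding (x, y) vertices.
    Returns a value in [-1, 1]; 1 means similar shapes, -1 an
    opposite trend (e.g. the same path traversed in reverse).
    """
    assert ti.shape == tj.shape
    rx = np.corrcoef(ti[:, 0], tj[:, 0])[0, 1]  # Eq. 1 on the x-axis
    ry = np.corrcoef(ti[:, 1], tj[:, 1])[0, 1]  # same formula on the y-axis
    return (rx + ry) / 2.0  # combining by averaging is our assumption
```

+For instance, a trail compared with itself yields 1, while compared with its reversed copy it yields -1, which is how the technique distinguishes direction.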
+
+
+Figure 3: An example of the two metrics used for shape comparison. The user brushed around curve 3 and thus also selected curves 1 and 2. Thanks to the Pearson computation, the associated small multiples show that only curve 2 is correlated to the shape of the brush; curve 3 is anti-correlated since it goes in the opposite direction to the shape of the brush. The FPCA computation does not take the direction into account but rather the curvature similarity; as such, only shape 3 is considered highly similar to the brush shape input.
+
+§ 3.1.2 FUNCTIONAL PRINCIPAL COMPONENT ANALYSIS
+
+Functional Data Analysis is a well-known information geometry approach [48] that captures the statistical properties of multivariate data functions, such as curves modeled as points in an infinite-dimensional space (usually the ${L}^{2}$ space of square-integrable functions [48]). Functional Principal Component Analysis (FPCA) computes the data variability around the mean curve of a cluster while estimating the Karhunen-Loève expansion scores. A simple analogy can be drawn with the Principal Component Analysis (PCA) algorithm, where eigenvectors and their eigenvalues are computed: FPCA performs the same operations with eigenfunctions (piecewise splines) and their principal component scores to model the statistical properties of a considered cluster [30]:
+
+$$
+\Gamma \left( {t,\omega }\right) = \bar{\gamma } + \mathop{\sum }\limits_{{j = 1}}^{{+\infty }}{b}_{j}\left( \omega \right) {\phi }_{j}\left( t\right) \tag{2}
+$$
+
+where ${b}_{j}$ are real-valued random variables called principal component scores. ${\phi }_{j}$ are the principal component functions, which obey:
+
+$$
+{\int }_{0}^{1}\widehat{H}\left( {s,t}\right) {\phi }_{j}\left( s\right) {ds} = {\lambda }_{j}{\phi }_{j}\left( t\right) \tag{3}
+$$
+
+${\phi }_{j}$ are the (vector-valued) eigenfunctions of the covariance operator with eigenvalues ${\lambda }_{j}$. We refer the prospective reader to the work of Hurter et al. [30] for a discrete implementation. With this model, knowing the mean curve $\bar{\gamma }$ and the principal component functions ${\phi }_{j}$, a group of curves can be described and reconstructed (inverse FPCA) with the matrix of the principal component scores ${b}_{j}$ of each curve. Usually, a finite vector of ${b}_{j}$ scores (with fixed dimension $d$) is selected such that the explained variance exceeds a defined percentile.
+
+
+Figure 4: Illustration of the FPCA metric algorithm: the transformation from the trail space to the 2D point space using FPCA, followed by the measurement of the closest (most similar) points in the principal component space.
+
+To compute a continuous and meaningful metric (DR2), the metric computation uses the first two principal components (PC1, PC2) to define the representative point of a considered trajectory. The metric is then computed as the Euclidean distance between the shape of the brush and each brushed trajectory in the Cartesian PC1/PC2 scatterplot (Fig. 4). Each distance is then normalized to $\left\lbrack {0,1}\right\rbrack$, with 1 corresponding to the largest difference in shape between the shape of the brush and the corresponding trajectory.
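+As a rough illustration of this step, the following sketch stands in for full FPCA with an ordinary PCA over flattened, equally resampled curves (an approximation of ours, not the paper's spline-based implementation), projects each curve into the PC1/PC2 plane, and normalizes the Euclidean distances to the brush shape:

```python
import numpy as np

def fpca_like_scores(brush: np.ndarray, trails: list) -> np.ndarray:
    """Dissimilarity of each brushed trail to the brush shape, in [0, 1].

    brush and each trail are (n, 2) arrays resampled to the same length.
    0 = most similar to the brush, 1 = least similar.
    """
    data = np.stack([c.ravel() for c in [brush] + trails])  # one row per curve
    data = data - data.mean(axis=0)                         # center on the mean curve
    # Principal axes via SVD; rows of vt are the component directions.
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    pts = data @ vt[:2].T                                   # PC1/PC2 scatterplot
    d = np.linalg.norm(pts[1:] - pts[0], axis=1)            # distance to the brush point
    return d / d.max() if d.max() > 0 else d                # normalize to [0, 1]
```

+A trail identical to the brush projects onto the same PC1/PC2 point and scores 0, while the most dissimilar trail in the selection scores 1.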
+
+§ 3.2 BINNING AND SMALL MULTIPLE FILTERING
+
+Taking into account the computed comparison metrics, the pipeline stores the resulting values into bins. Items can then be sorted continuously from the least similar to the most similar. While the Pearson measurements lie in $\left\lbrack {-1,1}\right\rbrack$ and the FPCA values in $\left\lbrack {0,1}\right\rbrack$, the binning process operates in the same way for both. Each bin is then used to visually show the trajectories it contains through small multiples (we use 5 small multiples, which gives a good compromise between visualization compactness and trajectory visibility). The user can then interactively filter the selected items (DR4) with a range slider on top of the small multiple visualizations. The user is thus able to decide whether to remove uncorrelated items or refine the correlated ones with a more restrictive criterion (DR3).
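+The equal-count binning behind the five small multiples can be sketched as follows (using `numpy.array_split` for near-equal bins is our choice):

```python
import numpy as np

def bin_into_small_multiples(scores, n_bins=5):
    """Sort trail indices from least to most similar and split them into
    n_bins equally sized bins, one per small multiple.

    Works unchanged for Pearson scores in [-1, 1] and FPCA scores in [0, 1];
    bin sizes may differ by one when len(scores) % n_bins != 0.
    """
    order = np.argsort(scores)  # ascending: least similar first
    return [list(map(int, b)) for b in np.array_split(order, n_bins)]
```

+For example, with five similarity scores and five bins, each bin holds exactly one trail index, ordered from least to most similar.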
+
+§ 4 INTERACTION PARADIGM BY EXAMPLE
+
+This technique is designed to enable flexible and rapid brushing of trajectories, by both the location and the shape of the brush. The technique's interaction paradigm is now described and illustrated in a scenario where an air traffic management expert studies the flight data depicted in Fig. 6.
+
+§ 4.1 SCENARIO INTRODUCTION
+
+Aircraft trajectories can be visually represented as connected line segments that form a path on a map. Given the flight level (altitude) of the aircraft, the trajectories can be presented in 3D and visualized by varying their appearances [7] or changing their representation to basic geometry types [11]. Since these visualizations consider a large number of trajectories that compete for the visual space, they often present occlusion and visual clutter issues, rendering exploration difficult. Edge bundling techniques [37] have been used to reduce clutter and occlusion, but they come at the cost of distorting the trajectory shapes, which might not always be desirable.
+
+Analysts need to explore these kinds of datasets in order to perform diverse tasks. Some of these tasks compare expected aircraft trajectories with the actual trajectories; others detect unexpected patterns and carry out traffic analysis in complex areas with dense traffic $\left\lbrack {7,{32}}\right\rbrack$. To this end, various trajectory properties such as aircraft direction, flight level, and shape are examined. However, most systems only support selection techniques that rely on start and end points, or on predefined regions. We argue that the interactive shape brush technique would be helpful for these kinds of tasks, as they require the visual inspection of the data, the detection of specific patterns, and then their selection for further examination. As these specific patterns might differ from the rest of the data precisely because of their shape, a technique that enables their selection through this characteristic makes their manipulation easier, as detailed in the example scenario. We consider a dataset that includes 4320 aircraft trajectories of variable lengths from one day of flight traffic over the French airspace.
+
+§ 4.2 BRUSHING
+
+We define a trail $T$ as a set of real-valued consecutive points $T = \left\lbrack {{\left( T{x}_{1},T{y}_{1}\right) }^{\top },{\left( T{x}_{2},T{y}_{2}\right) }^{\top },\ldots ,{\left( T{x}_{n},T{y}_{n}\right) }^{\top }}\right\rbrack$ where $n$ is the number of points and ${\left( T{x}_{i},T{y}_{i}\right) }^{\top }$ corresponds to the $i$-th coordinate of the trail. Fig. 6 depicts an example of 4133 trails (aircraft in French airspace). The brush $S$hape consists of a set of real-valued consecutive points $S = \left\lbrack {{\left( S{x}_{1},S{y}_{1}\right) }^{\top },{\left( S{x}_{2},S{y}_{2}\right) }^{\top },\ldots ,{\left( S{x}_{m},S{y}_{m}\right) }^{\top }}\right\rbrack$ where $m$ is the number of points. Note that while the length $n$ of each trail is fixed, the length $m$ of the Shape depends on the length of the user brush. The Shape is smoothed using a 1€ filter [14] and then resampled to facilitate the trail comparison. The similarity metrics are then applied to subsequences of each trail of approximately the same length as the brush Shape. To do this, each trail is first resampled so that each pair of consecutive vertices on the trail is separated by the same distance ${l}_{\text{vertices}}$ [22].
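+The equal-spacing resampling of a trail can be sketched with linear interpolation along the cumulative arc length (a minimal sketch; the implementation referenced in [22] may differ):

```python
import numpy as np

def resample_equal_spacing(trail: np.ndarray, l_vertices: float) -> np.ndarray:
    """Resample a polyline so that consecutive vertices are l_vertices
    apart, measured along the curve, as required before matching."""
    seg = np.linalg.norm(np.diff(trail, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])            # arc length at each vertex
    new_s = np.arange(0.0, s[-1], l_vertices)              # equally spaced stations
    x = np.interp(new_s, s, trail[:, 0])
    y = np.interp(new_s, s, trail[:, 1])
    return np.stack([x, y], axis=1)
```

+After this step, a window of a fixed number of vertices always covers roughly the same curve length, regardless of the original sampling density.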
+
+The user starts by exploring the data using pan and zoom operations. They are interested in the trajectories from the south-east of France to Paris. The user can choose between a subsequence match and an exact match. A subsequence match detects trajectories having a subsequence locally similar to the Shape. An exact match also takes into account the lengths of the trajectory and of the Shape, i.e., the trajectory's length must be approximately similar to the length of the Shape (in overall measurements). This option is especially useful for selecting a trajectory by its start and end points (e.g., finding trajectories taking off from an airport A and landing at an airport B). Exact matching is supported by comparing the lengths of the trail and of the Shape before applying the similarity metric algorithm. The analyst in the scenario activates the subsequence match, with the Pearson's Correlation metric selected by default, and starts brushing in the vicinity of the target trajectories, following the trajectory shape with the mouse. This defines both (1) the brush region and (2) the brush shape, which also captures the brush direction. Once the brushing has been completed, the selected trajectories are highlighted in green, as depicted in Fig. 5-(b).
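+Subsequence versus exact matching can be sketched as a sliding window over the resampled trail. Here `similarity` is any DR2-compliant metric, and the exact-match length tolerance `tol` is a hypothetical parameter of our own, not specified by the paper:

```python
import numpy as np

def best_subsequence_score(trail, shape, similarity, exact=False, tol=0.2):
    """Match a brush Shape against one trail.

    Both inputs are resampled with the same inter-vertex spacing, so a
    window of len(shape) vertices covers roughly the brush length.
    Returns the best similarity over all windows, or None when an exact
    match is requested and the overall lengths differ too much.
    """
    m, n = len(shape), len(trail)
    if exact and abs(n - m) > tol * m:
        return None  # lengths differ too much: no exact match
    if n < m:
        return None  # trail shorter than the brush Shape
    # Subsequence match: best score over all windows of the brush length.
    return max(similarity(trail[i:i + m], shape) for i in range(n - m + 1))
```

+Any window-level metric can be plugged in; for instance a negated mean point distance makes a perfectly contained copy of the Shape score highest.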
+
+
+Figure 5: (a) The user brushes the trajectories in order to select those from the south-east of France to Paris. (b) They select the most correlated value to take into account the direction of the trails.
+
+§ 4.3 SMALL MULTIPLES AND USER FILTERING
+
+The similarity calculation between the $S$hape and the brushed region produces a similarity value for each trail contained in the region, and the trails are distributed into small multiples as detailed in Section 3. Once the user has brushed, they can adjust the selection by selecting one of the bins displayed in the small multiples and using its range slider. The range slider position controls the similarity level, and its size determines the number of trajectories selected at each slider position: the smallest size selects one trajectory at a time. The range slider size and position are adjusted by direct manipulation using the mouse. This enables fine control over the final selection and makes the algorithm's thresholding easier to understand, as the user controls both the granularity of the exploration and the chosen similarity level. As the bins are equally sized, the distribution of the similarity values might not be linear across the small multiples. This makes navigation easier since the trajectory distribution across the small multiples is continuous. However, it also entails that not every bin corresponds to the same similarity value interval. To keep this information available to the user, a colored heatmap (from red to green) displays the actual distribution, as depicted in Fig. 3.
+
+In the current scenario, the expert wishes to select only the flights to Paris and not from Paris, so they select the trajectories that are correlated with the original brush, as the correlation takes into account the brush direction. These trajectories are on the right side of the small multiple, highlighted in green as depicted in Fig. 5-(b).
+
+The expert is then interested in exploring the flights that land on the north landing strip but that are not coming from the east. For this, they perform a new shape brush that considers only the previously selected trajectories to identify the planes that do come from the east, which are distinguishable by the "C" shape in their trajectories, as depicted in Fig. 7. To select the geometry precisely, the expert switches to the FPCA metric using a keyboard shortcut. In this case, the small multiples arrange the trajectories from less similar to more similar. This entails that the small multiples based on FPCA also enable the selection of all the trajectories that do not match the specified Shape but are contained in the brushing region. As all trajectories passing through the north landing strip are contained in the brushing region, the most similar trajectories correspond to the ones that have a "C" shape in the same orientation as the Shape, and thus come from the east. The less similar ones are those that interest the analyst, so they can select them by choosing the most dissimilar small multiple, as depicted in Fig. 7-(b).
+
+
+Figure 6: One day's aircraft trajectories in French airspace, including taxiing, take-off, cruise, final approach, and landing phases. Selecting specific trajectories using the standard brushing technique would yield inaccurate results due to the large number of trajectories, occlusions, and closeness in the spatial representation.
+
+
+Figure 7: (a) The user filters the trajectories that land on the north runway in Paris by brushing following the "C" shape. This retrieves the flights that come from the east. (b) They change the selection on the small multiples to select all the dissimilar Shapes, resulting in the trajectories that land on the north landing strip but do not come from the east.
+
+§ 5 USE CASES
+
+We argue that there is a strong demand for targeted brushing to select motifs in datasets. In various domains, including aircraft trajectories, eye tracking, GPS trajectories, and brain fiber analysis, there is a substantial need to discover hidden motifs in large datasets. Undoubtedly, retrieving desired trails in such datasets helps analysts to focus on the most interesting parts of the data. The system was built using C# and OpenTK on a 64-bit ${}^{1}$ XPS 15 Dell laptop. Although both PC and FPCA provide different but valuable results, the running performance was 10 times faster with PC than with FPCA.
+
+The technique was first tested informally with experts from the aerospace domain with more than 10 years of experience in trajectory analysis. While the collected feedback was largely positive, we observed some limitations regarding the misunderstanding of our filtering parameters. Given the novelty of our interaction technique, users needed a small training period to better understand the semantics of our small multiples interface. Nevertheless, experts found our technique interesting and useful since it provides a good initial selection result without any parameter adjustment.
+
+Because the presented technique is not designed to replace standard brushing but rather to complement it, we extend the informal user study with an evaluation based on real use cases. We argue that these use cases show how our technique facilitates trajectory selection in dense areas, where standard brushing would require multiple user actions (panning, zooming, brushing).
+
+§ 5.1 EYE-TRACKING DATA
+
+Eye-tracking technologies are gaining popularity for analyzing human behaviour in visualization analysis, human factors, human-computer interaction, neuroscience, psychology, and training. The principle consists of finding the likely objects of interest by tracking the movements of the user's eyes [4]. Using a camera, the pupil center position is detected, and the gaze, i.e., the point in the scene the user is fixating on, is computed using a prior calibration procedure $\left\lbrack {{15},{25},{27},{49}}\right\rbrack$. The gaze data therefore consist of sampled trails representing the movements of the user's eye gaze while completing a given task.
+
+Two important types of recorded movements characterize eye behaviour: fixations and saccades [21]. Fixations are the eye positions the user fixates for a certain amount of time; in other words, they describe the locations that captured the attention of the user. The saccades connect the different fixations, i.e., they represent the rapid movements of the eye from one location to another. The combination of these eye movements is called the scanpath (Fig. 8A). The scanpath is subject to overplotting, a challenge that may be addressed through precise brushing techniques to select specific trails. Usually, fixation events are studied to create an attention map which shows the salient elements in the scene; the salient elements are located at high-density fixation areas. However, the temporal connections between the different fixations provide additional information: the saccades maintain the links between the fixations and preserve the temporal meaning of the eye movement. Discovering patterns in the raw scanpath data is difficult since, in contrast to aircraft trajectories, eye movements are sparser and less regular (Fig. 8). To address this, different kinds of visualizations for scanpaths have been proposed in the literature. For example, edge bundling techniques [31] minimize the visual clutter of large and occluded graphs. However, these techniques either alter trail properties such as shape and geometric information, or are otherwise computationally expensive, which makes them unsuitable for precise exploration and mining of large trail datasets. Moreover, it is possible to animate eye movements in order to gain insight into the different fixations and saccades. However, given the large datasets of eye movements retrieved from lengthy experiments containing thousands of saccades, this approach is unnecessarily time-consuming and expensive.
+
+Therefore, we next describe how this study's approach supports proper and more efficient motif discovery on such eye-tracking datasets. The tested dataset is adapted from Peysakhovich et al. [46], where a continuous recording of eye movements in a cockpit was performed. The gaze data were recorded at 50 Hz. Sequential points located in a square of ${20} \times {20}$ pixels and separated by at least 200 ms were stored as a fixation event and replaced by their average, in order to reduce noise coming from microsaccades and the tracking device.
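+The fixation preprocessing described above can be sketched as follows; the dispersion test used to decide that samples stay within the square is our interpretation of the paper's description:

```python
import numpy as np

def collapse_fixations(points, times, box=20.0, min_dur=0.2):
    """Replace runs of gaze samples that stay inside a box-by-box pixel
    square for at least min_dur seconds with their average (one fixation
    point); other samples are kept as-is (saccades).

    points: (n, 2) pixel positions; times: (n,) timestamps in seconds.
    """
    out, i, n = [], 0, len(points)
    while i < n:
        j = i + 1
        # Grow the run while the bounding box of its samples stays small.
        while j < n and (points[i:j + 1].max(0) - points[i:j + 1].min(0) <= box).all():
            j += 1
        if times[j - 1] - times[i] >= min_dur:
            out.append(points[i:j].mean(axis=0))  # fixation: average position
        else:
            out.extend(points[i:j])               # saccade samples kept as-is
        i = j
    return np.array(out)
```

+On a 50 Hz recording, fifteen consecutive samples around one instrument (0.28 s) collapse into a single fixation point, while the fast jumps between instruments remain individual saccade samples.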
+
+To illustrate some examples, consider a domain expert who wishes to explore the movements of a pilot's eyes in a cockpit. When performing a task, the pilot scans the different instruments in the cockpit, focuses more on certain instruments, or interacts with them. In this context, the order of the pilot's attention is particularly important, since checking a parameter on one instrument may give an indication of the information displayed on another. For example, the priority of the Primary Flight Display (PFD) instrument compared to the Flight Control Unit (FCU) differs between the cruise phase and the final landing approach [46]. As an example of analysis, the user wishes to explore the movement of the eye from the PFD to the Navigation Display (ND). Selecting these scanpaths using traditional brushing techniques would be challenging because of the clutter: it would introduce additional accidental selections. Therefore, they brush these scanpaths using a shape that extends from the PFD to the ND, applying the Pearson metric to take the direction into account. Fig. 8-(a) depicts the brushed eye movements that correspond to the most correlated trails in the small multiple. There are several saccades between these two devices, which is in line with the fact that saccadic movements between the PFD and the ND are typically caused by parameter-checking routines.
+
+However, when the user changes the selection and brushes the scanpath between the ND and the FCU, it is surprising to see that there is only one saccade between them. Brushing with a shape that goes between the PFD and the FCU (Fig. 8-(c)) also reveals only one scanpath. This is difficult to see in the raw data or using the standard brushing technique. A final Shape, searching for an eye movement from the PFD to the LAI passing by the FCU, results in only one saccade (Fig. 8-(d)). To determine the meaning of this behavior, the tool also enables the expert to use a continuous transition to increase visibility and gain insight into when these saccadic movements occurred (temporal view). The user can change the visual mapping from the (x, y) gaze location to the (time, y) temporal view. This smooth transition avoids abrupt changes to the visualization [33] (Fig. 9).
+
+§ 5.2 GPS DATA
+
+GPS trajectories consist of sequential spatial locations recorded by a measurement instrument. Subjects such as people, wheeled vehicles, transportation modes, and devices may be tracked by analyzing the spatial positions provided by these instruments. Analysts may need to explore and analyze the different paths followed by the users. Advances in position acquisition and ubiquitous devices have produced extremely large location datasets, which describe the mobility of different moving targets such as autonomous vehicles, pedestrians, natural phenomena, etc. The prevalence of these datasets calls for novel approaches to discover information and mine the data [62].
+
+${}^{1}$ Intel(R) Core(TM) i7-4712HQ CPU @ 2.30 GHz, 4 cores, 8 threads
+
+
+Figure 8: (a) Selected eye movements between the PFD and ND, (b) Selected eye movements in the vicinity of the PFD, (c) Saccades between the ND and the FCU, (d) Eye movement from the PFD to the LAI passing by the FCU.
+
+
+Figure 9: The animated transition from the X/Y gaze view to the temporal view. This helps to detect how the selected eye movements occurred over time.
+
+Traditionally, researchers analyse GPS logs by defining a distance function (e.g., ${KNN}$) between two trajectories and then applying expensive processing algorithms to address the similarity detection. For example, they first convert the trajectories into a set of road segments by leveraging map-matching algorithms. Afterwards, the relationships between trajectories are managed using indexing structures [56, 62]. Using the data provided by Zheng et al. [63], we seek to investigate the different routes followed by users in Beijing. The data consist of GPS trajectories collected for the Geolife project by 182 users over a period of more than five years (from April 2007 to August 2012) [63]. Each trajectory point is represented by its 3D latitude, longitude, and altitude. A range of users' outdoor movements was recorded, including life routines such as travelling to work, sports, shopping, etc.
+
+As the quantity of GPS data becomes increasingly large and complex, proper brushing is challenging. Using bounding boxes somewhat alleviates this difficulty by setting the key areas of interest at the major corners. However, many boxes must be placed carefully for one single selection. The boxes can help analysts to select all the trajectories that pass through a specific location, but do not simplify the analysis of overlapping and directional trajectories. This study's approach intuitively supports path differentiation for overlapping trajectories and takes direction into account. For example, we are interested in answering questions about the activities people perform and their sequential order [63]. For this dataset, the authors were interested in finding event sequences that could inform tourists. The shape-based brushing could serve as a tool to further explore their results. For example, if they find an interesting classical sequence that passes through locations A and B, they can further explore whether this sequence corresponds to a larger sequence and what other locations are visited before or after. A first brushing and refinement using the FPCA metric and small multiples enables them to select all the trajectories that include a precise event sequence passing through a set of locations, as depicted in Fig. 10. A second brushing using the Pearson metric enables further explorations that also take into account the direction of the trajectories. By switching between the correlated trajectories and the anti-correlated ones, the user can gain insight into the visitation order of the selected locations.
+
+
+Figure 10: GPS locations of pedestrians (black). Selection of three different trajectories containing three different event sequences from [63] (green).
+
+§ 6 DISCUSSION
+
+The proposed brushing technique leverages existing methods with the novel usage of the shape of the brush as an additional filtering parameter. The interaction pipeline shows different data processing steps in which the comparison algorithm between the brushed items and the shape of the brush plays a central role. While the presented pipeline contains two specific and complementary comparison metric computations, other metrics can be used as long as they fulfill the continuity and metric semantics requirements (DR2). There are indeed many standard approaches (ED, DTW, discrete Fréchet distance) that are largely used by the community and could be used to extend our technique when faced with different datasets. Furthermore, the contribution of this paper is a novel shape-based brushing technique, not simply a shape similarity measure. In our work, we found two reasonable similarity measures that fulfill the requirements of our shape-based brushing method: the FPCA distance comparison provides an accurate curve similarity measurement, while the Pearson metric provides a complementary criterion based on the direction of the trajectory.
+
+In terms of visualization, the binning process provides a valuable overview of the ordering of the trajectory shapes. This important step eases the filtering and adjustment of the selected items. It is important to mention that this filtering operates in a continuous manner, as trajectories are added or removed one by one when adjusting the filtering parameter. This helps to fine-tune the selected items with accurate filtering parameters. The presented scenario shows how small multiple interaction can provide flexibility. This is especially the case when the user brushes specific trajectories that are then removed by setting the compatibility metric to uncorrelated; this operation performs a brush removal. The proposed filtering method can also consider other types of binning and allows different representations (i.e., various visual mapping solutions).
+
+This paper illustrates the shape-based brushing technique with three application domains (air traffic, eye tracking, GPS data), but it can be extended to any moving object dataset. However, our evaluation is limited by the number of studied application domains. Furthermore, even if various users and practitioners participated in the design of the technique and assessed the simplicity and intuitiveness of the method, we did not conduct a more formal evaluation. The shape-based brush is aimed at complementing the traditional brush, and in no way do we argue that it is more efficient or effective than the original technique for all cases. The scenarios are examples of how this technique enables the selection of trails that would otherwise be difficult to manipulate, and of how using the brush area and its shape to perform comparison opens novel brushing perspectives. We believe they provide strong evidence of the potential of such a technique.
+
+The technique also presents limitations in its selection flexibility, as it is not yet possible to combine selections. Many extensions can be applied to the last step of the pipeline to support this. This step mainly addresses DR4, where the selection can be refined thanks to user inputs. As such, multiple selections can be envisaged and finally composed. Boolean operations can be considered with the standard And, Or, and Not. While this composition is easy to model, it remains difficult for an end user to master the operations when there are more than two subset operations $\left\lbrack {{33},{60}}\right\rbrack$. As a solution, Hurter et al. proposed an implicit item composition with a simple drag-and-drop technique [33]. The pipeline can be extended with the same paradigm, where a placeholder can store filtered items that are then composed to produce the final result. The user can then refine the selection by adding, removing, or merging multiple selections.
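+Such a Boolean composition of selections could be sketched as plain set operations over selections of trail identifiers (a hypothetical sketch; the drag-and-drop placeholder interaction itself is not modeled):

```python
def compose(selections, ops):
    """Compose brush selections (sets of trail ids) left to right.

    selections: list of sets; ops: list of 'and' | 'or' | 'not'
    applied between consecutive selections.
    """
    result = selections[0]
    for sel, op in zip(selections[1:], ops):
        if op == "and":
            result = result & sel   # intersection of the two selections
        elif op == "or":
            result = result | sel   # union: merge selections
        elif op == "not":
            result = result - sel   # brush removal: subtract a selection
        else:
            raise ValueError(f"unknown operator: {op}")
    return result
```

+For example, intersecting one brush with a second and then subtracting a third removal brush expresses "selected by both, but not removed".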
+
+§ 7 CONCLUSION
+
+In this paper, a novel sketch-based brushing technique for trail selection was proposed and investigated. This approach facilitates user selection in occluded and cluttered data visualizations: the selection is performed on a standard brush basis while taking into account the shape of the brush area as a filtering tool. This brushing tool works as follows. First, the user brushes the trajectory of interest, trying to follow its shape as closely as possible. The system then pre-selects every trajectory that touches the brush area. Next, the algorithm computes a distance between the shape of every brushed trajectory and the shape of the brushed area. The comparison scores are then sorted, and the system displays visual bins presenting trajectories from the lowest scores (unrelated or dissimilar trajectories) to the highest scores (highly correlated or similar trajectories). The user can then adjust a filtering parameter to refine the actual selection to the trajectories that touch the brushed area and have a suitable correlation with its shape. The cornerstone of this shape-based technique is the shape comparison method. Therefore, we chose two algorithms that provide enough flexibility to adjust the set of selected trajectories: one relies on functional decomposition analysis, which provides a shape curvature comparison, while the other (the Pearson algorithm) ensures an accurate geometry-based comparison. To validate the efficiency of this method, we showed three examples of usage with various types of trail datasets.
+
+This work can be extended in many directions. We can first extend it to additional application domains and other types of datasets, such as car or animal movements or any type of time-varying data. We can also consider other types of input beyond the mouse pointer. Virtual Reality data exploration, in the so-called immersive analytics domain, offers a relevant extension that will be investigated in the near future. Finally, we can also consider adding machine learning to help users brush relevant trajectories. For instance, in a very dense area where the relevant trajectories, or even parts of them, are not visible due to occlusion, additional visual processing may be useful to guide the user during the brushing process.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..58586beda4cb2cafec507493e8521a09ecccd079
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,409 @@
+# Using Action-Level Metrics to Report the Performance of Multi-Step Keyboards
+
+Category: Research
+
+## Abstract
+
+Computer users commonly use multi-step text entry methods on handheld devices and as alternative input methods for accessibility. These methods require performing multiple actions simultaneously (chorded) or in a specific sequence (constructive) to produce input. However, evaluating these methods is difficult since traditional performance metrics were designed explicitly for non-ambiguous, uni-step methods (e.g., QWERTY). They fail to capture the actual performance of a multi-step method and do not provide enough detail to aid in design improvements. Here, we present three new action-level performance metrics: UnitER, UA, and UnitCX. They account for the error rate, accuracy, and complexity of multi-step chorded and constructive methods, and they describe the multiple inputs that comprise typing a single character: action-level metrics observe the individual actions performed to type one character, while conventional metrics consider only the final character. In a longitudinal study, we used these metrics to identify probable causes of slower text entry and input errors with two existing multi-step methods. Based on these findings, we suggest design changes to improve learnability and input execution.
+
+Index Terms: Human-centered computing-Empirical studies in HCI; Human-centered computing-Usability testing; Human-centered computing-User studies
+
+## 1 INTRODUCTION
+
+Most existing text entry performance metrics were designed to characterize uni-step methods that map one action (a key press or a tap) to one input. However, there are also multi-step constructive and chorded methods. These require the user to perform a predetermined set of actions either in a specific sequence (pressing multiple keys in a particular order) or simultaneously (pressing multiple keys at the same time). Examples of constructive methods include multi-tap [13] and Morse code [16,33]. Chorded methods include braille [3] and stenotype [18]. Many text entry methods used for accessibility are either chorded or constructive. However, current metrics for analyzing the performance of text entry techniques were designed for uni-step methods, such as the standard desktop keyboard. Due to the fundamental difference in their input process, these metrics often fail to accurately illustrate participants' actions in user studies evaluating the performance of multi-step methods.
+
+Table 1: Conventional ER and action-level UnitER for a constructive method (Morse code).
+
+| | Presented Text | Transcribed Text | ER (%) | UnitER (%) |
+| Attempt 1 | quickly ("l" = .-..) | quickjy ("j" = .---) | 14.28 | 7.14 |
+| Attempt 2 | quickly ("l" = .-..) | quickpy ("p" = .--.) | 14.28 | 3.57 |
+
+To address this, we present a set of revised and novel performance metrics that account for the multi-step nature of chorded and constructive text entry techniques by analyzing the actions required to enter a character rather than the final output. We posit that while conventional metrics are effective in reporting overall speed and accuracy, a set of action-level metrics can provide extra details about the user's input actions. For designers of multi-step methods, this additional insight is crucial for evaluating the method, identifying its pain points, and facilitating improvements. More specifically, conventional error rates fail to capture learning within chords and sequences. For instance, if entering a letter requires four actions in a specific order, then with practice, users may learn some of these actions and the corresponding sub-sequences. Conventional metrics ignore partially correct content by counting each incorrect letter as one error, giving the impression that no learning has occurred. UnitER, in contrast, accounts for this and provides designers with an understanding of (1) whether users learned some actions and (2) which actions were difficult to learn and thus could be replaced.
+
+Table 1 illustrates this phenomenon using Morse code for entering "quickly", with one character error ("l") in each attempt. The conventional error rate (ER) is ${14.28}\%$ for each attempt. One of our proposed action-level metrics, UnitER, gives a deeper insight by accounting for the entered input sequence. It yields an error rate of 7.14% for the first attempt (with two erroneous dashes), and an improved error rate of 3.57% for the second attempt (with only one erroneous dash). The action-level metric shows that learning has occurred with a minor improvement in error rate, but this phenomenon is omitted in the ER metric, which is the same for both attempts.
+
+The remainder of this paper is organized as follows. We start with a discussion of existing, commonly used text entry performance metrics, then elaborate on our motivations. This is followed by a set of revised and new performance metrics targeted at multi-step input techniques. We proceed to validate the metrics in a longitudinal study evaluating one constructive and one chorded input technique. The results demonstrate how the new metrics provide a deeper insight into action-level interactions. We identify factors compromising the performance and learning of a multi-step keyboard, and suggest changes to address these issues.
+
+## 2 RELATED WORK
+
+Conventional metrics for text entry speed include characters per second (CPS) and words per minute (WPM). These metrics represent the number of resulting characters entered, divided by the time spent performing input. Text entry researchers usually transform CPS to WPM by multiplying by a fixed constant (60 seconds, divided by a word length of 5 characters for English text entry) rather than recalculating the metrics [6]. A common error metric, keystrokes per character (KSPC), is the ratio of user actions, such as keystrokes, to the number of characters in the final output [21, 34]. Although this metric was designed primarily to measure the number of attempts at typing each character accurately [21], many researchers have used it to represent a method's potential entry speed as well [22, 36], since techniques that require fewer actions are usually faster. Some researchers have also customized this metric to fit the needs of their user studies. The two most common variants of this metric are gestures per character (GPS) [8, 39, 40] and actions per character (APC) [39], which extend keystrokes to include gestures and other actions, respectively. Error rate (ER) and minimum string distance (MSD) are metrics that measure errors based on the number of incorrect characters in the final output [6]. Another metric, erroneous keystrokes (EKS) [6], considers the number of incorrect actions performed in an input episode. None of these metrics considers error correction efforts, thus Soukoreff and MacKenzie [34] proposed the total error rate (TER) metric that combines two constituent error metrics: corrected error rate (CER) and not-corrected error rate (NER). The former measures the number of corrected errors in an input episode, and the latter measures the number of errors left in the output.
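
For concreteness, the conventional CPS-to-WPM conversion and the KSPC ratio described above can be sketched as follows (function names and signatures are ours, for illustration only):

```python
def wpm(transcribed_len, seconds):
    # WPM = CPS * 60 / 5: convert characters per second to words per
    # minute using the conventional 5-characters-per-word constant [6]
    cps = transcribed_len / seconds
    return cps * 60 / 5

def kspc(keystrokes, transcribed_len):
    # KSPC: ratio of user actions (e.g., keystrokes) to the number of
    # characters in the final output [21, 34]
    return keystrokes / transcribed_len
```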
+
+Table 2: Performance metrics used to evaluate chorded and constructive keyboards in recent user studies.
+
+| Category | Technique | Speed | Accuracy | Low Level | ${R}^{2}$ |
+| Chorded | Twiddler [20] | WPM | ER | NA | Yes |
+| Chorded | BrailleType [27] | WPM | MSD ER | NA | NA |
+| Chorded | ChordTap [38] | WPM | ER | NA | Yes |
+| Chorded | Chording Glove [30] | WPM | MSD ER | NA | NA |
+| Chorded | Two-handed [41] | WPM | ER | NA | NA |
+| Constructive | Multitap [12] | WPM | ER | NA | NA |
+| Constructive | Reduced Qwerty [14] | WPM | NA | NA | NA |
+| Constructive | JustType [24] | WPM | NA | NA | NA |
+| Constructive | UOIT [2] | WPM | TER | NA | NA |
+
+Accessibility text entry systems mainly use constructive or chorded techniques. Morse code is a constructive text entry system that was named one of the "Top 10 Accessibility Technologies" by RESNA [29]. In 2018, this entry system was integrated into Android phones as an accessibility feature for users with motor impairments [37]. Users of Morse code can tap on one or two keys to enter short sequences of keystrokes representing each letter of the alphabet. This method reduces the dexterity required to enter text, compared to the ubiquitous QWERTY keyboard. Braille is a tactile representation of language used by individuals with visual impairments. Braille keyboards [25, 26, 35] contain just six keys so that users do not need to search for buttons; instead, users can keep their hands resting across the keys. To produce each letter using this system, users must press several of these keys simultaneously.
+
+Little work has focused on performance metrics for such multi-step input techniques. Sarcar et al. [31] proposed a convention for calculating the most common performance metrics for constructive techniques, which considers both the input and the output streams. Grossman et al. [15] defined "soft" and "hard" errors for two-level input techniques, where errors at the first level are considered "soft" and errors at the second level are considered "hard". ${}^{1}$ Seim et al. [32] used a dynamic time warping (DTW) algorithm [17] to measure the accuracy of chorded keystrokes on a piano keyboard. Similar to the Levenshtein distance [19], it measures the similarity between two sequences, but accounts for variations in timing or speed. Table 2 shows the performance metrics used to evaluate recent chorded and constructive text entry techniques.
+
+## 3 MOTIVATION
+
+### 3.1 Partial Errors and Predictions
+
+Users can make partial errors in the process of performing a chord or a sequence of actions. To enter " $x$ ", a Twiddler [20] user could incorrectly perform the chord "MORO" instead of "MROO", and a Morse code user could incorrectly perform the sequence "-..." instead of "-..-" (Table 3). In both cases, user actions would be partially correct. However, typical error metrics ignore this detail by considering the complete sequence as one incorrect input. Hence, they yield the same value as when users enter no correct information. In reality, users may have learned, and made fewer mistakes within a chord or a sequence. Not only does this show an incomplete picture of user performance, it also fails to provide the means to fully explore learning of the text entry method. More detailed metrics of multi-step keyboard interaction can facilitate improved input method design through a better understanding of the user experience. These data can also train algorithms to predict and compensate for the most common types of errors.
+
+Table 3: Conventional ER and action-level UnitER for entering "$x$" with a chorded (Twiddler) and a constructive (Morse code) method.
+
+| Technique | Required input for "$x$" | Actual input for "$x$" | ER (%) | UnitER (%) |
+| Twiddler [20] | MROO | MORO | 100 | 50 |
+| Morse code [33] | -..- | -... | 100 | 25 |
+
+### 3.2 Correction Effort
+
+Prior research [4] shows that correction effort impacts both performance and user experience, but most current error metrics do not represent the effort required to fix errors with constructive techniques. With uni-step character-based techniques, one error traditionally requires two corrective actions: a backspace to delete the erroneous character, and a keystroke to enter the correct one [7]. Suppose two users want to input "-..-" for the letter "$x$". One user enters "-...", the other enters "...-". Existing error metrics consider both as one erroneous input. However, correcting these may require different numbers of actions. If a technique allowed users to change the direction of a gesture mid-way [5], the errors would require one and five corrective actions, respectively. If the system only allowed the correction of one action at a time, then the former would require two and the latter five corrective actions. In contrast, if the system does not allow the correction of individual actions within a sequence, both would require five corrective actions. Hence, error correction relies on the correction mechanism of a technique, the length of a sequence, and the position and type of error (insertion, deletion, or substitution). Existing metrics fail to capture this vital detail in both chorded and constructive text entry techniques.
+
+### 3.3 Deeper Insights into Limitations
+
+The proposed metrics aim to give quantitative insights into the learning and input of multi-step text entry techniques. Insights, such as tapping locations affecting speed, might seem like common sense, but action-level metrics can also provide similar insights for less straightforward interactions, such as breath puffs, tilts, swipes, etc. Although some findings might seem unsurprising to experts, the proposed metrics will benefit the designers of new techniques aimed at multi-step text entry and will complement their efforts and insights.
+
+The new metrics can facilitate design improvements by quantifying the design choices. Conventional metrics identify difficult and erroneous letters, while our UnitER, UA, and UnitCX indicate what contributes to the difficulty. The action-level metrics can help us identify difficult-to-enter sequences, so they can be assigned to infrequent characters or avoided altogether. For example, UnitER and UA can show that target users have difficulty performing the long-press action of a three-action input sequence for some character. Designers can then replace the long-press action with one that is easier for a particular group of users (e.g., a physical button press) to reduce the error rate for that letter.
+
+## 4 NOTATIONS
+
+In the next section, we propose three new action-level metrics for evaluating multi-step text entry techniques. For this, we use the following notations.
+
+---
+
+${}^{1}$ With two-level techniques, users usually perform two actions to select the desired character, for example, the first action to specify the region and the second to choose the character.
+
+---
+
+- Presented text (PT) is the text presented to the user for input; $|PT|$ is the length of $PT$; $PT_i$ is the $i$th character in $PT$; $pt_i$ is the sequence of actions required to enter the $i$th character in $PT$; and $|pt_i|$ is the length of $pt_i$.
+
+- Transcribed text (TT) is the text transcribed by the user; $|TT|$ is the length of $TT$; $TT_i$ is the $i$th character in $TT$; $tt_i$ is the sequence of actions performed by the user to enter the $i$th character in $TT$; and $|tt_i|$ is the length of $tt_i$.
+
+- Minimum string distance (MSD) measures the similarity between two sequences using the Levenshtein distance [19]. The "distance" is defined as the minimum number of primitive operations (insertion, deletion, and substitution) needed to transform a given sequence (TT) into the target sequence (PT) [21].
+
+- Input time ($t$) is the time, in seconds, the user took to enter a phrase (TT).
+
+- An action is a user action, including a keystroke, gesture, tap, finger position or posture, etc. The action sequence (AS) is the sequence of all actions required to enter the presented text, and $|AS|$ is its length. We consider all sub-actions within a chord as individual actions; for example, if a chord requires pressing three keys simultaneously, it is composed of three actions.
+
+## 5 SPEED METRICS
+
+### 5.1 Input per Second (IPS)
+
+We present IPS, a variant (for convenience of notation) of the commonly used CPS metric [40], to measure the entry speed of multi-step techniques.
+
+$$
+\text{ IPS } = \frac{\left| \mathrm{{AS}}\right| }{t} \tag{1}
+$$
+
+IPS uses the length of the action sequence $|AS|$ instead of the length of the transcribed text $|TT|$ to account for all multi-step input actions. It is particularly useful for determining whether the total number of actions needed for input affects performance.
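
A minimal sketch of Equation 1, assuming the action sequence is given as a list of atomic actions (our representation):

```python
def ips(action_sequence, seconds):
    # IPS = |AS| / t (Eq. 1): all actions, including every sub-action
    # of a chord, divided by the input time t in seconds
    return len(action_sequence) / seconds
```

For a chord of three simultaneous key presses, the action sequence contributes three elements, so IPS reflects the full action count rather than the single resulting character.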
+
+## 6 Proposed Action-Level Metrics
+
+### 6.1 Unit Error Rate (UnitER)
+
+The first of our novel metrics, unit error rate (UnitER), represents the average number of errors committed by the user when entering one input unit, such as a character or a symbol. This metric is calculated in the following three steps:
+
+#### 6.1.1 Step 1: Optimal Alignment
+
+First, obtain an optimal alignment between the presented (PT) and transcribed text (TT) using a variant of the MSD algorithm [23]. This addresses all instances where the lengths of the presented ($|PT|$) and transcribed text ($|TT|$) are different.
+
+$$
+\operatorname{MSD}\left( {a, b}\right) = \left\{ \begin{array}{ll} \left| b\right|, & \text{ if } a = \varepsilon, \\ \left| a\right|, & \text{ if } b = \varepsilon, \\ 0, & \text{ if } a = b, \\ S, & \text{ otherwise,} \end{array}\right. \tag{2}
+$$
+
+where $\varepsilon$ denotes the empty sequence and $S$ is defined as
+
+$$
+S = \min \left\{ \begin{array}{ll} \operatorname{MSD}\left( {a\left\lbrack {1:}\right\rbrack, b\left\lbrack {1:}\right\rbrack }\right), & \text{ if } a\left\lbrack 0\right\rbrack = b\left\lbrack 0\right\rbrack, \\ \operatorname{MSD}\left( {a\left\lbrack {1:}\right\rbrack, b}\right) + 1, & \\ \operatorname{MSD}\left( {a, b\left\lbrack {1:}\right\rbrack }\right) + 1, & \\ \operatorname{MSD}\left( {a\left\lbrack {1:}\right\rbrack, b\left\lbrack {1:}\right\rbrack }\right) + 1. & \end{array}\right. \tag{3}
+$$
+
+If multiple alignments are possible for a given MSD between two strings, select the one with the least number of insertions and deletions. If there are multiple alignments with the same number of insertions and deletions, then select the first such alignment. For example, if the user enters "qucehkly"(TT)instead of the target word "quickly"(PT), then $\operatorname{MSD}\left( {{PT},{TT}}\right) = 3$ and the following alignments are possible:
+
+quic--kly quic-kly qui-ckly qu-ickly
+
+qu-cehkly qucehkly qucehkly qucehkly
+
+Here, a dash in the top sequence represents an insertion, a dash in the bottom represents a deletion, and different letters in the top and bottom represent a substitution. Our algorithm selects the second alignment (quic-kly over qucehkly), as it has the fewest insertions and deletions and is the first among the tied candidates.
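
The recursion in Equations 2 and 3 is the standard Levenshtein distance; an equivalent iterative sketch (our own implementation, not the authors' code):

```python
def msd(a, b):
    # Minimum string distance (Eqs. 2-3): fewest insertions, deletions,
    # and substitutions needed to transform sequence a into sequence b
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, x in enumerate(a, 1):
        cur = [i]                   # deleting the first i characters of a
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # match / substitution
        prev = cur
    return prev[-1]
```

For the example above, `msd("quickly", "qucehkly")` returns 3, matching the alignments shown.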
+
+#### 6.1.2 Step 2: Constructive vs. Chorded
+
+Because the sequence of performed actions is inconsequential in chorded methods, sort both the required ($pt_i$) and the performed actions ($tt_i$) using any sorting algorithm to obtain consistent MSD scores. Action sequences are not sorted for constructive methods, since the order in which they are performed is vital for such methods.
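
Step 2 can be sketched as follows, assuming each action is an element of a list (representation and function name are ours):

```python
def canonical(actions, chorded):
    # Chorded: the order of sub-actions within a chord is inconsequential,
    # so sort them to obtain consistent MSD scores.
    # Constructive: order is vital, so the sequence is kept as performed.
    return sorted(actions) if chorded else list(actions)
```

With this canonical form, two performances of the same chord compare equal under MSD regardless of the order in which the keys were registered.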
+
+#### 6.1.3 Step 3: Minimum String Distance
+
+Finally, apply the MSD algorithm [21] to measure the minimum number of actions needed to correct an incorrect sequence.
+
+$$
+\text{UnitER} = \frac{\mathop{\sum }\limits_{{i = 1}}^{\left| \overline{TT}\right| }\frac{\operatorname{MSD}\left( {\overline{pt_i}, \overline{tt_i}}\right) }{\max \left( {\left| \overline{pt_i}\right| ,\left| \overline{tt_i}\right| }\right) }}{\left| \overline{TT}\right| } \times {100}\% \tag{4}
+$$
+
+Here, $\left| \overline{TT}\right|$ is the length of the aligned transcribed text (the same as $\left| \overline{PT}\right|$), and $\overline{pt_i}$ and $\overline{tt_i}$ are the sequences of actions (sorted for chorded techniques in Step 2) required and performed, respectively, to enter the $i$th character in $TT$.
+
+This step requires certain considerations about the presence of insertions and deletions in the optimal alignment. If the $i$th character of the aligned transcribed text $\overline{TT}$ has a deletion, then the MSD of the corresponding $i$th character is 100%. But when $\overline{PT}$ has an insertion, a misstroke error is assumed, as it is the most common cause of insertions [11]. A misstroke error occurs when the user mistakenly strokes (or taps) an incorrect key. However, the question remains: to which character do we attribute the insertion? For this, we propose comparing the MSDs of the current $TT_i$ to the neighboring letters of $PT_i$ (which differ between layouts), and attributing it to the one with the lowest MSD. If there is a tie, we attribute it to the right neighbor $PT_{i+1}$. We propose this simplification, since it is difficult to determine the exact cause of an insertion in such a scenario.
+
+This metric can also be used to measure the error rate of a specific character, in which case Equation 5 considers only the character under investigation, where $c$ is the character under investigation and $\operatorname{Total}(c)$ is the total number of occurrences of $c$ in the transcribed text.
+
+$$
+\operatorname{UnitER}\left( c\right) = \frac{\mathop{\sum }\limits_{{i = 1}}^{\left| \overline{TT}\right| }\frac{\operatorname{MSD}\left( {\overline{pt_i}, \overline{tt_i}}\right) }{\max \left( {\left| \overline{pt_i}\right| ,\left| \overline{tt_i}\right| }\right) }\left( {\text{ if }P{T}_{i} = c}\right) }{\operatorname{Total}\left( c\right) } \tag{5}
+$$
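
Putting the three steps together, Equations 4 and 5 can be sketched as follows (a simplified illustration assuming the per-character action sequences are already aligned, and sorted for chorded methods; `_msd` is the Levenshtein distance of Equations 2-3, and we express UnitER(c) as a percentage for consistency with Equation 4):

```python
def _msd(a, b):
    # Levenshtein distance between two action sequences (Eqs. 2-3)
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def unit_er(required, performed):
    # UnitER (Eq. 4): mean per-character ratio of action-level errors,
    # as a percentage; required[i] and performed[i] are the aligned
    # action sequences for the i-th character
    total = sum(_msd(p, t) / max(len(p), len(t))
                for p, t in zip(required, performed))
    return total / len(performed) * 100

def unit_er_c(required, performed, presented, c):
    # UnitER(c) (Eq. 5): restricted to occurrences of character c in the
    # presented text, expressed as a percentage
    ratios = [_msd(p, t) / max(len(p), len(t))
              for p, t, ch in zip(required, performed, presented) if ch == c]
    return sum(ratios) / len(ratios) * 100
```

For the Morse example in Table 3, `unit_er(["-..-"], ["-..."])` yields 25.0, matching one erroneous action out of four.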
+
+### 6.2 Unit Accuracy (UA)
+
+Unit accuracy (UA) is the opposite of, and simply a convenient reframing of, UnitER. Instead of an error rate, UA represents the accuracy rate of a unit. Unlike UnitER, the values of UA range between 0 and 1 inclusive (i.e., 0% to 100%) to reflect the action-level nature of the metric. UA can also be used for a specific character $c$, using Equation 7.
+
+$$
+\mathrm{{UA}} = \frac{{100} - \text{ UnitER }}{100} \tag{6}
+$$
+
+$$
+\operatorname{UA}\left( c\right) = \frac{{100} - \operatorname{UnitER}\left( c\right) }{100} \tag{7}
+$$
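
Equations 6 and 7 are a direct transformation of UnitER percentages; a one-line sketch:

```python
def ua(unit_er_pct):
    # UA (Eq. 6): convert a UnitER percentage into a 0-1 accuracy value;
    # the same transform applies per character for UA(c) (Eq. 7)
    return (100 - unit_er_pct) / 100
```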
+
+### 6.3 Unit Complexity (UnitCX)
+
+Apart from speed and accuracy, we also propose the following novel metric to measure an input unit's complexity. For this, each action in a unit (such as a character or a symbol) is categorized into different difficulty levels, represented by continuous values from 0 to 1.
+
+$$
+\text{UnitCX} = \frac{\left( {\mathop{\sum }\limits_{{n = 1}}^{\left| tt_i\right| }\frac{d\left( {a}_{n}\right) }{\left| TT\right| }}\right) - {d}_{\min }}{{d}_{\max } - {d}_{\min }} \tag{8}
+$$
+
+Here, $d\left( {a}_{n}\right)$ signifies the difficulty level of the $n$th action in $tt_i$, and ${d}_{\min }$ and ${d}_{\max }$ are the minimum and maximum difficulty levels of all actions within or between text entry techniques. This yields a normalized unit complexity value, ranging from 0 to 1. The difficulty level of an action is based on a custom convention, considering posture and ergonomics, memorability, and the frequency of the letters [10]. However, more sophisticated methods are available in the literature [9, 42].
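
Equation 8 can be read literally as follows (a sketch; the per-action difficulty levels, and the difficulty bounds used for normalization, are supplied by the evaluator's own convention):

```python
def unit_cx(difficulties, tt_len, d_min, d_max):
    # UnitCX (Eq. 8): sum the difficulty levels d(a_n) of one unit's
    # action sequence tt_i, scaled by the transcribed-text length |TT|,
    # then min-max normalize into the 0-1 range
    raw = sum(d / tt_len for d in difficulties)
    return (raw - d_min) / (d_max - d_min)
```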
+
+## 7 EXPERIMENT: VALIDATION
+
+We validated the effectiveness of our proposed metrics by applying them to data collected from a longitudinal user study. This study evaluated one constructive and one chorded text entry technique. Although we conducted a comparative study, our intent was to demonstrate how the proposed metrics can provide deeper insights into the techniques' performance and limitations, specifically with respect to learning.
+
+## 8 APPARATUS
+
+We used a Motorola Moto G5 Plus smartphone (150.2 × 74 × 7.7 mm, 155 g) with a 1080 × 1920 pixel display in the study (Fig. 3). The virtual multi-step keyboards used in the study were developed using the default Android Studio 3.1, SDK 27. The keyboards logged all user actions with timestamps and calculated all performance metrics directly.
+
+## 9 CONSTRUCTIVE METHOD: MORSE CODE
+
+We received the original code from the authors of a Morse code keyboard [16] to investigate the performance of a constructive keyboard. It enables users to enter characters using sequences of dots (.) and dashes (-) [33]. The keyboard has dedicated keys for dot, dash, backspace, and space (Fig. 1). To enter the letter "R", represented by ".-" in Morse code, the user presses the respective keys in that exact sequence, followed by the SEND key, which terminates the input sequence. The user presses the NEXT key to terminate the current phrase and progress to the next one.
+
+
+
+Figure 1: The Morse code inspired constructive keyboard used in the experiment.
+
+## 10 CHORDED METHOD: SENORITA
+
+We received the original code from the authors of the chorded keyboard Senorita [28]. It enables users to enter characters using eight keys (Fig. 2). The most frequent eight letters in English appear on the top of the key labels, and are entered with only one tap of their respective key. All other letters require simultaneous taps on two keys (with two thumbs). For example, the user taps on the "E" and "T" keys together to enter the letter "H". The keyboard provides visual feedback to facilitate learning. Pressing a key with one thumb highlights all available chords corresponding to that key, and right-thumb keys have a lighter shade than left-thumb keys (Fig. 2, right).
+
+
+
+Figure 2: The Senorita chorded keyboard used in the experiment. It enables users to enter text by pressing two keys simultaneously using the thumbs. Pressing a key highlights all available chords for the key (right).
+
+## 11 PARTICIPANTS
+
+Ten participants, aged from 18 to 37 years (M = 26.1, SD = 5.6), took part in the experiment. Six identified as male, four as female. None identified as non-binary. Eight were right-handed, one was left-handed, and the other was ambidextrous. They all used both hands to hold the device and their thumbs to type. All participants were proficient in English. Six rated themselves as Level 5: Functionally Native and four as Level 4: Advanced Professional on the Interagency Language Roundtable (ILR) scale [1]. All participants were experienced smartphone users, with on average 9.3 years' experience (SD = 1.8). None of them had prior experience with any chorded methods, but eight had used a constructive method in the past (either multi-tap or pinyin). None had experience with the keyboards used in the study. They all received US \$50 for volunteering.
+
+## 12 DESIGN
+
+We used a within-subjects design, where the independent variables were input method and session. The dependent variables were the IPS, APC, ER, and UnitER metrics. In summary, the design was as follows.
+
+10 participants $\times$
+
+5 sessions (different days) $\times$
+
+2 input methods (constructive vs. chorded, counterbalanced) $\times$
+
+10 English pangrams
+
+$= 1,{000}$ phrases, in total.
+
+## 13 Procedure
+
+To study learning of all letters of the English alphabet, we used the following five pangrams during the experiment, all in lowercase.
+
+quick brown fox jumps over the lazy dog
+
+the five boxing wizards jump quickly
+
+fix problem quickly with galvanized jets
+
+pack my red box with five dozen quality jugs
+
+two driven jocks help fax my big quiz
+
+Participants were divided into two groups: one started with the constructive method and the other with the chorded method. This order was switched in each subsequent session. Sessions were scheduled on different days, with at most a two-day gap in between. In each session, participants entered one pangram ten times with one method, and then a different pangram ten times with the other method. Pangrams were never repeated for a method. There was a mandatory 30-60 minute break between the conditions to mitigate the effect of fatigue. During the first session, we demonstrated both methods to participants and collected their consent forms. We asked them to complete a demographics and mobile usage questionnaire. We then allowed them to practice with the methods, entering all letters (A-Z) until they felt comfortable. Participants were provided with a cheat-sheet for Morse code that included all combinations (Fig. 3, left), as Morse code relies on memorizing these codes. For Senorita, we did not provide a cheat-sheet, as the keyboard provides visual feedback by displaying the characters (i.e., it has a "built-in cheat-sheet"). The experiment started shortly after that: a custom app presented one pangram at a time and asked participants to enter it "as fast and accurately as possible". Once done, they pressed the NEXT key to re-enter the phrase. Logging started from the first tap and ended with the last. Error correction was intentionally disabled to exclude correction efforts from the data when observing the UnitER metric. Subsequent sessions used the same procedure, excluding the practice and questionnaire.
+
+
+
+Figure 3: Two volunteers entering text using: (left) the Morse code constructive keyboard with the assistance of a cheat- sheet, and (right) the Senorita chorded keyboard.
+
+## 14 RESULTS
+
+A Shapiro-Wilk test and Mauchly's test revealed that the assumptions of normality and sphericity, respectively, were not violated. Hence, we used repeated-measures ANOVAs for all analyses.
+
+### 14.1 Input per Second (IPS)
+
+An ANOVA identified a significant effect of session on IPS for both the constructive method ($F_{4,36} = 17.16$, $p < .0001$) and the chorded method ($F_{4,36} = 28.26$, $p < .0001$). On average, the constructive and chorded methods yielded 1.29 ($SD = 0.38$) and 1.02 ($SD = 0.29$) IPS, respectively. Fig. 4 (top) displays IPS for both methods in all sessions. The commonly used WPM metric yielded comparable statistical results for both the constructive method ($F_{4,36} = 17.32$, $p < .0001$) and the chorded method ($F_{4,36} = 27.69$, $p < .0001$).
+
+### 14.2 Actions per Character (APC)
+
+An ANOVA identified a significant effect of session on APC for the constructive ($F_{4,36} = 2.84$, $p < .05$) and chorded methods ($F_{4,36} = 6.52$, $p < .0005$). Average APC for the constructive and chorded methods were 2.58 ($SD = 0.13$) and 1.49 ($SD = 0.02$), respectively. Fig. 4 (bottom) displays APC for both methods in all sessions.
+
+### 14.3 Error Rate (ER)
+
+Error rate (ER) is a commonly used error metric, traditionally calculated as the ratio of the total number of incorrect characters in the transcribed text to the length of the transcribed text [6]. An ANOVA identified a significant effect of session on ER for the constructive method ($F_{4,36} = 2.86$, $p < .05$), but not for the chorded method ($F_{4,36} = 1.79$, $p = .15$). On average, the constructive and chorded methods yielded 6.05% ($SD = 10.11$) and 2.46% ($SD = 4.65$) ER, respectively. Fig. 5 (top) displays ER for both methods in all sessions. Another widely used error metric, TER [34], yielded comparable results for the constructive method ($F_{4,36} = 3.33$, $p < .05$) and the chorded method ($F_{4,36} = 1.51$, $p = .22$).
+
+
+
+Figure 4: Average IPS for the two methods in all sessions (top) and average APC for the two methods in all sessions (bottom). Note the scale on the vertical axis.
+
+### 14.4 Unit Error Rate (UnitER)
+
+An ANOVA failed to identify a significant effect of session on UnitER for either the constructive ($F_{4,36} = 2.14$, $p = .09$) or the chorded method ($F_{4,36} = 0.12$, $p = .97$). On average, UnitER for the constructive and chorded methods were 1.32 ($SD = 2.2$) and 0.91 ($SD = 2.1$), respectively. Fig. 5 (bottom) displays UnitER for both methods in all sessions.
+
+## 15 Discussion
+
+There was a significant effect of session on IPS for both methods, which suggests that entry speed improved with practice. Accordingly, entry speed per session correlated well with the power law of practice [9] for both the constructive ($R^2 = 0.98$) and chorded ($R^2 = 0.95$) methods; see Fig. 4 (top). As stated earlier, the commonly used WPM [6] metric yielded comparable statistical results. There was also a significant effect of session on APC for the two techniques. This may seem odd just by looking at Fig. 4 (bottom), where both lines appear flat. However, a Tukey-Kramer multiple-comparison test revealed that sessions 1 and 2 were significantly different for the constructive method, while sessions 1, 4, and 5 were different for the chorded method. Similar trends were observed for UnitCX and the commonly used KSPC metric. We failed to identify any reason for this phenomenon.
+
+Interestingly, there was a significant effect of session on ER for the constructive method, but not for the chorded method. Accordingly, ER per session correlated well with the power law of practice [9] for the constructive method ($R^2 = 0.94$); see Fig. 5 (top). This, and the fact that IPS improved over sessions for the chorded method, suggests that participants learned to type faster with it while maintaining a consistent error rate. Additional data is needed to investigate this fully. There was no significant effect of session on UnitER for either method. Yet, UnitER per session correlated well with the power law of practice [9] for the constructive method ($R^2 = 0.94$); see Fig. 5 (bottom). Despite not reaching statistical significance, this metric provides useful details, as we demonstrate in the following section: unlike high-level text entry metrics, which look only at differences in the output, action-level metrics account for similarities in actions.
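
The power-law-of-practice fits reported here (the $R^2$ values) can be reproduced by ordinary least squares in log-log space; a sketch of such a fit (our own implementation, not the authors' analysis code):

```python
import math

def power_law_fit(sessions, values):
    # Fit y = a * x^b by linear least squares on (log x, log y),
    # returning a, b, and the R^2 of the log-log fit
    xs = [math.log(s) for s in sessions]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    ss_res = sum((y - (math.log(a) + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot
```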
+
+
+
+Figure 5: Average ER for the two methods in all sessions (top) and average UnitER for the two methods in all sessions (bottom). Note the scale on the vertical axis.
+
+## 16 ACTION-LEVEL ANALYSIS
+
+The above discussion demonstrates that the proposed metrics, computed over the whole text, complement the conventional metrics in characterizing the methods' overall speed and accuracy. In this section, we demonstrate how the proposed metrics shed further light on these methods' performance and limitations.
+
+To investigate which letters may have hampered text entry speed and accuracy, we first calculated UnitER for each letter. Because the letters do not appear the same number of times, we normalized the values by each letter's total number of appearances in the study. We then identified the letters that were difficult to enter and to learn to input accurately (henceforth "Not Learned"), and the letters that were difficult to enter but relatively easier to learn to input accurately (henceforth "Learned"). For this, we compared the average UnitER of each letter in the first three sessions to that in the last two sessions. The "Learned" group included the letters that demonstrated a significant improvement in UnitER from the first to the last sessions; the "Not Learned" group included the letters that consistently yielded high UnitER in all sessions.
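The Learned/Not Learned screening described above can be sketched as follows. The numeric thresholds are our own illustrative stand-ins for the paper's significance-based comparison, and the input data in the usage example are hypothetical:

```python
def classify_letters(unit_err, appearances, improve_ratio=0.5, high_threshold=5.0):
    """Classify letters as Learned / Not Learned from per-session error counts.

    unit_err:    {letter: [per-session action-level error counts]}
    appearances: {letter: [per-session appearance counts]}
    Normalizes errors by appearances, then compares the average of
    sessions 1-3 against sessions 4-5 (five sessions assumed)."""
    learned, not_learned = [], []
    for letter, errs in unit_err.items():
        norm = [e / max(n, 1) * 100 for e, n in zip(errs, appearances[letter])]
        early = sum(norm[:3]) / 3
        late = sum(norm[3:]) / 2
        if early >= high_threshold and late <= early * improve_ratio:
            learned.append(letter)       # difficult at first, clearly improved
        elif min(norm) >= high_threshold:
            not_learned.append(letter)   # consistently high normalized UnitER
    return learned, not_learned

# Hypothetical counts: 'g' improves sharply, 'z' stays poor, 'e' is easy throughout.
unit_err = {"g": [10, 8, 6, 3, 2], "z": [9, 8, 9, 8, 9], "e": [1, 1, 1, 1, 1]}
appearances = {k: [100] * 5 for k in unit_err}
learned, not_learned = classify_letters(unit_err, appearances)
```

The paper uses statistical tests rather than fixed ratios; this sketch only mirrors the overall procedure of normalizing by frequency and comparing early against late sessions.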
+
+### 16.1 Constructive Method
+
+Table 4 displays the top four Not Learned and Learned letters for the Morse code keyboard. For better visualization, we calculated Unit Accuracy (UA) for these letters using the simple transformation given by Eq. 7 and fitted them to the power law of practice [9] to identify any trends, see Fig. 6.
+
+Fig. 6 (top) illustrates the high inconsistency in UA across the Not Learned letters. However, for the Learned group (Fig. 6 bottom), there is a prominent upward trend. Evidently, participants were still learning these letters in the last session, suggesting that performance with them would have continued to increase had the study lasted longer. We performed multiple comparisons between the letters to find the causes of entry difficulty, as well as the factors that facilitated learning, and identified the following patterns that may have contributed to these trends.
+
+Table 4: Action-Level representation of the difficult to enter and learn (Not Learned) and difficult to enter but easier to learn letters (Learned) for the constructive method.
+
+| Not Learned | Sequence | Learned | Sequence |
+| --- | --- | --- | --- |
+| z | --.. | g | --. |
+| h | .... | p | .--. |
+| s | ... | x | -..- |
+| c | -.-. | q | --.- |
+
+
+
+Figure 6: (top) Average Unit Accuracy (UA) of the letters z, h, s, c per session with power trendlines. Trends for these letters show decreasing UA across sessions, indicating that learning was not occurring; (bottom) average UA of the letters g, p, x, q per session with power trendlines. Trends for these letters show increasing UA as the sessions progressed, indicating learning.
+
+Analysis revealed that participants had difficulty differentiating between letters that required similar actions to enter. For instance, it was difficult for them to differentiate between h ("....") and s ("..."), and between k ("-.-") and c ("-.-."), presumably because their corresponding actions are very similar. A deeper analysis of the action-level errors revealed that participants frequently entered an extra dot when typing "k", resulting in the letter "c", and vice versa. This error accounted for 23% of all UnitER for "c". Participants made similar mistakes for other Not Learned letters. For example, they entered s ("...") instead of h ("....") and vice versa, accounting for 50% of all UnitER for "h" and 30% of all UnitER for "s". This information is vital: it helps the designer of a constructive method assign sufficiently different actions to frequent letters to avoid confusion, likely resulting in increased speed and accuracy.
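Confusion tallies such as "50% of all UnitER for h came from entering s" can be derived by decoding each entered unit sequence back to a letter and counting mismatches. A minimal sketch with an illustrative subset of Morse code; the `log` data is hypothetical:

```python
from collections import Counter

# Subset of International Morse Code covering the examples in the text.
MORSE = {"h": "....", "s": "...", "k": "-.-", "c": "-.-.", "g": "--.", "z": "--.."}
DECODE = {seq: ch for ch, seq in MORSE.items()}

def confusion_counts(attempts):
    """attempts: list of (intended_letter, entered_sequence) pairs.
    Returns a Counter of (intended, produced) confusions, where
    produced is the letter the entered sequence decodes to,
    or '?' if it is no valid code."""
    counts = Counter()
    for intended, entered in attempts:
        produced = DECODE.get(entered, "?")
        if produced != intended:
            counts[(intended, produced)] += 1
    return counts

# e.g., an extra dot while typing k ("-.-") produces c ("-.-."):
log = [("k", "-.-."), ("h", "..."), ("h", "...."), ("s", "....")]
confusions = confusion_counts(log)
```

Dividing each confusion count by the intended letter's total error count yields the percentages reported above.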
+
+An interesting observation is that participants tended to enter an extra dot or dash when these actions were repeated. For example, participants often entered "---", "---.-", or similar patterns for z ("--.."), accounting for 17% of all UnitER for "z". Likewise, participants entered additional dots when entering g ("--."), producing z ("--.."), which accounted for 20% of all UnitER for "g". Similar errors were committed for other letters as well. These details are useful, since the designer can compensate by reducing the number of repeated actions for frequent letters.
+
+### 16.2 Chorded Method
+
+Table 5 displays the top four Not Learned and Learned letters for the Senorita keyboard. As with the constructive method, we calculated UA for these letters using Eq. 7 and fitted them to the power law of practice [9] to identify any trends, see Fig. 8. We observed that, as with the constructive method, trends for all Learned letters were increasing, suggesting that learning occurred across the techniques. Participants were still learning these letters in the last session, which might indicate that performance with them would have been much better had the study lasted longer. Multiple comparisons between the action-level actions per letter identified the following patterns that may have contributed to these trends.
+
+Table 5: Action-Level representation of the difficult to enter and learn (Not Learned) and difficult to enter but easier to learn letters (Learned) for the chorded method.
+
+| Not Learned letter | Chord | Learned letter | Chord |
| q | rs | $x$ | ir |
| p | er | d | eo |
| m | ${en}$ | f | io |
| Z | ${rt}$ | 1 | at |
+
+The letters 'q', 'p', and 'z' have the 'r' key in their chords ('rs', 'er', and 'rt', respectively). We speculate that a letter with 'r' in its chord pair is difficult to learn and to enter quickly and accurately, because 'r' and 's' are the keys furthest from the edges of the layout (Fig. 7): keys near the center of the screen are more difficult to reach than those at the edge. Relevantly, the four letters with improving trendlines ('x', 'd', 'f', and 'l') have chord pairs that are close to the screen edge. This detail may encourage designers to place the most frequent letters toward the edges of the device.
+
+
+
+Figure 7: The user stretching her thumb to reach the letter "s" (left) and the letter "r" (right).
+
+### 16.3 Time Spent per Letter
+
+We compared the above analysis with the time spent performing the sequences and chords for each letter in the last two sessions. The purpose was to further validate the metrics by examining whether the letters that took additional time to enter correspond to the Not Learned letters. Fig. 9 (top) highlights all difficult letters (in red and green) identified for the constructive method. One can see the correlation between UnitER and the time spent entering the sequences of these letters, which suggests that these letters require more cognitive and motor effort to enter. Similarly, Fig. 9 (bottom)
+
+
+
+Figure 8: (top) Average Unit Accuracy (UA) of the letters q, p, m, z per session with power trendlines. Trends for the letters show decreasing UA across sessions, indicating that learning was not occurring; (bottom) average UA of the letters x, d, f, l per session with power trendlines. Trends for these letters show increasing UA as the sessions progressed, indicating learning.
+
+illustrates all difficult letters (highlighted in red and green) identified for the chorded method. One can see that the chords (q, v, p, g, z, j) required more time than the taps (s, e, n, o, r, i, t, a). However, chords composed of keys closer to the boundaries were much faster (e.g., h, u, l). This further strengthens the argument of the previous section.
+
+## 17 CONCLUSION
+
+In this paper, we proposed a trio of action-level performance metrics (UnitER, UA, and UnitCX) aimed at multi-step text entry techniques, which account and compensate for the constructive and chorded input process. We validated these metrics in a longitudinal study involving one constructive and one chorded technique. In our presentation, we used existing techniques, Morse code [33] and Senorita [28], not to change their mappings or designs, but to demonstrate how the newly proposed metrics can be applied to different multi-step techniques and to give deeper insight into their possible limitations, with an emphasis on learning. None of the metrics yields such conclusions automatically, but they can point towards limitations: UnitER helped us investigate specific erroneous characters that conventional metrics failed to identify. The results of this study demonstrate how the proposed metrics provide a deeper understanding of action-level error behaviors, particularly the difficulties in performing and learning the sequences and chords of letters, facilitating the design of faster and more accurate multi-step keyboards. Although there is no formal method to analyze the action-level actions of multi-step methods, it is likely that veteran text entry researchers perform similar analyses on user study data. This work provides such a formal method, which will enable researchers new to the area to perform these analyses and facilitate better comparisons between methods from the literature.
+
+
+
+Figure 9: (top) Average UnitER of all letters vs. the time spent to perform the sequence for those letters for constructive method, (bottom) average UnitER of all letters vs. the time spent to perform the chords for those letters with the chorded method.
+
+## 18 FUTURE WORK
+
+We intend to combine the insights gained from the proposed action-level metrics, particularly the most error-prone characters and the types of errors frequently committed in the sequences or chords of these characters, with machine learning approaches to make multi-step keyboards adapt to human errors. Such a system could also use language models to provide users with more effective auto-correction and word suggestion techniques.
+
+## REFERENCES
+
+[1] ILR. Interagency Language Roundtable language skill level descriptions, 2020.
+
+[2] T. AbuHmed, K. Lee, and D. Nyang. Uoit keyboard: A constructive keyboard for small touchscreen devices. IEEE Transactions on Human-Machine Systems, 45(6):782-789, Dec 2015. doi: 10.1109/THMS. 2015.2449309
+
+[3] American Foundation for the Blind. What is braille, 2020.
+
+[4] A. S. Arif, S. Kim, W. Stuerzlinger, G. Lee, and A. Mazalek. Evaluation of a smart-restorable backspace technique to facilitate text entry error correction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 5151-5162. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/ 2858036.2858407
+
+[5] A. S. Arif, M. Pahud, K. Hinckley, and B. Buxton. Experimental study of stroke shortcuts for a touchscreen keyboard with gesture-redundant keys removed. In GI '14 Proceedings of Graphics Interface 2014, pp. 43-50. Canadian Information Processing Society, May 2014. Michael A. J. Sweeney Award for Best Student Paper.
+
+[6] A. S. Arif and W. Stuerzlinger. Analysis of text entry performance metrics. In 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), pp. 100-105, Sep. 2009. doi: 10. 1109/TIC-STH.2009.5444533
+
+[7] A. S. Arif and W. Stuerzlinger. Predicting the cost of error correction in character-based text entry technologies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 5-14. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1753326.1753329
+
+[8] A. S. Arif and W. Stuerzlinger. User adaptation to a faulty unistroke-based text entry technique by switching to an alternative gesture set. In Proceedings of Graphics Interface 2014, pp. 183-192. 2014.
+
+[9] X. Bi, B. A. Smith, and S. Zhai. Multilingual touchscreen keyboard design and optimization. Human-Computer Interaction, 27(4):352- 382, 2012. doi: 10.1080/07370024.2012.678241
+
+[10] T. M. E. Club. Digraph frequency (based on a sample of 40,000 words), 2020.
+
+[11] D. R. Gentner, J. T. Grudin, S. Larochelle, D. A. Norman, and D. E. Rumelhart. A Glossary of Terms Including a Classification of Typing Errors, pp. 39-43. Springer New York, New York, NY, 1983. doi: 10. 1007/978-1-4612-5470-6_2
+
+[12] J. Gong, B. Haggerty, and P. Tarasewich. An enhanced multitap text entry method with predictive next-letter highlighting. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, CHI EA '05, p. 1399-1402. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1056808.1056926
+
+[13] J. Gong and P. Tarasewich. Alphabetically constrained keypad designs for text entry on mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '05, p. 211-220. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1054972.1055002
+
+[14] N. Green, J. Kruger, C. Faldu, and R. St. Amant. A reduced querty keyboard for mobile text entry. In CHI '04 Extended Abstracts on Human Factors in Computing Systems, CHI EA '04, p. 1429-1432. Association for Computing Machinery, New York, NY, USA, 2004. doi: 10.1145/985921.986082
+
+[15] T. Grossman, X. A. Chen, and G. Fitzmaurice. Typing on glasses: Adapting text entry to smart eyewear. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '15, p. 144-152. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/ 2785830.2785867
+
+[16] A.-M. Gueorguieva, G. Rakhmetulla, and A. S. Arif. Enabling Input on Tiny/Headless Systems Using Morse Code. Technical report, Dec. 2020. arXiv: 2012.06708.
+
+[17] K. Huang, T. Starner, E. Do, G. Weinberg, D. Kohlsdorf, C. Ahlrichs, and R. Leibrandt. Mobile music touch: Mobile tactile stimulation for passive learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 791-800. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/ 1753326.1753443
+
+[18] X. Lee. Stenotype machine, 2019.
+
+[19] V. I. Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, vol. 10, pp. 707-710, 1966.
+
+[20] K. Lyons, T. Starner, D. Plaisted, J. Fusia, A. Lyons, A. Drew, and E. W. Looney. Twiddler typing: One-handed chording text entry for mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04, p. 671-678. Association for Computing Machinery, New York, NY, USA, 2004. doi: 10.1145/ 985692.985777
+
+[21] I. S. MacKenzie. KSPC (keystrokes per character) as a characteristic of text entry techniques. In F. Paternò, ed., Human Computer Interaction with Mobile Devices, pp. 195-210. Springer Berlin Heidelberg, Berlin, Heidelberg, 2002.
+
+[22] I. S. MacKenzie, H. Kober, D. Smith, T. Jones, and E. Skepner. Letter-wise: Prefix-based disambiguation for mobile text input. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, UIST '01, p. 111-120. Association for Computing Machinery, New York, NY, USA, 2001. doi: 10.1145/502348.502365
+
+[23] I. S. MacKenzie and R. W. Soukoreff. A character-level error analysis technique for evaluating text entry methods. In Proceedings of the Second Nordic Conference on Human-Computer Interaction, NordiCHI '02, p. 243-246. Association for Computing Machinery, New York, NY, USA, 2002. doi: 10.1145/572020.572056
+
+[24] I. S. Mackenzie, S. X. Zhang, and R. W. Soukoreff. Text entry using soft keyboards. Behaviour & Information Technology, 18(4):235-244, 1999. doi: 10.1080/014492999118995
+
+[25] S. Mascetti, C. Bernareggi, and M. Belotti. Typeinbraille: A braille-based typing application for touchscreen devices. In The Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '11, p. 295-296. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2049536. 2049614
+
+[26] J. Oliveira, T. Guerreiro, H. Nicolau, J. Jorge, and D. Gonçalves. BrailleType: Unleashing braille over touch screen mobile phones. In P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, and M. Winckler, eds., Human-Computer Interaction - INTERACT 2011, pp. 100-107. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
+
+[27] J. Oliveira, T. Guerreiro, H. Nicolau, J. Jorge, and D. Gonçalves. BrailleType: Unleashing braille over touch screen mobile phones. In P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, and M. Winckler, eds., Human-Computer Interaction - INTERACT 2011, pp. 100-107. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
+
+[28] G. Rakhmetulla and A. S. Arif. Senorita: A chorded keyboard for sighted, low vision, and blind mobile users. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376576
+
+[29] RESNA. Resna, 2020.
+
+[30] R. Rosenberg and M. Slater. The chording glove: a glove-based text input device. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 29(2):186-191, May 1999. doi: 10.1109/5326.760563
+
+[31] S. Sarcar, A. S. Arif, and A. Mazalek. Metrics for Bengali text entry research. arXiv preprint arXiv:1706.08205, 2017.
+
+[32] C. Seim, T. Estes, and T. Starner. Towards passive haptic learning of piano songs. In 2015 IEEE World Haptics Conference (WHC), pp. 445-450, June 2015. doi: 10.1109/WHC.2015.7177752
+
+[33] R. T. Snodgrass and V. F. Camp. Radio Receiving for Beginners. New York: MacMillan, 1922.
+
+[34] R. W. Soukoreff and I. S. MacKenzie. Metrics for text entry research: An evaluation of MSD and KSPC, and a new unified error metric. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, p. 113-120. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/642611.642632
+
+[35] C. Southern, J. Clawson, B. Frey, G. Abowd, and M. Romero. An evaluation of brailletouch: Mobile touchscreen text entry for the visually impaired. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, p. 317-326. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371623
+
+[36] K. Tanaka-Ishii, Y. Inutsuka, and M. Takeichi. Entering text with a four-button device. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, COLING '02, p. 1-7. Association for Computational Linguistics, USA, 2002. doi: 10. 3115/1072228.1072377
+
+[37] C. Welch. Google's gboard keyboard now lets you communicate through morse code on both android and ios, 2018.
+
+[38] D. Wigdor and R. Balakrishnan. A comparison of consecutive and concurrent input text entry techniques for mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04, p. 81-88. Association for Computing Machinery, New York, NY, USA, 2004. doi: 10.1145/985692.985703
+
+[39] J. O. Wobbrock. Measures of text entry performance. Text entry systems: Mobility, accessibility, universality, pp. 47-74, 2007.
+
+[40] J. O. Wobbrock, B. A. Myers, and J. A. Kembel. Edgewrite: A stylus-based text entry method designed for high accuracy and stability of motion. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, UIST '03, p. 61-70. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/ 964696.964703
+
+[41] K. Yatani and K. N. Truong. An evaluation of stylus-based text entry methods on handheld devices studied in different user mobility states. Pervasive and Mobile Computing, 5(5):496-508, 2009. doi: 10.1016/j .pmcj.2009.04.002
+
+[42] X. Yi, C. Yu, W. Shi, X. Bi, and Y. Shi. Word clarity as a metric in sampling keyboard test sets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 4216-4228. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025701
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4f1328823b02d1587d903c0d921860919f39308e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/-Yus5M-WjZT/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,402 @@
+§ USING ACTION-LEVEL METRICS TO REPORT THE PERFORMANCE OF MULTI-STEP KEYBOARDS
+
+Category: Research
+
+§ ABSTRACT
+
+Computer users commonly use multi-step text entry methods on handheld devices and as alternative input methods for accessibility. These methods require performing multiple actions simultaneously (chorded) or in a specific sequence (constructive) to produce input. However, evaluating these methods is difficult, since traditional performance metrics were designed explicitly for non-ambiguous, uni-step methods (e.g., QWERTY); they fail to capture the actual performance of a multi-step method and do not provide enough detail to aid in design improvements. Here, we present three new action-level performance metrics: UnitER, UA, and UnitCX. They account for the error rate, accuracy, and complexity of multi-step chorded and constructive methods by describing the multiple inputs that comprise typing a single character: action-level metrics observe the actions performed to type one character, while conventional metrics consider only the resulting character. In a longitudinal study, we used these metrics to identify probable causes of slower text entry and input errors with two existing multi-step methods. Consequently, we suggest design changes to improve learnability and input execution.
+
+Index Terms: Human-centered computing-Empirical studies in HCI; Human-centered computing-Usability testing; Human-centered computing-User studies
+
+§ 1 INTRODUCTION
+
+Most existing text entry performance metrics were designed to characterize uni-step methods that map one action (a key press or a tap) to one input. However, there are also multi-step constructive and chorded methods. These require the user to perform a predetermined set of actions either in a specific sequence (pressing multiple keys in a particular order) or simultaneously (pressing multiple keys at the same time). Examples of constructive methods include multi-tap [13] and Morse code [16,33]. Chorded methods include braille [3] and stenotype [18]. Many text entry methods used for accessibility are either chorded or constructive. However, current metrics for analyzing the performance of text entry techniques were designed for uni-step methods, such as the standard desktop keyboard. Due to the fundamental difference in their input process, these metrics often fail to accurately illustrate participants' actions in user studies evaluating the performance of multi-step methods.
+
+Table 1: Conventional ER and action-level UnitER for a constructive method (Morse code).
+
+| Attempt | Presented Text | Transcribed Text | ER (%) | UnitER (%) |
+| --- | --- | --- | --- | --- |
+| 1 | quickly | quickjy: l (".-..") entered as j (".---"), two erroneous dashes | 14.28 | 7.14 |
+| 2 | quickly | quickpy: l (".-..") entered as p (".--."), one erroneous dash | 14.28 | 3.57 |
+
+To address this, we present a set of revised and novel performance metrics that account for the multi-step nature of chorded and constructive text entry techniques by analyzing the actions required to enter a character rather than the final output. We posit that while conventional metrics are effective in reporting overall speed and accuracy, a set of action-level metrics can provide extra details about the user's input actions. For designers of multi-step methods, this additional insight is crucial for evaluating the method, identifying its pain points, and facilitating improvements. More specifically, conventional error rates fail to capture learning within chords and sequences. For instance, if entering a letter requires four actions in a specific order, with practice, users may learn some of these actions and the corresponding sub-sequences. Conventional metrics ignore partially correct content by counting each incorrect letter as one error, giving the impression that no learning has occurred. UnitER, in contrast, accounts for this and provides designers with an understanding of (1) whether users learned some actions or not and (2) which actions were difficult to learn and thus could be replaced.
+
+Table 1 illustrates this phenomenon using Morse code to enter "quickly", with one character error ("l") in each attempt. The conventional error rate (ER) is 14.28% for each attempt. One of our proposed action-level metrics, UnitER, gives deeper insight by accounting for the entered input sequence: it yields an error rate of 7.14% for the first attempt (with two erroneous dashes) and an improved 3.57% for the second attempt (with only one erroneous dash). The action-level metric shows that learning occurred, with a small improvement in error rate, a phenomenon the ER metric omits since it is the same for both attempts.
+
+The remainder of this paper is organized as follows. We start with a discussion of existing, commonly used text entry performance metrics, then elaborate on our motivations. This is followed by a set of revised and new performance metrics targeted at multi-step input techniques. We then validate the metrics in a longitudinal study evaluating one constructive and one chorded input technique. The results demonstrate how the new metrics provide deeper insight into action-level interactions, helping researchers identify factors that compromise the performance and learning of a multi-step keyboard and suggest changes to address them.
+
+§ 2 RELATED WORK
+
+Conventional metrics for text entry speed include characters per second (CPS) and words per minute (WPM). These metrics represent the number of resulting characters entered, divided by the time spent performing input. Text entry researchers usually transform CPS to WPM by multiplying by a fixed constant (60 seconds, divided by a word length of 5 characters for English text entry) rather than recalculating the metric [6]. A common error metric, keystrokes per character (KSPC), is the ratio of user actions, such as keystrokes, to the number of characters in the final output [21,34]. Although this metric was designed primarily to measure the number of attempts at typing each character accurately [21], many researchers have also used it to represent a method's potential entry speed [22,36], since techniques that require fewer actions are usually faster. Some researchers have customized this metric to fit the needs of their user studies; the two most common variants are gestures per character (GPC) [8,39,40] and actions per character (APC) [39], which extend keystrokes to include gestures and other actions, respectively. Error rate (ER) and minimum string distance (MSD) measure errors based on the number of incorrect characters in the final output [6]. Another metric, erroneous keystrokes (EKS) [6], considers the number of incorrect actions performed in an input episode. None of these metrics considers error correction efforts, thus Soukoreff and MacKenzie [34] proposed the total error rate (TER) metric, which combines two constituent error metrics: corrected error rate (CER) and not-corrected error rate (NER). The former measures the number of corrected errors in an input episode, and the latter measures the number of errors left in the output.
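The CPS-to-WPM conversion and the KSPC ratio described above can be written down directly. A minimal sketch; the figures in the usage example are hypothetical:

```python
def wpm(transcribed_chars, seconds):
    """Words per minute: characters per second scaled by the fixed
    constant of 60 seconds per minute / 5 characters per word [6]."""
    return (transcribed_chars / seconds) * 60 / 5

def kspc(keystrokes, output_chars):
    """Keystrokes per character: input actions divided by the number
    of characters in the final output [21]."""
    return keystrokes / output_chars

# 140 characters transcribed in 60 seconds at one keystroke each:
speed = wpm(140, 60)     # 28.0 WPM
effort = kspc(140, 140)  # 1.0
```

A KSPC above 1.0 indicates extra actions per output character, which is the norm for the multi-step methods discussed in this paper.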
+
+Table 2: Performance metrics used to evaluate chorded and constructive keyboards in recent user studies.
+
+| Category | Technique | Speed | Accuracy | Low Level | $R^2$ |
+| --- | --- | --- | --- | --- | --- |
+| Chorded | Twiddler [20] | WPM | ER | NA | Yes |
+| Chorded | BrailleType [27] | WPM | MSD, ER | NA | NA |
+| Chorded | ChordTap [38] | WPM | ER | NA | Yes |
+| Chorded | Chording Glove [30] | WPM | MSD, ER | NA | NA |
+| Chorded | Two-handed [41] | WPM | ER | NA | NA |
+| Constructive | Multitap [12] | WPM | ER | NA | NA |
+| Constructive | Reduced Qwerty [14] | WPM | NA | NA | NA |
+| Constructive | JustType [24] | WPM | NA | NA | NA |
+| Constructive | UOIT [2] | WPM | TER | NA | NA |
+
+Accessibility text entry systems mainly use constructive or chorded techniques. Morse code is a constructive text entry system that was named one of the "Top 10 Accessibility Technologies" by RESNA [29]. In 2018, this entry system was integrated into Android phones as an accessibility feature for users with motor impairments [37]. Users of Morse code can tap on one or two keys to enter short sequences of keystrokes representing each letter of the alphabet. This method reduces the dexterity required to enter text, compared to the ubiquitous QWERTY keyboard. Braille is a tactile representation of language used by individuals with visual impairments. Braille keyboards [25,26,35] contain just six keys so that users do not need to search for buttons; instead, users can keep their hands resting across the keys. To produce each letter using this system, users must press several of these keys simultaneously.
+
+Little work has focused on performance metrics for such multi-step input techniques. Sarcar et al. [31] proposed a convention for calculating the most common performance metrics for constructive techniques, which considers both the input and the output streams. Grossman et al. [15] defined "soft" and "hard" errors for two-level input techniques, where errors at the first level are considered "soft" and errors at the second level are considered "hard". Seim et al. [32] used a dynamic time warping (DTW) algorithm [17] to measure the accuracy of chorded keystrokes on a piano keyboard. Similar to the Levenshtein distance [19], DTW measures the similarity between two sequences, but accounts for variations in time or speed. Table 2 shows the performance metrics used to evaluate recent chorded and constructive text entry techniques.
+
+§ 3 MOTIVATION
+
+§ 3.1 PARTIAL ERRORS AND PREDICTIONS
+
+Users can make partial errors in the process of performing a chord or a sequence of actions. To enter "x", a Twiddler [20] user could incorrectly perform the chord "MORO" instead of "MROO", and a Morse code user could incorrectly perform the sequence "-..." instead of "-..-" (Table 3). In both cases, the user's actions would be partially correct. However, typical error metrics ignore this detail by counting the complete sequence as one incorrect input; hence, they yield the same value as when the user enters no correct information. In reality, users may have learned, and made fewer mistakes within, a chord or a sequence. Not only does this paint an incomplete picture of user performance, it also fails to provide the means to fully explore learning of the text entry method. More detailed metrics of multi-step keyboard interaction can facilitate improved input method design through a better understanding of the user experience. These data can also train algorithms to predict and compensate for the most common types of errors.
+
+Table 3: Performance metrics for entering "x" with the chorded (Twiddler) and constructive (Morse code) methods.
+
+| Technique | Sequence or chord for "x" | Actual input for "x" | ER | UnitER |
+| --- | --- | --- | --- | --- |
+| Twiddler [20] | MROO | MORO | 100 | 50 |
+| Morse code [33] | -..- | -... | 100 | 25 |
+
+§ 3.2 CORRECTION EFFORT
+
+Prior research [4] shows that correction effort impacts both performance and user experience, but most current error metrics do not represent the effort required to fix errors with constructive techniques. With uni-step character-based techniques, one error traditionally requires two corrective actions: a backspace to delete the erroneous character, and a keystroke to enter the correct one [7]. Suppose two users want to input "-..-" for the letter "$x$". One user enters "-...", the other enters "...-". Existing error metrics consider both as one erroneous input. However, correcting these may require different numbers of actions. If a technique allowed users to change the direction of a gesture mid-way [5], the errors would require one and five corrective actions, respectively. If the system only allowed the correction of one action at a time, then the former would require two and the latter five corrective actions. In contrast, if the system does not allow the correction of individual actions within a sequence, both would require five corrective actions. Hence, error correction depends on the correction mechanism of a technique, the length of a sequence, and the position and type of error (insertion, deletion, or substitution). Existing metrics fail to capture this vital detail in both chorded and constructive text entry techniques.
+
+§ 3.3 DEEPER INSIGHTS INTO LIMITATIONS
+
+The proposed metrics aim to give quantitative insights into the learning and input of multi-step text entry techniques. Insights such as tapping locations affecting speed might seem like common sense, but action-level metrics can also provide similar insights for less straightforward interactions, such as breath puffs, tilts, and swipes. Although some findings might seem unsurprising to experts, the proposed metrics will benefit the designers of new multi-step text entry techniques and complement their efforts and insights.
+
+The new metrics can facilitate design improvements by quantifying design choices. Conventional metrics identify difficult and erroneous letters, while our UnitER, UA, and UnitCX indicate what contributes to the difficulty. The action-level metrics can help identify difficult-to-enter sequences, so they can be assigned to infrequent characters or avoided altogether. For example, UnitER and UA can show that target users have difficulty performing the long-press action of a three-action input sequence for some character. Designers can then replace the long-press action with an easier one for a particular group of people (e.g., a physical button press) to reduce the error rate for that letter.
+
+§ 4 NOTATIONS
+
+In the next section, we propose three new action-level metrics for evaluating multi-step text entry techniques. For this, we use the following notations.
+
+${}^{1}$ With two-level techniques, users usually perform two actions to select the desired character, for example, the first action to specify the region and the second to choose the character.
+
+ * Presented text (PT) is the text presented to the user for input, $|PT|$ is the length of $PT$, $PT_i$ is the $i$th character in $PT$, $pt_i$ is the sequence of actions required to enter the $i$th character in $PT$, and $|pt_i|$ is the length of $pt_i$.
+
+ * Transcribed text (TT) is the text transcribed by the user, $|TT|$ is the length of $TT$, $TT_i$ is the $i$th character in $TT$, $tt_i$ is the sequence of actions performed by the user to enter the $i$th character in $TT$, and $|tt_i|$ is the length of $tt_i$.
+
+ * Minimum string distance (MSD) measures the similarity between two sequences using the Levenshtein distance [19]. The "distance" is defined as the minimum number of primitive operations (insertion, deletion, and substitution) needed to transform a given sequence (TT) into the target sequence (PT) [21].
+
+ * Input time (t) is the time, in seconds, the user took to enter a phrase (TT).
+
+ * An action is a user action, including a keystroke, gesture, tap, finger position or posture, etc. The action sequence (AS) is the sequence of all actions required to enter the presented text, and $|AS|$ is its length. We consider all sub-actions within a chord as individual actions; for example, a chord that requires pressing three keys simultaneously is composed of three actions.
+
+§ 5 SPEED METRICS
+
+§ 5.1 INPUT PER SECOND (IPS)
+
+We present IPS, a variant (for convenience of notation) of the commonly used CPS metric [40] to measure the entry speeds of multistep techniques.
+
+$$
+\text{ IPS } = \frac{\left| \mathrm{{AS}}\right| }{t} \tag{1}
+$$
+
+IPS uses the length of the action sequence $|AS|$ instead of the length of the transcribed text $|TT|$ to account for all multi-step input actions. It is particularly useful for finding out whether the total number of actions needed for input affects performance.
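As an illustration, Equation 1 can be computed directly from an action log. The sketch below (not the authors' implementation) uses the 24 dot/dash actions of "quickly" in standard Morse code with a hypothetical input time:

```python
def ips(action_sequence_length: int, input_time_s: float) -> float:
    """Input Per Second (Equation 1): |AS| / t."""
    return action_sequence_length / input_time_s

# "quickly" needs 24 dot/dash actions in Morse code
# (q=4, u=3, i=2, c=4, k=3, l=4, y=4); entering them in a
# hypothetical 20 seconds gives 24 / 20 = 1.2 IPS.
print(ips(24, 20.0))  # 1.2
```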
+
+§ 6 PROPOSED ACTION-LEVEL METRICS
+
+§ 6.1 UNIT ERROR RATE (UNITER)
+
+The first of our novel metrics, unit error rate (UnitER) represents the average number of errors committed by the user when entering one input unit, such as a character or a symbol. This metric is calculated in the following three steps:
+
+§ 6.1.1 STEP 1: OPTIMAL ALIGNMENT
+
+First, obtain an optimal alignment between the presented text (PT) and the transcribed text (TT) using a variant of the MSD algorithm [23]. This addresses all instances where the lengths of the presented ($|PT|$) and transcribed ($|TT|$) text differ.
+
+$$
+\operatorname{MSD}(a, b) = \begin{cases} |b|, & \text{if } a \text{ is empty}, \\ |a|, & \text{if } b \text{ is empty}, \\ 0, & \text{if } a = b, \\ S, & \text{otherwise}, \end{cases} \tag{2}
+$$
+
+where $S$ is defined as
+
+$$
+S = \min \begin{cases} \operatorname{MSD}(a[1{:}], b[1{:}]), & \text{if } a[0] = b[0], \\ \operatorname{MSD}(a[1{:}], b) + 1, \\ \operatorname{MSD}(a, b[1{:}]) + 1, \\ \operatorname{MSD}(a[1{:}], b[1{:}]) + 1. \end{cases} \tag{3}
+$$
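Equations 2 and 3 describe the standard Levenshtein recurrence. A sketch of an equivalent (iterative, suffix-based) implementation, not taken from the authors' code, is:

```python
def msd(a: str, b: str) -> int:
    """Minimum string distance (Equations 2-3), computed bottom-up."""
    # dp[i][j] = MSD between the suffixes a[i:] and b[j:]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][len(b)] = len(a) - i          # b exhausted: delete rest of a
    for j in range(len(b) + 1):
        dp[len(a)][j] = len(b) - j          # a exhausted: insert rest of b
    for i in range(len(a) - 1, -1, -1):
        for j in range(len(b) - 1, -1, -1):
            candidates = [dp[i + 1][j] + 1,       # deletion
                          dp[i][j + 1] + 1,       # insertion
                          dp[i + 1][j + 1] + 1]   # substitution
            if a[i] == b[j]:
                candidates.append(dp[i + 1][j + 1])  # match, no cost
            dp[i][j] = min(candidates)
    return dp[0][0]

print(msd("quickly", "qucehkly"))  # 3, matching the paper's example
```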
+
+If multiple alignments are possible for a given MSD between two strings, select the one with the least number of insertions and deletions. If there are multiple alignments with the same number of insertions and deletions, then select the first such alignment. For example, if the user enters "qucehkly"(TT)instead of the target word "quickly"(PT), then $\operatorname{MSD}\left( {{PT},{TT}}\right) = 3$ and the following alignments are possible:
+
+quic--kly quic-kly qui-ckly qu-ickly
+
+qu-cehkly qucehkly qucehkly qucehkly
+
+Here, a dash in the top sequence represents an insertion, a dash in the bottom represents a deletion, and different letters in the top and bottom represent a substitution. Our algorithm selects the second alignment (quic-kly aligned with qucehkly), since it is the first alignment with the fewest insertions and deletions.
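One way to realize this selection rule is to enumerate every optimal alignment from the edit-distance table and keep the first one with the fewest gaps. This is only a sketch: the enumeration order below is our own choice, not necessarily the authors' tie-breaking order.

```python
def best_alignment(pt: str, tt: str):
    """Return an optimal alignment with the fewest insertions/deletions.

    Dashes in the top string mark insertions, dashes in the bottom
    mark deletions, following the paper's convention.
    """
    n, m = len(pt), len(tt)
    # dp[i][j] = edit distance between the suffixes pt[i:] and tt[j:]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n:
                dp[i][j] = m - j
            elif j == m:
                dp[i][j] = n - i
            else:
                sub = dp[i + 1][j + 1] + (pt[i] != tt[j])
                dp[i][j] = min(sub, dp[i + 1][j] + 1, dp[i][j + 1] + 1)

    def walk(i, j, top, bot):
        """Yield all optimal alignments reachable from cell (i, j)."""
        if i == n and j == m:
            yield top, bot
            return
        if i < n and j < m and dp[i][j] == dp[i + 1][j + 1] + (pt[i] != tt[j]):
            yield from walk(i + 1, j + 1, top + pt[i], bot + tt[j])
        if i < n and dp[i][j] == dp[i + 1][j] + 1:      # deletion
            yield from walk(i + 1, j, top + pt[i], bot + "-")
        if j < m and dp[i][j] == dp[i][j + 1] + 1:      # insertion
            yield from walk(i, j + 1, top + "-", bot + tt[j])

    # min() is stable, so among alignments with equally few gaps the
    # first enumerated one wins.
    return min(walk(0, 0, "", ""),
               key=lambda a: a[0].count("-") + a[1].count("-"))

print(best_alignment("quickly", "qucehkly"))  # ('quic-kly', 'qucehkly')
```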
+
+§ 6.1.2 STEP 2: CONSTRUCTIVE VS. CHORDED
+
+Because the sequence of performed actions is inconsequential in chorded methods, sort both the required $\left( {p{t}_{i}}\right)$ and the performed actions $\left( {t{t}_{i}}\right)$ using any sorting algorithm to obtain consistent MSD scores. Action sequences are not sorted for constructive methods since the order in which they are performed is vital for such methods.
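The distinction can be illustrated with two tiny checks (the Senorita chord and Morse codes used here are taken from the techniques described later in the paper):

```python
# Chorded (Senorita): the "h" chord presses the E and T keys; the order
# of the key-down events is irrelevant, so sequences are sorted first.
required = ["E", "T"]
performed = ["T", "E"]
print(sorted(performed) == sorted(required))  # True: not an error

# Constructive (Morse): order matters, so sequences are NOT sorted.
x = ["-", ".", ".", "-"]       # Morse "x"
c = ["-", ".", "-", "."]       # Morse "c": same actions, different order
print(x == c)                  # False: a real error
print(sorted(x) == sorted(c))  # True: sorting would wrongly hide it
```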
+
+§ 6.1.3 STEP 3: MINIMUM STRING DISTANCE
+
+Finally, apply the MSD algorithm [21] to measure the minimum number of actions needed to correct an incorrect sequence.
+
+$$
+\text{UnitER} = \frac{\sum_{i=1}^{\left|\overline{TT}\right|} \frac{\operatorname{MSD}\left(\overline{pt_i}, \overline{tt_i}\right)}{\max\left(\left|\overline{pt_i}\right|, \left|\overline{tt_i}\right|\right)}}{\left|\overline{TT}\right|} \times 100\% \tag{4}
+$$
+
+Here, $\left|\overline{TT}\right|$ is the length of the aligned transcribed text (the same as $\left|\overline{PT}\right|$), and $\overline{pt_i}$ and $\overline{tt_i}$ are the sequences of actions (sorted for chorded techniques in Step 2) required and performed, respectively, to enter the $i$th character in $TT$.
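Equation 4 can be sketched as follows: a minimal implementation over already-aligned per-character action sequences, not the authors' code. Applied to the Table 3 examples, it reproduces the reported UnitER values:

```python
def msd(a, b):
    """Levenshtein distance between two action sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def unit_er(required_seqs, performed_seqs):
    """Unit error rate (Equation 4) over aligned character pairs."""
    per_char = [msd(pt_i, tt_i) / max(len(pt_i), len(tt_i))
                for pt_i, tt_i in zip(required_seqs, performed_seqs)]
    return sum(per_char) / len(per_char) * 100

print(unit_er(["-..-"], ["-..."]))  # 25.0, the Morse row of Table 3
print(unit_er(["MROO"], ["MORO"]))  # 50.0, the Twiddler row of Table 3
```

Note that the Twiddler sequences are compared in order here (its notation is positional); for order-insensitive chords, the sequences would first be sorted as in Step 2.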
+
+This step requires certain considerations about the presence of insertions and deletions in the optimal alignment. If the $i$th character of the aligned transcribed text $\overline{TT}$ is a deletion, then the MSD of the corresponding $i$th character is $100\%$. But when $\overline{PT}$ contains an insertion, a misstroke error is assumed, as it is the most common cause of insertions [11]. A misstroke error occurs when the user mistakenly strokes (or taps) an incorrect key. However, the question remains: to which character do we attribute the insertion? For this, we propose comparing the MSDs of the current $TT_i$ to the neighboring letters of $PT_i$ (which differ between layouts), and attributing it to the one with the lowest MSD. If there is a tie, we attribute it to the right neighbor $PT_{i+1}$. We propose this simplification since it is difficult to determine the exact cause of an insertion in such a scenario.
+
+This metric can also be used to measure the error rate of a specific character, in which case Equation 5 considers only the character under investigation, where $c$ is that character and $\operatorname{Total}(c)$ is the total number of occurrences of $c$ in the transcribed text.
+
+$$
+\operatorname{UnitER}(c) = \frac{\sum_{i=1}^{\left|\overline{TT}\right|} \frac{\operatorname{MSD}\left(\overline{pt_i}, \overline{tt_i}\right)}{\max\left(\left|\overline{pt_i}\right|, \left|\overline{tt_i}\right|\right)} \left(\text{if } PT_i = c\right)}{\operatorname{Total}(c)} \tag{5}
+$$
+
+§ 6.2 UNIT ACCURACY (UA)
+
+Unit accuracy (UA) is simply a convenient reframing of UnitER: instead of an error rate, it represents the accuracy rate of a unit. Unlike UnitER (i.e., $0\%$-$100\%$), the values of UA range between 0 and 1 inclusive, reflecting the action-level nature of the metric. UA can also be computed for a specific character $c$, using Equation 7.
+
+$$
+\mathrm{{UA}} = \frac{{100} - \text{ UnitER }}{100} \tag{6}
+$$
+
+$$
+\operatorname{UA}\left( c\right) = \frac{{100} - \operatorname{UnitER}\left( c\right) }{100} \tag{7}
+$$
+
+§ 6.3 UNIT COMPLEXITY (UNITCX)
+
+Apart from speed and accuracy, we also propose the following novel metric to measure an input unit's complexity. For this, each action in a unit (such as a character or a symbol) is categorized into a difficulty level, represented by a continuous value from 0 to 1.
+
+$$
+\text{ UnitCX } = \frac{\left( {\mathop{\sum }\limits_{{n = 1}}^{\left| t{t}_{i}\right| }\frac{d\left( {a}_{n}\right) }{\left| TT\right| }}\right) - {d}_{\min }}{{d}_{\max } - {d}_{\min }} \tag{8}
+$$
+
+Here, $d(a_n)$ signifies the difficulty level of the $n$th action in $tt_i$, and $d_{\min}$ and $d_{\max}$ are the minimum and maximum difficulty levels of all actions within or between text entry techniques. This yields a normalized unit complexity value, ranging from 0 to 1. The difficulty level of an action is based on a custom convention, considering posture and ergonomics, memorability, and the frequency of the letters [10]. However, more sophisticated methods are available in the literature [9, 42].
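A sketch of Equation 8 for a single unit, under the assumption that the inner sum averages the per-action difficulties of that unit before rescaling. The difficulty values below are hypothetical, not the convention used in the paper:

```python
def unit_cx(difficulties, d_min=0.0, d_max=1.0):
    """Unit complexity (Equation 8): mean action difficulty of one unit,
    rescaled into [0, 1] by the overall min/max difficulty levels."""
    mean_d = sum(difficulties) / len(difficulties)
    return (mean_d - d_min) / (d_max - d_min)

# Hypothetical difficulty levels: tap = 0.2, swipe = 0.5, long-press = 0.6.
print(unit_cx([0.2, 0.2, 0.6]))  # roughly 0.33 for a three-action unit
```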
+
+§ 7 EXPERIMENT: VALIDATION
+
+We validated the effectiveness of our proposed metrics by applying them to data collected from a longitudinal user study that evaluated one constructive and one chorded text entry technique. Although we conducted a comparative study, our intent was to demonstrate how the proposed metrics can provide deeper insights into the techniques' performance and limitations, specifically with respect to learning.
+
+§ 8 APPARATUS
+
+We used a Motorola Moto G5 Plus smartphone (150.2 × 74 × 7.7 mm, 155 g) with a 1080 × 1920 pixel display in the study (Fig. 3). The virtual multi-step keyboards used in the study were developed with Android Studio 3.1 (SDK 27). The keyboards logged all user actions with timestamps and calculated all performance metrics directly.
+
+§ 9 CONSTRUCTIVE METHOD: MORSE CODE
+
+We received the original code from the authors of a Morse code keyboard [16] to investigate the performance of a constructive keyboard. It enables users to enter characters using sequences of dots (.) and dashes (-) [33]. The keyboard has dedicated keys for dot, dash, backspace, and space (Fig. 1). To enter the letter "R", represented by ".-" in Morse code, the user presses the respective keys in that exact sequence, followed by the SEND key, which terminates the input sequence. The user presses the NEXT key to terminate the current phrase and progress to the next one.
+
+
+Figure 1: The Morse code inspired constructive keyboard used in the experiment.
+
+§ 10 CHORDED METHOD: SENORITA
+
+We received the original code from the authors of the chorded keyboard Senorita [28]. It enables users to enter characters using eight keys (Fig. 2). The most frequent eight letters in English appear on the top of the key labels, and are entered with only one tap of their respective key. All other letters require simultaneous taps on two keys (with two thumbs). For example, the user taps on the "E" and "T" keys together to enter the letter "H". The keyboard provides visual feedback to facilitate learning. Pressing a key with one thumb highlights all available chords corresponding to that key, and right-thumb keys have a lighter shade than left-thumb keys (Fig. 2, right).
+
+
+Figure 2: The Senorita chorded keyboard used in the experiment. It enables users to enter text by pressing two keys simultaneously using the thumbs. Pressing a key highlights all available chords for the key (right).
+
+§ 11 PARTICIPANTS
+
+Ten participants, aged 18 to 37 years ($M = 26.1$, $SD = 5.6$), took part in the experiment. Six identified as male, four as female. None identified as non-binary. Eight were right-handed, one was left-handed, and one was ambidextrous. They all used both hands to hold the device and their thumbs to type. All participants were proficient in English: six rated themselves as Level 5 (Functionally Native) and four as Level 4 (Advanced Professional) on the Interagency Language Roundtable (ILR) scale [1]. All participants were experienced smartphone users, with on average 9.3 years of experience ($SD = 1.8$). None of them had prior experience with any chorded method, but eight had used a constructive method in the past (either multi-tap or Pinyin). None had experience with the keyboards used in the study. They all received US$50 for volunteering.
+
+§ 12 DESIGN
+
+We used a within-subjects design, where the independent variables were input method and session. The dependent variables were the IPS, APC, ER, and UnitER metrics. In summary, the design was as follows.
+
+10 participants $\times$
+
+5 sessions (different days) $\times$
+
+2 input methods (constructive vs. chorded, counterbalanced) $\times$
+
+10 repetitions of a pangram
+
+= 1,000 phrases, in total.
+
+§ 13 PROCEDURE
+
+To study learning of all letters of the English alphabet, we used the following five pangrams during the experiment, all in lowercase.
+
+quick brown fox jumps over the lazy dog
+
+the five boxing wizards jump quickly
+
+fix problem quickly with galvanized jets
+
+pack my red box with five dozen quality jugs
+
+two driven jocks help fax my big quiz
+
+Participants were divided into two groups: one started with the constructive method and the other with the chorded method. This order was switched in each subsequent session. Sessions were scheduled on different days, with at most a two-day gap in between. In each session, participants entered one pangram ten times with one method, and then a different pangram ten times with the other method. Pangrams were never repeated for a method. There was a mandatory 30-60 minute break between the conditions to mitigate the effect of fatigue. During the first session, we demonstrated both methods to participants and collected their consent forms. We asked them to complete a demographics and mobile usage questionnaire. We then allowed them to practice with the methods, entering all letters (A-Z) until they felt comfortable. Participants were provided with a cheat-sheet for Morse code that included all combinations (Fig. 3, left), as Morse code relies on memorizing them. For Senorita, we did not provide a cheat-sheet because the keyboard provides visual feedback by displaying the characters (i.e., it has a "built-in cheat-sheet"). The experiment started shortly after that: a custom app presented one pangram at a time and asked participants to enter it "as fast and accurately as possible". Once done, they pressed the NEXT key to re-enter the phrase. Logging started with the first tap and ended with the last. Error correction was intentionally disabled to exclude correction efforts from the data when observing the UnitER metric. Subsequent sessions used the same procedure, excluding the practice and the questionnaire.
+
+
+Figure 3: Two volunteers entering text using: (left) the Morse code constructive keyboard with the assistance of a cheat- sheet, and (right) the Senorita chorded keyboard.
+
+§ 14 RESULTS
+
+A Shapiro-Wilk test and a Mauchly's test revealed that the assumptions of normality and sphericity, respectively, were not violated for the data. Hence, we used repeated-measures ANOVAs for all analyses.
+
+§ 14.1 INPUT PER SECOND (IPS)
+
+An ANOVA identified a significant effect of session on IPS for both the constructive method ($F_{4,36} = 17.16$, $p < .0001$) and the chorded method ($F_{4,36} = 28.26$, $p < .0001$). On average, constructive and chorded yielded 1.29 ($SD = 0.38$) and 1.02 ($SD = 0.29$) IPS, respectively. Fig. 4 (top) displays IPS for both methods in all sessions. The commonly used WPM metric yielded comparable statistical results for both the constructive method ($F_{4,36} = 17.32$, $p < .0001$) and the chorded method ($F_{4,36} = 27.69$, $p < .0001$).
+
+§ 14.2 ACTIONS PER CHARACTER (APC)
+
+An ANOVA identified a significant effect of session on APC for the constructive ($F_{4,36} = 2.84$, $p < .05$) and chorded methods ($F_{4,36} = 6.52$, $p < .0005$). Average APC for the constructive and chorded methods were 2.58 ($SD = 0.13$) and 1.49 ($SD = 0.02$), respectively. Fig. 4 (bottom) displays APC for both methods in all sessions.
+
+§ 14.3 ERROR RATE (ER)
+
+Error rate (ER) is a commonly used error metric, traditionally calculated as the ratio of the total number of incorrect characters in the transcribed text to the length of the transcribed text [6]. An ANOVA identified a significant effect of session on ER for constructive ($F_{4,36} = 2.86$, $p < .05$), but not for chorded ($F_{4,36} = 1.79$, $p = .15$). On average, the constructive and chorded methods yielded 6.05% ($SD = 10.11$) and 2.46% ($SD = 4.65$) ER, respectively. Fig. 5 (top) displays ER for both methods in all sessions. Another widely used error metric, TER [34], yielded comparable results for the constructive method ($F_{4,36} = 3.33$, $p < .05$) and the chorded method ($F_{4,36} = 1.51$, $p = .22$).
+
+
+Figure 4: Average IPS for the two methods in all sessions (top) and average APC for the two methods in all sessions (bottom). Note the scale on the vertical axis.
+
+§ 14.4 UNIT ERROR RATE (UNITER)
+
+An ANOVA failed to identify a significant effect of session on UnitER for both constructive ($F_{4,36} = 2.14$, $p = .09$) and chorded ($F_{4,36} = 0.12$, $p = .97$). On average, UnitER for the constructive and chorded methods were 1.32 ($SD = 2.2$) and 0.91 ($SD = 2.1$). Fig. 5 (bottom) displays UnitER for both methods in all sessions.
+
+§ 15 DISCUSSION
+
+There was a significant effect of session on IPS for both methods, which suggests that entry speed improved with practice. Accordingly, entry speed per session correlated well with the power law of practice [9] for both the constructive ($R^2 = 0.98$) and chorded ($R^2 = 0.95$) methods, see Fig. 4 (top). As stated earlier, the commonly used WPM [6] metric yielded comparable statistical results. There was a significant effect of session on APC for the two techniques. This may seem odd just by looking at Fig. 4 (bottom), where both lines appear flat. However, a Tukey-Kramer multiple-comparison test revealed that sessions 1 and 2 were significantly different for constructive, while sessions 1, 4, and 5 were different for chorded. Similar trends were observed for UnitCX and the commonly used KSPC metric. We failed to identify any reason for this phenomenon.
+
+Interestingly, there was a significant effect of session on ER for constructive, but not for chorded. Accordingly, ER per session correlated well with the power law of practice [9] for the constructive method ($R^2 = 0.94$), see Fig. 5 (top). This, and the fact that IPS improved over sessions for chorded, suggests that participants learned to type faster with the method while maintaining a consistent error rate. Additional data is needed to fully investigate this. There was no significant effect of session on UnitER for either method. Yet, UnitER per session correlated well with the power law of practice [9] for constructive ($R^2 = 0.94$), see Fig. 5 (bottom). Despite not reaching statistical significance, this metric provides useful details, as we will demonstrate in the following section: unlike high-level text entry metrics that only look at differences in the output, action-level metrics account for similarities in actions.
+
+
+Figure 5: Average ER for the two methods in all sessions (top) and average UnitER for the two methods in all sessions (bottom). Note the scale on the vertical axis.
+
+§ 16 ACTION-LEVEL ANALYSIS
+
+The above discussion demonstrates that the proposed metrics (on the whole text) complement the conventional metrics by facilitating a discussion of the methods' speed and accuracy in general. In this section, we demonstrate how the proposed metrics shed further light on these methods' performance and limitations.
+
+To investigate which letters may have hampered text entry speed and accuracy, we first calculated UnitER for each letter. Because the letters do not appear the same number of times, we normalized the values based on each letter's total appearances in the study. We then identified the letters that were both difficult to enter and to learn to input accurately (henceforth "Not Learned"), and the letters that were difficult to enter but relatively easier to learn to input accurately (henceforth "Learned"). For this, we compared the average UnitER of each letter over the first three sessions with that over the last two sessions. The "Learned" group included the letters that demonstrated a significant improvement in UnitER from the first to the last sessions. In contrast, the "Not Learned" group included the letters that consistently yielded higher UnitER in all sessions.
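The grouping step can be sketched as follows. Both the 50% improvement threshold and the sample values are hypothetical stand-ins for the significance-based comparison described above:

```python
def classify_letters(uniter_by_session, early=3):
    """Split letters into 'learned' vs. 'not learned' by comparing the
    mean UnitER of the early sessions with that of the later sessions.

    uniter_by_session maps letter -> per-session UnitER values,
    already normalized by the letter's total appearances.
    """
    learned, not_learned = [], []
    for letter, values in uniter_by_session.items():
        first = sum(values[:early]) / early
        last = sum(values[early:]) / len(values[early:])
        # Hypothetical rule: a clear drop in UnitER counts as learning.
        (learned if last < first * 0.5 else not_learned).append(letter)
    return learned, not_learned

# Hypothetical per-session UnitER values for two letters:
sessions = {"g": [30, 25, 20, 10, 5], "z": [30, 32, 28, 31, 29]}
print(classify_letters(sessions))  # (['g'], ['z'])
```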
+
+§ 16.1 CONSTRUCTIVE METHOD
+
+Table 4 displays the top four Not Learned and Learned letters for the Morse code keyboard. For better visualization, we calculated unit accuracy (UA) for these letters using the simple transformation given by Equation 7 and fitted them to the power law of practice [9] to identify any trends, see Fig. 6.
+
+Fig. 6 (top) illustrates the high inconsistency in UA across the Not Learned letters. However, for the Learned group (Fig. 6, bottom), there is a prominent upward trend. Evidently, participants were still learning these letters in the last session, suggesting that performance with them would have continued to increase had the study lasted longer. We performed multiple comparisons between the letters to find the cause of entry difficulty, as well as the factors that facilitated learning, and identified the following patterns that may have contributed to these trends.
+
+Table 4: Action-level representation of the difficult to enter and learn (Not Learned) and the difficult to enter but easier to learn (Learned) letters for the constructive method.
+
+| Not Learned | Sequence | Learned | Sequence |
+| --- | --- | --- | --- |
+| z | --.. | g | --. |
+| h | .... | p | .--. |
+| s | ... | x | -..- |
+| c | -.-. | q | --.- |
+
+
+Figure 6: (top) Average unit accuracy (UA) of the letters z, h, s, c per session with power trendlines. Trends for these letters show decreasing UA across sessions, indicating that learning was not occurring. (bottom) Average UA of the letters g, p, x, q per session with power trendlines. Trends for these letters show increasing UA as the sessions progressed, indicating learning.
+
+Analysis revealed that participants had difficulty differentiating between letters that require similar actions to enter. For instance, it was difficult for them to differentiate between h ("....") and s ("..."), and between k ("-.-") and c ("-.-."), presumably because their corresponding actions are very similar. A deeper analysis of the action-level errors revealed that participants frequently entered an extra dot when typing "k", resulting in the letter "c", and vice versa. This error totaled 23% of all UnitER for "c". Participants also made similar mistakes for other Not Learned letters. For example, they entered s ("...") instead of h ("....") and vice versa, resulting in 50% of all UnitER for "h" and 30% of all UnitER for "s". This information is vital. It helps the designer of a constructive method assign sufficiently different actions to frequent letters to avoid confusion, likely resulting in increased speed and accuracy.
+
+An interesting observation is that participants tended to enter an extra dot or dash when these actions are repeated. For example, participants often entered "---.." or similar patterns for z ("--.."), resulting in 17% of all UnitER for "z". Likewise, participants entered additional dots when entering g ("--."), producing z ("--..") instead, which resulted in 20% of all UnitER for "g". Similar errors were committed for other letters as well. These details are useful, since the designer can compensate by reducing the number of repeated actions for frequent letters.
+
+§ 16.2 CHORDED METHOD
+
+Table 5 displays the top four Not Learned and Learned letters for the Senorita keyboard. As with the constructive method, we calculated UA for these letters using Equation 7 and fitted them to the power law of practice [9] to identify any trends, see Fig. 8. We observed that, as with the constructive method, the trends for all Learned letters were increasing, suggesting that learning occurred across the techniques. Participants were still learning these letters in the last session, indicating that performance on them might have been much better had the study lasted longer. Multiple comparisons of the action-level data per letter identified the following factors that may have contributed to these trends.
+
+Table 5: Action-level representation of the difficult to enter and learn (Not Learned) and the difficult to enter but easier to learn (Learned) letters for the chorded method.
+
+| Not Learned letter | Chord | Learned letter | Chord |
+| --- | --- | --- | --- |
+| q | rs | x | ir |
+| p | er | d | eo |
+| m | en | f | io |
+| z | rt | l | at |
+
+The letters 'q', 'p', and 'z' have the 'r' key in their chords ('rs', 'er', and 'rt', respectively), which is the key furthest from the right side. We speculate that it is difficult to learn, and to be fast and accurate with, a letter that has 'r' in its chord pair, because 'r' and 's' are the keys furthest from the edges (Fig. 7). Keys near the center of the screen are more difficult to reach than those at the edges. Relevantly, the four letters with improving trendlines ('x', 'd', 'f', and 'l') have chord pairs that are close to the screen edge. This detail may encourage designers to place the most frequent letters toward the edges of the device.
+
+
+Figure 7: The user stretching her thumb to reach the letter "s" (left) and the letter "r" (right).
+
+§ 16.3 TIME SPENT PER LETTER
+
+We compared the above analysis with the time spent performing the sequences and chords for each letter in the last two sessions. The purpose was to further validate the metrics by studying whether the letters that took additional time to enter correspond to the Not Learned letters. Fig. 9 (a) illustrates all difficult letters (highlighted in red and green) identified for the constructive method. One can see the correlation between UnitER and the time spent entering the sequences of these letters, suggesting that these letters require higher cognitive and motor effort to enter. Similarly, Fig. 9 (b)
+
+
+Figure 8: (top) Average Unit Accuracy (UA) of the letters q, p, m, z per session with power trendlines. Trends for the letters show decreasing UA across sessions, indicating that learning was not occurring, (bottom) average UA of the letters $x,d,f,l$ per session with power trendlines. Trends for these letters show increasing UA as the sessions progressed, indicating learning.
+
+illustrates all difficult letters (highlighted in red and green) identified for the chorded method. One can see that the chords (q, v, p, g, z, j) required more time than taps (s, e, n, o, r, i, t, a). However, the chords that are composed of the keys closer to the boundaries were much faster (e.g., h, u, l, etc.). This further strengthens the argument of the previous section.
+
+§ 17 CONCLUSION
+
+In this paper, we proposed a trio of action-level performance metrics (UnitER, UA, and UnitCX) aimed at multi-step text entry techniques that account and compensate for the constructive and chorded input process. We validated these metrics in a longitudinal study involving one constructive and one chorded technique. In our presentation, we used existing techniques such as Morse code [33] and Senorita [28], not to change their mapping or design, but to demonstrate how the newly proposed metrics can be applied to different multi-step techniques and give deeper insight into possible limitations of the techniques, with an emphasis on learning. None of the metrics yields such conclusions automatically, but they can point towards limitations. Our UnitER helped us investigate specific erroneous characters that conventional metrics failed to identify. The results of this study demonstrate how the proposed metrics provide a deeper understanding of action-level error behaviors, particularly the difficulties in performing and learning the sequences and chords of the letters, facilitating the design of faster and more accurate multi-step keyboards. Although there is no formal method to analyze the action-level actions of multi-step methods, it is likely that veteran text entry researchers perform similar analyses on user study data. This work provides such a formal method, which will enable researchers new to the area to perform these analyses, as well as facilitate better comparison between methods from the literature.
+
+
+Figure 9: (top) Average UnitER of all letters vs. the time spent performing the sequences for those letters with the constructive method. (bottom) Average UnitER of all letters vs. the time spent performing the chords for those letters with the chorded method.
+
+§ 18 FUTURE WORK
+
+We intend to combine the insights gained from the proposed action-level metrics, particularly the most error-prone characters and the types of errors frequently committed in the sequence or chord of these characters, with machine learning approaches to make multistep keyboards adapt to human errors. Such a system can also use language models to provide users with more effective auto-correction and word suggestion techniques.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b997e9bdbe450750524ede57c18b5d8bdd8888c
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,403 @@
+# A Comparative Evaluation of Techniques for Locating Out-of-View Targets in Virtual Reality
+
+Category: Research
+
+## Abstract
+
+In this work, we present the design and comparative evaluation of techniques for increasing awareness of out-of-view targets in virtual reality. We first compare two variants of SOUS, a technique that guides the user to out-of-view targets using circle cues in their peripheral vision, with the existing FlyingARrow technique, in which arrows fly from the user's central (foveal) vision toward the target. fSOUS, a variant with low visual salience, performed well in a simple environment but not in visually complex environments, while bSOUS, a visually salient variant, yielded faster target selection than both fSOUS and FlyingARrow across all environments. We then compare hybrid techniques in which aspects of SOUS relating to unobtrusiveness and visual persistence were reflected in design modifications made to FlyingARrow. Increasing persistence by adding trails to arrows improved performance but raised concerns about obtrusiveness, while other modifications yielded slower and less accurate target acquisition. Nevertheless, since fSOUS and bSOUS are designed exclusively for head-mounted displays with wide fields of view, FlyingARrow with a trail can still be beneficial for devices with a limited field of view.
+
+Index Terms: Human-centered computing-Virtual Reality
+
+## 1 INTRODUCTION
+
+Locating and selecting out-of-view targets without prior knowledge of their positions is a demanding task, particularly in a virtual reality (VR) environment displayed on a commodity head-mounted display (HMD) with a limited field of view (FoV) [7]. However, we can also use VR to augment our field of view with artificial visual cues that assist us in finding and selecting the target (e.g., [7, 8, 29]). Building on prior work employing visual effects in peripheral vision to enhance awareness of off-screen objects or actions [18, 28], and taking advantage of the recent emergence of commodity HMDs with wide FoV, we designed Sign-of-the-UnSeen (SOUS) to allow a user to become aware of and then acquire out-of-view objects. The cue strictly resides within the user's peripheral vision and moves radially around the user's gaze to inform the user of the position of an out-of-view target. To explore the extent to which SOUS can be unobtrusive to the VR scene while remaining useful, we compare two variants: bold SOUS (bSOUS), an opaque circular cue, and faint SOUS (fSOUS), a transparent circular cue. Figure 1 contains screenshots of the techniques in action.
+
+We conducted two experiments. In the first experiment, we compared bSOUS and fSOUS with a modified version of FlyingARrow (FA) [7]. FA consists of a 3D arrow that flies toward a target. In the experiment participants selected targets placed in their central vision, and periodically acquired off-screen targets indicated by one of the three techniques. Participants acquired off-screen targets faster using bSOUS and fSOUS than with FA, as many participants would visually track the arrow to its destination. While the increase in performance is small, it is significant in contexts such as competitive gaming where any performance increase can lead to a victory. Participants sometimes did not notice target cueing with fSOUS in visually complex environments, while bSOUS and FA were more robust to varied detail and changes in the environment. Despite FA being the slowest technique, it had higher overall subjective ratings. While bSOUS compared well to FA in these ratings, many participants found fSOUS frustrating to use.
+
+By being placed in the user's peripheral vision, SOUS is less obstructive than FA, as it does not block the scene in the user's central vision. In addition, while FA arrows move through the scene in the 3D environment, SOUS cues are presented on an invisible layer in front of the scene (as with a Heads-Up Display, or HUD); this may further help distinguish SOUS as a cue, rather than another object in the scene. SOUS is also more visually persistent than FA, whose arrow flies outside the user's FoV if not followed. If these qualities could be provided to FA, the result could be beneficial for HMDs whose limited FoV cannot display a technique, like SOUS, that requires far-peripheral vision. We therefore extended FlyingARrow (FA) [8] with two behaviours: +Arc and +Trail. +Arc attempts to make FA less obstructive by making the cue orbit around the user at a set distance, keeping it out of the way of on-screen targets and making it more distinctive as a cue to the user versus an object in the scene. +Trail makes FA emit a trail, making it more visually persistent: if the user loses sight of the arrow, the trail lingers, allowing the user to remain aware of the target's direction.
+
+In a second experiment, we compared FA, FA+Arc, FA+Trail, and FA+Arc+Trail. This experiment involved the same combination of selecting on-screen and off-screen targets. We found that +Arc slowed down participants and did not make the technique more comfortable to use. While participants found +Arc to be slightly less obtrusive, it was also less robust to visual complexity, suggesting that the intended increase in visual distinctiveness was not achieved. Despite participants stating that +Trail was more obtrusive, including a trail improved the speed of target acquisition.
+
+In the remainder of this paper we discuss related work in visual perception and techniques for off-screen target awareness and acquisition, detail the SOUS and FA designs, describe our experiments and present results. After this we discuss the implications of our findings for the design of cues for off-screen objects in VR.
+
+## 2 RELATED WORK
+
+### 2.1 Visual Perception
+
+Prior literature [14, 19] suggests that the shape of cues in peripheral vision should be simple, because it is difficult to distinguish complex shapes in this region. Luyten et al. [14] performed an experiment where each participant wore a pair of glasses. On each side of the glasses, there was a colour screen that could display a shape. Each shape was positioned almost 90° away from the foveal center of vision. They found the participants could recognize that the shapes were different but had difficulty recognizing composite shapes. While our ability to distinguish shapes is reduced in peripheral vision [19], it is adapted to notice sudden changes [12]. Work by Bartram et al. [2] suggests animation can further enhance awareness of new objects in the peripheral vision, while Luyten et al. [14] found that blinking is effective in the peripheral region to notify the user of changes.
+
+Ball and North [1] conducted a study to investigate why users have better target acquisition performance on larger screens, finding that the improved performance was due to peripheral awareness of content that they could rapidly focus on. Although bSOUS and fSOUS are VR techniques, they build on Ball and North's observations by making use of the user's peripheral vision and providing support for rapidly locating the indicated off-screen target.
+
+Visual cues can interact with other objects in the visual field, impacting their ability to capture attention. According to Rosenholtz et al. [20], whether we will find a target or not depends on the visual salience of the environment. Salience indicates how different the target is from its environment. For example, a low-salience target tends to have a similar visual appearance to or blend in with the environment. Experiments by Neider and Zelinsky [17] support this by demonstrating that it is more difficult for a person to find a target if the background resembles the target. Additionally, Rosenholtz et al. [20], and Rosenholtz [19] present mathematical formulae to quantify visual salience, but they are designed for static images projected onto 2D space and in scenarios without animation. Therefore, they don't directly translate to dynamic VR environments and are not used in our study. More recent work has explored machine learning methods to model visual salience (e.g. [13]); while such techniques are promising ways to measure salience in a given scene, no standard has been established for using these methods to generate scenes with desirable salience attributes in controlled experiments.
+
+Cues with high visual salience tend to be more effective, but such cues are not always appropriate. For example, in cinematic viewing, users may prefer subtler cues to avoid obstruction and distraction. McNamara et al. [15] designed a study where a part of a 2D screen would be subtly modulated in order to guide participants toward a certain area. Their study showed some efficacy in modulation. Later, Rothe and Hußmann [21, 22] conducted an experiment where they used spotlights to subtly guide the user to a target and found them to be effective. We created fSOUS as a subtler way to guide the user to out-of-view targets. However, unlike the cues explored in these prior studies [15, 21, 22], our cue strictly resides in the user's peripheral vision.
+
+### 2.2 Existing Techniques
+
+Some existing techniques for guiding the user to out-of-view targets, such as EyeSee360 [7], 3D Wedge [29], and Mirror Ball [4], have roots in an earlier technique called Halo [3]. Halo provides cues for off-screen targets on small-screen handheld devices. Halo uses circles to represent the targets, with sections of the circles rendered on the edge of the device. The position and size of a circle indicate a target's position relative to the area displayed on the device. Halo was compared with a technique called Arrow, which uses arrows pointing toward the targets labelled with the targets' distances. Halo was better at indicating both position and distance in their tests. Burigat et al. [6] compared Halo to a variant of Arrow in which the length of an arrow indicated distance. This allowed participants to more easily rank the distances of the targets but fared worse than Halo for indicating the actual target distances. Schinke et al. [23] developed a handheld Augmented Reality (AR) version of Arrow where 3D arrows point toward AR targets located some distance from the viewer (and often off-screen). The user then uses the device to guide themselves toward the targets. Their evaluation showed the technique to work better than Radar, a technique that provides a simplified overhead view of the area.
+
+Gruenefeld et al. [7] note that many AR HMDs such as the Microsoft HoloLens v1 suffer from limited screen real estate (much like handhelds) and limited FoV. They introduced EyeSee360, an overview technique that allows the user to see out-of-view targets by representing them as dots on a grid. A dot's position on the grid indicates the target's orientation and distance relative to the viewer. EyeSee360 is a visually obtrusive technique; as such, Gruenefeld et al. [9] suggest that the user should be able to set the cue's visibility on an "on-demand" basis. They compared EyeSee360 against Halo, Wedge (a variant of Halo that uses acute isosceles triangles [10]), and Arrow and found it to be the best-performing technique. Gruenefeld et al. [8] later developed FlyingARrow (FA), an animated variant of Arrow, which they found to be slower than EyeSee360. Other overview techniques explored in the literature include Radar and 3D Radar [9], and Mirror Ball [4], which presents a distorted view of the surroundings rendered as a ball.
+
+Yu et al. [29] proposed a 3D variant of Wedge for use in VR, to indicate the relative position and distance of targets. Unlike the original Wedge, the cue for 3D Wedge appears in front of the user instead of around the edges of the screen. Each wedge is also a 3D pyramid whose base points toward the target and whose size indicates the distance. The researchers found that 3D Wedge was more effective at finding targets than overview techniques such as Radar, except when there were many targets. They improved 3D Wedge by embedding an arrow pointing toward the target inside each pyramid.
+
+Unlike the techniques covered so far, which focus on target acquisition, Xiao and Benko [28] and Qian et al. [18] implemented techniques for increasing the user's awareness of off-screen objects without requiring the user to select them. Xiao and Benko [28] added a small LED light grid around the main HMD display. Although the grid had low resolution, it was sufficient for the user to glean additional information in their peripheral vision based on colour and lighting changes. Qian et al. [18] used a similar approach for object awareness specifically: when there is an object close to the user, a corresponding area of the screen's edges lights up. Their evaluation found that this allowed users to notice off-screen targets.
+
+## 3 TECHNIQUES
+
+### 3.1 bSOUS and fSOUS
+
+
+
+Figure 1: a: fSOUS. b: bSOUS.
+
+Sign of the UnSeen (SOUS) is a family of peripheral vision-based techniques that includes bSOUS and fSOUS. When a target of interest appears off-screen, a SOUS cue appears in the user's peripheral vision. The cue moves radially based on the user's position relative to the target. For example, if the target is slightly above and to the left of the user, the cue will appear on the left side, rotated slightly upward around the user's forward gaze cursor. Although we would like the cue to be as far away as possible from the user's foveal vision, there is currently no commercially available VR headset that encompasses the full human visual field (about 105° from the center of foveal vision [25]). Nevertheless, some commercial headsets (e.g., the Pimax 5K Plus used in this study) can display what is considered to be the far periphery. As such, the SOUS cue is located around 60° from the center, which is considered to be in peripheral vision [25] and is displayable by commercially available VR headsets.
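+
+The radial cue placement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the head-relative coordinate convention (x = right, y = up, z = forward) are our assumptions:
+
+```python
+import math
+
+def sous_cue_angle(target_dir):
+    """Return the angle (degrees) around the gaze cursor at which a
+    SOUS cue would be placed, given a head-relative direction vector to
+    the target (x = right, y = up, z = forward).  The cue itself sits
+    at a fixed eccentricity of roughly 60 degrees from the forward
+    axis; only its radial position around the gaze changes."""
+    x, y, _ = target_dir
+    # Polar angle of the target's screen-plane projection:
+    # 0 deg = right, 90 deg = up, 180 deg = left, -90 deg = down.
+    return math.degrees(math.atan2(y, x))
+```
+
+For a target slightly above and to the left of the user (negative x, positive y), the cue would appear in the upper-left region, matching the behaviour described above.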
+
+fSOUS is semi-transparent and subtle, intended to support scenarios such as cinematic viewing in which more explicit cues might be highly disruptive to the viewing experience. We conducted a small pilot experiment with 5 participants to determine a lower-bound opacity level that was still detectable. All five participants found that they could see the cue at 5% opacity within a minimal skybox environment (Figure 4:a). While we also found that closer to 50% opacity would be readily detectable in a more complex environment like Mixed (Figure 4:c), we maintained the 5% level across environments in our experiment. The circular cue uses a radial gradient that shifts between black and white at 5.56 Hz.
+
+bSOUS appears as a ring that blinks from red to white at 1.11 Hz. The cue is opaque and blinks rather than gradually changing colour, making the cue more immediately noticeable in peripheral vision. bSOUS uses a ring instead of the solid circle used by fSOUS because without transparency a solid circle is not visually distinct enough from the targets used in our experiments.
+
+### 3.2 FlyingARrow (FA)
+
+FA is a further refinement of Arrow [23] for immersive AR explored by Gruenefeld et al. [8]. FA's cue is a 3D arrow that flies toward the target, using animation to encourage the user to act. The cue plays a sound once it collides with the target and subsequently disappears.
+
+FA was designed for AR devices with small FoVs like the Microsoft HoloLens v1, and we adapt FA for HMDs with larger FoVs. In the original version, FA arrows start in a corner of the screen and move across the user's limited screen space, to allow the user time to perceive and interpret the cue. Given the increased FoV, in our variant the arrow starts 1 m in front of the user (in virtual space) in the center of the screen. We also removed the sound effect as it was a potential confounding factor: all off-screen targets were equidistant from the user in our experiments, eliminating the need for a distance indicator (the role of the sound). We reused the 3D arrow asset used by Gruenefeld et al. [8], available at https://github.com/UweGruenefeld/OutOfView. The 3D arrow can be seen in Figure 2.
+
+
+
+Figure 2: FlyingARrow as it appeared in the first part of the study. The arrow is travelling toward the target.
+
+We now describe the +Arc and +Trail modifications to FA that we explore in the second experiment. As discussed, +Arc was designed to make FA less obstructive to, and more visually distinguishable from, on-screen targets by orbiting around the user toward the target. While the standard FA cue starts 1 m in front of the user and travels straight to the target at a speed of 10 m/s, a +Arc cue starts $x$ metres away from the user, where $x$ is the physical distance from the user to the target. In our experiment, this is 5 m for all off-screen targets, placing the cue behind the on-screen targets. The cue's physical size is then adjusted to make sure that it has an angular size of 5°, equal to the cue's default angular size at 1 m. The cue then orbits around the user's upward vector at an angular speed of $\tan^{-1}\left(\frac{10\ \mathrm{m/s}}{x}\right)$ per second. This vector is recalculated at every frame update.
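+
+The +Arc geometry above can be expressed compactly. The sketch below reflects our reading of the description (the function names are ours), not the study's Unity code:
+
+```python
+import math
+
+FA_SPEED = 10.0             # m/s, linear speed of the straight-flying FA cue
+DEFAULT_ANGULAR_SIZE = 5.0  # degrees, cue's angular size at 1 m
+
+def arc_angular_speed_deg(x):
+    """Angular speed (degrees per second) of a +Arc cue orbiting at
+    radius x metres: omega = atan(10 m/s / x)."""
+    return math.degrees(math.atan(FA_SPEED / x))
+
+def arc_cue_size_m(x):
+    """Physical diameter (metres) needed for the cue to keep its
+    default 5-degree angular size when placed x metres away."""
+    return 2.0 * x * math.tan(math.radians(DEFAULT_ANGULAR_SIZE / 2.0))
+```
+
+At the experiment's 5 m target distance, the cue would orbit at roughly 63°/s and be scaled to about five times its 1 m size.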
+
++Trail makes the FA cue's visibility persist longer by emitting a trail. The standard FA cue is not visible to the user once it leaves the screen. The trail has the following Unity properties: Widest Size = 0.315 m, Narrowest Size = 0.315 m, Time = 5 s. This trail allows the user to maintain awareness of an FA cue and to follow the trail to relocate it. We altered the shape of the cue from an arrow to a cone in our second experiment across all conditions to improve the visual integration of the trail and main cue.
+
+Below is a summary of the four variations of FA based on the new behaviours:
+
+- FA-Arc-Trail: The cone travels straight and directly to the target without any trail, similar to FA used in the first study.
+
+- FA+Arc-Trail: The cone orbits around the user (rotating around user’s upward vector $\overrightarrow{u}$ at the moment the target first appears) to reach the target. It does not leave a trail.
+
+- FA-Arc+Trail: The cone travels straight to the target, leaving a trail.
+
+- FA+Arc+Trail: The cone orbits around the user to reach the target (rotating around user’s upward vector $\overrightarrow{u}$ at the moment when the target first appears) and leaves behind a trail.
+
+
+
+Figure 3: The variations of FA cues. The number represents the user's current score. a: FA-Arc-Trail in the Training environment. b: FA+Arc-Trail in the Hotel environment. c: FA-Arc+Trail in the None environment. d: FA+Arc+Trail in the Mixed environment. For more information about the environments and the credits for the assets used, please refer to Section 4.
+
+## 4 ENVIRONMENTS
+
+Prior work [17] suggests that the visual complexity of the environment may impact user performance. In order to explore how techniques interact with environment complexity we varied environment as an experimental factor in our study. We created three types of environment: None, Hotel, and Mixed. The details of the environments are as follows:
+
+
+
+Figure 4: Screenshots of the environments as they appeared in our implementation (the left eye/screen is shown here). a: None. b: Hotel. c: Mixed.
+
+- None (Figure 4:a): a generic skybox with a brown horizon and a blue clear sky, representing environments with low visual complexity. Constructed using the default skybox in Unity 2018.3.7f1.
+
+- Hotel (Figure 4:b): a photorealistic skybox of a hotel room, representing environments with moderate visual complexity. This skybox is CC Emil Persson (https://opengameart.org/content/indoors-skyboxes).
+
+- Mixed (Figure 4:c): a combination of a photorealistic skybox and 3D models, some of which are animated; this represents environments with high visual complexity. The 3D models are taken from the Pupil Unity 3D plug-in (https://github.com/pupil-labs/hmd-eyes) and the skybox is CC Emil Persson (https://opengameart.org/content/winter-skyboxes).
+
+While each environment differs in visual complexity, we are unable to quantify this precisely, as discussed previously. Instead, including these environments allows us to generally explore the robustness of each technique to typical environmental differences.
+
+## 5 EXPERIMENT 1: COMPARING BSOUS, FSOUS AND FLYINGARROW
+
+We performed the first experiment to evaluate our techniques bSOUS and fSOUS against an existing technique called FA. Furthermore, since bSOUS and fSOUS have different visual salience (achieved through differences in animation and opacity), we explore the impact of a peripheral cue's visual salience on target acquisition.
+
+### 5.1 Research Questions
+
+RQ1.1: How do the techniques affect target acquisition performance and the user's cognitive load? To measure performance, we collect (1) number of successful out-of-view target acquisitions, and (2) time to acquire an out-of-view target. We administer the NASA TLX to assess cognitive load. An ideal cue has fast acquisition times, high success rate, and low cognitive load.
+
+RQ1.2: How do the techniques interact with the visual scene? We measure how the environments affect (1) number of successful out-of-view target acquisitions, and (2) time to acquire an out-of-view target. An ideal cue works well under a range of visual scenes.
+
+RQ1.3: What are the subjective impressions of the cues? We gather subjective feedback through a questionnaire and interviews. An ideal cue provides a positive experience for the user. A cue with good performance may be less viable than a technique with inferior performance that is preferred by users.
+
+### 5.2 Participants
+
+We conducted the first study at a research university with 24 participants. We recruited the participants using an email list for graduate students in the faculty of computer science at our institution. Six participants were female and 14 were male; four participants did not indicate their gender. 19 participants indicated that they had prior experience using a VR headset, and seven indicated that they had participated in a VR study before. The median self-reported VR proficiency level was 4 out of 7.
+
+### 5.3 Software and Hardware Instrument
+
+We used the Pimax 5K Plus for the study because it has a wider FoV than most commercially available headsets; its diagonal FoV is 200° [26]. The diagonal FoVs of other popular and widely available HMDs are: Oculus Rift DK1, 110° [5]; HTC Vive, 110° [5]; Microsoft HoloLens v1, 35° [27].
+
+During the training and the trials, each participant interacted with a VR interface implemented using Unity 2018.3.7f1 and SteamVR. The interface had a score in the top-left corner of the screen to keep the participants engaged. The targets were 3D spheres that participants could select by rotating their head to land the cursor onto the target (gaze cursor). The gaze cursor is a circle 1.2 m in front of the user with a size of 0.01 m, giving it an angular size of 0.57°. Based on the condition, the participant would operate inside a specific virtual environment and could avail themselves of one of the techniques to select out-of-view targets. We used R to analyze the data collected while participants performed the tasks, as well as the NASA-TLX raw scores and the questionnaire answers.
+
+
+
+Figure 5: A screenshot of the interface taken from the left-side screen of P15. The current environment is Mixed. The number shows the score that the participant had at that moment. The spheres represent the in-view targets. One of the targets is yellow because the participant was dwelling on it. The credits for the skybox photo and the 3D assets are available in Section 4.
+
+### 5.4 Questionnaire Instrument
+
+After completing each technique, each participant completed the NASA-TLX questionnaire (see Hart [11] for more information) and the following 7-point Likert-scale questions:
+
+- S1: The technique is overall effective for helping me to locate an object.
+
+- S2: I can immediately understand what the technique is telling me.
+
+- S3: The technique precisely tells me where the target is.
+
+- S4: The technique helps me to rapidly locate the target.
+
+- S5: The technique makes it difficult to concentrate.
+
+- S6: The technique can be startling.
+
+- S7: The technique gets my attention immediately.
+
+- S8: The technique gets in the way of the virtual scene.
+
+- S9: The technique makes me aware of the objects outside the FoV.
+
+- S10: The technique is uncomfortable to use.
+
+For each Likert-scale question, each participant would rate the statement from 1 to 7 with 1 being "completely disagree" and 7 being "completely agree."
+
+### 5.5 Procedure
+
+#### 5.5.1 Overview
+
+The steps were as follows: STEP 1 - The participant provided informed consent and completed the background questionnaire. STEP 2 - We then trained the participant to use one of the three techniques (bSOUS, fSOUS, FA) by asking them to select 10 out-of-view targets in the training environment while simultaneously trying to select as many in-view targets as possible. During the training, we primed the participant to prioritize selecting out-of-view targets. If the participant failed to become familiar with the technique, they would repeat the training trials. STEP 3 - The participant would complete the actual trials by selecting 20 out-of-view targets while simultaneously trying to select as many in-view targets as possible in one of the three environments (None, Hotel, Mixed). After the 20 trials were completed, we changed the environment. We repeated these steps until the participant had experienced all of the environments with the technique. The next section, Section 5.5.2, contains additional information on how a participant would complete a trial during the study. STEP 4 - Afterward, the participant completed a NASA-TLX instrument and the 10 Likert-scale questions. STEP 5 - The participant then repeated STEP 2 to STEP 4 until they had experienced all the techniques.
+
+
+
+Figure 6: a: When the head cursor lands on the target, the target turns yellow. The participant must dwell for 500 milliseconds to select it. b: When an out-of-view target is selected, it sparkles as in this screenshot. An in-view target simply fades away.
+
+Since there were 20 trials for each environment and each technique, a participant would have completed 3 × 3 × 20 = 180 trials. We used Latin squares to arrange the ordering of the techniques and the environments. Therefore, there were nine orders during the studies.
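+
+The nine orders can be generated by crossing a 3×3 Latin square of technique orders with a 3×3 Latin square of environment orders. The sketch below assumes cyclic Latin squares, which the paper does not specify:
+
+```python
+def latin_square(items):
+    """Build a cyclic Latin square: each row is a rotation of the
+    items, so every item appears once in every position."""
+    n = len(items)
+    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]
+
+technique_orders = latin_square(["bSOUS", "fSOUS", "FA"])
+environment_orders = latin_square(["None", "Hotel", "Mixed"])
+
+# Crossing the two squares yields the nine technique/environment
+# orderings to which participants could be assigned.
+orders = [(t, e) for t in technique_orders for e in environment_orders]
+```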
+
+#### 5.5.2 Completing a Trial
+
+The main task of the study involved target selection. Each participant selected a target via gaze selection by dwelling a cursor on the target for 500 milliseconds. There were two types of targets in the study: in-view and out-of-view. We considered the selection of an out-of-view target as a trial for our studies. While we asked our participants to select both types of targets, we also primed them to prioritize selecting out-of-view targets. We also told the participants that they would earn more points by selecting out-of-view targets, and that the targets could disappear before a successful selection. We made the targets disappear to encourage the participants to find them as quickly as possible.
+
+The in-view targets spawned in front of the participant (within 40° of the user's forward vector) every one to two seconds. The participant had one second to select such a target before it disappeared. These targets could appear in any direction. An in-view target was worth one point. The out-of-view targets spawned at least 80° away from the participant's forward vector, in any direction. Since these targets were further away, the participants had to use a technique to locate them. The spawning rate for this type of target was every 5.5 to 6.5 seconds. The participant had five seconds to select an out-of-view target after it spawned; since the out-of-view targets were further away in terms of angular distance, a longer time was required. This type of target was worth 10 points. Both types of targets had the same appearance before selection (a white sphere with an angular size of 7°). The only visual difference between an in-view and an out-of-view target was that the in-view target faded upon selection whereas the out-of-view target sparkled (Figure 6:b). We made the target appearances the same because we were controlling for visual salience.
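+
+One way to implement these spawning constraints is rejection sampling on the unit sphere. This is an illustrative sketch (the helper names are ours), not the study's implementation:
+
+```python
+import math
+import random
+
+def random_unit_vector(rng):
+    """Uniformly random direction via normalized Gaussian samples."""
+    while True:
+        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
+        n = math.sqrt(sum(c * c for c in v))
+        if n > 1e-9:
+            return [c / n for c in v]
+
+def spawn_direction(forward, min_angle_deg, max_angle_deg=180.0, rng=random):
+    """Sample a direction whose angle from `forward` (a unit vector)
+    lies within [min_angle_deg, max_angle_deg].  In-view targets would
+    use (0, 40); out-of-view targets (80, 180)."""
+    lo = math.cos(math.radians(max_angle_deg))  # cos is decreasing
+    hi = math.cos(math.radians(min_angle_deg))
+    while True:
+        d = random_unit_vector(rng)
+        dot = sum(a * b for a, b in zip(forward, d))
+        if lo <= dot <= hi:
+            return d
+```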
+
+We used the out-of-view target selection task to evaluate the performance and efficacy of the techniques. The in-view targets encouraged participants to return to the original orientation and dissuaded them from waiting for the next out-of-view target to appear. In our study, we considered an attempt to select an out-of-view target a trial. We considered a trial successful if the participant dwelled on the target long enough to trigger the selection animation. We considered a trial unsuccessful if the participant could not dwell on the target long enough to trigger the animation or could not locate the target. The trial completion time for a successful target selection was the duration from when the target first spawned until the participant landed the cursor on the target; this excludes the dwell time and the animation time.
+
+### 5.6 Results
+
+#### 5.6.1 Number of Unsuccessful Target Acquisitions
+
+To answer RQ1.1-1.2, we recorded the number of failed out-of-view target acquisitions, or failed trials, per participant. For the number of unsuccessful target selections, we used lme4 to fit a mixed logistic regression that predicted the probability of failure with the technique and the environment as factors, and the participant as a random effect. We then computed pseudo-$r^2$ values for the model using MuMIn, which implements an algorithm found in Nakagawa, Johnson, and Schielzeth [16].
+
+The fitted model, on the log-odds scale, was: $\operatorname{logit}(P(\text{Fail})) = -3.85 + 0.43 \times fSOUS + 0.00 \times bSOUS + 0.06 \times Hotel - 0.13 \times Mixed + 2.04 \times fSOUS{:}Hotel + 0.32 \times bSOUS{:}Hotel + 1.91 \times fSOUS{:}Mixed + 0.20 \times bSOUS{:}Mixed$. The coefficients, odds ratios (OR), standard errors (SE), and other information are summarized in Table 1. The $r^2$ values are as follows: theoretical marginal $= 0.06$, theoretical conditional $= 0.39$, delta marginal $= 0.06$, delta conditional $= 0.14$. The most important effect size for interpretation is the theoretical conditional $r^2$, since it represents the variance explained by the entire model including the random effect. At 0.39, it indicates that the techniques and the visual scenes had a moderate effect on the success of target selection. However, the theoretical marginal $r^2$, i.e., the $r^2$ that excludes the random effect of the participants, is only 0.06, meaning that there is a strong effect from the individual participants themselves.
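To make the logistic model concrete, the sketch below (ours, not from the paper's analysis scripts) converts the intercept and the fSOUS main effect from the log-odds scale to probabilities and an odds ratio; we assume FA and, by our reading, the None environment are the baseline levels of the dummy-coded factors.

```python
import math

# Sanity-check sketch (ours): mapping the reported log-odds coefficients to
# failure probabilities. We assume FA and (by our reading) the None
# environment are the baseline levels.

def inv_logit(x: float) -> float:
    """Convert a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

INTERCEPT = -3.85  # baseline log-odds of failing a trial
B_FSOUS = 0.43     # fSOUS main effect

p_baseline = inv_logit(INTERCEPT)          # ~0.021, matching the intercept OR of 0.02
p_fsous = inv_logit(INTERCEPT + B_FSOUS)   # ~0.032 in the baseline environment
or_fsous = math.exp(B_FSOUS)               # ~1.54, close to the reported OR of 1.53

print(round(p_baseline, 3), round(p_fsous, 3), round(or_fsous, 2))
```

The exponentiated coefficient recovers the tabulated odds ratio, which is a quick way to check a reported logistic model for internal consistency.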
+
+Based on Table 1, we found a strong interaction between the environments and fSOUS: the participants failed more often while using fSOUS in Hotel and Mixed. Since the participants $(n = 14)$ indicated during the interviews that they often found the fSOUS cue blending into the environment, we conclude that the faint nature of the cue led participants to lose sight of it and subsequently fail to select the targets.
+
+Regarding RQ1.1, we observed that target acquisition success differed between techniques, but this was also conditioned on the visual scene or environment, which ties directly to RQ1.2. We found that the visual scene could affect how the participants perceived the cue and, in turn, their success in target acquisition. Notably, although bSOUS and fSOUS have very similar cueing mechanisms, their target acquisition success differed considerably across environments.
+
+| | $\beta$ | OR | SE | Z | p |
+|---|---|---|---|---|---|
+| (Intercept) | -3.85 | 0.02 | 0.34 | -11.21 | 0.00* |
+| Techniques | | | | | |
+| fSOUS | 0.43 | 1.53 | 0.33 | 1.31 | 0.19 |
+| bSOUS | 0.00 | 1.00 | 0.35 | 0.00 | 1.00 |
+| Environments | | | | | |
+| Hotel | 0.06 | 1.06 | 0.35 | 0.18 | 0.86 |
+| Mixed | -0.13 | 0.88 | 0.36 | -0.37 | 0.71 |
+| Tech. : Env. | | | | | |
+| fSOUS:Hotel | 2.04 | 7.65 | 0.43 | 4.78 | 0.00* |
+| bSOUS:Hotel | 0.32 | 1.38 | 0.48 | 0.67 | 0.50 |
+| fSOUS:Mixed | 1.91 | 6.75 | 0.44 | 4.34 | 0.00* |
+| bSOUS:Mixed | 0.20 | 1.22 | 0.50 | 0.39 | 0.70 |
+
+Table 1: The summary of the coefficients, ORs, and other information for the mixed logistic regression predicting the probability of failing to acquire an out-of-view target based on the techniques and the environments. * signifies that $p \leq 0.05$.
+
+#### 5.6.2 Time for Target Acquisition
+
+In addition to the probability of target acquisition, we also considered the time of target acquisition as another important measure to answer RQ1.1-1.2. We measured the time the participants took to reach the targets in successful trials (i.e., excluding the 500 ms dwell time). We then fitted a mixed multiple linear regression model using the participants as a random effect. We studied the following variables: (1) the techniques, (2) the environments, and (3) the angular distance between the user's initial position and the target. Even though our main focus was on the techniques and the environments, we had to include distance because the out-of-view targets appeared at different angular distances from the participants. We did not have to consider Fitts's law for this study, because all targets had the same angular size.
+
+We did not normalize the data, following the suggestion of Schmidt and Finan [24], who argue that if the sample size is sufficiently large, normalization can introduce a statistical bias when fitting a linear model. The model we fitted using lme4 was: $Time = 2.19 - 0.14 \times bSOUS + 0.31 \times fSOUS - 0.06 \times None - 0.03 \times Mixed + 0.01 \times Dist + 0.09 \times bSOUS{:}None - 0.34 \times fSOUS{:}None + 0.07 \times bSOUS{:}Mixed + 0.02 \times fSOUS{:}Mixed$. The pseudo-$r^2$ values computed using MuMIn were: marginal $= 0.06$, conditional $= 0.25$. The conditional $r^2$ indicated that the model was moderately good at explaining the time required to reach a target. Table 2 shows the results of the tests on the coefficients. Although Fitts's law suggests that larger angular distances should increase the time to reach the target, the coefficient for angular distance $(\beta = 0.01, t(3930.34) = 13.62, p \leq 0.05)$ was small compared to the other coefficients. We found an interaction effect between the techniques and the environments; in particular, fSOUS was faster in None $(\beta = -0.34, t(3927.99) = -4.73, p \leq 0.05)$. The participants $(n = 14)$ indicated that the fSOUS cue blending into the more visually complex environments (Mixed, Hotel) made them slower. In terms of main effects, the techniques were statistically significant, with bSOUS the fastest, FA the second fastest, and fSOUS the slowest. The main effects of the environments were not statistically significant.
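To make the fitted time model concrete, here is a small sketch (ours, not from the paper; the indicator coding assumes FA and, by our reading, the Hotel environment as the baseline levels) evaluating the predicted time for a target $100^{\circ}$ away in the None environment:

```python
# Sketch (ours): evaluating the reported time model for a hypothetical trial.
# The indicator coding assumes FA and (by our reading) the Hotel environment
# as the baseline levels of the dummy-coded factors.

def predicted_time(bsous: int, fsous: int, none: int, mixed: int, dist: float) -> float:
    """Predicted time (s) to reach an out-of-view target, per the fitted model."""
    return (2.19
            - 0.14 * bsous + 0.31 * fsous
            - 0.06 * none - 0.03 * mixed
            + 0.01 * dist
            + 0.09 * bsous * none - 0.34 * fsous * none
            + 0.07 * bsous * mixed + 0.02 * fsous * mixed)

# A target 100 degrees away in the None environment:
t_fa = round(predicted_time(0, 0, 1, 0, 100), 2)     # 3.13 s
t_fsous = round(predicted_time(0, 1, 1, 0, 100), 2)  # 3.10 s: the fSOUS:None interaction helps
t_bsous = round(predicted_time(1, 0, 1, 0, 100), 2)  # 3.08 s: bSOUS fastest
print(t_fa, t_fsous, t_bsous)
```

This illustrates the interaction reported above: in the None environment, fSOUS's negative interaction term makes it competitive with FA, even though its main effect is the slowest overall.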
+
+Interestingly, some participants $(n = 4)$ indicated during the interviews that FA gave them more time to reach the target, when this was actually not the case. Some $(n = 4)$ felt that the speed of the cue influenced their own target acquisition speed, making them slower.
+
+| | $\beta$ | SE | df | t | p |
+|---|---|---|---|---|---|
+| (Intercept) | 2.19 | 0.10 | 57.73 | 22.58 | 0.00* |
+| Techniques | | | | | |
+| bSOUS | -0.14 | 0.05 | 3927.89 | -2.81 | 0.00* |
+| fSOUS | 0.31 | 0.05 | 3928.12 | 5.94 | 0.00* |
+| Environments | | | | | |
+| None | -0.06 | 0.05 | 3927.91 | -1.30 | 0.19 |
+| Mixed | -0.03 | 0.05 | 3927.91 | -0.66 | 0.51 |
+| Angular Distance | | | | | |
+| Dist | 0.01 | 0.00 | 3930.34 | 13.62 | 0.00* |
+| Tech. : Env. | | | | | |
+| bSOUS:None | 0.09 | 0.07 | 3927.88 | 1.36 | 0.17 |
+| fSOUS:None | -0.34 | 0.07 | 3927.99 | -4.73 | 0.00* |
+| bSOUS:Mixed | 0.07 | 0.07 | 3927.93 | 0.97 | 0.33 |
+| fSOUS:Mixed | 0.02 | 0.07 | 3928.04 | 0.28 | 0.78 |
+
+Table 2: The summary of coefficients and the results of the tests for the coefficients for the time required to reach out-of-view targets. We only considered successful trials for analysis. * signifies $p \leq {0.05}$ .
+
+#### 5.6.3 Cognitive Load
+
+To further answer RQ1.1, we collected and analyzed the participants' cognitive load after each technique using raw NASA-TLX scores. The median raw scores were as follows: FA $= 42$, fSOUS $= 70.5$, bSOUS $= 57.5$, suggesting that fSOUS and bSOUS induced higher cognitive load than FA. A repeated-measures ART ANOVA (using art) revealed significant differences between the techniques $(F(2, 18.63) = 7.40, p \leq 0.05)$. Pairwise comparisons with Tukey adjustment (using emmeans) showed significant differences between FA and fSOUS $(c = -21.0, t_{ratio}(46) = -6.0, p \leq 0.05)$ and between fSOUS and bSOUS $(c = 13.9, t_{ratio}(46) = 3.92, p \leq 0.05)$. The difference between FA and bSOUS was not statistically significant $(c = -7.1, t_{ratio}(46) = -2.03)$. The interview data from 16 participants suggested that fSOUS induced more cognitive load because the cue tended to blend into the environment, which forced them to simultaneously search for the cue and the target.
+
+#### 5.6.4 Questionnaire
+
+To answer RQ1.3, we administered a questionnaire after a participant finished using a technique.
+
+S1: Many of the participants indicated that FA $(Mdn = 7)$ was effective overall. fSOUS was considerably less effective $(Mdn = 4)$; however, Figure 10 shows that its Likert scores were distributed quite evenly, indicating mixed opinions among the participants. bSOUS $(Mdn = 6)$ was overall more effective than fSOUS, but slightly less effective than FA.
+
+S2: Many of the participants found FA and bSOUS to be very comprehensible $(Mdn: \mathrm{FA} = 7, \mathrm{bSOUS} = 6)$. Interestingly, on average, they found fSOUS to be less comprehensible $(Mdn = 4.5)$ than bSOUS even though it uses the same mechanism to convey the location of the out-of-view targets. The scores for fSOUS were somewhat evenly distributed (Figure 10), which indicates mixed opinions among the participants.
+
+S3: Many of the participants found FA and bSOUS to be very precise $(Mdn: \mathrm{FA} = 7, \mathrm{bSOUS} = 6)$. Interestingly, the participants overall found fSOUS $(Mdn = 4.5)$ to be less precise even though it had the same cueing mechanism as bSOUS.
+
+S4: Many of the participants found that FA and bSOUS helped them quickly acquire the out-of-view targets $(Mdn: \mathrm{FA} = 7, \mathrm{bSOUS} = 6)$. fSOUS was less effective $(Mdn = 3)$ even though it had the same cueing mechanism as bSOUS. However, some participants still found fSOUS to be effective.
+
+S5: The medians $(Mdn: \mathrm{FA} = 3.5, \mathrm{fSOUS} = 3, \mathrm{bSOUS} = 2)$ indicated that, overall, the three techniques did not negatively affect the participants' concentration. However, Figure 10 indicates that bSOUS performed best in this regard.
+
+
+Figure 7: The heatmap represents the frequencies of the responses for the Likert scale statements. Red means fewer participants and green means more participants. The numbers indicate the frequencies of responses.
+
+S6: The participants indicated that, on average $(Mdn: \mathrm{FA} = 4, \mathrm{fSOUS} = 3, \mathrm{bSOUS} = 3)$, all techniques were almost equally startling. We found this result interesting: we expected fSOUS to be the least startling because its cue was faint. Figure 10 indicates somewhat even distributions of scores for all three techniques. The median score for FA was somewhat surprising, as we expected the participants to find the technique more startling, because unlike SOUS, the FA cue could travel very close to the user or even through the user. However, during the interviews, only a few participants $(n = 3)$ indicated this to be an issue.
+
+S7: FA $\left( {{Mdn} = 7}\right)$ and bSOUS $\left( {{Mdn} = 6}\right)$ were similarly effective at grabbing the participants' attention whereas fSOUS $\left( {{Mdn} = 2}\right)$ was less effective. It was not surprising for FA to be more attention-grabbing than fSOUS as the FA cue initially appears in the user's foveal vision as opposed to their peripheral vision. However, we found FA and bSOUS's similar effectiveness to be surprising.
+
+S8: Most of the participants indicated that FA $(Mdn = 5)$ was more obstructive than fSOUS $(Mdn = 1)$ and bSOUS $(Mdn = 2.5)$. This result suggests that peripheral-based techniques help reduce visual obstruction.
+
+S9: Most of the participants indicated that FA $(Mdn = 7)$ and bSOUS $(Mdn = 6)$ were effective at making them aware of out-of-view targets. At a glance, fSOUS $(Mdn = 4.5)$ seemed not to make the participants aware of the out-of-view targets. However, Figure 10 indicates a bimodal distribution for fSOUS, meaning that some participants found that the technique helped them become aware of out-of-view targets whereas others did not.
+
+S10: FA $(Mdn = 2)$ and bSOUS $(Mdn = 3)$ were similar in terms of comfort, while fSOUS $(Mdn = 4)$ was slightly more uncomfortable to use. We noticed from Figure 10 that some participants found fSOUS very uncomfortable to use while others found it as comfortable as the other techniques.
+
+## 6 EXPERIMENT 2: COMPARING THE VARIANTS OF FLYINGARROW
+
+We performed the second experiment to observe whether we could improve FA using certain properties of bSOUS and fSOUS: both have a visually persistent cue whose position is relative to the user. We compared four variations of FA: FA-Arc-Trail, FA-Arc+Trail, FA+Arc-Trail, and FA+Arc+Trail. Whereas the FA cue in the first experiment was an arrow (Figure 2), the FA cue in the second experiment was a cone (Figure 3) to make its appearance more compatible with a trail.
+
+Experiment 2 was largely similar to the first one. Each participant used a technique in the three environments, completed a NASA-TLX questionnaire and 10 Likert-scale questions, completed an interview, and moved on to the next technique. After completing the first experiment, each participant proceeded directly to this one after a short break. We asked our participants to use the four variations in the three environments to select 20 out-of-view targets per condition, so each participant performed $4 \times 3 \times 20 = 240$ trials in total. As in the first experiment, we used a Latin square to arrange the conditions, yielding 12 orders of conditions. Since the participants were already familiar with the target selection task, we increased its difficulty by decreasing the angular size of the targets from $7^{\circ}$ to $5^{\circ}$. After completing this experiment, each participant received 15 Canadian dollars as compensation for their time across both experiments.
+
+### 6.1 Research Questions
+
+RQ2.1: Do the variations of FA have the same performance and induce the same cognitive load? If the variations differ in performance, they should show different probabilities of target acquisition failure and different speeds to reach the target; they may also induce different amounts of cognitive load.
+
+RQ2.2: Does each variation of FA interact differently with the environments? The slight modifications to the original FA technique may cause different interactions with the environment.
+
+RQ2.3: Does each variation of FA induce different target acquisition paths? In particular, does the orbital +Arc encourage different acquisition paths than the more direct standard FA?
+
+RQ2.4: Does each variation of FA leave a different subjective impression? Although FA appears to leave a good subjective impression in Gruenefeld et al. [8], we may be able to observe differences between the variations.
+
+### 6.2 Results
+
+#### 6.2.1 Number of Unsuccessful Target Acquisitions
+
+To answer RQ2.1-2.2, we fitted a mixed multiple logistic regression model predicting the probability of failing a trial, using the participants as a random effect, with lme4. We obtained the following model, on the log-odds scale: $\operatorname{logit}(P(\text{Fail})) = -5.04 + 0.96 \times Arc + 0.90 \times Trail + 0.00 \times None - 0.50 \times Mixed - 1.10 \times Arc{:}Trail + 0.22 \times Arc{:}None + 1.08 \times Arc{:}Mixed - 0.56 \times Trail{:}None + 0.21 \times Trail{:}Mixed + 0.26 \times Arc{:}Trail{:}None - 0.14 \times Arc{:}Trail{:}Mixed$. The tests for the coefficients are summarized in Table 3. The pseudo-$r^2$ values computed with MuMIn were: theoretical marginal $= 0.06$, theoretical conditional $= 0.43$, delta marginal $= 0.01$, and delta conditional $= 0.07$. The most important value is the theoretical conditional $r^2$, which represents the goodness of fit of all terms in the model including the random effect. In this case, the effect size was moderate.
+
+The main effects of +Arc $(Z = 2.22, p = 0.03, OR = 2.61)$ and +Trail $(Z = 2.06, p = 0.04, OR = 2.46)$ were statistically significant, meaning that +Arc and +Trail each increased the probability of missing the target. Meanwhile, the interaction between +Arc and +Trail was borderline statistically significant $(Z = -1.95, p = 0.051)$. This suggests that the combination of +Arc and +Trail was better than a variation with just one of the two behaviours. The environments did not affect performance for any variation of FA, meaning that despite the different cue trajectories and visual effects, the variations were relatively resistant to the visual complexity of the environments.
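The odds ratios above are simply the exponentiated log-odds coefficients; a quick sketch (ours) also shows why the negative interaction implies the combined variant raised the failure odds less than additively:

```python
import math

# Sketch (ours): odds ratios are the exponentiated log-odds coefficients.
# With the negative +Arc:+Trail interaction, the combined variant raises the
# odds of failure by less than the two main effects would suggest additively.
b_arc, b_trail, b_interaction = 0.96, 0.90, -1.10

or_arc = math.exp(b_arc)                             # ~2.61 (+Arc alone)
or_trail = math.exp(b_trail)                         # ~2.46 (+Trail alone)
or_both = math.exp(b_arc + b_trail + b_interaction)  # ~2.14 (+Arc and +Trail together)

print(round(or_arc, 2), round(or_trail, 2), round(or_both, 2))
```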
+
+| | $\beta$ | OR | SE | Z | p |
+|---|---|---|---|---|---|
+| (Intercept) | -5.04 | 0.01 | 0.49 | -10.39 | 0.00* |
+| +Arc | 0.96 | 2.61 | 0.43 | 2.22 | 0.03* |
+| +Trail | 0.90 | 2.45 | 0.44 | 2.06 | 0.04* |
+| Environments | | | | | |
+| None | 0.00 | 1.00 | 0.51 | 0.01 | 0.99 |
+| Mixed | -0.50 | 0.61 | 0.57 | -0.86 | 0.39 |
+| +Arc : +Trail | -1.10 | 0.33 | 0.56 | -1.95 | 0.05* |
+| +Arc : Env. | | | | | |
+| +Arc:None | 0.22 | 1.24 | 0.60 | 0.36 | 0.72 |
+| +Arc:Mixed | 1.08 | 2.94 | 0.65 | 1.65 | 0.10 |
+| +Trail : Env. | | | | | |
+| +Trail:None | -0.56 | 0.57 | 0.64 | -0.87 | 0.39 |
+| +Trail:Mixed | 0.21 | 1.24 | 0.68 | 0.31 | 0.76 |
+| +Arc : +Trail : Env. | | | | | |
+| +Arc:+Trail:None | 0.26 | 1.30 | 0.81 | 0.32 | 0.75 |
+| +Arc:+Trail:Mixed | -0.14 | 0.87 | 0.82 | -0.17 | 0.87 |
+
+Table 3: The coefficients, their associated ORs, and tests for the mixed multiple logistic regression predicting the probability of failing a trial. * signifies that $p \leq {0.05}$ .
+
+
+
+Figure 8: The average speed of target acquisition per condition. The unit is degrees per second. Darker green means faster target acquisition.
+
+#### 6.2.2 Time for Target Acquisition
+
+To answer RQ2.1-2.2, we fitted a mixed multiple linear regression model predicting the time for successful target selection using lme4: $Time = 2.17 + 0.38 \times Arc - 0.12 \times Trail - 0.08 \times None - 0.02 \times Mixed + 0.01 \times Dist + 0.17 \times Arc{:}Trail + 0.15 \times Arc{:}None - 0.01 \times Arc{:}Mixed + 0.03 \times Trail{:}None + 0.02 \times Trail{:}Mixed - 0.25 \times Arc{:}Trail{:}None - 0.03 \times Arc{:}Trail{:}Mixed$. Table 4 shows the results of the tests on the coefficients. The pseudo-$r^2$ values computed using MuMIn were: marginal $= 0.12$, conditional $= 0.36$; the conditional $r^2$ is considered moderate. While distance was statistically significant $(\beta = 0.01, t(5530.64) = 18.52, p \leq 0.05)$, it did not contribute much to the time required to reach the targets. We found that, in general, +Trail increased the speed of target acquisition while +Arc slowed the participants down. To better explain how +Trail and +Arc affected speed, we created a supplementary heatmap (Figure 8) showing the average speed of target acquisition per condition.
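A small sketch (ours, not from the paper; the indicator coding assumes the -Arc-Trail variant and, by our reading, the Hotel environment as baselines) makes the opposing main effects visible by evaluating the fitted model for each variant:

```python
# Sketch (ours): evaluating the reported Experiment 2 time model for the four
# FA variants at a fixed angular distance of 100 degrees, in the baseline
# environment. The indicator coding is our assumption.

def predicted_time(arc: int, trail: int, none: int, mixed: int, dist: float) -> float:
    """Predicted time (s) to reach an out-of-view target, per the fitted model."""
    return (2.17
            + 0.38 * arc - 0.12 * trail
            - 0.08 * none - 0.02 * mixed
            + 0.01 * dist
            + 0.17 * arc * trail
            + 0.15 * arc * none - 0.01 * arc * mixed
            + 0.03 * trail * none + 0.02 * trail * mixed
            - 0.25 * arc * trail * none - 0.03 * arc * trail * mixed)

times = {
    "FA-Arc-Trail": round(predicted_time(0, 0, 0, 0, 100), 2),  # ~3.17 s
    "FA-Arc+Trail": round(predicted_time(0, 1, 0, 0, 100), 2),  # ~3.05 s: +Trail is faster
    "FA+Arc-Trail": round(predicted_time(1, 0, 0, 0, 100), 2),  # ~3.55 s: +Arc is slower
    "FA+Arc+Trail": round(predicted_time(1, 1, 0, 0, 100), 2),  # ~3.60 s
}
print(times)
```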
+
+| | $\beta$ | SE | df | t | p |
+|---|---|---|---|---|---|
+| (Intercept) | 2.17 | 0.09 | 40.11 | 23.16 | 0.00* |
+| +Arc | 0.38 | 0.04 | 5527.98 | 8.70 | 0.00* |
+| +Trail | -0.12 | 0.04 | 5527.99 | -2.89 | 0.00* |
+| Environments | | | | | |
+| None | -0.08 | 0.04 | 5527.99 | -1.76 | 0.08 |
+| Mixed | -0.02 | 0.04 | 5527.98 | -0.50 | 0.62 |
+| Angular Distance (Dist) | 0.01 | 0.00 | 5530.64 | 18.52 | 0.00* |
+| +Arc : +Trail | 0.17 | 0.06 | 5527.98 | 2.72 | 0.01* |
+| +Arc : Env. | | | | | |
+| +Arc:None | 0.15 | 0.06 | 5527.98 | 2.42 | 0.02* |
+| +Arc:Mixed | -0.01 | 0.06 | 5527.98 | -0.20 | 0.84 |
+| +Trail : Env. | | | | | |
+| +Trail:None | 0.03 | 0.06 | 5527.98 | 0.45 | 0.65 |
+| +Trail:Mixed | 0.02 | 0.06 | 5527.98 | 0.30 | 0.77 |
+| +Arc : +Trail : Env. | | | | | |
+| +Arc:+Trail:None | -0.25 | 0.09 | 5527.98 | -2.92 | 0.00* |
+| +Arc:+Trail:Mixed | -0.03 | 0.09 | 5527.99 | -0.33 | 0.74 |
+
+Table 4: The summary of coefficients and the results of the tests for the coefficients for the time required to reach out-of-view targets. We only considered successful trials for analysis. * signifies $p \leq {0.05}$ .
+
+#### 6.2.3 Straightness
+
+To answer RQ2.3, we performed a three-way repeated-measures ANOVA (using lme4) on the normalized $d$ of successful trials, using bestNormalize for the normalization. The factors were +Arc, +Trail, and the environments. The significant effects were +Arc $(F(1, 5529.2) = 81.58, p \leq 0.05)$ and +Trail $(F(1, 5529.0) = 13.55, p \leq 0.05)$. The test for the environments $(F(1, 5529.0) = 1.74)$ was not statistically significant, nor were the tests for the interactions (+Arc:+Trail $- F(1, 5529.0) = 0.32$; +Arc:Environment $- F(2, 5528.9) = 0.74$; +Trail:Environment $- F(2, 5528.9) = 1.02$; +Arc:+Trail:Environment $- F(2, 5528.8) = 1.32$). The subsequent post-hoc tests with emmeans on +Arc $(c = 0.23, Z_{ratio} = 9.03, p \leq 0.05)$ and +Trail $(c = -0.10, Z_{ratio} = -3.68, p \leq 0.05)$ were statistically significant. Figure 9 shows an interaction plot of the results. We observed that +Arc made target acquisition trajectories more circuitous, while +Trail made them more direct.
+
+
+
+Figure 9: An interaction plot representing the tests of normalized $d$ in the second experiment. env = Environment. Because the y-axis represents normalized $d$, the plot does not convey descriptive statistics; rather, it is meant to help interpret the ANOVA results in Section 6.2.3.
+
+#### 6.2.4 Cognitive Load
+
+To further answer RQ2.1, we analyzed the cognitive load collected with the NASA-TLX questionnaire. After a participant completed a technique in all environments, we collected their raw NASA-TLX score. The median scores were as follows: FA-Arc-Trail: 54, FA-Arc+Trail: 57, FA+Arc-Trail: 48, FA+Arc+Trail: 57. Using a repeated-measures ART ANOVA with art, we found that the interaction between +Arc and +Trail was not statistically significant $(F(1, 69) = 0.82)$, and neither were the main effects: +Arc $(F(1, 69) = 0.11)$ and +Trail $(F(1, 69) = 0.92)$. Despite these results, we could not conclude that the techniques induced roughly the same amount of cognitive load, because the median for FA+Arc-Trail was noticeably lower than those for the other variations.
+
+#### 6.2.5 Questionnaire
+
+Overall, apart from a few statements (for example, S10), the participants did not indicate much difference between the variations. Observing Figure 10, we note that the distributions of the scores tend to centre around higher values or are somewhat uniformly distributed. Therefore, the questionnaire results did not provide a clear answer to RQ2.4 except in a few cases.
+
+S1: On average, participants indicated that all variations were almost as effective as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S2: On average, participants indicated that all variations were as understandable as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+
+
+Figure 10: The heatmap represents the frequencies of the responses for the Likert scale statements. Red means less participants and green means more participants. The numbers indicate the frequencies of responses.
+
+S3: On average, participants indicated that all techniques were as precise as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S4: The participants, on average, indicated that the +Arc techniques were slightly less helpful (Mdn: FA+Arc-Trail = 5, FA+Arc+Trail = 5) and that the techniques whose cue traveled in a straight line were more effective (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 6).
+
+S5: The participants found it most difficult to concentrate with FA-Arc+Trail $(Mdn = 4.5)$, followed by FA-Arc-Trail $(Mdn = 3.5)$ and FA+Arc+Trail $(Mdn = 3)$. The best technique in this regard was FA+Arc-Trail $(Mdn = 2)$. The interview data indicated that the trail might have made it difficult to concentrate; five participants said so during the interview for FA-Arc+Trail, and three for FA+Arc+Trail. We believe that +Arc might make it somewhat easier to concentrate, because the cue never got close to the participants.
+
+S6: FA-Arc+Trail was the most startling variation $(Mdn = 5)$, closely followed by FA-Arc-Trail $(Mdn = 4)$ and FA+Arc-Trail $(Mdn = 3)$. The least startling variation was FA+Arc+Trail $(Mdn = 2)$.
+
+S7: On average, participants indicated that all techniques were almost equally effective at getting their attention (Mdn: FA-Arc-Trail = 6, FA-Arc+Trail = 7, FA+Arc-Trail = 6.5, FA+Arc+Trail = 6).
+
+S8: We found the +Trail techniques (Mdn: FA-Arc+Trail = 6.5, FA+Arc+Trail = 5) to be more obstructive than their -Trail counterparts (Mdn: FA-Arc-Trail = 4.5, FA+Arc-Trail = 4).
+
+S9: On average, participants indicated that all techniques were almost equally effective at making them aware of objects outside the FoV (Mdn: FA-Arc-Trail = 6, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S10: On average, participants indicated FA+Arc+Trail to be the most comfortable $(Mdn = 2)$, followed by FA-Arc-Trail $(Mdn = 2.5)$. The third best technique was FA+Arc-Trail $(Mdn = 3)$ and the worst was FA-Arc+Trail $(Mdn = 4)$. We think the arc trajectory made the trail more comfortable to use, because the trail never came close to the participants.
+
+## 7 DISCUSSION
+
+### 7.1 Experiment 1
+
+In Experiment 1, the results indicated that bSOUS is a viable technique: it is faster and less obstructive than FA. This also highlights the benefit of a technique whose interface lies solely inside the user's peripheral vision. Since the cue is always inside the user's peripheral vision, we do not need to worry about positioning the cue as with FA or 3D Wedge. Although the speed improvement from SOUS is small, we still think it matters for certain scenarios, such as competitive gaming, where any improvement is important.
+
+We found that the effectiveness of fSOUS is very sensitive to the visual complexity of the environment. Although fSOUS can help the user locate targets quickly in a less visually complex environment, it can end up hindering the user in more complex ones. This contradicts other work that has successfully used faint cues to guide the user, such as Rothe and Hußmann [21, 22] and McNamara et al. [15]. One notable difference between our work and theirs is that fSOUS stays strictly inside the user's far-peripheral vision, while those techniques place cues closer to the user's macular vision. Our results are more in line with other findings (such as [14, 19]) showing that discerning details in far-peripheral vision is difficult. Therefore, we conclude that while a faint technique can be effective, it tends to interact negatively with the visual scene and becomes less effective in the peripheral vision. We recommend using a faint cue only in scenarios where it is not restricted to the user's peripheral vision; if the cue must stay within the peripheral vision, the scene must not be visually complex. We also suggest using a faint cue in low-stakes tasks where successful target acquisition is not paramount to the task as a whole: for example, a user exploring a virtual museum at their own pace does not depend on interacting with the virtual museum pieces.
+
+Gruenefeld et al. [8] argue that FA's low usability (as measured by the System Usability Scale) may explain why it is slower than EyeSee360, another technique in their study. However, our study suggests an alternative explanation for the relatively lower speed of FA. Some participants in our study believed that FA gave them a specific timeframe to complete the task, while some said that the cue influenced their speed of target acquisition. Therefore, we argue that the true strength of FA is not maximizing the speed of target acquisition but limiting it. Still, how well FA can control the speed will depend on many factors; for example, FA may be less effective at controlling speed if we prime the user to ignore the speed of the cue.
+
+### 7.2 Experiment 2
+
+While +Arc made FA less obtrusive and thus more similar to SOUS, we did not observe a speed increase like SOUS's; instead, +Arc slowed the participants down even further. FA also did not become more comfortable to use despite the reduced visual obstruction. However, we believe comfort could be increased by having the cue adjust its own trajectory, for example by taking a path around the user instead of above the user to reach the target. Interestingly, while the first experiment's implementation of FA behaved consistently across all environments, +Arc made FA more sensitive to the visual complexity of the environment in terms of speed, somewhat like fSOUS. Further investigation is required to find the reason behind this increased sensitivity.
+
++Trail improved the speed of target acquisition through the increased persistence of the cue. However, we also found the participants to be less successful at acquiring out-of-view targets. The interview data revealed that the participants did not find the trail comfortable to use: they suggested that the trail should be smaller and more translucent so they could better see the surroundings. Based on bSOUS and fSOUS being faster than FA in the first experiment and the improvement brought by +Trail, we suggest that a more visible cue may reduce target acquisition time and make target acquisition trajectories more direct.
+
+Overall, this study suggests that placing an interface completely inside the user's far-peripheral vision provides the best balance between obtrusiveness and visual persistence. If the user has to use a low-FoV HMD incapable of displaying beyond the user's mid-peripheral vision, something similar to +Trail may be useful to increase the speed of target acquisition. However, one must be aware that a +Trail technique requires fine-tuning of the cue to ensure an optimal experience.
+
+### 7.3 What Is a Good Technique?
+
+Upon analyzing our results, it appears that each technique could be useful in different contexts. However, as reflected in our research questions, we maintain that our set of desirable characteristics of off-screen target cuing holds in most cases: a technique should (1) make the user more successful at target acquisition while inducing low cognitive load at the "optimal" speed, (2) not be impacted by the visual complexity of a scene, and (3) provide a good subjective user experience. It is important to note that "optimal" speed does not always mean "fastest." Rather, it is the speed that leads to the best user experience. For example, in competitive gaming, the highest speed is likely better, whereas in a VR museum exhibit a slower speed might be desirable to allow the user to observe the environment along a trajectory. We think that bSOUS and fSOUS are appropriate for maximizing speed, whereas FA may be viable for reducing and controlling it.
+
+## 8 CONCLUSION
+
+We conducted a two-part study in which participants selected out-of-view targets with the aid of visual cuing techniques. In the first experiment, we found that bSOUS and fSOUS performed well compared to FA. However, fSOUS has a significant weakness: its cue tends not to be sufficiently salient against the environments. Overall, the first experiment demonstrates that a technique whose interface lies completely inside the user's far-peripheral vision can be effective. Although SOUS has a simple and straightforward design, far-peripheral techniques like SOUS were impossible to evaluate until recently due to the limited FoV of commodity HMDs. In the second experiment, we modified FA to make cue trajectories travel relative to the user (+Arc) and to make the cue more noticeable (+Trail), so that FA could be less obtrusive and more persistent like SOUS. Overall, +Arc decreased the effectiveness of FA, while +Trail increased the speed of the participants but reduced their chance of acquiring the target. This means that decreasing obtrusiveness does not necessarily lead to desirable behaviour, whereas increasing persistence can reduce target acquisition time. We suggest that a technique that exclusively uses the user's far-peripheral vision offers the best balance between obtrusiveness and persistence. However, if an HMD cannot effectively display beyond the user's mid-peripheral vision, something akin to +Trail may be useful. Our study shows that there is no one-size-fits-all technique: when designing a technique or modifying an existing one, we must consider multiple competing factors.
+
+## REFERENCES
+
+[1] R. Ball and C. North. The effects of peripheral vision and physical navigation on large scale visualization. In Proceedings of Graphics Interface 2008, GI '08, p. 9-16. Canadian Information Processing Society, CAN, 2008.
+
+[2] L. Bartram, C. Ware, and T. Calvert. Moving icons: Detection and distraction. Proceedings of the 8th IFIP TC 13 International Conference on Human-Computer Interaction: Part II (INTERACT '01), pp. 157-166, July 2001.
+
+[3] P. Baudisch and R. Rosenholtz. Halo: a technique for visualizing off-screen objects. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 481-488, April 2003. doi: 10.1145/642611.642695
+
+[4] F. Bork, C. Schnelzer, U. Eck, and N. Navab. Towards efficient visual guidance in limited field-of-view head-mounted displays. IEEE Transactions on Visualization and Computer Graphics, 24(11):2983-2992, November 2018. doi: 10.1109/TVCG.2018.2868584
+
+[5] L. E. Buck, M. K. Young, and B. Bodenheimer. A comparison of distance estimation in hmd-based virtual environments with different hmd-based conditions. ACM Trans. Appl. Percept., 15(3), July 2018. doi: 10.1145/3196885
+
+[6] S. Burigat, L. Chittaro, and S. Gabrielli. Visualizing locations of off-screen objects on mobile devices: a comparative evaluation of three approaches. MobileHCI, pp. 239-246, September 2006. doi: 10.1145/1152215.1152266
+
+[7] U. Gruenefeld, D. Ennenga, A. E. Ali, W. Heuten, and S. Boll. EyeSee360: designing a visualization technique for out-of-view objects in head-mounted augmented reality. Proceedings of the 5th Symposium on Spatial User Interaction - SUI '17, pp. 109-118, October 2017. doi: 10.1145/3131277.3132175
+
+[8] U. Gruenefeld, D. Lange, L. Hammer, S. Boll, and W. Heuten. FlyingARrow: Pointing Towards Out-of-View Objects on Augmented Reality Devices. In Proceedings of the 7th ACM International Symposium on Pervasive Displays - PerDis '18, pp. 1-6. ACM Press, New York, New York, USA, April 2018. doi: 10.1145/3205873.3205881
+
+[9] U. Gruenefeld, D. Lange, and S. Weiß. Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, March 2019.
+
+[10] S. Gustafson, P. Baudisch, C. Gutwin, and P. Irani. Wedge: Clutter-Free Visualization of Off-Screen Locations. Proceedings of the twenty-sixth annual CHI conference on Human factors in computing systems - CHI '08, pp. 787-796, April 2008. doi: 10.1145/1357054.1357179
+
+[11] S. G. Hart. NASA-task load index (NASA-TLX); 20 years later. Human Factors and Ergonomics Society Annual Meeting, pp. 904-908, October 2006. doi: 10.1037/e577632012-009
+
+[12] V. Heun, A. von Kapri, and P. Maes. Perifoveal display: Combining foveal and peripheral vision in one visualization. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, UbiComp '12, p. 1150-1155. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2370216.2370460
+
+[13] M. Kattenbeck. Empirically measuring salience of objects for use in pedestrian navigation. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL '15. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2820783.2820820
+
+[14] K. Luyten, D. Degraen, G. Rovelo R., S. Coppers, and D. Vanacken. Hidden in Plain Sight: An Exploration of a Visual Language for Near-Eye Out-of-Focus Displays in the Peripheral View. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16, pp. 487-497, May 2016. doi: 10.1145/2858036.2858339
+
+[15] A. McNamara, R. Bailey, and C. Grimm. Search task performance using subtle gaze direction with the presence of distractions. ACM Trans. Appl. Percept., 6(3), Sept. 2009. doi: 10.1145/1577755.1577760
+
+[16] S. Nakagawa, P. C. D. Johnson, and H. Schielzeth. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. Journal of The Royal Society Interface, 14(134):20170213, September 2017. doi: 10.1098/rsif.2017.0213
+
+[17] M. B. Neider and G. J. Zelinsky. Searching for camouflaged targets: Effects of target-background similarity on visual search. Vision Research, 46(14):2217-2235, July 2006. doi: 10.1016/j.visres.2006.01.006
+
+[18] L. Qian, A. Plopski, N. Navab, and P. Kazanzides. Restoring the awareness in the occluded visual field for optical see-through head-mounted displays. IEEE Transactions on Visualization and Computer Graphics, 24(11):2936-2946, November 2018. doi: 10.1109/TVCG.2018.2868559
+
+[19] R. Rosenholtz. What your visual system sees where you are not looking. In B. E. Rogowitz and T. N. Pappas, eds., SPIE Human Vision and Electronic Imaging, vol. 16, February 2011. doi: 10.1117/12.876659
+
+[20] R. Rosenholtz, L. Nakano, and Y. Li. Measuring visual clutter. Journal of Vision, 7(2):1-22, August 2007. doi: 10.1167/7.2.17
+
+[21] S. Rothe and H. Hußmann. Guiding the viewer in cinematic virtual reality by diegetic cues. In L. T. De Paolis and P. Bourdot, eds., Augmented Reality, Virtual Reality, and Computer Graphics, pp. 101-117. Springer International Publishing, November 2018.
+
+[22] S. Rothe, H. Hußmann, and M. Allary. Diegetic cues for guiding the viewer in cinematic virtual reality. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST '17. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3139131.3143421
+
+[23] T. Schinke, N. Henze, and S. Boll. Visualization of off-screen objects in mobile augmented reality. Proceedings of the 12th international conference on Human computer interaction with mobile devices and services - MobileHCI '10, p. 313, January 2010. doi: 10.1145/1851600.1851655
+
+[24] A. F. Schmidt and C. Finan. Linear regression and the normality assumption. Journal of Clinical Epidemiology, 98:146-151, 2018. doi: 10.1016/j.jclinepi.2017.12.006
+
+[25] M. J. Simpson. Mini-review: Far peripheral vision. Vision Research, 140:96-105, November 2017. doi: 10.1016/j.visres.2017.08.001
+
+[26] H. H. Solum. Readability in virtual reality, an investigation into displaying text in a virtual environment. Master's thesis, Norwegian University of Science and Technology, June 2019.
+
+[27] C. Trepkowski, D. Eibich, J. Maiero, A. Marquardt, E. Kruijff, and S. Feiner. The effect of narrow field of view and information density on visual search performance in augmented reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 575-584, March 2019. doi: 10.1109/VR.2019.8798312
+
+[28] R. Xiao and H. Benko. Augmenting the field-of-view of head-mounted displays with sparse peripheral displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 1221-1232. Association for Computing Machinery, New York, NY, USA, May 2016. doi: 10.1145/2858036.2858212
+
+[29] D. Yu, H.-N. Liang, K. Fan, H. Zhang, C. Fleming, and K. Papangelis. Design and evaluation of visualization techniques of off-screen and occluded targets in virtual reality environments. IEEE Transactions on Visualization and Computer Graphics, March 2019. doi: 10.1109/ TVCG.2019.2905580
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0e91716f17f50fc0656d30c0e9066004a807cbb7
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1S3TXjkEVmH/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,557 @@
+§ A COMPARATIVE EVALUATION OF TECHNIQUES FOR LOCATING OUT-OF-VIEW TARGETS IN VIRTUAL REALITY
+
+Category: Research
+
+§ ABSTRACT
+
+In this work, we present the design and comparative evaluation of techniques for increasing awareness of out-of-view targets in virtual reality. We first compare two variants of SOUS, a technique that guides the user to out-of-view targets using circle cues in their peripheral vision, with the existing FlyingARrow technique, in which arrows fly from the user's central (foveal) vision toward the target. fSOUS, a variant with low visual salience, performed well in a simple environment but not in visually complex environments, while bSOUS, a visually salient variant, yielded faster target selection than both fSOUS and FlyingARrow across all environments. We then compare hybrid techniques in which aspects of SOUS relating to unobtrusiveness and visual persistence were reflected in design modifications made to FlyingARrow. Increasing persistence by adding trails to arrows improved performance, but there were concerns about obtrusiveness, while other modifications yielded slower and less accurate target acquisition. Nevertheless, since fSOUS and bSOUS are exclusively for head-mounted displays with a wide field of view, FlyingARrow with a trail can still be beneficial for devices with a limited field of view.
+
+Index Terms: Human-centered computing-Virtual Reality
+
+§ 1 INTRODUCTION
+
+Locating and selecting out-of-view targets without prior knowledge of their positions is a demanding task, particularly in a virtual reality (VR) environment displayed on a commodity head-mounted display (HMD) with a limited field of view (FoV) [7]. However, we can also use VR to augment our field of view with artificial visual cues that assist us in finding and selecting the target (e.g., [7, 8, 29]). Building on prior work employing visual effects in peripheral vision to enhance awareness of off-screen objects or actions [18, 28], and taking advantage of the recent emergence of commodity HMDs with wide FoV, we designed Sign-of-the-UnSeen (SOUS) to allow a user to become aware of and then acquire out-of-view objects. The cue strictly resides within the user's peripheral vision and moves radially around the user's gaze to indicate the position of an out-of-view target. To explore the extent to which SOUS can be unobtrusive to the VR scene while remaining useful, we compare two variants: bold SOUS (bSOUS), an opaque circular cue, and faint SOUS (fSOUS), a transparent circular cue. Figure 1 contains screenshots of the techniques in action.
+
+We conducted two experiments. In the first experiment, we compared bSOUS and fSOUS with a modified version of FlyingARrow (FA) [8]. FA consists of a 3D arrow that flies toward a target. In the experiment, participants selected targets placed in their central vision and periodically acquired off-screen targets indicated by one of the three techniques. Participants acquired off-screen targets faster using bSOUS and fSOUS than with FA, as many participants would visually track the arrow to its destination. While the increase in performance is small, it is significant in contexts such as competitive gaming, where any performance increase can lead to a victory. Participants sometimes did not notice target cueing with fSOUS in visually complex environments, while bSOUS and FA were more robust to varied detail and changes in the environment. Despite FA being the slowest technique, it received higher overall subjective ratings. While bSOUS compared well to FA in these ratings, many participants found fSOUS frustrating to use.
+
+By being placed in the user's peripheral vision, SOUS is less obstructive than FA, as it does not block the scene in the user's central vision. In addition, while FA arrows move through the scene in the 3D environment, SOUS cues are presented on an invisible layer in front of the scene (as with a heads-up display, or HUD): this may further help in distinguishing SOUS as a cue rather than another object in the scene. SOUS is also more visually persistent than FA, whose arrow flies outside the user's FoV if not followed. If it were possible to provide these qualities to FA, this could be beneficial for HMDs with limited FoVs that are unable to display a technique requiring the use of far-peripheral vision, like SOUS. We therefore extended FlyingARrow (FA) [8] with two behaviours: +Arc and +Trail. +Arc attempts to make FA less obstructive by making the cue orbit around the user at a set distance, keeping it out of the way of on-screen targets and making it more distinctive as a cue rather than an object in the scene. +Trail makes FA emit a trail, making it more visually persistent: if the user loses sight of the arrow, the trail lingers, allowing the user to remain aware of the target's direction.
+
+In a second experiment, we compared FA, FA+Arc, FA+Trail, and FA+Arc+Trail. This experiment involved the same combination of selecting on-screen and off-screen targets. We found that +Arc slowed down participants and did not make the technique more comfortable to use. While participants found +Arc to be slightly less obtrusive, it was also less robust to visual complexity, suggesting that the intended increase in visual distinctiveness was not achieved. Despite participants stating that +Trail was more obtrusive, including a trail improved the speed of target acquisition.
+
+In the remainder of this paper, we discuss related work in visual perception and techniques for off-screen target awareness and acquisition, detail the SOUS and FA designs, describe our experiments, and present results. After this, we discuss the implications of our findings for the design of cues for off-screen objects in VR.
+
+§ 2 RELATED WORK
+
+§ 2.1 VISUAL PERCEPTION
+
+Prior literature [14, 19] suggests that the shape of cues in peripheral vision should be simple, because it is difficult to distinguish complex shapes in this region. Luyten et al. [14] performed an experiment in which each participant wore a pair of glasses. On each side of the glasses there was a colour screen that could display a shape, positioned almost 90° away from the foveal center of vision. They found that the participants could recognize that the shapes were different but had difficulty recognizing composite shapes. While our ability to distinguish shapes is reduced in peripheral vision [19], this region is adapted to notice sudden changes [12]. Work by Bartram et al. [2] suggests that animation can further enhance awareness of new objects in peripheral vision, while Luyten et al. [14] found that blinking is effective in the peripheral region for notifying the user of changes.
+
+Ball and North [1] conducted a study to investigate why users have better target acquisition performance on larger screens, finding that the improved performance was due to peripheral awareness of content that they could rapidly focus on. Although bSOUS and fSOUS are VR techniques, they build on Ball and North's observations by making use of the user's peripheral vision and providing support for rapidly locating the indicated off-screen target.
+
+Visual cues can interact with other objects in the visual field, impacting their ability to capture attention. According to Rosenholtz et al. [20], whether we will find a target depends on the visual salience of the environment. Salience indicates how different the target is from its environment; for example, a low-salience target tends to have a visual appearance similar to the environment, blending in with it. Experiments by Neider and Zelinsky [17] support this by demonstrating that it is more difficult for a person to find a target if the background resembles the target. Additionally, Rosenholtz et al. [20] and Rosenholtz [19] present mathematical formulae to quantify visual salience, but these are designed for static images projected onto 2D space and for scenarios without animation. Therefore, they do not directly translate to dynamic VR environments and are not used in our study. More recent work has explored machine learning methods to model visual salience (e.g., [13]); while such techniques are promising ways to measure salience in a given scene, no standard has been established for using these methods to generate scenes with desirable salience attributes in controlled experiments.
+
+Cues with high visual salience tend to be more effective, but such cues are not always appropriate. For example, in cinematic viewing, users may prefer subtler cues to avoid obstruction and distraction. McNamara et al. [15] designed a study in which part of a 2D screen was subtly modulated to guide the participants toward a certain area, and showed the modulation to have some efficacy. Later, Rothe and Hußmann [21, 22] conducted an experiment in which they used spotlights to subtly guide the user to a target and found them to be effective. We created fSOUS as a subtler way to guide the user to out-of-view targets. However, unlike the cues explored in these prior studies [15, 21, 22], our cue strictly resides in the user's peripheral vision.
+
+§ 2.2 EXISTING TECHNIQUES
+
+Some existing techniques for guiding the user to out-of-view targets, such as EyeSee360 [7], 3D Wedge [29], and Mirror Ball [4], have roots in an earlier technique called Halo [3]. Halo provides cues for off-screen targets on small-screen handheld devices. Halo uses circles to represent the targets, with sections of the circles rendered at the edge of the device. The position and size of a circle indicate a target's position relative to the area displayed on the device. Halo was compared with a technique called Arrow, which uses arrows pointing toward the targets labelled with the target's distance; Halo was better at indicating both position and distance in their tests. Burigat et al. [6] compared Halo to a variant of Arrow in which the length of an arrow indicated distance. This allowed participants to more easily rank the distances of the targets, but it fared worse than Halo at indicating the actual target distances. Schinke et al. [23] developed a handheld Augmented Reality (AR) version of Arrow in which 3D arrows point toward AR targets located some distance from the viewer (and often off-screen). The user then uses the device to guide themselves toward the targets. Their evaluation showed the technique to work better than Radar, a technique that provides a simplified overhead view of the area.
+
+Gruenefeld et al. [7] note that many AR HMDs, such as the Microsoft HoloLens v1, suffer from limited screen real estate (much like handhelds) and limited FoV. They introduced EyeSee360, an overview technique that allows the user to see out-of-view targets by representing them as dots on a grid. A dot's position on the grid indicates the target's orientation and distance relative to the viewer. EyeSee360 is a visually obtrusive technique; as such, Gruenefeld et al. [9] suggest that the user should be able to set the cue's visibility on an "on-demand" basis. They compared EyeSee360 against Halo, Wedge (a variant of Halo that uses acute isosceles triangles [10]), and Arrow, and found it to be the best-performing technique. Gruenefeld et al. [8] later developed FlyingARrow (FA), an animated variant of Arrow, which they found to be slower than EyeSee360. Other overview techniques explored in the literature include Radar and 3D Radar [9], and Mirror Ball [4], which presents a distorted view of the surroundings rendered as a ball.
+
+Yu et al. [29] proposed a 3D variant of Wedge for use in VR to indicate the relative position and distance of targets. Unlike the original Wedge, the cue for 3D Wedge appears in front of the user instead of around the edges of the screen. Each wedge is also a 3D pyramid whose base points toward the target and whose size indicates the distance. The researchers found that 3D Wedge was more effective at finding targets than overview techniques such as Radar, except when there were many targets. They improved 3D Wedge by embedding an arrow pointing toward the target inside each pyramid.
+
+Unlike the techniques covered so far, which focus on target acquisition, Xiao and Benko [28] and Qian et al. [18] implemented techniques for increasing the user's awareness of off-screen objects without requiring the user to select them. Xiao and Benko [28] added a small LED light grid around the main HMD display. Although the grid had low resolution, it was sufficient for the user to glean additional information in their peripheral vision based on colour and lighting changes. Qian et al. [18] used a similar approach for object awareness specifically: when there is an object close to the user, a corresponding area of the screen's edges lights up. Their evaluation found that this allowed users to notice off-screen targets.
+
+§ 3 TECHNIQUES
+
+§ 3.1 BSOUS AND FSOUS
+
+Figure 1: a: fSOUS. b: bSOUS.
+
+Sign of the UnSeen (SOUS) is a family of peripheral-vision-based techniques that includes bSOUS and fSOUS. When a target of interest appears off-screen, a SOUS cue appears in the user's peripheral vision. The cue moves radially based on the user's position relative to the target. For example, if the target is slightly above and to the left of the user, the cue will appear on the left side, rotated slightly upward around the user's forward gaze cursor. Although we would like the cue to be as far as possible from the user's foveal vision, no commercially available VR headset currently encompasses the full human visual field (about 105° from the center of foveal vision [25]). Nevertheless, some commercial headsets (e.g., the Pimax 5K Plus used in this study) can display what is considered the far periphery. As such, the SOUS cue is located around 60° from the center, which is within peripheral vision [25] and displayable by commercially available VR headsets.
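
The radial placement described above can be sketched as follows: the direction to the target is projected onto the plane perpendicular to the forward gaze, and the resulting angle positions the cue at its fixed 60° eccentricity. This is a minimal illustrative sketch in Python, not the study's Unity implementation; the function names and vector conventions are our own assumptions.

```python
import math

def dot(a, b):
    # plain 3D dot product
    return sum(x * y for x, y in zip(a, b))

def cue_angle(right, up, target_dir):
    """Radial angle (radians) around the gaze axis at which to place the
    SOUS cue, measured from the user's right toward up.  `right` and `up`
    span the plane perpendicular to the forward gaze; `target_dir` points
    toward the off-view target.  Hypothetical helper, for illustration."""
    return math.atan2(dot(target_dir, up), dot(target_dir, right))

# A target directly to the user's left places the cue on the left (180 deg);
# a target above and to the left places it upper-left (~135 deg).
right, up = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
left = math.degrees(cue_angle(right, up, (-1.0, 0.0, 0.0)))
upper_left = math.degrees(cue_angle(right, up, (-1.0, 1.0, 0.0)))
```

The cue itself would then be drawn at the fixed eccentricity along this angle on the HUD-style overlay layer.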
+
+fSOUS is semi-transparent and subtle, intended to support scenarios such as cinematic viewing in which more explicit cues might be highly disruptive to the viewing experience. We conducted a small pilot experiment with 5 participants to determine a lower-bound opacity level that was still detectable. All five participants found that they could see the cue at 5% opacity within a minimal skybox environment (Figure 4:a). While we also found that an opacity closer to 50% would be needed to be readily detectable in a more complex environment like Mixed (Figure 4:d), we maintained the 5% level across environments in our experiment. The circular cue uses a radial gradient that shifts between black and white at 5.56 Hz.
+
+bSOUS appears as a ring that blinks from red to white at 1.11 Hz. The cue is opaque and blinks rather than gradually changing colour, making it more immediately noticeable in peripheral vision. bSOUS uses a ring instead of the solid circle used by fSOUS because, without transparency, a solid circle is not visually distinct enough from the targets used in our experiments.
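
The two animation schedules can be illustrated with a short sketch. This is a hedged approximation, not the study's implementation: the paper specifies only the rates (1.11 Hz blink, 5.56 Hz gradient shift), so the duty cycle and phase conventions below are our assumptions.

```python
import math

def bsous_color(t, freq=1.11):
    """bSOUS ring colour at time t (seconds): alternates red/white at
    `freq` Hz.  A 50% duty cycle is assumed."""
    return "red" if (t * freq) % 1.0 < 0.5 else "white"

def fsous_gradient(t, freq=5.56):
    """fSOUS radial-gradient phase at time t, from 0 (black) to 1 (white),
    shifting smoothly at `freq` Hz.  Sinusoidal easing is assumed."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * freq * t))
```

The discrete on/off toggle for bSOUS versus the continuous phase for fSOUS mirrors the design rationale above: abrupt changes are easier to notice in peripheral vision, while a smooth shift is subtler.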
+
+§ 3.2 FLYINGARROW (FA)
+
+FA is a refinement of Arrow [23] for immersive AR, explored by Gruenefeld et al. [8]. FA's cue is a 3D arrow that flies toward the target, using animation to encourage the user to act. In the original design, the cue plays a sound once it collides with the target and then disappears.
+
+FA was designed for AR devices with small FoVs, like the Microsoft HoloLens v1, and we adapted FA for HMDs with larger FoVs. In the original version, FA arrows start in a corner of the screen and move across the user's limited screen space to allow the user time to perceive and interpret the cue. Given the increased FoV, in our variant the arrow starts 1 m in front of the user (in virtual space) in the center of the screen. We also removed the sound effect, as it was a potential confounding factor: all off-screen targets were equidistant from the user in our experiments, eliminating the need for a distance indicator (the role of the sound). We reused the 3D arrow asset from Gruenefeld et al. [8], available at https://github.com/UweGruenefeld/OutOfView. The 3D arrow can be seen in Figure 2.
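
The straight-flight behaviour of this FA variant amounts to linear motion from the spawn point (1 m in front of the user) toward the target at 10 m/s. The Python sketch below is illustrative only; the helper name and tuple-based vectors are our own.

```python
import math

def fa_cue_position(start, target, t, speed=10.0):
    """Position of the straight-flying FA cue t seconds after launch.
    The cue moves from `start` toward `target` at `speed` m/s and stops
    at the target.  Hypothetical helper, for illustration."""
    delta = [b - a for a, b in zip(start, target)]
    dist = math.sqrt(sum(c * c for c in delta))
    frac = min(1.0, speed * t / dist)  # clamp so the cue halts at the target
    return tuple(a + frac * c for a, c in zip(start, delta))

# Target 5 m ahead, cue spawned 1 m ahead: after 0.2 s the cue has
# covered 2 of the 4 m to the target.
pos = fa_cue_position((0.0, 0.0, 1.0), (0.0, 0.0, 5.0), t=0.2)  # (0.0, 0.0, 3.0)
```

In the actual system the arrow is also oriented along `delta` so that it points at the target while flying.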
+
+Figure 2: FlyingARrow as it appeared in the first part of the study. The arrow is travelling toward the target.
+
+We now describe the +Arc and +Trail modifications to FA that we explore in the second experiment. As discussed, +Arc was designed to make FA less obstructive to, and more visually distinguishable from, on-screen targets by orbiting around the user toward the target. While the standard FA cue starts 1 m in front of the user and travels straight to the target at a speed of 10 m/s, a +Arc cue starts $x$ metres away from the user, where $x$ is the physical distance from the user to the target. In our experiment, this is 5 m for all off-screen targets, placing the cue behind the on-screen targets. The cue's physical size is then adjusted to ensure that it has an angular size of 5°, equal to the cue's default angular size at 1 m. The cue then orbits around the user at an angular speed of $\tan^{-1}(10\ \mathrm{m/s}\,/\,x)$ around the user's upward vector. This vector is recalculated at every frame update.
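
The +Arc geometry described above can be summarized in a short sketch: the cue's physical size is scaled so that it keeps a 5° angular size at distance x, and its orbital angular speed follows the tan⁻¹(10 m/s / x) formulation. These are hypothetical Python helpers, not the Unity implementation.

```python
import math

# Constants taken from the description of FA and +Arc in the text
CUE_ANGULAR_SIZE_DEG = 5.0   # angular size the cue must maintain
LINEAR_SPEED = 10.0          # m/s, the straight-flight speed of FA

def scaled_cue_size(x):
    """Physical size (m) the cue needs to subtend 5 degrees at distance x,
    using the standard angular-size relation size = 2*x*tan(theta/2)."""
    return 2.0 * x * math.tan(math.radians(CUE_ANGULAR_SIZE_DEG) / 2.0)

def orbit_angular_speed(x):
    """Angular speed (rad/s) of the orbiting +Arc cue at distance x,
    following the tan^-1(10 m/s / x) formulation in the text."""
    return math.atan(LINEAR_SPEED / x)
```

At the experiment's 5 m target distance, the cue is scaled to five times its 1 m size and orbits at atan(2) ≈ 1.107 rad/s.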
+
++Trail makes the FA cue's visibility persist longer by emitting a trail; the standard FA cue is not visible to the user once it leaves the screen. The trail has the following Unity properties: Widest Size = 0.315 m, Narrowest Size = 0.315 m, Time = 5 s. This trail allows the user to maintain awareness of an FA cue and to follow the trail to relocate it. We altered the shape of the cue from an arrow to a cone in our second experiment, across all conditions, to improve the visual integration of the trail and the main cue.
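
The persistence behaviour of +Trail can be approximated by keeping recently emitted cue positions for the trail's 5 s lifetime and discarding older ones. This is a hedged Python sketch of the idea, not Unity's TrailRenderer; the class and method names are our own.

```python
import collections

class Trail:
    """Minimal sketch of the +Trail behaviour: recent cue positions are
    kept for `lifetime` seconds so a user who loses sight of the cue can
    follow the lingering trail back toward it."""
    def __init__(self, lifetime=5.0):
        self.lifetime = lifetime
        self.points = collections.deque()  # (timestamp, position) pairs

    def record(self, t, position):
        # called each frame with the cue's current position
        self.points.append((t, position))
        self._prune(t)

    def visible_points(self, t):
        # positions that should still be rendered at time t
        self._prune(t)
        return [p for _, p in self.points]

    def _prune(self, t):
        # drop points older than the trail lifetime
        while self.points and t - self.points[0][0] > self.lifetime:
            self.points.popleft()

# After 8 seconds of once-per-second samples, only the last 5 s survive.
trail = Trail(lifetime=5.0)
for t in range(8):
    trail.record(float(t), (float(t), 0.0, 0.0))
remaining = trail.visible_points(7.0)
```

A renderer would then draw a ribbon through `visible_points`, fading with age.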
+
+Below is a summary of the four variations of FA based on the new behaviours:
+
+ * FA-Arc-Trail: The cone travels straight and directly to the target without any trail, similar to FA used in the first study.
+
+ * FA+Arc-Trail: The cone orbits around the user (rotating around the user's upward vector $\vec{u}$ at the moment the target first appears) to reach the target. It does not leave a trail.
+
+ * FA-Arc+Trail: The cone travels straight to the target, leaving a trail.
+
+ * FA+Arc+Trail: The cone orbits around the user to reach the target (rotating around the user's upward vector $\vec{u}$ at the moment the target first appears) and leaves behind a trail.
+
+Figure 3: The variations of FA cues. The number represents the user's current score. a: FA-Arc-Trail in the Training environment. b: FA+Arc-Trail in the Hotel environment. c: FA-Arc+Trail in the None environment. d: FA+Arc+Trail in the Mixed environment. For more information about the environments and the credits for the assets used, please refer to Section 4.
+
+§ 4 ENVIRONMENTS
+
+Prior work [17] suggests that the visual complexity of the environment may impact user performance. In order to explore how techniques interact with environment complexity, we varied the environment as an experimental factor in our study. We created three types of environment: None, Hotel, and Mixed. The details of the environments are as follows:
+
+Figure 4: Screenshots of the environments as they appeared in our implementation (the left eye/screen is shown here). a: None. b: Hotel. c: Mixed.
+
+ * None (Figure 4:a): a generic skybox with a brown horizon and a blue clear sky, representing environments with low visual complexity. Constructed using the default skybox in Unity 2018.3.7f1.
+
+ * Hotel (Figure 4:b): a photorealistic skybox of a hotel room, representing environments with moderate visual complexity. This skybox is CC-licensed by Emil Persson (https://opengameart.org/content/indoors-skyboxes).
+
+ * Mixed (Figure 4:c): a combination of a photorealistic skybox and 3D models, some of which are animated; this represents environments with high visual complexity. The 3D models are taken from the Pupil Unity 3D plug-in (https://github.com/pupil-labs/hmd-eyes) and the skybox is CC-licensed by Emil Persson (https://opengameart.org/content/winter-skyboxes).
+
+While each environment differs in visual complexity, we are unable to quantify this precisely, as discussed previously. Instead, including these environments allows us to generally explore the robustness of each technique to typical environmental differences.
+
+§ 5 EXPERIMENT 1: COMPARING BSOUS, FSOUS AND FLYINGARROW
+
+We performed the first experiment to evaluate our techniques, bSOUS and fSOUS, against the existing FA technique. Furthermore, since bSOUS and fSOUS have different visual salience (achieved through differences in animation and opacity), we explore the impact of a peripheral cue's visual salience on target acquisition.
+
+§ 5.1 RESEARCH QUESTIONS
+
+RQ1.1: How do the techniques affect target acquisition performance and the user's cognitive load? To measure performance, we collect (1) the number of successful out-of-view target acquisitions and (2) the time to acquire an out-of-view target. We administer the NASA-TLX to assess cognitive load. An ideal cue has fast acquisition times, a high success rate, and low cognitive load.
+
+RQ1.2: How do the techniques interact with the visual scene? We measure how the environments affect (1) number of successful out-of-view target acquisitions, and (2) time to acquire an out-of-view target. An ideal cue works well under a range of visual scenes.
+
+RQ1.3: What are the subjective impressions of the cues? We gather subjective feedback through questionnaires and interviews. An ideal cue provides a positive experience for the user. A cue with good performance may be less viable than a technique with inferior performance that users prefer.
+
+§ 5.2 PARTICIPANTS
+
+We conducted the first study at a research university with 24 participants, recruited through an email list for graduate students in the faculty of computer science at our institution. Six participants were female, 14 were male, and four did not indicate their gender. 19 participants indicated that they had prior experience using a VR headset, and seven indicated that they had participated in a VR study before. The median score for self-reported VR proficiency was 4 out of 7.
+
+§ 5.3 SOFTWARE AND HARDWARE INSTRUMENT
+
+We used the Pimax 5K Plus for the study because it has a wider FoV than most commercially available headsets: its diagonal FoV is 200° [26]. The diagonal FoVs of other popular and widely available HMDs are: Oculus Rift DK1: 110° [5], HTC Vive: 110° [5], Microsoft HoloLens v1: 35° [27].
+
+During the training and the trials, each participant interacted with a VR interface implemented using Unity 2018.3.7f1 and SteamVR. The interface displayed a score in the top-left corner of the screen to keep the participants engaged. The targets were 3D spheres that the participants could select by rotating their head to land the cursor onto the target (gaze cursor). The gaze cursor is a circle 1.2 m in front of the user with a size of 0.01 m, giving it an angular size of 0.57°. Depending on the condition, the participant operated inside a specific virtual environment and could use one of the techniques to select out-of-view targets. We used R to analyze the data collected while the participants performed the tasks, as well as the NASA-TLX raw scores and the questionnaire answers.
+
+
+Figure 5: A screenshot of the interface taken from the left-side screen of P15. The current environment is Mixed. The number shows the participant's current score. The spheres represent the in-view targets. One of the targets is yellow because the participant was dwelling on it. Credits for the skybox photo and the 3D assets are given in Section 4.
+
+§ 5.4 QUESTIONNAIRE INSTRUMENT
+
+After completing each technique, each participant completed the NASA-TLX questionnaire (see Hart [11]) and the following 7-point Likert-scale questions:
+
+ * S1: The technique is overall effective for helping me to locate an object.
+
+ * S2: I can immediately understand what the technique is telling me.
+
+ * S3: The technique precisely tells me where the target is.
+
+ * S4: The technique helps me to rapidly locate the target.
+
+ * S5: The technique makes it difficult to concentrate.
+
+ * S6: The technique can be startling.
+
+ * S7: The technique gets my attention immediately.
+
+ * S8: The technique gets in the way of the virtual scene.
+
+ * S9: The technique makes me aware of the objects outside the FoV.
+
+ * S10: The technique is uncomfortable to use.
+
+For each Likert-scale question, each participant would rate the statement from 1 to 7 with 1 being "completely disagree" and 7 being "completely agree."
+
+§ 5.5 PROCEDURE
+
+§ 5.5.1 OVERVIEW
+
+The steps were as follows:
+
+ * STEP 1: The participant provided informed consent and completed the background questionnaire.
+
+ * STEP 2: We trained the participant to use one of the three techniques (bSOUS, fSOUS, FA) by asking them to select 10 out-of-view targets in the training environment while simultaneously trying to select as many in-view targets as possible. During the training, we primed the participant to prioritize selecting out-of-view targets. If the participant failed to become familiar with the technique, they repeated the training trials.
+
+ * STEP 3: The participant completed the actual trials by selecting 20 out-of-view targets while simultaneously trying to select as many in-view targets as possible in one of the three environments (None, Hotel, Mixed). After the 20 trials were completed, we changed the environment. We repeated this step until the participant had experienced all of the environments with the technique. Section 5.5.2 contains additional information on how a participant completed a trial.
+
+ * STEP 4: Afterward, the participant completed a NASA-TLX instrument and the 10 Likert-scale questions.
+
+ * STEP 5: The participant then repeated STEP 2 to STEP 4 until they had experienced all the techniques.
+
+
+Figure 6: a: When the head cursor lands on the target, the target turns yellow. The participant must dwell for 500 milliseconds to select it. b: When an out-of-view target is selected, it sparkles as shown in this screenshot. An in-view target simply fades away.
+
+Since there were 20 trials for each combination of environment and technique, each participant completed $3 \times 3 \times 20 = 180$ trials. We used Latin squares to arrange the ordering of the techniques and the environments; therefore, there were nine orders during the study.
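+One way to derive the nine orderings is to cross a 3×3 Latin square of technique orders with a 3×3 Latin square of environment orders. The sketch below (in Python; the paper does not give its exact construction, so the rotation-based square is an assumption) illustrates the idea:

```python
def latin_square(items):
    """Each row is a rotation of the condition list, so every
    condition appears exactly once in every ordinal position."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

techniques = latin_square(["FA", "bSOUS", "fSOUS"])
environments = latin_square(["None", "Hotel", "Mixed"])

# Crossing the 3 technique orders with the 3 environment orders
# yields the 9 participant orderings described in the text.
orders = [(t, e) for t in techniques for e in environments]
print(len(orders))  # 9
```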
+
+§ 5.5.2 COMPLETING A TRIAL
+
+The main task of the study involved target selection. Each participant selected a target via gaze selection by dwelling a cursor on the target for 500 milliseconds. There were two types of targets in the study: in-view and out-of-view. We considered a selection of an out-of-view target as a trial for our studies. While we asked our participants to select both types of targets, we also primed them to prioritize selecting out-of-view targets. We told the participants that they would earn more points by selecting out-of-view targets, and that the targets could disappear before a successful selection. We made the targets disappear to encourage the participants to find them as quickly as possible.
+
+The in-view targets spawned in front of the participant (within 40° of the user's forward vector) every one to two seconds, in any direction within that region. The participant had one second to select such a target before it disappeared. An in-view target was worth one point. The out-of-view targets spawned at least 80° away from the participant's forward vector in any direction. Since these targets were farther away, the participants had to use a technique to locate them. The spawning rate for this type of target was every 5.5 to 6.5 seconds, and the participant had five seconds to select an out-of-view target after it spawned; since out-of-view targets were farther away in terms of angular distance, a longer time was required. This type of target was worth 10 points. Both types of targets had the same appearance before selection (a white sphere with an angular size of 7°). The only visual difference was that an in-view target faded upon selection whereas an out-of-view target sparkled (Figure 6:b). We made the target appearances the same because we were controlling for visual salience.
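+The spawning parameters above can be summarized in a small sketch (Python; the structure and names are ours, not the study software's, and the 180° upper bound for out-of-view spawns is an assumption implied by "at least 80° away in any direction"):

```python
from dataclasses import dataclass
import random

@dataclass(frozen=True)
class TargetSpec:
    offset_range_deg: tuple   # (min, max) angular offset from forward vector
    spawn_interval_s: tuple   # (min, max) seconds between spawns
    timeout_s: float          # time allowed before the target disappears
    points: int

# Parameters as described in the text; both target types share the
# same 7-degree appearance, so only placement and timing differ.
IN_VIEW = TargetSpec((0.0, 40.0), (1.0, 2.0), 1.0, 1)
OUT_OF_VIEW = TargetSpec((80.0, 180.0), (5.5, 6.5), 5.0, 10)

def next_spawn_delay(spec: TargetSpec) -> float:
    """Draw the delay until the next spawn for this target type."""
    lo, hi = spec.spawn_interval_s
    return random.uniform(lo, hi)
```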
+
+We used the out-of-view target selection task to evaluate the performance and efficacy of the techniques. The in-view targets encouraged participants to return to the original orientation and dissuaded them from waiting for the next out-of-view target to appear. In our study, we considered an attempt to select an out-of-view target a trial. We considered a trial successful if the participant dwelled on the target long enough to trigger the selection animation, and unsuccessful if the participant could not dwell on the target long enough to trigger the animation or could not locate the target. The trial completion time for a successful selection was the duration from when the target first spawned until the participant landed the cursor on the target; this excludes the dwell time and the animation time.
+
+§ 5.6 RESULTS
+
+§ 5.6.1 NUMBER OF UNSUCCESSFUL TARGET ACQUISITIONS
+
+To answer RQ1.1-1.2, we recorded the number of failed out-of-view target acquisitions (i.e., the number of failed trials) per participant. For the number of unsuccessful target selections, we used lme4 to fit a mixed logistic regression that predicted the probability of failure with the techniques and the environments as factors, and the participants as the random effect. Then, we computed pseudo-$r^2$ values for the model using MuMIn, which implements an algorithm from Nakagawa, Johnson, and Schielzeth [16].
+
+The fitted model, on the log-odds scale, is: $\text{logit}(P(\text{Fail})) = -3.85 + 0.43 \times \text{fSOUS} + 0 \times \text{bSOUS} + 0.06 \times \text{Hotel} - 0.13 \times \text{Mixed} + 7.65 \times \text{fSOUS:Hotel} + 1.38 \times \text{bSOUS:Hotel} + 6.75 \times \text{fSOUS:Mixed} + 1.22 \times \text{bSOUS:Mixed}$. The coefficients, odds ratios (OR), standard errors (SE), and other information are summarized in Table 1. The $r^2$ values are as follows: theoretical marginal $= 0.06$, theoretical conditional $= 0.39$, delta marginal $= 0.06$, delta conditional $= 0.14$. The most important effect size for interpretation is the theoretical conditional $r^2$, since it represents the variance explained by the entire model including the random effect. Since it is 0.39, it indicates that the techniques and the visual scenes had a moderate effect on the success of target selection. However, the theoretical marginal $r^2$, i.e., the $r^2$ that excludes the random effect of the participants, is only 0.06, meaning that there is a strong effect from the individual participants themselves.
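+As a quick sanity check (sketched in Python; treatment coding with FA and the None environment as reference levels is our assumption), the reported main-effect odds ratios match $e^{\beta}$, and the intercept implies a low baseline failure probability:

```python
import math

# Sanity check: the reported odds ratios in Table 1 match exp(beta)
# for the main-effect coefficients (values copied from the paper).
main_effects = {
    "(Intercept)": (-3.85, 0.02),
    "fSOUS": (0.43, 1.53),
    "bSOUS": (0.00, 1.00),
    "Hotel": (0.06, 1.06),
    "Mixed": (-0.13, 0.88),
}
for name, (beta, reported_or) in main_effects.items():
    assert abs(math.exp(beta) - reported_or) < 0.01, name

# Baseline failure probability (FA in the None environment, i.e.,
# all indicators zero) via the inverse logit:
p_fail = 1 / (1 + math.exp(3.85))
print(round(p_fail, 3))  # 0.021
```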
+
+Based on Table 1, we found a strong interaction between the environments and fSOUS: the participants failed more often while using fSOUS with Hotel and Mixed. Since the participants (n = 14) indicated during the interviews that they often found the fSOUS cues blending into the environment, we conclude that the faint nature of the cue led the participants to lose sight of it and subsequently fail to select the targets.
+
+Regarding RQ1.1, we observed that target acquisition success differed between techniques, but this was also conditioned on the visual scene or environment, which ties directly to RQ1.2. We found that the visual scene could affect how the participants perceived the cue and their subsequent success in target acquisition. Although bSOUS and fSOUS have very similar cueing mechanisms, they performed very differently in terms of target acquisition success across environments.
+
+| Term | $\beta$ | OR | SE | Z | p |
+| --- | --- | --- | --- | --- | --- |
+| (Intercept) | -3.85 | 0.02 | 0.34 | -11.21 | 0.00* |
+| **Techniques** | | | | | |
+| fSOUS | 0.43 | 1.53 | 0.33 | 1.31 | 0.19 |
+| bSOUS | 0.00 | 1.00 | 0.35 | 0.00 | 1.00 |
+| **Environments** | | | | | |
+| Hotel | 0.06 | 1.06 | 0.35 | 0.18 | 0.86 |
+| Mixed | -0.13 | 0.88 | 0.36 | -0.37 | 0.71 |
+| **Tech. : Env.** | | | | | |
+| fSOUS:Hotel | 7.65 | 2.04 | 0.43 | 4.78 | 0.00* |
+| bSOUS:Hotel | 1.38 | 0.32 | 0.48 | 0.67 | 0.50 |
+| fSOUS:Mixed | 6.75 | 1.91 | 0.44 | 4.34 | 0.00* |
+| bSOUS:Mixed | 1.22 | 0.20 | 0.50 | 0.39 | 0.70 |
+Table 1: Summary of the coefficients, odds ratios (OR), and associated tests for the mixed logistic regression predicting the probability of failing to acquire an out-of-view target based on the techniques and the environments. * signifies that $p \leq 0.05$.
+
+§ 5.6.2 TIME FOR TARGET ACQUISITION
+
+In addition to the probability of target acquisition, we also considered the time of target acquisition as another important measure to answer RQ1.1-1.2. We measured the time the participants took to reach the targets in successful trials (i.e., excluding the 500 ms dwelling time). Then, we fitted a mixed multiple linear regression model using the participants as the random effect. We studied the following variables: (1) the techniques, (2) the environments, and (3) the angular distance between the user's initial orientation and the target. Even though our main focus is on the techniques and the environments, we also have to account for distance, as the out-of-view targets spawned at different distances from the participants. We did not have to consider Fitts's law for this study because our targets had the same angular size.
+
+We did not normalize the data, following the suggestion made by Schmidt and Finan [24], who argue that if the sample size is sufficiently large, normalization could introduce a statistical bias when fitting a linear model. The model that we fitted using lme4 was as follows: $\text{Time} = 2.19 - 0.14 \times \text{bSOUS} + 0.31 \times \text{fSOUS} - 0.06 \times \text{None} - 0.03 \times \text{Mixed} + 0.01 \times \text{Dist} + 0.09 \times \text{bSOUS:None} - 0.34 \times \text{fSOUS:None} + 0.07 \times \text{bSOUS:Mixed} + 0.02 \times \text{fSOUS:Mixed}$. The $r^2$ values of the model computed using MuMIn were: marginal $= 0.06$, conditional $= 0.25$. The conditional $r^2$ indicated that the model was moderately good at explaining the time required to reach a target. Table 2 shows the results of the tests on the coefficients. Although Fitts's law suggests that larger angular distances should increase the time to reach the target, the coefficient representing angular distance ($\beta = 0.01$, $t(3930.34) = 13.62$, $p \leq 0.05$) was small compared to the other coefficients. We found an interaction effect between the techniques and the environments; in particular, fSOUS was faster in None ($\beta = -0.34$, $t(3927.99) = -4.73$, $p \leq 0.05$). The participants (n = 14) indicated that the fSOUS cue blending into the more visually complex environments (Mixed, Hotel) caused them to be slower. In terms of main effects, the techniques were statistically significant, with bSOUS being the fastest, FA the second fastest, and fSOUS the slowest. On the other hand, the main effects of the environments were not statistically significant.
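+To illustrate how the fitted coefficients combine, the sketch below (Python; treatment coding with FA and the Hotel environment as reference levels is our assumption) computes predicted acquisition times at a hypothetical 100° target distance:

```python
# Reported fixed-effect coefficients from the time model (seconds).
COEF = {
    "(Intercept)": 2.19, "bSOUS": -0.14, "fSOUS": 0.31,
    "None": -0.06, "Mixed": -0.03, "Dist": 0.01,
    "bSOUS:None": 0.09, "fSOUS:None": -0.34,
    "bSOUS:Mixed": 0.07, "fSOUS:Mixed": 0.02,
}

def predicted_time(technique: str, env: str, dist_deg: float) -> float:
    """Sum the applicable terms; FA and Hotel contribute nothing."""
    t = COEF["(Intercept)"] + COEF["Dist"] * dist_deg
    t += COEF.get(technique, 0.0)             # FA is the baseline
    t += COEF.get(env, 0.0)                   # Hotel is the baseline
    t += COEF.get(f"{technique}:{env}", 0.0)  # interaction term
    return t

# fSOUS loses its penalty in the sparse None environment:
print(round(predicted_time("fSOUS", "None", 100), 2))   # 3.1
print(round(predicted_time("fSOUS", "Hotel", 100), 2))  # 3.5
```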
+
+Interestingly, some participants (n = 4) indicated during the interviews that FA gave them more time to reach the target, when this was actually not the case. Some (n = 4) felt that the speed of the cue influenced their own target acquisition speed, making them slower.
+
+| Term | $\beta$ | SE | df | t | p |
+| --- | --- | --- | --- | --- | --- |
+| (Intercept) | 2.19 | 0.10 | 57.73 | 22.58 | 0.00* |
+| **Techniques** | | | | | |
+| bSOUS | -0.14 | 0.05 | 3927.89 | -2.81 | 0.00* |
+| fSOUS | 0.31 | 0.05 | 3928.12 | 5.94 | 0.00* |
+| **Environments** | | | | | |
+| None | -0.06 | 0.05 | 3927.91 | -1.30 | 0.19 |
+| Mixed | -0.03 | 0.05 | 3927.91 | -0.66 | 0.51 |
+| **Angular Distance** | | | | | |
+| Dist | 0.01 | 0.00 | 3930.34 | 13.62 | 0.00* |
+| **Tech. : Env.** | | | | | |
+| bSOUS:None | 0.09 | 0.07 | 3927.88 | 1.36 | 0.17 |
+| fSOUS:None | -0.34 | 0.07 | 3927.99 | -4.73 | 0.00* |
+| bSOUS:Mixed | 0.07 | 0.07 | 3927.93 | 0.97 | 0.33 |
+| fSOUS:Mixed | 0.02 | 0.07 | 3928.04 | 0.28 | 0.78 |
+Table 2: Summary of the coefficients and the associated tests for the mixed linear regression of the time required to reach out-of-view targets. We only considered successful trials for analysis. * signifies $p \leq 0.05$.
+
+§ 5.6.3 COGNITIVE LOAD
+
+To further answer RQ1.1, we collected and analyzed the participants' cognitive load after each technique using NASA-TLX scores. The median NASA-TLX raw scores are as follows: FA $= 42$, fSOUS $= 70.5$, bSOUS $= 57.5$. This suggests that fSOUS and bSOUS induced higher cognitive load than FA. A repeated-measures ART ANOVA (using art) revealed significant differences between the techniques ($F(2, 18.63) = 7.40$, $p \leq 0.05$). Pairwise comparisons with Tukey adjustment (using emmeans) showed significant differences between FA and fSOUS ($c = -21.0$, $t_{\text{ratio}}(46) = -6.0$, $p \leq 0.05$), and between fSOUS and bSOUS ($c = 13.9$, $t_{\text{ratio}}(46) = 3.92$, $p \leq 0.05$). The difference between FA and bSOUS was not statistically significant ($c = -7.1$, $t_{\text{ratio}}(46) = -2.03$). The interview data from 16 participants suggested that fSOUS induced more cognitive load because the cue tended to blend into the environment, forcing them to simultaneously search for the cue and the target.
+
+§ 5.6.4 QUESTIONNAIRE
+
+To answer RQ1.3, we administered a questionnaire after a participant finished using a technique.
+
+S1: Many of the participants indicated that FA (Mdn = 7) was overall effective at helping them locate an object. fSOUS was considerably less effective (Mdn = 4); however, Figure 7 shows that its Likert scores were distributed quite evenly, indicating mixed opinions among the participants. bSOUS (Mdn = 6) was overall more effective than fSOUS, but slightly less effective than FA.
+
+S2: Many of the participants found FA and bSOUS to be very comprehensible (Mdn: FA = 7, bSOUS = 6). Interestingly, on average, they found fSOUS to be less comprehensible (Mdn = 4.5) than bSOUS, even though it uses the same mechanism to provide the location of the out-of-view targets. The scores for fSOUS were somewhat evenly distributed (Figure 7), which indicates mixed opinions among the participants.
+
+S3: Many of the participants found FA and bSOUS to be very precise (Mdn: FA = 7, bSOUS = 6). Interestingly, the participants overall found fSOUS (Mdn = 4.5) to be less precise, even though it shares its cueing mechanism with bSOUS.
+
+S4: Many of the participants found that FA and bSOUS helped them to quickly acquire the out-of-view targets (Mdn: FA = 7, bSOUS = 6). fSOUS was less effective (Mdn = 3) even though it shares its cueing mechanism with bSOUS. However, some participants still found fSOUS to be effective.
+
+S5: The medians (Mdn: FA = 3.5, fSOUS = 3, bSOUS = 2) indicated that, overall, the three techniques did not negatively affect concentration. However, Figure 7 indicates that bSOUS had the best performance in this regard.
+
+| Q | Technique | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| S1 | FA | 0 | 0 | 0 | 2 | 1 | 3 | 18 |
+| S1 | fSOUS | 2 | 5 | 2 | 5 | 6 | 1 | 3 |
+| S1 | bSOUS | 0 | 0 | 0 | 1 | 8 | 5 | 10 |
+| S2 | FA | 0 | 0 | 1 | 0 | 1 | 3 | 19 |
+| S2 | fSOUS | 4 | 3 | 1 | 4 | 5 | 5 | 2 |
+| S2 | bSOUS | 0 | 1 | 0 | 2 | 5 | 7 | 9 |
+| S3 | FA | 0 | 0 | 0 | 0 | 3 | 2 | 19 |
+| S3 | fSOUS | 3 | 6 | 5 | 2 | 3 | 2 | 3 |
+| S3 | bSOUS | 0 | 2 | 2 | 4 | 2 | 5 | 9 |
+| S4 | FA | 0 | 1 | 3 | 1 | 2 | 3 | 14 |
+| S4 | fSOUS | 4 | 7 | 2 | 2 | 3 | 3 | 3 |
+| S4 | bSOUS | 0 | 0 | 3 | 0 | 3 | 8 | 10 |
+| S5 | FA | 5 | 3 | 4 | 2 | 3 | 6 | 1 |
+| S5 | fSOUS | 5 | 2 | 6 | 3 | 3 | 2 | 3 |
+| S5 | bSOUS | 9 | 6 | 1 | 1 | 3 | 2 | 2 |
+| S6 | FA | 3 | 5 | 1 | 6 | 6 | 1 | 2 |
+| S6 | fSOUS | 5 | 4 | 5 | 5 | 4 | 0 | 1 |
+| S6 | bSOUS | 4 | 6 | 4 | 3 | 3 | 3 | 1 |
+| S7 | FA | 0 | 0 | 1 | 0 | 1 | 9 | 13 |
+| S7 | fSOUS | 11 | 2 | 6 | 1 | 1 | 2 | 1 |
+| S7 | bSOUS | 0 | 0 | 1 | 3 | 4 | 7 | 9 |
+| S8 | FA | 2 | 3 | 4 | 1 | 4 | 4 | 6 |
+| S8 | fSOUS | 13 | 2 | 4 | 2 | 1 | 2 | 0 |
+| S8 | bSOUS | 8 | 4 | 7 | 2 | 1 | 1 | 1 |
+| S9 | FA | 0 | 0 | 0 | 1 | 2 | 5 | 16 |
+| S9 | fSOUS | 2 | 3 | 2 | 5 | 6 | 4 | 2 |
+| S9 | bSOUS | 1 | 0 | 0 | 1 | 4 | 8 | 10 |
+| S10 | FA | 10 | 5 | 3 | 2 | 2 | 2 | 0 |
+| S10 | fSOUS | 1 | 4 | 5 | 4 | 1 | 2 | 7 |
+| S10 | bSOUS | 7 | 4 | 6 | 2 | 3 | 2 | 0 |
+
+Figure 7: The heatmap represents the frequencies of the responses to the Likert-scale statements. Red indicates fewer participants and green indicates more participants. The numbers are the frequencies of responses.
+
+S6: The participants indicated that, on average (Mdn: FA = 4, fSOUS = 3, bSOUS = 3), all techniques were almost equally startling. We found this result interesting: we expected fSOUS to be the least startling because its cue was faint. Figure 7 indicates somewhat even distributions of scores for all three techniques. The median score for FA was somewhat surprising, as we expected the participants to find the technique more startling because, unlike SOUS, FA could travel very close to the user or even through the user. However, during the interviews, only a few participants (n = 3) indicated this to be an issue.
+
+S7: FA (Mdn = 7) and bSOUS (Mdn = 6) were similarly effective at grabbing the participants' attention, whereas fSOUS (Mdn = 2) was less effective. It was not surprising for FA to be more attention-grabbing than fSOUS, as the FA cue initially appears in the user's foveal vision as opposed to their peripheral vision. However, we found the similar effectiveness of FA and bSOUS surprising.
+
+S8: Most of the participants indicated that FA (Mdn = 5) was more obstructive than fSOUS (Mdn = 1) and bSOUS (Mdn = 2.5). This result suggests that peripheral-based techniques are beneficial for reducing visual obstruction.
+
+S9: Most of the participants indicated that FA (Mdn = 7) and bSOUS (Mdn = 6) were effective at making them aware of out-of-view targets. At a glance, fSOUS (Mdn = 4.5) seemed not to make the participants aware of the out-of-view targets. However, Figure 7 indicates a bimodal distribution for fSOUS, meaning that some participants found the technique helpful for becoming aware of out-of-view targets whereas others did not.
+
+S10: FA (Mdn = 2) and bSOUS (Mdn = 3) were similar in terms of comfort, while fSOUS (Mdn = 4) was slightly more uncomfortable to use. We noticed from Figure 7 that some participants found fSOUS very uncomfortable to use while others found it as comfortable as the other techniques.
+
+§ 6 EXPERIMENT 2: COMPARING THE VARIANTS OF FLYINGARROW
+
+We performed the second experiment to observe whether we could improve FA using certain properties of bSOUS and fSOUS: both have visually persistent cues whose positions are relative to the user. We compared four variations of FA in this part of the study: FA-Arc-Trail, FA-Arc+Trail, FA+Arc-Trail, and FA+Arc+Trail. Whereas the FA cue in the first part is an arrow (Figure 2), the FA cue in the second part is a cone (Figure 3) to make its appearance more compatible with a trail.
+
+Experiment 2 was largely similar to the first. Each participant used a technique in the three environments, completed a NASA-TLX questionnaire and 10 Likert-scale questions, completed an interview, and moved on to the next technique. After completing the first part of the study, each participant proceeded directly to this one after a short break. We asked our participants to use the four variations in the three environments to select 20 out-of-view targets per condition; each participant therefore performed $4 \times 3 \times 20 = 240$ trials in total. Similar to the first study, we used a Latin square to arrange the conditions, yielding 12 orders of conditions. Since the participants were already familiar with the target selection task, we increased the difficulty by decreasing the angular size of the targets from 7° to 5°. After completing this part, each participant received 15 Canadian dollars as compensation for their time across both experiments.
+
+§ 6.1 RESEARCH QUESTIONS
+
+RQ2.1: Do the variations of FA have the same performance and induce the same cognitive load? If the variations differ in performance, they should have different probabilities of target acquisition failure and different speeds to reach the target; they may also induce different amounts of cognitive load.
+
+RQ2.2: Does each variation of FA interact differently with the environments? The slight modifications to the original FA technique may induce different interactions with the environments.
+
+RQ2.3: Does each variation of FA induce different target acquisition paths? In particular, does the orbital +Arc encourage different acquisition paths than the more direct standard FA?
+
+RQ2.4: Does each variation of FA leave a different subjective impression? Although FA made a good subjective impression in Gruenefeld et al. [8], we may be able to observe differences between the variations.
+
+§ 6.2 RESULTS
+
+§ 6.2.1 NUMBER OF UNSUCCESSFUL TARGET ACQUISITIONS
+
+To answer RQ2.1-2.2, we fitted a mixed multiple logistic regression model for the probability of failing a trial, using the participants as the random effect, with lme4. We obtained the following model: $\text{logit}(P(\text{Fail})) = -5.04 + 0.96 \times \text{Arc} + 0.90 \times \text{Trail} + 0 \times \text{None} - 0.5 \times \text{Mixed} - 1.10 \times \text{Arc:Trail} + 0.22 \times \text{Arc:None} + 1.08 \times \text{Arc:Mixed} - 0.56 \times \text{Trail:None} + 0.21 \times \text{Trail:Mixed} + 0.26 \times \text{Arc:Trail:None} - 0.14 \times \text{Arc:Trail:Mixed}$. The tests for the coefficients are summarized in Table 3. The $r^2$ values for the model computed with MuMIn were: theoretical marginal $= 0.06$, theoretical conditional $= 0.43$, delta marginal $= 0.01$, and delta conditional $= 0.07$. The most important $r^2$ is the theoretical conditional $r^2$, which represents the goodness of fit of all terms in the model including the random effect. In this case, the effect size was moderate.
+
+The main effects +Arc ($Z = 2.22$, $p = 0.03$, $OR = 2.61$) and +Trail ($Z = 2.06$, $p \leq 0.05$, $OR = 2.46$) were statistically significant, meaning that +Arc and +Trail each increased the probability of missing the target. Meanwhile, the interaction between +Arc and +Trail was borderline statistically significant ($Z = -1.95$, $p = 0.051$). This means that the combination of +Arc and +Trail was better than a variation with just one of the two behaviours. The environment did not affect performance for any variation of FA, meaning that despite the different cue trajectories and visual effects, the variations were relatively resistant to the visual complexities of the environments.
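+Plugging the reported coefficients into the inverse logit (a Python sketch; treatment coding with the Hotel environment as the reference level is our assumption) shows how the +Arc:+Trail interaction partly offsets the two main effects:

```python
import math

# Reported log-odds coefficients, baseline (Hotel) environment only.
B0, ARC, TRAIL, ARC_TRAIL = -5.04, 0.96, 0.90, -1.10

def p_fail(arc: bool, trail: bool) -> float:
    """Probability of failing a trial for one FA variant."""
    logit = B0 + ARC * arc + TRAIL * trail + ARC_TRAIL * (arc and trail)
    return 1 / (1 + math.exp(-logit))

# The combined variant fails less often than either single addition,
# echoing the interpretation of the borderline interaction term.
for arc in (False, True):
    for trail in (False, True):
        print(f"+Arc={arc} +Trail={trail}: {p_fail(arc, trail):.3f}")
```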
+
+| Term | $\beta$ | OR | SE | Z | p |
+| --- | --- | --- | --- | --- | --- |
+| (Intercept) | -5.04 | 0.01 | 0.49 | -10.39 | 0.00* |
+| +Arc (TRUE) | 0.96 | 2.61 | 0.43 | 2.22 | 0.03* |
+| +Trail (TRUE) | 0.90 | 2.45 | 0.44 | 2.06 | 0.04* |
+| Env. (None) | 0.00 | 1.00 | 0.51 | 0.01 | 0.99 |
+| Env. (Mixed) | -0.50 | 0.61 | 0.57 | -0.86 | 0.39 |
+| +Arc : +Trail | -1.10 | 0.33 | 0.56 | -1.95 | 0.05* |
+| +Arc : None | 0.22 | 1.24 | 0.60 | 0.36 | 0.72 |
+| +Arc : Mixed | 1.08 | 2.94 | 0.65 | 1.65 | 0.10 |
+| +Trail : None | -0.56 | 0.57 | 0.64 | -0.87 | 0.39 |
+| +Trail : Mixed | 0.21 | 1.24 | 0.68 | 0.31 | 0.76 |
+| +Arc : +Trail : None | 0.26 | 1.30 | 0.81 | 0.32 | 0.75 |
+| +Arc : +Trail : Mixed | -0.14 | 0.87 | 0.82 | -0.17 | 0.87 |
+
+Table 3: The coefficients, their associated ORs, and tests for the mixed logistic regression predicting the probability of failing a trial. * signifies that $p \leq 0.05$.
+
+
+Figure 8: The average speed of target acquisition per condition. The unit is degrees per second. Darker green means faster target acquisition.
+
+§ 6.2.2 TIME FOR TARGET ACQUISITION
+
+To answer RQ2.1-2.2, we fitted a mixed multiple linear regression model that predicted the time for successful target selection using lme4. We fitted the following model: $\text{Time} = 2.17 + 0.38 \times \text{Arc} - 0.12 \times \text{Trail} - 0.08 \times \text{None} - 0.02 \times \text{Mixed} + 0.01 \times \text{Dist} + 0.17 \times \text{Arc:Trail} + 0.15 \times \text{Arc:None} - 0.01 \times \text{Arc:Mixed} + 0.03 \times \text{Trail:None} + 0.02 \times \text{Trail:Mixed} - 0.25 \times \text{Arc:Trail:None} - 0.03 \times \text{Arc:Trail:Mixed}$. Table 4 shows the results of the tests on the coefficients. The pseudo-$r^2$ values that we computed using MuMIn were: marginal $= 0.12$, conditional $= 0.36$. The conditional $r^2$ was considered moderate. While distance was statistically significant ($\beta = 0.01$, $t(5530.64) = 18.52$, $p \leq 0.05$), it did not contribute much to the time required to reach the targets. We found that, in general, +Trail increased the speed of target acquisition while +Arc slowed the participants down. To better explain how +Trail and +Arc affected speed, we created a supplementary heatmap (Figure 8) that represents the average speed of target acquisition per condition.
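+The same kind of back-of-the-envelope prediction works here (Python sketch; treatment coding with the Hotel environment as the reference level and the 100° distance are our assumptions):

```python
# Reported fixed-effect coefficients for the Experiment 2 time model,
# baseline (Hotel) environment only (seconds).
B0, ARC, TRAIL, ARC_TRAIL, DIST = 2.17, 0.38, -0.12, 0.17, 0.01

def predicted_time(arc: bool, trail: bool, dist_deg: float) -> float:
    """Predicted acquisition time for one FA variant."""
    return (B0 + DIST * dist_deg + ARC * arc + TRAIL * trail
            + ARC_TRAIL * (arc and trail))

# At a fixed 100-degree distance, +Trail alone speeds up acquisition
# while +Arc slows it down, matching the main effects in Table 4.
print(round(predicted_time(False, False, 100), 2))  # FA-Arc-Trail
print(round(predicted_time(False, True, 100), 2))   # FA-Arc+Trail
print(round(predicted_time(True, False, 100), 2))   # FA+Arc-Trail
print(round(predicted_time(True, True, 100), 2))    # FA+Arc+Trail
```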
+
+| Term | $\beta$ | SE | df | t | p |
+| --- | --- | --- | --- | --- | --- |
+| (Intercept) | 2.17 | 0.09 | 40.11 | 23.16 | 0.00* |
+| +Arc (TRUE) | 0.38 | 0.04 | 5527.98 | 8.70 | 0.00* |
+| +Trail (TRUE) | -0.12 | 0.04 | 5527.99 | -2.89 | 0.00* |
+| Env. (None) | -0.08 | 0.04 | 5527.99 | -1.76 | 0.08 |
+| Env. (Mixed) | -0.02 | 0.04 | 5527.98 | -0.50 | 0.62 |
+| Angular Distance (Dist) | 0.01 | 0.00 | 5530.64 | 18.52 | 0.00* |
+| +Arc : +Trail | 0.17 | 0.06 | 5527.98 | 2.72 | 0.01* |
+| +Arc : None | 0.15 | 0.06 | 5527.98 | 2.42 | 0.02* |
+| +Arc : Mixed | -0.01 | 0.06 | 5527.98 | -0.20 | 0.84 |
+| +Trail : None | 0.03 | 0.06 | 5527.98 | 0.45 | 0.65 |
+| +Trail : Mixed | 0.02 | 0.06 | 5527.98 | 0.30 | 0.77 |
+| +Arc : +Trail : None | -0.25 | 0.09 | 5527.98 | -2.92 | 0.00* |
+| +Arc : +Trail : Mixed | -0.03 | 0.09 | 5527.99 | -0.33 | 0.74 |
+
+Table 4: Summary of the coefficients and the associated tests for the time required to reach out-of-view targets. We only considered successful trials for analysis. * signifies $p \leq 0.05$.
+
+§ 6.2.3 STRAIGHTNESS
+
+To answer RQ2.3, we performed a three-way repeated-measures ANOVA (using lme4) on the normalized $d$ of successful trials. We used bestNormalize for the normalization process. The factors were +Arc, +Trail, and the environments. The significant effects were +Arc ($F(1, 5529.2) = 81.58$, $p \leq 0.05$) and +Trail ($F(1, 5529.0) = 13.55$, $p \leq 0.05$). The test for the environments ($F(1, 5529.0) = 1.74$) was not statistically significant, nor were the tests for the interactions (+Arc:+Trail: $F(1, 5529.0) = 0.32$; +Arc:Environment: $F(2, 5528.9) = 0.74$; +Trail:Environment: $F(2, 5528.9) = 1.02$; +Arc:+Trail:Environment: $F(2, 5528.8) = 1.32$). The subsequent post-hoc tests with emmeans on +Arc ($c = 0.23$, $Z_{\text{ratio}} = 9.03$, $p \leq 0.05$) and +Trail ($c = -0.10$, $Z_{\text{ratio}} = -3.68$, $p \leq 0.05$) were statistically significant. Figure 9 presents an interaction plot of the results. We observed that +Arc made target acquisition trajectories more circuitous, while +Trail made them more direct.
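+The paper's exact $d$ is defined earlier in the text; for intuition, one common straightness measure divides the angular path actually traveled by the direct angular distance to the target, as in this illustrative Python sketch (the metric and names here are our assumption, not necessarily the authors' $d$):

```python
import math

def path_straightness(samples_deg: list) -> float:
    """samples_deg: (yaw, pitch) head samples from start to selection.
    Returns traveled angular path / direct angular distance;
    1.0 means a perfectly direct acquisition path."""
    traveled = sum(
        math.dist(samples_deg[i], samples_deg[i + 1])
        for i in range(len(samples_deg) - 1)
    )
    direct = math.dist(samples_deg[0], samples_deg[-1])
    return traveled / direct if direct > 0 else float("inf")

# A detour (as the orbital +Arc encourages) inflates the ratio above 1:
direct_path = [(0.0, 0.0), (45.0, 0.0), (90.0, 0.0)]
detour_path = [(0.0, 0.0), (45.0, 30.0), (90.0, 0.0)]
print(path_straightness(direct_path))  # 1.0
print(path_straightness(detour_path))
```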
+
+
+Figure 9: An interaction plot representing the tests of normalized $d$ in the second part of the study. env = Environment. Because the y-axis represents normalized $d$, the plot does not show descriptive statistics; rather, it is meant to help interpret the ANOVA results in Section 6.2.3.
+
+§ 6.2.4 COGNITIVE LOAD
+
+To further answer RQ2.1, we analyzed the cognitive load collected using the NASA-TLX questionnaire. After each participant completed a technique with all environments, we collected their raw NASA-TLX score. The median scores were as follows: FA-Arc-Trail = 54, FA-Arc+Trail = 57, FA+Arc-Trail = 48, FA+Arc+Trail = 57. Using a repeated-measures ART ANOVA with art, we found that the interaction between +Arc and +Trail was not statistically significant ($F(1, 69) = 0.82$), and neither were the main effects: +Arc ($F(1, 69) = 0.11$) and +Trail ($F(1, 69) = 0.92$). Despite these results, we could not conclude that the techniques induced roughly the same amount of cognitive load, because we observed that the median for FA+Arc-Trail was much lower than those for the other variations.
+
+§ 6.2.5 QUESTIONNAIRE
+
+Overall, apart from a few statements (for example, S10), the participants did not indicate much difference between the variations. Observing Figure 10, we note that the distributions of the scores tend to centre around higher numbers or are somewhat uniformly distributed. Therefore, the questionnaire results did not provide a clear answer to RQ2.4 except in a few cases.
+
+S1: On average, participants indicated that all variations were almost as effective as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S2: On average, participants indicated that all variations were as understandable as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+
+Figure 10: The heatmap represents the frequencies of the responses to the Likert-scale statements. Red means fewer participants and green means more participants. The numbers indicate the frequencies of responses.
+
+S3: On average, participants indicated that all techniques were as precise as each other (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S4: The participants, on average, indicated that +Arc techniques were slightly more helpful (Mdn: FA+Arc-Trail = 5, FA+Arc+Trail = 5) and that the techniques whose cues traveled in a straight line were more effective (Mdn: FA-Arc-Trail = 7, FA-Arc+Trail = 6).
+
+S5: The participants found it most difficult to concentrate with FA-Arc+Trail (Mdn = 4.5). The second worst technique was FA-Arc-Trail (Mdn = 3.5), and the third worst was FA+Arc+Trail (Mdn = 3). The best technique was FA+Arc-Trail (Mdn = 2). The interview data indicated that the trail might have made it difficult to concentrate; five participants said so during the interview for FA-Arc+Trail, and three said so during the interview for FA+Arc+Trail. We believe that +Arc might make it somewhat easier to concentrate, because the cue never got close to the participants.
+
+S6: FA-Arc+Trail was the most startling variation (Mdn = 5), closely followed by FA-Arc-Trail (Mdn = 4) and FA+Arc-Trail (Mdn = 3). The least startling variation was FA+Arc+Trail (Mdn = 2).
+
+S7: On average, participants indicated that all techniques were almost as effective as each other at getting their attention (Mdn: FA-Arc-Trail = 6, FA-Arc+Trail = 7, FA+Arc-Trail = 6.5, FA+Arc+Trail = 6).
+
+S8: We found the +Trail techniques (Mdn: FA-Arc+Trail = 6.5, FA+Arc+Trail = 5) to be more obstructive than their -Trail counterparts (Mdn: FA-Arc-Trail = 4.5, FA+Arc-Trail = 4).
+
+S9: On average, participants indicated that all techniques were almost as effective as each other at making them aware of objects outside the FoV (Mdn: FA-Arc-Trail = 6, FA-Arc+Trail = 7, FA+Arc-Trail = 6, FA+Arc+Trail = 6).
+
+S10: On average, participants indicated FA+Arc+Trail to be the most comfortable (Mdn = 2), followed by FA-Arc-Trail (Mdn = 2.5). The third best technique was FA+Arc-Trail (Mdn = 3) and the worst was FA-Arc+Trail (Mdn = 4). We think the arc trajectory made the trail more comfortable to use, because the trail was never close to the participants.
+
+§ 7 DISCUSSION
+
+§ 7.1 EXPERIMENT 1
+
+In experiment 1, the results indicated that bSOUS is a viable technique. bSOUS is faster and less obstructive than FA. This also highlights the benefit of a technique with an interface solely inside the user's peripheral vision. Since the cue will always be inside the user's peripheral vision, we do not need to worry about positioning the cue as with FA or 3D Wedge. Although the speed increment by bSOUS is small, we still think it is important for certain scenarios, such as competitive gaming, where any increment matters.
+
+We found that the effectiveness of fSOUS is very sensitive to the visual complexity of the environment. Although fSOUS can help the user locate targets quickly in a less visually complex environment, it can end up hindering the user in more complex environments. This contradicts other works that have successfully used faint cues to guide the user, such as Rothe and Hußmann [21, 22] and McNamara et al. [15]. One notable difference between our work and theirs is that fSOUS is strictly inside the user's far-peripheral vision, while the other techniques have cues that are closer to the user's macular vision. Our results are more in line with other findings (such as [14, 19]) which show that discerning details in far-peripheral vision is difficult. Therefore, we conclude that while a faint technique can be effective, it tends to interact negatively with the visual scene and becomes less effective in the peripheral vision. We recommend that a faint cue be used only in scenarios where it will not be restricted to the user's peripheral vision. If the cue must be within the user's peripheral vision, then the scene must not be visually complex. We also suggest using a faint cue in low-stakes tasks where successful target acquisition is not paramount to the task as a whole. For example, the user may just be exploring a virtual museum at their own pace and does not benefit from interacting with virtual museum pieces.
+
+Gruenefeld et al. [8] argue that since FA has low usability (as measured by the System Usability Scale), this may explain why it is slower than EyeSee360, another technique in their study. However, our study suggests an alternative explanation for the relatively lower speed of FA. Some participants in our study believed that FA gave them a specific timeframe to complete the task, while some said that the cue influenced their speed of target acquisition. Therefore, we argue that the true strength of FA is not maximizing the speed of target acquisition, but limiting it. Still, how well FA can control the speed will depend on many factors. For example, FA may not be as effective at controlling speed if we prime the user to ignore the speed of the cue.
+
+§ 7.2 EXPERIMENT 2
+
+While +Arc made FA less obtrusive and thus more similar to SOUS, we did not observe an increase in speed like SOUS. Instead, it slowed down the participants even further. FA also did not become more comfortable to use despite less visual obstruction. However, we believe that we can increase comfort by making the cue adjust its own trajectory to maximize comfort; for example, a cue could take a path around the user instead of above the user to reach the target. Interestingly, while the first study's implementation of FA had a consistent behaviour across all environments, +Arc makes FA more sensitive to the visual complexity of the environment in terms of speed, somewhat similarly to fSOUS. Further investigation is required to find the reason behind this increased sensitivity.
+
++Trail improved the speed of target acquisition through the increased persistence of the cue. However, we also found the participants to be less successful at acquiring out-of-view targets. The interview data revealed that the participants did not find the trail comfortable to use. They suggested that the trail should be smaller and more translucent so they could better see their surroundings. Based on bSOUS and fSOUS being faster than FA in the first study and the improvement brought by +Trail, we suggest that a more visible cue may reduce target acquisition time. A more visible cue also makes target acquisition trajectories more direct.
+
+Overall, this study suggests that placing an interface completely inside a user's far-peripheral vision provides the best balance between obtrusiveness and visual persistence. However, if the user has to use a low-FoV HMD that is incapable of displaying beyond the user's mid-peripheral vision, something similar to +Trail may be useful to increase their speed of target acquisition. One must be aware, however, that a +Trail technique requires fine-tuning of the cue to ensure an optimal experience.
+
+§ 7.3 WHAT IS A GOOD TECHNIQUE?
+
+Upon analyzing our results, it appears that each technique could be useful in different contexts. However, as reflected in our research questions, we maintain that our set of desirable characteristics of off-screen target cuing holds in most cases: (1) make the user more successful at target acquisition while inducing low cognitive load at the "optimal" speed, (2) not be impacted by the visual complexity of a scene, and (3) provide a good subjective user experience. It is important to note that "optimal" speed does not always mean "fastest." Rather, this may be the speed that will lead to the best user experience. For example, in competitive gaming, the highest speed is likely better, whereas in a VR museum exhibit, a slower speed might be desirable to allow the user to observe the environment along a trajectory. We think that bSOUS and fSOUS are appropriate for maximizing speed, whereas FA may be viable for reducing and controlling the speed.
+
+§ 8 CONCLUSION
+
+We conducted a two-part study in which participants selected out-of-view targets with the aid of visual cuing techniques. In the first experiment, we found that bSOUS and fSOUS performed reasonably when compared to FA. However, fSOUS has a significant weakness: the cue tends not to be sufficiently salient against the environments. Overall, the first experiment demonstrates that a technique whose interface is completely inside the user's far-peripheral vision can be effective. Although SOUS has a simple and straightforward design, far-peripheral techniques like SOUS were impossible to evaluate until recently due to the limited FoV of commodity HMDs. In the second experiment, we modified FA to make cue trajectories travel relative to the user (+Arc) and to make the cue more noticeable (+Trail), so that FA could be less obtrusive and more persistent like SOUS. Overall, +Arc decreased the effectiveness of FA, while +Trail increased the speed of the participants while reducing the chance of acquiring the target. This means that decreasing obtrusiveness does not necessarily lead to desirable behaviour, and increasing persistence can reduce target acquisition time. We suggest that a technique that exclusively uses the user's far-peripheral vision offers the best balance between obtrusiveness and persistence. However, if an HMD cannot effectively display beyond the user's mid-peripheral vision, something akin to +Trail may be useful. Our study shows that there is no one-size-fits-all technique. When designing a technique or modifying an existing technique, we must consider multiple competing factors.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b36665dec35888ac5d39b5e41446717345c466c
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,339 @@
+# CODA: A Design Assistant to Facilitate Specifying Constraints and Parametric Behavior in CAD Models
+
+Category: Research
+
+## Abstract
+
+We present CODA, an interactive software tool, implemented on top of Autodesk Fusion 360, that helps novice modelers design well-constrained parametric 2D and 3D CAD models. We do this by extracting relations that are present in the design but not yet enforced by constraints. CODA presents these relations as suggestions for constraints and clearly communicates the implications of every constraint by rendering animated visualizations in the CAD model. CODA also suggests dynamic alternatives for static constraints present in the model. These suggestions help CAD novices work towards well-constrained models that are easy to adapt. Such well-constrained models are convenient to modify and simplify the process of making design alternatives to accommodate changing needs or the specific requirements of machinery for fabricating the design.
+
+Index Terms: Human-centered computing-Interactive systems and tools; Human-centered computing-Visualization-Visualization techniques
+
+## 1 INTRODUCTION
+
+The maker movement is largely driven by a community of DIY enthusiasts building on each other's work by sharing digital versions of artefacts through online platforms, such as Thingiverse${}^{1}$ or Youmagine${}^{2}$ [8, 33]. Makers frequently do this by making rough edits to mesh files or by starting from scratch while using concepts from existing designs [9]. A more convenient way to adapt an existing model is to adjust parameters in a well-constrained parametric model. Well-constrained parametric models allow for dimensional adjustments and personal and aesthetic refinements. Additionally, such changes can also make models compatible with machines and materials available in other labs, for example, by compensating for the shrinkage of ABS 3D-printing filament or the kerf of a laser cutter. González-Lluch and Plumed [10] show, however, that even for engineering students it is challenging to verify whether 3D models are fully constrained and thus behave as desired when changes are made. Especially under time pressure, trained CAD modelers produce 3D models that are hard to modify because of errors or missing constraints [35]. For makers this can be even more challenging, as many do not have formal training in CAD modeling [22].
+
+To empower and encourage makers to design and share parametric models, Thingiverse launched the Customizer platform [29], which allows for making adjustments to parametric 3D models using simple GUI controls, such as sliders and drop-down menus. However, Thingiverse Customizer requires making the entire 3D model in a CSG scripting language [34], which is significantly different from the feature-based modeling approach supported by popular 3D CAD environments [15], such as Autodesk Fusion and SolidWorks. An analysis of Thingiverse in 2015 shows that only a small fraction of designs (1.0%) are compatible with the Customizer [1]. To help users specify parametric models, recent versions of 3D CAD environments automatically add basic constraints to 2D sketches. AutoCAD's auto-constraint feature [19] and BricsCAD's Auto Parameterize functionality [12] go even further and automatically inject constraints into models. As there are multiple valid alternatives to constrain models, fully automatically introducing constraints does not always lead to the desired behavior of the model. For example, the hole for the charging cable of a phone holder (Figure 3) is positioned in the center of the design. It is up to the designer's preference whether to constrain this hole at a fixed distance or a ratio from the top or the bottom of the phone holder. Therefore, the system presented in this paper takes a different approach and suggests constraints and explains their differences and implications for the model.
+
+
+
+Figure 1: Using CODA to correctly specify parametric behavior in a 3D CAD model of a laptop stand. (a) A 2D sketch of the cross-sectional profile for the laptop stand designed by the user. (b) CODA lists relations that are present in the design but not yet enforced by constraints. (c) Accepting CODA's suggestions allows novice modelers to quickly transition to a well-constrained model ready for fabrication.
+
+In this paper, we present CODA, a Constrained Design Assistant. CODA is an interactive software tool that helps novice CAD modelers design well-constrained 2D and 3D CAD models. Such well-constrained models are convenient to modify, which simplifies the process of making design alternatives or adjusting models for fabrication with different machinery. First, our system makes users aware of relations that are present in the current design but are not yet enforced by constraints, such as edges being parallel without a parallel constraint. Second, CODA reconsiders static constraints added by the user and suggests more dynamic alternatives to make the design flexible to changes. CODA also helps communicate the meaning and implications of all suggested constraints by animating the model and demonstrating their implications (Figure 2).
+
+---
+
+${}^{1}$ https://www.thingiverse.com/
+
+${}^{2}$ https://www.youmagine.com/
+
+---
+
+The core contribution of this paper is CODA, an interactive software assistant to aid novice modelers in making well-constrained CAD models that are robust to changes. More specifically, we contribute:
+
+1. A computational approach for extracting relations in a model which are not yet enforced by constraints.
+
+2. A set of novel interactive animations to clearly communicate the impact of constraints to novice users.
+
+## 2 RELATED WORK
+
+This work draws from, and builds upon prior work on facilitating CAD modeling and work related to sharing models for fabrication.
+
+### 2.1 Facilitating CAD Modeling
+
+3D CAD modeling environments offer hundreds of features. How these features are used and combined determines the flexibility, adaptability, and ultimately the reusability of 3D models [5, 11]. Research shows, however, that even models designed by students with formal 3D modeling training are often hard to reuse and adapt, especially when designed under time pressure [35]. In line with these observations, González-Lluch and Plumed [10] show that engineering students have a hard time reasoning about whether profiles are over- or under-constrained. When considering modeling within the maker community, an emerging group of people learn 3D modeling practices by themselves through online resources [22]. These novice modelers could significantly benefit from high-quality models that are easy to adapt, as they frequently make new artefacts by starting from existing designs [9].
+
+To lower the barrier to getting started with 2D and 3D modeling, various tools have been developed that specifically target novices, such as Autodesk Tinkercad${}^{3}$ and BlocksCAD${}^{4}$. However, several studies with casual makers [16, 30], children [17], and students in special-education schools [4] show that 3D modeling is still challenging. To facilitate modeling further, the Chateau [18] system suggests modeling operations based on simple sketch gestures by the user. Rod et al. [37] present various novel interaction techniques to further facilitate 3D modeling on touch-screen devices.
+
+Instead of adapting CAD environments and making custom modeling operations for novices, researchers have also explored how to facilitate the process of learning a new CAD environment. GamiCAD [27] and CADament [28] gradually introduce sketching and modeling operations using gamification techniques to lower the barrier and keep novices motivated to continue learning new aspects. Alternatively, Blocks-to-CAD [24] shows how to gradually introduce 3D modeling operations in sandbox games, such as Minecraft, to introduce newcomers to the basics of CAD modeling. Additionally, recent research shows how modeling strategies from experts can be modeled, analyzed, and compared to provide guidance for other users during modeling sessions [6].
+
+Instead of embedding expert knowledge in software systems, systems can also help bring novices into contact with experts when they face issues with 3D modeling. MicroMentor [20], for example, makes it possible for novices to request one-on-one help for specific issues. In contrast, the Maestro [7] system makes educators in workshops aware of students' progress and common challenges as they occur. Although experts typically provide more nuanced answers to the various challenges novices face, experts often need an incentive to help other users and first need to get familiar with the specific problem the user is facing [20].
+
+Several techniques have been developed that specifically aim to improve the adaptability and reusability of models by facilitating the specification of parametric behaviour. Commercial feature-based CAD modeling tools support, for example, snapping interaction techniques to ease and improve precision in 2D sketches, such as centering a point in the middle of a line or sketching two perpendicular lines. Leveraging this snapping functionality oftentimes automatically fixes the relation by injecting the associated constraints into the 2D sketch. AutoConstraint [19] takes a different approach and adds constraints to a completed sketch until it is fully constrained. Closest to our work is the Auto Parameterize [12] feature of BricsCAD${}^{5}$, which automatically converts all static dimensions of a 3D model to algebraic equations to facilitate scaling and adapting the model. However, there are always multiple valid alternatives to constrain models, and fully automatically introducing constraints does not always lead to the desired behavior of the model. We therefore take a different approach and present various geometry constraints and algebraic relations that could be applied to the model. CODA communicates the implications of all suggestions using in-context animations to allow novice users to make informed decisions.
+
+Also related to our work are computational approaches to reverse-engineering CAD models from mesh models by extracting modeling features [36, 41]. Several systems also present algorithms to detect and extract geometry constraints in mesh models using numerical methods for constrained fitting [2] and techniques for detecting repeating patterns [25] and symmetries [26].
+
+### 2.2 Sharing Models for Fabrication
+
+Over the past decade, digital fabrication has become accessible mainly via public maker labs and affordable digital fabrication equipment [32]. Shewbridge et al. [39] report that households are interested in replacing, modifying, customizing, repairing, or replicating household objects using digital fabrication machinery. However, starters frequently need help from more experienced users to translate ideas into 3D models. This is often done via drawings, photographs, and spoken language [39]. While platforms such as Upwork${}^{6}$ and Cad Crowd${}^{7}$ are available to outsource 3D modeling work, they require additional expenses.
+
+Instead of designing CAD models from scratch, makers often adapt or combine existing 3D models found on public repositories, such as Thingiverse [4, 9, 16]. This process can be challenging as many users only share triangular mesh file formats (STL) [1]. While users can request changes to models through the comments section, studies show that only 32% of such requests are granted [1]. To empower novice CAD modelers to adapt models themselves, Thingiverse introduced the Customizer feature [29], a plugin that exposes GUI controls to adjust the parameters of models designed with the OpenSCAD [34] scripting language. While the Thingiverse Customizer is highly popular [8, 33], only a small portion (3.7%) of the 3D models available on the platform are modeled in OpenSCAD, and only 1% are compatible with the Customizer [1]. Hudson et al. [16] observe that modeling in OpenSCAD is challenging and significantly different from the feature-based parametric modeling environments traditionally used by CAD modelers.
+
+In contrast to these efforts, CODA guides and stimulates novice modelers in making well-constrained models in popular feature-based CAD modeling environments. Well-constrained parametric models are convenient to adapt as they represent a family of alternative models [3].
+
+---
+
+${}^{5}$ https://www.bricsys.com/
+
+${}^{6}$ https://www.upwork.com/
+
+${}^{7}$ https://www.cadcrowd.com/
+
+${}^{3}$ https://www.tinkercad.com/
+
+${}^{4}$ https://www.blockscad3d.com/
+
+---
+
+
+
+Figure 2: CODA's animations show the implications of suggested constraints. (a) A suggestion for constraining the two tabs to have the same height. (b) A suggestion for overwriting a constraint to ensure the slot at the bottom is always centered. (c) A suggestion for making the front edges collinear. (All three figures are edited to visualize the animation.)
+
+## 3 SYSTEM OVERVIEW
+
+This section gives an overview of CODA's core features. We start with a short walkthrough demonstrating how our system can be used in a real modeling workflow. Afterwards, we discuss CODA's features in more detail.
+
+### 3.1 Walkthrough
+
+This walkthrough demonstrates the design process Emily, a novice modeler in Autodesk Fusion 360, follows to design a laptop stand that can be laser cut (Figure 1c). During this process, CODA offers support to make a laptop stand that is well-constrained, and easy to adapt and scale to other laptops or devices (Figure 1b).
+
+As shown in Figure 1a, Emily starts by sketching the 2D cross-sectional profile for the laptop stand. She adds dimensions to the sketch to fit the size of her laptop. While sketching, CODA informs Emily that the slot and tabs are 5 mm in size and asks whether these features should always have the same dimension. When hovering over this suggestion, CODA animates the model by resizing these features at the same time to demonstrate the effect of the suggested constraint (Figure 2a). Emily accepts the suggested constraint and CODA replaces the static dimension constraints with dimensions that share the same value (variable).
+
+CODA also notices that the slot at the bottom is currently positioned in the center but not constrained as such. Therefore, the system suggests replacing the dimension that offsets the slot from the left edge with a geometry constraint ensuring that the bottom edges on both sides of the slot are always equal. Again, Emily accepts the suggestion after inspecting the animation to understand the implications of the constraint (Figure 2b).
+
+Next, Emily notices a suggestion for making the two vertical edges at the right of the profile collinear. When hovering over the suggestion, the animation informs her that if the size of the slanted edge of the stand were to change, the bottom of the laptop stand would not yet adjust accordingly (Figure 2c). Emily accepts the suggestion to make the two edges collinear as she prefers a laptop stand that is well aligned. CODA offers more relevant suggestions to improve the constraints in this cross-sectional profile, which Emily can accept as desired. Examples include making all three sides equally wide (uniform thickness), relating the widths of the two tabs, and suggestions related to the positioning of the two tabs.
+
+Later in the design process, when extruding the profiles 5 mm, CODA also notices that this extrusion depth equals the size of the tabs and slots and suggests creating a constraint. When the final laptop stand is finished, Emily can easily adjust the stand to fit other laptops or adjust the material thickness to fabricate it with a different material. She also decides to make the model available on Thingiverse as it is versatile and robust to changes.
+
+### 3.2 Extracting Relations
+
+In order to suggest constraints, CODA continuously extracts the following four types of relations in CAD models [38]:
+
+1. Ground relations: relations with respect to the reference coordinate system, such as a line being horizontal or vertical in a 2D sketch (Figure 3a).
+
+2. Geometric relations: relations that define known geometric alignments, such as tangency, collinearity, parallelism, perpendicularity, and coincidence of points (Figure 3b).
+
+3. Dimensional relations: relations between sizes or offsets between elements, such as edges with an equal length or a point in the middle of an edge (Figure 3c).
+
+4. Algebraic relations: restrictions on the model in the form of mathematical expressions, for example, an edge being twice as long as another edge (Figure 3d).
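To give a flavour of how such relations could be detected, the geometric checks reduce to simple vector tests with a tolerance. The following is our own illustrative sketch (not CODA's actual implementation), for 2D line segments given as endpoint pairs:

```python
import math

EPS = 1e-6  # tolerance for treating near-zero cross/dot products as zero

def unit(seg):
    """Unit direction vector of a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    dx, dy = x2 - x1, y2 - y1
    n = math.hypot(dx, dy)
    return dx / n, dy / n

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def geometric_relations(a, b):
    """Unenforced geometric relations between two segments a and b."""
    ua, ub = unit(a), unit(b)
    found = set()
    if abs(cross(ua, ub)) < EPS:           # same (or opposite) direction
        found.add("parallel")
        w = (b[0][0] - a[0][0], b[0][1] - a[0][1])
        if abs(cross(ua, w)) < EPS:        # b's start lies on a's line
            found.add("collinear")
    if abs(dot(ua, ub)) < EPS:             # directions orthogonal
        found.add("perpendicular")
    return found
```

Ground relations follow the same pattern by testing a segment's direction against the sketch's x- and y-axes instead of against another segment.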
+
+While extracting relations, CODA considers parameters of modeling operations as well as attributes of all entities in a sketch (i.e., sketch entities). Sketch entities in Autodesk Fusion 360 include points, lines, circles, ellipses, and arcs. Rectangles, for example, are not sketch entities but profiles, as they consist of multiple lines. While CODA always extracts relations between exactly two attributes, attributes of various types can be related by CODA. Here we can distinguish the following combinations:
+
+- Relations between sketch entities within a single sketch. Within a sketch, all four types of relations are applicable. For example, a line being tangent to a circle or a rectangle having twice the width of the diameter of a circle.
+
+- Relations between sketch entities across different sketches. For these relations, dimensional and algebraic relations are applicable, such as a slot in two different sketches being equal in size.
+
+- Relations between parameters of modeling operations. For these relations, dimensional and algebraic relations are applicable, such as two fillet operations with the same radius, or the depth of an extrusion being equal to or half the radius of a fillet operation.
+
+
+
+Figure 3: Cross-sectional profile of a phone holder with four types of relations annotated: (a) Ground relations. (b) Geometric relations. (c) Dimensional relations. (d) Algebraic relations.
+
+- Relations between a parameter of a modeling operation and a sketch entity. For these relations, dimensional and algebraic relations are applicable, such as the width of a slot in a sketch being equal to the depth of an extrusion.
+
+To avoid overwhelming users with suggestions and to offer suggestions in context, we present relations within a single sketch only when the user is editing the respective sketch. All other relations are presented outside the 2D sketching mode, in 3D modeling mode.
+
+As the number of algebraic relations is potentially very large, especially when considering all pairs of sketch entities and parameters of modeling features, CODA first extracts algebraic relations that are frequently present in CAD models. Studies by Mills et al. [31] and Langbein et al. [25] show that the most common relations in CAD models include equal radii and lengths of edges, congruent faces, and radii and edges being half, one third, and one fourth in length. CODA thus presents algebraic relations between all pairs of sketch entities and parameters of modeling operations (and vice versa) that are equal, half, one third, or one fourth in value. To help users create well-constrained models with less common relations, the next section covers CODA's features for specifying custom algebraic relations between specific pairs of entities.
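The pairwise search over these common ratios can be sketched as follows. This is our own illustration; the dimension names and the tolerance are hypothetical, not CODA's API:

```python
COMMON_RATIOS = (1.0, 1 / 2, 1 / 3, 1 / 4)  # equal, half, one third, one fourth

def common_ratio(a, b, tol=1e-3):
    """Return r from COMMON_RATIOS such that a ~= r * b, or None."""
    for r in COMMON_RATIOS:
        if abs(a - r * b) <= tol * max(abs(b), 1.0):
            return r
    return None

def suggest_relations(dims):
    """dims: {name: value}; returns (x, y, r) triples meaning value(x) ~= r * value(y)."""
    names = list(dims)
    out = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            for p, q in ((x, y), (y, x)):  # try the ratio in both directions
                r = common_ratio(dims[p], dims[q])
                if r is not None:
                    out.append((p, q, r))
                    break                  # one suggestion per pair is enough
    return out
```

Restricting the search to a handful of common ratios keeps the number of candidate suggestions linear in the number of matching pairs rather than exploding over arbitrary constants.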
+
+### 3.3 Custom Algebraic Constraints
+
+To help users identify and specify custom algebraic constraints (Figure 4a), CODA supports an in-depth search between two entities. When starting this feature, the user selects two entities. These can be sketch entities (e.g., points, lines, circles, ellipses, arcs), dimensions, or modeling features (e.g., extrusion or fillet operations). CODA then calculates mathematical expressions between all pairs of attributes of the two entities, independent of whether the relation is a common ratio. As shown in Figure 4b, CODA suggests a relation for every pair of attributes, both as ratios ($y = x \cdot cte$) and sums ($y = x + cte$).
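The in-depth search can be illustrated by how the constant $cte$ would be derived for a selected pair of attribute values. This is a hypothetical sketch; the function and parameter names are ours, not CODA's:

```python
def candidate_constraints(x, y, name_x="x", name_y="y"):
    """Propose y = x * cte and y = x + cte for two attribute values."""
    suggestions = []
    if abs(x) > 1e-9:                    # ratio is undefined when x is zero
        suggestions.append(f"{name_y} = {name_x} * {y / x:g}")
    suggestions.append(f"{name_y} = {name_x} + {y - x:g}")
    return suggestions
```

For example, with attribute values 10 and 25, both a ratio relation (cte = 2.5) and a sum relation (cte = 15) would be offered, leaving the choice of which relation matches the design intent to the user.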
+
+### 3.4 Communicating and Explaining Constraints
+
+To clearly communicate the meaning and implications of suggested constraints, CODA creates animated visualizations for all suggestions. Hovering a suggested constraint animates the model and communicates the meaning of the constraint in the context of the model. Explaining a constraint using animations on top of the model, in contrast to generic visualizations, textual explanations, or symbols, makes it convenient for users to understand its implications in the model. We developed two classes of animations to visualize the behavior of the four supported types of relations:
+
+
+
+Figure 4: (a) This wrench requires custom constraints to ensure it scales appropriately with respect to the nut diameter. (b) CODA assists users in specifying such custom relations.
+
+- For ground and geometric relations, we animate the variability currently present in the model when the suggested ground or geometric constraint would not be added. For example, hovering the suggested collinear constraint in Figure 2c repeatedly moves the two edges that are collinear but not yet constrained as such. The animation makes the user aware that these two edges can still move with respect to each other. Using the "yes" and "no" buttons, the user specifies whether the demonstrated movement of these lines is allowed. "no" adds the collinear constraint while "yes" discards the suggestion without further action. Figure 5a gives an overview of how CODA animates all geometric and ground relations.
+
+- For dimensional and algebraic relations, we animate how two entities would behave when their positions or sizes would be constrained to each other. For example, hovering the suggested dimensional constraint in Figure 2a and Figure 2b shows how two lines would scale when they are constrained to have equal dimensions. For these relations, CODA asks whether the suggested relations fit the design. "yes" adds the respective dimensional or algebraic constraint, while "no" discards the suggestion. Figure 5b gives an overview of how CODA animates all dimensional and algebraic relations.
+
+### 3.5 Invalidated Relations
+
+It is important to note that CODA only suggests constraints to enforce relations that are present in the current version of the model. Oftentimes while testing the robustness of a model, the model breaks because of constraints that are still missing. Figure 6a shows how the symmetry in the laptop stand breaks when changing the width because of a missing constraint. These missing constraints will not be suggested by CODA, as the relations are no longer present in the broken model. To resolve this, CODA continuously presents a message communicating how many relations were invalidated by the last modeling operation (Figure 6b). As shown in Figure 6c, the user then gets the option to revert the last action that broke the model and see the list of invalidated relations that could be added to further constrain the model. This iterative workflow allows users to break relations in a model and then see which constraints can be added to enforce those relations.
+
+
+
+Figure 5: CODA animates the meaning and implications of suggested constraints. (a) shows an abstraction of animations for demonstrating ground and geometric relations. (b) shows an abstraction of animations for demonstrating dimensional and algebraic relations.
+
+
+
+Figure 6: CODA only suggests constraints to enforce relations that are currently present in the model. (a) Changing parameters can break relations because of missing constraints. (b-c) CODA resolves this by showing a list of constraints that were invalidated by the last modeling operation.
+
+## 4 IMPLEMENTATION
+
+CODA is implemented as a Python plugin for Autodesk Fusion 360${}^{8}$. The concepts and features presented in CODA, however, are not specific to the Fusion 360 environment and could be implemented in other feature-based parametric CAD environments, such as SOLIDWORKS, Rhinoceros 3D, or Autodesk Inventor.
+
+### 4.1 Extracting unconstrained relations
+
+To suggest geometric constraints, CODA continuously checks for the presence of such relations between all pairs of sketch entities, both within a single sketch and across different sketches. Figure 7 gives an overview of how CODA checks whether a geometric relation is present between two sketch entities.
+
+| Relation | Special case | Condition |
+| --- | --- | --- |
+| Perpendicular | | $\bar{a} \cdot \bar{b} = 0$ |
+| Parallel | | $\frac{a_x}{a_y} = \frac{b_x}{b_y}$ |
+| Collinear | | $\bar{a} = n\bar{b}$ for a real $n$ |
+| Tangent (line-circle/ellipse) | ($l_1 = l_2$ for circle) | $\frac{(x - x_1)^2}{l_2^2} + \frac{(mx + c - y_1)^2}{l_1^2} = 1$ has one unique real root |
+| Tangent (circle/ellipse-circle/ellipse) | ($l_{11} = l_{12}$, $l_{21} = l_{22}$ for circle) | $\frac{(x - x_1)^2}{l_{12}^2} + \frac{(y - y_1)^2}{l_{11}^2} = 1$ and $\frac{(x - x_2)^2}{l_{22}^2} + \frac{(y - y_2)^2}{l_{21}^2} = 1$ have one unique common real root |
+| Concentric | | $C(a) = C(b)$ with $C =$ center point |
+| Coincident (point-point) | | $a = b$ |
+| Coincident (point-line) | | $b_y = m b_x + c$ |
+
+Figure 7: CODA uses basic linear algebra methods to check for geometric relations between entities in a sketch.
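The checks in Figure 7 reduce to a few vector operations. The following sketch (function names are our own) shows division-free variants of the perpendicular, parallel, and collinear tests; using the 2D cross product instead of the slope ratio $a_x/a_y = b_x/b_y$ avoids special-casing vertical lines:

```python
def dot(a, b):
    """Dot product of 2D direction vectors a and b."""
    return a[0] * b[0] + a[1] * b[1]

def cross(a, b):
    """2D cross product (z-component) of direction vectors a and b."""
    return a[0] * b[1] - a[1] * b[0]

def is_perpendicular(a, b, tol=1e-9):
    # a . b = 0 (first row of Figure 7)
    return abs(dot(a, b)) < tol

def is_parallel(a, b, tol=1e-9):
    # Equivalent to a_x/a_y = b_x/b_y, but division-free.
    return abs(cross(a, b)) < tol

def is_collinear(seg1, seg2, tol=1e-9):
    """Both end-points of seg2 lie on the line through seg1 (a = n*b)."""
    (p1, q1), (p2, q2) = seg1, seg2
    d = (q1[0] - p1[0], q1[1] - p1[1])
    return all(abs(cross(d, (r[0] - p1[0], r[1] - p1[1]))) < tol for r in (p2, q2))
```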
+
+---
+
+${}^{8}$ https://www.autodesk.com/products/fusion-360/overview
+
+---
+
+| Sketch entity | Attributes |
+| --- | --- |
+| Points | (x, y) position |
+| Lines | (x, y) position of start, mid, and end point; length of line |
+| Circles | (x, y) position of center; diameter of circle |
+| Ellipses | (x, y) position of center; size along minor and major axis |
+| Arcs | (x, y) position of start, mid, and end point; radius of arc |
+
+Table 1: Attributes of sketch entities considered by CODA to extract dimensional and algebraic relations.
+
+When extracting dimensional and algebraic relations, CODA finds pairs of attributes in a 3D CAD model that are equal, half, one third, and one fourth in value. These are frequently used relations in CAD models according to studies by Mills et al. [31] and Langbein et al. [25]. Note that for these relations, parameter values of modeling operations, such as extrusions and fillets, as well as the positions and sizes of sketch entities are considered. Table 1 gives an overview of the attributes CODA considers per sketch entity for extracting dimensional and algebraic relations.
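A minimal sketch of the attribute extraction in Table 1, using plain dictionaries in place of Fusion 360 API objects (the entity layout and key names are assumptions for illustration):

```python
import math

def entity_attributes(entity):
    """Return the numeric attributes scanned per entity type (cf. Table 1).
    Entities are plain dicts here; in CODA they come from the Fusion 360 API."""
    kind = entity["kind"]
    if kind == "point":
        return {"x": entity["x"], "y": entity["y"]}
    if kind == "line":
        (sx, sy), (ex, ey) = entity["start"], entity["end"]
        return {"start_x": sx, "start_y": sy, "end_x": ex, "end_y": ey,
                "mid_x": (sx + ex) / 2, "mid_y": (sy + ey) / 2,
                "length": math.hypot(ex - sx, ey - sy)}
    if kind == "circle":
        cx, cy = entity["center"]
        return {"center_x": cx, "center_y": cy, "diameter": entity["diameter"]}
    raise ValueError(f"unsupported entity kind: {kind}")
```

Ellipses and arcs would follow the same pattern with their Table 1 attributes.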
+
+For CODA to suggest only relations that are not yet enforced by constraints, our algorithm needs to take into account constraints already present in CAD models. The Fusion 360 API, however, only exposes constraints that are explicitly present in the CAD model, while sketch entities can also be implicitly constrained by other constraints. The next subsection discusses how we extract those implicit constraints to ensure CODA does not offer such redundant suggestions.
+
+### 4.2 Extracting implicit constraints present in the model
+
+Fusion 360's API does not provide access to the constraint graph and only exposes constraints that are explicitly added by the user or through the API. Besides these constraints, however, other implicit constraints can be present in a sketch. For example, two lines with a parallel constraint to a third line are always parallel to each other without that parallel constraint being present. CODA needs to be aware of these implicit constraints to avoid suggesting constraints that are already present in the model (Section 4.1) as well as to prevent over-constraining models (Section 4.3).
+
+To extract these implicit constraints, we re-implemented and extended the technique of Juan-Arinyo and Soto [21]. This technique requires all constraints to be expressed as either constrained distance (CD) sets, constrained angle (CA) sets, or constrained perpendicular distance (CH) sets. CD sets include points between which all distances are constrained, CA sets consist of line segments between which all angles are constrained, and CH sets consist of a point for which the perpendicular distance to a line segment is constrained. We convert all length/size constraints in Fusion 360 to constrained distance (CD) sets, angle constraints to constrained angle (CA) sets, and offset constraints to constrained perpendicular distance (CH) sets. For ground and geometric constraints in Fusion 360 we use the following conversion:
+
+- Horizontal/vertical: We add a constrained angle (CA) set representing an angle of $0^{\circ}$ between the line segment and the x- or y-axis.
+
+- Perpendicular: We add a constrained angle (CA) set representing an angle of $90^{\circ}$ between the two line segments.
+
+- Parallel: We add a constrained angle (CA) set representing an angle of $0^{\circ}$ between the two line segments.
+
+- Collinear: For all pairs of end-points of the two line segments, we add a constrained perpendicular distance (CH) set with a distance of 0.
+
+- Concentric: We add a constrained distance (CD) set, representing a distance of 0 between the mid-points of the two concentric circles.
+
+- Coincident (point-point): We add a constrained distance (CD) set, representing a distance of 0 between the coincident points.
+
+- Coincident (point-line): We add a constrained perpendicular distance (CH) set, representing a distance of 0 between the point and the line.
+
+- Tangent (circle/ellipse-line): We add a constrained angle (CA) set, representing an angle of $90^{\circ}$ between the line segment and the radius of the circle at the tangency point.
+
+- Tangent (circle/ellipse-circle/ellipse): We add a constrained angle (CA) set, representing an angle of $0^{\circ}$ between the radii of the two circles at the tangency point.
+
+Once all Fusion constraints are converted to CD, CA, and CH sets, we compute the transitive closure of the constrained angle (CA) sets and use the 20 rules presented by Juan-Arinyo and Soto [21] to merge constraints. The resulting CD, CA, and CH sets reflect the implicit constraints. When two sketch entities have a relation that is not yet enforced by an explicit constraint, CODA can now check whether these entities are implicitly constrained using the following rules:
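The closure step can be illustrated with a simplified merge rule for CA sets alone: two CA sets that share a line segment determine all angles between their members and can therefore collapse into one set. This sketch is ours and omits the CD and CH cases and the full 20-rule system of Juan-Arinyo and Soto [21]:

```python
def close_ca_sets(ca_sets):
    """Transitive closure of constrained-angle (CA) sets: repeatedly merge
    any two sets that share a line segment, since a shared segment fixes
    all angles between the members of both sets. Simplified sketch only."""
    sets = [set(s) for s in ca_sets]
    merged = True
    while merged:
        merged = False
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if sets[i] & sets[j]:  # a shared segment links the two sets
                    sets[i] |= sets[j]
                    del sets[j]
                    merged = True
                    break
            if merged:
                break
    return sets
```

For example, CA sets {a, b} and {b, c} merge into {a, b, c}: the angle between a and c is implicitly constrained even though no explicit constraint relates them.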
+
+- Horizontal/vertical: Implicitly constrained if a constrained angle (CA) set exists between the line segment and a line segment representing the x- or y-axis.
+
+- Perpendicular/parallel: Implicitly constrained if a constrained angle (CA) set exists representing both line segments or if a constrained distance (CD) set exists representing the four points of the two line segments.
+
+- Collinear: Implicitly constrained if a constrained perpendicular distance (CH) set exists representing both line segments or if a constrained distance (CD) set exists representing the four points of the two line segments.
+
+- Concentric: Implicitly constrained if a constrained distance (CD) set exists representing the center points of the two circles.
+
+- Coincident point-point: Implicitly constrained if a constrained distance (CD) set exists representing the two points.
+
+- Coincident point-line: Implicitly constrained if a constrained perpendicular distance (CH) set exists representing the point and the line or if a constrained distance (CD) set exists representing the point and both points of the line segment.
+
+- Tangent circle/ellipse-line: Implicitly constrained if a constrained angle (CA) set exists representing the line segment and a radius (line segment) passing through the tangency point. Alternatively, if a constrained distance (CD) set exists including the two points of the line segments and the two points of the radius passing through the tangency point.
+
+- Tangent circle/ellipse-circle/ellipse: Implicitly constrained if a constrained angle (CA) set exists representing the radii of both circles passing through the tangency point. Alternatively, if a constrained distance (CD) set exists including the points of the four line segments passing through the tangency point.
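As an example of how these rules translate into queries over the closed sets, the following hypothetical helper implements the perpendicular/parallel rule; the segment and point identifiers are illustrative, not CODA's data model:

```python
def parallel_implied(ca_sets, cd_sets, seg_a, seg_b):
    """True if two line segments are already implicitly perpendicular or
    parallel: either one CA set contains both segments, or one CD set
    contains all four of their end-points (a rigid sub-structure).
    Sketch of the Section 4.2 rule; identifiers are illustrative."""
    if any(seg_a["id"] in s and seg_b["id"] in s for s in ca_sets):
        return True
    points = {seg_a["p1"], seg_a["p2"], seg_b["p1"], seg_b["p2"]}
    return any(points <= s for s in cd_sets)

# Two example segments with hypothetical ids and end-point names.
a = {"id": "a", "p1": "A1", "p2": "A2"}
b = {"id": "b", "p1": "B1", "p2": "B2"}
```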
+
+### 4.3 Removing dimensions to avoid over-constraining models
+
+Accepting suggested constraints oftentimes requires removing existing constraints present in the model. In the sketch in Figure 8a-left, for example, CODA suggests constraining the lengths of the lines so that they are always half the length of each other. To accept this constraint, CODA replaces the static dimensional constraint with the dynamic constraint shown in Figure 8a-right. However, when both dimensions are implicitly constrained as shown in Figure 8b-left, CODA needs to remove one of the other constraints before adding the suggested constraint shown in Figure 8b-right. For CODA to know which explicit constraints are responsible for every implicit constraint, we keep track of all explicit constraints while merging CD, CA, and CH sets in the algorithm explained in Section 4.2. When multiple explicit constraints are responsible for an implicit constraint (Figure 8b), CODA shows multiple suggestions and communicates their differences through the animated visualizations.
+
+
+
+Figure 8: When accepting suggested constraints CODA oftentimes (a) overwrites existing constraints or (b) removes existing constraints.
+
+### 4.4 Rendering animations in CAD models
+
+When hovering suggested constraints, CODA previews the implications of a constraint by animating features in the CAD model (Figure 5). When the animation requires lines to tilt, we continuously rotate the line between $-5^{\circ}$ and $+5^{\circ}$. When changes in size are required, we apply a scaling that transitions between half and double the size of the entity. Finally, when animating a parameter of a modeling operation, such as an extrusion, we alternate between half and double of the original value. Animations are updated every 100 ms and are repeated as long as the user hovers the suggested constraint.
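The oscillating animation values can be sketched as a generator that sweeps back and forth between the two extremes, producing one value per 100 ms tick; the step count and function name are assumptions for illustration:

```python
import itertools

def oscillate(lo, hi, steps=20):
    """Yield values sweeping back and forth between lo and hi, one value
    per animation tick (CODA updates every 100 ms). Sketch only; the
    number of steps per sweep is an assumed parameter."""
    forward = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    # Repeat forward then backward (excluding endpoints to avoid stutter).
    for v in itertools.cycle(forward + forward[-2:0:-1]):
        yield v

# Tilt preview: angles between -5 and +5 degrees.
tilt = oscillate(-5.0, 5.0)
# Scale preview: factors between half and double the original size.
scale = oscillate(0.5, 2.0)
```

In the plugin, each yielded value would be written to the sketch entity or modeling parameter before the next 100 ms timer tick.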
+
+As CODA directly manipulates parameters in the original design, we make a copy of all the attributes to be able to restore them afterwards. For some sketch entities, additional temporary construction lines are required to realize the animation. For example, to vary the angle between two parallel lines, Fusion 360 does not allow temporarily adding an angular constraint between the two lines as they are exactly parallel. CODA therefore first adds a temporary construction line perpendicular to both lines and varies the angle between the perpendicular line and both parallel lines during the animation (Figure 5a).
+
+## 5 LIMITATIONS AND FUTURE WORK
+
+Although CODA offers many novel opportunities to facilitate making well-constrained CAD models, our work also has several limitations which reveal many exciting directions for future research.
+
+First, future research could study how novices in CAD modeling use CODA during their design workflow. While CODA offers suggestions in real-time while modeling, our tool could also be used after a CAD model is finished, or at a later time, to make an existing model more flexible and adaptable. We believe this could be a major asset as it allows novices to further improve CAD models shared via platforms such as Thingiverse, and thus distribute the workload across the community. Furthermore, it would be interesting to investigate whether CODA also provides value to expert modelers and to students learning CAD modeling. For example, the suggested constraints can make students aware of available CAD features and how they are composed.
+
+Second, while CODA offers suggestions for the most common constraints in CAD, more advanced suggestions can be supported in the future. For geometric relations, this includes identifying patterns, such as symmetry or repetition in models, and offering suggestions to convert these patterns into more adaptable features. For algebraic constraints, the current version of CODA supports frequently used ratios in CAD models according to Mills et al. [31] and Langbein et al. [25]. Other ratios commonly used in design, such as the golden ratio or the Lichtenberg ratio, could be supported in the future. Future versions could also offer suggestions for relations that are nearly present in the model, such as lines that are almost perpendicular or almost equal in length. Similar approaches have been explored for improving mesh models [23].
+
+Third, to further facilitate adapting models, future versions of CODA could offer suggestions for other types of constraints, such as limiting the range in which parameters and dimensions can change without breaking the model. While computing valid ranges of features has been investigated for 2D sketches [13, 14, 40], more research is needed to compute valid ranges for all features in 3D to ensure the integrity of the model when changing parameters.
+
+## 6 CONCLUSION
+
+In this paper we presented CODA, an interactive software tool that helps novice modelers to design well-constrained parametric 2D and 3D CAD models. In order to do so, CODA contributes a computational approach for extracting and suggesting relations in a model that are not yet enforced by constraints. CODA also clearly communicates the meaning and implications of suggested constraints using novel animated visualizations rendered in the CAD model. By facilitating the creation of well-constrained parametric designs we hope to further democratize CAD and encourage users to upload high quality parametric models to public sharing repositories, such as Thingiverse.
+
+## REFERENCES
+
+[1] C. Alcock, N. Hudson, and P. K. Chilana. Barriers to using, customizing, and printing 3D designs on Thingiverse. In Proceedings of the 19th International Conference on Supporting Group Work, GROUP '16, p. 195-199. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2957276.2957301
+
+[2] P. Benko. Constrained fitting in reverse engineering. Computer Aided Geometric Design, 19(3):173-205, Mar. 2002. doi: 10.1016/s0167-8396(01)00085-1
+
+[3] B. Bettig and J. Shah. Derivation of a standard set of geometric constraints for parametric modeling and data exchange. Computer-Aided Design, 33(1):17-33, Jan. 2001. doi: 10.1016/s0010-4485(00)00058-0
+
+[4] E. Buehler, S. K. Kane, and A. Hurst. ABC and 3D: Opportunities and obstacles to 3D printing in special education environments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, p. 107-114. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2661334.2661365
+
+[5] J. D. Camba, M. Contero, and P. Company. Parametric CAD modeling: An analysis of strategies for design reusability. Computer-Aided Design, 74:18-31, May 2016. doi: 10.1016/j.cad.2016.01.003
+
+[6] M. Chang, B. Lafreniere, J. Kim, G. Fitzmaurice, and T. Grossman. Workflow graphs: A computational model of collective task strategies for 3d design software. In Proceedings of Graphics Interface 2020, GI 2020, pp. 114 - 124. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2020. doi: 10.20380/GI2020.13
+
+[7] V. Dziubak, B. Lafreniere, T. Grossman, A. Bunt, and G. Fitzmaurice. Maestro: Designing a system for real-time orchestration of 3D modeling workshops. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST '18, p. 287-298. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3242587.3242606
+
+[8] C. M. Flath, S. Friesike, M. Wirth, and F. Thiesse. Copy, transform, combine: Exploring the remix as a form of innovation. Journal of Information Technology, 32(4):306-325, Dec. 2017. doi: 10.1057/s41265-017-0043-9
+
+[9] S. Friesike, C. M. Flath, M. Wirth, and F. Thiesse. Creativity and productivity in product design for additive manufacturing: Mechanisms and platform outcomes of remixing. Journal of Operations Management, 65(8):735-752, Apr. 2019. doi: 10.1016/j.jom.2018.10.004
+
+[10] C. González-Lluch and R. Plumed. Are we training our novices towards quality 2D profiles for 3D models? In Advances on Mechanics, Design Engineering and Manufacturing II, pp. 714-721. Springer International Publishing, 2019. doi: 10.1007/978-3-030-12346-8_69
+
+[11] N. W. Hartman. Defining expertise in the use of constraint-based cad tools by examining practicing professionals. The Engineering Design Graphics Journal, 69(1), 2005.
+
+[12] H. Hewitt. New in bricscad® v19: Parametrize. https://blog.bricsys.com/new-bricscad-v19-parametrize/, 2018. Accessed: 2020-11-02.
+
+[13] M. Hidalgo and R. Joan-Arinyo. Computing parameter ranges in constructive geometric constraint solving: Implementation and correctness proof. Computer-Aided Design, 44(7):709-720, July 2012. doi: 10. 1016/j.cad.2012.02.012
+
+[14] C. Hoffmann and K.-J. Kim. Towards valid parametric CAD models. Computer-Aided Design, 33(1):81-90, Jan. 2001. doi: 10.1016/s0010-4485(00)00073-7
+
+[15] C. M. Hoffmann and R. Joan-Arinyo. Parametric modeling. In Handbook of Computer Aided Geometric Design, pp. 519-541. Elsevier, 2002. doi: 10.1016/b978-044451104-1/50022-8
+
+[16] N. Hudson, C. Alcock, and P. K. Chilana. Understanding newcomers to 3D printing: Motivations, workflows, and barriers of casual makers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 384-396. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858266
+
+[17] N. Hudson, B. Lafreniere, P. K. Chilana, and T. Grossman. Investigating how online help and learning resources support children's use of 3D design software. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173831
+
+[18] T. Igarashi and J. F. Hughes. A suggestive interface for 3D drawing. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, UIST '01, p. 173-181. Association for Computing Machinery, New York, NY, USA, 2001. doi: 10.1145/502348.502379
+
+[19] Autodesk Inc. Autoconstrain (command). https://knowledge.autodesk.com/support/autocad/learn-explore/caas/CloudHelp/cloudhelp/2016/ENU/AutoCAD-Core/files/GUID-2E53F@A6-640C-4B3A-A650-18F1A5F781E1-htm.html. Accessed: 2020-11-02.
+
+[20] N. Joshi, J. Matejka, F. Anderson, T. Grossman, and G. Fitzmaurice. Micromentor: Peer-to-peer software help sessions in three minutes or less. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376230
+
+[21] R. Juan Arinyo and A. Soto Riera. A set of rules for a constructive geometric constraint solver. 1995.
+
+[22] K. Kiani, G. Cui, A. Bunt, J. McGrenere, and P. K. Chilana. Beyond "one-size-fits-all": Understanding the diversity in how software newcomers discover and make use of help resources. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300570
+
+[23] I. Kovács, T. Várady, and P. Salvi. Applying geometric constraints for perfecting CAD models in reverse engineering. Graphical Models, 82:44-57, Nov. 2015. doi: 10.1016/j.gmod.2015.06.002
+
+[24] B. Lafreniere and T. Grossman. Blocks-to-cad: A cross-application bridge from Minecraft to 3D modeling. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, UIST '18, p. 637-648. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3242587.3242602
+
+[25] F. C. Langbein, B. I. Mills, A. D. Marshall, and R. R. Martin. Recognizing geometric patterns for beautification of reconstructed solid models. In Proceedings International Conference on Shape Modeling and Applications, pp. 10-19, 2001. doi: 10.1109/SMA.2001.923370
+
+[26] M. Li, F. C. Langbein, and R. R. Martin. Detecting design intent in approximate CAD models using symmetry. Computer-Aided Design, 42(3):183-201, Mar. 2010. doi: 10.1016/j.cad.2009.10.001
+
+[27] W. Li, T. Grossman, and G. Fitzmaurice. Gamicad: A gamified tutorial system for first time autocad users. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, p. 103-112. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2380116.2380131
+
+[28] W. Li, T. Grossman, and G. Fitzmaurice. Cadament: A gamified multiplayer software tutorial system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 3369-3378. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2556954
+
+[29] M. I. LLC. Thingiverse customizer. https://www.thingiverse.com/customizer, 2020. Accessed: 2020-02-19.
+
+[30] C. Mahapatra, J. K. Jensen, M. McQuaid, and D. Ashbrook. Barriers to end-user designers of augmented fabrication. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300613
+
+[31] B. Mills, F. Langbein, D. Marshall, and R. Martin. Estimate of frequencies of geometric regularities for use in reverse engineering of simple mechanical components, 03 2001. doi: 10.13140/RG.2.1.3683.2087
+
+[32] C. Mota. The rise of personal fabrication. In Proceedings of the 8th ACM Conference on Creativity and Cognition, C&C '11, p. 279-288. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2069618.2069665
+
+[33] L. Oehlberg, W. Willett, and W. E. Mackay. Patterns of physical design remixing in online maker communities. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 639-648. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702175
+
+[34] OpenSCAD. Openscad. https://www.openscad.org/, 2020. Accessed: 2020-02-19.
+
+[35] X. Peng, P. McGary, M. Johnson, B. Yalvac, and E. Ozturk. Assessing novice cad model creation and alteration. Computer-Aided Design and Applications, PACE (2), pp. 9-19, 2012.
+
+[36] R. Plumed, P. Varley, and P. Company. Features and design intent in engineering sketches. In Studies in Computational Intelligence, pp. 77-106. Springer Berlin Heidelberg, 2013. doi: 10.1007/978-3-642-31745-3_5
+
+[37] J. Rod, C. Li, D. Zhang, and H. Lee. Designing a 3d modelling tool for novice users. In Proceedings of the 28th Australian Conference on Computer-Human Interaction, OzCHI '16, p. 140-144. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/ 3010915.3010957
+
+[38] J. Shah. Parametric and feature-based CAD/CAM : concepts, techniques, and applications. Wiley, New York, 1995.
+
+[39] R. Shewbridge, A. Hurst, and S. K. Kane. Everyday making: Identifying future uses for 3D printing in the home. In Proceedings of the 2014 Conference on Designing Interactive Systems, DIS '14, p. 815-824. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2598510.2598544
+
+[40] H. A. van der Meiden and W. F. Bronsvoort. A constructive approach to calculate parameter ranges for systems of geometric constraints. In Proceedings of the 2005 ACM Symposium on Solid and Physical Modeling, SPM '05, p. 135-142. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1060244.1060260
+
+[41] J. Vergeest, Y. Song, T. Langerak, et al. Design intent management for design reuse. 2006.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..502185fff110d77e19563525b3f430395898c560
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/1dLDPJeafRZ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,295 @@
+§ CODA: A DESIGN ASSISTANT TO FACILITATE SPECIFYING CONSTRAINTS AND PARAMETRIC BEHAVIOR IN CAD MODELS
+
+Category: Research
+
+§ ABSTRACT
+
+We present CODA, an interactive software tool, implemented on top of Autodesk Fusion 360, that helps novice modelers design well-constrained parametric 2D and 3D CAD models. We do this by extracting relations that are present in the design but not yet enforced by constraints. CODA presents these relations as suggestions for constraints and clearly communicates the implications of every constraint by rendering animated visualizations in the CAD model. CODA also suggests dynamic alternatives for static constraints present in the model. These suggestions assist CAD novices in working towards well-constrained models that are easy to adapt. Such well-constrained models are convenient to modify and simplify the process of making design alternatives to accommodate changing needs or specific requirements of machinery for fabricating the design.
+
+Index Terms: Human-centered computing-Interactive systems and tools; Human-centered computing-Visualization-Visualization techniques
+
+§ 1 INTRODUCTION
+
+The maker movement is largely driven by a community of DIY enthusiasts building on each other's work by sharing digital versions of artefacts through online platforms, such as Thingiverse${}^{1}$ or Youmagine${}^{2}$ [8, 33]. Makers frequently do this by making rough edits to mesh files or by starting from scratch while using concepts from existing designs [9]. A more convenient way to adapt an existing model is to adjust parameters in a well-constrained parametric model. Well-constrained parametric models allow for dimensional adjustments and personal and aesthetic refinements. Additionally, such changes can also make models compatible with machines and materials available in other labs, for example by compensating for shrinkage of ABS 3D printing filament or the kerf of a laser cutter. González-Lluch and Plumed [10] show, however, that even for engineering students it is challenging to verify whether 3D models are fully constrained and thus behave as desired when changes are made. Especially under time pressure, trained CAD modelers produce 3D models that are hard to modify because of errors or missing constraints [35]. For makers this can become even more challenging, as many do not have formal training in CAD modeling [22].
+
+To empower and encourage makers to design and share parametric models, Thingiverse launched a Customizer platform [29] that allows for making adjustments to parametric 3D models using simple GUI controls, such as sliders and drop-down menus. However, Thingiverse Customizer requires making the entire 3D model in a CSG scripting language [34], which is significantly different from the feature-based modeling approach supported by popular 3D CAD environments [15], such as Autodesk Fusion and SolidWorks. An analysis of Thingiverse in 2015 shows that only a small fraction of designs (1.0%) are compatible with the Customizer [1]. To help users in specifying parametric models, recent versions of 3D CAD environments automatically add basic constraints to 2D sketches. AutoCAD's auto-constraint feature [19] and BricsCAD's Auto Parameterize functionality [12] go even further and automatically inject constraints into models. As there are multiple valid alternatives to constrain models, fully automatically introducing constraints does not always lead to the desired behavior of the model. For example, the hole for the charging cable of a phone holder (Figure 3) is positioned in the center of the design. It is up to the designer's preference whether to constrain this hole at a fixed distance or a ratio from the top or the bottom of the phone holder. Therefore, the system presented in this paper takes a different approach: it suggests constraints and explains their differences and implications for the model.
+
+ < g r a p h i c s >
+
Figure 1: Using CODA to correctly specify parametric behavior in a 3D CAD model of a laptop stand. (a) A 2D sketch of the cross-sectional profile for the laptop stand designed by the user. (b) CODA lists relations that are present in the design but not yet enforced by constraints. (c) Accepting CODA's suggestions allows novice modelers to quickly transition to a well-constrained model ready for fabrication.
+
In this paper, we present CODA, a Constrained Design Assistant. CODA is an interactive software tool that helps novice CAD modelers design well-constrained 2D and 3D CAD models. Such well-constrained models are convenient to modify, which simplifies creating design alternatives and making adjustments for fabricating models with different machinery. First, our system makes users aware of relations that are present in the current design but are not yet enforced by constraints, such as edges being parallel without a parallel constraint. Second, CODA reconsiders static constraints added by the user and suggests more dynamic alternatives to make the design flexible to changes. CODA also communicates the meaning and implications of all suggested constraints by animating the model and demonstrating their effects (Figure 2).
+
+${}^{1}$ https://www.thingiverse.com/
+
+${}^{2}$ https://www.youmagine.com/
+
The core contribution of this paper is CODA, an interactive software assistant that aids novice modelers in making well-constrained CAD models that are robust to changes. More specifically, we contribute:
+
+1. A computational approach for extracting relations in a model which are not yet enforced by constraints.
+
+2. A set of novel interactive animations to clearly communicate the impact of constraints to novice users.
+
## 2 Related Work
+
This work draws from and builds upon prior work on facilitating CAD modeling and on sharing models for fabrication.
+
### 2.1 Facilitating CAD Modeling
+
3D CAD modeling environments offer hundreds of features. How these features are used and combined determines the flexibility, adaptability, and ultimately the reusability of 3D models [5, 11]. Research shows, however, that even models designed by students with formal 3D modeling training are often hard to reuse and adapt, especially when designed under time pressure [35]. In line with these observations, González-Lluch and Plumed [10] show that engineering students have a hard time reasoning about whether profiles are over- or under-constrained. Within the maker community, an emerging group of people teach themselves 3D modeling practices through online resources [22]. These novice modelers could significantly benefit from high-quality models that are easy to adapt, as they frequently make new artefacts by starting from existing designs [9].
+
To lower the barrier to getting started with 2D and 3D modeling, various tools have been developed that specifically target novices, such as Autodesk Tinkercad${}^{3}$ and BlocksCAD${}^{4}$. However, several studies with casual makers [16, 30], children [17], and students in special education schools [4] show that 3D modeling is still challenging. To facilitate modeling further, the Chateau [18] system suggests modeling operations based on simple sketch gestures by the user. Rod et al. [37] present various novel interaction techniques to further facilitate 3D modeling on touch-screen devices.
+
Instead of adapting CAD environments and making custom modeling operations for novices, researchers have also explored how to help novices learn a new CAD environment. GamiCAD [27] and CADament [28] gradually introduce sketching and modeling operations using gamification techniques to lower the barrier and keep novices motivated to continue learning. Alternatively, Blocks-to-CAD [24] shows how to gradually introduce 3D modeling operations in sandbox games, such as Minecraft, to introduce newcomers to the basics of CAD modeling. Additionally, recent research shows how modeling strategies from experts can be modeled, analyzed, and compared to provide guidance for other users during modeling sessions [6].
+
Instead of embedding expert knowledge in software systems, software can also help bring novices in contact with experts when facing issues with 3D modeling. MicroMentor [20], for example, makes it possible for novices to request one-on-one help for specific issues. In contrast, the Maestro [7] system makes educators in workshops aware of students' progress and common challenges as they occur. Although experts typically provide more nuanced answers to the various challenges novices face, experts often need an incentive to help other users and first need to get familiar with the specific problem the user is facing [20].
+
Several techniques have been developed that specifically aim to improve the adaptability and reusability of models by facilitating the specification of parametric behavior. Commercial feature-based CAD modeling tools support, for example, snapping interaction techniques to ease and improve precision in 2D sketches, such as centering a point in the middle of a line or sketching two perpendicular lines. Leveraging this snapping functionality oftentimes automatically fixes the relation by injecting the associated constraints into the 2D sketch. AutoConstraint [19] takes a different approach and adds constraints to a completed sketch until it is fully constrained. Closest to our work is the Auto Parameterize [12] feature of BricsCAD${}^{5}$, which automatically converts all static dimensions of a 3D model to algebraic equations to facilitate scaling and adapting the model. However, there are always multiple valid alternatives for constraining a model, and fully automatically introducing constraints does not always lead to the desired behavior. We therefore take a different approach and present various geometry constraints and algebraic relations that could be applied to the model. CODA communicates the implications of all suggestions using in-context animations to allow novice users to make informed decisions.
+
Also related to our work are computational approaches to reverse engineering CAD models from mesh models by extracting modeling features [36, 41]. Several systems also present algorithms to detect and extract geometry constraints in mesh models using numerical methods for constrained fitting [2] and techniques for detecting repeating patterns [25] and symmetries [26].
+
### 2.2 Sharing Models for Fabrication
+
Over the past decade, digital fabrication has become accessible mainly via public maker labs and affordable digital fabrication equipment [32]. Shewbridge et al. [39] report that households are interested in replacing, modifying, customizing, repairing, or replicating household objects using digital fabrication machinery. However, starters frequently need help from more experienced users to translate ideas into 3D models. This is often done via drawings, photographs, and spoken language [39]. While platforms such as Upwork${}^{6}$ and Cad Crowd${}^{7}$ are available to outsource 3D modeling work, they require additional expenses.
+
Instead of designing CAD models from scratch, makers often adapt or combine existing 3D models found on public repositories such as Thingiverse [4, 9, 16]. This process can be challenging as many users only share triangular mesh file formats (STL) [1]. While users can request changes to models through the comments section, studies show that only 32% of such requests are granted [1]. To empower novice CAD modelers to adapt models themselves, Thingiverse introduced the Customizer feature [29], a plugin that exposes GUI controls to adjust the parameters of models designed with the OpenSCAD [34] scripting language. While the Thingiverse Customizer is highly popular [8, 33], only a small portion (3.7%) of 3D models available on the platform are modeled in OpenSCAD, and only 1% are compatible with the Customizer [1]. Hudson et al. [16] observe that modeling in OpenSCAD is challenging and significantly different from the feature-based parametric modeling environments traditionally used by CAD modelers.
+
+In contrast to these efforts, CODA guides and stimulates novice modelers in making well-constrained models in popular feature-based CAD modeling environments. Well-constrained parametric models are convenient to adapt as they represent a family of alternative models [3].
+
+${}^{5}$ https://www.bricsys.com/
+
+${}^{6}$ https://www.upwork.com/
+
+${}^{7}$ https://www.cadcrowd.com/
+
+${}^{3}$ https://www.tinkercad.com/
+
+${}^{4}$ https://www.blockscad3d.com/
+
+ < g r a p h i c s >
+
Figure 2: CODA's animations show the implications of suggested constraints. (a) A suggestion for constraining the two tabs to have the same height. (b) A suggestion for overwriting a constraint that ensures the slot at the bottom is always centered. (c) A suggestion for making the front edges collinear. (All three figures are edited to visualize the animation.)
+
## 3 System Overview
+
+This section gives an overview of CODA's core features. We start with a short walkthrough demonstrating how our system can be used in a real modeling workflow. Afterwards, we discuss CODA's features in more detail.
+
### 3.1 Walkthrough
+
This walkthrough demonstrates the design process Emily, a novice modeler in Autodesk Fusion 360, follows to design a laptop stand that can be laser cut (Figure 1c). During this process, CODA offers support to make a laptop stand that is well-constrained and easy to adapt and scale to other laptops or devices (Figure 1b).
+
As shown in Figure 1a, Emily starts by sketching the 2D cross-sectional profile for the laptop stand. She adds dimensions to the sketch to fit the size of her laptop. While sketching, CODA informs Emily that the slot and tabs are 5 mm in size and asks whether these features should always have the same dimension. When hovering over this suggestion, CODA animates the model by resizing these features at the same time to demonstrate the effect of the suggested constraint (Figure 2a). Emily accepts the suggested constraint and CODA replaces the static dimension constraints with dimensions that share the same value (variable).
+
CODA also notices that the slot at the bottom is currently positioned in the center but not constrained as such. The system therefore suggests replacing the dimension that offsets the slot from the left edge with a geometry constraint ensuring that the bottom edges on both sides of the slot are always equal. Again, Emily accepts the suggestion after inspecting the animation to understand the implications of the constraint (Figure 2b).
+
Next, Emily notices a suggestion for making the two vertical edges at the right of the profile collinear. When she hovers over the suggestion, the animation informs her that if the size of the slanted edge of the stand changes, the bottom of the laptop stand does not yet adjust accordingly (Figure 2c). Emily accepts the suggestion to make the two edges collinear, as she prefers a laptop stand that is well aligned. CODA offers more relevant suggestions to improve the constraints in this cross-sectional profile, which Emily can accept as desired. Examples include making all three sides equally wide (uniform thickness), relating the width of the two tabs, and suggestions related to the positioning of the two tabs.
+
Further in the design process, when extruding the profiles 5 mm, CODA notices that this extrusion depth equals the size of the tabs and slots and suggests creating a constraint. When the laptop stand is finished, Emily can easily adjust it to fit other laptops or change the material thickness to fabricate it with a different material. She also decides to make the model available on Thingiverse, as it is versatile and robust to changes.
+
### 3.2 Extracting Relations
+
+In order to suggest constraints, CODA continuously extracts the following four types of relations in CAD models [38]:
+
+1. Ground relations: relations with respect to the reference coordinate system, such as a line being horizontal or vertical in a 2D sketch (Figure 3a).
+
+2. Geometric relations: relations that define known geometric alignments, such as tangency, collinearity, parallelism, perpendicularity, and coincidence of points (Figure 3b).
+
+3. Dimensional relations: relations between sizes or offsets between elements, such as edges with an equal length or a point in the middle of an edge (Figure 3c).
+
4. Algebraic relations: restrictions on the model in the form of mathematical expressions. For example, an edge being twice as long as another edge (Figure 3d).
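As an illustration, these four relation types can be represented with a small data structure. The following is a hedged sketch, not CODA's actual internals; the class and field names are ours:

```python
from dataclasses import dataclass
from enum import Enum, auto


class RelationKind(Enum):
    """The four relation types CODA extracts (Section 3.2)."""
    GROUND = auto()       # e.g. a line being horizontal or vertical
    GEOMETRIC = auto()    # e.g. tangency, collinearity, parallelism
    DIMENSIONAL = auto()  # e.g. two edges with equal length
    ALGEBRAIC = auto()    # e.g. one edge twice as long as another


@dataclass(frozen=True)
class Relation:
    """A relation between exactly two attributes, as stated below."""
    kind: RelationKind
    entity_a: str  # identifier of the first attribute/entity
    entity_b: str  # identifier of the second attribute/entity
    detail: str    # human-readable description, e.g. "parallel"


# Example: two edges of a sketch that happen to be parallel
r = Relation(RelationKind.GEOMETRIC, "line_1", "line_2", "parallel")
```

Making the record immutable (`frozen=True`) lets relations be stored in sets, which is convenient for diffing relations before and after a modeling operation.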
+
While extracting relations, CODA considers parameters of modeling operations as well as attributes of all entities in a sketch (i.e., sketch entities). Sketch entities in Autodesk Fusion 360 include points, lines, circles, ellipses, and arcs. Rectangles, for example, are not sketch entities but profiles, as they consist of multiple lines. While CODA always extracts relations between exactly two attributes, attributes of various types can be related. Here we can distinguish the following combinations:
+
+ * Relations between sketch entities within a single sketch. Within a sketch, all four types of relations are applicable. For example, a line being tangent to a circle or a rectangle having twice the width of the diameter of a circle.
+
+ * Relations between sketch entities across different sketches. For these relations, dimensional and algebraic relations are applicable, such as a slot in two different sketches being equal in size.
+
+ * Relations between parameters of modeling operations. For these relations, dimensional and algebraic relations are applicable, such as two fillet operations with the same radius or the depth of an extrusion being equal or half the size of the radius of a fillet operation.
+
+ < g r a p h i c s >
+
+Figure 3: Cross section profile of a phone holder with four types of relations annotated: (a) Ground relations. (b) Geometric relations. (c) Dimensional relations. (d) Algebraic relations.
+
+ * Relations between a parameter of a modeling operation and a sketch entity. For these relations, dimensional and algebraic relations are applicable, such as the width of a slot in a sketch being equal to the depth of an extrusion.
+
To avoid overwhelming users with suggestions and to offer suggestions in context, we present relations within a single sketch only when the user is editing the respective sketch. All other relations are presented outside the 2D sketching mode, in 3D modeling mode.
+
As the number of algebraic relations is potentially very large, especially when considering all pairs of sketch entities and parameters of modeling features, CODA first extracts algebraic relationships that are frequently present in CAD models. Studies by Mills et al. [31] and Langbein et al. [25] show that the most common relations in CAD models include equal radii and lengths of edges, congruent faces, and radii and edges being half, one third, or one fourth in length. CODA thus presents algebraic relations between all pairs of sketch entities and parameters of modeling operations that are equal, half, one third, or one fourth in value. To help users create well-constrained models with less common relations, the next section covers CODA's features for specifying custom algebraic relations between specific pairs of entities.
+
### 3.3 Custom Algebraic Constraints
+
To help users identify and specify custom algebraic constraints (Figure 4a), CODA supports an in-depth search between two entities. When starting this feature, the user selects two entities. These can be sketch entities (e.g., points, lines, circles, ellipses, arcs), dimensions, or modeling features (e.g., extrusion or fillet operations). CODA then calculates mathematical expressions between all pairs of attributes of the two entities, independent of whether the relation is a common ratio. As shown in Figure 4b, CODA suggests a relation for every pair of attributes, both as a ratio $(y = x \cdot cte)$ and as a sum $(y = x + cte)$.
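The computation behind these suggestions can be sketched as follows, assuming the two attributes are reduced to scalar values (the function name and tolerance are ours):

```python
def candidate_expressions(x: float, y: float, eps: float = 1e-9):
    """For a pair of attribute values, propose the two expression forms
    described in Section 3.3: a ratio (y = x * cte) and a sum
    (y = x + cte). Returns (form, constant) pairs."""
    suggestions = []
    if abs(x) > eps:  # the ratio form is undefined when x is zero
        suggestions.append(("ratio", y / x))   # y = x * (y / x)
    suggestions.append(("sum", y - x))         # y = x + (y - x)
    return suggestions


# Example: relating a nut diameter of 15 mm to a handle length of 120 mm,
# loosely inspired by the wrench in Figure 4 (values are made up).
print(candidate_expressions(15.0, 120.0))  # [('ratio', 8.0), ('sum', 105.0)]
```

The user then picks whichever form matches the intended design; a ratio keeps the wrench proportional when scaled, while a sum keeps a fixed offset.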
+
### 3.4 Communicating and Explaining Constraints
+
To clearly communicate the meaning and implications of suggested constraints, CODA creates animated visualizations for all suggestions. Hovering over a suggested constraint animates the model and communicates the meaning of the constraint in the context of the model. Explaining a constraint using animations on top of the model, in contrast to generic visualizations, textual explanations, or symbols, makes it easy for users to understand its implications for the model. We developed two classes of animations to visualize the behavior of the four supported types of relations:
+
+ < g r a p h i c s >
+
Figure 4: (a) This wrench requires custom constraints to ensure it scales appropriately with respect to the nut diameter. (b) CODA assists users in making such custom relations.
+
 * For ground and geometric relations, we animate the variability currently present in the model if the suggested ground or geometric constraint were not added. For example, hovering over the suggested collinear constraint in Figure 2c repeatedly moves the two edges that are collinear but not yet constrained as such. The animation makes the user aware that these two edges can still move with respect to each other. Using the "yes" and "no" buttons, the user specifies whether the demonstrated movement of these lines is allowed: "no" adds the collinear constraint, while "yes" discards the suggestion without further action. Figure 5a gives an overview of how CODA animates all geometric and ground relations.
+
 * For dimensional and algebraic relations, we animate how two entities would behave if their positions or sizes were constrained to each other. For example, hovering over the suggested dimensional constraints in Figure 2a and Figure 2b shows how two lines would scale when they are constrained to have equal dimensions. For these relations, CODA asks whether the suggested relation fits the design: "yes" adds the respective dimensional or algebraic constraint, while "no" discards the suggestion. Figure 5b gives an overview of how CODA animates all dimensional and algebraic relations.
+
### 3.5 Invalidated Relations
+
It is important to note that CODA only suggests constraints to enforce relations that are present in the current version of the model. Oftentimes, while testing the robustness of a model, the model breaks because of constraints that are still missing. Figure 6a shows how the symmetry in the laptop stand breaks when changing the width because of a missing constraint. These missing constraints will not be suggested by CODA, as the relations are no longer present in the broken model. To solve this inconvenience, CODA continuously presents a message communicating how many relations were invalidated by the last modeling operation (Figure 6b). As shown in Figure 6c, the user then gets the option to revert the last action that broke the model and to see the list of invalidated relations that could be added to further constrain the model. This iterative workflow allows users to deliberately break relations in a model and see which constraints can be added to enforce them.
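At its core, detecting invalidated relations amounts to diffing the set of extracted relations before and after a modeling operation. A minimal sketch (the relation values here are plain strings rather than CODA's internal objects):

```python
def invalidated_relations(before: set, after: set) -> set:
    """Relations that were present before the last modeling operation
    but are no longer present afterwards (Section 3.5). CODA reports
    their count and lets the user revert and constrain them."""
    return before - after


# Example: changing the width of the laptop stand broke two relations.
before = {"slot centered", "tabs equal height", "edges collinear"}
after = {"tabs equal height"}
broken = invalidated_relations(before, after)
print(f"{len(broken)} relations invalidated by the last operation")
```

Because relations are compared as sets, newly appearing relations (`after - before`) could be surfaced with the same machinery.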
+
+ < g r a p h i c s >
+
+Figure 5: CODA animates the meaning and implications of suggested constraints. (a) shows an abstraction of animations for demonstrating ground and geometric relations. (b) shows an abstraction of animations for demonstrating dimensional and algebraic relations.
+
+ < g r a p h i c s >
+
Figure 6: CODA only suggests constraints to enforce relations that are currently present in the model. (a) Changing parameters can break relations because of missing constraints. (b-c) CODA resolves this by showing a list of relations that were invalidated by the last modeling operation.
+
## 4 Implementation
+
CODA is implemented as a Python plugin for Autodesk Fusion 360${}^{8}$. The concepts and features presented in CODA, however, are not specific to the Fusion 360 environment and could be implemented in other feature-based parametric CAD environments, such as SOLIDWORKS, Rhinoceros 3D, or Autodesk Inventor.
+
### 4.1 Extracting Unconstrained Relations
+
To suggest geometry constraints, CODA continuously checks for the presence of such relations between all pairs of sketch entities, both within a single sketch and across different sketches. Figure 7 gives an overview of how CODA checks whether a geometric relation is present between two sketch entities.
+
| Relation | Condition |
| --- | --- |
| Perpendicular | $\bar{a} \cdot \bar{b} = 0$ |
| Parallel | $\frac{a_x}{a_y} = \frac{b_x}{b_y}$ |
| Collinear | $\bar{a} = n\bar{b}$ for a real $n$ |
| Tangent (line-circle/ellipse) | $\frac{(x - x_1)^2}{l_2^2} + \frac{(mx + c - y_1)^2}{l_1^2} = 1$ has one unique real root ($l_1 = l_2$ for a circle) |
| Tangent (circle/ellipse-circle/ellipse) | $\frac{(x - x_1)^2}{l_{12}^2} + \frac{(y - y_1)^2}{l_{11}^2} = 1$ and $\frac{(x - x_2)^2}{l_{22}^2} + \frac{(y - y_2)^2}{l_{21}^2} = 1$ have one unique common real root ($l_{11} = l_{12}$ and $l_{21} = l_{22}$ for circles) |
| Concentric | $C(a) = C(b)$ with $C$ = center point |
| Coincident (point-point) | $a = b$ |
| Coincident (point-line) | $b_y = m b_x + c$ |
+
+Figure 7: CODA uses basic linear algebra methods to check for geometric relations between entities in a sketch.
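A few of the checks in Figure 7 can be sketched in plain Python. Two caveats: the parallel test below uses the cross-product form (equivalent to the ratio in the table but safe when a component is zero), and the numeric tolerance is our assumption, as the paper does not state CODA's value:

```python
EPS = 1e-6  # numeric tolerance for floating-point comparisons (assumed)


def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]


def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]


def perpendicular(a, b):
    """Direction vectors of the two lines satisfy a . b = 0."""
    return abs(dot(a, b)) < EPS


def parallel(a, b):
    """Direction vectors are scalar multiples: a_x*b_y - a_y*b_x = 0."""
    return abs(cross(a, b)) < EPS


def tangent_line_circle(m, c, cx, cy, r):
    """Line y = m*x + c is tangent to the circle of radius r centered at
    (cx, cy) iff substituting the line into the circle equation yields a
    quadratic with exactly one real root (zero discriminant)."""
    # (x - cx)^2 + (m*x + c - cy)^2 = r^2 expands to A x^2 + B x + C = 0
    A = 1 + m * m
    B = 2 * (m * (c - cy) - cx)
    C = cx * cx + (c - cy) ** 2 - r * r
    return abs(B * B - 4 * A * C) < EPS


assert perpendicular((1.0, 0.0), (0.0, 3.0))
assert parallel((1.0, 2.0), (2.0, 4.0))
assert tangent_line_circle(0.0, 1.0, 0.0, 0.0, 1.0)  # y = 1 touches the unit circle
```

The collinear test combines both ideas: two segments are collinear when their directions are parallel and the vector joining their endpoints is parallel to them as well.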
+
+${}^{8}$ https://www.autodesk.com/products/fusion-360/overview
+
| Sketch entity | Attributes |
| --- | --- |
| Points | (x, y) position |
| Lines | (x, y) position of start, mid, and end point; length of line |
| Circles | (x, y) position of center; diameter of circle |
| Ellipses | (x, y) position of center; size along minor and major axis |
| Arcs | (x, y) position of start, mid, and end point; radius of arc |
+
+Table 1: Attributes of sketch entities considered by CODA to extract dimensional and algebraic relations.
+
When extracting dimensional and algebraic relations, CODA finds pairs of attributes in a 3D CAD model that are equal, half, one third, or one fourth in value. These are frequently used relations in CAD models according to studies by Mills et al. [31] and Langbein et al. [25]. Note that for these relations, parameter values of modeling operations, such as extrusions and fillets, as well as the positions and sizes of sketch entities are considered. Table 1 gives an overview of the attributes CODA considers per sketch entity for extracting dimensional and algebraic relations.
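This pairwise search over common ratios can be sketched as follows (a simplified illustration; the function name, attribute encoding, and tolerance are ours):

```python
from itertools import combinations

# Ratios checked first, per Mills et al. [31] and Langbein et al. [25]:
# equal, half, one third, and one fourth in value.
COMMON_RATIOS = {"equal": 1.0, "half": 0.5, "third": 1 / 3, "fourth": 0.25}
EPS = 1e-6  # numeric tolerance (assumed; not stated in the paper)


def common_ratio_relations(attrs):
    """attrs maps an attribute name to its value (sketch-entity sizes and
    positions, or modeling-operation parameters, cf. Table 1). Returns
    (name_a, name_b, ratio_label) triples for every pair of values
    matching one of the common ratios, checked in both directions."""
    found = []
    for (na, va), (nb, vb) in combinations(attrs.items(), 2):
        for label, ratio in COMMON_RATIOS.items():
            if abs(va - vb * ratio) < EPS:
                found.append((na, nb, label))
            elif abs(vb - va * ratio) < EPS:
                found.append((nb, na, label))
    return found


# Example mirroring the walkthrough: 5 mm tabs, a 5 mm extrusion depth,
# and a 10 mm slot (attribute names are hypothetical).
attrs = {"tab_width": 5.0, "extrude_depth": 5.0, "slot_width": 10.0}
print(common_ratio_relations(attrs))
```

Because the search is quadratic in the number of attributes, restricting it to these few common ratios keeps the suggestion list short, which is exactly why CODA defers less common ratios to the custom search of Section 3.3.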
+
For CODA to suggest only relations that are not yet enforced by constraints, our algorithm needs to take into account constraints already present in CAD models. The Fusion 360 API, however, only exposes constraints that are explicitly present in the CAD model. Sketch entities can also be implicitly constrained by other constraints. The next subsection discusses how we extract those implicit constraints to ensure CODA does not offer redundant suggestions.
+
### 4.2 Extracting Implicit Constraints Present in the Model
+
Fusion 360's API does not provide access to the constraint graph and only exposes constraints that were explicitly added by the user or through the API. Besides these constraints, however, other implicit constraints can be present in a sketch. For example, two lines each with a parallel constraint to a third line are always parallel to each other without that parallel constraint being explicitly present. CODA needs to be aware of these implicit constraints to avoid suggesting constraints that are already present in the model (Section 4.1), as well as to prevent over-constraining models (Section 4.3).
+
To extract these implicit constraints, we re-implemented and extended the technique of Juan-Arinyo and Soto [21]. This technique requires all constraints to be expressed as either constrained distance (CD) sets, constrained angle (CA) sets, or constrained perpendicular distance (CH) sets. CD sets include points between which all distances are constrained, CA sets consist of line segments between which all angles are constrained, and CH sets consist of a point for which the perpendicular distance to a line segment is constrained. We convert all length/size constraints in Fusion 360 to constrained distance (CD) sets, angle constraints to constrained angle (CA) sets, and offset constraints to constrained perpendicular distance (CH) sets. For ground and geometry constraints in Fusion 360, we use the following conversion:
+
 * Horizontal/vertical: We add a constrained angle (CA) set representing an angle of $0^\circ$ between the line segment and the x- or y-axis.
+
 * Perpendicular: We add a constrained angle (CA) set representing an angle of $90^\circ$ between the two line segments.
+
 * Parallel: We add a constrained angle (CA) set representing an angle of $0^\circ$ between the two line segments.
+
 * Collinear: For all pairs of end-points of the two line segments, we add a constrained perpendicular distance (CH) set with a distance of 0.
+
+ * Concentric: We add a constrained distance (CD) set, representing a distance of 0 between the mid-points of the two concentric circles.
+
+ * Coincident (point-point): We add a constrained distance (CD) set, representing a distance of 0 between the coincident points.
+
 * Coincident (point-line): We add a constrained perpendicular distance (CH) set, representing a distance of 0 between the point and the line.
+
 * Tangent (circle/ellipse-line): We add a constrained angle (CA) set, representing an angle of $90^\circ$ between the line segment and the radius of the circle at the tangency point.
+
 * Tangent (circle/ellipse-circle/ellipse): We add a constrained angle (CA) set, representing an angle of $0^\circ$ between the radii of the two circles at the tangency point.
+
Once all Fusion constraints are converted to CD, CA, and CH sets, we compute the transitive closure of the constrained angle (CA) sets and use the 20 rules presented by Juan-Arinyo and Soto [21] to merge constraints. The resulting CD, CA, and CH sets reflect the implicit constraints. When two sketch entities have a relation that is not yet enforced by an explicit constraint, CODA can now check whether these entities are implicitly constrained using the following rules:
+
 * Horizontal/vertical: Implicitly constrained if a constrained angle (CA) set exists between the line segment and a line segment representing the x- or y-axis.
+
 * Perpendicular/parallel: Implicitly constrained if a constrained angle (CA) set exists representing both line segments, or if a constrained distance (CD) set exists representing the four points of the two line segments.
+
 * Collinear: Implicitly constrained if a constrained perpendicular distance (CH) set exists representing both line segments, or if a constrained distance (CD) set exists representing the four points of the two line segments.
+
+ * Concentric: Implicitly constrained if a constrained distance (CD) set exists representing the center points of the two circles.
+
+ * Coincident point-point: Implicitly constrained if a constrained distance (CD) set exists representing the two points.
+
 * Coincident point-line: Implicitly constrained if a constrained perpendicular distance (CH) set exists representing the point and the line, or if a constrained distance (CD) set exists representing the point and both points of the line segment.
+
 * Tangent circle/ellipse-line: Implicitly constrained if a constrained angle (CA) set exists representing the line segment and a radius (line segment) passing through the tangency point. Alternatively, if a constrained distance (CD) set exists including the two points of the line segment and the two points of the radius passing through the tangency point.
+
 * Tangent circle/ellipse-circle/ellipse: Implicitly constrained if a constrained angle (CA) set exists representing the radii of both circles passing through the tangency point. Alternatively, if a constrained distance (CD) set exists including the points of the four line segments passing through the tangency point.
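The core idea of the CA-set transitive closure can be illustrated with a union-find structure in which each line segment stores its angle relative to its set's representative: any two segments ending up in the same set have an implicitly constrained angle. This is a heavily simplified sketch of our own (it ignores CD and CH sets and the 20 merge rules of Juan-Arinyo and Soto; angles are taken modulo 180° since lines are undirected):

```python
class AngleClosure:
    """Transitive closure over constrained-angle (CA) sets, simplified."""

    def __init__(self):
        # line -> (representative line, angle in degrees to representative)
        self.parent = {}

    def _find(self, line):
        rep, angle = self.parent.setdefault(line, (line, 0.0))
        if rep == line:
            return line, 0.0
        root, rep_angle = self._find(rep)
        total = (angle + rep_angle) % 180.0
        self.parent[line] = (root, total)  # path compression
        return root, total

    def constrain_angle(self, a, b, angle):
        """Record an explicit constraint: angle from line a to line b."""
        root_a, ang_a = self._find(a)
        root_b, ang_b = self._find(b)
        if root_a != root_b:
            # attach root_b under root_a so that angle(a -> b) == angle
            self.parent[root_b] = (root_a, (ang_a + angle - ang_b) % 180.0)

    def implicit_angle(self, a, b):
        """Angle implicitly constrained between a and b, or None."""
        root_a, ang_a = self._find(a)
        root_b, ang_b = self._find(b)
        if root_a != root_b:
            return None
        return (ang_b - ang_a) % 180.0
```

For the example from the beginning of this subsection: after `constrain_angle("l1", "l3", 0.0)` and `constrain_angle("l2", "l3", 0.0)`, `implicit_angle("l1", "l2")` yields 0°, i.e. l1 and l2 are implicitly parallel even though no explicit constraint relates them.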
+
### 4.3 Removing Dimensions to Avoid Over-Constraining Models
+
Accepting suggested constraints oftentimes requires removing constraints already present in the model. In the sketch in Figure 8a-left, for example, CODA suggests constraining the lengths of the lines so that one is always half the length of the other. To accept this constraint, CODA replaces the static dimensional constraint with the dynamic constraint shown in Figure 8a-right. However, when both dimensions are implicitly constrained, as shown in Figure 8b-left, CODA needs to remove one of the other constraints before adding the suggested constraint shown in Figure 8b-right. For CODA to know which explicit constraints are responsible for every implicit constraint, we keep track of all explicit constraints while merging CD, CA, and CH sets in the algorithm explained in Section 4.2. When multiple explicit constraints are responsible for an implicit constraint (Figure 8b), CODA shows multiple suggestions and communicates their differences through the animated visualizations.
+
+ < g r a p h i c s >
+
Figure 8: When accepting suggested constraints, CODA oftentimes (a) overwrites existing constraints or (b) removes existing constraints.
+
### 4.4 Rendering Animations in CAD Models
+
When hovering over suggested constraints, CODA previews the implications of a constraint by animating features in the CAD model (Figure 5). When the animation requires lines to tilt, we continuously rotate the line between $-5^\circ$ and $+5^\circ$. When changes in size are required, we apply a scaling that transitions between half and double the size of the entity. Finally, when animating a parameter of a modeling operation, such as an extrusion, we alternate between half and double the original value. Animations are updated every 100 ms and are repeated as long as the user hovers over the suggested constraint.
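The animated values can be generated with two simple oscillators, sampled at the 100 ms frame interval. This is an illustrative sketch of our own; the sinusoidal shape and the 2-second period are assumptions, as the paper only specifies the ranges and the update interval:

```python
import math

FRAME_MS = 100       # CODA updates animations every 100 ms
PERIOD_MS = 2000.0   # one full oscillation cycle (assumed)


def tilt_angle(t_ms: float) -> float:
    """Oscillate a line's tilt between -5 and +5 degrees."""
    return 5.0 * math.sin(2.0 * math.pi * t_ms / PERIOD_MS)


def scale_factor(t_ms: float) -> float:
    """Oscillate a size between half and double the original value.
    Mapping sin's [-1, 1] through 2**x gives a multiplicative sweep
    that is symmetric around the original size (x1)."""
    return 2.0 ** math.sin(2.0 * math.pi * t_ms / PERIOD_MS)


# One animation cycle, sampled every 100 ms.
frames = [(tilt_angle(t), scale_factor(t)) for t in range(0, 2000, FRAME_MS)]
```

Using a multiplicative sweep (`2**sin`) rather than a linear one ensures that halving and doubling are reached at the two extremes of each cycle.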
+
As CODA directly manipulates parameters in the original design, we make a copy of all the attributes to be able to restore them afterwards. For some sketch entities, additional temporary construction lines are required to realize the animation. For example, to vary the angle between two parallel lines, Fusion 360 does not allow temporarily adding an angular constraint between the two lines, as they are exactly parallel. CODA therefore first adds a temporary construction line perpendicular to both lines and varies the angle between the perpendicular line and both parallel lines during the animation (Figure 5a).
+
## 5 Limitations and Future Work
+
Although CODA offers many novel opportunities to facilitate making well-constrained CAD models, our work also has several limitations, which reveal many exciting directions for future research.
+
First, future research could study how novices in CAD modeling use CODA during their design workflow. While CODA offers suggestions in real time while modeling, our tool could also be used after a CAD model is finished, or at a later time, to make an existing model more flexible and adaptable. We believe this could be a major asset, as it allows novices to further improve CAD models shared via platforms such as Thingiverse and thus distribute the workload across the community. Furthermore, it would be interesting to investigate whether CODA also provides value to expert modelers and to students learning CAD modeling. For example, the suggested constraints can make students aware of available CAD features and how they are composed.
+
+Second, while CODA offers suggestions for the most common constraints in CAD, more advanced suggestions can be supported in the future. For geometry relations this includes identifying patterns, such as symmetry or repetition in models and offering suggestions to convert these patterns into more adaptable features. For algebraic constraints, the current version of CODA supports frequently used ratios in CAD models according to Mills et al. [31] and Langbein et al. [25]. Other common ratios used in design could be supported in the future, such as the Golden ratio or the Lichtenberg ratio. Future versions could also offer suggestions for relations that are nearly present in the model, such as lines that are almost perpendicular or are almost equal in length. Similar approaches have been explored for improving mesh models [23].
+
+Third, to further facilitate adapting models, future versions of CODA could offer suggestions for other types of constraints, such as limiting the range in which parameters and dimensions can change without breaking the model. While computing valid ranges of features has been investigated for 2D sketches [13, 14, 40], more research is needed to compute valid ranges for all features in 3D to ensure the integrity of the model when changing parameters.
+
+§ 6 CONCLUSION
+
+In this paper we presented CODA, an interactive software tool that helps novice modelers to design well-constrained parametric 2D and 3D CAD models. In order to do so, CODA contributes a computational approach for extracting and suggesting relations in a model that are not yet enforced by constraints. CODA also clearly communicates the meaning and implications of suggested constraints using novel animated visualizations rendered in the CAD model. By facilitating the creation of well-constrained parametric designs we hope to further democratize CAD and encourage users to upload high quality parametric models to public sharing repositories, such as Thingiverse.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..77b8c8373ff494054158a43b833f99e76ba8ef4a
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,351 @@
+# CoAware: Designing Solutions for Being Aware of a Co-Located Partner’s Smartphone Usage Activities
+
+Author Name*
+
+Affiliation
+
+## Abstract
+
+There is a growing concern that smartphone usage in front of family or friends can be bothersome and even deteriorate relationships. We report on a survey examining smartphone usage behavior and problems that arise from overuse when partners (married couples, common-law relationships) are co-located. Results show that people have various expectations from their partner, and often feel frustrated when their partner uses a smartphone in front of them. Study participants also reported a lack of smartphone activity awareness that could help decide when or how to communicate expectations to the partner. This motivated us to develop an app, CoAware, for sharing smartphone activity-related information between partners. In a lab study with couples, we found that CoAware has the potential to improve smartphone activity awareness among co-located partners. In light of the study results, we suggest design strategies for sharing smartphone activity information among co-located partners.
+
+Index Terms: Human-centered computing-HCI design and evaluation methods-User Studies; User interface design
+
+## 1 INTRODUCTION
+
+Smartphones continue playing a pivotal role in our daily communications with family and friends $\left\lbrack {5,{40}}\right\rbrack$ . Smartphones not only enable seamless communication over long distances, but also allow access to information anywhere, anytime. However, there is growing evidence that people may overuse smartphones, both when alone and in the presence of others, i.e., in a co-located situation [3, 28]. Moreover, smartphones are designed as private and personal devices: the activities that take place on the screen can, when desired, easily remain completely unknown to co-located persons. Not being aware of a co-located person's on-screen activities can cause frustration and even anxiety $\left\lbrack {{42},{43}}\right\rbrack$ .
+
+A significant amount of work has explored smartphone overuse and its consequences $\left\lbrack {3,{28},{46}}\right\rbrack$ . Much less attention has been devoted to designing solutions for co-located activity awareness which could mitigate the frustration associated with smartphone overuse and improve interpersonal communication. A few recent studies have attempted to increase smartphone activity awareness by helping people to be more aware of co-located people's smartphone activities, providing a rich shared experience, and even motivating people to initiate interaction with nearby persons [26,42]. These studies suggested different strategies to raise awareness, such as using 'talk-aloud' to pass on what one is doing on the device [42] or to attach a second display to the back of the phone to show on-screen activities to co-located individuals [26]. Though these solutions have the potential to increase smartphone activity awareness, they might not be appropriate in some common contexts (e.g., in social gatherings and in public places) and they might not be practicable due to the dependency on hardware instrumentation.
+
+Social relationships can greatly shape the degree and nature of people's information sharing with other co-located individuals [8,14]. For example, the information sharing patterns of people with their partner, parents, and children may be very different, and may even vary largely depending on the age of the individuals or the age of their relationship $\left\lbrack {4,{22}}\right\rbrack$ . To narrow down the focus, we concentrate on an in-depth investigation of various aspects of smartphone use by co-located partners who are married or in a common-law relationship, or in any other (romantic) relationship. We focus on such relationships because partners are often co-located for a substantial portion of each day and their mutual understanding is important for a healthy home environment $\left\lbrack {9,{19},{41},{47}}\right\rbrack$ . Couple relationships are already very nuanced and complex and include many aspects, e.g., closeness, connectedness, interpersonal trust, and perceptions of empathy. This makes them even challenging to study on their own as a focal relationship $\left\lbrack {8,{19},{39}}\right\rbrack$ .
+
+We first conducted a crowdsourced study examining: 1) people's smartphone habits when being co-located with their partner, 2) the concerns that people have about their partner's smartphone usage, 3) the rules and privacy issues partners have regarding their co-located smartphone usage, and 4) the strategies that people take to become aware of their partner's smartphone on-screen activities. Our results show that people often use smartphones while co-located with the partner, which sometimes leads to anger and frustration. We also found that people often need to respond to their partner's queries about their smartphone activities. Most people share information truthfully, although the details of the shared information vary widely between apps. Furthermore, many people feel that they are not fully aware of their partner's smartphone activities when co-located. This lack of awareness can lead to unpleasant situations. Some strategies such as 'talk-aloud' are being used to be aware of others' smartphone activities, yet there remains a lack of expressive tools to support smartphone activity awareness.
+
+Guided by these findings, we explored ways to increase smart-phone activity awareness among co-located partners. Our goal was to investigate smartphone-based solutions to help partners become aware of each other's smartphone activities and to help them improve their interpersonal communication. We developed a smartphone app, CoAware, that enables users to create co-located smartphone usage awareness by sharing the names, categories, or screens of apps being used by one's partner. Additionally, CoAware enables partners with ways to send notifications to each other such that they might motivate the partner to reduce co-located smartphone usage. We continued with an in-lab study with couples who explored and provided feedback on the features offered by CoAware. The results revealed that high-level information, such as sharing app names and sending notifications, is useful to provide co-located smartphone usage awareness; however, low-level information about phone usage (e.g., screen sharing) and allowing co-located partners to control a different person's phone were seen to be less necessary and sometimes overbearing. We also found that awareness of a partner's smartphone activities was not desired by all participants. Some participants were satisfied with the level of information they knew about, and some were fine relying on social protocols with their partner to remedy challenging situations. Thus, design solutions should not be thought of as a one size fits all approach.
+
+---
+
+*e-mail: first.last@email.com
+
+---
+
+## 2 RELATED WORK
+
+### 2.1 Smartphone Overuse and Reduction
+
+Smartphone overuse and smartphone addiction are active areas of research that examine people's smartphone usage behavior. Prior work showed that the overuse of smartphones can lead to a decrease in productivity in workplaces due to phone use during work hours [10], hamper family relationships, and even cause domestic violence [29]. Researchers have investigated ways of detecting smartphone addiction based on users' smartphone usage behavior. For example, a recent study [40] identifies lifestyle and social media related apps to be associated with smartphone addiction. There are commercial apps that track users' smartphone and computer usage and offer summarized information allowing users to be more productive in their daily activities [36,44]. Researchers have also explored solutions to provide people with their desktop computer usage information $\left\lbrack {{21},{30}}\right\rbrack$ , and non-work related web access information [35]. Despite such systems, further exploration is needed to examine how to apply this knowledge to design persuasive smartphone interfaces to promote co-located engagement for improved interpersonal relationships.
+
+### 2.2 Information Sharing through Digital Devices
+
+Sharing digital devices and accounts are common practices amongst household members, yet this topic has received less attention from an awareness perspective. Instead, the topic has mostly been motivated by the pros and cons associated with device sharing. Sharing devices can be viewed as an all-or-nothing approach for sharing information [27, 37], which gives rise to privacy and security issues [4, 25]. Studies also showed that device sharing concerns depend on the user's relation to the other user as well as the types of data being shared, which suggests the need for better privacy and security models for device sharing $\left\lbrack {{16},{18},{27}}\right\rbrack$ . The intricate ways that couples communicate [8] and their need to have a nearly continuous connection have gained deep attention in the literature $\left\lbrack {{19},{41},{47}}\right\rbrack$ . This motivated context sharing among people in close relations. Prior research $\left\lbrack {6,{13},{14},{31}}\right\rbrack$ showed how contextual information sharing (heart rate, distance from home, etc.) can be leveraged to make partners aware of each other's context and activities. Both Buschek et al. [6] and Griggio et al. [14] observed that context-awareness improves the sense of connectedness and pointed out interpretability and privacy concerns that may arise due to inferred additional information from the given context. While context sharing helps people in close relationships to be aware of each other's activities, smart-phone addiction and overuse of other digital tools may create an awareness barrier even when people are present in the same context.
+
+### 2.3 Smartphone Activities among Co-located People
+
+A number of studies analyzed smartphone distractions during group activities $\left\lbrack {{26},{32},{34}}\right\rbrack$ . Ko et al. [32] developed a smartphone app that allows a group of co-located users to simultaneously lock their smartphones during activates such as when studying and chatting. Jarusriboonchai et al. [26] proposed an approach to communicate user's activities on the backside of a smartphone, by displaying the icon and name of the app being used. Users did not feel comfortable using it when they were unwilling to reveal the app name. With our design, CoAware, users can intuitively share activity information with various granularity levels. The goal is to help avoid privacy concerns. Oduor et al. [42] examined people's use of smartphones in the presence of family members in the home. They found that people often feel that the smartphone usage of their co-located family members is non-urgent. They feel ignored when not knowing their true activities. However, many fundamental questions are yet to be explored: When, where and how often do such problems arise? Are couples interested in knowing each other's smartphone activities? If so, then to what extent? How often do couples share activity information? With how much detail and how truthfully? Does the relationship, privacy, trust, or smartphone app play any role? We cover these aspects and more.
+
+### 2.4 Usage-Aware Co-Located Smartphones
+
+Though there has been substantial work on different sensing solutions that allow users to track the state of the device $\left\lbrack {{20},{45}}\right\rbrack$ , very little is known about how to detect two co-located smartphones and how to share information between them. Prior research showed that on-device sensors (e.g., tilt) could be used to make a smart-phone context-aware of its state (e.g., orientation) [20], and usage context (e.g., resting on a table) [45]. Beyond sensing a device state or context, researchers explored external sensing solutions to enable smartphones to track surrounding activities and environments $\left\lbrack {7,{15},{17},{33}}\right\rbrack$ and their social acceptance $\left\lbrack 1\right\rbrack$ , but in limited contexts. To date, however, due to the lack of advances in sensing solutions, such an approach has received little attention in the context of using the space for co-located collaborative interactions for promoting interpersonal engagement. CoAware provides a way of pairing co-located smartphones within 200 meters without requiring any WiFi hotspot or smartphone data connection and thus allows co-located users to connect with each other's devices.
+
+## 3 Study 1: Exploration Smartphone Usage
+
+We started our exploration by conducting a crowdsourced study investigating people's smartphone usage, rules, trust and privacy concerns, and activity awareness when they use smartphones in the presence of their partner. Prior research has shown that crowdsourc-ing platforms, such as Amazon Mechanical Turk (AMT), are popular and convenient tools for conducting user studies and collecting reliable data [2]. We used AMT to run our study.
+
+### 3.1 Online Survey
+
+We created an online survey with 58 questions to collect data from smartphone users. Figure 1 shows a sample of questions from the survey. The survey contained five sections: i) 18 questions to collect demographic information about participants and their partners (e.g., age, nationality, gender, education, and household conditions); ii) 15 questions about smartphone usage (e.g., how often, where and what types of apps), usage rules in the household, and privacy-related issues; iii) 9 questions about trust-related issues that arise when people share their smartphone usage activities with their partner and other family members; iv) 5 questions targeted at smartphone usage behavior and habits when co-located with the partner; and v) 11 questions regarding the awareness of the partner's smartphone usage and possible strategies used to share usage related information with a partner. In total, we used 15 open-ended questions, 26 single/multiple-choice questions, and 17 5-point Likert scale questions. Most of the open-ended questions were used to collect descriptive responses about co-located smartphone usage where the Likert scale questions were designed to quantify results and to obtain shades of perceptions regarding issues on smartphone usage.The single/multiple-choice questions were primarily used to collect demographic data.
+
+### 3.2 Participants and Study Procedure
+
+We posted our survey as a Human Intelligence Task (HIT) to AMT (with a $\$ {1.00}$ compensation). We specified two qualifications for participants: a minimum of ${70}\%$ approval rate and a minimum of 50 previously completed HITs. We also set the following requirements for the workers: they (a) must own a smartphone, (b) be either married, or in a common-law, or in a partner relationship, where (c) the partner must also own a smartphone and (d) currently live in the same household. In total, we collected 109 responses in seven days. We subsequently removed 31 responses which contained one or more unanswered questions and/or invalid answers. Consequently, we analyzed data from 78 participants (34 female, 44 male). On average, participants took 25 minutes to complete all the questions.
+
+### 3.3 Data Analysis and Results
+
+We applied a thematic analysis on the qualitative data where two researchers separately went through all the comments to perform open coding. Later they consolidated and reconciled codes into a common code set. Self-reported quantitative data were analyzed using standard statistical methods such as mean and standard deviation.
+
+#### 3.3.1 Demographics and smartphone usage
+
+The majority of our participants were from two age ranges: between 24 and 34 years (28 participants) and between 35 and 44 years (29 participants). Three participants were aged between 18 and 24 years, nine between 45 and 54 years, and nine between 55 and 64 years. Only two participants were 65 years or older. Fifty-five participants were from the USA, 23 from India. On average, our participants had been in their relationship for 13.3 years. Participants and their partners had been using smartphones for 8.5 and 7.9 years, respectively. Participants reported using smartphones an average of 3.3 (SD=2.0) hours per day.
+
+For how many years have you and Please estimate how often it hap-your partner been in a relationship? pens that you are concerned about
+
+In total, for an average day, how many your partner using his/her smartphone in your presence. hours do you co-locate with your part- ☐ A couple times a month or less ner (that means you and your partner ☐ Once a week are together at any place such as at home ☐ A few times a week - excluding sleeping time, shopping mall, ☐ About once a day restaurant, park, or sidewalk)? ☐ More than once a day
+
+In total, for an average day, how many Please describe the three most recent hours do you send in collaborative ac- situations, where your partner was con-tivities (for example, cooking) with your cerned about you using your smart-partner? phone in the presence of your partner.
+
+Figure 1: Sample questions on a) smartphone usage and b) concerns
+
+Participants indicated using some categories of apps more often than other categories: ${90}\%$ used communication apps (e.g., email, text message, skype, phone calls) at least once a day and ${83}\%$ used social media apps (e.g., Facebook, Instagram, Snapchat) and 85% used the Internet (e.g., reading news, hobby-related browsing, banking) at least once a day. Only 19% of the participants used location-sharing apps (e.g., Glympse, Life360, Find My Friends) once a day and only 29% used health-related apps (e.g., fitness tracking, sports, or medicines apps) at least once a day.
+
+#### 3.3.2 Co-located smartphone usage
+
+Participants reported that they are co-located with their partner a significant amount of time (mean 5.9, SD=3.6 hours/day excluding sleeping time) and that they often engage in collaborative activities with their partner, such as cooking or watching movies (mean ${2.2},\mathrm{{SD}} = {1.4}$ hours/day). In response to an open-ended question, participants reported various reasons for using their smartphones when co-located with the partner. In total, we analyzed 167 coded responses which can be categorized into the following five broad categories: 1) communication/ socialization with friends or family members (38% of the responses), 2) work-related activities (20% of the responses), 3) checking information and updates for own interest (20% of the responses), 4) finding information for a purpose shared with the co-located partner (13% of the responses), and 5) personal entertainment (9% of the responses). These results are similar to earlier qualitative results [42] which showed that people use smartphones in the presence of their family members to check notifications, find information, and fill time when they are bored. Our results also revealed that co-located smartphone use frequently happens at home (44% of the responses), mostly in the living room and bedroom. Many participants talked about using their phones at home when co-located with their partners. "... watching a movie on tv at home and he was upset that I checked my phone." [P20, female, relationship 27 years]
+
+Other common places for co-located smartphone use include restaurants ( 18% of the responses), public spaces such as in shopping malls or parks (24%), during social gatherings (7%), and inside cars (7%). All participants reported frequent occurrences (at least once a month) of their partner expressing concerns regarding their colocated smartphone usage. "I pulled out my phone just to go on it before the food came and he complained." [P1, female, relationship 4 years] || "We were in bed together and not really paying attention to what she was saying." [P16, male, relationship 21 years]
+
+We coded a total of 114 responses regarding the situations (places and activities) when this had happened: at mealtime, either at home or in a restaurant (29% of the responses), while watching tv/movies together at home (16%), in public places, such as in a shopping mall (15%), in the bedroom at bedtime (13%), during on-going conversations $\left( {{10}\% }\right)$ , and in some other situations such as at a social gathering or while in the car. Two participants from India responded that it had happened while being in a temple.
+
+
+
+Figure 2: Participants' self-reported smartphone application usage frequency when co-located with their partner.
+
+Participants' concerns were mostly related to the lack of attention to what they had expected their partner to concentrate on during a conversation or other activity. Sometimes they were concerned about disregarding family time and social engagement (especially when surrounded by family or friends in social gatherings). Several participants mentioned that their failure at paying attention sometimes led to frustrations and tensions between them. "When we sit together and talk to each other in our living area at home, I go through messages in WhatsApp. That time my partner gets irritated, thinking that I am not listening to him." [P10, female, relationship 21 years] || "During a dinner at his friend home... I was using my phone continuously to text my friends. He signaled me not to use the phone at a get together because it seems odd when I am not involving in the event. I keep on texting my friends he raised and fought with me." [P21, female, relationship 10 years]
+
+Participants reported that their partner expressed concerns about their co-located smartphone usage primarily due to disruption in their quality time, and sometimes expressed anger and annoyance. On average, participants reported that they spend 2.3 hours a day on their smartphones while co-located with their partners. For each app category, at least ${30}\%$ of the participants who use those apps more than once a day reported to use them less frequently while co-located with their partner.
+
+We also asked participants about the apps that they often use when they are co-located with the partner. Figure 2 shows the results. We observe that they frequently surf the Internet (e.g., reading news, browsing, banking), use communication apps (e.g., email, text message, Skype, phone calls) and social media apps (e.g., Facebook, Instagram, Snapchat). However, they rarely use health-related apps (e.g., fitness tracking or medicines apps) and location sharing apps (e.g., Glympse, Life360). The results suggest that people prefer to use communication and other related apps to connect to families and friends when co-located with their partner.
+
+#### 3.3.3 Rules or mutual understanding on smartphone use
+
+We asked participants questions about rules or agreements set in the household to reduce co-located smartphone usage. More than one-third of the participants (35%) mentioned having some house rules. The rest ( ${65}\%$ ) said they did not have any formal agreement, yet they shared a mutual understanding with their partner. "We're responsible and adult enough to know when it's time to use the phone or not." [P52, male, relationship 11 years]
+
+The participants who reported to have rules or agreements (43 coded responses) for smartphone use had rules based on either locations or situations. Mealtime (33%), family time with kids (21%), collaborative activities (16%), bedtime (12%), social gathering (7%) and driving (5%) were some contexts where the rules restricted smartphone use. "We agreed to not use our smartphones during dinner unless it's an emergency." [P46, male, relationship 26 years]
+
+Often the rules or agreements were set to ensure quality time within the family and in social gathering: "I am in agreement with him that we do not use our phones when it's quality time for us to be together or when we're with others in a social situation, unless everyone is using them, too, for some reason (like looking up some info or playing a game together)." [P58, male, relationship 8 years]
+
+The rules also came from self-realization of being disconnected: "Once me and my spouse was continuously using the phone when we were at home ... we realized that we didn't speak to each other. That moment we decided not to use phones unless an emergency when we both are together." [P21, female, relationship 10 years]
+
+We asked the 51 participants, who did not have any rules, how they would feel about creating them. Fifty percent of these participants welcomed the idea of having some rules for ensuring proper engagement with the partner. Twenty-five percent expressed being somewhat neutral about agreeing on rules. The remaining ${25}\%$ opposed the idea of agreeing on rules. They did so as they felt that rules would intrude on their smartphone activities or that it may be an "overkill" between adults who should be able to act on their own accord. Some stated that they shared mutual respect not to use smartphones in certain situations and do not need any rules. "We should come up with guidelines for smartphone usage that would make me feel better about our communication with one another." [P37, female, relationship 14 years]
+
+The average length of relationships for the participants who have no rules was larger (avg. 14 years) compared to the participants who reported to have rules (avg. 10 years). Binomial Logistic Regression showed no significant difference in gender and relationship length between these two groups.
+
+We asked participants whether they have any rules regarding smartphone use for other family members, excluding themselves, such as their parents (e.g., an older adult living at home with their adult children), teenage children, or younger children. Most participants mentioned that they do not have any rules for other adults in the home as they are responsible adults who do not use smartphones frequently. "There is no rule as my mom is an aged woman and did not use phones every day." [P39, male, relationship 12 years]
+
+However, 40 participants with children have strategies and rules to control the child's smartphone usage. For instance, out of 52 responses, 37% of the responses were about time-based restrictions (e.g., no more than ${30}\mathrm{\;{min}}$ per day), ${23}\%$ were about content-based restrictions (e.g., only for games and watching YouTube videos), ${21}\%$ were location-based restrictions (e.g., not at the dining table, in the bedroom or washroom), and 6% were about age-based restrictions (e.g., no phone before 8 years). Such "no phone" policies were primarily set to ensure that the children were engaged in more purposeful activities and to ensure they spent enough time with family members. Typical responses were: "We do not allow our sons to use their smartphones in private such as their bedroom or bathroom. We also have their settings configured so their phones may not be used between ${10}\mathrm{{pm}}$ and $6\mathrm{{am}}$ ." [P46, father of 2 children]
+
+#### 3.3.4 Strategies to reduce co-located smartphone usage
+
+We also asked our participants whether they know of or used any apps or other tools to reduce smartphone use in co-located situations. The majority (67 out of 78) mentioned that they do not know of any such solutions that could either help them be more aware of each other's smartphone activities or help to reduce co-located smartphone usage. The remaining participants (11) mentioned that they are aware of apps to restrict usage time. They mentioned using iPhone's Screen Time [24], Night Mode [23], Offtime [38] to track their daily smartphone usage activities and to limit smartphone app access after a certain amount of time.
+
+We used a 5-point Likert scale to get participants' opinions on the importance of using apps or other strategies to reduce co-located smartphone usage. Interestingly, younger participants felt that it was more important to have apps or strategies than the older participants Out of 28 participants in the age range 25 to 34 years, 17 participants expressed high importance (rating 3 or more) of having such apps or strategies, whereas only 25 of 50 participants in the age range 35 to 65+ years expressed high importance.
+
+#### 3.3.5 Sharing information, privacy, and trust
+
+About 74% of participants said that they told their partner what they were doing on their smartphone when co-located at least a few times a week. We also asked them how truthful they are when sharing information. Twenty five out of 34 females and 19 out of 44 males said that they share accurate/truthful information about their smartphone activities with their partner. Participants who said they told the truth commented that they do not have anything to hide from their partner and do not want to lose the trust. "We value honesty in our relationship, not that we do anything shady on our phones, but if we did, I would immediately inform her of anything I did, and vice versa." [P52, male, relationship 11 years]
+
+On the other hand, participants who said they did not always share accurate information with their partners did so because they were trying to safeguard their privacy or ensure personal boundaries. For instance, some participants mentioned that they are not comfortable sharing financial information, business matters, photos, videos and things that they search on their phone. We believe this is due to the sensitivity of this information, and sometimes to maintain personal space. "I might be slightly embarrassed about the random things $I$ look up." [P33, female, relationship 20 years]
+
+In a follow-up 5-point Likert question (5 = very confident to 1 = not confident at all), participants were asked to indicate how confident they were that their partners tell them the truth about their smartphone activities. Both male and female participants had strong confidence that their partners share accurate information with them, reflected in average scores of 4.45 (SD = 0.84) and 4.68 (SD = 0.79), respectively. Only six participants gave a rating of 3 or below; these participants described past experiences of finding their partners not being truthful. Such experiences could consequently affect their level of trust in the future. "About 9 months ago my partner expressed that due to past cheating by previous partners, she felt paranoid when I was using my phone to chat with other people." [P8, male, relationship 1 year]
+
+#### 3.3.6 Smartphone usage awareness
+
+We examined participants' awareness of exactly what their partner is doing on their smartphone and how interested they are in knowing what their partner does on their smartphone. 78% of participants responded that they are not fully aware of their partner's smartphone activities. In some cases, participants reported that this lack of awareness triggered misunderstandings between co-located partners, as their partners made assumptions based on their smartphone activities. A potential reason for such assumptions could be the limited information that can be seen from a distance about a person's usage activities. Similar results were found by Oduor et al. [42], who reported a lack of smartphone activity awareness among co-located family members.
+
+We included questions on the common strategies for sharing activity awareness with co-located partners. Participants reported that such awareness was often achieved by asking questions of their partners where they responded verbally or showed their screen to their partner. This action sometimes led to frustration and anxiety among partners. "I normally just ask what he's doing (especially if he laughs!) and he'll always tell me." [P1, female, relationship 4 years] || "My partner usually gets aggravated when I ask what he is doing, because usually, he is trying to figure something out on his phone." [P11, female, relationship 15 years]
+
+In exploring how interested participants were in knowing their partner's smartphone activities, we found that male participants were more interested (66% of the males were interested) in knowing what their partner is doing on the smartphone than female participants (58%). Conversely, in a question asking about their partner's interest in knowing what they are doing on their smartphone, we found that 86% of male participants reported their partners to be interested in knowing their activities, whereas 68% of the females reported the same. We observed a trend that this interest decreased gradually with increasing age: in the age range 25 to 34 years, 82% expressed interest, whereas in the age range 35 to 44 years only 70% did.
+
+#### 3.3.7 Co-located content sharing
+
+We collected information on the level (detailed vs. abstract) of smartphone activity information that participants are comfortable sharing with their partner and the level of information that they would like to receive from their partner. We provided them with three different levels that they could choose from for sharing or receiving: (i) detailed information (e.g., chatting with "Alex" in Facebook), (ii) an app's name (e.g., using Facebook), and (iii) activity information (e.g., playing games). Additionally, they could write down any other abstractions that they might be comfortable with. We collected 119 coded responses for the sharing level and 120 coded responses for the receiving level of information, as participants were allowed to select multiple levels.
+
+Many participants are comfortable providing very detailed information to their partners (37% of responses), whereas others reported preferring to share only the app name (34% of responses) or general activity information (29% of responses); the remaining participants reported only feeling comfortable with providing less information or none at all. We also found that the preferred sharing level varies across apps: 37 participants mentioned that they share details when using communication apps, whereas only 19 participants share details while browsing the Internet. We observed similar results for receiving information from their partner. Many participants indicated that they would like to receive detailed information from their partners (34% of responses), whereas others preferred to receive only the app name (36% of responses) or activity information (29% of responses). The remaining participants (only 1%) reported feeling comfortable with receiving any level of information or none at all.
+
+We further asked participants to provide examples where they share smartphone usage information with different people (e.g., partner, family members). We observed that it is common to share different levels of information with different people: "I would give less details based on how well I know the person. My partner and family get more information that colleagues." [P57, male, relationship 2 years] || "To my partner, I share all the information; to my family members, I share only app name or activity name" [P53, male, relationship 7 years]
+
+### 3.4 Discussion
+
+Results from the survey revealed that people use smartphones in the presence of their partner even though their partner has expressed concerns about the usage. In general, people can see when partners use smartphones in front of them, but exactly what a partner is doing on the phone cannot be easily inferred from an observer's viewpoint (also found by [26]). Our work builds on prior work by illustrating the locations and activities in which this occurs, the rules people have set up to help mitigate issues, and how they feel about sharing usage information. Participants reported that co-located usage and asking about their partner's activities sometimes triggers aggravating situations. However, they are not aware of technological solutions that would help them limit co-located smartphone usage. These findings motivated us to think of a means to improve people's awareness of their partner's phone activities while co-located by supporting different levels of information (detailed vs. abstract), such that they can make informed decisions about how to handle the situation. We also believe that improved awareness may help motivate people to use phones wisely and, thus, improve the quality of domestic life. Of course, we recognize that awareness of a partner's smartphone activities was not desired by all participants. Some participants were satisfied with the level of information they knew about, and some were fine relying on social protocols with their partner to remedy challenging situations. As such, we wanted to explore design solutions that might work for people who were more interested in additional knowledge of what their partners were doing on their phone, in the hope of improving social interactions.
+
+## 4 THE DESIGN OF COAWARE
+
+Informed by our findings, we designed a smartphone app, Co-located Awareness (CoAware), intended for sharing smartphone usage information between co-located partners.
+
+
+
+Figure 3: (a) A connection initiates with a request to gain access to the partner's device; Once the access is gained, the partner can see (b) the app name, (c) the app category, and (d) a screen showing the screen content of the app that the partner is viewing.
+
+### 4.1 CoAware Features
+
+CoAware was designed to share users' smartphone usage information such as the number of times an app has been launched, the duration it has been used for, and the time it was initially opened. We used a foreground app checker external library [48] that allows access to smartphone app usage information. In addition, we developed a solution to directly share screens from one smartphone to another via WiFi Direct. Based on these capabilities, we developed three techniques to share app usage information between two co-located smartphones with CoAware. As our survey results showed that people prefer to share app usage information at varying degrees of detail, we designed the app to have three different levels of access, ranging from very limited information which users may be more comfortable sharing (e.g., an app category) to very specific information (e.g., viewing the screen) that could possibly be more privacy-intrusive. Thus, users can choose what level of sharing they and their partner are comfortable with. The specific levels are:
+
+App Category: This access level provides users with a high-level view of app usage information, where only the types of apps being used are shown and not the app names (Figure 3c). For instance, apps that are used for contacting other people (e.g., email, text message, Skype, phone calls) are mapped to and labeled as "Communication." Commonly used apps are categorized into different labels.
+
+App Name: In this access level, CoAware tracks the name of a running app on the co-located phone (Figure 3b). The app name is displayed on the other phone.
+
+Screen Share: In this access level, CoAware captures images of a phone's screen and transfers them to the co-located phone every 50 ms. In this way, co-located users are aware of the exact on-screen activities of each other (Figure 3d).
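+
+To make the App Category level concrete, the mapping from app names to category labels can be sketched as follows. This is an illustrative sketch, not CoAware's actual code; apart from "Communication," the category names and app assignments are our assumptions.
+
+```python
+# Illustrative sketch (not CoAware's actual tables): map app names to the
+# high-level labels shown at the App Category access level. Only
+# "Communication" is named in the paper; the rest are assumptions.
+APP_CATEGORIES = {
+    "Communication": {"Gmail", "Messages", "Skype", "Phone"},
+    "Social Media": {"Facebook", "Instagram", "Twitter"},
+    "Games": {"Chess", "Candy Crush"},
+    "Music & Entertainment": {"Netflix", "Spotify"},
+    "Internet": {"Chrome", "Firefox"},
+}
+
+def category_of(app_name: str) -> str:
+    """Return the category label for an app, or "Other" if unmapped."""
+    for category, apps in APP_CATEGORIES.items():
+        if app_name in apps:
+            return category
+    return "Other"
+```
+
+With this mapping, `category_of("Skype")` yields "Communication," while an unmapped app falls back to "Other."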
+
+With CoAware, when two devices are co-located, one device sends a connection request and asks permission to access App Name, App Category, or Screen Share information. The receiving device shows the request in a pop-up (Figure 3a) where the user of the device can accept or reject the request. If accepted, the sender device gains access to the app name, app category, or the screen of the other device. It also starts logging the app usage information (e.g., the running app) on the receiving device. CoAware provides three notification strategies to allow co-located partners to send information through the app. We wanted to provide various levels of information exchange and control.
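+
+The request/accept flow above can be read as a small state machine. The sketch below is hypothetical Python capturing only the protocol logic (names such as `Connection` are ours); in the real app these messages travel over WiFi Direct between two Android devices.
+
+```python
+# Hypothetical sketch of CoAware's connection flow: one device requests an
+# access level, the receiver accepts or rejects via the pop-up, and on
+# acceptance the level is granted and usage logging starts.
+ACCESS_LEVELS = {"App Name", "App Category", "Screen Share"}
+
+class Connection:
+    def __init__(self):
+        self.requested = None   # pending access-level request, if any
+        self.granted = None     # access level the receiver agreed to
+        self.logging = False    # usage logging starts only after acceptance
+
+    def request(self, level: str) -> None:
+        if level not in ACCESS_LEVELS:
+            raise ValueError(f"unknown access level: {level}")
+        self.requested = level
+
+    def respond(self, accept: bool) -> None:
+        """The receiver's decision in the pop-up (Figure 3a)."""
+        if accept and self.requested is not None:
+            self.granted = self.requested
+            self.logging = True
+        self.requested = None
+```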
+
+
+
+Figure 4: CoAware (a) sending usage statistics, (b) a close request, showing (c) a summary, and (d) detailed information about apps used since the connection was established.
+
+Message: Users can send a preset or custom message to their partner. Examples of preset messages are "It feels like you've been using Gmail for a while now, can we talk instead?" and "Hey, it's me. How are things going?" Custom messages allow users to type anything in a textbox and send the text to their partner. We included this possibility to offer flexibility in terms of how users like to communicate with their partners. This messaging feature is similar to sending a text message; however, we hoped that preset suggestions for messages might help to create courteous exchanges between partners and not heighten tensions. This reflects findings from our survey where some participants said they would gently ask their partner about their smartphone usage if they felt it was inappropriate.
+
+Usage Statistics: Users can send app usage statistics such as "You have been using Gmail for 4 min and 54 seconds." to their partner (Figure 4a). Such messages are created using the duration of the longest-running app among the currently running ones, since the longest-running app is likely the one keeping the user engaged. This reflects findings that some participants did not realize how long they were on their phone in the presence of others; thus, some additional awareness information could be useful for regulating one's own behavior.
+
+Close Request: Users can send a request to close the currently active app that their partner is interacting with. The partner sees a prompt such as "May I request you to close Facebook?" or "I was hoping you could close Gmail. OK?" (Figure 4b). The partner can cancel the request and continue using the app. If the partner agrees (tapping the OK button), the app shows a 30-second countdown timer to let the person finish the current activity. When the 30 seconds are over, CoAware closes the currently running app. Our goal here was to make the app closing somewhat graceful and delayed, and less of an immediate interruption. This reflects findings from our survey where, again, some participants said that they might ask their partner to change their behavior on their smartphone. Of course, we recognize that actually causing actions to occur on someone else's phone may come across as strong or overly assertive to some people. We wanted to explore this idea to see how people would react to it in further studies.
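+
+Two of the strategies above lend themselves to small sketches: composing the Usage Statistics message from the longest-running app, and the Close Request flow with its 30-second grace period. This is illustrative Python under our own assumptions (a dict of running apps, an injected clock), not CoAware's Android code.
+
+```python
+COUNTDOWN_SECONDS = 30  # grace period after the partner taps OK
+
+def usage_message(running_apps: dict) -> str:
+    """Pick the longest-running app (name -> seconds used) and format the
+    Usage Statistics notification."""
+    app, seconds = max(running_apps.items(), key=lambda item: item[1])
+    minutes, secs = divmod(seconds, 60)
+    return f"You have been using {app} for {minutes} min and {secs} seconds."
+
+class CloseRequest:
+    """Close Request flow: the partner may cancel, or accept and get a
+    30-second countdown before the app is closed."""
+    def __init__(self, app_name: str):
+        self.app_name = app_name
+        self.state = "pending"   # pending -> cancelled, or counting -> closed
+        self._accepted_at = None
+
+    def cancel(self) -> None:
+        self.state = "cancelled"
+
+    def accept(self, now_s: float) -> None:
+        self.state = "counting"
+        self._accepted_at = now_s
+
+    def tick(self, now_s: float) -> bool:
+        """Advance the countdown; True once the app should be closed."""
+        if self.state == "counting" and now_s - self._accepted_at >= COUNTDOWN_SECONDS:
+            self.state = "closed"
+        return self.state == "closed"
+```
+
+For instance, `usage_message({"Gmail": 294})` produces the "4 min and 54 seconds" message quoted above, and a `CloseRequest` only reaches the `closed` state 30 seconds after the partner accepts.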
+
+Using the Summary tab, one can see the usage statistics for the apps. The summary includes the app/category name, the total time the app or category has been used, and the number of launches since the current connection was established (Figure 4c). If the access level App Name or Screen Share is given by the partner, then one can see the app names. The access level App Category only allows one to see the app categories. Using the Details tab, one can see more detailed information about individual app or app category launches (depending on the access level), such as the name, launch time, and duration of use (Figure 4d). Overall, we recognize that not everyone will find the features we propose in CoAware to be useful. Some may find them overbearing, some may find them unnecessary, and some may find them to suit their needs well. This was expected and purposeful, as it let us explore our design ideas and see how participants would react to a fully working system that provides such options.
+
+### 4.2 Implementation Details
+
+CoAware was built on Android SDK 4.4 and leverages smartphones' WiFi Direct to establish peer-to-peer connections between two smartphones. We used WiFi Direct as this technology allows two devices to connect directly without requiring them to connect via Wi-Fi routers or wireless access points, thus enabling information sharing between co-located users in any location (e.g., at home, in a park). Prior research has shown that smartphone activity can be shared with others by instrumenting the device (e.g., attaching an additional display to the back of the device) [26], which can raise privacy concerns due to the visibility of private content in some common contexts (e.g., public places). Hence, we designed an application solution that does not require any additional hardware instrumentation.
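+
+As a rough, platform-neutral analogue of the screen transfer (the real app sends frames over WiFi Direct between Android devices), each captured frame can be length-prefixed so the receiver can split the byte stream back into individual frames. The socket-based sketch below is our illustration, not CoAware's code.
+
+```python
+import socket
+import struct
+
+def send_frame(sock: socket.socket, frame: bytes) -> None:
+    # Prefix each frame with its 4-byte big-endian length.
+    sock.sendall(struct.pack(">I", len(frame)) + frame)
+
+def recv_frame(sock: socket.socket) -> bytes:
+    # Read the length header, then exactly that many payload bytes.
+    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
+    return _recv_exact(sock, length)
+
+def _recv_exact(sock: socket.socket, n: int) -> bytes:
+    buf = b""
+    while len(buf) < n:
+        chunk = sock.recv(n - len(buf))
+        if not chunk:
+            raise ConnectionError("peer closed the connection")
+        buf += chunk
+    return buf
+```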
+
+## 5 STUDY 2: EXPLORATIONS OF COAWARE
+
+We conducted a study in a lab setting as an initial attempt to get feedback from participants about CoAware. We investigated users' feedback on the three access levels for sharing app information amongst co-located partners and explored their opinions on the three notification strategies provided by CoAware. Additionally, we collected participant feedback on privacy issues and on how CoAware creates awareness, and we asked for general feedback on CoAware's features. Naturally, we could have explored our ideas using a field study where participants from various socio-cultural backgrounds could have tried out CoAware over a prolonged period of time. We did not use this approach given that CoAware is still at an early design stage. Field studies bring the risk of participants not trying out all of the features within a design. We felt it was more reasonable to gather initial participant feedback such that the general ideas presented in CoAware could be assessed to understand which may hold the most merit. Then, either CoAware or other applications like it could be created and explored through longer-term usage. The caveat is that our study does not provide generalizable results across a range of real-world situations. Instead, it illustrates initial feedback and directions to help guide future designs, which, at a later point, could be evaluated in the wild with a field study.
+
+### 5.1 Participants and Procedure
+
+We recruited 22 participants (11 couples) from the local community (a large city in North America) to participate in the study. Two participants were 18-24 years old, 11 were 25-34, 4 were 35-44, 2 were 45-54, and 3 were 55-64. All participants were smartphone users and had been in their relationship for an average of 9.5 years (SD = 5.7). None of them had experience using tools to reduce smartphone usage or to support awareness of someone else's smartphone use.
+
+We used two smartphones, a Google Pixel 3 and a Google Nexus 5, for the study. We first showed participants how to use CoAware. Next, participants were given the following two tasks to complete:
+
+Establish a connection: One person (sender) sends a connection request and the other person (receiver) accepts it.
+
+Access information: The sender accesses the app name, app category, and screen on the receiver's device while the receiver 1) browses an e-commerce website (Amazon) to find a suitable camera costing less than $500, 2) finds a rumor/gossip about their favourite actor/actress, and 3) plays a game of their choosing. Once the tasks are completed, the participants switch roles as sender and receiver and repeat the tasks.
+
+We then used a questionnaire to collect their opinions on the access levels and notification strategies for creating co-located smartphone usage activity awareness, on privacy concerns related to CoAware, and on design suggestions to improve the app. We asked participants close-ended questions using 5-point Likert scales regarding (Q1) the usefulness of the three access levels in creating awareness about their partner's smartphone activities, (Q2) the usefulness of the three notification strategies in motivating them to reduce co-located smartphone usage, (Q3) their comfort level when receiving a notification from their co-located partner (for each of the three notification strategies); and (Q4) their comfort level in using the three access levels across different app categories: communication (e.g., Gmail), social media (e.g., Facebook), games, music & entertainment (e.g., Netflix), Internet (e.g., news), and others (e.g., maps). Additionally, they were asked open-ended questions about privacy and awareness issues. At the end, we also asked them to provide feedback and suggestions regarding CoAware's features. A session lasted approximately 40 min in total. We used thematic analysis to analyze the qualitative responses, iteratively reviewing the responses to look for main themes.
+
+
+
+Figure 5: Mean ratings for (a) the usefulness of the access levels in creating activity awareness, (b) the usefulness of the notification strategies in motivating reduced usage, and (c) the comfort level when receiving notifications; (d) mean comfort level for sharing information across various app categories.
+
+### 5.2 Results
+
+For questions Q1-Q3, we used Friedman tests with post-hoc Wilcoxon tests to analyze the data (Bonferroni-adjusted α-level from 0.05 to 0.016). (Q1) Figure 5a shows the mean rating of how useful the three access levels were for creating awareness of partners' activities. We found a mean rating of 3.91 (SD = 0.53) for App Name, 3.0 (SD = 0.93) for App Category, and 4.14 (SD = 0.89) for Screen Share. A Friedman test showed a significant difference between the access levels (χ²(2, N = 22) = 14.17, p < .001). Post-hoc pairwise comparisons revealed that both App Name and Screen Share were rated more useful in creating activity awareness than App Category (no significant difference between App Name and Screen Share). (Q2) The mean rating for the usefulness of the three notification strategies in motivating reduced co-located smartphone usage was 4.18 (SD = 1.05) for Message, 3.27 (SD = 0.83) for Usage Statistics, and 2.55 (SD = 1.18) for Close Request (Figure 5b). A Friedman test showed a significant difference (χ²(2, N = 22) = 20.58, p < .001), and post-hoc pairwise comparisons showed significant differences between all pairs. (Q3) Figure 5c shows that participants were more comfortable receiving a notification with Message (4.18, SD = 1.05) and Usage Statistics (3.5, SD = 0.9) than with Close Request (2.5, SD = 1.3). A Friedman test showed a significant difference between the strategies (χ²(2, N = 22) = 20.55, p < .001), and post-hoc pairwise comparisons showed that Message and Usage Statistics each differed significantly (p < .001) from Close Request (no significant difference between Message and Usage Statistics).
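+
+For reference, the Friedman statistic reported above can be computed from within-participant rank sums. The minimal sketch below averages tied ranks but omits the tie-correction factor that standard statistics packages apply, so its values can differ slightly from theirs when ties occur.
+
+```python
+def friedman_chi2(scores):
+    """Friedman chi-square for a list of per-participant rating lists
+    (one rating per condition); k - 1 degrees of freedom. No tie-correction
+    factor is applied."""
+    n, k = len(scores), len(scores[0])
+    rank_sums = [0.0] * k
+    for row in scores:
+        # Rank this participant's k ratings (1 = lowest), averaging ties.
+        order = sorted(range(k), key=lambda c: row[c])
+        ranks = [0.0] * k
+        i = 0
+        while i < k:
+            j = i
+            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
+                j += 1
+            avg_rank = (i + j) / 2 + 1
+            for m in range(i, j + 1):
+                ranks[order[m]] = avg_rank
+            i = j + 1
+        for c in range(k):
+            rank_sums[c] += ranks[c]
+    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
+```
+
+With three participants who all rank three conditions identically, the statistic is 6.0; identical ratings everywhere give 0.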
+
+We used a one-way MANOVA to analyze the responses to Q4 (Figure 5d). Results revealed a significant difference in participants' ratings of the access levels (F(10, 118) = 4.37, p < .001; Wilk's λ = .53, partial η² = .27). Tukey's HSD post-hoc tests showed that for the mean scores of the communication, social media, and other categories, there were significant differences between Screen Share and App Name (p < .001) and between Screen Share and App Category (p < .001). For games, results showed a significant difference between Screen Share and App Name (p < .001). For Internet browsing, there was a significant difference between Screen Share and App Category (p < .001).
+
+In the open-ended questions about privacy and awareness issues, participants expressed that CoAware would be helpful to maintain their time commitment to each other, create more smartphone usage awareness so that they do not interrupt their partner during an important ongoing activity, and that Screen Share would help them during co-located collaborative activities such as sharing information with others. "[CoAware] can be very useful for creating awareness as it allows us to check what other has been doing, especially when he is on phone for a long time." [P7, female, relationship 5 years] || "The app will help when we want to show something to each other but sitting different places in a room." [P3, male, relationship 5 years]
+
+Additionally, participants mentioned that there are other potential use cases for CoAware (e.g., sharing information with partner, monitoring their children's smartphone usage). "Sharing feature is useful as I can show photos and videos to my wife; I can share the game that I am playing to my son." [P18, male, relationship 13 years]
+
+Participants also expressed privacy concerns regarding Screen Share. For instance, two females mentioned that they used some apps to track health-related issues which they might not feel comfortable sharing. Others wanted to have a personal digital space away from their partner which they did not want intruded upon, as such intrusion might create stress and tension in family life. "It will hamper my privacy, I may not feel comfortable at all for sharing screens of my messages and emails" [P4, female, relationship 5 years]
+
+Participants provided suggestions to improve CoAware. For example, instead of showing pop-ups, they suggested using standard notifications that commonly appear at the top of the screen. Six participants also suggested that partners should be allowed to only send a fixed number of notifications within a certain time (e.g., 10 notifications per day). Some participants wanted more notification styles and strategies or more statistics to better motivate the partner to engage with them. Two participants also felt that instead of sharing the entire screen with their partner, a blurred image or custom screen area could be shared as this would protect privacy.
+
+## 6 DISCUSSION AND DESIGN RECOMMENDATIONS
+
+Through our studies, we were able to uncover a range of considerations for how applications like CoAware should be designed, based on the app's observed strengths and weaknesses.
+
+Ensuring personal digital space: The participants' lower comfort ratings when using Screen Share illustrate that there is often a personal digital space among partners which they typically want to preserve. Thus, we suggest that applications like CoAware should focus on sharing higher-level or more abstract information (e.g., the app name) instead of sharing very detailed information such as the screen content. Low-level information akin to what we provided as one feature within CoAware is likely too much for many people and could inadvertently create greater tensions between partners.
+
+Level of access: Participants expressed concerns about using the Close Request feature within CoAware as it takes some control over a partner's phone and may disrupt their on-going activities. This, again, could create further tension between partners. Instead, participants felt that solutions that alert others of what they may want to change in their own behavior, rather than take control, would be more acceptable. Thus, when designing apps for communication between devices, it is important to carefully consider how much control one should have over another person's device.
+
+Notification Strategies: Participants found pop-up notifications to be distracting and somewhat overbearing. Thus, we suggest that applications like CoAware use the standard notification mechanisms already found on smartphones; the forced change in activity caused by pop-ups (e.g., switching from a game to a notification UI) could create new frustrations.
+
+Determining whether the phone is in use: Sometimes a phone may remain active although the user may not be engaged with it. This means that usage information provided to one's partner may not be accurate. One possibility could be to rely on information about the user's on-screen taps in combination with information about the running apps to determine usage history.
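+
+The suggestion above can be sketched as a simple predicate: treat the phone as "in use" only when the screen is on, some app is in the foreground, and the last touch was recent. The 60-second idle threshold is our assumption, not a value from the paper.
+
+```python
+IDLE_THRESHOLD_S = 60  # assumed: taps older than this count as idle
+
+def phone_in_use(screen_on: bool, foreground_app, last_tap_s: float, now_s: float) -> bool:
+    """Combine screen state, foreground app, and tap recency to decide
+    whether usage should be reported to the partner."""
+    return bool(screen_on
+                and foreground_app is not None
+                and now_s - last_tap_s <= IDLE_THRESHOLD_S)
+```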
+
+Study 1 provided insights and direction for our design work with CoAware and helped lead to the aforementioned design suggestions. Its results also moved beyond prior literature (e.g., [42]) to allow us to more deeply understand when, where, and during what activities a system like CoAware would potentially be used in real-world situations. This can help guide future studies that want to test applications similar to CoAware to know when, where, and how such testing should be done. One could also imagine using Study 1's data on locations and context to think about ways to further refine applications such as CoAware. For example, users could be given options to customize applications so that they are able to choose what types of information they are comfortable revealing to their partner based on location, time, and activity. Such information could also be inferred by applications and then adjusted as needed by users.
+
+We also recognize that there is a darker side to applications like CoAware, and designers should be cautious in this regard. The challenge with apps that track or share mobile device usage between partners is that they can potentially alter relationship dynamics, given the increased access to information [11]. This could create issues around trust or control between partners. While our results did not reveal such concerns, they are most certainly possible. Further research is required to understand partners' information sharing behaviour with others and its impact on their relationship. We also acknowledge that apps with features like CoAware's could be seen as highly problematic for relationships that contain domestic abuse or family violence [12]. As apps like CoAware enable access to information on partners' devices, this could lead to the system being misused (e.g., coercing a partner to share information constantly, surveilling one's partner) and create anxiety and tensions within a relationship. Of course, there are no easy solutions for such situations. CoAware could, for example, ask users for details about their relationship satisfaction before making features available to them. Yet partners in an abusive relationship could easily answer untruthfully. Apps could track couples' information sharing behaviours and provide warnings if acts that appear to resemble surveillance occur, or features could be turned off based on certain negative behaviours. However, this may, again, not be a complete solution and may be hard to detect. As such, designers need to think carefully about the possible negative consequences of apps with features similar to CoAware's.
+
+## 7 LIMITATIONS AND FUTURE WORK
+
+Our crowdsourced survey has some inherent limitations. Since we used AMT, our participant demographics were determined by the demographics of AMT workers (USA and India). With a larger sample and participants from more cultures, it would be possible to investigate how people's perceptions of co-located phone usage differ between cultures. It would also be interesting to conduct an in-person study with interviews to determine whether the results differ from those obtained from a crowdsourced study. In an in-person study, we would be able to see how people use the design in real-life situations, which may not completely match the tasks in our study and which is essential to ground and guide the development of CoAware. This would further help us cover ethical aspects of smartphone activity awareness between couples which might be missed in our online survey. The challenges and potential problems that may arise with increased awareness, especially in abusive and problematic relationships, need further investigation. We believe such future research would provide important insights into the scope and impact (both positive and negative) of using CoAware and similar technological solutions in sensitive family situations.
+
+We concentrated on partners' co-located smartphone activity awareness and investigated their opinions on CoAware. However, we envision extending our approach with CoAware to other relationship types, such as between parents and children, where the parent could use CoAware to monitor and control the child's smartphone activities. This would require an in-depth study of parent-child relationships and the consideration of many other aspects, such as the diversity of house rules, family traditions of raising children, child age, and the educational backgrounds of parents.
+
+In the future, it would be interesting to find out in more detail whether and how our findings were influenced by our participants' age, cultural background, and relationship length. This would require a larger and more diverse set of participant couples. Furthermore, with our initial encouraging results and reactions, we plan to further develop CoAware and to perform a longitudinal study with a full-fledged version to examine its effect on sustained behavior change. Additionally, CoAware inspires the design of context-aware smartphones that can trigger notifications based on pre-set rules regarding location and surrounding people to reduce the smartphone use of co-located persons.
+
+## 8 CONCLUSION
+
+Our paper examines the issues that arise when people use smartphones in front of their partners, how they try to solve these problems, and how they search for potential solutions. In a crowdsourced survey, we found that people often feel ignored and get frustrated due to their partner's smartphone use in front of them. We also found that these problems exist even though people consciously attempt to resolve them through mutual understanding, and sometimes with explicit house rules. Partners frequently ask about and share each other's smartphone activities verbally, yet are often not confident about knowing what the other is doing on the smartphone. Consequently, we designed CoAware, a smartphone app, to further explore the design space. CoAware allows partners to digitally share each other's smartphone activities using different levels of detail. In a study where couples used CoAware, we observed that they found the app to be a promising solution to improve awareness, help reduce smartphone overuse, and, perhaps, even monitor a child's smartphone activities. We learned that designing similar solutions requires careful consideration of various app-specific privacy concerns and people's tolerance towards being advised by their partners.
+
+## References
+
+[1] D. Ahlström, K. Hasan, and P. Irani. Are you comfortable doing that? acceptance studies of around-device gestures in and for public settings. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, MobileHCI '14, p. 193-202. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2628363.2628381
+
+[2] F. Alallah, A. Neshati, N. Sheibani, Y. Sakamoto, A. Bunt, P. Irani, and K. Hasan. Crowdsourcing vs laboratory-style social acceptability studies? examining the social acceptability of spatial user interactions for head-worn displays. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-7. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173884
+
+[3] M. G. Ames. Managing mobile multitasking: The culture of iphones on stanford campus. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, CSCW '13, p. 1487-1498. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2441776.2441945
+
+[4] L. Barkhuus. The mismeasurement of privacy: Using contextual integrity to reconsider privacy in hci. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, p. 367-376. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2207676.2207727
+
+[5] S. Beech, E. Geelhoed, R. Murphy, J. Parker, A. Sellen, and K. Shaw. Lifestyles of working parents: Implications and opportunities for new technologies. Technical report, HP Tech report HPL-2003-88 (R. 1), 2004.
+
+[6] D. Buschek, M. Hassib, and F. Alt. Personal mobile messaging in context: Chat augmentations for expressiveness and awareness. ACM Trans. Comput.-Hum. Interact., 25(4), Aug. 2018. doi: 10.1145/3201404
+
+[7] A. Butler, S. Izadi, and S. Hodges. Sidesight: Multi-"touch" interaction around small devices. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, UIST '08, p. 201-204. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1449715.1449746
+
+[8] H. Cramer and M. L. Jacobs. Couples' communication channels: What, when & why? In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 709-712. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702356
+
+[9] E. C. Derix and T. W. Leong. Days of our lives: Family experiences of digital technology use. In Proceedings of the 30th Australian Conference on Computer-Human Interaction, OzCHI '18, p. 332-337. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3292147.3292185
+
+[10] E. Duke and C. Montag. Smartphone addiction, daily interruptions and self-reported productivity. Addictive Behaviors Reports, 6:90-95, 2017. doi: 10.1016/j.abrep.2017.07.002
+
+[11] D. Freed, J. Palmer, D. Minchala, K. Levy, T. Ristenpart, and N. Dell. "a stalker's paradise": How intimate partner abusers exploit technology. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3174241
+
+[12] D. Freed, J. Palmer, D. E. Minchala, K. Levy, T. Ristenpart, and N. Dell. Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders. Proc. ACM Hum.-Comput. Interact., 1(CSCW), Dec. 2017. doi: 10.1145/3134681
+
+[13] A. Gasimov, F. Magagna, and J. Sutanto. Camb: Context-aware mobile browser. In Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia, MUM '10. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1899475.1899497
+
+[14] C. F. Griggio, M. Nouwens, J. McGrenere, and W. E. Mackay. Augmenting couples' communication with lifelines: Shared timelines of mixed contextual information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300853
+
+[15] S. Gustafson, D. Bierwirth, and P. Baudisch. Imaginary interfaces: Spatial interaction with empty hands and without visual feedback. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST '10, p. 3-12. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1866029.1866033
+
+[16] A. Hang, E. von Zezschwitz, A. De Luca, and H. Hussmann. Too much information! user attitudes towards smartphone sharing. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, NordiCHI '12, p. 284-287. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2399016.2399061
+
+[17] K. Hasan, D. Ahlström, and P. Irani. Ad-binning: Leveraging around device space for storing, browsing and retrieving mobile device content. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 899-908. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466115
+
+[18] K. Hasan, D. Mondal, D. Ahlström, and C. Neustaedter. An exploration of rules and tools for family members to limit co-located smartphone usage. In Proceedings of the 11th Augmented Human International Conference, AH '20. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3396339.3396364
+
+[19] M. Hassenzahl, S. Heidecker, K. Eckoldt, S. Diefenbach, and U. Hillmann. All you need is love: Current strategies of mediating intimate relationships through technology. ACM Trans. Comput.-Hum. Interact., 19(4), Dec. 2012. doi: 10.1145/2395131.2395137
+
+[20] K. Hinckley, J. Pierce, M. Sinclair, and E. Horvitz. Sensing techniques for mobile interaction. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, UIST '00, p. 91-100. Association for Computing Machinery, New York, NY, USA, 2000. doi: 10.1145/354401.354417
+
+[21] A. Hiniker, S. R. Hong, T. Kohno, and J. A. Kientz. Mytime: Designing and evaluating an intervention for smartphone non-use. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 4746-4757. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858403
+
+[22] A. Hiniker, S. Y. Schoenebeck, and J. A. Kientz. Not at the dinner table: Parents' and children's perspectives on family technology rules. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, p. 1376-1389. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2818048.2819940
+
+[23] Apple Inc. Use night shift on your iphone, ipad, and ipod touch, 2019. Retrieved October 9, 2020 from https://support.apple.com/en-ca/HT207570.
+
+[24] Apple Inc. Use screen time on your iphone, ipad, or ipod touch, 2019. Retrieved October 9, 2020 from https://support.apple.com/en-ca/HT208982.
+
+[25] M. Jacobs, H. Cramer, and L. Barkhuus. Caring about sharing: Couples' practices in single user device access. In Proceedings of the 19th International Conference on Supporting Group Work, GROUP '16, p. 235-243. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2957276.2957296
+
+[26] P. Jarusriboonchai, A. Malapaschas, T. Olsson, and K. Väänänen. Increasing collocated people's awareness of the mobile user's activities: A field trial of social displays. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, p. 1691-1702. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2818048.2819990
+
+[27] A. K. Karlson, A. B. Brush, and S. Schechter. Can i borrow your phone? understanding concerns when sharing mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, p. 1647-1650. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1518701.1518953
+
+[28] F. Kawsar and A. B. Brush. Home computing unplugged: Why, where and when people use different connected devices at home. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '13, p. 627-636. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2493432.2493494
+
+[29] H.-J. Kim, J.-Y. Min, K.-B. Min, T.-J. Lee, and S. Yoo. Relationship among family environment, self-control, friendship quality, and adolescents' smartphone addiction in south korea: Findings from nationwide data. PLOS ONE, 13(2):1-13, 02 2018. doi: 10.1371/journal.pone.0190896
+
+[30] Y.-H. Kim, J. H. Jeon, E. K. Choe, B. Lee, K. Kim, and J. Seo. Timeaware: Leveraging framing effects to enhance personal productivity. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 272-283. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858428
+
+[31] J. Kjeldskov and J. Paay. Just-for-us: A context-aware mobile information system facilitating sociality. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, MobileHCI '05, p. 23-30. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1085777.1085782
+
+[32] M. Ko, S. Choi, K. Yatani, and U. Lee. Lock n' lol: Group-based limiting assistance app to mitigate smartphone distractions in group activities. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 998-1010. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858568
+
+[33] S. Kratz and M. Rohs. Hoverflow: Exploring around-device interaction with ir distance sensors. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '09. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1613858.1613912
+
+[34] U. Lee, J. Lee, M. Ko, C. Lee, Y. Kim, S. Yang, K. Yatani, G. Gweon, K.-M. Chung, and J. Song. Hooked on smartphones: An exploratory study on smartphone overuse among college students. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 2327-2336. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557366
+
+[35] D. Lottridge, E. Marschner, E. Wang, M. Romanovsky, and C. Nass. Browser design impacts multitasking. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 56, pp. 1957-1961. SAGE Publications, Los Angeles, CA, 2012.
+
+[36] ManicTime. Time tracker management tracking software, 2019. Retrieved October 9, 2020 from https://www.manictime.com/.
+
+[37] T. Matthews, K. Liao, A. Turner, M. Berkovich, R. Reeder, and S. Consolvo. "she'll just grab any device that's closer": A study of everyday device & account sharing in households. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 5921-5932. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858051
+
+[38] mINdCUBEd. Life, unplugged, 2019. Retrieved October 9, 2020 from https://github.com/ricvalerio/foregroundappchecker/.
+
+[39] C. Neustaedter, S. Harrison, and A. Sellen. Connecting families: The impact of new communication technologies on domestic life. Springer, 2013.
+
+[40] B. Noë, L. D. Turner, D. E. Linden, S. M. Allen, B. Winkens, and R. M. Whitaker. Identifying indicators of smartphone addiction through user-app interaction. Computers in human behavior, 99:56-65, 2019.
+
+[41] M. Nouwens, C. F. Griggio, and W. E. Mackay. "whatsapp is for family; messenger is for friends": Communication places in app ecosystems. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 727-735. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025484
+
+[42] E. Oduor, C. Neustaedter, W. Odom, A. Tang, N. Moallem, M. Tory, and P. Irani. The frustrations and benefits of mobile device usage in the home when co-present with family members. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, DIS '16, p. 1315-1327. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2901790.2901809
+
+[43] N. Park and H. Lee. Social implications of smartphone use: Korean college students' smartphone use and psychological well-being. Cyberpsychology, Behavior, and Social Networking, 15(9):491-497, 2012.
+
+[44] RescueTime. Rescuetime: time management software for staying productive and happy in the modern workplace, 2019. Retrieved October 9, 2020 from https://www.rescuetime.com/.
+
+[45] B. Schilit, N. Adams, and R. Want. Context-aware computing applications. In 1994 First Workshop on Mobile Computing Systems and Applications, pp. 85-90. IEEE, 1994.
+
+[46] C. Steiner-Adair and T. H. Barker. The big disconnect: Protecting childhood and family relationships in the digital age. Harper Business, 2013.
+
+[47] A. Thayer, M. J. Bietz, K. Derthick, and C. P. Lee. I love you, let's share calendars: Calendar sharing as relationship work. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW '12, p. 749-758. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2145204.2145317
+
+[48] R. Valério. Foreground app checker for android, 2019. Retrieved October 9, 2020 from https://github.com/ricvalerio/foregroundappchecker/.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e76897b23755e2415f8b912591890c2bdec4f14a
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/3fFZNlSO7GC/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,241 @@
+§ COAWARE: DESIGNING SOLUTIONS FOR BEING AWARE OF A CO-LOCATED PARTNER’S SMARTPHONE USAGE ACTIVITIES
+
+Author Name*
+
+Affiliation
+
+§ ABSTRACT
+
+There is a growing concern that smartphone usage in front of family or friends can be bothersome and even deteriorate relationships. We report on a survey examining smartphone usage behavior and problems that arise from overuse when partners (married couples, common-law relationships) are co-located. Results show that people have various expectations from their partner, and often feel frustrated when their partner uses a smartphone in front of them. Study participants also reported a lack of smartphone activity awareness that could help decide when or how to communicate expectations to the partner. This motivated us to develop an app, CoAware, for sharing smartphone activity-related information between partners. In a lab study with couples, we found that CoAware has the potential to improve smartphone activity awareness among co-located partners. In light of the study results, we suggest design strategies for sharing smartphone activity information among co-located partners.
+
+Index Terms: Human-centered computing-HCI design and evaluation methods-User Studies; User interface design
+
+§ 1 INTRODUCTION
+
+Smartphones continue playing a pivotal role in our daily communications with family and friends [5, 40]. Smartphones not only enable seamless communication over long distances, but also allow access to information anywhere, anytime. However, there is growing evidence that people may overuse smartphones, both when alone and in the presence of others, i.e., in a co-located situation [3, 28]. Moreover, smartphones are designed as private and personal devices: the activities that take place on the screen can, when desired, easily remain completely unknown to co-located persons. Not being aware of a co-located person's on-screen activities can cause frustration and even anxiety [42, 43].
+
+A significant amount of work has explored smartphone overuse and its consequences [3, 28, 46]. Much less attention has been devoted to designing solutions for co-located activity awareness, which could mitigate the frustration associated with smartphone overuse and improve interpersonal communication. A few recent studies have attempted to increase smartphone activity awareness by helping people to be more aware of co-located people's smartphone activities, providing a rich shared experience, and even motivating people to initiate interaction with nearby persons [26, 42]. These studies suggested different strategies to raise awareness, such as using 'talk-aloud' to pass on what one is doing on the device [42] or attaching a second display to the back of the phone to show on-screen activities to co-located individuals [26]. Though these solutions have the potential to increase smartphone activity awareness, they might not be appropriate in some common contexts (e.g., in social gatherings and in public places) and they might not be practicable due to their dependency on hardware instrumentation.
+
+Social relationships can greatly shape the degree and nature of people's information sharing with other co-located individuals [8, 14]. For example, the information sharing patterns of people with their partner, parents, and children may be very different, and may even vary largely depending on the age of the individuals or the length of their relationship [4, 22]. To narrow down the focus, we concentrate on an in-depth investigation of various aspects of smartphone use by co-located partners who are married, in a common-law relationship, or in any other (romantic) relationship. We focus on such relationships because partners are often co-located for a substantial portion of each day and their mutual understanding is important for a healthy home environment [9, 19, 41, 47]. Couple relationships are already very nuanced and complex, involving many aspects such as closeness, connectedness, interpersonal trust, and perceptions of empathy. This makes them especially challenging to study on their own as a focal relationship [8, 19, 39].
+
+We first conducted a crowdsourced study examining: 1) people's smartphone habits when being co-located with their partner, 2) the concerns that people have about their partner's smartphone usage, 3) the rules and privacy issues partners have regarding their co-located smartphone usage, and 4) the strategies that people take to become aware of their partner's smartphone on-screen activities. Our results show that people often use smartphones while co-located with the partner, which sometimes leads to anger and frustration. We also found that people often need to respond to their partner's queries about their smartphone activities. Most people share information truthfully, although the details of the shared information vary widely between apps. Furthermore, many people feel that they are not fully aware of their partner's smartphone activities when co-located. This lack of awareness can lead to unpleasant situations. Some strategies such as 'talk-aloud' are being used to be aware of others' smartphone activities, yet there remains a lack of expressive tools to support smartphone activity awareness.
+
+Guided by these findings, we explored ways to increase smartphone activity awareness among co-located partners. Our goal was to investigate smartphone-based solutions to help partners become aware of each other's smartphone activities and to help them improve their interpersonal communication. We developed a smartphone app, CoAware, that enables users to create co-located smartphone usage awareness by sharing the names, categories, or screens of apps being used by one's partner. Additionally, CoAware provides partners with ways to send notifications to each other that might motivate the partner to reduce co-located smartphone usage. We continued with an in-lab study with couples who explored and provided feedback on the features offered by CoAware. The results revealed that high-level information, such as sharing app names and sending notifications, is useful for providing co-located smartphone usage awareness; however, low-level information about phone usage (e.g., screen sharing) and allowing co-located partners to control another person's phone were seen as less necessary and sometimes overbearing. We also found that awareness of a partner's smartphone activities was not desired by all participants. Some participants were satisfied with the level of information they already had, and some were content relying on social protocols with their partner to remedy challenging situations. Thus, design solutions should not be thought of as a one-size-fits-all approach.
+
+*e-mail: first.last@email.com
+
+§ 2 RELATED WORK
+
+§ 2.1 SMARTPHONE OVERUSE AND REDUCTION
+
+Smartphone overuse and smartphone addiction are active areas of research that examine people's smartphone usage behavior. Prior work showed that the overuse of smartphones can lead to a decrease in productivity in workplaces due to phone use during work hours [10], hamper family relationships, and even cause domestic violence [29]. Researchers have investigated ways of detecting smartphone addiction based on users' smartphone usage behavior. For example, a recent study [40] identifies lifestyle and social media related apps to be associated with smartphone addiction. There are commercial apps that track users' smartphone and computer usage and offer summarized information allowing users to be more productive in their daily activities [36, 44]. Researchers have also explored solutions to provide people with their desktop computer usage information [21, 30], and non-work related web access information [35]. Despite such systems, further exploration is needed to examine how to apply this knowledge to design persuasive smartphone interfaces to promote co-located engagement for improved interpersonal relationships.
+
+§ 2.2 INFORMATION SHARING THROUGH DIGITAL DEVICES
+
+Sharing digital devices and accounts is a common practice amongst household members, yet this topic has received less attention from an awareness perspective. Instead, the topic has mostly been motivated by the pros and cons associated with device sharing. Sharing devices can be viewed as an all-or-nothing approach for sharing information [27, 37], which gives rise to privacy and security issues [4, 25]. Studies also showed that device sharing concerns depend on the user's relation to the other user as well as the types of data being shared, which suggests the need for better privacy and security models for device sharing [16, 18, 27]. The intricate ways that couples communicate [8] and their need to have a nearly continuous connection have gained deep attention in the literature [19, 41, 47]. This motivated context sharing among people in close relations. Prior research [6, 13, 14, 31] showed how contextual information sharing (heart rate, distance from home, etc.) can be leveraged to make partners aware of each other's context and activities. Both Buschek et al. [6] and Griggio et al. [14] observed that context-awareness improves the sense of connectedness, and pointed out interpretability and privacy concerns that may arise from additional information inferred from the shared context. While context sharing helps people in close relationships to be aware of each other's activities, smartphone addiction and overuse of other digital tools may create an awareness barrier even when people are present in the same context.
+
+§ 2.3 SMARTPHONE ACTIVITIES AMONG CO-LOCATED PEOPLE
+
+A number of studies analyzed smartphone distractions during group activities [26, 32, 34]. Ko et al. [32] developed a smartphone app that allows a group of co-located users to simultaneously lock their smartphones during activities such as studying and chatting. Jarusriboonchai et al. [26] proposed an approach to communicate a user's activities on the backside of a smartphone by displaying the icon and name of the app being used; users did not feel comfortable using it when they were unwilling to reveal the app name. With our design, CoAware, users can intuitively share activity information at various granularity levels, with the goal of avoiding such privacy concerns. Oduor et al. [42] examined people's use of smartphones in the presence of family members in the home. They found that people often feel that the smartphone usage of their co-located family members is non-urgent, and feel ignored when they do not know the others' true activities. However, many fundamental questions are yet to be explored: When, where, and how often do such problems arise? Are couples interested in knowing each other's smartphone activities? If so, then to what extent? How often do couples share activity information? With how much detail and how truthfully? Does the relationship, privacy, trust, or smartphone app play any role? We cover these aspects and more.
+
+§ 2.4 USAGE-AWARE CO-LOCATED SMARTPHONES
+
+Though there has been substantial work on different sensing solutions that allow users to track the state of the device [20, 45], very little is known about how to detect two co-located smartphones and how to share information between them. Prior research showed that on-device sensors (e.g., tilt) could be used to make a smartphone aware of its state (e.g., orientation) [20] and usage context (e.g., resting on a table) [45]. Beyond sensing a device state or context, researchers explored external sensing solutions to enable smartphones to track surrounding activities and environments [7, 15, 17, 33] and their social acceptance [1], but in limited contexts. To date, however, due to the lack of advances in sensing solutions, little attention has been paid to using this space for co-located collaborative interactions that promote interpersonal engagement. CoAware provides a way of pairing co-located smartphones within 200 meters without requiring any WiFi hotspot or smartphone data connection, and thus allows co-located users to connect with each other's devices.
+
+§ 3 STUDY 1: EXPLORING SMARTPHONE USAGE
+
+We started our exploration by conducting a crowdsourced study investigating people's smartphone usage, rules, trust and privacy concerns, and activity awareness when they use smartphones in the presence of their partner. Prior research has shown that crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), are popular and convenient tools for conducting user studies and collecting reliable data [2]. We used AMT to run our study.
+
+§ 3.1 ONLINE SURVEY
+
+We created an online survey with 58 questions to collect data from smartphone users. Figure 1 shows a sample of questions from the survey. The survey contained five sections: i) 18 questions to collect demographic information about participants and their partners (e.g., age, nationality, gender, education, and household conditions); ii) 15 questions about smartphone usage (e.g., how often, where, and what types of apps), usage rules in the household, and privacy-related issues; iii) 9 questions about trust-related issues that arise when people share their smartphone usage activities with their partner and other family members; iv) 5 questions targeted at smartphone usage behavior and habits when co-located with the partner; and v) 11 questions regarding the awareness of the partner's smartphone usage and possible strategies used to share usage-related information with a partner. In total, we used 15 open-ended questions, 26 single/multiple-choice questions, and 17 5-point Likert scale questions. Most of the open-ended questions were used to collect descriptive responses about co-located smartphone usage, whereas the Likert scale questions were designed to quantify results and to capture shades of perceptions regarding smartphone usage issues. The single/multiple-choice questions were primarily used to collect demographic data.
+
+§ 3.2 PARTICIPANTS AND STUDY PROCEDURE
+
+We posted our survey as a Human Intelligence Task (HIT) to AMT (with a $1.00 compensation). We specified two qualifications for participants: a minimum 70% approval rate and a minimum of 50 previously completed HITs. We also set the following requirements for the workers: they (a) must own a smartphone, (b) be either married, in a common-law relationship, or in a partner relationship, where (c) the partner must also own a smartphone and (d) currently live in the same household. In total, we collected 109 responses in seven days. We subsequently removed 31 responses which contained one or more unanswered questions and/or invalid answers. Consequently, we analyzed data from 78 participants (34 female, 44 male). On average, participants took 25 minutes to complete all the questions.
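The screening step described above (109 responses collected, 31 dropped for unanswered or invalid questions, 78 analyzed) amounts to a completeness filter. The sketch below is a hypothetical reconstruction; the response dictionaries and the "every answer non-empty" rule are illustrative assumptions, not the authors' actual validity criteria:

```python
# Hypothetical screening of crowdsourced survey responses: keep a
# response only if every question has an answer (the validity rule
# here is an illustrative placeholder, not the study's real criteria).

def screen_responses(responses, n_questions):
    """Return only responses with a non-empty answer for all questions."""
    return [r for r in responses
            if len(r) == n_questions
            and all(answer not in (None, "") for answer in r.values())]

# Toy data: one complete and one partially answered 3-question survey.
complete = {"q1": "yes", "q2": "5 hours", "q3": "daily"}
partial = {"q1": "yes", "q2": "", "q3": "daily"}
print(len(screen_responses([complete, partial], n_questions=3)))  # prints 1
```

In the actual study this kind of filter would run over all 58 answers per response; any fuzzier validity checks (e.g., for nonsense free-text answers) would need manual review on top.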
+
+§ 3.3 DATA ANALYSIS AND RESULTS
+
+We applied a thematic analysis on the qualitative data where two researchers separately went through all the comments to perform open coding. Later they consolidated and reconciled codes into a common code set. Self-reported quantitative data were analyzed using standard statistical methods such as mean and standard deviation.
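The quantitative side of this analysis is ordinary descriptive statistics. As a minimal sketch, with invented values rather than the survey data:

```python
import statistics

# Illustrative self-reported daily smartphone usage in hours; these
# values are made up for the example, not taken from the study.
hours = [1.5, 2.0, 3.0, 3.5, 4.0, 6.0]

mean = statistics.mean(hours)   # arithmetic mean
sd = statistics.stdev(hours)    # sample standard deviation
print(f"mean = {mean:.1f}, SD = {sd:.1f} hours/day")
```

Whether to report the sample (`stdev`) or population (`pstdev`) standard deviation depends on whether the respondents are treated as a sample of a larger population; survey papers conventionally report the sample version.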
+
+§ 3.3.1 DEMOGRAPHICS AND SMARTPHONE USAGE
+
+The majority of our participants were from two age ranges: between 24 and 34 years (28 participants) and between 35 and 44 years (29 participants). Three participants were aged between 18 and 24 years, nine between 45 and 54 years, and nine between 55 and 64 years. Only two participants were 65 years or older. Fifty-five participants were from the USA, 23 from India. On average, our participants had been in their relationship for 13.3 years. Participants and their partners had been using smartphones for 8.5 and 7.9 years, respectively. Participants reported using smartphones an average of 3.3 (SD=2.0) hours per day.
+
+For how many years have you and your partner been in a relationship?
+
+In total, for an average day, how many hours do you co-locate with your partner (that means you and your partner are together at any place such as at home - excluding sleeping time, shopping mall, restaurant, park, or sidewalk)?
+
+In total, for an average day, how many hours do you spend in collaborative activities (for example, cooking) with your partner?
+
+Please estimate how often it happens that you are concerned about your partner using his/her smartphone in your presence. ☐ A couple times a month or less ☐ Once a week ☐ A few times a week ☐ About once a day ☐ More than once a day
+
+Please describe the three most recent situations where your partner was concerned about you using your smartphone in the presence of your partner.
+
+Figure 1: Sample questions on a) smartphone usage and b) concerns
+
+Participants indicated using some categories of apps more often than other categories: 90% used communication apps (e.g., email, text message, Skype, phone calls) at least once a day, 83% used social media apps (e.g., Facebook, Instagram, Snapchat) at least once a day, and 85% used the Internet (e.g., reading news, hobby-related browsing, banking) at least once a day. Only 19% of the participants used location-sharing apps (e.g., Glympse, Life360, Find My Friends) at least once a day and only 29% used health-related apps (e.g., fitness tracking, sports, or medicine apps) at least once a day.
+
+§ 3.3.2 CO-LOCATED SMARTPHONE USAGE
+
+Participants reported that they are co-located with their partner a significant amount of time (mean 5.9, SD=3.6 hours/day, excluding sleeping time) and that they often engage in collaborative activities with their partner, such as cooking or watching movies (mean 2.2, SD=1.4 hours/day). In response to an open-ended question, participants reported various reasons for using their smartphones when co-located with the partner. In total, we analyzed 167 coded responses, which fall into the following five broad categories: 1) communication/socialization with friends or family members (38% of the responses), 2) work-related activities (20%), 3) checking information and updates for their own interest (20%), 4) finding information for a purpose shared with the co-located partner (13%), and 5) personal entertainment (9%). These results are similar to earlier qualitative results [42] which showed that people use smartphones in the presence of their family members to check notifications, find information, and fill time when they are bored. Our results also revealed that co-located smartphone use frequently happens at home (44% of the responses), mostly in the living room and bedroom. Many participants talked about using their phones at home when co-located with their partners. "... watching a movie on tv at home and he was upset that I checked my phone." [P20, female, relationship 27 years]
+
+Other common places for co-located smartphone use include restaurants (18% of the responses), public spaces such as shopping malls or parks (24%), social gatherings (7%), and inside cars (7%). All participants reported frequent occurrences (at least once a month) of their partner expressing concerns regarding their co-located smartphone usage. "I pulled out my phone just to go on it before the food came and he complained." [P1, female, relationship 4 years] || "We were in bed together and not really paying attention to what she was saying." [P16, male, relationship 21 years]
+
+We coded a total of 114 responses regarding the situations (places and activities) when this had happened: at mealtime, either at home or in a restaurant (29% of the responses), while watching tv/movies together at home (16%), in public places such as a shopping mall (15%), in the bedroom at bedtime (13%), during ongoing conversations (10%), and in some other situations such as at a social gathering or while in the car. Two participants from India responded that it had happened while being in a temple.
+
+
+Figure 2: Participants' self-reported smartphone application usage frequency when co-located with their partner.
+
+Participants' concerns were mostly related to the lack of attention to what they had expected their partner to concentrate on during a conversation or other activity. Sometimes they were concerned about disregarding family time and social engagement (especially when surrounded by family or friends in social gatherings). Several participants mentioned that their failure at paying attention sometimes led to frustrations and tensions between them. "When we sit together and talk to each other in our living area at home, I go through messages in WhatsApp. That time my partner gets irritated, thinking that I am not listening to him." [P10, female, relationship 21 years] || "During a dinner at his friend home... I was using my phone continuously to text my friends. He signaled me not to use the phone at a get together because it seems odd when I am not involving in the event. I keep on texting my friends he raised and fought with me." [P21, female, relationship 10 years]
+
+Participants reported that their partner expressed concerns about their co-located smartphone usage primarily due to disruption of their quality time, and sometimes expressed anger and annoyance. On average, participants reported spending 2.3 hours a day on their smartphones while co-located with their partners. For each app category, at least 30% of the participants who use those apps more than once a day reported using them less frequently while co-located with their partner.
+
+We also asked participants about the apps that they often use when they are co-located with their partner. Figure 2 shows the results. We observe that they frequently surf the Internet (e.g., reading news, browsing, banking), use communication apps (e.g., email, text message, Skype, phone calls) and social media apps (e.g., Facebook, Instagram, Snapchat). However, they rarely use health-related apps (e.g., fitness tracking or medicines apps) and location sharing apps (e.g., Glympse, Life360). The results suggest that people prefer to use communication and other related apps to connect to families and friends when co-located with their partner.
+
+§ 3.3.3 RULES OR MUTUAL UNDERSTANDING ON SMARTPHONE USE
+
+We asked participants questions about rules or agreements set in the household to reduce co-located smartphone usage. More than one-third of the participants (35%) mentioned having some house rules. The rest (65%) said they did not have any formal agreement, yet they shared a mutual understanding with their partner. "We're responsible and adult enough to know when it's time to use the phone or not." [P52, male, relationship 11 years]
+
+The participants who reported to have rules or agreements (43 coded responses) for smartphone use had rules based on either locations or situations. Mealtime (33%), family time with kids (21%), collaborative activities (16%), bedtime (12%), social gathering (7%) and driving (5%) were some contexts where the rules restricted smartphone use. "We agreed to not use our smartphones during dinner unless it's an emergency." [P46, male, relationship 26 years]
+
+Often the rules or agreements were set to ensure quality time within the family and in social gatherings: "I am in agreement with him that we do not use our phones when it's quality time for us to be together or when we're with others in a social situation, unless everyone is using them, too, for some reason (like looking up some info or playing a game together)." [P58, male, relationship 8 years]
+
+The rules also came from self-realization of being disconnected: "Once me and my spouse was continuously using the phone when we were at home ... we realized that we didn't speak to each other. That moment we decided not to use phones unless an emergency when we both are together." [P21, female, relationship 10 years]
+
+We asked the 51 participants who did not have any rules how they would feel about creating them. Fifty percent of these participants welcomed the idea of having some rules for ensuring proper engagement with the partner. Twenty-five percent were somewhat neutral about agreeing on rules. The remaining 25% opposed the idea, feeling that rules would intrude on their smartphone activities or that they may be an "overkill" between adults who should be able to act of their own accord. Some stated that they shared mutual respect not to use smartphones in certain situations and do not need any rules. "We should come up with guidelines for smartphone usage that would make me feel better about our communication with one another." [P37, female, relationship 14 years]
+
+The average relationship length of the participants who had no rules was longer (avg. 14 years) than that of the participants who reported having rules (avg. 10 years). A binomial logistic regression showed no significant difference in gender or relationship length between these two groups.
+
+We asked participants whether they have any rules regarding smartphone use for other family members, excluding themselves, such as their parents (e.g., an older adult living at home with their adult children), teenage children, or younger children. Most participants mentioned that they do not have any rules for other adults in the home as they are responsible adults who do not use smartphones frequently. "There is no rule as my mom is an aged woman and did not use phones every day." [P39, male, relationship 12 years]
+
+However, 40 participants with children have strategies and rules to control their child's smartphone usage. For instance, out of 52 responses, 37% were about time-based restrictions (e.g., no more than 30 min per day), 23% about content-based restrictions (e.g., only for games and watching YouTube videos), 21% about location-based restrictions (e.g., not at the dining table, in the bedroom or washroom), and 6% about age-based restrictions (e.g., no phone before 8 years). Such "no phone" policies were primarily set to ensure that the children were engaged in more purposeful activities and spent enough time with family members. A typical response was: "We do not allow our sons to use their smartphones in private such as their bedroom or bathroom. We also have their settings configured so their phones may not be used between 10 pm and 6 am." [P46, father of 2 children]
+
+§ 3.3.4 STRATEGIES TO REDUCE CO-LOCATED SMARTPHONE USAGE
+
+We also asked our participants whether they know of or used any apps or other tools to reduce smartphone use in co-located situations. The majority (67 out of 78) mentioned that they do not know of any such solutions that could either help them be more aware of each other's smartphone activities or help to reduce co-located smartphone usage. The remaining participants (11) mentioned that they are aware of apps to restrict usage time. They mentioned using iPhone's Screen Time [24], Night Mode [23], Offtime [38] to track their daily smartphone usage activities and to limit smartphone app access after a certain amount of time.
+
+We used a 5-point Likert scale to get participants' opinions on the importance of using apps or other strategies to reduce co-located smartphone usage. Interestingly, younger participants felt that it was more important to have apps or strategies than the older participants: out of 28 participants in the age range 25 to 34 years, 17 expressed high importance (rating 3 or more) of having such apps or strategies, whereas only 25 of 50 participants in the age range 35 to 65+ years did so.
+
+§ 3.3.5 SHARING INFORMATION, PRIVACY, AND TRUST
+
+About 74% of participants said that they told their partner what they were doing on their smartphone when co-located at least a few times a week. We also asked them how truthful they are when sharing information. Twenty-five out of 34 females and 19 out of 44 males said that they share accurate/truthful information about their smartphone activities with their partner. Participants who said they told the truth commented that they do not have anything to hide from their partner and do not want to lose their partner's trust. "We value honesty in our relationship, not that we do anything shady on our phones, but if we did, I would immediately inform her of anything I did, and vice versa." [P52, male, relationship 11 years]
+
+On the other hand, participants who said they did not always share accurate information with their partners did so because they were trying to safeguard their privacy or ensure personal boundaries. For instance, some participants mentioned that they are not comfortable sharing financial information, business matters, photos, videos and things that they search on their phone. We believe this is due to the sensitivity of this information, and sometimes to maintain personal space. "I might be slightly embarrassed about the random things I look up." [P33, female, relationship 20 years]
+
+In a follow-up 5-point Likert question (5=very confident to 1=not confident at all), participants were asked to indicate their confidence that their partner tells the truth about their smartphone activities. Both male and female participants had strong confidence that their partners share accurate information with them, reflected in average scores of 4.45 (SD=0.84) and 4.68 (SD=0.79), respectively. Only six participants gave a rating of 3 or below and described past experiences of finding their partners not being truthful. Such experiences could consequently impact their level of trust in the future. "About 9 months ago my partner expressed that due to past cheating by previous partners, she felt paranoid when I was using my phone to chat with other people." [P8, male, relationship 1 year]
+
+§ 3.3.6 SMARTPHONE USAGE AWARENESS
+
+We examined participants' awareness of exactly what their partner is doing on their smartphone and how interested they are in knowing what their partner does on their smartphone. Seventy-eight percent of participants responded that they are not fully aware of their partner's smartphone activities. In some cases, participants reported that this lack of awareness triggered misunderstandings among the co-located partners, as their partners made assumptions based on their smartphone activities. A potential reason for such assumptions could be the limited information that can be seen from a distance about a person's usage activities. Similar results were found by Oduor et al. [42], who reported a lack of smartphone activity awareness among co-located family members.
+
+We included questions on the common strategies for sharing activity awareness with co-located partners. Participants reported that such awareness was often achieved by asking questions of their partners where they responded verbally or showed their screen to their partner. This action sometimes led to frustration and anxiety among partners. "I normally just ask what he's doing (especially if he laughs!) and he'll always tell me." [P1, female, relationship 4 years] || "My partner usually gets aggravated when I ask what he is doing, because usually, he is trying to figure something out on his phone." [P11, female, relationship 15 years]
+
+In exploring how interested participants were in knowing their partner's smartphone activities, we found that male participants were more interested (66% of the males) in knowing what their partner is doing on the smartphone than female participants (58%). Conversely, in a question asking about their partner's interest in knowing what they are doing on their smartphone, we found that 86% of male participants reported their partners to be interested in knowing their activities, whereas 68% of the females reported the same. We observed a trend that this interest decreased gradually with increasing age: in the age range 25 to 34 years, 82% expressed interest, whereas in the age range 35 to 44 years only 70% showed interest.
+
+§ 3.3.7 CO-LOCATED CONTENT SHARING
+
+We collected information on the level of smartphone activity information (detailed vs. abstract) that participants are comfortable sharing with their partner and the level of information that they would like to receive from their partner. We provided them with three different levels that they could choose from for sharing or receiving: (i) detailed information (e.g., chatting with "Alex" on Facebook), (ii) an app's name (e.g., using Facebook), and (iii) activity information (e.g., playing games). Additionally, they could write any other abstractions that they might be comfortable with. We collected 119 coded responses for the sharing level and 120 coded responses for the receiving level of information, as participants were allowed to select multiple levels.
+
+Many participants are comfortable with providing very detailed information to their partners (37% of responses), whereas others reported preferring to share only the app name (34%) or general activity information (29%). The remaining participants reported only feeling comfortable with providing less or no information at all. We also found that the preferred sharing level varies across apps: 37 participants mentioned that they share details when using communication apps, whereas only 19 participants share details while browsing the Internet. We observed similar results for receiving information from their partner. Many participants indicated that they would like to receive detailed information from their partners (34% of responses), whereas others preferred to receive only the app name (36%) or activity information (29%). The other participants (only 1%) reported feeling comfortable with receiving any level of information or no information at all.
+
+We further asked participants to provide examples where they share smartphone usage information with different people (e.g., partner, family members). We observed that it is common to share different levels of information with different people: "I would give less details based on how well I know the person. My partner and family get more information that colleagues." [P57, male, relationship 2 years] || "To my partner, I share all the information; to my family members, I share only app name or activity name" [P53, male, relationship 7 years]
+
+§ 3.4 DISCUSSION
+
+Results from the survey revealed that people use smartphones in the presence of their partner even though their partner expressed concerns about the usage. In general, people can see when partners use smartphones in front of them, but exactly what a partner is doing on the phone cannot be easily inferred from an observer's viewpoint (also found by [26]). Our work builds on prior work by illustrating the locations and activities in which this occurs, the rules people have set up to help mitigate issues, and how they feel about sharing usage information. Participants reported that co-located usage and asking about their partner's activities sometimes triggers aggravating situations. However, they are not aware of technological solutions that would help them limit co-located smartphone usage. These findings motivated us to think of a means to improve people's awareness about their partner's phone activities while co-located by supporting different levels of information (detailed vs. abstract), such that they can make informed decisions about how to handle the situation. We also believe that improved awareness may motivate people to use phones wisely and, thus, improve the quality of domestic life. Of course, we recognize that awareness of a partner's smartphone activities was not desired by all participants. Some participants were satisfied with the level of information they knew about, and some were fine relying on social protocols with their partner to remedy challenging situations. As such, we wanted to explore design solutions that might work for people who were more interested in additional knowledge of what their partners were doing on their phone in a hope to improve social interactions.
+
+§ 4 THE DESIGN OF COAWARE
+
+Informed by our findings, we designed a smartphone app, Co-located Awareness (CoAware), intended for sharing smartphone usage information between co-located partners.
+
+
+Figure 3: (a) A connection initiates with a request to gain access to the partner's device; Once the access is gained, the partner can see (b) the app name, (c) the app category, and (d) a screen showing the screen content of the app that the partner is viewing.
+
+§ 4.1 COAWARE FEATURES
+
+CoAware was designed to share users' smartphone usage information, such as the number of times an app has been launched, the duration it has been used for, and the time it was initially opened. We used a foreground app checker external library [48] that allows access to smartphone app usage information. In addition, we developed a solution to directly share screens from one smartphone to another via WiFi Direct. Based on these capabilities, we developed three techniques to share app usage information between two co-located smartphones with CoAware. As our survey results showed that people prefer to share app usage information with varying degrees of detail, we designed the app to have three different levels of access, from very limited information which users may be more comfortable sharing (e.g., an app category) to very specific information (e.g., viewing the screen) that could possibly be more privacy intrusive. Thus, users can choose what level of sharing they and their partner are comfortable with. The specific levels are:
+
+App Category: This access level provides users with a high-level view of app usage information, where only the types of apps being used are shown and not the app names (Figure 3c). For instance, apps that are used for contacting other people (e.g., email, text message, Skype, phone calls) are mapped to and labeled as "Communication." Commonly used apps are categorized into different labels.
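+The category mapping described above can be sketched as a simple lookup from an app's package name to a coarse label, so that only the label ever leaves the device. This is an illustrative sketch, not CoAware's actual implementation; the package names and labels below are our own assumptions.
+
+```python
+# Hypothetical mapping from Android package names to shareable category labels.
+APP_CATEGORIES = {
+    "com.google.android.gm": "Communication",   # Gmail
+    "com.skype.raider": "Communication",        # Skype
+    "com.facebook.katana": "Social Media",      # Facebook
+    "com.instagram.android": "Social Media",    # Instagram
+    "com.android.chrome": "Internet",           # Chrome browser
+}
+
+def category_for(package_name: str) -> str:
+    """Return the category label to share; unknown apps fall back to 'Other'."""
+    return APP_CATEGORIES.get(package_name, "Other")
+
+print(category_for("com.skype.raider"))    # -> Communication
+print(category_for("com.example.puzzle"))  # -> Other
+```
+
+Keeping the mapping on the sending device means the receiver only ever sees the coarse label, which matches the privacy intent of this access level.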
+
+App Name: In this access level, CoAware tracks the name of a running app on the co-located phone (Figure 3b). The app name is displayed on the other phone.
+
+Screen Share: In this access level, CoAware captures images of a phone's screen and transfers them to the co-located phone every 50 ms. In this way, co-located users are aware of the exact on-screen activities of each other (Figure 3d).
+
+With CoAware, when two devices are co-located, one device sends a connection request and asks permission to access App Name, App Category or Screen Share information. The receiving device shows the request in a pop-up (Figure 3a) where the user of the device can accept or reject the request. If accepted, the sender device gains access to the app name, app category or the device screens of the other device. It also starts logging the app usage information (e.g., running app) on the receiving device. CoAware provides three notification strategies to allow co-located partners to send information through the app. We wanted to provide various levels of information exchange and control.
+
+
+Figure 4: CoAware (a) sending usage statistics, (b) a close request, showing (c) a summary, and (d) detailed information about apps used since the connection was established.
+
+Message: Users can send a preset or custom message to their partner. Examples of preset messages are "It feels like you've been using Gmail for a while now, can we talk instead?" and "Hey, it's me. How are things going?" Custom messages allow users to type anything in a textbox and send the text to their partner. We included this possibility to offer flexibility in terms of how users like to communicate with their partners. This messaging feature is similar to sending a text message; however, we hoped that preset suggestions for messages might help to create courteous exchanges between partners and not heighten tensions. This reflects findings from our survey where some participants said they would gently ask their partner about their smartphone usage if they felt it was inappropriate.
+
+Usage Statistics: Users can send app usage statistics such as "You have been using Gmail for 4 min and 54 seconds." to their partner (Figure 4a). Such messages are created using the duration of the longest-running app among the currently running ones, since the longest-running app is often likely to keep the user engaged for a longer period of time. This reflects findings that some participants did not realize how long they were on their phone in the presence of others; thus, some additional awareness information could be useful to regulate one's own behavior.
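+The message composition described above can be sketched as follows: pick the longest-running of the currently running apps and format its duration. The function name and the (app name, seconds) input format are our own assumptions for illustration, not CoAware's actual code.
+
+```python
+def usage_message(running_apps):
+    """running_apps: list of (app_name, seconds_running) tuples.
+    Build a Usage Statistics notification for the longest-running app."""
+    name, seconds = max(running_apps, key=lambda app: app[1])
+    minutes, secs = divmod(seconds, 60)
+    return f"You have been using {name} for {minutes} min and {secs} seconds."
+
+apps = [("Gmail", 294), ("Maps", 40)]
+print(usage_message(apps))  # -> You have been using Gmail for 4 min and 54 seconds.
+```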
+
+Close Request: Users can send a request to close the currently active app that their partner is interacting with. The partner sees a prompt such as "May I request you to close Facebook?" or "I was hoping you could close Gmail. OK?" (Figure 4b). The partner can cancel the request and continue using the app. If the partner agrees (tapping the Ok button), the app shows a 30-second countdown timer to let the person finish the current activity. When the 30 seconds are over, CoAware closes the currently running app. Our goal here was to make the app closing somewhat graceful and delayed and less of an immediate interruption. This reflects findings from our survey where, again, some participants said that they might ask their partner to change their behavior on their smartphone. Of course, we recognize that actually causing actions to occur on someone else's phone may come across as being strong or overly assertive to some people. We wanted to explore this idea to see how people would react to it in further studies.
+
+Using the Summary tab, one can see usage statistics for the apps. The summary includes the app/category name, the total time the app or category has been used, and the number of launches since the current connection was established (Figure 4c). If the access level App Name or Screen Share is granted by the partner, then one can see the app names; the access level App Category only allows one to see the app categories. Using the Details tab, one can see more detailed information about individual app or app category launches (depending on the access level), such as the name, launch time, and duration of use (Figure 4d). Overall, we recognize that not everyone will find the features we propose in CoAware to be useful. Some may find them to be overbearing, some may find them not needed at all, and some may find them to suit their needs well. This was expected and purposeful: it let us explore our design ideas further and see how participants would react to a fully working system that provides such options.
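+The Summary-tab aggregation described above amounts to folding a log of launch events, recorded since the connection was established, into per-app totals and launch counts. The log format (app, launch time, duration in seconds) is an assumption we make for illustration.
+
+```python
+from collections import defaultdict
+
+def summarize(log):
+    """log: list of (app_or_category, launch_time, duration_seconds) events.
+    Aggregate total usage time and number of launches per app/category."""
+    summary = defaultdict(lambda: {"total_seconds": 0, "launches": 0})
+    for app, _launch_time, duration in log:
+        summary[app]["total_seconds"] += duration
+        summary[app]["launches"] += 1
+    return dict(summary)
+
+log = [("Gmail", "18:02", 120), ("Facebook", "18:05", 90), ("Gmail", "18:10", 60)]
+print(summarize(log))
+# -> {'Gmail': {'total_seconds': 180, 'launches': 2}, 'Facebook': {'total_seconds': 90, 'launches': 1}}
+```
+
+The raw event list itself is what the Details tab would show; the Summary tab shows only the folded totals.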
+
+§ 4.2 IMPLEMENTATION DETAILS
+
+CoAware was built on Android SDK 4.4 and leverages smartphones' WiFi Direct to establish peer-to-peer connections between two smartphones. We used WiFi Direct as this technology allows two devices to connect directly without requiring them to connect via Wi-Fi routers or wireless access points, thus enabling sharing information between co-located users in any location (e.g., at home, in a park). Prior research has shown that smartphone activity can be shared with others by instrumenting the device (e.g., attaching an additional display to the back of the device) [26], which can raise privacy concerns due to the visibility of private content in some common contexts (e.g., public places). Hence, we designed an application solution that does not require any additional hardware instrumentation.
+
+§ 5 STUDY 2: EXPLORATIONS OF COAWARE
+
+We conducted a study in a lab setting as an initial attempt to get feedback from participants about CoAware. We investigated users' feedback on the three access levels for sharing app information amongst co-located partners and explored their opinions on the three notification strategies provided by CoAware. Additionally, we collected participant feedback on privacy issues and on how CoAware creates awareness and we asked for general feedback on CoAware's features. Naturally, we could have explored our ideas using a field study where participants from various socio-cultural backgrounds could have tried out CoAware over a prolonged period of time. We did not use this approach given that CoAware is still at an early design stage. Field studies bring the risk of participants not trying out all of the features within a design. We felt it was more reasonable to gather initial participant feedback such that the general ideas presented in CoAware could be assessed to understand which may hold the most merit. Then, either CoAware or other applications like it could be created and explored through longer-term usage. The caveat is that our study does not provide generalizable results across a range of real-world situations. Instead, it illustrates initial feedback and directions to help guide future designs, which, at a later point, could be evaluated in the wild with a field study.
+
+§ 5.1 PARTICIPANTS AND PROCEDURE
+
+We recruited 22 participants (11 couples) from the local community (a large city within North America) to participate in the study. Two participants were 18-24 years old, 11 were 25-34, 4 were 35-44, 2 were 45-54, and 3 were 55-64. All participants were smartphone users and had been in their relationship for an average of 9.5 (SD=5.7) years. None of them had experience using tools to reduce smartphone usage or to support awareness of someone else's smartphone use.
+
+We used two smartphones, Google Pixel 3 and Google Nexus 5, for the study. We first showed participants how to use CoAware. Next, participants were given the following two tasks to complete:
+
+Establish a connection: One person (sender) sends a connection request and the other person (receiver) accepts it.
+
+Access information: The sender accesses the app name, app category, and screen on the receiver's device while the receiver 1) browses information on an e-commerce website (Amazon) to find a suitable camera costing less than $500, 2) finds a rumor/gossip about their favourite actor/actress, and 3) plays a game of their choosing. Once the tasks are completed, the participants switch roles as sender and receiver and repeat the tasks.
+
+We then used a questionnaire to collect their opinions on the access levels and notification strategies for creating co-located smartphone usage activity awareness, privacy concerns related to CoAware, and design suggestions to improve the app. We asked participants close-ended questions using 5-point Likert scales regarding (Q1) the usefulness of the three access levels in creating awareness about their partner's smartphone activities, (Q2) the usefulness of the three notification strategies to motivate them in reducing co-located smartphone usage, (Q3) their comfort level when receiving a notification from their co-located partner (for each of the three notification strategies); and (Q4) their comfort level in using the three access levels across five app categories: communication (e.g., Gmail), social media (e.g., Facebook), games, music & entertainment (e.g., Netflix), and Internet (e.g., news), as well as others (e.g., maps). Additionally, they were asked open-ended questions about privacy and awareness issues. At the end, we also asked them to provide feedback and suggestions regarding CoAware's features. A session lasted approx. 40 min in total. We used thematic analysis to analyze qualitative responses, iteratively reviewing the responses to look for main themes.
+
+
+Figure 5: Mean rating for (a) the usefulness of the access levels, (b) creating activity awareness with notification strategies, and (c) the usefulness of the notification strategies, (d) Mean comfort level for sharing information across various app categories.
+
+§ 5.2 RESULTS
+
+For questions Q1-Q3, we used Friedman's test with post-hoc Wilcoxon tests to analyze the data (Bonferroni-adjusted α-level from 0.05 to 0.016). (Q1) Figure 5a shows the mean rating on how useful the three access levels were to create awareness of partners' activities. We found a mean rating of 3.91 (SD=0.53) for App Name, 3.0 (SD=0.93) for App Category, and 4.14 (SD=0.89) for Screen Share. A Friedman test showed a significant difference between the access levels (χ²(2, N=22) = 14.17, p < .001). Post-hoc pairwise comparisons revealed that both App Name and Screen Share were rated more useful in creating an activity awareness than App Category (no significant difference between App Name and Screen Share). (Q2) The mean rating for the usefulness of the three notification strategies to motivate reducing co-located smartphone usage was 4.18 (SD=1.05) for Message, 3.27 (SD=0.83) for Usage Statistics, and 2.55 (SD=1.18) for Close Request (Figure 5b). A Friedman test showed a significant difference (χ²(2, N=22) = 20.58, p < .001) and post-hoc pairwise comparisons showed significant differences between all pairs. (Q3) Figure 5c shows that participants were more comfortable receiving a notification with Message (4.18, SD=1.05) and Usage Statistics (3.5, SD=0.9) than with Close Request (2.5, SD=1.3). A Friedman test showed a significant difference between the strategies (χ²(2, N=22) = 20.55, p < .001) and post-hoc pairwise comparisons showed that Message and Usage Statistics each differed significantly (p < .001) from Close Request (no significant difference between Message and Usage Statistics).
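+The Friedman statistic used in these comparisons can be computed directly from per-participant ratings of the k conditions: rank each participant's ratings (ties get average ranks), sum the ranks per condition, and apply χ² = 12/(Nk(k+1)) · Σ R_j² − 3N(k+1). A stdlib-only sketch, using made-up ratings rather than the study's data:
+
+```python
+def friedman_chi_square(ratings):
+    """ratings: list of per-subject lists, one rating per condition.
+    Returns the Friedman chi-square statistic (compare against a
+    chi-square distribution with k-1 degrees of freedom)."""
+    n, k = len(ratings), len(ratings[0])
+    rank_sums = [0.0] * k
+    for row in ratings:
+        order = sorted(range(k), key=lambda j: row[j])
+        ranks = [0.0] * k
+        i = 0
+        while i < k:
+            j = i
+            # extend j over a block of tied values
+            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
+                j += 1
+            avg = (i + j) / 2 + 1  # average rank for the tied block
+            for m in range(i, j + 1):
+                ranks[order[m]] = avg
+            i = j + 1
+        for c in range(k):
+            rank_sums[c] += ranks[c]
+    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
+
+# Three subjects each rating three conditions on a 5-point scale:
+print(round(friedman_chi_square([[4, 3, 5], [4, 2, 5], [3, 3, 4]]), 2))  # -> 5.17
+```
+
+In practice a library routine (e.g., `scipy.stats.friedmanchisquare`) would be used; the hand-rolled version above only illustrates the computation behind the reported χ²(2, N=22) values.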
+
+We used a one-way MANOVA to analyze the responses to Q4 (Figure 5d). Results revealed a significant difference in participants' ratings across the access levels ($F_{10,118} = 4.37$, $p < .001$; Wilks' $\lambda = .53$, partial $\eta^2 = .27$). Tukey's HSD post-hoc tests showed that for the communication, social media, and other categories, there were significant differences between Screen Share and App Name ($p < .001$) and between Screen Share and App Category ($p < .001$). For games, results showed a significant difference between Screen Share and App Name ($p < .001$). For Internet browsing, there was a significant difference between Screen Share and App Category ($p < .001$).
+
+In the open-ended questions about privacy and awareness issues, participants expressed that CoAware would be helpful for maintaining their time commitment to each other and for creating more smartphone usage awareness, so that they do not interrupt their partner during an important ongoing activity, and that Screen Share would help during co-located collaborative activities such as sharing information with others. "[CoAware] can be very useful for creating awareness as it allows us to check what other has been doing, especially when he is on phone for a long time." [P7, female, relationship 5 years] "The app will help when we want to show something to each other but sitting different places in a room." [P3, male, relationship 5 years]
+
+Additionally, participants mentioned that there are other potential use cases for CoAware (e.g., sharing information with partner, monitoring their children's smartphone usage). "Sharing feature is useful as I can show photos and videos to my wife; I can share the game that I am playing to my son." [P18, male, relationship 13 years]
+
+Participants also expressed privacy concerns regarding Screen Share. For instance, two female participants mentioned that they used some apps to track health-related issues which they might not feel comfortable sharing. Others wanted a personal digital space away from their partner that they did not want intruded upon, as this might create stress and tension in family life. "It will hamper my privacy, I may not feel comfortable at all for sharing screens of my messages and emails" [P4, female, relationship 5 years]
+
+Participants provided suggestions to improve CoAware. For example, instead of showing pop-ups, they suggested using standard notifications that commonly appear at the top of the screen. Six participants also suggested that partners should be allowed to only send a fixed number of notifications within a certain time (e.g., 10 notifications per day). Some participants wanted more notification styles and strategies or more statistics to better motivate the partner to engage with them. Two participants also felt that instead of sharing the entire screen with their partner, a blurred image or custom screen area could be shared as this would protect privacy.
+
+§ 6 DISCUSSION AND DESIGN RECOMMENDATIONS
+
+Through our studies, we uncovered a range of ways that applications like CoAware should be designed, based on the system's strengths and weaknesses.
+
+Ensuring personal digital space: The participants' lower comfort ratings when using Screen Share illustrate that there is often a personal digital space among partners which they typically want to preserve. Thus, we suggest that applications like CoAware should focus on sharing higher-level or more abstract information (e.g., the app name) instead of sharing very detailed information such as the screen content. Low-level information akin to what we provided as one feature within CoAware is likely too much for many people and could inadvertently create greater tensions between partners.
+
+Level of access: Participants expressed concerns about using the Close Request feature within CoAware as it takes some control over a partner's phone and may disrupt their on-going activities. This, again, could create further tension between partners. Instead, participants felt that solutions that alert others of what they may want to change in their own behavior, rather than take control, would be more acceptable. Thus, when designing apps for communication between devices, it is important to carefully consider how much control one should have over another person's device.
+
+Notification Strategies: Participants found pop-up notifications to be distracting and somewhat overbearing; the forced change in activity (e.g., switching from a game to the notification UI) could create new frustrations. Thus, we suggest that applications like CoAware use the standard notification mechanisms already found on smartphones.
+
+Determining whether the phone is in use: Sometimes a phone may remain active although the user may not be engaged with it. This means that usage information provided to one's partner may not be accurate. One possibility could be to rely on information about the user's on-screen taps in combination with information about the running apps to determine usage history.
+
+Study 1 provided insights and direction for our design work with CoAware and helped lead to the aforementioned design suggestions. Its results also moved beyond prior literature (e.g., [42]) to allow us to more deeply understand when, where, and during what activities a system like CoAware would potentially be used in real-world situations. This can help guide future studies that want to test applications similar to CoAware to know when, where, and how such testing should be done. One could also imagine using Study 1's data on locations and context to think about ways to further refine applications such as CoAware. For example, users could be given options to customize applications so that they are able to choose what types of information they are comfortable revealing to their partner based on location, time, and activity. Such information could also be inferred by applications and then adjusted as needed by users.
+
+We also recognize that there is a darker side to applications like CoAware and designers should be cautious in this regard. The challenge with apps that track or share mobile device usage between partners is that they can potentially alter relationship dynamics, given an increased access to information [11]. This could create issues around trust or control between partners. While our results did not reveal such concerns, they are most certainly possible. Further research is required to understand partners' information sharing behaviour with others and its impact on their relationship. We also acknowledge that apps with features like CoAware could be seen as being highly problematic for relationships that contain domestic abuse or family violence [12]. As apps like CoAware enable access to information on partners' devices, this could lead to the system being misused (e.g., coercing to share information constantly, surveilling one's partner) and create anxiety and tensions within a relationship. Of course, there are no easy solutions for such types of situations. CoAware could, for example, ask users for details about their relationship satisfaction before making features available to them. Yet partners in an abusive relationship could easily answer untruthfully. Apps could track couples' information sharing behaviours and provide warnings if acts that appear to resemble surveillance occur, or features could be turned off based on certain negative behaviours. However, this may, again, not be a complete solution and may be hard to detect. As such, designers need to be cautious to think about the possible negative consequences of apps with features similar to CoAware.
+
+§ 7 LIMITATIONS AND FUTURE WORK
+
+Our crowdsourced survey has some inherent limitations. Since we used AMT, our participants' demographics were determined by the demographics of AMT workers (USA and India). With a larger sample and participants from more cultures, it would be possible to investigate how people's perceptions of co-located phone usage differ between cultures. It would also be interesting to conduct an in-person study with interviews to determine whether the results differ from those obtained from a crowdsourced study. In an in-person study, we would be able to see how people use the design in real-life situations, which may not completely match the tasks in our study and which is essential to ground and guide the development of CoAware. This would further help us cover ethical aspects of smartphone activity awareness between couples which might be missed in our online survey. The challenges and potential problems that may arise with increased awareness, especially in abusive and problematic relationships, need further investigation. We believe such future research would provide important insights into the scope and impact (both positive and negative) of using CoAware and similar technological solutions in sensitive family situations.
+
+We concentrated on partners' co-located smartphone activity awareness and investigated their opinions on CoAware. However, we envision extending our approach to other relationship types, such as between parents and children, where a parent could use CoAware to monitor and control the child's smartphone activities. This would require an in-depth study of parent-child relationships and the consideration of many other aspects, such as the diversity of house rules, family traditions of raising children, child age, and the educational backgrounds of parents.
+
+In the future, it would be interesting to find out in more detail whether and how our findings were influenced by our participants' age, cultural background, and relationship length. This would require a larger and more diverse set of participant couples. Furthermore, given our encouraging initial results and reactions, we plan to further develop CoAware and to perform a longitudinal study with a full-fledged version to examine its effect on sustained behavior change. Additionally, CoAware inspires the design of context-aware smartphones that can trigger notifications based on pre-set rules about location and surrounding people to reduce the smartphone use of co-located persons.
+
+§ 8 CONCLUSION
+
+Our paper examines the issues that arise when people use smartphones in front of their partners, how they try to solve these problems, and what potential solutions they seek. In a crowdsourced survey, we found that people often feel ignored and get frustrated due to their partner's smartphone use in front of them. We also found that these problems persist even though people consciously attempt to resolve them through mutual understanding, and sometimes with explicit house rules. Partners frequently ask about and share each other's smartphone activities verbally, yet are often not confident about knowing what the other is doing on the smartphone. Consequently, we designed CoAware, a smartphone app, to further explore the design space. CoAware allows partners to digitally share each other's smartphone activities using different levels of detail. In a study where couples used CoAware, we observed that they found the app to be a promising solution to improve awareness, help reduce smartphone overuse, and, perhaps, even monitor a child's smartphone activities. We learned that designing similar solutions requires careful consideration of various app-specific privacy concerns and people's tolerance towards being advised by their partners.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6303bc086fbf2106e0dcabb2a617d0cfdf2ebdd5
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,503 @@
+# FoldMold: Automating Papercraft for Fast DIY Casting of Scalable Curved Shapes
+
+Anonymized for review
+
+
+
+Figure 1: FoldMold enables rapid casting of curved 3D shapes using paper and wax. In this image we show the steps for casting a silicone kitchen gripper, far faster and with less waste than 3D printing it. (1) Create a 3D model of the object positive in Blender. (2) Add joinery and support features using the FoldMold Pattern Builder. (3) Automatically unfold the model into 2D patterns, and cut them from paper. (4) Assemble the mold and pour the intended material. (5) Remove the cast object from the mold.
+
+## Abstract
+
+Rapid iteration is crucial to effective prototyping; yet making certain objects - large, smoothly curved, and/or of specific material - requires specialized equipment or considerable time. To improve access to casting such objects, we developed FoldMold: a low-cost, simply-resourced and eco-friendly technique for creating scalable, curved mold shapes (any developable surface) with wax-stiffened paper. Starting with a 3D digital shape, we define seams, add bending, joinery and mold-strengthening features, and "unfold" the shape into a 2D pattern, which is then cut, assembled, wax-dipped and cast with materials like silicone, plaster, or ice. To access the concept's full power, we facilitated digital pattern creation with a custom Blender add-on. We assessed FoldMold's viability, first with several molding challenges in which it produced smooth, curved shapes far faster than 3D printing would; then with a small user study that confirmed the usability of our automation. Finally, we describe a range of opportunities for further development.
+
+Index Terms: Human-centered computing - Systems and tools for interaction design
+
+## 1 INTRODUCTION
+
+The practice of many designers, makers, and artists relies on rapid prototyping of physical objects, a process characterized by quick iterative exploration, modeling and construction. One method of bringing designs to life is through casting, wherein the maker pours material into a mold "negative" and lets it set [28]. Casting advantages include diversity of material, ability to place insets, and the possibility of mixing computer-modeled and extemporaneous manual mold construction [36].
+
+A primary downside to casting for rapid prototyping is the time required to build a mold. 3D printing a mold (a common approach today) can take hours even for small objects. For multi-part molds, large models, and failed prints, the latency grows to days. While other mold-making techniques exist (e.g., StackMold [47], Metamolds [4]), they are either limited to geometrically simple shapes such as extrusions, or require effort in other ways. Molds of smooth and complexly curved, digitally defined models are hard to achieve in a Do-It-Yourself (DIY) setting except through 3D printing. Iteration is thus expensive, particularly for large sizes and curvy shapes.
+
+Efficient use of materials is a particular challenge for prototype casting. When iterating, we only need one-time-use molds; so both the mold and final object materials should not only be readily available and low-cost, but also minimize non-biodegradable waste - a problem with most 3D printing media [25].
+
+This work was inspired by insights connecting papercraft and computer-aided fabrication. The first was that papercraft and wax can together produce low-cost, eco-friendly, curved molds. Papercraft techniques like origami (paper folding) and kirigami (paper cutting) [12, 18] can produce geometrically complex positive shapes. Because paper is thin, the negative space within can be filled with castable material for new positive objects. Paper can be flexibly bent into smooth curves and shapes. Mold construction time is size invariant. Wax can fix, reinforce, and seal a paper mold's curves. It is biodegradable, easy to work with, melts at low heat, creates a smooth finish, is adhesive when warm, and can be iteratively built and touched up. Both paper and wax are inexpensive and easy to source.
+
+Secondly, while origami and kirigami shapes are folded from paper sheets, paper pattern pieces can also be joined with woodcraft techniques. Paper has wood-like properties that enable many cutting and assembly methods: it is fibrous, tough, and diverse in stiffness and density. Like wood, paper fibers allow controlled bending through patterned cuts, but unlike wood, its weakness can be exploited for bending, and mold parts are easily broken away after casting.
+
+Thirdly, the complex design of paper and wax molds is highly automatable. Given user-defined vertices, edges, and faces, we can algorithmically compute mold structure, bends, joints and seams, and mold supports - steps which would require expertise and time, especially for complex shapes. With a computational tool, we can make the pattern-creation process very fast, more precise, and reliable, while still allowing a maker's intervention when desired.
+
+FoldMold is a system for rapidly building single-use molds for castable materials out of wax-stiffened paper that blends paper-bending and wood joinery methods (Figure 1). To support complex shapes, we created a computational tool - the FoldMold Pattern Builder (or "Builder") - to automate translation of a 3D model to a 2D pattern with joinery and mold support components. Patterns can be cut with digital support (e.g., lasercut or, at home, with X-acto knife or vinyl cutter [1]).
+
+In this paper, we show that FoldMolds are faster to construct than other moldmaking methods, use readily-available equipment and biodegradable materials, are low-cost, support complex shapes difficult to attain with other methods, and can be used with a variety of casting materials. FoldMold is ideal for custom fabrication and rapid iteration of shapes with these qualities, including soft robotics, wearables, and large objects. FoldMold is valuable for makers without access to expensive, high-speed equipment and industrial materials, or committed to avoiding waste and toxicity.
+
+### 1.1 Objectives and Contributions
+
+We prioritized three attributes in a mold-making process:
+
+Accessibility: Mold materials should be cheap, accessible, and disposable/biodegradable. The pattern should be easy to cut (e.g., laser or vinyl cutter) and assemble in a typical DIY workshop.
+
+Speed and Outcome: Moldmaking should be fast, support high curve fidelity and fine surface finish, or be a useful compromise of these relative to current rapid 2D-to-3D prototyping practices.
+
+Usability and Customisability: Mold creation and physical assembly should be straightforward for a DIY maker, hiding tedious details, yet enabling them to customize and modify patterns.
+
+To this end, we contribute:
+
+1. The novel approach of saturating digitally-designed papercraft molds with wax to quickly create low-cost, material-efficient, laser-cuttable molds for castable objects;
+
+2. A computational tool that makes it feasible and fast to design complex FoldMolds;
+
+3. A demonstration of diverse process capabilities through three casting examples.
+
+## 2 RELATED WORK
+
+We ground our approach in literature on prototyping complex object positives, in rapid shape prototyping, casting, papercraft and woodcraft techniques, and computational mold creation.
+
+### 2.1 Rapid Shape Prototyping
+
+#### 2.1.1 Additive Prototyping
+
+Based on sequentially adding material to create a shape, additive methods are dominated by 3D printing [19] due to platform penetration, slowly growing material choices, precision and resolution, and total-job speed and hands-off process relative to previous methods such as photo sculpture [43] and directed light fabrication [27]. 3D printers heat and extrude polymer filaments. Some technologies can achieve high resolution and precision, although at present this may be at the expense of speed, cost and material options [34].
+
+Rates of contemporary 3D printing are still slow enough (hours to days) to impede quick iteration. As an example of efforts to increase speed, WirePrint modifies the digital 3D model to reflect a mesh version of the object positive [31], but at the cost of creating discontinuous object surfaces. While capturing an object's general shape and size, it sacrifices fidelity.
+
+Other limitations of direct 3D printing of object positives are limited material options and geometry constraints. Its layering process complicates overhangs: they require printed scaffolding, or multipart prints for later reassembly [22]. Papercraft has no problem with overhangs; where mold support is needed FoldMold utilizes paper or cardboard scaffolding stiffened with wax.
+
+#### 2.1.2 Subtractive and 2D-to-3D Prototyping
+
+Computer Numerical Control (CNC) machining technologies include drills and laser-cutters, and lathes and milling machines, which can create 2D and 3D artifacts respectively, all via cutting rather than building up. Although limited to 2D media, laser-cutting offers speed and precision at low per-job cost, albeit with a high equipment investment [37]. Because it can be cheaper and faster to fabricate 2D than 3D media, some have sought speed by cutting 2D patterns to be folded or assembled into 3D objects [6, 7, 32].
+
+FlatFitFab [26] and Field-Aligned Mesh Joinery [13] allow the user to create 2D laser cut pieces that, when aligned and assembled, form non-continuous 3D approximations of the object positive - essentially creating the object "skeleton". Other methods (e.g., Joinery [50], SpringFit [40]) utilize laser cutting followed by assembly of 2D cutouts. Joinery supports the creation of continuous, non-curved surfaces joined by a variety of mechanisms. SpringFit introduces the use of unidirectional laser-cut curves joined using stress-spring mechanisms. However, these techniques are for creating object positives and not suitable for casting; material qualities are inherently limited and the joints are not designed to fully seal.
+
+Here, we draw inspiration from these methods which approach physical 3D object construction based on 2D fabrication techniques, and draw on the basic ideas to build continuous sealed object negatives (molds) for casting objects from a variety of materials.
+
+### 2.2 Casting in Rapid Prototyping
+
+An Iron Age technique [28], casting enables object creation through replication (creating a mold from the target object's positive) or from designs that do not physically exist yet (our focus). A particular utility of casting in prototyping is access to a wider range of materials than is afforded by methods like 3D printing, carving or machining - e.g., silicone or plaster.
+
+StackMold is a system for casting multi-material parts that forms molds from stacked laser-cut wood [47]. It incorporates lost-wax cast parts to create cavities for internal structures. While this improves casting speed (especially with thicker layers), the layers create a discretized, "stepped" surface finish which is unsuitable for smoothly curved shapes - prototyping speed is in conflict with surface resolution. Metamolds [4] uses a 3D printed mold to produce a second silicone mold, which is then used to cast objects. The Metamolds software minimizes the number of 3D printed parts to optimize printing time. Silicone molds are good for repeated casts of the same object, but this multi-stage process slows the rapid iteration for which only single-use molds are needed. Further, Metamolds are size-constrained by the 3D printer workspace.
+
+Thus, despite significant progress in rapid molding, fast iteration of large and/or complex shapes is still far from well supported.
+
+### 2.3 Papercraft and Wood Modeling
+
+Several paper and wood crafting techniques inspired FoldMold.
+
+#### 2.3.1 Papercraft
+
+Origami involves repeatedly folding a single paper sheet into a 3D shape [11]. Mathematicians have characterized origami geometries [30] as Euclidean constructions [18]. They can achieve astonishing complexity, but at a high cost in labor, dexterity and ingenuity. Kirigami allows paper cutting as well as folding to simplify assembly and access a broader geometric range. Despite the effort, both demonstrate how folding can transform 2D sheets into complex 3D shapes, and that papercraft design can be modeled.
+
+#### 2.3.2 Creating 2D Papercraft Patterns for 3D Objects
+
+Many have sought ways to create foldable patterns and control deformation by discretizing 3D objects. Castle et al. developed a set of transferable rules for folding, cutting and joining rigid lattice materials [10]. For 3D kirigami structures, specific cuts to flat material can be buckled out of plane by a controlled tension on connected ligaments [38]. Research work on these papercraft techniques inform cut and fold prototyping systems; e.g., LaserOrigami uses a laser cutter to make cuts on a 2D sheet then melts them into specified bends for a precise 3D object [32]. FoldMold goes beyond this by enabling the use of a wide variety of materials through casting, and supporting the creation of large, curvy objects.
+
+#### 2.3.3 Controlled Bending
+
+Wood and other rigid but fibrous materials can be controllably bent with partial cuts, by managing cut width, shape and patterning [46]. Many techniques and designs achieve specific curves: e.g., kerfing, patterns of short through-cuts, can render a different and more continuous curvature than scoring (cutting partway through) [9, 21, 29, 49]. These methods support complex double-curved surfaces [9, 29], stretching [20], and conformation to preexisting curves for measurements [48]. With sturdy 2D materials, they create continuous curves strong enough to structurally reinforce substantial objects [7].
+
+#### 2.3.4 Joinery
+
+In fine woodwork, wood pieces are cut with geometries that are pressure-fit into one another, to mechanically strengthen the material bond which can be further reinforced with glue, screws or dowels. There are many joint types varying in ideal material and needed strength. Taking these ideas into prototyping, Joinery developed a parametric joinery design tool specifically for laser cutting to create 3D shapes [50]. Joinery has been used in rapid prototyping literature: Cignoni et al. creates a meshed, interlocked structure approximation of a positive shape to replicate a 3D solid object [13]. Conversely, SpringFit shows how mechanical joints can lock components of an object firmly in place and minimize assembly pieces [40].
+
+Our work leverages these papercraft, joinery and modeling techniques to achieve structurally sound, complex and curved shapes from 2D materials by using precise bending and joints.
+
+### 2.4 Computational Mold Creation
+
+Computational support can make complex geometric tasks more accessible to designers [8, 14, 24, 35, 39, 41, 42]. LASEC allows for simplified production of stretchable circuits through design software and a laser cutter [20]. Some of these tools include software that automates part of the process; others are computationally-supported frameworks or design approaches [45].
+
+Designers often begin with digital 3D models of the target object. To cast a 3D model, they must generate a complement (the object negative), and convert it to physical patterns for mold assembly. Examples where software speeds this process are Stackmold, which slices the object negative into laser-cuttable slices [47]; while Metamolds helps users optimize silicone molds [4].
+
+Computer graphics yields other approaches to 3D-to-2D mapping. A UV map is a flat representation of the surface of a 3D model; creating one is called *UV Unwrapping*. While [X, Y, Z] coordinates specify an object's position in 3D space, [U, V] coordinates specify a location on the surface of a 3D object. UV Unwrapping uses UV coordinate information to create a 2D pattern corresponding to a 3D surface, thus "unwrapping" it [16]. The Least Squares Conformal Maps (LSCM) UV Unwrapping algorithm is implemented in the popular open-source 3D modeling tool Blender [17].
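+
+As a toy illustration of unwrapping (not the LSCM algorithm itself), a developable surface such as a cylinder can be mapped to the UV plane analytically; the function below is a hypothetical sketch:
+
```python
# Unrolling a cylinder about the z-axis: u follows arc length around the
# circumference, v follows height. A toy stand-in for UV unwrapping of a
# developable surface; distances on the surface are preserved exactly.
import math

def cylinder_to_uv(x, y, z, radius):
    theta = math.atan2(y, x)      # angle around the cylinder's axis
    return (radius * theta, z)    # (u, v) = (arc length, height)

u, v = cylinder_to_uv(0.0, 2.0, 5.0, radius=2.0)
print(u, v)  # a quarter-turn around, 5 units up: u = pi, v = 5.0
```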
+
+Previous work in computer graphics has investigated the decomposition of 3D geometries into geometries that are suitable for CNC cutting [5]. For example, Axis-Aligned Height-Field Block Decomposition of 3D Shapes splits 3D geometries into portions that can be cut with 3-axis CNC milling [33]. D-Charts converts complex 3D meshes into 2D, nearly-developable surfaces [23].
+
+These tools signaled that FoldMold would also need computational support; however, our papercraft-based technology is utterly different. In the FoldMold pattern-generation tool, the 3D model of the object positive is "unwrapped" and elements are added to reassemble - fold - the 2D patterns into a structurally sound mold. Similar algorithms have been used in other tools, such as the Unwrap function in Fusion 360 [2], but not in a prototyping or mold-making context.
+
+## 3 THE FOLDMOLD TECHNIQUE
+
+In its entirety, FoldMold is a fast, low-cost and eco-friendly way to cast objects based on a 3D digital positive. It can achieve an identifiable set of geometries (Section 3.1), utilizes a set of mold design features (3.2), and consists of a set of steps (3.3). In Section 4 we describe our custom computational tool (FoldMold Pattern Builder) which makes designing complex FoldMolds feasible and fast.
+
+### 3.1 FoldMold Geometries
+
+A flexible piece of paper can be bent into many forms. Termed developable surfaces, they are derivable from a flat surface by folding or bending, but not stretching [44]. Mathematically, such surfaces possess zero Gaussian curvature at every point; that is, at every point on the surface, the surface is not curved in at least one direction. Cylinders and cones are examples of curved developable surfaces, but spheres are not: every point on a sphere is curved in all directions.
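+
+One way to check "zero Gaussian curvature at every point" on a discretized surface is the angle defect: 2π minus the sum of the face angles meeting at an interior vertex. A minimal sketch (the one-ring mesh representation here is a hypothetical simplification):
+
```python
# Discrete Gaussian curvature via angle defect: 2*pi minus the sum of the
# corner angles of the faces surrounding an interior vertex. Zero for a
# developable (flattenable) neighborhood, positive for a sphere-like one.
import math

def angle_defect(center, ring):
    """Angle defect at `center`; `ring` lists its neighbors in cyclic order."""
    total = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        u = [ai - ci for ai, ci in zip(a, center)]
        v = [bi - ci for bi, ci in zip(b, center)]
        cos = sum(ui * vi for ui, vi in zip(u, v)) / (
            math.dist(a, center) * math.dist(b, center))
        total += math.acos(cos)
    return 2 * math.pi - total

# A flat fan of four neighbors has zero defect; a pyramid apex does not.
flat = angle_defect((0, 0, 0), [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])
apex = angle_defect((0, 0, 1), [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])
print(flat, apex)  # ~0.0 (developable) and 2*pi/3 (not developable)
```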
+
+FoldMold can be used for any developable surface or connected sets of them. It can achieve a non-developable surface by approximating and translating it into a set of connected, individually developable surfaces (islands), which are then joined together. A single developable surface may also be divided into multiple islands, e.g., for ease of pattern construction or use. Islands comprise the basic shapes of a 2D FoldMold pattern (Figure 1, Step 3).
+
+FoldMold geometries can have several kinds of edges. Joints are seams between islands. Folds (sharp, creased bends) and smooth curves are both controlled via scoring, i.e., cuts partway through a material, possible with a lasercutter or handheld knife.
+
+A FoldMold island can have multiple faces which are equivalent to their 3D digital versions' polygons, i.e., the polygon or face resolution can be adjusted to increase surface smoothness. Faces are delineated by any type of edge, whether cut or scored.
+
+A strength of the FoldMold technique is its ability to construct large geometries. The size of a FoldMold geometry is characterized by three factors.
+
+Size/time scaling: While popular 3D printers accommodate objects of 14-28 cm (major dimensions), build time grows steeply with object size (roughly with volume). In contrast, FoldMold operations (2D cutting and folding) scale linearly or better with object size (Table 2).
+
+Weight of cast material: Paper is flexible and may deform under the weight of large objects. As we show in Section 5.2, we tested this technique with a large object cast from plaster (3.64 kg) and did not notice visible deformation. As objects get even larger and heavier, they will eventually require added support.
+
+Cutter specs: The cutter bed size limits the size of each island in the geometry. The material thickness and stiffness the cutter can accommodate are further limiting factors.
+
+### 3.2 FoldMold Features
+
+FoldMold produces precise, curved, but sturdy molds from paper via computationally managed bending, joinery and mold supports. Here we discuss the features of FoldMold.
+
+## Score-Controlled Bending for 3D Shapes from 2D Patterns
+
+Folds (Sharp Creases): Manual folding can produce uneven or warped bends, especially for thick or dense materials. To guide a sharp fold or crease, we score material on the outside of the fold line to relieve strain and add fold precision. Score depth influences the bending angle, but cuts that are too deep can reduce structural strength along the fold. We empirically found that cutting through ~50% of the material thickness is a good compromise for most folds and paper materials.
+
+Smooth Unidirectional Curves: As is well-known by foam-core modelmakers, repetition of score lines can precisely control a curve. For example, as we add lengthwise scores on a cylinder's long axis, its cross section approaches a circle; non-uniform spacing can generate an ovoid or U-shape. There is a trade-off between curve continuity, cutting time and structural integrity. Designers can adjust scoring density - the frequency of score lines - based on specific needs; e.g., speed often rules in early prototyping stages, replaced by quality as the project reaches completion. We can smooth some discretized polygonization by filling corners and edges with wax.
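The continuity/cutting-time trade-off can be quantified with a small sketch (the radius is hypothetical, not from the paper): each flat facet of an n-score approximation of a circle deviates from the true arc by the sagitta r(1 - cos(pi/n)), which shrinks as scoring density grows.

```python
import math

# Score-density trade-off sketch (illustrative): approximating a
# cylinder of radius r with n evenly spaced lengthwise scores turns its
# cross-section into a regular n-gon. The sagitta (max deviation of
# each flat facet from the true circle) quantifies the smoothness
# gained per additional score line.
def facet_deviation(radius, n_scores):
    return radius * (1 - math.cos(math.pi / n_scores))

r = 50.0  # mm, hypothetical cylinder radius
for n in (8, 16, 32):
    print(f"{n} scores -> {facet_deviation(r, n):.2f} mm deviation")
```

Doubling the score count roughly quarters the deviation, so early "draft" molds can use sparse scoring and later iterations can densify it.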
+
+## Joinery to Attach Edges and Assemblies
+
+Joints must (1) seal seams, (2) maintain interior smoothness, for casting surface finish, and (3) support manual assembly. We implemented sawtooth joints, pins, and glue tabs (Figure 2).
+
+
+
+Figure 2: FoldMold joint types (A) Sawtooth and (B) pin joints utilize pressure fitting for secure joints and to maintain alignment. (C) Glue tabs rely on an adhesive.
+
+Sawtooth Joints: Pressure fits create a tight seal, with gaps slightly smaller than the teeth and held by friction, enabled by paper's compressibility (as shown in Fig. 2A). To ease insertion, we put gentle guiding tapers on the teeth, with notches to prevent pulling out. Best for straight, perpendicular seams, these joints can face outward from the model for interior surface integrity.
+
+Pin Joints: Small tabs are pushed through slightly undersized slots; a flange slightly wider than the corresponding slot ensures a pressurized, locking fit (shown in Fig. 2B). Pin joints are ideal for curved seams, which other techniques would discretize: e.g., a circle of slots on a flat base can smoothly constrain a cylinder with pins on its bottom circumference. Tapers and notches on the pins facilitate assembly.
+
+Glue Tabs: Fast to cut and easy to assemble, glue tabs join two flat surfaces with adhesive (Fig. 2C). Overlapping the tabs (as in typical box construction) would create an interior discontinuity. Instead, we bend both tabs outwards from the model and paste them together, like the seam of an inside-out garment. Thus accessible, they can be manipulated to reduce mismatch while preserving interior surface quality. We have used commonly available, multi-purpose white glue with a 20-30 minute drying time. Assembly time can be greatly reduced by clamping the drying tabs.
+
+Ribbing for Support: Wax stiffening greatly strengthens the paper. In some cases, e.g., for dense casting materials such as plaster or for large volumes, more strength may be needed to prevent deformation. External support can also help to maintain mold element registration (Figure 1, Steps 2 and 4).
+
+### 3.3 Constructing a FoldMold Model
+
+Constructing a mold using the FoldMold method can be described in five steps, illustrated in Figure 1. Section 4 describes how the FoldMold Pattern Builder ("Builder" hereafter) assists the process.
+
+Step 1: Create or import a 3D model. The designer starts by modelling or importing/editing the 3D object positive in Blender.
+
+## Step 2: Design the FoldMold - iteratively add and refine seams, bends, joinery and supports:
+
+Substeps: Conceptually, the FoldMold design-stage subtasks are to (a) indicate desired joint lines (island boundaries) on the 3D model; (b) choose joinery types for joints; (c) adjust face resolution to achieve desired curvature; (d) specify scoring at internal (non-joint) face edges to control bending within islands; and (e) add ribbing for mold support. Each of these tasks is supported in the FoldMold Pattern Builder tool's interface (below). Any of these substeps may be repeated during digital design, or revisited after the mold has been physically assembled to adjust the design. This may be especially important for novice users or for challenging projects.
+
+Automation and Intervention: Builder can perform Step 2 fully automatically, but because its unwrapping algorithm does not consider all factors of the molding process, such as the preferred building process or seam identification, results may sometimes be improved with maker intervention. For example, one can intervene at (a) by constraining joint lines and then let Builder work out the scoring (d). By default, Builder uses glue tabs for joints, but one can step in at (b) upon realizing that a pin joint will work better than glue tabs for a circular seam such as a cup bottom. Builder's default ribbing is three ribs along each of the X- and Y-axes, with two Z-axis ribs holding them in place. The maker can intervene to modify ribbing placement frequency, and to position and orient individual ribbing pieces to best support a given geometry.
+
+Step 3: Unfold and cut the FoldMold design. Builder unfolds the object's geometry into a cutter-ready 2D mold pattern, exported as a PDF file. The maker cuts the 2D patterns from paper by sending the PDF to the cutter, e.g., a laser cutter or vinyl cutter, or even by laser-printing the patterns and cutting them with an X-Acto knife or a pair of scissors.
+
+Step 4: Assemble, wax and cast. The FoldMold physical construction steps are shown in Figure 3. The maker assembles the cut patterns into a 3D mold by creasing and bending on fold lines and joining at seams according to the joinery method.
+
+To build mold strength, the maker repeatedly dips it in melted wax (paraffin has a melting point of 46-68 °C). As the wax hardens, it stiffens the paper, "locking in" the mold's shape. For very fine areas, dipping may obscure desired detail or dull sharp angles; wax can be added with a small brush, and excess can be removed or surface detail emphasized.
+
+Curable casting materials (e.g., silicone or epoxy resin), or materials that dry (plaster, concrete) are simply prepared and poured.
+
+Step 5: Set and remove mold. After setting for the time dictated by the casting material, the mold is easily taken apart by gently tearing the paper and peeling it away from the cast object. Any excess wax crumbs that stick to the object can be mechanically removed or melted away with a warm tool.
+
+## 4 COMPUTATIONAL TOOL: FOLDMOLD PATTERN BUILDER
+
+A designer should be able to focus effort on the target object rather than on its mold, and FoldMold-making requires complex and laborious spatial thinking, especially for complex shapes. Fortunately, these operations are mathematically calculable, and features can be placed using heuristics. To speed up the mold-making process, our computational tool - the FoldMold Pattern Builder - automates the generation of laser-cuttable 2D patterns from a 3D positive while allowing designer intervention. We describe its implementation and usability evaluation.
+
+### 4.1 Implementation
+
+We wrote Builder as a custom Blender add-on, using Blender's Python API.
+
+#### 4.1.1 User Interface
+
+We created Builder's user interface to reflect primary FoldMold design activities, as described in Section 3.3. The interface's panels shown in Figure 4 map to Steps 2a-d (panel A, Mold Prep), Step 2e (panel B, Ribbing Creation) and Step 3 (panel C, Mold Unfolding).
+
+Builder's user interface is integrated into the Blender user interface, following the same style conventions as the rest of the software in order to reduce the learning curve for novice users who may already be familiar with 3D modeling programs. We tested multiple different configurations of the panels before finding that this grouping of options was the most intuitive.
+
+
+
+Figure 3: FoldMold physical construction (1) Assemble the mold: in this case, tabs are glued and dried. (2) Wax: the mold is dipped in wax to strengthen and seal it, preparing it for casting. (3) Cast: the casting material, in this case plaster, is poured into the mold and left to harden.
+
+
+
+Figure 4: The FoldMold Pattern Builder's user interface has panels based on three design activities, each accessed from a menu bar on the side of the Blender screen. Target model edges and vertices can be first selected while in Blender "edit" mode. (A) Mold Prep Panel: Specify material, apply seams and joinery types, and create scores. (B) Ribbing Panel: Generate and conform ribbing elements, with user-specification of frequency of ribbing elements. (C) Unfold Panel: Export the mold into 2D patterns.
+
+#### 4.1.2 Bending
+
+In contrast to cut and joined seams, bending edges (for both sharp creases and smooth curves; Section 3.1) remain connected after unwrapping and need no joinery; however, they need to be scored.
+
+
+
+Figure 5: Creating scores with the FoldMold Pattern Builder. (1) Select faces of the object that are to be scored. (2) Choose axis around which scores should be drawn. (3) Define scoring density. (4) Apply scores to faces.
+
+Builder automatically detects folds as non-cutting edges that demarcate faces and, on its own, generates a scoring pattern for them. As noted in Section 3.3, the user can intervene in a number of ways. Scoring can be applied by following the steps described in Figure 5: (1) face selection, (2) axis choice from Cartesian options, (3) assigning score density (polygon resolution), and finally (4) applying scores to the faces with the press of a button.
+
+A finely resolved curve can be achieved by adjusting the score density along faces in the 3D object. If a score density has been set (Figure 5, Step 3), Builder creates additional fold lines across those faces beyond its default.
+
+To instruct the cutter how to handle them, Builder assigns colors to cut and fold lines (red and green respectively; Figure 1, Steps 2-3). In the exported PDF, this is a coded indicator to the CNC cutter to apply different power settings when cutting, recognized in the machine's color settings.
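The color-to-power convention can be sketched as a minimal exporter (an illustration only: Builder emits PDF, and the RGB values and SVG output here are our assumptions). Cutters map stroke colors to power settings, so cut lines and score lines are written in distinct colors.

```python
# Minimal sketch of color-coded cut/score output. Red strokes are
# full-power cuts, green strokes are partial-depth scores; the cutter's
# color settings map each color to a power level.
LINE_COLORS = {"cut": "#ff0000", "score": "#00ff00"}

def lines_to_svg(lines, width=100, height=100):
    """lines: iterable of (kind, (x1, y1), (x2, y2)) tuples, mm units."""
    body = "".join(
        f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
        f'stroke="{LINE_COLORS[kind]}" stroke-width="0.1"/>'
        for kind, (x1, y1), (x2, y2) in lines
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}mm" height="{height}mm">{body}</svg>')

pattern = [("cut", (0, 0), (100, 0)), ("score", (50, 0), (50, 100))]
svg = lines_to_svg(pattern)
print("#ff0000" in svg and "#00ff00" in svg)  # True
```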
+
+#### 4.1.3 Joinery
+
+To reassemble the islands created during UV-unwrapping into a 3D mold, Builder defines joinery features (sawtooths, pins, glue tabs) along the 2D cut-outs' mating edges in repeating, aligned joinery sequences. All three joint types can be included in a given model.
+
+Builder can assign joinery to cut edges automatically. Its default joint type is the glue tab, the easiest to cut and assemble. The user can override this in the Builder interface (Figure 6) by instead selecting an edge, then choosing and applying a joinery type.
+
+
+
+Figure 6: Creating joinery with FoldMold Builder. (1) Select edges of the object to be joined. (2) Choose a joinery type, or default to "Auto" (glue tabs). (3) Apply joinery to the edge.
+
+Builder implements this functionality as follows. The basic components of each joinery sequence are referred to as tiles, which are designs stored as points in SVG files. Builder defines joinery sequences from several tiles, first parsing their files. This system is easily extended to more joint types simply by adding new SVG images and combining them with existing ones in new ways.
+
+As an example, a sawtooth joinery sequence is composed of tooth and gap tiles. These are arranged in alternation along one edge, and in an inverted placement along the mating edge such that the two sets of features fit together (i.e., register). Builder generates a unique sequence for each mating edge pair because the number of tiles placed must correspond to the length of the edge, and matching edges must register, e.g., with pin/holes aligned.
+
+Once created, a joinery sequence must be rotated and positioned along its mating edges. Builder applies transformations (rotation and translation) to each tile sequence to align it with its target edge, and positions it between the edge's start and end vertices.
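The tiling-and-placement logic can be sketched as follows (the function names, tile vocabulary, and spacing are our assumptions, not Builder's actual code): an alternating tooth/gap sequence is generated to match the edge length, its inverse is placed on the mating edge so the features register, and each sequence is rotated and translated onto its target edge.

```python
import math

# Illustrative sketch of joinery-sequence tiling and placement.
def sawtooth_sequence(edge_length, tile_width):
    """Alternate tooth/gap tiles along an edge; the mating edge gets
    the inverted sequence so the two sets of features register."""
    n = max(1, round(edge_length / tile_width))
    seq = ["tooth" if i % 2 == 0 else "gap" for i in range(n)]
    mate = ["gap" if t == "tooth" else "tooth" for t in seq]
    return seq, mate

def place_along_edge(n_tiles, start, end):
    """Rotate/translate tile anchors onto the target edge: evenly
    spaced positions from the edge's start vertex toward its end."""
    (x0, y0), (x1, y1) = start, end
    angle = math.atan2(y1 - y0, x1 - x0)  # rotation of the sequence
    return [
        (x0 + (x1 - x0) * i / n_tiles, y0 + (y1 - y0) * i / n_tiles, angle)
        for i in range(n_tiles)
    ]

seq, mate = sawtooth_sequence(edge_length=30, tile_width=5)
print(seq)   # ['tooth', 'gap', 'tooth', 'gap', 'tooth', 'gap']
print(mate)  # ['gap', 'tooth', 'gap', 'tooth', 'gap', 'tooth']
anchors = place_along_edge(len(seq), (0, 0), (30, 0))
```

Because the tile count derives from the edge length, every mating pair gets a unique, registered sequence, mirroring the behavior described above.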
+
+
+
+Figure 7: Creating ribbing with the FoldMold Pattern Builder. (1) Set number of slices to add along each axis. (2) Add ribbing to object and optionally transform slices around object for maximal support. (3) Select "Conform Ribbing" to finalize the ribbing shape.
+
+#### 4.1.4 Ribbing
+
+Builder defines ribbing along three axes (Figure 7) for maximal support and stability. X- and Y-axis ribs slot together, supporting the mold, while Z ribs slot around and register the XY ribbing sheets.
+
+It is impossible to physically assemble ribbing that fully encloses a mold, as the mold would have to pass through it. Builder splits each ribbing sheet in half, then "conforms" it by clipping the ribbing sheets at the mold surface, performing a boolean difference operation between each ribbing sheet and the mold. During assembly, the user joins the halves around the mold.
+
+Within the Builder interface, the user can modify the default ribbing by choosing and applying a ribbing density in terms of slices to be generated per axis (Figure 7). The ribs can then be manipulated (moved, rotated, scaled) within the Blender interface to maximize their support of the object, then conformed with a button click.
+
+#### 4.1.5 UV Unwrapping
+
+Conversion of the 3D model into a flat 2D layout is automated through UV Unwrapping (Section 2.4).
+
+At this stage, the Blender mesh object (the 3D model) is converted into a 2D "unfolded" pattern. Our implementation draws from the "Export Paper Model from Blender" add-on [15], from which we use the UV unwrapping algorithm, which employs Least Squares Conformal Maps (LSCM). Initially, the 3D model is processed as a set of edges, faces, and vertices. We then reorient the faces of the 3D object, as if unfolded onto a 2D plane, and transform the edges, faces, and vertices into a UV coordinate space.
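The length-preserving character of unwrapping can be illustrated with a toy flattening of a single triangle (our own sketch; the actual add-on applies least-squares conformal mapping over whole islands): a face is laid flat with its edge lengths intact, which is why a developable cut pattern reassembles into the 3D shape without stretch.

```python
import math

def dist(p, q):
    """Euclidean distance; works for 2D and 3D point tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def flatten_triangle(a, b, c):
    """Lay a 3D triangle in the plane, preserving its edge lengths:
    A at the origin, B on the x-axis, C solved from the two radii."""
    ab, ac, bc = dist(a, b), dist(a, c), dist(b, c)
    x = (ab ** 2 + ac ** 2 - bc ** 2) / (2 * ab)
    y = math.sqrt(max(0.0, ac ** 2 - x ** 2))
    return (0.0, 0.0), (ab, 0.0), (x, y)

tri3d = ((0, 0, 0), (1, 0, 1), (0, 1, 1))  # a tilted triangle in 3D
a2, b2, c2 = flatten_triangle(*tri3d)
# Edge lengths survive the unfold:
print(abs(dist(a2, b2) - dist(tri3d[0], tri3d[1])) < 1e-9)  # True
print(abs(dist(b2, c2) - dist(tri3d[1], tri3d[2])) < 1e-9)  # True
```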
+
+Unwrapping delivers a set of islands (Section 3.1) - themselves fold-connected faces delineated by joint edges. While these seams are automatically generated during unwrapping, they can optionally be user-defined through Builder's Unfold panel (Figure 4).
+
+Each island has a bounding box. If this exceeds page size (set by media or cutter workspace), it will be rotated to better fit; failing that, it will trigger an oversize error. The user can then scale the 3D model or define more seams, for more but smaller islands.
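The fit, rotate, or fail logic can be sketched as follows (the page size and return labels are hypothetical, not Builder's actual values):

```python
# Sketch of the island page-fit check: an island's bounding box must
# fit the cutter page; if not, try rotating it 90 degrees; failing
# that, flag it as oversize so the user can rescale the model or add
# seams to split the island into smaller ones.
def fits_page(bbox, page):
    bw, bh = bbox
    pw, ph = page
    if bw <= pw and bh <= ph:
        return "fits"
    if bh <= pw and bw <= ph:      # 90-degree rotation
        return "fits rotated"
    return "oversize"

PAGE = (60, 30)  # cm, hypothetical cutter workspace
print(fits_page((25, 28), PAGE))  # fits
print(fits_page((28, 55), PAGE))  # fits rotated
print(fits_page((70, 10), PAGE))  # oversize
```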
+
+### 4.2 Usability Review of FoldMold Pattern Builder Tool
+
+We conducted a small (n = 3) user study for preliminary insight into designers' expectations and experiences with Builder.
+
+#### 4.2.1 Method
+
+We recruited three participants (all male Computer Science graduate students whose research related to 3D modeling, ensuring familiarity with relevant software). P2 was moderately experienced with Blender; P1 had used Blender but was not experienced with it; and P3 had never used Blender but had extensive experience with similar software (3D Studio Max). Conducted over Zoom, sessions took 45-60 min, with participants accessing Blender and the Builder add-on via Zoom remote control while the researcher recorded the session. Participants were compensated \$15.
+
+We introduced each participant to the FoldMold technique, demonstrating how to design joints, bends, and ribbing for various geometries. The researcher walked the participant through a Builder tutorial with a simple practice object (a cube) to design a mold for, and answered questions. They then estimated how long they would take to digitally design a mold for the object in Figure 9, before actually designing and exporting the mold pattern for the object using Builder. The session finished with a short interview.
+
+Table 1: Mold design time: participant expectations vs. actual time for mold design.
+
+| Participant | Estimated 3D-printable mold | Estimated FoldMold (no Builder) | Estimated FoldMold (Builder) | Actual FoldMold (Builder) |
| --- | --- | --- | --- | --- |
| P1 | several hours | most of a day | 2 min | 5 min 40 s |
| P2 | 30-60 min | several hours | a few minutes | 5 min 10 s |
| P3 | 30-40 min | 3 hours | 2 min | 9 min 30 s |
+
+#### 4.2.2 Results
+
+We review participants' qualitative and quantitative responses to our three questions.
+
+## 1. Time: How much time do users expect to and actually spend on mold design?
+
+All participants predicted Builder would be much faster (2 min or "a few minutes") than either designing a 3D-printable mold or manually creating a FoldMold design (30 min to a day) (Table 1). Their actual recorded Builder-facilitated times were under 10 minutes (average 6:47). P3, with previous casting experience, iterated on their original design with considerations of manual assembly; their longer time resulted in a slightly more easily assembled mold. While P1 and P2 had no previous mold-making experience, Builder successfully guided them through the creation of a simple mold.
+
+## 2. Outcome: Could they customize; and could outcome control be improved?
+
+All participants reported good outcome control, but offered three possibilities for improvement.
+
+Joinery density: Participants tended to select all edges in a curve, applying a joint type to the entire selection. P2 was interested in selecting a set of edges and applying joints to, e.g., every third edge, to prevent overly dense joints on a scored object and consequent assembly complication.
+
+Mold material combinations: While Builder allows users to select or define a mold material type (e.g., chipboard or cardboard), they would have liked to indicate multiple materials for a single mold. P3 attempted to define different material settings for different object faces, to accommodate regions which needed flexibility (thin bendy paper) versus strength (thick and dense).
+
+Cut positioning: When Builder currently exports mold patterns, it arranges pieces to maximize paper usage. P1 would have valued grouping pieces based on relationship or assembly order.
+
+## 3. Problems: What obstacles were encountered?
+
+While participants were generally positive, there were instances where transparency could have been better. P3 was unsure of how Builder would automate mold design without user input, and had to do a trial export to learn it. P1 and P2 asked for warnings when their design choices would lead to issues with the mold or cast object.
+
+## 5 DEMONSTRATIONS
+
+To demonstrate FoldMold performance in our goals of curvature, large scale, and deterministic outcome, we designed molds for three objects using the FoldMold Pattern Builder and built them in a home workshop, DIY setting. We used a Silhouette Cameo 4 vinyl cutter [1] to cut patterns onto chipboard paper, which we then assembled, dipped in paraffin wax, and cast. We purchased chipboard from an art store (\$2.20 per 35x45 in sheet), and paraffin from a grocery store at ~\$10 per box.
+
+
+
+Figure 8: Left: the 3D model of the planter. Right: the physical planter cast with plaster and a plant inserted.
+
+In Table 2 we compare FoldMold construction times for each demo object (from digital design to de-molding, not including material curing) to the time it would take to 3D print the object positive, and the time it would take to 3D print a mold (negative) for casting the object. Mold design and construction for all objects were done by the authors, with their relative expertise with this new technique. The times for each mold construction are taken from a single build. FoldMold achieves much faster build times, especially as model size increases.
+
+### 5.1 Curvature
+
+To demonstrate FoldMold curvature and complexity performance, we chose a heat-protective silicone kitchen grip with multiple curvature axes and overhangs which make it harder to 3D print and should also challenge a fold-based technique.
+
+Figure 1 shows the steps for building a heat protective silicone kitchen gripper, beginning by (1) modelling the geometry in Blender.
+
+We (2) scored curved areas, marked mold seams and set joinery types using Builder. The model's varying surface topology indicated a mix of joinery types. For curved areas we chose pin joints, and for all non-curved areas we used glue tabs to keep the interior of the seams flat and smooth. In ribbing design, we chose a ribbing density of one slice per axis to keep the inner and outer edges of the cavities registered without requiring very much support.
+
+After (3) exporting the mold layout and cutting it from paper using our vinyl cutter, we (4) assembled the mold, dipped it in wax, and poured the silicone. Once the silicone had cured, we (5) removed the cast object from the mold.
+
+Mold materials for the gripper mold (excluding casting material and vinyl cutter) cost ~\$4.50.
+
+### 5.2 Scale
+
+A FoldMold strength is creating large molds (Section 3.1) without the speed-size tradeoff common to other rapid prototyping techniques. We demonstrate this by casting a planter measuring 18.8 cm in total height and 17.8 cm in diameter, with an intricate angular outer surface and a hollow interior to allow a plant to be inserted (Figure 8). We used plaster for strength.
+
+Mold creation, shown in Figure 3, was similar to Figure 1, with minor adjustments. Due to its angular geometry, the mold did not need to be scored. The long, straight edges could be largely connected using glue tabs and adequately secured with wax. Planter mold materials cost ~\$5.50.
+
+### 5.3 Variability
+
+While rapid-prototyping workflows do not usually involve multiple re-casts of the same object, we wanted to test the extent to which the output is deterministic. In early prototyping stages, it can be beneficial to introduce some variability as a catalyst to ideation and inspiration, whereas in later stages of prototyping, higher determinism is useful as the design approaches completion.
+
+We tested FoldMold's variability by making three cups from the same mold pattern and casting them with ice (Figure 9), following a similar process to that of Figure 1. Due to these molds' small size, ribbing was not needed. The cups' cylindrical geometry led us to use pin joints around the top and bottom, with glue tabs connecting the sides. Table 5 shows the dimensions of each cast cup. Mold materials for each cup cost ~\$2.
+
+
+
+Figure 9: Left: the 3D model of the drinking cup. Right: three physical cups cast in ice.
+
+## 6 Discussion
+
+We review progress towards our goals of accessibility, performance, usability and customisability.
+
+### 6.1 Accessibility: Resource Requirements, Cost, Ecological Load, Versatility
+
+We set out to establish a process that was not just fast, but could be done in a home kitchen (many makers' "pandemic workshop") with readily available, low-cost materials and without toxic waste.
+
+Paper and wax materials together cost \$3-\$10 per model at the scales demonstrated here, and are easy to source in everyday consumer businesses. Other costs to this project include a computer to design molds, a cutter and casting material. The latter is highly versatile; FoldMold can potentially cast anything that sets at a temperature low enough not to melt the wax, including many food-safe items (we have tried chocolate and gelatin as well as ice).
+
+The disposable mold is biodegradable. We found the mold-making materials easy to cut and assemble using a vinyl cutter (a small consumer CNC device). While we could not demonstrate laser-cut examples due to COVID-19 access restrictions, laser cutters are common in staffed school and community workshops; although more expensive, they offer higher precision and speed.
+
+### 6.2 Speed and Outcome
+
+We targeted high creation speed of single-use molds suitable for diverse casting materials in an accessible setting. We compared FoldMold with the go-to method of 3D printing (as opposed to other DIY casting methods like StackMold) because it can also achieve geometries we sought.
+
+We found FoldMold build-times to be extremely competitive with 3D printing (Table 2), and that the process is capable of a highly interesting range of shape and scale at a fidelity level and surface quality that makes it a viable casting alternative. With 3D printing, a maker faces fewer steps but will wait longer for their mold or positive to print, particularly for larger objects. Our FoldMold planter took 2.5 hours to cut, assemble, and dip in wax. This would have taken a 3D printer 48 hours for a 3D positive and 56 hours for the mold.
+
+Compared to 3D printing an object positive or mold, FoldMold requires a more hands-on approach; and maker skill (mold design optimizations and mold-craft "tricks") can improve results. However, users already find the current process straightforward.
+
+Our wax-stiffened paper approach, combined with the manual assembly that this process affords, has proven a rewarding combination. Beyond efficiency, mold-handling and shape "tweaking" are an opportunity for spontaneous, fine-grained control over the final geometry beyond what is captured in the digital model. Techniques that rely on the removal of material, such as scoring, may leave the surface finish with unwanted ridges; wax dipping prevents this, filling and smoothing cuts. Finally, wax-soaked paper is a convenient non-stick surface that is easy to remove.
+
+Table 2: Mold making times (Silhouette Cameo 4 Vinyl Cutter [1]). Mold making times exclude curing. 3D printing estimates were generated by the Cura Lulzbot 3D printing software [3] at 100 mm/s printing speed and 1.05 g/cm³ fill density. We estimate that if laser-cut, FoldMold cuts would be 2-4 times faster.
+
+| Demo | Digital Design | Cutting | Mold Prep & Casting | De-molding | Total FoldMold | 3D Print, Positive | 3D Print, Mold |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kitchen Grip | 10 min | 41 min | 2 hours | 1 min | 2h 52 min | 21h 7 min | 41h 13 min |
| Planter | 10 min | 43 min | 1h 37 min | 1 min | 2h 31 min | 47h 38 min | 56h 12 min |
| Cup | 5 min | 22 min | 36 min | 5 min | 1h 8 min | 5h 20 min | 6h 56 min |
+
+Table 3: Comparing dimensions of digital and cast grips
+
+| Prototype | Height | Width | Depth |
| --- | --- | --- | --- |
| 3D model | 11 cm | 17 cm | 10 cm |
| Silicone casting | 11.5 cm | 17.1 cm | 10.5 cm |
+
+Table 4: Comparing dimensions of digital and cast planters
+
+| Prototype | Height | Diameter | Depth |
| --- | --- | --- | --- |
| 3D model | 19 cm | 18 cm | 15 cm |
| Plaster casting | 18.8 cm | 17.8 cm | 15.2 cm |
+
+We aimed to support the creation of highly curved surfaces. The use of computational support removed most limits, and within the space of one-directional curves we are not aware of anything that FoldMold can't build at some scale. For non-developable surfaces (entirely or as a part of a hybrid mold) or highly precise geometries, 3D printing may be more suitable.
+
+Finally, we were pleased by FoldMold's versatility, not only in casting material but in the adaptability of the method itself. A mold can be adjusted to trade off precision against construction time and material use (more faces and structural elements). Many items can be completely hand-made, albeit more slowly, or the process can be boosted with more powerful tools. We foresee that this technique could be adjusted within itself (e.g., to support multiple paper weights within a FoldMold, as per a study participant's suggestion) but also combined easily with other complementary techniques.
+
+### 6.3 Usability and Customisability
+
+FoldMold's mold design process is facilitated by the FoldMold Pattern Builder. In our user study, participants could design one of our demonstration molds in an average of 6:47 minutes; based on participants' well-informed estimates, a mold of the same shape would otherwise have taken several hours to design. Cutting down on mold design time is a major benefit of FoldMold.
+
+Alongside Builder's ability to quickly create molds, we aimed to balance user control and tool automation. Our participants were able to digitally customize their molds to assign specific joint types, materials, and structural supports. While customization lets the user design a mold specific to their making needs, Builder also offloads intricate design work by automating the 2D cut patterns of joinery, scoring, and ribbing. Based on participant responses, a desirable adaptation of the tool would account for customizations such as handling different paper material types in one mold or mixed casting materials (e.g., silicone and plaster).
+
+Table 5: Comparing dimensions of three ice cups
+
+| Prototype | Height | Diameter | Thickness | Depth | Capacity |
| --- | --- | --- | --- | --- | --- |
| 3D model | 7.6 cm | 7.3 cm | 1 cm | 4.8 cm | 110 mL |
| 1 | 7.3 cm | 6.9 cm | 0.8 cm | 4.5 cm | 91 mL |
| 2 | 7.5 cm | 6.9 cm | 0.7 cm | 4.8 cm | 93 mL |
| 3 | 7.0 cm | 7.1 cm | 1.0 cm | 4.7 cm | 90 mL |
+
+## 7 CONCLUSIONS AND FUTURE WORK
+
+In this paper, we contributed a novel paper-and-wax mold-making technique that allows 3D molds to be constructed from 2D cut patterns. We demonstrated FoldMold's capabilities through demonstrations of curvature, scale, precision and repeatability. We developed the FoldMold Pattern Builder, a computational tool that automatically generates 2D mold patterns from 3D objects with optional designer control, dramatically reducing design time from hours or days to minutes. We conducted a small user study to investigate how Builder can better support designers, and found that increasing user control over joinery density, material combinations, and island positioning would be helpful.
+
+Here, we discuss the directions that future work should explore.
+
+Quasi-Developable Surfaces: FoldMold currently implements only straight-line bends, but like origami, it could employ methods like controlled buckling to achieve curved 3D fold lines, which would allow it to achieve a larger space of geometries. Relatedly, kerfing is a woodworking technique that allows flat materials to be controllably bent in two dimensions (as opposed to the one dimension supported by scoring) via intricate cut-away patterning [9, 21, 29, 49]. In principle this is similar to scoring; however, because kerfing removes material, it can stretch as well as bend, attaining quasi-developable surfaces. Future work should explore how buckling and kerfing can be incorporated into FoldMold to support more complex curvatures.
+
+Assembly Optimizations: As FoldMolds get more complicated, their hands-on assembly becomes more challenging. Future work should explore ways to computationally optimize the components of the mold for faster assembly. For example, joinery can be minimized and placement of seams optimized; model geometries themselves can be simplified for a speed-fidelity trade-off useful at early, "draft quality" prototyping stages.
+
+Multi-Material Molds and Casts and Interesting Inclusions: We can investigate material combinations in two ways. First, multiple molding materials (i.e., different paper weights) can theoretically be used together for molds that are very flexible in certain areas and very strong in others. Second, certain prototypes may require multiple casting materials in the same mold, and this would influence how the 2D mold pieces fit together and the needed support structures. This can potentially be expanded to support the prototyping of objects with embedded electronic components like sensors and actuators for applications such as soft robotics and wearable electronics.
+
+## ACKNOWLEDGMENTS
+
+Anonymized for review.
+
+## REFERENCES
+
+[1] Silhouette cameo 4 - white.
+
+[2] Software for product design: Fusion 360.
+
+[3] Cura lulzbot edition, Sep 2020.
+
+[4] T. Alderighi, L. Malomo, D. Giorgi, N. Pietroni, B. Bickel, and P. Cignoni. Metamolds: Computational design of silicone molds. ACM Trans. Graph., 37(4), July 2018. doi: 10.1145/3197517.3201381
+
+[5] C. Araújo, D. Cabiddu, M. Attene, M. Livesu, N. Vining, and A. Sheffer. Surface2Volume: Surface segmentation conforming assemblable volumetric partition. ACM Transactions on Graphics, 38(4), 2019. doi: 10.1145/3306346.3323004
+
+[6] P. Baudisch, A. Silber, Y. Kommana, M. Gruner, L. Wall, K. Reuss, L. Heilman, R. Kovacs, D. Rechlitz, and T. Roumen. Kyub: A 3d editor for modeling sturdy laser-cut objects. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300796
+
+[7] D. Beyer, S. Gurevich, S. Mueller, H.-T. Chen, and P. Baudisch. Platener: Low-fidelity fabrication of 3d objects by substituting 3d print with laser-cut plates. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 1799-1806. ACM, Seoul, Korea, 2015.
+
+[8] V. P. C and D. Wigdor. Foldem: Heterogeneous object fabrication via selective ablation of multi-material sheets. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI, pp. 5765-5775. ACM, New York, NY, USA, 2016. doi: 10.1145/2858036. 2858135
+
+[9] M. Capone and E. Lanzara. Kerf bending: ruled double curved surfaces manufacturing. XXII CONGRESSO INTERNACIONAL DA SO-CIEDADE IBEROAMERICANA DE GRÁFICA DIGITAL, XXII(1):1-8, 2018.
+
+[10] T. Castle, Y. Cho, X. Gong, E. Jung, D. M. Sussman, S. Yang, and R. D. Kamien. Making the cut: Lattice kirigami rules. Physical review letters, 113(24):245502, 2014.
+
+[11] B. G.-g. Chen, B. Liu, A. A. Evans, J. Paulose, I. Cohen, V. Vitelli, and C. Santangelo. Topological mechanics of origami and kirigami. Physical review letters, 116(13):135501, 2016.
+
+[12] G. P. Choi, L. H. Dudte, and L. Mahadevan. Programming shape using kirigami tessellations. Nature materials, 18(9):999-1004, 2019.
+
+[13] P. Cignoni, N. Pietroni, L. Malomo, and R. Scopigno. Field-aligned mesh joinery. ACM Trans. Graph., 33(1), Feb. 2014. doi: 10.1145/ 2537852
+
+[14] S. Coros, B. Thomaszewski, G. Noris, S. Sueda, M. Forberg, R. W. Sumner, W. Matusik, and B. Bickel. Computational design of mechanical characters. ACM Trans. Graph., 32(4):83:1-83:12, July 2013. doi: 10.1145/2461912.2461953
+
+[15] A. Dominec. Export Paper Model from Blender. GitHub, 2020.
+
+[16] L. Flavell. Uv mapping. In Beginning Blender, pp. 97-122. Apress, 2010.
+
+[17] B. Foundation. Home of the blender project - free and open 3d creation software.
+
+[18] R. Geretschläger. Euclidean constructions and the geometry of origami. Mathematics Magazine, 68(5):357-371, 1995.
+
+[19] D. Goldberg. History of 3d printing: It's older than you think [updated], Dec 2018.
+
+[20] D. Groeger and J. Steimle. Lasec: Instant fabrication of stretchable circuits using a laser cutter. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605. 3300929
+
+[21] O. Z. Güzelci, S. Alaçam, Z. Bacinoğlu, et al. Enhancing flexibility of 2d planar materials by applying cut patterns for hands on study models. Congress of the Iberoamerican Society of Digital Graphics, XX(1):1-7, 2016.
+
+[22] J. Jiang, J. Stringer, X. Xu, and R. Y. Zhong. Investigation of printable threshold overhang angle in extrusion-based additive manufacturing for reducing support waste. International Journal of Computer Integrated Manufacturing, 31(10):961-969, 2018.
+
+[23] D. Julius, V. Kraevoy, and A. Sheffer. D-charts: Quasi-developable mesh segmentation. In Computer Graphics Forum, vol. 24, pp. 581- 590. Citeseer, 2005.
+
+[24] M. Konaković, K. Crane, B. Deng, S. Bouaziz, D. Piker, and M. Pauly. Beyond developable: Computational design and fabrication with auxetic materials. ACM Trans. Graph., 35(4):89:1-89:11, July 2016. doi: 10.1145/2897824.2925944
+
+[25] M. Kreiger, M. Mulder, A. Glover, and J. Pearce. Life cycle analysis of distributed recycling of post-consumer high density polyethylene for 3-d printing filament. Journal of Cleaner Production, 70:90-96, 2014. doi: 10.1016/j.jclepro.2014.02.009
+
+[26] J. McCrae, N. Umetani, and K. Singh. Flatfitfab: Interactive modeling with planar sections. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST, p. 13-22. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2642918.2647388
+
+[27] J. Milewski, G. Lewis, D. Thoma, G. Keel, R. Nemec, and R. Reinert. Directed light fabrication of a solid metal hemisphere using 5-axis powder deposition. Journal of Materials Processing Technology, 75(1):165 - 172, 1998. doi: 10.1016/S0924-0136(97)00321-X
+
+[28] D. Miller. Smelter and smith: Iron age metal fabrication technology in southern africa. Journal of Archaeological Science, 29(10):1083 - 1131, 2002. doi: 10.1006/jasc.2001.0758
+
+[29] D. Mitov, B. Tepavčević, V. Stojaković, and I. Bajšanski. Kerf bending strategy for thick planar sheet materials. Nexus Network Journal, 21(1):149-160, 2019.
+
+[30] K. Miura. A note on intrinsic geometry of origami. Research of Pattern Formation, pp. 91-102, 1989.
+
+[31] S. Mueller, S. Im, S. Gurevich, A. Teibrich, L. Pfisterer, F. Guim-bretière, and P. Baudisch. Wireprint: 3d printed previews for fast prototyping. In Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 273-280. ACM, Honolulu, Hawaii, 2014.
+
+[32] S. Mueller, B. Kruck, and P. Baudisch. Laser origami: Laser-cutting 3d objects. Interactions, 21(2):36-41, 2014. doi: 10.1145/2567782
+
+[33] A. Muntoni, M. Livesu, R. Scateni, A. Sheffer, and D. Panozzo. Axis-aligned height-field block decomposition of 3d shapes. ACM Transactions on Graphics, 37(5), 2018. doi: 10.1145/3204458
+
+[34] T. D. Ngo, A. Kashani, G. Imbalzano, K. T. Nguyen, and D. Hui. Additive manufacturing (3d printing): A review of materials, methods, applications and challenges. Composites Part B: Engineering, 143:172 - 196, 2018. doi: 10.1016/j.compositesb.2018.02.012
+
+[35] S. Olberding, S. Soto Ortega, K. Hildebrandt, and J. Steimle. Foldio: Digital fabrication of interactive and shape-changing objects with foldable printed electronics. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST, pp. 223-232. ACM, New York, NY, USA, 2015. doi: 10.1145/2807442. 2807494
+
+[36] N. Padfield, M. Hobye, M. Haldrup, J. Knight, and M. F. Ranten. Creating synergies between traditional crafts and fablab making: Exploring digital mold-making for glassblowing. In Proceedings of the Conference on Creativity and Making in Education, FabLearn Europe, p. 11-20. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3213818.3213821
+
+[37] Person.
+
+[38] A. Rafsanjani and K. Bertoldi. Buckling-induced kirigami. Physical review letters, 118(8):084301, 2017.
+
+[39] R. Ramakers, K. Todi, and K. Luyten. Paperpulse: An integrated approach for embedding electronics in paper designs. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI, pp. 2457-2466. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702487
+
+[40] T. Roumen, J. Shigeyama, J. C. R. Rudolph, F. Grzelka, and P. Baud-isch. Springfit: Joints and mounts that fabricate on any laser cutter. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST, p. 727-738. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332165. 3347930
+
+[41] V. Savage, S. Follmer, J. Li, and B. Hartmann. Makers' marks: Physical markup for designing and fabricating functional objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST '15, pp. 103-108. ACM, New York, NY, USA, 2015. doi: 10.1145/2807442.2807508
+
+[42] V. Savage, X. Zhang, and B. Hartmann. Midas: Fabricating custom capacitive touch sensors to prototype interactive objects. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST, pp. 579-588. ACM, New York, NY, USA, 2012. doi: 10.1145/2380116.2380189
+
+[43] R. A. Sobieszek. Sculpture as the sum of its profiles: François willème and photosculpture in france, 1859-1868. The Art Bulletin, 62(4):617-630, 1980. doi: 10.1080/00043079.1980.10787818
+
+[44] J. Solomon, E. Vouga, M. Wardetzky, and E. Grinspun. Flexible developable surfaces. In Computer Graphics Forum, vol. 31, pp. 1567- 1576. Wiley Online Library, 2012.
+
+[45] A. Spielberg, A. Sample, S. E. Hudson, J. Mankoff, and J. McCann. Rapid: A framework for fabricating low-latency interactive objects with rfid tags. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 5897-5908. ACM, New York, NY, USA, 2016. doi: 10.1145/2858036.2858243
+
+[46] Z. Taylor. Wood benders handbook. Sterling, New York, 2008.
+
+[47] T. Valkeneers, D. Leen, D. Ashbrook, and R. Ramakers. Stackmold: Rapid prototyping of functional multi-material objects with selective levels of surface details. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST, p. 687-699. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332165.3347915
+
+[48] M. Wei and K. Singh. Bend-a-rule: A fabrication-based workflow for 3d planar contour acquisition. In Proceedings of the 1st Annual ACM Symposium on Computational Fabrication, SCF. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/ 3083157.3083164
+
+[49] S. Zarrinmehr, E. Akleman, M. Ettehad, N. Kalantar, and A. Borhani. Kerfing with generalized 2d meander-patterns: conversion of planar rigid panels into locally-flexible panels with stiffness control. In Future Trajectories of Computation in Design - 17th International Conference, pp. 276-293. CUMINCAD, Istanbul, Turkey, 2017.
+
+[50] C. Zheng, E. Y.-L. Do, and J. Budd. Joinery: Parametric joint generation for laser cut assemblies. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, C and C, p. 63-74. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3059454.3059459
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..59d88d5aba47d7040e1685afe7892ecd40918d10
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/6gaOU6UA6pa/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,457 @@
+§ FOLDMOLD: AUTOMATING PAPERCRAFT FOR FAST DIY CASTING OF SCALABLE CURVED SHAPES
+
+Anonymized for review
+
+
+Figure 1: FoldMold enables rapid casting of curved 3D shapes using paper and wax. In this image we show the steps for casting a silicone kitchen gripper, far faster and with less waste than 3D printing it. (1) Create a 3D model of the object positive in Blender. (2) Add joinery and support features using the FoldMold Pattern Builder. (3) Automatically unfold the model into 2D patterns, and cut them from paper. (4) Assemble the mold and pour the intended material. (5) Remove the cast object from the mold.
+
+§ ABSTRACT
+
+Rapid iteration is crucial to effective prototyping; yet making certain objects (large, smoothly curved, and/or of specific material) requires specialized equipment or considerable time. To improve access to casting such objects, we developed FoldMold: a low-cost, simply-resourced and eco-friendly technique for creating scalable, curved mold shapes (any developable surface) with wax-stiffened paper. Starting with a 3D digital shape, we define seams, add bending, joinery and mold-strengthening features, and "unfold" the shape into a 2D pattern, which is then cut, assembled, wax-dipped and cast with materials like silicone, plaster, or ice. To access the concept's full power, we facilitated digital pattern creation with a custom Blender add-on. We assessed FoldMold's viability, first with several molding challenges in which it produced smooth, curved shapes far faster than 3D printing would; then with a small user study that confirmed the usability of the automation. Finally, we describe a range of opportunities for further development.
+
+Index Terms: Human-centered computing; Human-centered computing-Systems and tools for interaction design
+
+§ 1 INTRODUCTION
+
+The practice of many designers, makers, and artists relies on rapid prototyping of physical objects, a process characterized by quick iterative exploration, modeling and construction. One method of bringing designs to life is through casting, wherein the maker pours material into a mold "negative" and lets it set [28]. Casting advantages include diversity of material, ability to place insets, and the possibility of mixing computer-modeled and extemporaneous manual mold construction [36].
+
+A primary downside to casting for rapid prototyping is the time required to build a mold. 3D printing a mold (a common approach today) can take hours even for small objects. For multi-part molds, large models, and failed prints, the latency grows to days. While other mold-making techniques exist (e.g., StackMold [47], Metamolds [4]), they are either limited to geometrically simple shapes such as extrusions or require effort in other ways. Molds of smooth and complexly curved, digitally defined models are hard to achieve in a Do-It-Yourself (DIY) setting except through 3D printing. Iteration is thus expensive, particularly for large sizes and curvy shapes.
+
+Efficient use of materials is a particular challenge for prototype casting. When iterating, we only need one-time-use molds; so both the mold and final object materials should not only be readily available and low-cost, but also minimize non-biodegradable waste - a problem with most 3D printing media [25].
+
+This work was inspired by insights connecting papercraft and computer-aided fabrication. The first was that papercraft and wax can together produce low-cost, eco-friendly, curved molds. Papercraft techniques like origami (paper folding) and kirigami (paper cutting) [12, 18] can produce geometrically complex positive shapes. Because paper is thin, the negative space within can be filled with castable material for new positive objects. Paper can be flexibly bent into smooth curves and shapes. Mold construction time is size invariant. Wax can fix, reinforce, and seal a paper mold's curves. It is biodegradable, easy to work with, melts at low heat, creates a smooth finish, is adhesive when warm, and can be iteratively built up and touched up. Both paper and wax are inexpensive and easy to source.
+
+Secondly, while origami and kirigami shapes are folded from paper sheets, paper pattern pieces can also be joined with woodcraft techniques. Paper has wood-like properties that enable many cutting and assembly methods: it is fibrous, tough, and diverse in stiffness and density. Like wood, paper fibers allow controlled bending through patterned cuts, but unlike wood, its weakness can be exploited for bending, and mold parts are easily broken away after casting.
+
+Thirdly, the complex design of paper and wax molds is highly automatable. Given user-defined vertices, edges, and faces, we can algorithmically compute mold structure, bends, joints and seams, and mold supports - steps which would require expertise and time, especially for complex shapes. With a computational tool, we can make the pattern-creation process very fast, more precise, and reliable, while still allowing a maker's intervention when desired.
+
+FoldMold is a system for rapidly building single-use molds for castable materials out of wax-stiffened paper that blends paper-bending and wood joinery methods (Figure 1). To support complex shapes, we created a computational tool - the FoldMold Pattern Builder (or "Builder") - to automate translation of a 3D model to a 2D pattern with joinery and mold support components. Patterns can be cut with digital support (e.g., lasercut or, at home, with X-acto knife or vinyl cutter [1]).
+
+In this paper, we show that FoldMolds are faster to construct than other moldmaking methods, use readily-available equipment and biodegradable materials, are low-cost, support complex shapes difficult to attain with other methods, and can be used with a variety of casting materials. FoldMold is ideal for custom fabrication and rapid iteration of shapes with these qualities, including soft robotics, wearables, and large objects. FoldMold is valuable for makers without access to expensive, high-speed equipment and industrial materials, or committed to avoiding waste and toxicity.
+
+§ 1.1 OBJECTIVES AND CONTRIBUTIONS
+
+We prioritized three attributes in a mold-making process:
+
+Accessibility: Mold materials should be cheap, accessible, and disposable/biodegradable. The pattern should be easy to cut (e.g., laser or vinyl cutter) and assemble in a typical DIY workshop.
+
+Speed and Outcome: Moldmaking should be fast, support high curve fidelity and fine surface finish, or be a useful compromise of these relative to current rapid 2D-to-3D prototyping practices.
+
+Usability and Customisability: Mold creation and physical assembly should be straightforward for a DIY maker, hiding tedious details, yet enabling them to customize and modify patterns.
+
+To this end, we contribute:
+
+1. The novel approach of saturating digitally-designed papercraft molds with wax to quickly create low-cost, material-efficient, laser-cuttable molds for castable objects;
+
+2. A computational tool that makes it feasible and fast to design complex FoldMolds;
+
+3. A demonstration of diverse process capabilities through three casting examples.
+
+§ 2 RELATED WORK
+
+We ground our approach in literature on prototyping complex object positives, in rapid shape prototyping, casting, papercraft and woodcraft techniques, and computational mold creation.
+
+§ 2.1 RAPID SHAPE PROTOTYPING
+
+§ 2.1.1 ADDITIVE PROTOTYPING
+
+Based on sequentially adding material to create a shape, additive methods are dominated by 3D printing [19] due to platform penetration, slowly growing material choices, precision and resolution, and total-job speed and hands-off process relative to previous methods such as photosculpture [43] and directed light fabrication [27]. 3D printers heat and extrude polymer filaments. Some technologies can achieve high resolution and precision, although at present this may be at the expense of speed, cost, and material options [34].
+
+Rates of contemporary 3D printing are still slow enough (hours to days) to impede quick iteration. As an example of efforts to increase speed, WirePrint modifies the digital 3D model to reflect a mesh version of the object positive [31], but at the cost of creating discontinuous object surfaces. While capturing an object's general shape and size, it sacrifices fidelity.
+
+Other limitations of direct 3D printing of object positives are limited material options and geometry constraints. Its layering process complicates overhangs: they require printed scaffolding, or multipart prints for later reassembly [22]. Papercraft has no problem with overhangs; where mold support is needed FoldMold utilizes paper or cardboard scaffolding stiffened with wax.
+
+§ 2.1.2 SUBTRACTIVE AND 2D-TO-3D PROTOTYPING
+
+Computer Numerical Control (CNC) machining technologies include drills and laser cutters, and lathes and milling machines, which can create 2D and 3D artifacts respectively, all via cutting rather than building up. Although limited to 2D media, laser cutting offers speed and precision at low per-job cost, albeit with a high equipment investment [37]. Because it can be cheaper and faster to fabricate 2D than 3D media, some have sought speed by cutting 2D patterns to be folded or assembled into 3D objects [6, 7, 32].
+
+FlatFitFab [26] and Field-Aligned Mesh Joinery [13] allow the user to create 2D laser cut pieces that, when aligned and assembled, form non-continuous 3D approximations of the object positive - essentially creating the object "skeleton". Other methods (e.g., Joinery [50], SpringFit [40]) utilize laser cutting followed by assembly of 2D cutouts. Joinery supports the creation of continuous, non-curved surfaces joined by a variety of mechanisms. SpringFit introduces the use of unidirectional laser-cut curves joined using stress-spring mechanisms. However, these techniques are for creating object positives and not suitable for casting; material qualities are inherently limited and the joints are not designed to fully seal.
+
+Here, we draw inspiration from these methods which approach physical 3D object construction based on 2D fabrication techniques, and draw on the basic ideas to build continuous sealed object negatives (molds) for casting objects from a variety of materials.
+
+§ 2.2 CASTING IN RAPID PROTOTYPING
+
+An Iron Age technique [28], casting enables object creation through replication (creating a mold from the target object's positive) or from designs that do not physically exist yet (our focus). A particular utility of casting in prototyping is access to a wider range of materials than is afforded by methods like 3D printing, carving or machining - e.g., silicone or plaster.
+
+StackMold is a system for casting multi-material parts that forms molds from stacked laser-cut wood [47]. It incorporates lost-wax cast parts to create cavities for internal structures. While this improves casting speed (especially with thicker layers), the layers create a discretized, "stepped" surface finish which is unsuitable for smoothly curved shapes: prototyping speed is in conflict with surface resolution. Metamolds [4] uses a 3D printed mold to produce a second silicone mold, which is then used to cast objects. The Metamolds software minimizes the number of 3D printed parts to optimize printing time. Silicone molds are good for repeated casts of the same object, but this multi-stage process slows rapid iteration requiring only single-use molds. Further, Metamolds are size-constrained by the 3D printer workspace.
+
+Thus, despite significant progress in rapid molding, fast iteration of large and/or complex shapes is still far from well supported.
+
+§ 2.3 PAPERCRAFT AND WOOD MODELING
+
+Several paper and wood crafting techniques inspired FoldMold.
+
+§ 2.3.1 PAPERCRAFT
+
+Origami involves repeatedly folding a single paper sheet into a 3D shape [11]. Mathematicians have characterized origami geometries [30] as Euclidean constructions [18]. They can achieve astonishing complexity, but at a high cost in labor, dexterity and ingenuity. Kirigami allows paper cutting as well as folding to simplify assembly and access a broader geometric range. Despite the effort, both demonstrate how folding can transform 2D sheets into complex 3D shapes, and that papercraft design can be modeled.
+
+§ 2.3.2 CREATING 2D PAPERCRAFT PATTERNS FOR 3D OBJECTS
+
+Many have sought ways to create foldable patterns and control deformation by discretizing 3D objects. Castle et al. developed a set of transferable rules for folding, cutting, and joining rigid lattice materials [10]. For 3D kirigami structures, specific cuts in flat material can be buckled out of plane by controlled tension on connected ligaments [38]. Research on these papercraft techniques informs cut-and-fold prototyping systems; e.g., LaserOrigami uses a laser cutter to make cuts in a 2D sheet, then melts them into specified bends for a precise 3D object [32]. FoldMold goes beyond this by enabling the use of a wide variety of materials through casting, and by supporting the creation of large, curvy objects.
+
+§ 2.3.3 CONTROLLED BENDING
+
+Wood and other rigid but fibrous materials can be controllably bent with partial cuts by managing cut width, shape, and patterning [46]. Many techniques and designs achieve specific curves: e.g., kerfing, patterns of short through-cuts, can render a different and more continuous curvature than scoring (cutting partway through) [9, 21, 29, 49]. These methods support complex double-curved surfaces [9, 29], stretching [20], and conformation to preexisting curves for measurement [48]. With sturdy 2D materials, they create continuous curves strong enough to structurally reinforce substantial objects [7].
+
+§ 2.3.4 JOINERY
+
+In fine woodwork, wood pieces are cut with geometries that pressure-fit into one another to mechanically strengthen the material bond, which can be further reinforced with glue, screws, or dowels. There are many joint types, varying in ideal material and needed strength. Taking these ideas into prototyping, Joinery provides a parametric joint design tool specifically for laser cutting to create 3D shapes [50]. Joinery has been used in the rapid prototyping literature: Cignoni et al. create a meshed, interlocked structure approximating a positive shape to replicate a 3D solid object [13]. Conversely, SpringFit shows how mechanical joints can lock components of an object firmly in place while minimizing assembly pieces [40].
+
+Our work leverages these papercraft, joinery and modeling techniques to achieve structurally sound, complex and curved shapes from 2D materials by using precise bending and joints.
+
+§ 2.4 COMPUTATIONAL MOLD CREATION
+
+Computational support can make complex geometric tasks more accessible to designers [8, 14, 24, 35, 39, 41, 42]. LASEC allows for simplified production of stretchable circuits through design software and a laser cutter [20]. Some of these tools include software that automates part of the process; others are computationally supported frameworks or design approaches [45].
+
+Designers often begin with digital 3D models of the target object. To cast a 3D model, they must generate its complement (the object negative) and convert it to physical patterns for mold assembly. Software can speed this process: StackMold slices the object negative into laser-cuttable layers [47], while Metamolds helps users optimize silicone molds [4].
+
+Computer graphics yields other approaches to 3D-to-2D mapping. A UV map is a flat representation of the surface of a 3D model; creating one is called UV unwrapping. While [X, Y, Z] coordinates specify an object's position in 3D space, [U, V] coordinates specify a location on the surface of a 3D object. UV unwrapping uses this correspondence to create a 2D pattern for a 3D surface, thus "unwrapping" it [16]. The Least Squares Conformal Maps (LSCM) unwrapping algorithm is implemented in the popular open-source 3D modeling tool Blender [17].
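For a developable surface, unwrapping can preserve all edge lengths exactly. As a toy illustration (not the LSCM algorithm, and not Blender code; the helper name is hypothetical), the side of a regular prism unrolls into a flat strip whose width converges to the cylinder circumference as the face count grows:

```python
import math

def unroll_prism_side(radius, height, n_faces):
    """Unfold the side of a regular n-gon prism (a developable surface)
    into a flat 2D strip of n rectangles; returns (strip_width, strip_height).
    Edge lengths are preserved: each 3D face edge equals its 2D counterpart."""
    face_width = 2 * radius * math.sin(math.pi / n_faces)  # chord length
    return n_faces * face_width, height

w, h = unroll_prism_side(radius=5.0, height=10.0, n_faces=32)
# As n_faces grows, the strip width approaches the circumference 2*pi*r.
```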
+
+Previous work in computer graphics has investigated the decomposition of 3D geometries into geometries that are suitable for CNC cutting [5]. For example, Axis-Aligned Height-Field Block Decomposition of 3D Shapes splits 3D geometries into portions that can be cut with 3-axis CNC milling [33]. D-Charts converts complex 3D meshes into 2D, nearly-developable surfaces [23].
+
+These tools suggested that FoldMold also needed computational support; however, our papercraft-based technology is quite different. In the FoldMold pattern-generation tool, the 3D model of the object positive is "unwrapped" and elements are added so that the 2D patterns can be folded and reassembled into a structurally sound mold. Similar algorithms have been used in other tools, such as the Unwrap function in Fusion 360 [2], but not in a prototyping or mold-making context.
+
+§ 3 THE FOLDMOLD TECHNIQUE
+
+In its entirety, FoldMold is a fast, low-cost and eco-friendly way to cast objects based on a 3D digital positive. It can achieve an identifiable set of geometries (Section 3.1), utilizes a set of mold design features (3.2), and consists of a set of steps (3.3). In Section 4 we describe our custom computational tool (FoldMold Pattern Builder) which makes designing complex FoldMolds feasible and fast.
+
+§ 3.1 FOLDMOLD GEOMETRIES
+
+A flexible piece of paper can be bent into many forms. Termed developable surfaces, they are derivable from a flat surface by folding or bending, but not stretching [44]. Mathematically, such surfaces possess zero Gaussian curvature at every point; that is, at every point on the surface, the surface is not curved in at least one direction. Cylinders and cones are examples of curved developable surfaces, but spheres are not: every point on a sphere is curved in all directions.
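The zero-Gaussian-curvature condition has a simple discrete analogue: at an interior mesh vertex, the angle deficit (2π minus the sum of the incident face angles) is zero exactly when the vertex neighborhood can be flattened. A minimal sketch of this check (hypothetical helper names, not part of the Builder):

```python
import math

def angle_at(v, a, b):
    """Angle at vertex v in the triangle (v, a, b), in radians."""
    av = [a[i] - v[i] for i in range(3)]
    bv = [b[i] - v[i] for i in range(3)]
    dot = sum(av[i] * bv[i] for i in range(3))
    na = math.sqrt(sum(x * x for x in av))
    nb = math.sqrt(sum(x * x for x in bv))
    return math.acos(dot / (na * nb))

def angle_deficit(v, ring):
    """2*pi minus the fan of face angles around interior vertex v."""
    total = sum(angle_at(v, ring[i], ring[(i + 1) % len(ring)])
                for i in range(len(ring)))
    return 2 * math.pi - total

# Interior vertex of a flat sheet: deficit ~ 0, so it is developable.
flat = angle_deficit((0, 0, 0), [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])

# Apex of a pyramid (same ring, vertex lifted): positive deficit, not flat.
apex = angle_deficit((0, 0, 1), [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])
```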
+
+FoldMold can be used for any developable surface or connected set of them. It can achieve a non-developable surface by approximating it as a set of connected, individually developable surfaces (islands), which are then joined together. A single developable surface may also be divided into multiple islands, e.g., for ease of pattern construction or use. Islands comprise the basic shapes of a 2D FoldMold pattern (Figure 1, Step 3).
+
+FoldMold geometries can have several kinds of edges. Joints are seams between islands. Folds (sharp, creased bends) and smooth curves are both controlled via scoring, i.e., cuts partway through a material, possible with a lasercutter or handheld knife.
+
+A FoldMold island can have multiple faces which are equivalent to their 3D digital versions' polygons, i.e., the polygon or face resolution can be adjusted to increase surface smoothness. Faces are delineated by any type of edge, whether cut or scored.
+
+A strength of the FoldMold technique is its ability to construct large geometries. The size of a FoldMold geometry is characterized by three factors.
+
+Size/time scaling: While popular 3D printers accommodate objects of 14-28 cm (major dimension), build time grows steeply with object size, roughly in proportion to material volume. In contrast, FoldMold operations (2D cutting and folding) scale linearly or better with object size (Table 2).
+
+Weight of cast material: Paper is flexible and may deform under the weight of large objects. As we show in Section 5.2, we tested this technique with a large object cast from plaster (3.64 kg) and did not notice visible deformation. As objects get even larger and heavier, they will eventually require added support.
+
+Cutter specs: The cutter bed size limits the size of each island in the geometry. Additionally, the ability of the cutter to accommodate material thickness/stiffness is another limiting factor.
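The size/time scaling argument can be made concrete with a deliberately crude model (arbitrary constants, not measured data): if deposition time tracks material volume while cutting time tracks total cut length, the speed advantage of 2D fabrication grows quadratically with scale.

```python
def print_time(scale, k=1.0):
    """Toy model: deposition time grows with material volume (~ scale**3)."""
    return k * scale ** 3

def foldmold_time(scale, c=1.0):
    """Toy model: cutting/scoring time grows with total cut length (~ scale)."""
    return c * scale

# Doubling object size quadruples the relative advantage of 2D cutting.
ratios = [print_time(s) / foldmold_time(s) for s in (1, 2, 4)]
```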
+
+§ 3.2 FOLDMOLD FEATURES
+
+FoldMold produces precise, curved, but sturdy molds from paper via computationally managed bending, joinery and mold supports. Here we discuss the features of FoldMold.
+
+§ SCORE-CONTROLLED BENDING FOR 3D SHAPES FROM 2D PATTERNS
+
+Folds (Sharp Creases): Manual folding can produce uneven or warped bends, especially for thick or dense materials. To guide a sharp fold or crease, we score the material on the outside of the fold line to relieve strain and improve fold precision. Score depth influences the bending angle, but cuts that are too deep reduce structural strength along the fold. We empirically found that cutting through about 50% of the material thickness is a good compromise for most folds and paper materials.
+
+Smooth Unidirectional Curves: As is well-known by foam-core modelmakers, repetition of score lines can precisely control a curve. For example, as we add lengthwise scores on a cylinder's long axis, its cross section approaches a circle; non-uniform spacing can generate an ovoid or U-shape. There is a trade-off between curve continuity, cutting time and structural integrity. Designers can adjust scoring density - the frequency of score lines - based on specific needs; e.g., speed often rules in early prototyping stages, replaced by quality as the project reaches completion. We can smooth some discretized polygonization by filling corners and edges with wax.
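+The scoring-density trade-off above can be made concrete with the standard chord-sagitta relation: a circle of radius r approximated by n evenly spaced facets deviates from the true curve by at most r(1 - cos(π/n)). The following sketch is illustrative (the function name and tolerances are ours, not Builder's):
+
+```python
+import math
+
+def scores_for_circle(radius_mm: float, max_deviation_mm: float) -> int:
+    """Smallest number of evenly spaced score lines (facets) such that
+    the flat facets deviate from the ideal circle by no more than
+    max_deviation_mm (the sagitta of each chord)."""
+    n = 3
+    while radius_mm * (1 - math.cos(math.pi / n)) > max_deviation_mm:
+        n += 1
+    return n
+```
+
+For a 50 mm radius curve, a 0.5 mm tolerance needs 23 facets while a loose 5 mm tolerance needs only 7 - the kind of speed-versus-quality knob described above.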
+
+§ JOINERY TO ATTACH EDGES AND ASSEMBLIES
+
+Joints must (1) seal seams, (2) maintain interior smoothness, for casting surface finish, and (3) support manual assembly. We implemented sawtooth joints, pins, and glue tabs (Figure 2).
+
+
+Figure 2: FoldMold joint types (A) Sawtooth and (B) pin joints utilize pressure fitting for secure joints and to maintain alignment. (C) Glue tabs rely on an adhesive.
+
+Sawtooth Joints: Pressure fits create a tight seal, with gaps slightly smaller than the teeth and held by friction, enabled by paper's compressibility (as shown in Fig. 2A). To ease insertion, we put gentle guiding tapers on the teeth, with notches to prevent pulling out. Best for straight, perpendicular seams, these joints can face outward from the model for interior surface integrity.
+
+Pin Joints: Small tabs are pushed through slightly undersized slots; a flange slightly wider than the corresponding slot ensures a pressurized, locking fit (shown in Fig. 2B). Pin joints are ideal for curved seams, which other techniques would discretize: e.g., a circle of slots on a flat base can smoothly constrain a cylinder with pins on its bottom circumference. Tapers and notches on the pins facilitate assembly.
+
+Glue Tabs: Fast to cut and easy to assemble, two flat surfaces are joined with adhesive (Fig. 2C). Overlapping the tabs (as in typical box construction) would create an interior discontinuity. Instead, we bend both tabs outwards from the model and paste them together, like the seam of an inside-out garment. Thus accessible, they can be manipulated to reduce mismatch while preserving interior surface quality. We have used commonly available, multi-purpose white glue with a 20-30 minute drying time. Assembly time can be greatly reduced by clamping the drying tabs.
+
+Ribbing for Support: Wax stiffening greatly strengthens the paper. In some cases, e.g., for dense casting materials such as plaster, or large volumes, more strength may be needed to prevent deformation. External support can also help to maintain mold element registration (Figure 1, Steps 2 and 4).
+
+§ 3.3 CONSTRUCTING A FOLDMOLD MODEL
+
+Constructing a mold using the FoldMold method can be described in five steps, illustrated in Figure 1. Section 4 describes how the FoldMold Pattern Builder ("Builder" hereafter) assists the process.
+
+Step 1: Create or import a 3D model. The designer starts by modelling or importing/editing the 3D object positive in Blender.
+
+§ STEP 2: DESIGN THE FOLDMOLD - ITERATIVELY ADD AND REFINE SEAMS, BENDS, JOINERY AND SUPPORTS:
+
+Substeps: Conceptually, the FoldMold design-stage subtasks are to (a) indicate desired joint lines (island boundaries) on the 3D model; (b) specify the joinery type for each joint; (c) adjust face resolution to achieve the desired curvature; (d) specify scoring at internal (non-joint) face edges to control bending within islands; and (e) add ribbing for mold support. Each of these tasks is supported in the FoldMold Pattern Builder tool's interface (below). Any of these substeps may be repeated during digital design, or revisited after the mold has been physically assembled to adjust the design. This may be especially important for novice users or for challenging projects.
+
+Automation and Intervention: Builder can do Step 2 fully automatically, but because its unwrapping algorithm does not consider all factors of the molding process, such as the preferred building process or seam identification, results may sometimes be improved by maker intervention. As examples, one can intervene at (a) by constraining joint lines, then letting Builder figure out scoring (c). By default, Builder uses glue tabs for joints, but we can step in at (b) upon realizing that a pin joint will work better than glue tabs for a circular seam such as a cup bottom. Builder's default ribbing is 3 ribs along each of the X- and Y-axes, and 2 Z-axis ribs holding them in place. The maker can intervene to modify ribbing placement frequency, and to position and orient individual ribbing pieces to best support a given geometry.
+
+Step 3: Unfold and cut the FoldMold design. Builder unfolds the object's geometry into a 2D mold pattern that is cutter-ready, exported as a PDF file. The maker cuts the 2D patterns from paper by sending the PDF to the cutter - e.g., a laser cutter, vinyl cutter, or even laser-printing the patterns and cutting them with an X-Acto knife or a pair of scissors.
+
+Step 4: Assemble, wax and cast. The FoldMold physical construction steps are shown in Figure 3. The maker assembles the cut patterns into a 3D mold by creasing and bending on fold lines and joining at seams according to the joinery method.
+
+To build mold strength, the maker repeatedly dips it in melted wax (paraffin has a melting point of 46-68°C). As the wax hardens, it stiffens the paper, "locking in" the mold's shape. For very fine areas, dipping may obscure desired detail or dull sharp angles; wax can be added with a small brush, and excess can be removed or surface detail emphasized.
+
+Curable casting materials (e.g., silicone or epoxy resin), or materials that dry (plaster, concrete) are simply prepared and poured.
+
+Step 5: Set and remove Mold. After setting for the time dictated by the casting material, the mold is easily taken apart by gently tearing the paper and peeling it away from the cast object. Any excess wax crumbs that stick to the object can be mechanically removed or melted away with a warm tool.
+
+§ 4 COMPUTATIONAL TOOL: FOLDMOLD PATTERN BUILDER
+
+A designer should be able to focus effort on the target object rather than on its mold, and FoldMold-making requires complex and laborious spatial thinking, especially for complex shapes. Fortunately, these operations are mathematically calculable, and features can be placed using heuristics. To speed up the mold-making process, our computational tool - the FoldMold Pattern Builder - automates the generation of laser-cuttable 2D patterns from a 3D positive while allowing designer intervention. We describe its implementation and usability evaluation.
+
+§ 4.1 IMPLEMENTATION
+
+We wrote Builder as a custom Blender add-on, using Blender's Python API.
+
+§ 4.1.1 USER INTERFACE
+
+We created Builder's user interface to reflect primary FoldMold design activities, as described in Section 3.3. The interface's panels shown in Figure 4 map to Steps 2a-d (panel A, Mold Prep), Step 2e (panel B, Ribbing Creation) and Step 3 (panel C, Mold Unfolding).
+
+Builder's user interface is integrated into the Blender user interface, following the same style conventions as the rest of the software in order to reduce the learning curve for novice users who may already be familiar with 3D modeling programs. We tested multiple different configurations of the panels before finding that this grouping of options was the most intuitive.
+
+
+Figure 3: FoldMold physical construction (1) Assemble the mold: in this case, tabs are glued and dried. (2) Wax: the mold is dipped in wax to strengthen and seal it, preparing it for casting. (3) Cast: the casting material, in this case plaster, is poured into the mold and left to harden.
+
+
+Figure 4: The FoldMold Pattern Builder's user interface has panels based on three design activities, each accessed from a menu bar on the side of the Blender screen. Target model edges and vertices can be first selected while in Blender "edit" mode. (A) Mold Prep Panel: Specify material, apply seams and joinery types, and create scores. (B) Ribbing Panel: Generate and conform ribbing elements, with user-specification of frequency of ribbing elements. (C) Unfold Panel: Export the mold into 2D patterns.
+
+§ 4.1.2 BENDING
+
+In contrast to cut and joined seams, bending edges (for both sharp creases and smooth curves; Section 3.1) remain connected after unwrapping and need no joinery; however, they need to be scored.
+
+
+Figure 5: Creating scores with the FoldMold Pattern Builder. (1) Select faces of the object that are to be scored. (2) Choose axis around which scores should be drawn. (3) Define scoring density. (4) Apply scores to faces.
+
+Builder automatically detects folds as non-cutting edges that demarcate faces and, on its own, would generate a scoring cut pattern for them. As noted in Section 3.3, the user can intervene in a number of ways. Scoring can be applied by following the steps described in Figure 5: (1) face selection, (2) axis choice from Cartesian options, (3) assigning score density (polygon resolution), and finally (4) applying scores to the faces with the press of a button.
+
+A finely resolved curve can be achieved by adjusting the score density along faces in the 3D object. If a score density has been set (Figure 5, Step 3), Builder creates additional fold lines across those faces beyond its default.
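+Concretely, a score density of d subdivides a face with d additional, evenly spaced fold lines. This toy sketch (ours, not Builder's mesh-based implementation) computes their fractional positions across a face:
+
+```python
+def extra_fold_fractions(density: int):
+    """Fractional positions (between 0 and 1) across a face at which
+    additional fold lines are placed for a given score density."""
+    return [i / (density + 1) for i in range(1, density + 1)]
+```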
+
+To instruct the cutter how to handle them, Builder assigns colors to cut and fold lines (red and green respectively; Figure 1, Steps 2-3). In the exported PDF, this is a coded indicator to the CNC cutter to apply different power settings when cutting, recognized in the machine's color settings.
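+The color convention can be sketched as a tiny exporter. This illustrative snippet (not Builder's code, which exports PDF) emits an SVG with full-power cut lines in red and reduced-power score lines in green - the mapping a cutter's color settings would then pick up:
+
+```python
+def export_pattern_svg(cut_lines, score_lines, size=(100, 100)):
+    """Emit an SVG string with cut lines stroked red and score/fold
+    lines stroked green. Each line is a ((x1, y1), (x2, y2)) tuple."""
+    def strokes(lines, color):
+        return "".join(
+            f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
+            f'stroke="{color}" stroke-width="0.2"/>'
+            for (x1, y1), (x2, y2) in lines
+        )
+    w, h = size
+    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
+            + strokes(cut_lines, "red")
+            + strokes(score_lines, "green")
+            + "</svg>")
+```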
+
+§ 4.1.3 JOINERY
+
+To reassemble the islands created during UV-unwrapping into a 3D mold, Builder defines joinery features (sawtooths, pins, glue tabs) along the 2D cut-outs' mating edges in repeating, aligned joinery sequences. All three joint types can be included in a given model.
+
+Builder can assign joinery to cut edges automatically. Its default joint type is a glue tab, the easiest to cut and assemble. The user can override this in the Builder interface (Figure 6) by instead selecting an edge, then choosing and applying a joinery type.
+
+
+Figure 6: Creating joinery with FoldMold Builder. (1) Select edges of the object to be joined. (2) Choose a joinery type, or default to "Auto" (glue tabs). (3) Apply joinery to the edge.
+
+Builder implements this functionality as follows. The basic components of each joinery sequence are referred to as tiles, which are designs stored as points in SVG files. Builder defines joinery sequences from several tiles, first parsing their files. This system is easily extended to more joint types simply by adding new SVG images and combining them with existing ones in new ways.
+
+As an example, a sawtooth joinery sequence is composed of tooth and gap tiles. These are arranged in alternation along one edge, and in an inverted placement along the mating edge such that the two sets of features fit together (i.e., register). Builder generates a unique sequence for each mating edge pair because the number of tiles placed must correspond to the length of the edge, and matching edges must register, e.g., with pin/holes aligned.
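+The registration logic can be sketched as follows (names are illustrative, not from Builder's source): the tile count is fixed by the edge length, and the mating edge receives the inverted sequence so that teeth meet gaps:
+
+```python
+def sawtooth_sequences(edge_len: float, tile_w: float):
+    """Build complementary tooth/gap sequences for a mating edge pair.
+    Side B inverts side A so the two edges register when pressed
+    together."""
+    n = max(1, int(edge_len // tile_w))
+    side_a = ["tooth" if i % 2 == 0 else "gap" for i in range(n)]
+    side_b = ["gap" if t == "tooth" else "tooth" for t in side_a]
+    return side_a, side_b
+```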
+
+Once created, a joinery sequence must be rotated and positioned along its mating edges. Builder applies transformations (rotation and translation) to each tile sequence to align it with its target edge, and positions it between the edge's start and end vertices.
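+This placement is a standard 2D rigid transform. Assuming tiles are authored along the +X axis starting at the origin (an assumption of this sketch, not necessarily Builder's convention), each tile point is rotated by the edge's angle and translated to its start vertex:
+
+```python
+import math
+
+def place_along_edge(edge_start, edge_end, tile_pts):
+    """Rotate and translate tile points so they run along the edge
+    from edge_start to edge_end."""
+    (x0, y0), (x1, y1) = edge_start, edge_end
+    ang = math.atan2(y1 - y0, x1 - x0)
+    c, s = math.cos(ang), math.sin(ang)
+    return [(x0 + c * px - s * py, y0 + s * px + c * py)
+            for px, py in tile_pts]
+```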
+
+
+Figure 7: Creating ribbing with the FoldMold Pattern Builder. (1) Set number of slices to add along each axis. (2) Add ribbing to object and optionally transform slices around object for maximal support. (3) Select "Conform Ribbing" to finalize the ribbing shape.
+
+§ 4.1.4 RIBBING
+
+Builder defines ribbing along three axes (Figure 7) for maximal support and stability. X- and Y-axis ribs slot together, supporting the mold, while Z ribs slot around and register the XY ribbing sheets.
+
+It is impossible to physically assemble ribbing that fully encloses a mold, as the mold would have to pass through it. Builder splits each ribbing sheet in half, then "conforms" it by clipping the ribbing sheets at the mold surface - performing a boolean difference operation between each ribbing sheet and the mold. In assembly, the user joins the halves to surround the mold.
+
+Within the Builder interface, the user can modify the default ribbing by choosing and applying a ribbing density in terms of slices to be generated by axis (Figure 7). The ribs can then be manipulated (moved, rotated, scaled) within the Blender interface to maximise their support of the object, then conformed with a button click.
+
+§ 4.1.5 UV UNWRAPPING
+
+Conversion of the 3D model into a flat 2D layout is automated through UV Unwrapping (Section 2.4).
+
+At this stage, the Blender mesh object (the 3D model) is converted into a 2D "unfolded" pattern. Our implementation draws from the "Export Paper Model from Blender" add-on [15], from which we use the UV unwrapping algorithm, which employs Least Squares Conformal Mapping (LSCM). Initially, the 3D model is processed as a set of edges, faces, and vertices. We then reorient the faces of the 3D object as if unfolded onto a 2D plane, then alter the edges, faces, and vertices to be in a UV coordinate space.
+
+Unwrapping delivers a set of islands (Section 3.1) - themselves fold-connected faces delineated by joint edges. While these seams are automatically generated during unwrapping, they can optionally be user-defined through Builder's Unfold panel (Figure 4).
+
+Each island has a bounding box. If this exceeds page size (set by media or cutter workspace), it will be rotated to better fit; failing that, it will trigger an oversize error. The user can then scale the 3D model or define more seams, for more but smaller islands.
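+The fit check reduces to a bounding-box comparison with one rotation fallback. A simplified sketch (Builder may additionally try intermediate rotations):
+
+```python
+def page_fit(island_w, island_h, page_w, page_h):
+    """Classify an island bounding box against the cutter page: it fits
+    as-is, fits after a 90-degree rotation, or is oversize (requiring
+    the user to scale the model or define more seams)."""
+    if island_w <= page_w and island_h <= page_h:
+        return "ok"
+    if island_h <= page_w and island_w <= page_h:
+        return "rotate"
+    return "oversize"
+```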
+
+§ 4.2 USABILITY REVIEW OF FOLDMOLD PATTERN BUILDER TOOL
+
+We conducted a small (n = 3) user study for preliminary insight into designers' expectations and experiences with Builder.
+
+§ 4.2.1 METHOD
+
+We recruited three participants (all male Computer Science graduate students whose research related to 3D modeling, chosen for their familiarity with relevant software). P2 was moderately experienced with Blender; P1 had used Blender, but was not experienced with it; and P3 had never used Blender, but had extensive experience with similar software (3D Studio Max). Conducted over Zoom, sessions took 45-60 min, with the participants accessing Blender and the Builder add-on via Zoom remote control while the researcher recorded the session. Participants were compensated $15.
+
+We introduced each participant to the FoldMold technique, demonstrating how to design joints, bends, and ribbing for various geometries. The researcher walked the participant through a Builder tutorial with a simple practice object (a cube) to design a mold for, and answered questions. They then estimated how long they would take to digitally design a mold for the object in Figure 9, before actually designing and exporting the mold pattern for the object using Builder. The session finished with a short interview.
+
+Table 1: Mold design time: participant expectations vs. actual time for mold design.
+
+| Participant | Estimated 3D-printable mold | Estimated FoldMold (no Builder) | Estimated FoldMold (Builder) | Actual FoldMold (Builder) |
+| --- | --- | --- | --- | --- |
+| P1 | several hours | most of a day | 2 min | 5 min 40 s |
+| P2 | 30-60 min | several hours | a few minutes | 5 min 10 s |
+| P3 | 30-40 min | 3 hours | 2 min | 9 min 30 s |
+
+§ 4.2.2 RESULTS
+
+We review participants' qualitative and quantitative responses to our three questions.
+
+§ 1. TIME: HOW MUCH TIME DO USERS EXPECT TO AND ACTUALLY SPEND ON MOLD DESIGN?
+
+All participants predicted Builder would be much faster (2 min or a few minutes) than either designing a 3D-printable mold or manually creating a FoldMold design (30 min to a day) (Table 1). Their actual recorded Builder-facilitated times were under 10 minutes (average 6:47). P3, with previous casting experience, iterated on their original design with considerations of manual assembly; their longer time resulted in a slightly more easily assembled mold. While P1 and P2 had no previous mold-making experience, Builder successfully guided them through the creation of a simple mold.
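+The reported average follows from the recorded times in Table 1; as a quick arithmetic check:
+
+```python
+# Builder-facilitated design times from Table 1, in seconds.
+times_s = {"P1": 5 * 60 + 40, "P2": 5 * 60 + 10, "P3": 9 * 60 + 30}
+avg_s = sum(times_s.values()) / len(times_s)
+minutes, seconds = divmod(round(avg_s), 60)  # 6 min 47 s
+```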
+
+§ 2. OUTCOME: COULD THEY CUSTOMIZE; AND COULD OUTCOME CONTROL BE IMPROVED?
+
+All participants reported good outcome control, but offered three possibilities for improvement.
+
+Joinery density: Participants tended to select all edges in a curve, applying a joint type to the entire selection. P2 was interested in selecting a set of edges and applying joints to, e.g., every third edge, to prevent overly dense joints on a scored object and consequent assembly complication.
+
+Mold material combinations: While Builder allows users to select or define a mold material type (e.g., chipboard or cardboard), they would have liked to indicate multiple materials for a single mold. P3 attempted to define different material settings for different object faces, to accommodate regions which needed flexibility (thin bendy paper) versus strength (thick and dense).
+
+Cut positioning: When Builder currently exports mold patterns, it arranges pieces to maximize paper usage. P1 would have valued grouping pieces based on relationship or assembly order.
+
+§ 3. PROBLEMS: WHAT OBSTACLES WERE ENCOUNTERED?
+
+While participants were generally positive, there were instances where transparency could have been better. P3 was unsure of how Builder would automate mold design without user input, and had to do a trial export to learn it. P1 and P2 asked for warnings when their design choices would lead to issues with the mold or cast object.
+
+§ 5 DEMONSTRATIONS
+
+To demonstrate FoldMold performance in our goals of curvature, large scale, and deterministic outcome, we designed molds for three objects using the FoldMold Pattern Builder and built them in a home workshop, DIY setting. We used a Silhouette Cameo 4 vinyl cutter [1] to cut patterns onto chipboard paper, which we then assembled, dipped in paraffin wax, and cast. We purchased chipboard from an art store ($2.20 per 35x45 in sheet), and paraffin from a grocery store at ~$10 per box.
+
+
+Figure 8: Left: the 3D model of the planter. Right: the physical planter cast with plaster and a plant inserted.
+
+In Table 2 we compare FoldMold construction times for each demo object (from digital design to de-molding, not including material curing) to the time it would take to 3D print the object positive, and the time it would take to 3D print a mold (negative) for casting the object. Mold design and construction for all objects were done by the authors, with their relative expertise with this new technique. The times for each mold construction are taken from a single build. FoldMold achieves much faster build times, especially as the model size increases.
+
+§ 5.1 CURVATURE
+
+To demonstrate FoldMold curvature and complexity performance, we chose a heat-protective silicone kitchen grip with multiple curvature axes and overhangs which make it harder to 3D print and should also challenge a fold-based technique.
+
+Figure 1 shows the steps for building a heat protective silicone kitchen gripper, beginning by (1) modelling the geometry in Blender.
+
+We (2) scored curved areas, marked mold seams and set joinery types using Builder. The model's varying surface topology indicated a mix of joinery types. For curved areas we chose pin joints, and for all non-curved areas we used glue tabs to keep the interior of the seams flat and smooth. In ribbing design, we chose a ribbing density of one slice per axis in order to keep the inner and outer edges of the cavities registered, without requiring very much support.
+
+After (3) exporting the mold layout and cutting it from paper using our vinyl cutter, we (4) assembled the mold, dipped it in wax, and poured the silicone. Once the silicone had cured, we (5) removed the cast object from the mold.
+
+Mold materials for the gripper mold (excluding casting material and vinyl cutter) cost ~$4.50.
+
+§ 5.2 SCALE
+
+A FoldMold strength is creating large molds (Section 3.1) without the speed-size tradeoff common with other rapid prototyping techniques. We demonstrate this by casting a planter that measures 18.8 cm in total height and 17.8 cm in diameter, with an intricate angular outer surface and a hollow interior to allow for a plant to be inserted (Figure 8). We used plaster for strength.
+
+Mold creation, shown in Figure 3, was similar to Figure 1, with minor adjustments. Due to its angular geometry, the mold did not need to be scored. The long, straight edges could be largely connected using glue tabs and adequately secured with wax. Planter mold materials cost ~$5.50.
+
+§ 5.3 VARIABILITY
+
+While rapid-prototyping workflows do not usually involve multiple re-casts of the same object, we wanted to test the extent to which the output is deterministic. In early prototyping stages, it can be beneficial to introduce some variability as a catalyst to ideation and inspiration, whereas in later stages of prototyping, higher determinism is useful as the design approaches completion.
+
+We tested FoldMold's variability by making three cups from the same mold pattern and casting them with ice (Figure 9), following a similar process to that of Figure 1. Due to these molds' small size, ribbing was not needed. The cups' cylindrical geometry led us to use pin joints around the top and bottom, with glue tabs connecting the sides. Table 5 shows the dimensions of each cast cup. Mold materials for each cup cost ~$2.
+
+
+Figure 9: Left: the 3D model of the drinking cup. Right: three physical cups cast in ice.
+
+§ 6 DISCUSSION
+
+We review progress towards our goals of accessibility, performance, usability and customisability.
+
+§ 6.1 ACCESSIBILITY: RESOURCE REQUIREMENTS, COST, ECOLOGICAL LOAD, VERSATILITY
+
+We set out to establish a process that was not just fast, but could be done in a home kitchen (many makers' "pandemic workshop") with readily available, low-cost materials and without toxic waste.
+
+Paper and wax materials together cost $3-$10 per model at the scales demonstrated here, and are easy to source in everyday consumer businesses. Other costs to this project include a computer to design molds, a cutter and casting material. The latter is highly versatile; FoldMold can potentially cast anything that sets at a temperature low enough to not melt the wax, including many food-safe items (we have tried chocolate and gelatin as well as ice).
+
+The disposable mold is biodegradable. We found the mold making materials easy to cut and assemble using a vinyl cutter (a small consumer CNC device). While we could not demonstrate laser-cut examples due to COVID-19 access restrictions, laser-cutters are common in staffed school and community workshops; although more expensive, they offer higher precision and speed.
+
+§ 6.2 SPEED AND OUTCOME
+
+We targeted high creation speed of single-use molds suitable for diverse casting materials in an accessible setting. We compared FoldMold with the go-to method of 3D printing (as opposed to other DIY casting methods like StackMold) because it can also achieve the geometries we sought.
+
+We found FoldMold build-times to be extremely competitive with 3D printing (Table 2), and that the process is capable of a highly interesting range of shape and scale at a fidelity level and surface quality that makes it a viable casting alternative. With 3D printing, a maker faces fewer steps but will wait longer for their mold or positive to print, particularly for larger objects. Our FoldMold planter took 2.5 hours to cut, assemble, and dip in wax. This would have taken a 3D printer 48 hours for a 3D positive and 56 hours for the mold.
+
+Compared to 3D printing an object positive or mold, FoldMold requires a more hands-on approach; and maker skill (mold design optimizations and mold-craft "tricks") can improve results. However, users already find the current process straightforward.
+
+Our wax-stiffened paper approach, combined with the manual assembly that this process affords, has proven a rewarding combination. Beyond efficiency, mold-handling and shape "tweaking" are an opportunity for spontaneous, fine-grained control over the final geometry beyond what is captured in the digital model. Techniques that remove material, such as scoring, may leave the surface finish with unwanted ridges; wax dipping prevents this, filling and smoothing the cuts. Finally, wax-soaked paper is a convenient non-stick surface that is easy to remove.
+
+Table 2: Mold making times (Silhouette Cameo 4 Vinyl Cutter [1]). Mold making times exclude curing. 3D printing estimates were generated by the Cura Lulzbot 3D printing software [3] at 100 mm/s printing speed and 1.05 g/cm³ fill density. We estimate that if laser-cut, FoldMold cuts would be 2-4 times faster.
+
+| Demo | Digital Design | Cutting | Mold Prep & Casting | De-molding | Total FoldMold | 3D Print, Positive | 3D Print, Mold |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Kitchen Grip | 10 min | 41 min | 2 hours | 1 min | 2 h 52 min | 21 h 7 min | 41 h 13 min |
+| Planter | 10 min | 43 min | 1 h 37 min | 1 min | 2 h 31 min | 47 h 38 min | 56 h 12 min |
+| Cup | 5 min | 22 min | 36 min | 5 min | 1 h 8 min | 5 h 20 min | 6 h 56 min |
+
+Table 3: Comparing dimensions of digital and cast grips
+
+| Prototype | Height | Width | Depth |
+| --- | --- | --- | --- |
+| 3D model | 11 cm | 17 cm | 10 cm |
+| Silicone casting | 11.5 cm | 17.1 cm | 10.5 cm |
+
+Table 4: Comparing dimensions of digital and cast planters
+
+| Prototype | Height | Diameter | Depth |
+| --- | --- | --- | --- |
+| 3D model | 19 cm | 18 cm | 15 cm |
+| Plaster casting | 18.8 cm | 17.8 cm | 15.2 cm |
+
+We aimed to support the creation of highly curved surfaces. The use of computational support removed most limits, and within the space of one-directional curves we are not aware of anything that FoldMold can't build at some scale. For non-developable surfaces (entirely or as a part of a hybrid mold) or highly precise geometries, 3D printing may be more suitable.
+
+Finally, we were pleased by FoldMold's versatility, not only in casting material but in adaptability of the method itself. A mold can be adjusted to trade off precision for construction time and material use (more faces and structural elements). Many items can be completely hand-made, albeit more slowly, or the process can be boosted with more powerful tools. We foresee that this technique could be adjusted within itself (e.g., to support multiple paper weights within a FoldMold, as per a study participant's suggestion) but also combined easily with other complementary techniques.
+
+§ 6.3 USABILITY AND CUSTOMISABILITY
+
+FoldMold's mold design process is facilitated by the FoldMold Pattern Builder. In our user study, participants designed one of our demonstration molds in an average of 6 min 47 s; based on participants' well-informed estimates, a mold of the same shape would have taken several hours to design manually. Cutting down on mold design time is a major benefit of FoldMold.
+
+Alongside Builder's ability to quickly create molds, we aimed to balance user control and tool automation. Our participants were able to digitally customize their molds to assign specific joint types, materials, and structural supports. While customization allows the user to design a mold specific to their making needs, the tool also offloads intricate design processes by automating the 2D cut patterns of joinery, scoring, and ribbing. Based on participant responses, a desirable adaptation of the tool would account for customizations such as handling different paper material types in one mold or mixed casting materials (e.g., silicone and plaster).
+
+Table 5: Comparing dimensions of three ice cups
+
+| Prototype | Height | Diameter | Thickness | Depth | Capacity |
+| --- | --- | --- | --- | --- | --- |
+| 3D model | 7.6 cm | 7.3 cm | 1 cm | 4.8 cm | 110 mL |
+| 1 | 7.3 cm | 6.9 cm | 0.8 cm | 4.5 cm | 91 mL |
+| 2 | 7.5 cm | 6.9 cm | 0.7 cm | 4.8 cm | 93 mL |
+| 3 | 7.0 cm | 7.1 cm | 1.0 cm | 4.7 cm | 90 mL |
+
+§ 7 CONCLUSIONS AND FUTURE WORK
+
+In this paper, we contributed a novel paper and wax mold-making technique that allows 3D molds to be constructed from 2D cut patterns. We demonstrated FoldMold's capabilities through demonstrations of curvature, scale, precision and repeatability. We developed the FoldMold Pattern Builder, a computational tool that automatically generates 2D mold patterns from 3D objects with optional designer control, dramatically reducing design time from hours or days to minutes. We conducted a small user study to investigate how Builder can better support designers, and found that increasing user control over joinery density, material combinations, and island positioning would be helpful.
+
+Here, we discuss the directions that future work should explore.
+
+Quasi-Developable Surfaces: FoldMold currently implements only straight-line bends, but like origami, it could employ methods like controlled buckling to achieve curved 3D fold lines, which would allow it to achieve a larger space of geometries. Relatedly, kerfing is a woodworking technique that allows flat materials to be controllably bent in two dimensions (as opposed to the one dimension supported by scoring) via intricate cut-away patterning [9, 21, 29, 49]. In principle this is similar to scoring; however, because kerfing removes material, it can stretch as well as bend, attaining quasi-developable surfaces. Future work should explore how buckling and kerfing can be incorporated into FoldMold to support more complex curvatures.
+
+Assembly Optimizations: As FoldMolds get more complicated, their hands-on assembly becomes more challenging. Future work should explore ways to computationally optimize the components of the mold for faster assembly. For example, joinery can be minimized and placement of seams optimized; model geometries themselves can be simplified for a speed-fidelity trade-off useful at early, "draft quality" prototyping stages.
+
+Multi-Material Molds and Casts and Interesting Inclusions: We can investigate material combinations in two ways. First, multiple molding materials (i.e., different paper weights) can theoretically be used together for molds that are very flexible in certain areas and very strong in others. Second, certain prototypes may require multiple casting materials in the same mold, and this would influence how the 2D mold pieces fit together and the needed support structures. This can potentially be expanded to support the prototyping of objects with embedded electronic components like sensors and actuators for applications such as soft robotics and wearable electronics.
+
+§ ACKNOWLEDGMENTS
+
+Anonymized for review.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..aba17caf0c9db613953825a387ca7f02e894e9fe
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,371 @@
+# Multi-level Correspondence via Graph Kernels for Editing Vector Graphics Designs
+
+Category: Research
+
+
+
+Figure 1: Graphic designs often contain repeating sets of elements with similar structure. We introduce an algorithm that automatically computes this shared structure, which enables graphical edits to be transferred from a set of source elements to multiple targets. For example, designers may want to propagate isolated edits to element attributes (a), apply nested layout adjustments (b), or transfer edits across different designs (c).
+
+## Abstract
+
+To create graphic designs such as infographics, UI designs, and explanatory diagrams, designers often need to apply consistent edits across similar groups of elements, which is a tedious task to perform manually. One solution is to explicitly specify the structure of the design upfront and leverage it to transfer edits across elements that share the predefined structure. However, defining such structure requires a lot of forethought, which conflicts with the iterative workflow of designers. We propose a different approach where designers select an arbitrary set of source elements, apply the desired edits, and automatically transfer the edits to similarly structured target elements. To this end, we present a graph kernel based algorithm that retroactively infers the shared structure between source and target elements. Our method does not require any explicit annotation, and can be applied to any existing design regardless of how it was created. It is flexible enough to handle differences in structure and appearance between source and target graphics, such as cardinality, color, size, and arrangement. It also generalizes to different types of edits such as style transfer or applying animation effects. We evaluate our algorithm on a range of real-world designs, and demonstrate how our approach can facilitate various editing scenarios.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+## 1 INTRODUCTION
+
+Graphic designs such as infographics, UI mockups, and explanatory diagrams often contain multiple sets of elements with similar visual structure. For example, in Figure 1(a), each chart is represented by a circle, a country name tag, three data bars and text annotations arranged in a consistent manner. There are similar repetitions across the graphics for each building in Figure 1(b). In some cases, we also see elements with consistent visual structure across multiple different designs. For example, Figure 1(c) shows two separate diagrams created by the same designer that share a similar structure.
+
+Designers often need to apply consistent edits such as style changes, layout adjustments, and animation effects across these repeating sets of elements. For example, Figure 1(a) shows adjustments to fill color and adding drop shadows for various elements, and Figure 1(b) shows modifications to the spacing and layout of the text and graphics for each building. Such edits are tedious to perform manually, especially as the number of elements increases. One solution is to explicitly specify the structure of the design upfront and leverage it to transfer edits across sets of elements that share the predefined structure. For example, Microsoft PowerPoint's master slide feature allows users to edit the appearance of multiple slides at once. Similarly, popular UX design tools (e.g., Adobe XD, Figma) encourage users to define master symbols or components that control the properties of repeated instances within a design, such as buttons, icons or banners. However, defining such structure ahead of time requires substantial forethought, which often conflicts with designer workflows. In many cases, designers create and iterate on the whole graphic to get the overall design right before thinking about repeated elements, shared structure, or what edits to apply.
+
+We propose a different approach to help designers apply edits consistently across a design. Instead of asking users to explicitly structure their content ahead of time, we allow them to select an arbitrary set of source elements, apply the desired edits, and then automatically transfer the edits to a collection of target elements. In this workflow, the system is responsible for retroactively inferring the shared structure between the source and target elements. Thus, our approach can be applied to any design, regardless of how it was created. Our method is not limited to graphics that share identical structure or elements but is flexible enough to accommodate common variations between the source and target graphics, such as color, shape, arrangement, or element cardinality. Moreover, it generalizes to different types of edits such as style transfer, layout adjustments, or applying animation effects.
+
+A core challenge in realizing this approach is finding correspondences between the source and target graphics such that the appropriate source edits can be applied to each target element. First, both the source and target graphics may contain many similar elements, both related and unrelated. Second, the target graphics usually contain several differences from the source graphics. For instance, in Figure 1(a) the size as well as the position of the data bars relative to the circle element are slightly different from one another. The type and range of these differences vary for each design, making it hard to define a consistent matching algorithm based on heuristics. Finally, some types of edits must take into account nesting and ordering relationships between the elements. For example, in Figure 1(b), the designer may scale up the entire set of graphics for one of the buildings and then separately adjust the vertical spacing between the text elements. Transferring this edit properly to all the buildings requires that we identify the analogous hierarchical structure for the other graphical elements in the design by computing multi-level correspondence. Although designers often organize elements into various groups during the creation process, these are not a reliable indicator of perceptual structure and do not always correspond to the desired hierarchy for an edit or transfer. In addition, user-created groups typically do not encode the ordering of elements, which is important for temporal effects like animation.
+
+The main contribution of our work is an automatic algorithm for determining the shared structure between source and target elements that addresses these challenges. Our method is based on graph kernels. Given source and target graphics, we compute relationship graphs that encode the structure of the elements. We then analyze the source and target relationship graphs using graph kernels to compute element-wise correspondences. We also introduce an efficient method to hierarchically cluster and sequence the target elements into ordered trees whose structure is consistent with the source graphics. Together, the correspondences and ordered trees make it possible to transfer edits from the source to target elements. We evaluate our algorithm on a range of real-world designs, and demonstrate how our approach facilitates graphical editing.
+
+## 2 RELATED WORK
+
+Inferring Structure in Graphical Designs A large body of work focuses on automatically estimating the internal structure of graphic designs to facilitate authoring and editing. For example, in the context of graphical patterns, [10] computes perceptual grouping of discrete patterns, and [3] encodes the structure of a pattern in a directed graph representation to create design variations. In the context of web designs, [7] leverages the tree structure of the DOM as well as its style and semantic attributes to create mappings between web pages. [9] generates semantic annotations for mobile app UIs by extracting patterns from the source code. In the context of layout optimization, [24] automatically decomposes a 2D layout into multiple 1D groups to perform group-aware layout arrangement. Other examples include structural analysis of architectural drawings [13], procedurally generated designs [16, 22], 3D designs [19] and 3D scenes [2, 21]. Our work focuses on 2D vector graphics designs, with the purpose of facilitating edit operations.
+
+While some design tools such as [6] allow users to create graphical designs procedurally (which can then be perfectly matched, edited, and animated), such tools are not commonplace, and the resulting designs lose their structural information when exported to portable formats such as SVG or PDF. We propose a generic solution that only depends on the graphics, regardless of how they were created.
+
+A different approach to purely automatic inference is mixed-initiative methods that take advantage of user interactions to infer structure. For example, previous work has analyzed user edits to extract implicit groupings of vector graphic objects [17], detect related elements in slide decks [1], and infer graphical editing macros by example based on inter-element relationships [8]. Some existing techniques introduce new interactive tools that facilitate manipulation or selection of multiple elements through a combination of explicit user actions and automatic inference [5,23]. In contrast, we propose an automatic algorithm to determine the shared structure within graphic designs. Our method does not require user annotations or edit history, which means it can be applied to any existing vector graphics design, regardless of how it was created.
+
+## Computing Correspondence between Graphic Designs
+
+Computing the correspondence between two analogous designs is a long-standing problem in graphics with many applications. Many techniques have been developed for computing correspondences between a pair of images [18], 3D shapes [20], and 3D scenes [2]. These algorithms exploit local features as well as global structures to compute the correspondence.
+
+One technique that has proven highly effective for comparing different objects is kernel-based methods. There is ample work on defining kernels between highly structured data types [15]. In particular, one approach is to represent objects or collections of objects as a graph and define a kernel over the graphs. This approach has been applied to a variety of problems such as molecule classification [11], computing document similarity [14], image classification [4], and 3D scene comparison [2]. Our algorithm is directly inspired by [2], which uses graph kernels to compute a similarity between 3D scenes. To the best of our knowledge, there is no prior work that computes a pairwise element-to-element correspondence between two sets of vector-based graphical elements. In addition to element-wise correspondence, we also infer the nesting and ordering relationships between the elements, which is crucial for transferring complex edit operations.
+
+## 3 OVERVIEW
+
+The input to our method is a set of source elements that the user has manually edited, and a set of target elements to which the user wants to transfer the edits. Transferring isolated changes to element attributes (e.g., fill color, text formatting) simply requires matching each target element to the appropriate source element and applying the corresponding edit. However, other types of edits define nesting and ordering relationships that must be taken into account. For example, many layout changes are applied hierarchically. In Figure 1(b), the designer may scale up the entire set of graphics for the Burj Khalifa and then separately adjust the vertical spacing and arrangement of the text elements in that column. Transferring this edit properly to all the buildings requires that we identify the analogous hierarchical structure for the other graphical elements in the design. In addition, the ordering between elements is important for temporal effects like animation. In Figure 1(a), the designer may want to apply animated entrance effects, in a specific sequence, to the set of elements for the Central Mediterranean region. Transferring this edit to the rest of the design requires grouping the elements for each year and determining the appropriate animation order.
+
+To ensure that our method generalizes to these various editing scenarios, we specify the desired nesting and ordering relationships amongst the source elements as part of the input. Specifically, the source elements are represented as an ordered tree (source tree), and our goal is to organize the target elements into one or more ordered target trees that correspond to the source tree structure (Figure 4). Note that the problem becomes much simpler if each source element is allowed to match no more than one target element, or if we assume that the target elements are already organized into ordered trees that match the source tree. In such cases, finding the appropriate element-wise correspondence is sufficient. However, these assumptions are not realistic for most real-world design workflows. They either require users to manually select subsets of target elements to perform individual transfer operations, which is inconvenient for designs with many repeated components, or to arrange graphics into consistently ordered trees (which may differ per editing operation) ahead of time.
+
+Thus, we propose an algorithm for computing the shared structure between source and target elements that does not limit the number of target elements or assume the presence of consistent pre-defined structure in the design.
+
+## 4 ALGORITHM
+
+Our algorithm is composed of two main stages. First, we compute an element-wise correspondence by finding the best matching source element for each target element. Then, we compute a hierarchical clustering of the target elements and organize them into ordered target trees. Overall, our approach is heavily inspired by Fisher et al. [2]. While [2] computes global similarity between entire 3D scenes, we need a detailed, structured correspondence between individual elements. This requires three new aspects in our approach.
+
+- Finding an optimal element-wise match requires a similarity score for each source-target pair (vs. a single similarity score between two scenes in [2]), and an algorithm to find the best match using these scores. (Section 4.2.5)
+
+- Determining the nesting and ordering that corresponds to the edit operations requires clustering the elements and inferring their order. (Section 4.3)
+
+- We use different low-level kernels pertinent to comparing graphic designs. For example, while [2] uses binary edge kernels by comparing edge types, we assign partial similarity by also considering the distance between elements. Such details are important to determine a correct match and nesting for designs which may contain many elements that share identical relationships.
+
+In the following, we use similar notation as [2] to illustrate how our method relates to and extends the previous work.
+
+### 4.1 Relationship Graphs
+
+Given the set of source elements $s$ and target elements $t$ , we start by constructing relationship graphs, ${G}_{s}$ and ${G}_{t}$ for the source and target graphics, respectively. The graph nodes represent the graphical elements, and the edges specify relationships between those elements. We rely on the intuition that elements have certain relationships that characterize the structure of the design and make two designs more or less similar to each other. For example, in Figure 1(a), the text elements 'Central', 'Mediterranean' and '91,302' are center-aligned with each other and all contained inside the orange circle element. The blue and green charts also contain analogous text and circle elements that share these relationships. We selected a number of prevalent relationships by observing real world designs. Table 1 shows the list of the relationships and the process used to test for them. In general, the graph may contain multiple edges between a pair of nodes.
+
+
+
+Figure 2: An example relationship graph for a set of elements. Only a subset of the edges are shown.
+
+The tests are performed in the order listed. To eliminate redundancy, we encode at most one edge for a given category between any two elements. For example, if elements A and B are both center-aligned and left-aligned, we only encode the center-aligned relationship since that is the first test to be satisfied in the horizontal alignment category. Figure 2 is an example illustrating different edges in a relationship graph.
+
+Note that we do not encode grouping information from the designer. Grouping structure created during the authoring process is usually not a reliable indicator of the visual structure that informs most edit transfer operations. Thus, we decided to disregard all such groups when constructing the relationship graphs.
+
+### 4.2 Computing Element-wise Correspondence
+
+After constructing source and target relationship graphs, we compute correspondences between their nodes. More specifically, for each target graph node ${n}_{t} \in {G}_{t}$ , we find the closest matching source graph node ${n}_{s} \in {G}_{s}$ using a graph kernel based approach inspired by [2]. To apply the method, we define separate kernels to compare individual nodes and edges across the two graphs.
+
+#### 4.2.1 Node Kernel
+
+The nodes in our relationship graph represent individual graphical elements, with a number of properties such as type, shape, size and style attributes. The node kernel is a combination of several functions, each of which takes as input two nodes and computes the similarity of different features of the nodes. Each function described below is constructed to be positive semi-definite and bounded between 0 (no similarity) and 1 (identical).
+
+Type Kernel $\left( {k}_{\text{type }}\right)$ : Graphic elements are typically categorized as shapes (e.g., path, circle, rectangle), images, or text. In particular, most graphic design products and the SVG specification distinguish objects in this way. The type kernel returns 1 if two nodes have the exact same type (e.g., circle-circle), 0.5 if they are in the same category (e.g., circle-path), and 0 otherwise.
+
+Size Kernel $\left( {k}_{\text{size }}\right)$ : This function compares the bounding box size of two elements. It returns the area of the smaller bounding box divided by the area of the larger bounding box.
+
+
+| Category | Edge | Test |
+| --- | --- | --- |
+| Intersection | Overlay | Element A overlays element B if A is contained in B and vice versa. |
+| | Contained in | Element A is contained in element B if A's bounding box is inside B's bounding box. |
+| | Overlap | Element A overlaps element B if their bounding boxes intersect. |
+| Z-Order | Z-Above / Z-Below | Element A is Z-Above (Z-Below) element B if elements A and B overlap, and A's z-order is higher (lower) than that of B. |
+| Vertical Alignment | Center / Left / Right | Similar to intersection relationships, alignment is computed on element bounding boxes. |
+| Horizontal Alignment | Middle / Top / Bottom | |
+| Horizontal Adjacency | Left of / Right of | Element A is left-of element B if its bounding box is to the left of B's bounding box within a threshold, and if the vertical ranges of their bounding boxes overlap.(*) |
+| Vertical Adjacency | Above / Below | |
+| Style | Same Style | While there is a plethora of style attributes for each element, we use fill color and stroke style for non-text elements, and font style for text elements, since these attributes are visually most apparent. |
+
+Table 1: Edges encoded in relationship graphs. (*For the threshold, we use $\frac{1}{2}$ (width of source graphics bounding box) for horizontal adjacency, and $\frac{1}{2}$ (height of source graphics bounding box) for vertical adjacency. If element $A$ is left of multiple other elements, we only encode the relationship with the closest element. These constraints prevent edges between elements that are far apart relative to the size of the source graphics.)
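The per-category tests in Table 1 and the adjacency threshold can be sketched as follows. This is a minimal illustration over axis-aligned bounding boxes; the `Box` tuple and helper names are our own, not from the paper. Within a category, tests run in order and the first satisfied test yields the single encoded edge.

```python
from collections import namedtuple

# Axis-aligned bounding box: (x0, y0, x1, y1).
Box = namedtuple("Box", "x0 y0 x1 y1")

def contained(a: Box, b: Box) -> bool:
    """A's bounding box lies inside B's bounding box."""
    return a.x0 >= b.x0 and a.y0 >= b.y0 and a.x1 <= b.x1 and a.y1 <= b.y1

def overlap(a: Box, b: Box) -> bool:
    """Bounding boxes intersect."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def intersection_edge(a: Box, b: Box):
    """Ordered tests for the intersection category; the first
    satisfied test yields the (single) edge for this category."""
    if contained(a, b) and contained(b, a):
        return "overlay"
    if contained(a, b):
        return "contained-in"
    if overlap(a, b):
        return "overlap"
    return None

def left_of(a: Box, b: Box, src_width: float) -> bool:
    """Horizontal adjacency: A is left of B within half the source
    graphics' width, and their vertical ranges overlap."""
    gap = b.x0 - a.x1
    v_overlap = a.y0 < b.y1 and b.y0 < a.y1
    return 0 <= gap <= 0.5 * src_width and v_overlap
```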
+
+
+
+Figure 3: The element shape kernel, ${k}_{\text{shape }}$ computes the difference between the normalized bitmap images of the elements' silhouettes.
+
+Shape Kernel $\left( {k}_{\text{shape }}\right)$ : We obtain the normalized shape (ignoring aspect ratio) of each element by taking its filled silhouette and scaling it into a ${64} \times {64}$ bitmap image (Figure 3). The element shape kernel returns the percentage image difference between two normalized shapes.
+
+Font Kernel $\left( {k}_{\text{font }}\right)$ : For comparing two text elements, we consider their font style attributes. Specifically, we compare font-family, font-size, font-style (e.g., normal, italic) and font-weight (e.g., normal, bold). We return the percentage of style attributes that have equal values.
+
+The final node kernel, ${k}_{\text{node }}$ , is a weighted sum of the above kernels. Since many editing operations (e.g., changing font size, applying a character-wise animation effect) are non-transferable between text and shape elements, we separate text elements and non-text elements, and only compare elements within the same category. For comparing shape elements, we take into account type, size and shape kernels.
+
+$$
+{k}_{\text{node }}\left( {{n}_{s},{n}_{t}}\right) = {\omega }_{\text{type }}{k}_{\text{type }}\left( {{n}_{s},{n}_{t}}\right) \tag{1}
+$$
+
+$$
++ {\omega }_{\text{size }}{k}_{\text{size }}\left( {{n}_{s},{n}_{t}}\right) + {\omega }_{\text{shape }}{k}_{\text{shape }}\left( {{n}_{s},{n}_{t}}\right)
+$$
+
+For text elements, font style attributes are deemed more discriminatory than shape or size.
+
+$$
+{k}_{\text{node }}\left( {{n}_{s},{n}_{t}}\right) = {\omega }_{\text{type }}{k}_{\text{type }}\left( {{n}_{s},{n}_{t}}\right) + {\omega }_{\text{font }}{k}_{\text{font }}\left( {{n}_{s},{n}_{t}}\right) \tag{2}
+$$
+
+If ${n}_{s}$ and ${n}_{t}$ are not in the same category, we assign a small constant (0.1) instead. The weights ${\omega }_{\text{type }},{\omega }_{\text{size }},{\omega }_{\text{shape }}$ and ${\omega }_{\text{font }}$ are defined per source element; §4.2.4 details how we compute them.
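Equations 1-2 and the category rule can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the element dictionaries, the `k_shape` stub, and the weight map `w` are our own assumptions.

```python
def k_type(ns, nt):
    """Type kernel: 1 for an exact type match (circle-circle),
    0.5 for the same category (circle-path), 0 otherwise."""
    if ns["type"] == nt["type"]:
        return 1.0
    return 0.5 if ns["category"] == nt["category"] else 0.0

def k_size(ns, nt):
    """Size kernel: smaller bounding-box area over larger."""
    return min(ns["area"], nt["area"]) / max(ns["area"], nt["area"])

def k_font(ns, nt):
    """Font kernel: fraction of equal font style attributes."""
    attrs = ("font-family", "font-size", "font-style", "font-weight")
    return sum(ns[a] == nt[a] for a in attrs) / len(attrs)

def k_shape(ns, nt):
    """Stub for the shape kernel (64x64 silhouette difference)."""
    return 1.0

def k_node(ns, nt, w):
    """Eqs. 1-2: weighted sum of feature kernels; elements in
    different categories (text vs. non-text) get a small constant."""
    if ns["category"] != nt["category"]:
        return 0.1
    if ns["category"] == "text":
        return w["type"] * k_type(ns, nt) + w["font"] * k_font(ns, nt)
    return (w["type"] * k_type(ns, nt)
            + w["size"] * k_size(ns, nt)
            + w["shape"] * k_shape(ns, nt))
```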
+
+#### 4.2.2 Edge Kernel
+
+Next, we define an edge kernel to compute the similarity between a pair of edges that represent the relationship between two graphical elements. Each edge encodes a type of relationship (e.g., overlap, left-aligned). Based on our observation of real-world designs, we distinguish between strong edges and regular edges. Strong edges are highly discriminative relationships that tend to be preserved across design alterations. These include intersection and z-order relationships. All other edge relationships are deemed regular. In addition to the type $\left( \tau \right)$ , each edge also encodes the distance ($d$) between the two connected elements. Distances are approximated by the distances between the bounding box centers. Then, the kernel between two edges ${e}_{s}$ and ${e}_{t}$ with types ${\tau }_{{e}_{s}},{\tau }_{{e}_{t}}$ and distances ${d}_{{e}_{s}},{d}_{{e}_{t}}$ respectively is defined as:
+
+$$
+{k}_{\text{edge }}\left( {{e}_{s},{e}_{t}}\right) = {\omega }_{{\tau }_{{e}_{s}}}c\left( {\tau }_{{e}_{s}}\right) \delta \left( {{\tau }_{{e}_{s}},{\tau }_{{e}_{t}}}\right) \frac{\min \left( {{d}_{{e}_{s}},{d}_{{e}_{t}}}\right) }{\max \left( {{d}_{{e}_{s}},{d}_{{e}_{t}}}\right) } \tag{3}
+$$
+
+where $\delta$ is a Kronecker delta function which returns whether the two edge types ${\tau }_{{e}_{s}}$ and ${\tau }_{{e}_{t}}$ are identical. $c\left( {\tau }_{e}\right)$ is 2.5 if ${\tau }_{e}$ is a strong edge, and 1 if it is a regular edge. Again, ${\omega }_{{\tau }_{{e}_{s}}}$ is a weight factor that is computed per source element and per edge type. See §4.2.4 for details.
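Equation 3 can be transcribed directly. The edge dictionaries, the `STRONG` set, and the weight map `w_tau` are our own illustrative representations:

```python
# Edge types we treat as "strong" per the paper's description
# (intersection and z-order relationships); an assumption here.
STRONG = {"overlay", "contained-in", "overlap", "z-above", "z-below"}

def k_edge(es, et, w_tau):
    """Edge kernel of Eq. 3: zero unless the edge types match
    (Kronecker delta), boosted (x2.5) for strong edges, and scaled
    by the ratio of center-to-center distances. `w_tau` holds the
    per-source-element, per-edge-type weights of Sec. 4.2.4."""
    if es["type"] != et["type"]:
        return 0.0
    c = 2.5 if es["type"] in STRONG else 1.0
    d_ratio = min(es["dist"], et["dist"]) / max(es["dist"], et["dist"])
    return w_tau[es["type"]] * c * d_ratio
```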
+
+#### 4.2.3 Graph Walk Kernel
+
+Using the node and edge kernels we compute a graph walk kernel to compare nodes between two graphs. A walk of length $p$ on a graph is an ordered set of $p$ nodes on the graph along with a set of $p - 1$ edges that connect this node set together. We exclude walks that contain a cycle. Let ${W}_{G}^{p}\left( n\right)$ be the set of all walks of length $p$ starting at node $n$ in a graph $G$ . To compare nodes ${n}_{s}$ and ${n}_{t}$ in relationship graphs ${G}_{s}$ and ${G}_{t}$ respectively we define the $p$ -th order rooted walk graph kernel ${k}_{R}^{p}$ :
+
+$$
+{k}_{R}^{p}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) =
+$$
+
+$$
+\mathop{\sum }\limits_{{{W}_{{G}_{s}}^{p}\left( {n}_{s}\right) ,{W}_{{G}_{t}}^{p}\left( {n}_{t}\right) }}{k}_{\text{node }}\left( {{n}_{{s}_{p}},{n}_{{t}_{p}}}\right) \mathop{\prod }\limits_{{i = 1}}^{{p - 1}}{k}_{\text{node }}\left( {{n}_{{s}_{i}},{n}_{{t}_{i}}}\right) {k}_{\text{edge }}\left( {{e}_{{s}_{i}},{e}_{{t}_{i}}}\right) \tag{4}
+$$
+
+The walk kernel compares nodes ${n}_{s}$ and ${n}_{t}$ by comparing all walks of length $p$ whose first node is ${n}_{s}$ against all walks of length $p$ whose first node is ${n}_{t}$ . The similarity between a pair of walks is computed by comparing the nodes and edges that compose each walk using the node and edge kernels respectively.
+
+Finally, the similarity of nodes ${n}_{s}$ and ${n}_{t}$ is defined by taking the sum of the average walk graph kernels for all walk lengths up to $p$
+
+$$
+\operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) = \mathop{\sum }\limits_{p}\frac{{k}_{R}^{p}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) }{\left| {{W}_{{G}_{s}}^{p}\left( {n}_{s}\right) }\right| \left| {{W}_{{G}_{t}}^{p}\left( {n}_{t}\right) }\right| } \tag{5}
+$$
+
+where $\left| {{W}_{G}^{p}\left( n\right) }\right|$ is the number of all walks of length $p$ starting at node $n$ in a graph $G$ .
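Equations 4-5 can be illustrated with a small brute-force sketch that enumerates cycle-free walks. This is a simplification for exposition: the adjacency-list graph format is our own, and `k_node`/`k_edge` are passed in as callables standing for the kernels of Sections 4.2.1-4.2.2.

```python
from itertools import product

def walks(G, start, p):
    """All cycle-free walks with p nodes starting at `start`.
    G[n] is a list of (edge, neighbor) pairs; a walk is a
    (node_list, edge_list) pair with len(edge_list) == p - 1."""
    if p == 1:
        return [([start], [])]
    out = []
    for nodes, edges in walks(G, start, p - 1):
        for e, nxt in G[nodes[-1]]:
            if nxt not in nodes:  # walks containing a cycle are excluded
                out.append((nodes + [nxt], edges + [e]))
    return out

def sim(Gs, Gt, ns, nt, k_node, k_edge, max_p=3):
    """Eqs. 4-5: for each walk length p, average the pairwise walk
    kernel over all walk pairs rooted at (ns, nt), then sum over p."""
    total = 0.0
    for p in range(1, max_p + 1):
        Ws, Wt = walks(Gs, ns, p), walks(Gt, nt, p)
        if not Ws or not Wt:
            continue
        kp = 0.0
        for (sn, se), (tn, te) in product(Ws, Wt):
            term = k_node(sn[-1], tn[-1])          # last node pair
            for i in range(p - 1):                 # interior node/edge pairs
                term *= k_node(sn[i], tn[i]) * k_edge(se[i], te[i])
            kp += term
        total += kp / (len(Ws) * len(Wt))
    return total
```

Enumerating all walks is exponential in `p`, which is acceptable only because `p` is small; the paper does not specify its walk-length bound, so `max_p=3` here is an assumption.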
+
+#### 4.2.4 Kernel Weights
+
+The node and edge kernels in Equations 1 - 3 compare different features (e.g., shape, style, layout) of the source and target graphics. The weights applied to these kernels represent the importance of each feature in determining correspondence. It is not possible to assign globally meaningful weights because the discriminative power of each feature depends on the specific design and even specific elements within the design. For example, in Table 3, D4, 7 out of the 8 source elements have the same color, green. So, color is not a very discriminative feature for these elements. On the other hand, the circle element which contains the symbol has a unique pastel color within the source set. For this element, color is a highly discriminative feature.
+
+We assume that features that are highly discriminative within the source graphics will also be important when comparing to the target graphics. Based on this assumption, we determine a unique set of kernel weights for each element in the source graphics, $s$ , as follows.
+
+Node kernel weights: For each element ${n}_{{s}_{i}} \in s$ , we compute the node feature kernel between ${n}_{{s}_{i}}$ and every other element ${n}_{{s}_{j}} \in s$ . The weight ${\omega }_{\text{feature }}$ is inversely proportional to the average feature kernel value.
+
+$$
+{\omega }_{\text{feature }}\left( {n}_{{s}_{i}}\right) = {1.0} - \frac{\mathop{\sum }\limits_{{{n}_{{s}_{j}} \in s, j \neq i}}{k}_{\text{feature }}\left( {{n}_{{s}_{i}},{n}_{{s}_{j}}}\right) }{\left| s\right| - 1} \tag{6}
+$$
+
+where $\left| s\right|$ is the number of elements in the source graphics. If the average kernel value for a certain feature is high, many elements within the source graphics share a similar value for that feature, so the feature is less discriminative and vice versa.
+
+Edge kernel weights: The weights for the edge kernel are defined for each source element, ${n}_{{s}_{i}}$ , and for each edge type, ${\tau }_{e}$ .
+
+$$
+{\omega }_{{\tau }_{e}}\left( {n}_{{s}_{i}}\right) = 1 - \frac{\mathop{\sum }\limits_{{{e}_{i} \in {E}_{{n}_{{s}_{i}}}}}\delta \left( {{\tau }_{{e}_{i}},{\tau }_{e}}\right) }{\left| {E}_{{n}_{{s}_{i}}}\right| } \tag{7}
+$$
+
+where ${E}_{{n}_{{s}_{i}}}$ is the set of all edges from node ${n}_{{s}_{i}}$ . The numerator counts the number of edges in ${E}_{{n}_{{s}_{i}}}$ that have type ${\tau }_{e}$ . If element ${n}_{{s}_{i}}$ has many edges of a given type, that edge type is less discriminatory for ${n}_{{s}_{i}}$ and vice versa. In §6.2, we conduct an ablation experiment where we replace these weights with uniform weights.
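Equations 6-7 transcribe directly; the list-based element and edge representations below are our own illustrative choices:

```python
def node_feature_weight(i, source, k_feature):
    """Eq. 6: the weight of a node feature for source element i is one
    minus the average feature-kernel value against every other source
    element -- widely shared feature values get low weight."""
    others = [k_feature(source[i], source[j])
              for j in range(len(source)) if j != i]
    return 1.0 - sum(others) / len(others)

def edge_type_weight(edges_of_i, tau):
    """Eq. 7: one minus the fraction of element i's edges whose type
    is tau -- common edge types around i are less discriminative."""
    count = sum(1 for e in edges_of_i if e["type"] == tau)
    return 1.0 - count / len(edges_of_i)
```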
+
+#### 4.2.5 Element-wise Correspondence
+
+Given the pairwise similarity score between the source and target elements, a straightforward approach for finding an element-wise correspondence would be to match each target element to the source element that has the highest similarity score. The downside of this approach is that it is sensitive to small differences in the similarity score. Instead, we take an iterative approach which looks for confident matches and uses these matches to update the similarity scores of other pairs of elements. Source element ${n}_{{s}_{i}}$ and target element ${n}_{{t}_{j}}$ form a confident match if
+
+$$
+\operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \gg \operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{{t}_{j}}}\right) \;\forall {n}_{s} \in s,{n}_{s} \neq {n}_{{s}_{i}} \tag{8}
+$$
+
+that is, if the similarity score of $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is much greater than the similarity score of ${n}_{{t}_{j}}$ with any other source elements.
+
+Once we identify the confident matches, we use them as anchors to re-compute all other pair-wise similarity scores. First, we update the node kernels in Equations 1-2. Since we are confident that $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is a good match, we boost their node kernels:
+
+$$
+{k}_{\text{node }}\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) = {2.5}\;\text{ if }\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \text{ is a confident match } \tag{9}
+$$
+
+Since our goal is to match each target element to exactly one source element, if $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is a confident match, then ${n}_{{t}_{j}}$ is not a good match for any other source element ${n}_{s} \neq {n}_{{s}_{i}}$ . Therefore, we discount these node kernels:
+
+$$
+{k}_{\text{node }}\left( {{n}_{s},{n}_{{t}_{j}}}\right) = {0.1}\;\text{ if }\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \text{ is a confident match and }{n}_{s} \neq {n}_{{s}_{i}} \tag{10}
+$$
+
+All other node kernels are computed using the original Equations 1-2. The similarity score is then computed using the original Equation 5 with the updated node kernels. We iterate the steps of identifying confident matches and re-computing the similarity scores until all target elements are matched to a source element or until we reach a set maximum number of iterations. At this point, any remaining target elements are matched with the most similar source element according to the most recently updated similarity score.
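A single round of the confident-match test can be sketched as below. Note the simplifications: `margin` is our numeric stand-in for the ">>" of Eq. 8, and the kernel boosting/discounting of Eqs. 9-10 followed by recomputing Sim is only described in the comment, not implemented.

```python
import numpy as np

def match_targets(sim, margin=2.0):
    """One matching round over a |targets| x |sources| similarity
    matrix (rows: targets, columns: sources). A target is a confident
    match (Eq. 8) when its best source beats every other source by
    `margin`. The full algorithm would now boost/discount the node
    kernels of the confident pairs (Eqs. 9-10), recompute Sim, and
    iterate; here remaining targets simply keep the argmax source."""
    assign, confident = [], []
    for row in np.asarray(sim, dtype=float):
        best = int(np.argmax(row))
        rest = np.delete(row, best)
        is_conf = rest.size == 0 or row[best] >= margin * rest.max()
        assign.append(best)
        confident.append(bool(is_conf))
    return assign, confident
```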
+
+### 4.3 Computing Ordered Target Trees
+
+The previous stage of the algorithm finds the closest matching source element for each target element. As noted earlier, this correspondence is enough to transfer isolated attributes that do not depend on the nesting structure or ordering of the target elements. However, for many other edits (e.g., layout, animation), transferring changes requires computing ordered target trees that are consistent with the source tree.
+
+#### 4.3.1 Hierarchical Clustering
+
+We start by computing a hierarchical clustering of the target elements that matches the structure of the source tree. More specifically, the goal is to cluster together target nodes that correspond to the same source sub-tree. For example, in Figure 1(a), there should be two top-level target clusters with the Hungary graphics and Greece graphics, which represent two coherent instances of the entire source tree (Italy graphics). If the source tree contains nested sub-trees, the target clustering should include corresponding nested clusters.
+
+To obtain these clusters, we use agglomerative nesting (AGNES), a standard bottom-up clustering method that takes as input a similarity matrix(D)and the number of desired clusters(k), and iteratively merges the closest pair of clusters to generate a hierarchy [12].
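AGNES itself is straightforward to sketch. The snippet below is a generic bottom-up clustering with average linkage over an element-level distance function `dist`; it is not tied to the paper's specific similarity matrix, and the interface is a simplified stand-in.

```python
def agnes(items, dist, k):
    """Bottom-up (AGNES) clustering: repeatedly merge the closest pair of
    clusters until k clusters remain. dist(a, b) gives the distance between
    two elements; cluster-to-cluster distance is the average over all
    cross-cluster element pairs (average linkage)."""
    clusters = [[x] for x in items]

    def cluster_dist(c1, c2):
        return sum(dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        # find the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_dist(clusters[ij[0]],
                                               clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]    # merge j into i
        del clusters[j]
    return clusters
```

For nested target trees, this procedure would be applied recursively within each resulting cluster, with `k` re-estimated at each level.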
+
+The key aspect of our approach is how we define the similarity matrix, which measures how likely a pair of target elements is to belong to the same cluster. We rely on the intuition that the relationship between two target elements in the same cluster should be similar to the relationship between their corresponding source elements. For example, in Figure 1(a), consider the orange circle and the text element 'Central' in the Italy graphics (source), and their corresponding elements in the target graphics (blue circle, green circle, 'Western' and 'Eastern'). The orange circle contains 'Central'. Likewise, the blue circle contains 'Western', so these elements should be grouped together. On the other hand, although 'Eastern' also corresponds to 'Central', 'Eastern' and the blue circle do not have a containment relationship, so they should be grouped separately. We measure the similarity of relationships between pairs of elements, again using a graph walk kernel. From the relationship graph defined in §4.1, the relationship between two elements $n$ and ${n}^{\prime }$ is represented by ${W}_{G}^{p}\left( {n,{n}^{\prime }}\right)$, the set of all walks of length $p$ whose first node is $n$ and whose last node is ${n}^{\prime }$. Then, the similarity of target elements ${n}_{t}$ and ${n}_{t}^{\prime }$ with corresponding source elements ${n}_{s}$ and ${n}_{s}^{\prime }$ is defined as:
+
+$$
+{k}_{R}^{p}\left( {G}_{s},{G}_{t},{n}_{t},{n}_{t}^{\prime },{n}_{s},{n}_{s}^{\prime }\right) = \sum_{\substack{{W}_{{G}_{s}}^{p}\left( {n}_{s},{n}_{s}^{\prime }\right) \\ {W}_{{G}_{t}}^{p}\left( {n}_{t},{n}_{t}^{\prime }\right)}} {k}_{\text{node}}\left( {n}_{s},{n}_{t}\right) \prod_{i=1}^{p-1} {k}_{\text{node}}\left( {n}_{{s}_{i}},{n}_{{t}_{i}}\right) {k}_{\text{edge}}\left( {e}_{{s}_{i}},{e}_{{t}_{i}}\right) \tag{11}
+$$
+
+This equation is equivalent to Equation 4, except here we compare walks between a fixed source and destination node. That is, we compare all walks of length $p$ starting at ${n}_{t}$ and ending at ${n}_{t}^{\prime }$ against all walks of length $p$ starting at ${n}_{s}$ and ending at ${n}_{s}^{\prime }$ . Again, we take the sum of the average walk graph kernels for all walk lengths up to $p$ . For the bottom-up clustering method, to compare distances between clusters, we use the average distance between all pairs of elements from each cluster.
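A hedged sketch of this fixed-endpoint comparison is shown below: walks are enumerated explicitly, and every pair of walks is scored by multiplying node kernels along the nodes and edge kernels along the edges, averaging over all walk pairs. The exact placement and normalization of the kernels in Equation 11 may differ slightly; the adjacency-dict graph format and all names here are illustrative assumptions.

```python
from itertools import product as walk_pairs

def walks(graph, start, end, p):
    """Enumerate all walks with exactly p edges from start to end.
    graph is an adjacency dict: node -> list of neighbor nodes."""
    if p == 0:
        return [[start]] if start == end else []
    result = []
    def extend(path):
        if len(path) - 1 == p:
            if path[-1] == end:
                result.append(path)
            return
        for nxt in graph.get(path[-1], []):
            extend(path + [nxt])
    extend([start])
    return result

def relationship_kernel(g_s, g_t, n_s, n_s2, n_t, n_t2, p, k_node, k_edge):
    """Compare all length-p walks between fixed endpoints in the source and
    target relationship graphs; average the per-pair products of node and
    edge kernels over all walk pairs."""
    ws = walks(g_s, n_s, n_s2, p)
    wt = walks(g_t, n_t, n_t2, p)
    if not ws or not wt:
        return 0.0
    total = 0.0
    for a, b in walk_pairs(ws, wt):
        score = 1.0
        for i in range(p + 1):                      # nodes along the walk
            score *= k_node(a[i], b[i])
        for i in range(p):                          # edges along the walk
            score *= k_edge((a[i], a[i + 1]), (b[i], b[i + 1]))
        total += score
    return total / (len(ws) * len(wt))
```

Summing `relationship_kernel` over all walk lengths up to `p` would give the aggregate similarity used to populate the clustering matrix.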
+
+We use a simple heuristic to determine the number of clusters, $k$ . For each source element, we count the number of matched target elements and take the mode of this value. We apply this heuristic recursively to determine the number of clusters at each level. We experimented with more complex methods such as using the spectral gap of the Laplacian matrix or putting a threshold on the maximum distance between two clusters to be merged. However, we found that the simpler approach worked better in most cases, even with variations in element cardinality between the source and target graphics.
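The mode heuristic for one level can be written in a few lines. The dictionary-of-matches representation is an assumption; the paper applies this recursively at each level of the tree.

```python
from collections import Counter

def infer_num_clusters(matches):
    """matches: dict mapping each target element to its matched source element.
    Count the matched targets per source element, then take the mode of
    these per-source counts as the number of clusters k."""
    per_source = Counter(matches.values())              # source -> #targets
    return Counter(per_source.values()).most_common(1)[0][0]
```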
+
+#### 4.3.2 Ordering
+
+Once we obtain the target tree, the final step is to determine the ordering between the subtrees, which translates to the ordering of the elements. The ordering between subtrees depends heavily on the global structure of the design (e.g., radial design vs. linear design), the semantics of the content (e.g., graphics that represent chronological events) and the user's intent (e.g., presenting things in chronological order vs. in reverse chronological order). Instead of trying to infer these factors, we rely on a simple heuristic based on natural reading order. We order the subtrees from left to right, then top to bottom using their bounding box centers.
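One way to realize this reading-order heuristic is to group subtree centers into rows of similar vertical position and sort each row left to right. The row-tolerance parameter `row_tol` and the `center` accessor are assumptions added for this sketch.

```python
def reading_order(subtrees, center, row_tol=10):
    """Order subtrees in natural reading order using bounding-box centers:
    group centers into rows of similar y, then sort each row by x.
    row_tol is an assumed tolerance for treating centers as one row."""
    if not subtrees:
        return []
    by_y = sorted(subtrees, key=lambda t: center(t)[1])
    rows, current = [], [by_y[0]]
    for t in by_y[1:]:
        if abs(center(t)[1] - center(current[-1])[1]) <= row_tol:
            current.append(t)
        else:
            rows.append(current)
            current = [t]
    rows.append(current)
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda t: center(t)[0]))
    return ordered
```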
+
+## 5 RESULTS
+
+To test the effectiveness of our algorithm, we collected a set of 25 vector graphic designs. The designs included infographics, presentation templates and UI layouts. We collected designs that contained a number of repeating structures that could benefit from bulk editing. We selected the source and target graphics for each design. Then, we manually coded the type of variations (e.g., color, layout) applied between the source and target graphics. More variations imply greater differences between the source and target graphics, making the correspondence more difficult to compute. Note that we did not use this test dataset to develop our algorithm.
+
+For each design, we manually specified an ordered source tree and the corresponding ground truth set of ordered target trees. By default, we created shallow source trees (height of 1) for all designs. We also created deeper trees for a few examples (e.g., Figure 4; see supplementary material for more examples). We chose the source element ordering such that applying entrance animation effects in that order would produce a plausible presentation. For example, text elements organized in paragraphs are ordered top-down, and container elements (e.g., the circle in Figure 1(a)) are succeeded by the interior graphics (e.g., 'Central', 'Mediterranean' and '91,302'). For other elements, we arbitrarily chose one specific ordering among many reasonable options. When constructing the ground truth target trees, we created a separate subtree for each set of target elements that corresponds to an instance of the source graphics. For example, in Figure 1(a), the elements in the blue chart constitute one target subtree, and the elements in the green chart constitute another target subtree. In Figure 1(b), each set of graphics representing a building comprises a separate subtree. The ordering of elements within the lowest-level subtrees is determined by the ordering of the corresponding source elements in the source tree. Higher-level subtree orderings are determined by the heuristic described in §4.3.2.
+
+We evaluate the two stages of our algorithm separately. First, we evaluate the element-wise match between the source and target graphics. For the majority of target elements, the ground-truth match and clustering are visually apparent in the design. In cases where the variation between the source and target graphics makes the match less clear, we choose a reasonable ground-truth match. For example, in Table 3 D5, there are six different person icons, each of which consists of multiple vector graphics paths. Since each path roughly represents a body part (e.g., hair, face, neckline), we consider a match between equivalent parts of the symbol to be correct. On the other hand, the symbols in D4 do not have a clear semantic or visual correspondence. In this case, we consider a match between any target symbol path and any source symbol path as a correct match.
+
+We evaluate the match for two different scenarios. In each design, the target graphics contains multiple sets of targets that match the source graphics. First, we evaluate the match on each target set separately (Separate). This would apply to the scenario when the user selects each target set separately and applies a transfer one by one. Then, we evaluate the match between the source graphics and the entire target graphics, emulating the case when the user selects all target elements and applies the transfer at once (All). Table 3 reports the percentage of correctly matched target elements in each case.
+
+For the hierarchical clustering and ordering encoded in the target trees, we also evaluate two scenarios. First, we compute the target trees given the match results from the All scenario, which may include incorrect matches (Default). Next, we compute the trees given the ground-truth match (Perfect Match).
+
+In order to quantify the correctness of the final result, we define an edit distance, ${D}_{e}$ , that measures the difference between the ground-truth target tree and our computed target tree. To simplify this distance, we flatten both the ground-truth and computed trees into a sequence of target elements based on the ordering information encoded in the target trees. In these sequences, we label each target element with the corresponding source node, which allows us to identify errors in the computed element-wise matches (wrong label for a given target element) and incorrect ordering (wrong sequence of labels). Given a flattened ground-truth sequence $g$ and a computed sequence $r$ , we define the edit distance as follows:
+
+$$
+{D}_{e}\left( {r, g}\right) = {D}_{m}\left( {r, g}\right) + {D}_{o}\left( {r, g}\right) \tag{12}
+$$
+
+where the match edit distance ${D}_{m}$ encodes differences in element-wise matches, and the order edit distance ${D}_{o}$ encodes differences in the order of elements between $r$ and $g$. ${D}_{m}\left( {r, g}\right)$ is simply the total number of elements in the target that are matched to the wrong source element. It roughly represents the work required to fix all the incorrect matches. For the Default case, it is equal to the number of incorrect matches in All. For the Perfect Match case, it is equal to 0.
+
+
+
+Figure 4: Example of ordered source and target trees. Node colors indicate element-wise correspondence. Multiple instances of matching target graphics are represented as nested target trees. Here, the entire source graphics (top slice of the bulb) matches 4 target instances, represented by subtrees T1, T2 and T3. Within T1, there are 3 instances/sub-trees of the necktie symbol, which correspond to the flashlight symbol in the source. Our algorithm takes as input an ordered source tree and outputs an ordered target tree.
+
+The order edit distance is defined as:
+
+$$
+{D}_{o}\left( {r, g}\right) = N - \operatorname{InOrder}\left( {{r}^{\prime }, g}\right) \tag{13}
+$$
+
+where $N$ is the number of elements in the target graphics, and ${r}^{\prime }$ is the result of correcting all matches in $r$. Note that ${r}^{\prime }$ must be a permutation of $g$ since both contain the same set of target-to-source matches (potentially in different orders). InOrder$\left( {{r}^{\prime }, g}\right)$ is a function that returns the total length of all subsequences in ${r}^{\prime }$ of length $\geq 2$ that exactly match a subsequence of $g$, minus $\left( {{N}_{o} - 1}\right)$, where ${N}_{o}$ is the number of matching subsequences. If $g$ and ${r}^{\prime }$ are identical, ${D}_{o}\left( {{r}^{\prime }, g}\right) = 0$. ${D}_{o}$ is a variant of the transposition distance and roughly measures the number of operations required to correct the order of ${r}^{\prime }$ to match $g$. Note that ${D}_{e}$ does not penalize $r$ for errors in the hierarchical structure of clusters as long as the result has the same ordering as $g$. In most cases, having the correct match and ordering will produce the desired visual result. Thus, ${D}_{e}$ attempts to approximate the number of correction operations needed to reach the desired ground truth solution. We report ${D}_{e}$ divided by the number of target elements, $N$. Note that creating $g$ manually would roughly correspond to ${D}_{e} = N$, since the user could just visit all the target elements in the correct order and assign each target element to the appropriate source element (or, equivalently, apply the desired edit).
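The edit distance defined in Equations 12 and 13 can be sketched as follows. The `(target_id, source_label)` sequence representation is an assumption, and for simplicity the sketch assumes distinct labels within a sequence (the paper's targets may repeat source labels across instances).

```python
def order_distance(rp, gl):
    """D_o = N - InOrder(r', g): credit maximal runs in r' (length >= 2)
    that appear contiguously and in order in g, minus (runs - 1).
    Labels are assumed distinct, a sketch-level simplification."""
    pos = {lab: i for i, lab in enumerate(gl)}
    n = len(rp)
    in_order, runs, i = 0, 0, 0
    while i < n:
        j = i
        # extend the run while consecutive labels are adjacent in g
        while j + 1 < n and pos[rp[j + 1]] == pos[rp[j]] + 1:
            j += 1
        if j - i + 1 >= 2:
            in_order += j - i + 1
            runs += 1
        i = j + 1
    if runs:
        in_order -= runs - 1
    return n - in_order

def edit_distance(r, g):
    """D_e = D_m + D_o over flattened sequences of (target_id, source_label)."""
    gt_label = dict(g)
    # D_m: targets assigned the wrong source label
    d_m = sum(1 for tid, lab in r if lab != gt_label[tid])
    # r': r with all labels corrected, a permutation of g's label sequence
    r_prime = [gt_label[tid] for tid, _ in r]
    return d_m + order_distance(r_prime, [lab for _, lab in g])
```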
+
+Element-wise Correspondence. Table 3 shows the result for a subset of the designs that we tested. The full set of results is included in the supplementary material. The source graphics is highlighted with a red outline. The rest of the graphics minus the greyed-out background is the target graphics. We obtain close to perfect element-wise matches even when the target graphics has multiple types of variations from the source graphics. The match performance was comparable across the Separate and All scenarios, with only slightly better accuracy for the Separate case. This means that the user can transfer edits from the source graphics to multiple sets of target graphics at once without having to select each target set individually, which is especially tedious when there are many elements in a complex layout.
+
+Ordered Target Trees. The accuracy of the ordered target trees varied widely across the designs. In order to obtain a perfect result, we must infer the correct match as well as the correct number of clusters ($k$). Errors in either of these steps can have a big impact on the edit distance, ${D}_{e}$. For example, our algorithm computed a perfect element-wise match for D1. However, because the target graphics contained far fewer elements than the source graphics, our simple heuristic incorrectly inferred that $k = 1$. This led to a large ${D}_{e}$ since many elements were out of sequence. On the other hand, for D7, our heuristic correctly inferred that $k = 3$, and in fact, the clustering algorithm accurately identified elements belonging to each target set, in this case, the individual profiles. However, the relatively poor element-wise match mixed up the ordering of the elements within each target subtree, resulting in a larger ${D}_{e}$. Manually correcting the match (Perfect Match) and recomputing the target trees produced a perfect result $\left( {{D}_{e} = 0}\right)$. In general, if the ordering error is due to the incorrect element-wise match, correcting the match will improve the accuracy of the resulting target trees. However, if the clustering of targets itself is wrong, correcting the match can only partially alleviate the error. In §6, we also compare the results given a ground-truth $k$.
+
+Nested Hierarchy. Note that if each source element matches exactly one target element, the structure of the target tree is completely determined by the element-wise correspondence. For example, the designs shown in Table 3 contain multiple top-level target subtrees that represent different sets of target elements that each map to the source graphics. However, within each target subtree, there is a one-to-one match between the source and target elements. Therefore, once we compute the top-level clustering of target elements into multiple target subtrees, the hierarchy within each target subtree is completely determined by the element-wise correspondence.
+
+Still, it is common for target graphics to have a different cardinality of constituent elements. For example, in Figure 4(a), each slice of the light bulb contains a different number of symbols (e.g., flashlight, necktie, dollar sign), each of which consists of multiple path elements. Our algorithm handles these cases by recursively clustering target elements into sub-trees. For example, in Figure 4(c) T1, the 6 path elements that make up the necktie symbols are matched to the 2 source path elements that make up the flashlight in S. These paths are clustered into lower-level subtrees that represent 3 necktie symbols.
+
+Editing Applications. Our algorithm for computing correspondences and ordered target trees supports a variety of edit transfer scenarios. As noted earlier, transferring animation effects is uniquely challenging because they often involve both temporal (ordering) and hierarchical structure. To demonstrate this application, we implemented a prototype system that uses our automatic computation to transfer animation patterns from source to target elements. The accompanying submission video shows animation transfers for several designs from our test dataset. In addition, Figure 1 shows other possible edit transfer scenarios. We created these examples by computing correspondences and ordered target trees for each set of source elements and then manually applying the corresponding edits to the target elements. In this process, we did not correct or modify the automatic output of our algorithm.
+
+## 6 Ablation Experiments
+
+To further evaluate the impact of different aspects of our algorithm, we conducted ablation experiments by removing key parts of our method or replacing them with simpler baselines.
+
+### 6.1 Removing Edge Kernels
+
+The graph walk kernel defined in Equation 4 compares two nodes ${n}_{s}$ and ${n}_{t}$ by comparing the similarity of their respective relationships with neighboring nodes. The walk length, $p$, determines the size of the neighborhood to consider. In our implementation, we experimentally determined that $p = 1$ obtained satisfactory results. That is, considering only immediate neighbors was enough to predict good element-wise correspondences. We compare this approach to $p = 0$, where we disregard the relationships between the nodes and only use the node kernels to compute element matches. Table 2 (column Node Kernel Only) shows the result.
+
+| Design | Ours ($p = 1$) | Node Kernel Only ($p = 0$) | Uniform $\omega$ ($p = 1$) | Greedy Match |
+| --- | --- | --- | --- | --- |
+| Average | 0.95 | 0.82 | 0.93 | 0.93 |
+| D1 | 1.00 | 0.95 | 1.00 | 0.83 |
+| D2 | 1.00 | 0.76 | 1.00 | 1.00 |
+| D3 | 0.94 | 0.80 | 0.94 | 0.97 |
+| D4 | 1.00 | 0.76 | 1.00 | 1.00 |
+| D5 | 0.90 | 0.78 | 0.86 | 0.87 |
+| D6 | 1.00 | 0.94 | 0.97 | 1.00 |
+| D7 | 0.78 | 0.78 | 0.78 | 0.78 |
+
+Table 2: Ablation experiments for element-wise matching.
+
+In the majority of cases, removing edge kernels and only considering node kernels produced worse results. This was especially true for designs that contained many elements that looked alike, with similar shapes and sizes. For example, in D3, the shorter target green bars get matched to the source blue bar because they have similar sizes. Likewise, in D6, each horizontal bar in the target graphics gets matched to the source bar with the closest size, rather than being matched according to its relative position. A particularly bad failure case is shown in Figure 5 D10 (Ours $=1.0$ vs. Node Kernel Only $=0.47$), where the shadow consists of multiple circle elements with the same shape and size. In this case, without the z-order or relative positioning information, it is challenging to get a correct pair-wise correspondence between these circles. This experiment demonstrates that inter-element relationships are critical for discerning element-wise correspondences in graphic designs.
+
+### 6.2 Uniform Kernel Weights
+
+In §4.2.4, we describe a method for determining the importance and thus the weights ($\omega$) of each feature kernel. We evaluate the effectiveness of these weights by replacing them with uniform weights and comparing the results. Interestingly, in most cases, kernel weights did not have a significant impact on the performance. In a few cases, adaptive weights (ours) produced better matches compared to uniform weights. For instance, in D5, all the symbol paths have the same type (path), so we put a small weight on the type kernel and higher weights on other features such as the positions of the paths relative to each other. This helps to differentiate the subtle differences between these paths. Adaptive weights are also useful for matching elements where some features are much more powerful than others. For example, in Figure 5, D11 (Ours $=0.97$ vs. Uniform Weights $=0.86$), the type and font style attributes are much more powerful than the shape or layout relationship features. Still, for most designs, there are many features with discriminatory power, and replacing the $\omega$s with a uniform weight produces as good a match as our previous result.
+
+
+
+Figure 5: (a) Design with many elements that have similar appearance. The shadow parts consist of multiple circle elements with the same shape and size. Inter-element relationships are critical for discerning correspondence between such designs. (b) Depending on the design, some features are more discriminatory than others. In D11, element type and font style attributes are more powerful features than the relative positioning between the elements.
+
+### 6.3 Greedy Matching
+
+In §4.2.5, we describe an iterative method by which we first match confident pairs of nodes, and then use these matches to iteratively refine the similarity scores of other pairs of nodes. We compare this approach to a greedy algorithm, whereby we simply match each target node to the source node with the highest similarity score. Table 2, column Greedy Match, shows the result. For some designs, the greedy method produced as good a match as our iterative method. However, for other designs (e.g., D1 and D5), the greedy method performed worse. These were designs where a target element had several closely similar source elements. For example, in the UI design shown in D1, the different text elements for the user input fields are closely similar. In D5, the symbol paths as well as the solid blocks with 4 sides are closely similar to each other. In these scenarios, using the more confident matches to refine the similarity scores helped improve the pair-wise match.
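The greedy baseline is a one-liner: each target independently takes its highest-scoring source, with no refinement of scores between assignments. The `score` callable is an illustrative stand-in for the pairwise similarity.

```python
def greedy_match(sources, targets, score):
    """Baseline matcher: assign each target to the source with the highest
    similarity score, independently of all other assignments."""
    return {t: max(sources, key=lambda s: score(s, t)) for t in targets}
```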
+
+### 6.4 Clustering using Element Positions
+
+The second stage of our algorithm (§4.3) uses hierarchical clustering of the target elements to compute target trees. As noted in §4.3.1, a key insight of our method is that the relationship between two target elements that belong to the same cluster should be similar to the relationship between their corresponding source elements. As a result, we use the graph walk kernel to analyze every pair of target elements and populate the similarity matrix used by the clustering procedure. To evaluate the importance of this insight, we compare our approach to a simpler heuristic that defines the similarity between every pair of target elements as their Euclidean distance (i.e., closer means more similar). We approximate element positions using the centroids of their bounding boxes. For both methods, we provide the ground-truth element-wise correspondences and the correct number of clusters, $k$.
+
+Table 4 reports the results of the comparison. Using centroid distance produces worse clusters, especially when the desired target clusters are close to each other and arranged in a nonlinear layout (e.g., D2, D5). Even for designs with relatively simple layouts like D3, where the target clusters are visually separated from each other, the difference between the vertical and horizontal spacing coupled with the tall aspect ratio of some of the elements makes it challenging to estimate the correct clustering using only element centroids. In general, users can select an arbitrary arrangement of elements as the source graphics (not just ones that are geometrically close to each other). The graph walk kernel distance is better suited to handle these cases by producing clusters that reflect the original arrangement of the source graphics, as shown in the synthetic toy example, D8.
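The centroid baseline amounts to a pairwise Euclidean distance matrix over bounding-box centers, which can then be fed to the same agglomerative clustering. The `bbox` accessor returning `(x0, y0, x1, y1)` is an assumption of this sketch.

```python
import math

def centroid_distance_matrix(elements, bbox):
    """Baseline similarity for clustering: pairwise Euclidean distance
    between bounding-box centroids (smaller distance = more similar)."""
    def centroid(e):
        x0, y0, x1, y1 = bbox(e)
        return ((x0 + x1) / 2, (y0 + y1) / 2)

    cents = [centroid(e) for e in elements]
    return [[math.dist(a, b) for b in cents] for a in cents]
```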
+
+| Design | Source #Elements | Target #Elements ($N$) | #Variations | Variation Types | Match (Separate) | Match (All) | Tree Score $D_e/N$ (Default) | Tree Score $D_e/N$ (Perfect Match) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Avg | 11.0 | 36.6 | 4.23 | | 0.95 | 0.95 | 0.31 | 0.15 |
+| D1 | 17 | 21 | 2 | Text, Cardinality | 1.00 | 1.00 | 0.81 ($k = 1$) | - |
+| D2 | 10 | 50 | 3 | Color, Shape, Size | 1.00 | 1.00 | 0.06 ($k = 6$) | - |
+| D3 | 9 | 36 | 4 | Cardinality, Shape, Size, Layout, Text | 1.00 | 0.97 | 0.03 ($k = 4$) | 0 ($k = 4$) |
+| D4 | 8 | 33 | 5 | Cardinality, Color, Layout, Shape, Text | 1.00 | 1.00 | 0 ($k = 4$) | - |
+| D5 | 16 | 60 | 5 | Cardinality, Color, Layout, Shape, Size | 0.90 | 0.90 | 0.50 ($k = 5$) | 0.08 ($k = 5$) |
+| D6 | 10 | 33 | 5 | Cardinality, Color, Size, Text, Layout | 1.00 | 1.00 | 0 ($k = 2$) | - |
+| D7 | 7 | 23 | 6 | Cardinality, Layout, Shape, Size, Text, Type | 0.78 | 0.78 | 0.65 ($k = 3$) | 0 |
+
+Table 3: Representative examples from our test data set. For each design, we evaluate the element-wise correspondence and clustering separately.
+
+Please refer to supplementary material for full results.
+
+
+
+Table 4: Comparison of distance metrics used for clustering target elements (Graph Walk Kernel vs. Centroid distance). The graph walk kernel distance produces clusters that reflect the original arrangement of the source graphics. *D8: red outline indicates source graphics, blue indicates target clusters.
+
+## 7 LIMITATION AND FUTURE WORK
+
+Although our algorithm is designed to handle different types of variations between the source and target graphics, as expected, large geometrical, style, and structural variations tend to produce erroneous matching and clustering results. Clustering is more prone to error because it is sensitive to the number of clusters, $k$ , as well as the match results. We use a simple heuristic to determine the number of target clusters, which works for many cases, but can also fail easily. Users could provide the ground-truth $k$ as input, but this also becomes tedious if the desired target tree is deeply nested and each subtree requires a different value of $k$ .
+
+One area for future work is to use the document edit history to inform the matching and clustering. For example, knowing which elements were copy-pasted, or which elements were selected and modified together could provide strong hints about matching elements. The creation or edit order of the target elements could also be used to inform the clustering and ordering.
+
+Another way to improve the algorithm is to take advantage of user corrections of the output. For example, when the user corrects an erroneous match, we could use this ground-truth match as a confident match to recompute the graph walk kernels, or to infer better weights for the kernels. If there are multiple errors in the output, we could potentially reduce the number of manual corrections needed by using each correction to recompute a new, improved output. In general, since the type and power of discriminatory features varies by design and user intent, learning from user inputs is an interesting avenue for future work.
+
+## 8 CONCLUSION
+
+In this work, we present an approach to help designers apply consistent edits across multiple sets of elements. Our method allows users to select an arbitrary set of source elements, apply the desired edits, and then automatically transfer the edits to a collection of target elements. Our algorithm retroactively infers the shared structure between the source and target elements to find the correspondence between them. Our approach can be applied to any existing design without manual annotation or explicit structuring. It is flexible enough to accommodate common variations between the source and target graphics. Finally, it generalizes to different types of editing operations such as style transfer, layout adjustments or applying animation effects. We demonstrate our algorithm on a range of real-world designs, and show how our approach can facilitate editing workflows.
+
+## REFERENCES
+
+[1] D. Edge, S. Gulwani, N. Milic-Frayling, M. Raza, R. Adhitya Saputra, C. Wang, and K. Yatani. Mixed-initiative approaches to global editing in slideware. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pages 3503-3512, New York, NY, USA, 2015. ACM.
+
+[2] M. Fisher, M. Savva, and P. Hanrahan. Characterizing structural relationships in scenes using graph kernels. In ACM transactions on graphics (TOG), volume 30, page 34. ACM, 2011.
+
+[3] P. Guerrero, G. Bernstein, W. Li, and N. J. Mitra. PATEX: Exploring pattern variations. ACM Trans. Graph., 35(4):48:1-48:13, 2016.
+
+[4] Z. Harchaoui and F. Bach. Image classification with segmentation graph kernels. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2007.
+
+[5] R. Hoarau and S. Conversy. Augmenting the scope of interactions with implicit and explicit graphical structures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, page 1937-1946, New York, NY, USA, 2012. Association for Computing Machinery.
+
+[6] Figma. http://www.figma.com, 2020 (accessed Dec 19, 2020).
+
+[7] R. Kumar, J. O. Talton, S. Ahmad, and S. R. Klemmer. Bricolage: Example-based retargeting for web design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, page 2197-2206, New York, NY, USA, 2011. Association for Computing Machinery.
+
+[8] D. Kurlander. Watch what i do. chapter Chimera: Example-based Graphical Editing, pages 271-290. MIT Press, Cambridge, MA, USA, 1993.
+
+[9] T. F. Liu, M. Craft, J. Situ, E. Yumer, R. Mech, and R. Kumar. Learning design semantics for mobile apps. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pages 569-579, 2018.
+
+[10] Z. Lun, C. Zou, H. Huang, E. Kalogerakis, P. Tan, M.-P. Cani, and H. Zhang. Learning to group discrete graphical patterns. ACM Transactions on Graphics (TOG), 36(6):225, 2017.
+
+[11] P. Mahé, N. Ueda, T. Akutsu, J.-L. Perret, and J.-P. Vert. Extensions of marginalized graph kernels. In Proceedings of the twenty-first international conference on Machine learning, page 70, 2004.
+
+[12] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, USA, 2008.
+
+[13] L. Nan, A. Sharf, K. Xie, T.-T. Wong, O. Deussen, D. Cohen-Or, and B. Chen. Conjoining gestalt rules for abstraction of architectural drawings, volume 30. ACM, 2011.
+
+[14] G. Nikolentzos, P. Meladianos, F. Rousseau, Y. Stavrakas, and M. Vazirgiannis. Shortest-path graph kernels for document similarity. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1890-1900, 2017.
+
+[15] J. Shawe-Taylor, N. Cristianini, et al. Kernel methods for pattern analysis. Cambridge university press, 2004.
+
+[16] O. Št'ava, B. Beneš, R. Měch, D. G. Aliaga, and P. Krištof. Inverse procedural modeling by automatic generation of l-systems. In Computer Graphics Forum, volume 29, pages 665-674. Wiley Online Library, 2010.
+
+[17] S. L. Su, S. Paris, and F. Durand. Quickselect: History-based selection expansion. In Proceedings of the 35th Graphics Interface Conference, pages 215-221, 2009.
+
+[18] R. Szeliski et al. Image alignment and stitching: A tutorial. Foundations and Trends® in Computer Graphics and Vision, 2(1):1-104, 2007.
+
+[19] J. Talton, L. Yang, R. Kumar, M. Lim, N. Goodman, and R. Měch. Learning design patterns with bayesian grammar induction. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, page 63-74, New York, NY, USA, 2012. Association for Computing Machinery.
+
+[20] O. Van Kaick, H. Zhang, G. Hamarneh, and D. Cohen-Or. A survey on shape correspondence. In Computer Graphics Forum, volume 30, pages 1681-1707. Wiley Online Library, 2011.
+
+[21] K. Wang, Y.-A. Lin, B. Weissmann, M. Savva, A. X. Chang, and D. Ritchie. Planit: planning and instantiating indoor scenes with relation graph and spatial prior networks. ACM Transactions on Graphics (TOG), 38(4):132, 2019.
+
+[22] F. Wu, D.-M. Yan, W. Dong, X. Zhang, and P. Wonka. Inverse procedural modeling of facade layouts. arXiv preprint arXiv:1308.0419, 2013.
+
+[23] H. Xia, B. Araujo, and D. Wigdor. Collection objects: Enabling fluid formation and manipulation of aggregate selections. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, page 5592-5604, New York, NY, USA, 2017. Association for Computing Machinery.
+
+[24] P. Xu, H. Fu, C.-L. Tai, and T. Igarashi. Gaca: Group-aware command-based arrangement of graphic elements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 2787-2795. ACM, 2015.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0164a81b548dc3f64f8b8ccd4468b8d13956f182
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/7rM-nGqEpIe/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,428 @@
+§ MULTI-LEVEL CORRESPONDENCE VIA GRAPH KERNELS FOR EDITING VECTOR GRAPHICS DESIGNS
+
+Category: Research
+
+
+Figure 1: Graphic designs often contain repeating sets of elements with similar structure. We introduce an algorithm that automatically computes this shared structure, which enables graphical edits to be transferred from a set of source elements to multiple targets. For example, designers may want to propagate isolated edits to element attributes (a), apply nested layout adjustments (b), or transfer edits across different designs (c).
+
+§ ABSTRACT
+
+To create graphic designs such as infographics, UI designs, and explanatory diagrams, designers often need to apply consistent edits across similar groups of elements, which is a tedious task to perform manually. One solution is to explicitly specify the structure of the design upfront and leverage it to transfer edits across elements that share the predefined structure. However, defining such structure requires a lot of forethought, which conflicts with the iterative workflow of designers. We propose a different approach where designers select an arbitrary set of source elements, apply the desired edits, and automatically transfer the edits to similarly structured target elements. To this end, we present a graph kernel based algorithm that retroactively infers the shared structure between source and target elements. Our method does not require any explicit annotation, and can be applied to any existing design regardless of how it was created. It is flexible enough to handle differences in structure and appearance between source and target graphics, such as cardinality, color, size, and arrangement. It also generalizes to different types of edits such as style transfer or applying animation effects. We evaluate our algorithm on a range of real-world designs, and demonstrate how our approach can facilitate various editing scenarios.
+
+Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
+
+§ 1 INTRODUCTION
+
+Graphic designs such as infographics, UI mockups, and explanatory diagrams often contain multiple sets of elements with similar visual structure. For example, in Figure 1(a), each chart is represented by a circle, a country name tag, three data bars and text annotations arranged in a consistent manner. There are similar repetitions across the graphics for each building in Figure 1(b). In some cases, we also see elements with consistent visual structure across multiple different designs. For example, Figure 1(c) shows two separate diagrams created by the same designer that share a similar structure.
+
+Designers often need to apply consistent edits such as style changes, layout adjustments, and animation effects across these repeating sets of elements. For example, Figure 1(a) shows adjustments to fill color and adding drop shadows for various elements, and Figure 1(b) shows modifications to the spacing and layout of the text and graphics for each building. Such edits are tedious to perform manually, especially as the number of elements increases. One solution is to explicitly specify the structure of the design upfront and leverage it to transfer edits across sets of elements that share the predefined structure. For example, Microsoft PowerPoint's master slide feature allows users to edit the appearance of multiple slides at once. Similarly, popular UX design tools (e.g., Adobe XD, Figma) encourage users to define master symbols or components that control the properties of repeated instances within a design, such as buttons, icons or banners. However, defining such structure ahead of time requires substantial forethought, which often conflicts with designer workflows. In many cases, designers create and iterate on the whole graphic to get the overall design right before thinking about repeated elements, shared structure, or what edits to apply.
+
+We propose a different approach to help designers apply edits consistently across a design. Instead of asking users to explicitly structure their content ahead of time, we allow them to select an arbitrary set of source elements, apply the desired edits, and then automatically transfer the edits to a collection of target elements. In this workflow, the system is responsible for retroactively inferring the shared structure between the source and target elements. Thus, our approach can be applied to any design, regardless of how it was created. Our method is not limited to graphics that share identical structure or elements but is flexible enough to accommodate common variations between the source and target graphics, such as color, shape, arrangement, or element cardinality. Moreover, it generalizes to different types of edits such as style transfer, layout adjustments, or applying animation effects.
+
+A core challenge in realizing this approach is finding correspondences between the source and target graphics such that the appropriate source edits can be applied to each target element. First, both the source and target graphics may contain many similar elements, both related and unrelated. Second, the target graphics usually contain several differences from the source graphics. For instance, in Figure 1(a) the size as well as the position of the data bars relative to the circle element are slightly different from one another. The type and range of these differences vary for each design, making it hard to define a consistent matching algorithm based on heuristics. Finally, some types of edits must take into account nesting and ordering relationships between the elements. For example, in Figure 1(b), the designer may scale up the entire set of graphics for one of the buildings and then separately adjust the vertical spacing between the text elements. Transferring this edit properly to all the buildings requires that we identify the analogous hierarchical structure for the other graphical elements in the design by computing multi-level correspondence. Although designers often organize elements into various groups during the creation process, these are not a reliable indicator of perceptual structure and do not always correspond to the desired hierarchy for an edit or transfer. In addition, user-created groups typically do not encode the ordering of elements, which is important for temporal effects like animation.
+
+The main contribution of our work is an automatic algorithm for determining the shared structure between source and target elements that addresses these challenges. Our method is based on graph kernels. Given source and target graphics, we compute relationship graphs that encode the structure of the elements. We then analyze the source and target relationship graphs using graph kernels to compute element-wise correspondences. We also introduce an efficient method to hierarchically cluster and sequence the target elements into ordered trees whose structure is consistent with the source graphics. Together, the correspondences and ordered trees make it possible to transfer edits from the source to target elements. We evaluate our algorithm on a range of real-world designs, and demonstrate how our approach facilitates graphical editing.
+
+§ 2 RELATED WORK
+
+Inferring Structure in Graphical Designs A large body of work focuses on automatically estimating the internal structure of graphic designs to facilitate authoring and editing. For example, in the context of graphical patterns, [10] computes perceptual grouping of discrete patterns, and [3] encodes the structure of a pattern in a directed graph representation to create design variations. In the context of web designs, [7] leverages the tree structure of the DOM as well as its style and semantic attributes to create mappings between web pages. [9] generates semantic annotations for mobile app UIs by extracting patterns from the source code. In the context of layout optimization, [24] automatically decomposes a 2D layout into multiple 1D groups to perform group-aware layout arrangement. Other examples include structural analysis of architectural drawings [13], procedurally generated designs [16, 22], 3D designs [19] and 3D scenes [2, 21]. Our work focuses on 2D vector graphics designs, with the purpose of facilitating edit operations.
+
+While some design tools such as [6] allow users to create graphical designs procedurally (so that repeated elements can be perfectly matched, edited, and animated), such tools are not commonplace, and the resulting designs lose their structural information when exported to portable formats such as SVG or PDF. We propose a generic solution that depends only on the graphics, regardless of how they were created.
+
+A different approach to purely automatic inference is mixed-initiative methods that take advantage of user interactions to infer structure. For example, previous work has analyzed user edits to extract implicit groupings of vector graphic objects [17], detect related elements in slide decks [1], and infer graphical editing macros by example based on inter-element relationships [8]. Some existing techniques introduce new interactive tools that facilitate manipulation or selection of multiple elements through a combination of explicit user actions and automatic inference [5,23]. In contrast, we propose an automatic algorithm to determine the shared structure within graphic designs. Our method does not require user annotations or edit history, which means it can be applied to any existing vector graphics design, regardless of how it was created.
+
+§ COMPUTING CORRESPONDENCE BETWEEN GRAPHIC DESIGNS
+
+Computing the correspondence between two analogous designs is a long-standing problem in graphics with many applications. Many techniques have been developed for computing correspondences between a pair of images [18], 3D shapes [20], and 3D scenes [2]. These algorithms exploit local features as well as global structures to compute the correspondence.
+
+One family of techniques that has proven highly effective for comparing different objects is kernel-based methods. There is ample work on defining kernels between highly structured data types [15]. In particular, one approach is to represent objects or collections of objects as a graph and define a kernel over the graphs. This approach has been applied to a variety of problems such as molecule classification [11], computing document similarity [14], image classification [4], and 3D scene comparison [2]. Our algorithm is directly inspired by [2], which uses graph kernels to compute a similarity between 3D scenes. To the best of our knowledge, there is no prior work that computes a pairwise element-to-element correspondence between two sets of vector-based graphical elements. In addition to element-wise correspondence, we also infer the nesting and ordering relationships between the elements, which is crucial for transferring complex edit operations.
+
+§ 3 OVERVIEW
+
+The input to our method is a set of source elements that the user has manually edited, and a set of target elements to which the user wants to transfer the edits. Transferring isolated changes to element attributes (e.g., fill color, text formatting) simply requires matching each target element to the appropriate source element and applying the corresponding edit. However, other types of edits define nesting and ordering relationships that must be taken into account. For example, many layout changes are applied hierarchically. In Figure 1(b), the designer may scale up the entire set of graphics for the Burj Khalifa and then separately adjust the vertical spacing and arrangement of the text elements in that column. Transferring this edit properly to all the buildings requires that we identify the analogous hierarchical structure for the other graphical elements in the design. In addition, the ordering between elements is important for temporal effects like animation. In Figure 1(a), the designer may want to apply animated entrance effects, in a specific sequence, to the set of elements for the Central Mediterranean region. Transferring this edit to the rest of the design requires grouping the elements for each year and determining the appropriate animation order.
+
+To ensure that our method generalizes to these various editing scenarios, we specify the desired nesting and ordering relationships amongst the source elements as part of the input. Specifically, the source elements are represented as an ordered tree (source tree), and our goal is to organize the target elements into one or more ordered target trees that correspond to the source tree structure (Figure 4). Note that the problem becomes much simpler if each source element is allowed to match no more than one target element, or if we assume that the target elements are already organized into ordered trees that match the source tree. In such cases, finding the appropriate element-wise correspondence is sufficient. However, these assumptions are not realistic for most real-world design workflows. They either require users to manually select subsets of target elements to perform individual transfer operations, which is inconvenient for designs with many repeated components, or to arrange graphics into consistently ordered trees (which may differ per editing operation) ahead of time.
+
+Thus, we propose an algorithm for computing the shared structure between source and target elements that does not limit the number of target elements or assume the presence of consistent pre-defined structure in the design.
+
+§ 4 ALGORITHM
+
+Our algorithm is composed of two main stages. First, we compute an element-wise correspondence by finding the best matching source element for each target element. Then, we compute a hierarchical clustering of the target elements and organize them into ordered target trees. Overall, our approach is heavily inspired by Fisher et al. [2]. While [2] compute global similarity between entire 3D scenes, we need a detailed, structured correspondence between individual elements. This requires three new aspects in our approach.
+
+ * Finding an optimal element-wise match requires a similarity score for each source-target pair (vs. a single similarity score between two scenes in [2]), and an algorithm to find the best match using these scores. (Section 4.2.5)
+
+ * Determining the nesting and ordering that corresponds to the edit operations requires clustering the elements and inferring their order. (Section 4.3)
+
+ * We use different low-level kernels pertinent to comparing graphic designs. For example, while [2] uses binary edge kernels by comparing edge types, we assign partial similarity by also considering the distance between elements. Such details are important to determine a correct match and nesting for designs which may contain many elements that share identical relationships.
+
+In the following, we use similar notation as [2] to illustrate how our method relates to and extends the previous work.
+
+§ 4.1 RELATIONSHIP GRAPHS
+
+Given the set of source elements $s$ and target elements $t$ , we start by constructing relationship graphs, ${G}_{s}$ and ${G}_{t}$ for the source and target graphics, respectively. The graph nodes represent the graphical elements, and the edges specify relationships between those elements. We rely on the intuition that elements have certain relationships that characterize the structure of the design and make two designs more or less similar to each other. For example, in Figure 1(a), the text elements 'Central', 'Mediterranean' and '91,302' are center-aligned with each other and all contained inside the orange circle element. The blue and green charts also contain analogous text and circle elements that share these relationships. We selected a number of prevalent relationships by observing real world designs. Table 1 shows the list of the relationships and the process used to test for them. In general, the graph may contain multiple edges between a pair of nodes.
+
+
+Figure 2: An example relationship graph for a set of elements. Only a subset of the edges are shown.
+
+The tests are performed in the order listed. To eliminate redundancy, we encode at most one edge for a given category between any two elements. For example, if elements A and B are both center-aligned and left-aligned, we only encode the center-aligned relationship since that is the first test to be satisfied in the horizontal alignment category. Figure 2 is an example illustrating different edges in a relationship graph.
+
+Note that we do not encode grouping information from the designer. Grouping structure created during the authoring process is usually not a reliable indicator of the visual structure that informs most edit transfer operations. Thus, we decided to disregard all such groups when constructing the relationship graphs.
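As an illustrative sketch of this construction (the `Element` representation and edge names are our own simplification, covering only the intersection-category tests from Table 1), the relationship graph can be built by running the edge tests on every ordered pair of bounding boxes, keeping only the first satisfied test per category:

```python
from dataclasses import dataclass

@dataclass
class Element:
    x: float  # left
    y: float  # top
    w: float  # width
    h: float  # height

def contained_in(a: Element, b: Element) -> bool:
    """A's bounding box lies inside B's bounding box."""
    return (a.x >= b.x and a.y >= b.y and
            a.x + a.w <= b.x + b.w and a.y + a.h <= b.y + b.h)

def overlap(a: Element, b: Element) -> bool:
    """Bounding boxes intersect."""
    return not (a.x + a.w < b.x or b.x + b.w < a.x or
                a.y + a.h < b.y or b.y + b.h < a.y)

def build_relationship_graph(elements):
    """Return (i, j, edge_type) tuples; at most one edge per category
    between any two elements, taking the first test that is satisfied."""
    edges = []
    for i, a in enumerate(elements):
        for j, b in enumerate(elements):
            if i == j:
                continue
            # Intersection category, tested in the order listed in Table 1.
            if contained_in(a, b):
                edges.append((i, j, "contained-in"))
            elif overlap(a, b):
                edges.append((i, j, "overlap"))
    return edges

circle = Element(0, 0, 100, 100)
label = Element(30, 40, 40, 20)  # sits inside the circle's box
print(build_relationship_graph([circle, label]))
```

A full implementation would add the z-order, alignment, adjacency, and style tests analogously, each as another ordered category.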
+
+§ 4.2 COMPUTING ELEMENT-WISE CORRESPONDENCE
+
+After constructing source and target relationship graphs, we compute correspondences between their nodes. More specifically, for each target graph node ${n}_{t} \in {G}_{t}$ , we find the closest matching source graph node ${n}_{s} \in {G}_{s}$ using a graph kernel based approach inspired by [2]. To apply the method, we define separate kernels to compare individual nodes and edges across the two graphs.
+
+§ 4.2.1 NODE KERNEL
+
+The nodes in our relationship graph represent individual graphical elements, with a number of properties such as type, shape, size and style attributes. The node kernel is a combination of several functions, each of which takes as input two nodes and computes the similarity of different features of the nodes. Each function described below is constructed to be positive semi-definite and bounded between 0 (no similarity) and 1 (identical).
+
+Type Kernel $\left( {k}_{\text{ type }}\right)$ : Graphic elements are typically categorized as shapes (e.g., path, circle, rectangle), images, or text. In particular, most graphic design products and the SVG specification distinguish objects in this way. The type kernel returns 1 if two nodes have the exact same type (e.g., circle-circle), 0.5 if they are in the same category (e.g., circle-path), and 0 otherwise.
+
+Size Kernel $\left( {k}_{\text{ size }}\right)$ : This function compares the bounding box size of two elements. It returns the area of the smaller bounding box divided by the area of the larger bounding box.
+
+
+| Category | Edge | Test |
+| --- | --- | --- |
+| Intersection | Overlay | Element A is contained in element B and vice versa. |
+| Intersection | Contained in | Element A is contained in element B if A's bounding box is inside B's bounding box. |
+| Intersection | Overlap | Element A overlaps element B if their bounding boxes intersect. |
+| Z-Order | Z-Above / Z-Below | Element A is Z-Above (Z-Below) element B if elements A and B overlap, and A's z-order is higher (lower) than that of B. |
+| Vertical alignment | Center / Left / Right | Similar to intersection relationships, alignment is computed on element bounding boxes. |
+| Horizontal alignment | Middle / Top / Bottom | Similar to intersection relationships, alignment is computed on element bounding boxes. |
+| Horizontal adjacency | Left of / Right of | Element A is left-of element B if its bounding box is to the left of B's bounding box within a threshold, and if the vertical ranges of their bounding boxes overlap.(*) |
+| Vertical adjacency | Above / Below | Analogous to horizontal adjacency, tested on the vertical axis.(*) |
+| Style | Same Style | While there is a plethora of style attributes for each element, we use fill color and stroke style for non-text elements, and font style for text elements, since these attributes are visually most apparent. |
+
+Table 1: Edges encoded in relationship graphs. (*For the threshold, we use $\frac{1}{2}$ (width of source graphics bounding box) for horizontal adjacency, and $\frac{1}{2}$ (height of source graphics bounding box) for vertical adjacency. If element $A$ is left of multiple other elements, we only encode the relationship with the closest element. These constraints prevent edges between elements that are far apart relative to the size of the source graphics.)
+
+
+Figure 3: The element shape kernel, ${k}_{\text{ shape }}$ computes the difference between the normalized bitmap images of the elements' silhouettes.
+
+Shape Kernel $\left( {k}_{\text{ shape }}\right)$ : We obtain the normalized shape (ignoring aspect ratio) of each element by taking its filled silhouette and scaling it into a ${64} \times {64}$ bitmap image (Figure 3). The element shape kernel returns the percentage image difference between two normalized shapes.
+
+Font Kernel $\left( {k}_{\text{ font }}\right)$ : For comparing two text elements, we consider their font style attributes. Specifically, we compare font-family, font-size, font-style (e.g., normal, italic) and font-weight (e.g., normal, bold). We return the percentage of style attributes that have equal values.
+
+The final node kernel, ${k}_{\text{ node }}$ , is a weighted sum of the above kernels. Since many editing operations (e.g., changing font size, applying a character-wise animation effect) are non-transferable between text and shape elements, we separate text elements and non-text elements, and only compare elements within the same category. For comparing shape elements, we take into account type, size and shape kernels.
+
+$$
+{k}_{\text{ node }}\left( {{n}_{s},{n}_{t}}\right) = {\omega }_{\text{ type }}{k}_{\text{ type }}\left( {{n}_{s},{n}_{t}}\right) + {\omega }_{\text{ size }}{k}_{\text{ size }}\left( {{n}_{s},{n}_{t}}\right) + {\omega }_{\text{ shape }}{k}_{\text{ shape }}\left( {{n}_{s},{n}_{t}}\right) \tag{1}
+$$
+
+For text elements, font style attributes are deemed more discriminatory than shape or size.
+
+$$
+{k}_{\text{ node }}\left( {{n}_{s},{n}_{t}}\right) = {\omega }_{\text{ type }}{k}_{\text{ type }}\left( {{n}_{s},{n}_{t}}\right) + {\omega }_{\text{ font }}{k}_{\text{ font }}\left( {{n}_{s},{n}_{t}}\right) \tag{2}
+$$
+
+If ${n}_{s}$ and ${n}_{t}$ are not in the same category, we assign a small constant (0.1) instead. The weights, ${\omega }_{\text{ type }},{\omega }_{\text{ size }},{\omega }_{\text{ shape }}$ and ${\omega }_{\text{ font }}$ , are defined per source element. §4.2.4 details how we compute these weights.
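The node kernel of Eqs. 1-2 can be sketched as follows (the element representation, helper names, and the stubbed shape kernel are our own; the paper rasterizes 64x64 silhouettes for ${k}_{\text{shape}}$):

```python
def k_type(ns, nt):
    """1 if identical type, 0.5 if same category (shape/image/text), else 0."""
    if ns["type"] == nt["type"]:
        return 1.0
    return 0.5 if ns["category"] == nt["category"] else 0.0

def k_size(ns, nt):
    """Smaller bounding-box area divided by the larger one."""
    a1, a2 = ns["w"] * ns["h"], nt["w"] * nt["h"]
    return min(a1, a2) / max(a1, a2)

def k_font(ns, nt):
    """Fraction of font style attributes with equal values."""
    keys = ("font-family", "font-size", "font-style", "font-weight")
    return sum(ns["font"][k] == nt["font"][k] for k in keys) / len(keys)

def k_node(ns, nt, w, k_shape=lambda a, b: 1.0):
    # k_shape is stubbed here; the paper compares normalized silhouette bitmaps.
    if (ns["category"] == "text") != (nt["category"] == "text"):
        return 0.1                                     # cross-category constant
    if ns["category"] == "text":                       # Eq. 2
        return w["type"] * k_type(ns, nt) + w["font"] * k_font(ns, nt)
    return (w["type"] * k_type(ns, nt)                 # Eq. 1
            + w["size"] * k_size(ns, nt)
            + w["shape"] * k_shape(ns, nt))

circle = {"type": "circle", "category": "shape", "w": 10, "h": 10}
rect = {"type": "rect", "category": "shape", "w": 5, "h": 10}
print(k_node(circle, rect, {"type": 0.4, "size": 0.3, "shape": 0.3}))
```

Each component kernel is bounded in [0, 1], so with weights that sum to at most 1 the combined kernel stays bounded as well.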
+
+§ 4.2.2 EDGE KERNEL
+
+Next, we define an edge kernel to compute the similarity between a pair of edges that represent the relationship between two graphical elements. Each edge encodes a type of relationship (e.g., overlap, left-aligned). Based on our observation of real-world designs, we distinguish between strong edges and regular edges. Strong edges are highly discriminative relationships that tend to be preserved across design alterations. These include intersection and z-order relationships. All other edge relationships are deemed regular. In addition to the type $\left( \tau \right)$ , each edge also encodes the distance ($d$) between the two connected elements. Distances are approximated by the distances between the bounding box centers. Then, the kernel between two edges ${e}_{s}$ and ${e}_{t}$ with types ${\tau }_{{e}_{s}},{\tau }_{{e}_{t}}$ and distances ${d}_{{e}_{s}},{d}_{{e}_{t}}$ respectively is defined as:
+
+$$
+{k}_{\text{ edge }}\left( {{e}_{s},{e}_{t}}\right) = {\omega }_{{\tau }_{{e}_{s}}}c\left( {\tau }_{{e}_{s}}\right) \delta \left( {{\tau }_{{e}_{s}},{\tau }_{{e}_{t}}}\right) \frac{\min \left( {{d}_{{e}_{s}},{d}_{{e}_{t}}}\right) }{\max \left( {{d}_{{e}_{s}},{d}_{{e}_{t}}}\right) } \tag{3}
+$$
+
+where $\delta$ is a Kronecker delta function which returns whether the two edge types ${\tau }_{{e}_{s}}$ and ${\tau }_{{e}_{t}}$ are identical. $c\left( {\tau }_{e}\right)$ is 2.5 if ${\tau }_{e}$ is a strong edge, and 1 if it is a regular edge. Again, ${\omega }_{{\tau }_{{e}_{s}}}$ is a weight factor that is computed per source element and per edge type. See §4.2.4 for details.
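A direct transcription of Eq. 3 (edge representation and the `STRONG` set are our assumptions; the 2.5 strong-edge factor follows the text):

```python
# Strong edges: intersection and z-order relationships (per the text).
STRONG = {"overlay", "contained-in", "overlap", "z-above", "z-below"}

def k_edge(es, et, w_tau):
    """Eq. 3. es, et: (type, distance) tuples; w_tau maps an edge type to
    the per-source-element weight from Eq. 7."""
    tau_s, d_s = es
    tau_t, d_t = et
    if tau_s != tau_t:                      # Kronecker delta term
        return 0.0
    c = 2.5 if tau_s in STRONG else 1.0     # strong vs. regular edge
    return w_tau[tau_s] * c * min(d_s, d_t) / max(d_s, d_t)

print(k_edge(("overlap", 10.0), ("overlap", 20.0), {"overlap": 0.5}))
```

The distance ratio term assigns partial similarity to edges of the same type whose endpoints are spaced differently, rather than treating them as a binary match.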
+
+§ 4.2.3 GRAPH WALK KERNEL
+
+Using the node and edge kernels we compute a graph walk kernel to compare nodes between two graphs. A walk of length $p$ on a graph is an ordered set of $p$ nodes on the graph along with a set of $p - 1$ edges that connect this node set together. We exclude walks that contain a cycle. Let ${W}_{G}^{p}\left( n\right)$ be the set of all walks of length $p$ starting at node $n$ in a graph $G$ . To compare nodes ${n}_{s}$ and ${n}_{t}$ in relationship graphs ${G}_{s}$ and ${G}_{t}$ respectively, we define the $p$ -th order rooted walk graph kernel ${k}_{R}^{p}$ :
+
+$$
+{k}_{R}^{p}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) =
+$$
+
+$$
+\mathop{\sum }\limits_{{{W}_{{G}_{s}}^{p}\left( {n}_{s}\right) ,{W}_{{G}_{t}}^{p}\left( {n}_{t}\right) }}{k}_{\text{ node }}\left( {{n}_{{s}_{p}},{n}_{{t}_{p}}}\right) \mathop{\prod }\limits_{{i = 1}}^{{p - 1}}{k}_{\text{ node }}\left( {{n}_{{s}_{i}},{n}_{{t}_{i}}}\right) {k}_{\text{ edge }}\left( {{e}_{{s}_{i}},{e}_{{t}_{i}}}\right) \tag{4}
+$$
+
+The walk kernel compares nodes ${n}_{s}$ and ${n}_{t}$ by comparing all walks of length $p$ whose first node is ${n}_{s}$ against all walks of length $p$ whose first node is ${n}_{t}$ . The similarity between a pair of walks is computed by comparing the nodes and edges that compose each walk using the node and edge kernels respectively.
+
+Finally, the similarity of nodes ${n}_{s}$ and ${n}_{t}$ is defined by taking the sum of the average walk graph kernels for all walk lengths up to $p$ :
+
+$$
+\operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) = \mathop{\sum }\limits_{p}\frac{{k}_{R}^{p}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{t}}\right) }{\left| {{W}_{{G}_{s}}^{p}\left( {n}_{s}\right) }\right| \left| {{W}_{{G}_{t}}^{p}\left( {n}_{t}\right) }\right| } \tag{5}
+$$
+
+where $\left| {{W}_{G}^{p}\left( n\right) }\right|$ is the number of all walks of length $p$ starting at node $n$ in a graph $G$ .
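Eqs. 4-5 can be sketched as follows (graph representation and function names are ours; `k_node` and `k_edge` are passed in as callables so any of the kernels above can be plugged in):

```python
from itertools import product

def walks(graph, n, p):
    """All cycle-free walks of p nodes (p-1 edges) starting at n.
    graph[n] = [(neighbor, edge), ...]; a walk is ([nodes], [edges])."""
    if p == 1:
        return [([n], [])]
    out = []
    for nxt, e in graph.get(n, []):
        for nodes, edges in walks(graph, nxt, p - 1):
            if n not in nodes:              # exclude walks containing a cycle
                out.append(([n] + nodes, [e] + edges))
    return out

def walk_kernel(Gs, Gt, ns, nt, p, k_node, k_edge):
    """Eq. 4: compare all pairs of length-p walks rooted at ns and nt."""
    total = 0.0
    for (sn, se), (tn, te) in product(walks(Gs, ns, p), walks(Gt, nt, p)):
        val = k_node(sn[-1], tn[-1])        # last-node term
        for i in range(p - 1):
            val *= k_node(sn[i], tn[i]) * k_edge(se[i], te[i])
        total += val
    return total

def similarity(Gs, Gt, ns, nt, max_p, k_node, k_edge):
    """Eq. 5: sum of average walk kernels for all walk lengths up to max_p."""
    sim = 0.0
    for p in range(1, max_p + 1):
        ws, wt = walks(Gs, ns, p), walks(Gt, nt, p)
        if ws and wt:
            sim += walk_kernel(Gs, Gt, ns, nt, p, k_node, k_edge) / (len(ws) * len(wt))
    return sim
```

Dividing by the walk counts in Eq. 5 keeps nodes with many outgoing edges from dominating the score purely by having more walks.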
+
+§ 4.2.4 KERNEL WEIGHTS
+
+The node and edge kernels in Equations 1-3 compare different features (e.g., shape, style, layout) of the source and target graphics. The weights applied to these kernels represent the importance of each feature in determining correspondence. It is not possible to assign globally meaningful weights because the discriminative power of each feature depends on the specific design and even specific elements within the design. For example, in Table 3, D4, 7 out of the 8 source elements have the same color, green. So, color is not a very discriminative feature for these elements. On the other hand, the circle element which contains the symbol has a unique pastel color within the source set. For this element, color is a highly discriminative feature.
+
+We assume that features that are highly discriminative within the source graphics will also be important when comparing to the target graphics. Based on this assumption, we determine a unique set of kernel weights for each element in the source graphics, $s$ , as follows.
+
+Node kernel weights: For each element ${n}_{{s}_{i}} \in s$ , we compute the node feature kernel between ${n}_{{s}_{i}}$ and every other element ${n}_{{s}_{j}} \in s$ . The weight ${\omega }_{\text{ feature }}$ is inversely proportional to the average feature kernel value.
+
+$$
+{\omega }_{\text{ feature }}\left( {n}_{{s}_{i}}\right) = {1.0} - \frac{\mathop{\sum }\limits_{{{n}_{{s}_{j}} \in s,j \neq i}}{k}_{\text{ feature }}\left( {{n}_{{s}_{i}},{n}_{{s}_{j}}}\right) }{\left| s\right| - 1} \tag{6}
+$$
+
+where $\left| s\right|$ is the number of elements in the source graphics. If the average kernel value for a certain feature is high, many elements within the source graphics share a similar value for that feature, so the feature is less discriminative and vice versa.
+
+Edge kernel weights: The weights for the edge kernel are defined for each source element, ${n}_{{s}_{i}}$ , and for each edge type, ${\tau }_{e}$ .
+
+$$
+{\omega }_{{\tau }_{e}}\left( {n}_{{s}_{i}}\right) = 1 - \frac{\mathop{\sum }\limits_{{{e}_{i} \in {E}_{{n}_{{s}_{i}}}}}\delta \left( {{\tau }_{{e}_{i}},{\tau }_{e}}\right) }{\left| {E}_{{n}_{{s}_{i}}}\right| } \tag{7}
+$$
+
+where ${E}_{{n}_{{s}_{i}}}$ is the set of all edges from node ${n}_{{s}_{i}}$ . The numerator counts the number of edges in ${E}_{{n}_{{s}_{i}}}$ that have type ${\tau }_{e}$ . If element ${n}_{{s}_{i}}$ has many edges of a given type, that edge type is less discriminatory for ${n}_{{s}_{i}}$ and vice versa. In §6.2, we conduct an ablation experiment, where we replace these weights with uniform weights.
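The two weight formulas translate directly (function names and argument shapes are our own; edges are represented only by their type strings):

```python
def node_weight(i, elements, k_feature):
    """Eq. 6: 1 minus the average feature-kernel value of source element i
    against every other source element. A feature shared by many source
    elements gets a low weight (less discriminative)."""
    others = [k_feature(elements[i], elements[j])
              for j in range(len(elements)) if j != i]
    return 1.0 - sum(others) / len(others)

def edge_weight(edge_types_of_i, tau):
    """Eq. 7: 1 minus the fraction of element i's edges that have type tau."""
    count = sum(1 for t in edge_types_of_i if t == tau)
    return 1.0 - count / len(edge_types_of_i)
```

For instance, if 2 of an element's 4 edges are `overlap` edges, the `overlap` weight for that element is 0.5.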
+
+§ 4.2.5 ELEMENT-WISE CORRESPONDENCE
+
+Given the pairwise similarity score between the source and target elements, a straightforward approach for finding an element-wise correspondence would be to match each target element to the source element that has the highest similarity score. The downside of this approach is that it is sensitive to small differences in the similarity score. Instead, we take an iterative approach which looks for confident matches and utilizes these matches to update the similarity scores of other pairs of elements. Source element ${n}_{{s}_{i}}$ and target element ${n}_{{t}_{j}}$ are a confident match if
+
+$$
+\operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \gg \operatorname{Sim}\left( {{G}_{s},{G}_{t},{n}_{s},{n}_{{t}_{j}}}\right) \;\forall {n}_{s} \in s,{n}_{s} \neq {n}_{{s}_{i}} \tag{8}
+$$
+
+that is, if the similarity score of $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is much greater than the similarity score of ${n}_{{t}_{j}}$ with any other source elements.
+
+Once we identify the confident matches, we use them as anchors to re-compute all other pair-wise similarity scores. First, we update the node kernels in Equations 1-2. Since we are confident that $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is a good match, we boost their node kernels:
+
+$$
+{k}_{\text{ node }}\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) = {2.5}\;\text{ if }\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \text{ is a confident match } \tag{9}
+$$
+
+Since our goal is to match each target element to exactly one source element, if $\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right)$ is a confident match ${n}_{{t}_{j}}$ is not a good match with any other source element ${n}_{s} \neq {n}_{{s}_{i}}$ . Therefore, we discount these node kernels:
+
+$$
+{k}_{\text{ node }}\left( {{n}_{s},{n}_{{t}_{j}}}\right) = {0.1}\;\text{ if }\left( {{n}_{{s}_{i}},{n}_{{t}_{j}}}\right) \text{ is a confident match and }{n}_{s} \neq {n}_{{s}_{i}} \tag{10}
+$$
+
+All other node kernels are computed using the original Equations 1-2. The similarity score is then computed using the original Equation 5 with the updated node kernels. We iterate the steps of identifying confident matches and re-computing the similarity scores until all target elements are matched to a source element or until we reach a set maximum number of iterations. At this point, any remaining target elements are matched with the most similar source element according to the most recently updated similarity score.
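The iterative loop can be sketched as below. The `margin` parameter is our stand-in for the paper's "much greater" test in Eq. 8, and `recompute` abstracts the boost/discount of Eqs. 9-10 followed by re-running Eq. 5; both names are hypothetical:

```python
def match_elements(sources, targets, score, recompute, max_iters=10, margin=2.0):
    """score(s, t) -> similarity; recompute(confident) -> new score function
    after boosting/discounting node kernels for the confident matches."""
    matched = {}
    for _ in range(max_iters):
        confident = {}
        for t in targets:
            if t in matched:
                continue
            ranked = sorted(sources, key=lambda s: score(s, t), reverse=True)
            best = ranked[0]
            second = ranked[1] if len(ranked) > 1 else None
            # Eq. 8: best source must clearly dominate the runner-up.
            if second is None or score(best, t) >= margin * score(second, t):
                confident[t] = best
        if not confident:
            break
        matched.update(confident)
        score = recompute(confident)   # anchors update all other pair scores
    # Remaining targets fall back to the most similar source.
    for t in targets:
        if t not in matched:
            matched[t] = max(sources, key=lambda s: score(s, t))
    return matched
```

Because confident matches are committed first and then used to rescore the rest, ambiguous targets benefit from the anchors instead of being decided by tiny score differences.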
+
+§ 4.3 COMPUTING ORDERED TARGET TREES
+
+The previous stage of the algorithm finds the closest matching source element for each target element. As noted earlier, this correspondence is enough to transfer isolated attributes that do not depend on the nesting structure or ordering of the target elements. However, for many other edits (e.g., layout, animation), transferring changes requires computing ordered target trees that are consistent with the source tree.
+
+§ 4.3.1 HIERARCHICAL CLUSTERING
+
+We start by computing a hierarchical clustering of the target elements that matches the structure of the source tree. More specifically, the goal is to cluster together target nodes that correspond to the same source sub-tree. For example, in Figure 1(a), there should be two top-level target clusters with the Hungary graphics and Greece graphics, which represent two coherent instances of the entire source tree (Italy graphics). If the source tree contains nested sub-trees, the target clustering should include corresponding nested clusters.
+
To obtain these clusters, we use agglomerative nesting (AGNES), a standard bottom-up clustering method that takes as input a similarity matrix ($D$) and the number of desired clusters ($k$), and iteratively merges the closest pair of clusters to generate a hierarchy [12].
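A minimal sketch of such a bottom-up procedure (not the reference AGNES implementation) merges the most similar pair of clusters under average linkage until $k$ clusters remain:

```python
def agnes(D, k):
    """Bottom-up clustering sketch. D is a similarity matrix (higher means
    more similar), k is the desired number of clusters. Cluster-to-cluster
    similarity is the average over all cross-cluster element pairs."""
    clusters = [[i] for i in range(len(D))]
    while len(clusters) > k:
        best, pair = None, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sim = sum(D[i][j] for i in clusters[a] for j in clusters[b])
                sim /= len(clusters[a]) * len(clusters[b])
                if best is None or sim > best:
                    best, pair = sim, (a, b)
        a, b = pair
        # Merge the most similar pair and drop the absorbed cluster
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

Using the average pairwise similarity as the linkage criterion matches the distance aggregation described in §4.3.1.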
+
The key aspect of our approach is how we define the similarity matrix, which measures how likely a pair of target elements is to be clustered together. We rely on the intuition that the relationship between two target elements that belong to the same cluster should be similar to the relationship between their corresponding source elements. For example, in Figure 1(a), consider the orange circle and the text element 'Central' in the Italy graphics (source), and their corresponding elements in the target graphics (blue circle, green circle, 'Western' and 'Eastern'). The orange circle contains 'Central'. Likewise, the blue circle contains 'Western', so these elements should be grouped together. On the other hand, although 'Eastern' also corresponds to 'Central', 'Eastern' and the blue circle do not have a containment relationship, so they should be grouped separately. We measure the similarity of relationships between pairs of elements, again using a graph walk kernel. From the relationship graph defined in §4.1, the relationship between two elements $n$ and ${n}^{\prime }$ is represented by ${W}_{G}^{p}\left( {n,{n}^{\prime }}\right)$, the set of all walks of length $p$ whose first node is $n$ and whose last node is ${n}^{\prime }$. Then, the similarity of target elements ${n}_{t}$ and ${n}_{t}^{\prime }$ with corresponding source elements ${n}_{s}$ and ${n}_{s}^{\prime }$ is defined as:
+
$$
{k}_{R}^{p}\left( {{G}_{s},{G}_{t},{n}_{t},{n}_{t}^{\prime },{n}_{s},{n}_{s}^{\prime }}\right) = \sum_{\substack{{W}_{{G}_{s}}^{p}\left( {{n}_{s},{n}_{s}^{\prime }}\right) \\ {W}_{{G}_{t}}^{p}\left( {{n}_{t},{n}_{t}^{\prime }}\right)}} {k}_{\text{node}}\left( {{n}_{s},{n}_{t}}\right) \prod_{i = 1}^{p - 1}{k}_{\text{node}}\left( {{n}_{{s}_{i}},{n}_{{t}_{i}}}\right) {k}_{\text{edge}}\left( {{e}_{{s}_{i}},{e}_{{t}_{i}}}\right) \tag{11}
$$
+
+This equation is equivalent to Equation 4, except here we compare walks between a fixed source and destination node. That is, we compare all walks of length $p$ starting at ${n}_{t}$ and ending at ${n}_{t}^{\prime }$ against all walks of length $p$ starting at ${n}_{s}$ and ending at ${n}_{s}^{\prime }$ . Again, we take the sum of the average walk graph kernels for all walk lengths up to $p$ . For the bottom-up clustering method, to compare distances between clusters, we use the average distance between all pairs of elements from each cluster.
+
+We use a simple heuristic to determine the number of clusters, $k$ . For each source element, we count the number of matched target elements and take the mode of this value. We apply this heuristic recursively to determine the number of clusters at each level. We experimented with more complex methods such as using the spectral gap of the Laplacian matrix or putting a threshold on the maximum distance between two clusters to be merged. However, we found that the simpler approach worked better in most cases, even with variations in element cardinality between the source and target graphics.
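The heuristic for a single level of the hierarchy can be sketched as follows; the `matches` mapping is a hypothetical representation of the element-wise correspondence from the previous stage:

```python
from collections import Counter

def infer_num_clusters(matches):
    """Heuristic sketch: k is the mode of the number of target elements
    matched to each source element. `matches` maps target -> source."""
    per_source = Counter(matches.values())      # source -> number of matches
    mode_count = Counter(per_source.values())   # match count -> frequency
    return mode_count.most_common(1)[0][0]
```

For nested source trees, the same function would be applied recursively to the matches within each subtree.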
+
+§ 4.3.2 ORDERING
+
Once we obtain the target tree, the final step is to determine the ordering between the subtrees, which translates to the ordering of the elements. The ordering between subtrees depends heavily on the global structure of the design (e.g., radial design vs. linear design), the semantics of the content (e.g., graphics that represent chronological events) and the user's intent (e.g., presenting things in chronological order vs. in reverse chronological order). Instead of trying to infer these factors, we rely on a simple heuristic based on natural reading order. We order the subtrees from left to right, then top to bottom using their bounding box centers.
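One plausible sketch of this reading-order heuristic, assuming each subtree carries a precomputed bounding-box `center` and using an assumed row tolerance that is not specified in the paper:

```python
def reading_order(subtrees, row_tol=10.0):
    """Order subtrees in natural reading order by bounding-box center.
    Centers within row_tol vertical units are treated as one row and
    sorted left to right; rows proceed top to bottom. row_tol is an
    assumed tolerance, not taken from the paper."""
    if not subtrees:
        return []
    items = sorted(subtrees, key=lambda s: s['center'][1])  # sort by y
    rows, current = [], [items[0]]
    for s in items[1:]:
        if abs(s['center'][1] - current[-1]['center'][1]) <= row_tol:
            current.append(s)        # same horizontal band
        else:
            rows.append(current)     # start a new row
            current = [s]
    rows.append(current)
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda s: s['center'][0]))  # by x
    return ordered
```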
+
+§ 5 RESULTS
+
+To test the effectiveness of our algorithm, we collected a set of 25 vector graphic designs. The designs included infographics, presentation templates and UI layouts. We collected designs that contained a number of repeating structures that could benefit from bulk editing. We selected the source and target graphics for each design. Then, we manually coded the type of variations (e.g., color, layout) applied between the source and target graphics. More variations imply greater differences between the source and target graphics, making the correspondence more difficult to compute. Note that we did not use this test dataset to develop our algorithm.
+
For each design, we manually specified an ordered source tree and the corresponding ground truth set of ordered target trees. By default, we created shallow source trees (height of 1) for all designs. We also created deeper trees for a few examples (e.g., Figure 4; see supplementary material for more examples). We chose the source element ordering such that applying entrance animation effects in that order would produce a plausible presentation. For example, text elements organized in paragraphs are ordered top-down, and container elements (e.g., the circle in Figure 1(a)) are succeeded by the interior graphics (e.g., 'Central', 'Mediterranean' and '91,302'). For other elements, we arbitrarily chose one specific ordering among many reasonable options. When constructing the ground truth target trees, we created a separate subtree for each set of target elements that corresponds to an instance of the source graphics. For example, in Figure 1(a), the elements in the blue chart constitute one target subtree, and the elements in the green chart constitute another target subtree. In Figure 1(b), each set of graphics representing a building comprises a separate subtree. The ordering of elements within the lowest-level subtrees is determined by the ordering of the corresponding source elements in the source tree. Higher-level subtree orderings are determined by the heuristic described in §4.3.2.
+
We evaluate the two stages of our algorithm separately. First, we evaluate the element-wise match between the source and target graphics. For the majority of target elements, the ground-truth match and clustering is visually apparent in the design. In cases where the variation between the source and target graphics makes the match less clear, we choose a reasonable ground-truth match. For example, in Table 3 D5, there are six different person icons, each of which consists of multiple vector graphics paths. Since each path roughly represents a body part (e.g., hair, face, neckline), we consider a match between equivalent parts of the symbol to be correct. On the other hand, the symbols in D4 do not have a clear semantic or visual correspondence. In this case, we consider a match between any target symbol path and any source symbol path as a correct match.
+
+We evaluate the match for two different scenarios. In each design, the target graphics contains multiple sets of targets that match the source graphics. First, we evaluate the match on each target set separately (Separate). This would apply to the scenario when the user selects each target set separately and applies a transfer one by one. Then, we evaluate the match between the source graphics and the entire target graphics, emulating the case when the user selects all target elements and applies the transfer at once (All). Table 3 reports the percentage of correctly matched target elements in each case.
+
+For the hierarchical clustering and ordering encoded in the target trees, we also evaluate two scenarios. First, we compute the target trees given the match results from the All scenario, which may include incorrect matches (Default). Next, we compute the trees given the ground-truth match (Perfect Match).
+
+In order to quantify the correctness of the final result, we define an edit distance, ${D}_{e}$ , that measures the difference between the ground-truth target tree and our computed target tree. To simplify this distance, we flatten both the ground-truth and computed trees into a sequence of target elements based on the ordering information encoded in the target trees. In these sequences, we label each target element with the corresponding source node, which allows us to identify errors in the computed element-wise matches (wrong label for a given target element) and incorrect ordering (wrong sequence of labels). Given a flattened ground-truth sequence $g$ and a computed sequence $r$ , we define the edit distance as follows:
+
+$$
+{D}_{e}\left( {r,g}\right) = {D}_{m}\left( {r,g}\right) + {D}_{o}\left( {r,g}\right) \tag{12}
+$$
+
where the match edit distance ${D}_{m}$ encodes differences in element-wise matches, and the order edit distance ${D}_{o}$ encodes differences in the order of elements between $r$ and $g$. ${D}_{m}\left( {r,g}\right)$ is simply the total number of elements in the target that are matched to the wrong source element. It roughly represents the work required to fix all the incorrect matches. For the Default case, it is equal to the number of incorrect matches in All. For the Perfect Match case, it is equal to 0.
+
+
Figure 4: Example of ordered source and target trees. Node colors indicate element-wise correspondence. Multiple instances of matching target graphics are represented as nested target trees. Here, the entire source graphics (top slice of the bulb) matches three target instances, represented by subtrees T1, T2, and T3. Within T1, there are 3 instances/sub-trees of the necktie symbol, which corresponds to the flashlight symbol in the source. Our algorithm takes as input an ordered source tree and outputs an ordered target tree.
+
+The order edit distance is defined as:
+
+$$
+{D}_{o}\left( {r,g}\right) = N - \operatorname{InOrder}\left( {{r}^{\prime },g}\right) \tag{13}
+$$
+
where $N$ is the number of elements in the target graphics, and ${r}^{\prime }$ is the result of correcting all matches in $r$. Note that ${r}^{\prime }$ must be a permutation of $g$ since both contain the same set of target-to-source matches (potentially in different orders). $\operatorname{InOrder}\left( {{r}^{\prime },g}\right)$ is a function that returns the total length of all subsequences in ${r}^{\prime }$ of length $\geq 2$ that exactly match a subsequence of $g$, minus $\left( {{N}_{o} - 1}\right)$, where ${N}_{o}$ is the number of matching subsequences. If $g$ and ${r}^{\prime }$ are identical, ${D}_{o}\left( {{r}^{\prime },g}\right) = 0$. ${D}_{o}$ is a variant of the transposition distance and roughly measures the number of operations required to correct the order of ${r}^{\prime }$ to match $g$. Note that ${D}_{e}$ does not penalize $r$ for errors in the hierarchical structure of clusters as long as the result has the same ordering as $g$. In most cases, having the correct match and ordering will produce the desired visual result. Thus, ${D}_{e}$ attempts to approximate the number of correction operations needed to reach the desired ground truth solution. We report ${D}_{e}$ divided by the number of target elements, $N$. Note that creating $g$ manually would roughly correspond to ${D}_{e} = N$, since the user could just visit all the target elements in the correct order and assign each target element to the appropriate source element (or, equivalently, apply the desired edit).
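Under the simplifying assumption that source labels are unique (the paper's sequences may repeat labels), ${D}_{m}$ and ${D}_{o}$ can be sketched as:

```python
def match_distance(r_labels, g_labels):
    """D_m (Eq. 12 sketch): count of target elements assigned the wrong
    source label. Both inputs map a target element to its source label."""
    return sum(r_labels[t] != g_labels[t] for t in g_labels)

def order_distance(r, g):
    """D_o (Eq. 13 sketch): N - InOrder(r, g), where r is a permutation
    of g. Assumes unique labels so positions in g are well defined."""
    pos = {v: i for i, v in enumerate(g)}
    run_lengths, i, n = [], 0, len(r)
    while i < n:
        j = i
        # Extend a run of elements that are consecutive in g
        while j + 1 < n and pos[r[j + 1]] == pos[r[j]] + 1:
            j += 1
        if j > i:                    # only runs of length >= 2 count
            run_lengths.append(j - i + 1)
        i = j + 1
    in_order = sum(run_lengths) - (len(run_lengths) - 1)
    return n - in_order
```

For identical sequences the whole sequence is one run, so $\operatorname{InOrder} = N$ and ${D}_{o} = 0$; a single swapped pair of runs yields ${D}_{o} = 1$, consistent with one transposition.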
+
+Element-wise Correspondence. Table 3 shows the result for a subset of the designs that we tested. The full set of results is included in the supplementary material. The source graphics is highlighted with a red outline. The rest of the graphics minus the greyed-out background is the target graphics. We obtain close to perfect element-wise matches even when the target graphics has multiple types of variations from the source graphics. The match performance was comparable across the Separate and All scenarios, with only slightly better accuracy for the Separate case. This means that the user can transfer edits from the source graphics to multiple sets of target graphics at once without having to select each target set individually, which is especially tedious when there are many elements in a complex layout.
+
Ordered Target Trees. The accuracy of the ordered target trees varied widely across the designs. In order to obtain a perfect result, we must infer the correct match as well as the correct number of clusters ($k$). An error in either of these steps can have a big impact on the edit distance, ${D}_{e}$. For example, our algorithm computed a perfect element-wise match for D1. However, because the target graphics contained far fewer elements than the source graphics, our simple heuristic incorrectly inferred that $k = 1$. This led to a large ${D}_{e}$ since many elements were out of sequence. On the other hand, for D7, our heuristic correctly inferred that $k = 3$, and in fact, the clustering algorithm accurately identified elements belonging to each target set, in this case, the individual profiles. However, the relatively poor element-wise match mixed up the ordering of the elements within each target subtree, resulting in a larger ${D}_{e}$. Manually correcting the match (Perfect Match) and recomputing the target trees produced a perfect result $\left( {{D}_{e} = 0}\right)$. In general, if the ordering error is due to the incorrect element-wise match, correcting the match will improve the accuracy of the resulting target trees. However, if the clustering of targets itself is wrong, correcting the match can alleviate the error only partially. In §6, we also compare the results given a ground-truth $k$.
+
+Nested Hierarchy. Note that if each source element matches exactly one target element, the structure of the target tree is completely determined by the element-wise correspondence. For example, the designs shown in Table 3 contain multiple top-level target subtrees that represent different sets of target elements that each map to the source graphics. However, within each target subtree, there is a one-to-one match between the source and target elements. Therefore, once we compute the top-level clustering of target elements into multiple target subtrees, the hierarchy within each target subtree is completely determined by the element-wise correspondence.
+
Still, it is common for target graphics to have a different cardinality of constituent elements. For example, in Figure 4(a), each slice of the light bulb contains a different number of symbols (e.g., flashlight, necktie, dollar sign), each of which consists of multiple path elements. Our algorithm handles these cases by recursively clustering target elements into sub-trees. For example, in Figure 4(c) T1, the 6 path elements that make up the necktie symbols are matched to the 2 source path elements that make up the flashlight in S. These paths are clustered into lower-level subtrees that represent 3 necktie symbols.
+
Editing Applications. Our algorithm for computing correspondences and ordered target trees supports a variety of edit transfer scenarios. As noted earlier, transferring animation effects is uniquely challenging because they often involve both temporal (ordering) and hierarchical structure. To demonstrate this application, we implemented a prototype system that uses our automatic computation to transfer animation patterns from source to target elements. The accompanying submission video shows animation transfers for several designs from our test dataset. In addition, Figure 1 shows other possible edit transfer scenarios. We created these examples by computing correspondences and ordered target trees for each set of source elements and then manually applying the corresponding edits to the target elements. In this process, we did not correct or modify the automatic output of our algorithm.
+
+§ 6 ABLATION EXPERIMENTS
+
To further evaluate the impact of different aspects of our algorithm, we conducted ablation experiments by removing key parts of our method or replacing them with simpler baselines.
+
+§ 6.1 REMOVING EDGE KERNELS
+
+The graph walk kernel defined in Equation 4 compares two nodes ${n}_{s}$ and ${n}_{t}$ by comparing the similarity of their respective relationships with neighboring nodes. The walk length, $p$ , determines the size of the neighborhood to consider. In our implementation, we experimentally determined that $p = 1$ obtained satisfactory results. That is, considering only immediate neighbors was enough to predict good element-wise correspondences. We compare this approach to $p = 0$ , where we disregard the relationships between the nodes and only use the node kernels to compute element matches. Table 2 (column Node Kernels Only) shows the result.
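A toy illustration of the difference between $p = 0$ and $p = 1$: with $p = 1$, every length-1 walk out of ${n}_{s}$ is compared against every length-1 walk out of ${n}_{t}$ and the products are averaged. The kernels and adjacency structures below are hypothetical stand-ins, not the paper's exact formulation (which sums the average walk kernels over all lengths up to $p$):

```python
def node_similarity(ns, nt, node_k, edge_k, nbrs_s, nbrs_t, p=1):
    """Sketch of an Equation-4 style comparison for p in {0, 1}.
    nbrs_s[ns] and nbrs_t[nt] list (edge, neighbor) pairs. With p = 0
    only the node kernel is used; with p = 1 each pair of length-1
    walks contributes node_k * edge_k * node_k, and we average."""
    if p == 0 or not nbrs_s[ns] or not nbrs_t[nt]:
        return node_k(ns, nt)
    total = 0.0
    for e_s, ns2 in nbrs_s[ns]:
        for e_t, nt2 in nbrs_t[nt]:
            total += node_k(ns, nt) * edge_k(e_s, e_t) * node_k(ns2, nt2)
    return total / (len(nbrs_s[ns]) * len(nbrs_t[nt]))
```

With $p = 0$, two nodes with identical intrinsic features are indistinguishable; with $p = 1$, differing neighborhoods pull their similarity apart, which is exactly the effect the ablation measures.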
+
| Design | Ours ($p = 1$) | Node Kernel Only ($p = 0$) | Uniform $\omega$ ($p = 1$) | Greedy Match |
| --- | --- | --- | --- | --- |
| Average | 0.95 | 0.82 | 0.93 | 0.93 |
| D1 | 1.00 | 0.95 | 1.00 | 0.83 |
| D2 | 1.00 | 0.76 | 1.00 | 1.00 |
| D3 | 0.94 | 0.80 | 0.94 | 0.97 |
| D4 | 1.00 | 0.76 | 1.00 | 1.00 |
| D5 | 0.90 | 0.78 | 0.86 | 0.87 |
| D6 | 1.00 | 0.94 | 0.97 | 1.00 |
| D7 | 0.78 | 0.78 | 0.78 | 0.78 |
+
+Table 2: Ablation experiments for element-wise matching.
+
In the majority of cases, removing edge kernels and only considering node kernels produced worse results. This was especially true for designs that contained many elements that looked alike, with similar shapes and sizes. For example, in D3, the shorter target green bars get matched to the source blue bar because they have similar sizes. Likewise, in D6, each horizontal bar in the target graphics gets matched to the source bar with the closest size, rather than being matched according to their relative positions. A particularly bad failure case is shown in Figure 5 D10 (Ours $= 1.0$ vs. Node Kernel Only $= 0.47$), where the shadow consists of multiple circle elements with the same shape and size. In this case, without the z-order or relative positioning information, it is challenging to get a correct pair-wise correspondence between these circles. This experiment demonstrates that inter-element relationships are critical for discerning element-wise correspondences in graphic designs.
+
+§ 6.2 UNIFORM KERNEL WEIGHTS
+
In §4.2.4, we describe a method for determining the importance and thus the weights ($\omega$) of each feature kernel. We evaluate the effectiveness of these weights by replacing them with uniform weights and comparing the results. Interestingly, in most cases, kernel weights did not have a significant impact on the performance. In a few cases, adaptive weights (ours) produced better matches compared to uniform weights. For instance, in D5, all the symbol paths have the same type (path), so we put a small weight on the type kernel, and higher weights on other features such as the positions of the paths relative to each other. This helps to differentiate the subtle differences between these paths. Adaptive weights are also useful for matching elements where some features are much more powerful than others. For example, in Figure 5, D11 (Ours $= 0.97$ vs. Uniform Weights $= 0.86$), the type and font style attributes are much more powerful than the shape or layout relationship features. Still, for most designs, there are many features with discriminatory power, and replacing the $\omega$ values with a uniform weight produces as good a match as our previous result.
+
+
Figure 5: (a) A design with many elements that have a similar appearance. The shadow parts consist of multiple circle elements with the same shape and size. Inter-element relationships are critical for discerning correspondence between such designs. (b) Depending on the design, some features are more discriminatory than others. In D11, the element type and font style attributes are more powerful features than the relative positioning between the elements.
+
+§ 6.3 GREEDY MATCHING
+
In §4.2.5, we describe an iterative method by which we first match confident pairs of nodes, and then use these matches to iteratively refine the similarity scores of other pairs of nodes. We compare this approach to a greedy algorithm, whereby we simply match each target node to the source node with the highest similarity score. Table 2 (column Greedy Match) shows the result. For some designs, the greedy method produced as good a match as our iterative method. However, for other designs (e.g., D1 and D5), the greedy method performed worse. These were designs where a target element had several closely similar source elements. For example, in the UI design shown in D1, the different text elements for the user input fields are closely similar. In D5, the symbol paths as well as the four-sided solid blocks are closely similar to each other. In these scenarios, using the more confident matches to refine the similarity scores helped improve the pair-wise match.
+
+§ 6.4 CLUSTERING USING ELEMENT POSITIONS
+
The second stage of our algorithm (§4.3) uses hierarchical clustering of the target elements to compute target trees. As noted in §4.3.1, a key insight of our method is that the relationship between two target elements that belong to the same cluster should be similar to the relationship between their corresponding source elements. As a result, we use the graph walk kernel to analyze every pair of target elements and populate the similarity matrix used by the clustering procedure. To evaluate the importance of this insight, we compare our approach to a simpler heuristic that defines the similarity between every pair of target elements as their Euclidean distance (i.e., closer means more similar). We approximate element positions using the centroids of their bounding boxes. For both methods, we provide the ground-truth element-wise correspondences and the correct number of clusters, $k$.
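The centroid-distance baseline is straightforward to sketch; `boxes` is a hypothetical list of axis-aligned bounding boxes for the target elements:

```python
import math

def centroid_similarity_matrix(boxes):
    """Baseline sketch: similarity of two target elements is the negated
    Euclidean distance between their bounding-box centroids, so closer
    elements are more similar. boxes: list of (x0, y0, x1, y1)."""
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    return [[-math.dist(p, q) for q in centers] for p in centers]
```

The resulting matrix can be fed to the same bottom-up clustering procedure in place of the graph-walk-kernel similarity matrix.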
+
Table 4 reports the results of the comparison. Using centroid distance produces worse clusters, especially when the desired target clusters are close to each other and arranged in a nonlinear layout (e.g., D2, D5). Even for designs with relatively simple layouts like D3, where the target clusters are visually separated from each other, the difference between the vertical and horizontal spacing coupled with the tall aspect ratio of some of the elements makes it challenging to estimate the correct clustering using only element centroids. In general, users can select an arbitrary arrangement of elements as the source graphics (not just ones that are geometrically close to each other). The graph walk kernel distance is better suited to handle these cases by producing clusters that reflect the original arrangement of the source graphics, as shown in the synthetic toy example, D8.
+
| Design | Source #Elements | Target #Elements ($N$) | Variation Count | Variation Types | Match (Separate) | Match (All) | Tree Score $D_e/N$ (Default) | Tree Score $D_e/N$ (Perfect Match) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Avg | 11.0 | 36.6 | 4.23 | - | 0.95 | 0.95 | 0.31 | 0.15 |
| D1 | 17 | 21 | 2 | Text, Cardinality | 1.00 | 1.00 | 0.81 ($k = 1$) | - |
| D2 | 10 | 50 | 3 | Color, Shape, Size | 1.00 | 1.00 | 0.06 ($k = 6$) | - |
| D3 | 9 | 36 | 4 | Cardinality, Shape, Size, Layout, Text | 1.00 | 0.97 | 0.03 ($k = 4$) | 0 ($k = 4$) |
| D4 | 8 | 33 | 5 | Cardinality, Color, Layout, Shape, Text | 1.00 | 1.00 | 0 ($k = 4$) | - |
| D5 | 16 | 60 | 5 | Cardinality, Color, Layout, Shape, Size | 0.90 | 0.90 | 0.50 ($k = 5$) | 0.08 ($k = 5$) |
| D6 | 10 | 33 | 5 | Cardinality, Color, Size, Text, Layout | 1.00 | 1.00 | 0 ($k = 2$) | - |
| D7 | 7 | 23 | 6 | Cardinality, Layout, Shape, Size, Text, Type | 0.78 | 0.78 | 0.65 ($k = 3$) | 0 |
+
+Table 3: Representative examples from our test data set. For each design, we evaluate the element-wise correspondence and clustering separately.
+
+Please refer to supplementary material for full results.
+
+
Table 4: Comparison of distance metrics used for clustering target elements (Graph Walk Kernel vs. Centroid distance). The graph walk kernel distance produces clusters that reflect the original arrangement of the source graphics. *D8: red outline indicates source graphics, blue indicates target clusters.
+
§ 7 LIMITATIONS AND FUTURE WORK
+
+Although our algorithm is designed to handle different types of variations between the source and target graphics, as expected, large geometrical, style, and structural variations tend to produce erroneous matching and clustering results. Clustering is more prone to error because it is sensitive to the number of clusters, $k$ , as well as the match results. We use a simple heuristic to determine the number of target clusters, which works for many cases, but can also fail easily. Users could provide the ground-truth $k$ as input, but this also becomes tedious if the desired target tree is deeply nested and each subtree requires a different value of $k$ .
+
+One area for future work is to use the document edit history to inform the matching and clustering. For example, knowing which elements were copy-pasted, or which elements were selected and modified together could provide strong hints about matching elements. The creation or edit order of the target elements could also be used to inform the clustering and ordering.
+
+Another way to improve the algorithm is to take advantage of user corrections of the output. For example, when the user corrects an erroneous match, we could use this ground-truth match as a confident match to recompute the graph walk kernels, or to infer better weights for the kernels. If there are multiple errors in the output, we could potentially reduce the number of manual corrections needed by using each correction to recompute a new, improved output. In general, since the type and power of discriminatory features varies by design and user intent, learning from user inputs is an interesting avenue for future work.
+
+§ 8 CONCLUSION
+
In this work, we present an approach to help designers apply consistent edits across multiple sets of elements. Our method allows users to select an arbitrary set of source elements, apply the desired edits, and then automatically transfer the edits to a collection of target elements. Our algorithm retroactively infers the shared structure between the source and target elements to find the correspondence between them. Our approach can be applied to any existing design without manual annotation or explicit structuring. It is flexible enough to accommodate common variations between the source and target graphics. Finally, it generalizes to different types of editing operations such as style transfer, layout adjustments, or applying animation effects. We demonstrate our algorithm on a range of real-world designs, and show how our approach can facilitate editing workflows.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/AhgksBTYwC5/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/AhgksBTYwC5/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b07bb73d1b2ffc859a60360c3dbba21a8fdb434
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/AhgksBTYwC5/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,433 @@
+# Can Data Videos Influence Viewers' Willingness To Reconsider Their Health-Related Behavior? An Exploration With Existing Data Videos
+
+
+
Figure 1: Data Videos or animated infographics are short videos that present large amounts of data in an engaging narrative format. We selected nine Data Videos focusing on three health-related topics: physical activity, sleep, and diet. This figure shows a representative sample of the videos used in the study. For the full list of videos, their links, and more detailed descriptions, refer to Appendices A and B.
+
+## Abstract
+
Data Videos have the potential to promote healthy behaviors [21]. Using publicly available data videos addressing physical activity, sleep, and diet, we explored the persuasive capability of Data Videos through their narrative format and the affective connection they arouse in their viewers' minds. We asked four central questions: (1) do Data Videos increase negative affect in their viewers? (2) are negative affective responses linked with individuals' personality traits? (3) can negative affect predict any change in viewers' willingness to improve their health-related behavior? and finally (4) can personality traits and/or video attributes predict such willingness in viewers? We conducted an M-Turk study in which participants ($N = 102$) watched Data Videos, answered questions about their perceptions, and completed a personality trait questionnaire. Overall, participants who scored higher on neuroticism were more difficult to influence (i.e., harder to persuade) to reconsider their health-related behaviors. This was at least partially because these individuals liked the provided Data Videos less than those who scored low on neuroticism. Perceived usefulness of the information, along with neuroticism, predicted our participants' willingness to reevaluate their health-related behaviors. Together, these findings show the importance of both tailoring Data Videos to personality traits (i.e., personalization) and improving the general content of Data Videos independent of personalization.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI
+
+## 1 INTRODUCTION
+
Behavioral health refers to how individuals' well-being and health are influenced by their own behaviors [2]. Simple and preventable as they sound, behavioral health issues like improper diet or physical inactivity can be linked to many serious consequences such as cardiovascular diseases, obesity, high blood pressure, and even some types of cancer [1, 61]. In fact, not engaging in a sufficient amount of physical activity is a key risk factor for death worldwide [1]. In order to find means to effectively address our behavior-related health challenges in a timely manner, research involving technology will be key.
+
As behaviors are normally adjustable, people suffering from behavior-induced health issues would benefit if they understood how their seemingly minor behaviors are tied to their serious medical issues. Further, they could be motivated to reevaluate their behaviors if they were guided with concrete behavioral suggestions on how to improve their health. The availability of modern technologies in the form of wearable devices and mobile health apps (mHealth) allows us to collect diverse personal health-related data (e.g., steps taken, calories burnt, and sleep hours) [51]. However, users of these technologies often do not benefit fully from the data they obtain [24]. This is because current data representation approaches generally lack the ability to "convey" the insights to users so they can take action. While current mHealth technologies typically provide statistics in the form of charts and graphs [39, 57], research shows that such methods are less effective in triggering behavior change [46], which is the ultimate goal of such technologies. Statistical data representation of personal health data is often passive, difficult to interpret, and not insightful. Thus, users often do not explore these representations fully. Some even claim the way such personal data is presented could be too frustrating or overwhelming for the users to understand [57]. This frustration could then lower the users' motivation to reexamine their health-related behaviors, and even their motivation to continuously use the technology [57]. We believe that the role of modern health tracking technology could move beyond data collection and presentation, towards presenting data in an insightful way to provide "actionable intelligence" to effectively guide users.
+
+For the effective delivery of health information, we focus on Data Videos in this study. Data Videos or animated infographics are short in length, typically shorter than six minutes [36], and provide factual, data-driven information for the users in an engaging narrative format [6, 7, 47]. The narrative nature of Data Videos can make complex data easier to digest and act upon because narratives are a natural way for people to communicate and gain knowledge [7, 21]. As Data Videos take a storytelling format, they arouse emotional connections in their viewers' minds while they engage with the story. Research in advertisement and marketing demonstrates that affect plays an important role in motivating and convincing viewers and consumers [8, 50, 52]. Based on these findings, we hypothesize that a potential strength of Data Videos could be attributed to their capability to rouse the viewers' affect (e.g., people realize how much sugar they consume every day, and they become motivated to reduce their daily sugar intake because they fear negative consequences). Indeed, to alleviate negative affects, changing one's behavior is logical (e.g., I don't want to get sick so I will start exercising). In this way, our affects could be indirectly driving our behaviors. Further, personality differences could play an important role in individuals' emotional responses [13, 35, 42] as well as their potential behavior modification in response to a persuasive system [9, 37, 40, 44, 63, 77]; individuals' responses to an emotion-provoking persuasive message vary from person to person. Accordingly, we plan to consider personality differences as a factor in our study.
+
+Another aspect that can play an important role in the persuasive capability of health Data Videos is how the viewers perceive the content of the video (i.e., their content value appraisal). Could they follow the content of the video easily and clearly? Did the video provide them with new and/or useful information?
+
+The problem we are addressing in this paper is how to improve the persuasive potential of health-related Data Videos, to effectively influence viewers' willingness to alter their health-related behavior. Our focus is on two dimensions: 1) personalizing Data Videos to viewers' unique personalities, and 2) improving the overall quality of Data Videos in general.
+
+## 2 RELATED WORK
+
+We discuss previous work investigating the effectiveness of Data Videos as a narrative to communicate data. We highlight some behavioral theories as well as studies in HCI and other related fields, then turn to affects associated with narratives. Finally, we discuss how affects play a role in forming people's attitudes. As a factor influencing affects, we review studies focusing on the important role personality traits play in reaction to a persuasive message, by describing studies and strategies used in the persuasive technologies literature.
+
+### 2.1 Data Videos as a Narrative
+
+Data Videos are motion graphics that incorporate factual, data-driven information to tell informative and engaging stories with data [5, 6]. Data Videos are gaining popularity [6] in various fields such as journalism, education, advertisements, mass communication, as well as in political campaigns [29, 38, 47, 70-72]. Due to their narrative nature, Data Videos are recognized as one of the seven forms of narrative visualization [19, 71]. Baber et al. [10] define narrative as a formal structure that constitutes a "sharable" story, as opposed to informal stories, which could be "unstructured" and "ambiguous". A narrative is a series of connected events that constitute a story [71]. The order in which these events are presented in a medium constitutes its narrative structure [5]. Amini et al. [6] examined 50 professionally created Data Videos to learn about their narrative structure. In their study, they divided the videos into temporal sections and coded them based on Cohn's [23] theory of visual narrative structure, which categorizes the narrative into four stages: Establisher (E), Initial (I), Peak (P) and Release (R). Amini et al. [6] provided insights regarding the average duration (in percentage) each narrative stage consumes from the total video length as well as the percentage of time spent on attention cues and data visualizations within each stage. They also pinpointed some narrative structure patterns that are commonly used in Data Videos.
+
+The power of Data Videos comes mainly from this narrative format. Stories can convey information in an engaging way that is more natural, seamless, and effective than text or even pictures [31, 39]. A well told story can convey a large amount of information in a way that the viewers find interesting, easy to understand, trust, recall readily, and make sense of [17, 31, 57]. The advantage of visual narrative is its ability to present plenty of information in a compact form, compared to text or pictures alone [31]. According to Narrative Transportation Theory, videos can transform and immerse the viewer in a totally different world with its locale, characters, situations, and emotions, which could reflect on the viewers' own beliefs, emotions, and intentions [34, 57, 58, 76]. Furthermore, a plethora of psychological theories support the persuasive power of narrative. The Extended Elaboration Likelihood Model (E-ELM) argues that as people indulge in a narrative, with all its cues and stimuli, their cognitive processing of the narrative obstructs any counterarguments to the presented message [18, 74], making the message more persuasive even for those who are difficult to persuade otherwise [73]. Furthermore, as per the Entertainment Overcoming Resistance Model (EORM), the entertaining aspect of a narrative also plays a role in reducing the cognitive resistance to the message presented, and hence facilitates persuasion [22, 56, 57].
+
+Despite the great potential of, and the increasing demand for, Data Videos for information communication, it was not until recently that researchers turned their attention to empirically investigating them in terms of their building blocks, components, and narrative characteristics [6]. In a recent study that aimed at exploring the persuasive power of Data Videos, Choe et al. [21] introduced a new class of Data Videos called Persuasive Data Videos or PDVs. This genre of Data Videos incorporates some persuasive elements inspired by and drawn from the Persuasive System Design Model [62]. In their research, the authors studied how incorporating some persuasive elements in a Data Video could improve the potential persuasion level of the video [21]. Their study revealed that their PDVs had higher persuasive potential than regular Data Videos.
+
+Amini et al. [7] examined the effect of using pictographs and animation, two commonly used techniques in Data Videos. They found that the use of such techniques enhanced the viewers' understanding of data insights while boosting their engagement. They concluded that the strength of pictographs can be attributed to their ability to trigger more emotions in the viewers, while animation strengthens the intensity of such emotions.
+
+### 2.2 Affects and Data Videos
+
+This leads us to an important aspect of Data Videos: affects. Research shows that viewers' preference for multimedia, be it a performing art, an internet video, or even a music video, is highly dependent on their arousal level and the intensity of their affects towards the viewed media [11, 75]. Past studies assumed that TV viewers liked to watch shows that elicit positive emotions as opposed to negative emotions [32]. However, later research showed this may be true for real life events, but people enjoy watching TV shows that evoke fear, anger, or sadness [59]. Bardzel et al. [11] examined the intensity and valence of viewers' affects, as well as their ratings of internet videos. The results showed a correlation between the affects' intensity and the liking of the video [11]. As for the valence (i.e., positive or negative), the study showed that it is not the presence or absence of certain affects, be it negative or positive, that influenced the rating of the video. It is rather the emotional arc that leaves the viewer emotionally resolved and hence liking the video, even if it started with negative emotions [11]. In fact, health-related videos and campaigns are often designed in a way that elicits negative affects such as fear, worry, and anxiety. This is because health promoting messages normally present the negative consequences of not following a healthy behavior or of engaging in an unhealthy behavior (e.g., if you do not exercise you will become obese, look older, be at risk of diabetes, high blood pressure and cancer, or if you smoke, you will look like the picture on the tobacco box), as well as alarming statistics on how many people are suffering from those consequences. Such strategies have been studied, approved, and even recommended for public health campaigns such as anti-smoking campaigns [14, 64]. Theoretical studies in health risk message design suggest that a certain level of threat is "required" for the message to be effective, while excessive levels of threat could backfire [55]. This is likely why health-related Data Videos frequently contain alarming messages.
+
+#### 2.2.1 The role of affects in attitude and behavior change
+
+Affects play an important role in the appeal, as well as the persuasive power, of media [3, 15, 50]. According to behavioral theories in psychology, some of our attitudes have a cognitive basis while others have an affective basis [45, 65]. Affective attitudes emerge from our feelings towards certain topics or ideas. Some attitudes are influenced relatively easily through affects or emotions, while others through logic and facts [45, 65]. The Dual Process Model suggests two routes to persuasion: central and peripheral. The central route is the cognitive route, in which the receiver of a message is willing and able to cognitively process the ideas [18, 65]. In contrast, peripheral route processing is triggered when the receiver lacks the motivation or ability to logically process cues in the message, and decides to agree with the message based on its emotional appeal (e.g., emotions triggered by the look or smell, but not by the logic) [18, 50]. For instance, one might purchase a car based on its gas emissions, cost, functions, and so on (central route) or because of the way it looks (peripheral route). In sum, research indicates that both cognition and affect are heavily involved in persuasion.
+
+In the field of marketing, as an area focusing primarily on persuading and guiding viewers to adopt a certain service or commodity, a wide array of studies focused on the kind of affects evoked by ads [41] and how they affect the viewers' attitudes, in order to improve the persuasive power of ads [16]. Models for behavior change, such as the health belief model [43, 68], support that negative affects (e.g., feeling worried or at risk) are the first step towards behavior modification. That is, people need to recognize they are at risk in order to be willing or motivated to change their behavior. Dunlop et al. [27] examined the responses to health promoting mass-media messages and found that feeling at risk was a significant predictor of participants' intention to change their attitude. Additionally, partly related to this model, health-related Data Videos almost always include some negative information (e.g., negative outcomes of lack of sleep).
+
+#### 2.2.2 Measuring Affects
+
+There is a wealth of research in diverse fields such as psychology, advertising, political science, and HCI on measurement of viewers' affective responses to videos, advertisements, or computing applications. When it comes to measuring affects, studies normally follow one of two approaches, or a mix of both. The first is the implicit approach, which relies on physiological recordings of individuals' biometric responses. The second is the explicit approach: self-reporting of the viewers' or users' affects during their exposure to the stimuli. While modern technologies in the form of sensors and specialized devices that can log biometric changes (e.g., heart rate, breath rate, respiration patterns, skin patterns, electroencephalogram (EEG), and galvanic skin responses or GSR) are very promising [49, 75], they are often invasive, expensive, and the meaning of the data recorded remains unclear [49]. As for the explicit measurement methods, indicators such as final applause to a show, post-show surveys, or interviews are most commonly used. While less costly and invasive, the explicit approach could capture somewhat skewed responses, as viewers' responses are normally affected by their peak emotion and the emotions experienced at the end of the show (i.e., the 'peak-end' effect) [25, 48, 49, 67]. More recent research relies on participants' self-recordings of their affects using different forms of sliders. Latulipe et al. [49] developed two self-reporting scales: the Love-Hate scale (LH scale) and the Emotional Reaction scale (ER scale). The LH scale was implemented on a slider that had the labels 'Love it' and 'Hate it' at the very ends and neutral in the middle. The ER slider, on the other hand, ranged from 'No Emotional Reaction' to 'Strong Emotional Reaction'. The researchers wanted to relate self-reported emotions, recorded by participants while watching a video using one of the LH and ER scales, to biometric data collected using GSR. They found a strong correlation between the ER scale and GSR (r = .43, p < .001); the absolute value of the LH scale was also strongly correlated with the GSR data. In this and similar studies, the researchers used continuous reporting of emotions, where participants rated their emotions all through the video. Another approach is to chunk the video into meaningful segments and report emotions for each segment [49].
+
+### 2.3 Personalization in Persuasive Technology
+
+Recent research indicates that the one-size-fits-all model of persuasive technology is not as effective for persuading users to change attitudes or behavior. Instead, the focus is shifting towards personalized persuasive systems, which often explore the effect of personalities on persuasion level [9, 20, 26, 40, 44, 45, 77, 79]. The five-factor (or Big Five) model of personality offers five broad personality traits: extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. This is the most widely used model for personality assessment across diverse disciplines [28, 30, 45, 53, 69, 77]; it has been repeatedly validated, and its predictive power has been confirmed [82]. Halko et al. [37] explored the link between personality traits and people's perception of persuasive technologies that adapt different persuasive strategies. They categorized their participants based on the Big Five personality traits, and studied the effect of eight persuasive techniques. These eight techniques were grouped into four categories by putting complementary persuasive strategies together. The Instruction style category, for example, included authoritative and non-authoritative, Social feedback included cooperative and competitive, Motivation type included extrinsic and intrinsic, and Reinforcement type included negative and positive reinforcement. They found correlations between personality traits and the persuasive strategies. For example, their results showed that people who score high in neuroticism tend to prefer negative reinforcement (i.e., removal of aversive stimuli) in the reinforcement category. As for social feedback, neurotic people do not prefer cooperating with others to achieve their goals.
+
+## 3 STUDY DESIGN
+
+To reiterate, we explored the influence of existing health-related Data Videos on users' willingness to reconsider their health-related behavior. We focused on three health-related topics (physical activity, sleep, and diet). We examined personality differences as a factor and whether personality contributes to the affective experience of viewers watching Data Videos. We also examined some factors related to participants' appraisal of the videos' content in relation to their potential to change their attitude. For this exploration, we developed an online study which contained questions and the Data Video stimuli.
+
+### 3.1 Study Administration
+
+An online study was created using Qualtrics, and administered through Amazon Mechanical Turk (M-Turk). To ensure data quality, we recruited participants with approval ratings higher than 95% and who had completed a minimum of 1000 tasks prior to our study. All participants received monetary compensation (\$2.24 US) in compliance with the study ethics approval and M-Turk payment terms. We restricted participant recruitment to the US and Canada to help ensure a good command of English. The study started with a consent form, and provided participants an overview of the objective of the study, and the study instructions. The study consisted of survey questions before and after the presentation of three Data Videos on health-related topics.
+
+### 3.2 Data Video Selection
+
+We collected Data Videos focusing on three general health-related topics: physical activity, healthy sleep, and healthy diet. Our aim was to collect generally good Data Videos, as our ultimate goal is to create guidelines to produce Data Videos. There are no quality criteria for Data Videos yet, as empirical research in this area is scarce, so we systematically explored existing Data Videos with guidance from Amini et al.'s [6] study. Our careful video selection process, described below, yielded overall consistency across all the videos used (see Fig. 5).
+
+- First, two researchers collected more than 100 Data Videos using relevant keywords such as 'healthy diet', 'dangers of not having enough sleep', 'importance of exercise', etc.
+
+Table 1: Big Five Inventory 10-items (BFI-10) developed by Rammstedt [66]
+
+| I see myself as someone who ... | |
+| --- | --- |
+| ... is reserved (R) ... is outgoing, sociable | Extraversion |
+| ... is generally trusting ... tends to find fault with others (R) | Agreeableness |
+| ... tends to be lazy (R) ... does a thorough job | Conscientiousness |
+| ... is relaxed, handles stress well (R) ... gets nervous easily | Neuroticism |
+| ... has few artistic interests (R) ... has an active imagination | Openness |
+
+(R) = item is reverse-scored.
+
+A Likert scale (1: Strongly Disagree to 5: Strongly Agree) is used.
+
+- We then removed videos that did not follow the Data Video definition found in [6] or contained erroneous information.
+
+- Remaining videos were coded by two researchers for length, source credibility, information accuracy, etc. To ensure the quality and accuracy of the information provided in the videos, we increased the score of videos produced by professional and reputable health-related organizations, companies, magazines, research centers, and websites (e.g., WHO, Tylenol Official, The Guardian, British Heart Foundation, and UK Mental Health) and with high numbers of views (greater than 25,000 views) on YouTube.com or Vimeo.com.
+
+- The final list consisted of nine videos; three on each topic. Videos were checked by three researchers for suitability for the study. See Appendix A for the list of Data Videos used.
+
+### 3.3 Data Collection Instruments
+
+#### 3.3.1 Demographics
+
+The first part of the study asked demographic questions (e.g., age, sex, first language) followed by questions about participants' interest levels on the three health topics (i.e., physical activity, diet, and sleep).
+
+#### 3.3.2 Personality Traits
+
+The next section assessed participants' personality traits. A version of the Big-Five Inventory with 10 questions [66] was used because speed was crucial due to the online nature of the survey (see Table 1). This version of the scale is widely used in personalized technologies [9, 63] that tailor their contents based on the users' personality and in which personality assessment needs to be quick. Although it is relatively short compared to the standard multi-item instruments, the 10-item version has been repeatedly examined and verified. According to Gosling et al. [33] it has "reached an adequate level" in terms of predictive power and convergence with full scales in self, observer, and peer responses. ${}^{1}$
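As a concrete illustration of how such a short instrument is scored, the sketch below (our own illustrative Python, not part of the study materials; the item keys are hypothetical names) computes the five trait scores from Table 1 by averaging each trait's two items, reversing the (R) items on the 1-5 Likert scale (a response r maps to 6 - r).

```python
# Illustrative BFI-10 scoring sketch. Item keys are hypothetical shorthand
# for the Table 1 items; "_R" marks the reverse-scored item of each pair.
TRAIT_ITEMS = {
    "Extraversion":      ("reserved_R", "outgoing"),
    "Agreeableness":     ("finds_fault_R", "trusting"),
    "Conscientiousness": ("lazy_R", "thorough"),
    "Neuroticism":       ("relaxed_R", "nervous"),
    "Openness":          ("few_artistic_R", "imagination"),
}

def score_bfi10(responses: dict) -> dict:
    """Mean of the two items per trait, reversing (R) items on the 1-5 scale."""
    scores = {}
    for trait, (rev_item, item) in TRAIT_ITEMS.items():
        reversed_value = 6 - responses[rev_item]  # 1<->5, 2<->4, 3 stays 3
        scores[trait] = (reversed_value + responses[item]) / 2
    return scores

example = {"reserved_R": 2, "outgoing": 4, "finds_fault_R": 1, "trusting": 5,
           "lazy_R": 3, "thorough": 4, "relaxed_R": 4, "nervous": 2,
           "few_artistic_R": 5, "imagination": 3}
print(score_bfi10(example))  # e.g., Extraversion = ((6-2)+4)/2 = 4.0
```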
+
+#### 3.3.3 Perceptions of Own Health
+
+Participants were asked to answer questions about their own diet, sleep, and physical activity in general (e.g., "Generally speaking, I am physically active"), using a 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree).
+
+Table 2: Negative Affects Question Items
+
+| Please read each statement carefully, and select the appropriate answer that best describes how you feel right now. |
+| --- |
+| I feel anxious. I am relaxed. (R) I am worried. |
+
+(R) = item is reverse-scored.
+
+A Likert scale (1: Not at all to 8: Extremely) is used.
+
+#### 3.3.4 Affective State Self-Reports
+
+Participants' negative affects, focusing on their worries, anxiousness, and (not being) relaxed, were assessed using three questions, four times: first, prior to exposure to any of the videos (for a baseline value), and then right after viewing each video, to examine the affective influence of that video. We used an 8-point Likert scale to report affect intensity (1 = Not at all; 8 = Extremely; see Table 2). We were inspired by [80] and followed their approach of not having a midpoint in the scale, as we focus on negative affects.
+
+We chose to focus on negative affects for two main reasons. First, as noted earlier, the model of behavior change [34] suggests that feeling worried or at risk is the first step towards attitude change. Second, the majority of the health Data Videos we looked at contained unpleasant facts and threatening messages.
+
+#### 3.3.5 Persuasive Potential Questionnaire (PPQ)
+
+This study explored the effect of Data Videos at the perceptual level as a preliminary step in investigating Data Videos. More specifically, we focused on participants' motivation and willingness to change their behavior as opposed to their actual behavior change. While exploring behavior changes would have been useful, it requires a longitudinal study, which was not possible with our current restrictions due to the pandemic. Therefore, to measure the potential of Data Videos, the Persuasive Potential Questionnaire (PPQ) [54] was adopted and adjusted to fit our context. PPQ is a subjective measurement tool that allows us to assess the potential of a persuasive system. The scale is composed of 15 question items, reported using a 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree), grouped under three dimensions: 1) individuals' susceptibility to persuasion (SP), 2) the general persuasive potential of the system (GPP), which measures the participants' perception of the system's ability to persuade, and 3) the individual persuasive potential of the user (IPP), which measures participants' assessment of the persuasive potential of a system they tried (see Table 3). We did not include the IPP dimension, as these questions (e.g., "I think I will use such a program in the future") are irrelevant to our research goal. Thus, we used the first two dimensions of PPQ. Since the SP dimension measures personal traits that are independent of the system, we asked participants to respond to it prior to the video viewing. Participants responded to the GPP questions after the video viewing to report their perception of the potential persuasive ability of each video.
+
+### 3.4 Overall Study Progression
+
+First, participants answered demographic questions and SP questions in Table 3, followed by the 10-item personality measure (See Table 1).
+
+Participants then watched three Data Videos, one per health topic, each randomly selected from the set of three videos for that topic (see Figure 2), and answered questions after each. The order in which the topics were presented was also randomized for two reasons: 1) to avoid any priming effect that might occur due to the topic relevance to the participant, and 2) to cancel out potential effects associated with features of each video. Participants were given full control to replay or pause the video. Our instructions made it clear that the participants could not skip to the next section (i.e., question) unless sufficient time (i.e., the length of the video) had elapsed.
+
+---
+
+${}^{1}$ According to a Google Scholar search, this 10-item scale has been cited in 2902 articles at the time of writing (September 2020).
+
+---
+
+Table 3: Adjusted Persuasive Potential Questionnaire
+
+| | |
+| --- | --- |
+| SP | 1. When I hear others talking about something, I often re-evaluate my attitude toward it. 2. I do not like to be influenced by others. 3. Persuading me is hard even for my close friends. 4. When I am determined, no one can tell me what to do. |
+| GPP | I feel that... 5. the video would make its viewer change their behaviors. 6. the video has the potential to influence its viewer. 7. the video gives the viewer a new behavioral guideline. |
+
+A Likert scale (1: Strongly Disagree to 7: Strongly Agree) is used.
+
+| 3 Physical Activity videos | 3 Sleep videos | 3 Diet videos |
+| --- | --- | --- |
+| PA Video 1 | Sleep Video 1 | Diet Video 1 |
+| PA Video 2 | Sleep Video 2 | Diet Video 2 |
+| PA Video 3 | Sleep Video 3 | Diet Video 3 |
+
+Figure 2: We had 9 videos in total: 3 videos per topic. Each participant watched 3 videos in total, 1 video on each topic (e.g., Diet Video 2, PA Video 1, then Sleep Video 2). Both the order of the topics and the selection of the video within each topic were randomized.
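The randomization described above can be sketched as follows. This is our own illustrative Python, not the authors' Qualtrics implementation; the video names simply mirror Figure 2.

```python
# Illustrative sketch of the per-participant randomization: shuffle the
# order of the three topics, then pick one of that topic's three videos.
import random

TOPICS = {
    "Physical Activity": ["PA Video 1", "PA Video 2", "PA Video 3"],
    "Sleep": ["Sleep Video 1", "Sleep Video 2", "Sleep Video 3"],
    "Diet": ["Diet Video 1", "Diet Video 2", "Diet Video 3"],
}

def assign_videos(rng: random.Random) -> list:
    """Return one randomly chosen video per topic, in a shuffled topic order."""
    topics = list(TOPICS)
    rng.shuffle(topics)                              # randomize topic order
    return [rng.choice(TOPICS[t]) for t in topics]   # one of three videos per topic

playlist = assign_videos(random.Random(42))
print(playlist)  # three videos, one per topic, in random order
```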
+
+After watching each video:
+
+1. Participants answered the three Affect-related questions. This helped us to capture participants' affective state influenced by the video (see Table 2).
+
+2. Participants answered three questions regarding their appraisal of the video content (Novelty, Clarity, and Usefulness of the information; e.g., "The information provided by the video was useful to me") using 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree). Their overall liking of the video was also assessed.
+
+3. Participants completed the questions for the General Persuasive Potential (GPP) of the video (see Table 3).
+
+4. Finally, participants indicated if they had any health issues that would prevent them from following the video's advice.
+
+After completing these four steps for each video, participants were asked to solve a one-minute, 12-piece jigsaw puzzle. This step was created to help participants neutralize their affective state between videos by focusing on a task. After the puzzle, participants repeated the four steps for the next video. In total, each participant watched three videos and did two puzzles (one puzzle between the 1st and 2nd video, and another between the 2nd and 3rd video). After the final video, participants were directed to a final page that thanked them for their participation, and their work was submitted for review and payment following M-Turk standard practices.
+
+### 3.5 Hypotheses
+
+In this study we had the following five hypotheses:
+
+${H}_{1}$ : Watching Data Videos will increase participants’ negative affects.
+
+${H}_{2}$ : There are correlations between Personality traits and Negative Affects.
+
+- Neurotic people tend to be anxious and are more likely to feel threatened by ordinary situations [78]; thus, they could experience more negative affects.
+
+- Extroverts and people open to experience are characterized by their happiness and optimism. Thus, we do not expect a correlation between negative affects and these traits (i.e., lower susceptibility to threatening messages).
+
+- Conscientious individuals tend to be cautious. Thus, they could become worried about their own health after watching the video. Alternatively, they might be inclined to process threatening information cognitively, and as a result, they might not experience intense negative affects.
+
+${H}_{3}$ : Negative affects predict potential attitude change, measured by PPQ.
+
+${H}_{4}$ : There is a link between personality traits and potential attitude change.
+
+${H}_{5}$ : There is a link between video appraisal factors and potential attitude change.
+
+## 4 RESULTS
+
+On average, participants took 26 minutes to complete the study. Data-fitting assumptions for each analysis were checked and nonparametric options were used whenever appropriate.
+
+### 4.1 Participants
+
+We recruited 102 participants (68 males, 33 females, and one participant who preferred not to say) with ages ranging between 21 and 70 (M = 37.29, SD = 12.01). 60% of the participants identified themselves as white, 20% preferred not to mention their ethnicity, and the rest were Hispanic, Black, Asian, and American. 100 participants reported their first language was English, and 83.3% of them had an education level higher than a Bachelor's degree.
+
+### 4.2 Data Quality Control
+
+A verifiable (i.e., Gotcha) question was included in the survey. This question was designed to be readily solvable as long as the participants actually read it ("How many words do you see in this sentence?"). After excluding participants who failed this check, 78 valid cases remained for the analyses. When appropriate, we further filtered out responses when participants responded "Yes" to the following statement ("I have health issues that prevent me from following the advice provided in the video") for each of the three topics (i.e., Physical Activity, Sleep, Diet). ${}^{2}$
+
+### 4.3 Data Videos and Negative Affects
+
+To explore ${H}_{1}$, a Wilcoxon Signed Ranks Test examined whether viewing Data Videos influenced the levels of participants' negative affects, comparing scores prior to the video viewing (Mdn = 2.67) and after the first video viewing (Mdn = 2.33). Contrary to our expectation, participants' negative affect was not heightened after they viewed a video (Z = -.101, p = .919). Note that the video order was randomized; thus, this lack of effect cannot be attributed to one specific video.
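A minimal sketch of this analysis, with simulated rather than study data and the common normal approximation for the Z statistic (our own illustration, not the authors' code):

```python
# Paired Wilcoxon signed-rank test: rank the absolute before/after
# differences, sum the ranks of positive differences, and convert to Z
# via the normal approximation. Tie handling is omitted for brevity.
import numpy as np

def wilcoxon_signed_rank_z(before: np.ndarray, after: np.ndarray) -> float:
    """Return the Z statistic of the paired Wilcoxon signed-rank test."""
    d = after - before
    d = d[d != 0]                       # discard zero differences
    n = len(d)
    order = np.abs(d).argsort()
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)  # rank |d| from smallest to largest
    w_plus = ranks[d > 0].sum()         # sum of ranks of positive differences
    mean_w = n * (n + 1) / 4
    sd_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean_w) / sd_w

rng = np.random.default_rng(1)
baseline = rng.integers(1, 9, size=78).astype(float)   # simulated 8-point Likert scores
after_video = baseline + rng.normal(0, 0.5, size=78)   # simulated post-video scores

z = wilcoxon_signed_rank_z(baseline, after_video)
print(f"Z = {z:.3f}")   # |Z| < 1.96 would mirror the paper's null result
```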
+
+---
+
+${}^{2}$ This choice was made to reduce potential confounds (i.e., the participants might not be willing to change their attitude in response to the video because of their health issues). Four participants responded "Yes" after watching a video related to physical activity, and four different participants responded "Yes" after watching a video related to sleep, and finally, three participants responded "Yes" after watching a video related to diet. A pairwise deletion method was applied to this selection throughout the analyses.
+
+---
+
+### 4.4 Personality Traits and Negative Affects
+
+To examine ${H}_{2}$, correlations between each personality trait and negative affects were explored. Negative affects were positively correlated with neuroticism, rho(78) = .594, p < .001, and negatively correlated with conscientiousness, rho(78) = -.363, p = .001; no other traits were correlated with negative affects.
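The rank correlations reported here can be sketched as follows (our own illustration with simulated data in which a positive neuroticism link is built in by construction; Spearman's rho for tie-free data is the Pearson correlation of the ranks):

```python
# Spearman rank correlation between a trait score and the negative-affect
# index, computed from scratch with NumPy on simulated, tie-free data.
import numpy as np

def spearman_rho(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman's rho for tie-free data: Pearson correlation of the ranks."""
    rx = x.argsort().argsort()
    ry = y.argsort().argsort()
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(2)
neuroticism = rng.uniform(1, 5, 78)                           # simulated trait scores
negative_affect = 0.8 * neuroticism + rng.normal(0, 0.8, 78)  # positive link by design

rho = spearman_rho(neuroticism, negative_affect)
print(f"rho(78) = {rho:.3f}")  # positive, mirroring the neuroticism finding
```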
+
+### 4.5 Negative Affects and GPP
+
+We examined the link between negative affects and potential attitude change $\left( {H}_{3}\right)$ . For this analysis, we computed an index for affective responses. First, Cronbach's alphas were checked ( ${.73} \leq \alpha \leq {.82}$ ) for participants' affective responses (anxious, relaxed, and worried) per topic (Physical Activity, Sleep, and Diet). ${}^{3}$ Since the alpha levels satisfied our standard (.70) [60], the mean of these three items was computed. The correlations between these means across topics were then investigated. They were all significantly correlated ( ${.810} <$ rhos $< {.830},{ps} < {.001}$ ; see [4]). This implies that if a participant's affective response to one video was negative, they likely experienced negative affects from viewing the other videos as well (i.e., an underlying personal tendency). Thus, the mean across all the topics was used as an index for Negative Affect. The index for GPP was created in the same manner. Cronbach's alphas ranged between .81 and .91 per topic. We further checked whether GPP for one topic (e.g., Physical Activity) was correlated with GPP for the other topics (e.g., Sleep and Diet). They were significantly correlated with each other ${\left( {.555} < \text{ rhos } < {.719},\text{ ps } < {.001}\right) }^{4}$ , and the mean of scores across all the topics was used as a GPP index. Finally, we explored whether overall GPP could be predicted by negative affects with a linear regression analysis. Negative affects predicted GPP, $F\left( {1,{76}}\right) = {4.056}, p = {.048}$ , ${R}^{2}$ change $= {.051},\beta = - {.225}$ .
+
+### 4.6 Personality Traits and Potential Attitude Change
+
+To explore ${H}_{4}$ , we first examined the link between personality traits and individuals' susceptibility to persuasion (SP). Since Cronbach's alpha for the four SP items was .60, we removed the first item (see Table 3) based on its low correlation with the other items $\left( {{ps} \geq {.248}}\right)$ . Thus, the mean of the remaining three items (2, 3, and 4; see Table 3) was used to create an index of SP (Cronbach's alpha $= {.79}$ ; [60]). ${}^{5}$ Agreeableness was positively correlated with SP, $\operatorname{rho}\left( {78}\right) = {.26}, p = {.047}$ . No other links were found. The more agreeable participants were, the more susceptible to persuasion they were, and vice versa.
+
+Next, a linear regression analysis explored the traits as predictors of the GPP index using the stepwise method. Neuroticism was the only predictor of GPP, $F\left( {1,{76}}\right) = {8.179}, p = {.005}$ , ${R}^{2}$ change $= {.097},\beta = - {.306}$ . When individuals were highly neurotic, higher GPP was harder to achieve. Based on this, we focused on neuroticism to explore its influence on viewers' cognitive processing. Thus far, we had found that neuroticism was correlated with negative affects. We then explored whether neuroticism predicted participants' general cognitive tendency even before they had watched the videos. For this, neuroticism was used as a predictor while participants' own health perception for each topic was entered as a dependent variable. Participants' health-related perceptions for all the topics were predicted by neuroticism at a significant level: Physical Activity, $F\left( {1,{65}}\right) = {5.04}, p = {.028}$ , ${R}^{2}$ change $= {.072},\beta = - {.268}$ ; Sleep, $F\left( {1,{64}}\right) = {6.37}, p = {.014}$ , ${R}^{2}$ change $= {.091},\beta = - {.301}$ ; Diet, $F\left( {1,{65}}\right) = {24.87}, p < {.001}$ , ${R}^{2}$ change $= {.277},\beta = - {.526}$ . We suggest this could be explained by the link between thinking style and neuroticism discovered by Zhang [81], who found that neuroticism was linked to a conservative, risk-averse thinking style. In line with that finding, participants who scored high on neuroticism also revealed rather conservative views about their own health status and judged the extent of a video's persuasiveness rather conservatively.
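For a single predictor, the standardized coefficient β reported above equals Pearson's r, and R² is simply β². A minimal ordinary-least-squares sketch makes this relationship concrete; the data and function name below are hypothetical, and real analyses would use a package such as `statsmodels` for F-tests and stepwise selection.

```python
from math import sqrt
from statistics import mean

def simple_regression(x, y):
    """OLS with one predictor.

    Returns (slope, intercept, r_squared, beta), where beta is the
    standardized coefficient (the slope after z-scoring x and y).
    With one predictor, beta equals Pearson's r and r_squared = beta**2.
    """
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    beta = sxy / sqrt(sxx * syy)  # Pearson correlation
    return slope, intercept, beta ** 2, beta
```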
+
+
+
+Figure 3: Liking as a mediator between neuroticism and GPP. Neuroticism originally predicted GPP (p < .005; Step 1). However, when we controlled for the mediator (liking), this relationship disappeared (p > .05; Step 2). This indicates one potential way designers could incorporate personality traits in the persuasive design of Data Videos.
+
+${}^{ * }p < {.05}$ .
+
+${}^{ \star }p < {.005}$ .
+
+Finally, we explored a potential explanation of the underlying dynamics of how neuroticism predicted GPP in our data. Inspired by previous findings, we hypothesized that liking of the video could mediate the link between neuroticism and GPP: neurotic people might not like the video (i.e., judging the video conservatively), and that could, at least partially, explain why their potential behavior change is not expected. To explore this, we followed Baron's mediation analysis [12] again, and a mediation effect was found. Although neuroticism originally predicted GPP, $\beta = - {.306}, t\left( {77}\right) = - {2.86}, p < {.005}$ (see Step 1 in Figure 3), this effect disappeared when liking of the video was controlled for, $\beta = - {.059}, t\left( {77}\right) = - {.781}, p = {.437}$ . This result shows designers why they should consider personality differences in their Data Video development (i.e., personalization). We found it particularly challenging to persuade individuals who are highly neurotic; to guide them to alter their willingness to change their health-related behaviors, providing Data Videos that neurotic individuals like is key.
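The mediation logic above compares the predictor's coefficient with and without the mediator in the model: if the effect shrinks to near zero once the mediator is controlled, mediation is suggested. The two-predictor OLS fit needed for that second step can be sketched by solving the centered normal equations directly. All data and names below are illustrative only; real mediation analyses would use dedicated tooling with significance tests.

```python
from statistics import mean

def ols2(y, x1, x2):
    """OLS coefficients for y = b0 + b1*x1 + b2*x2.

    Solves the 2x2 centered normal equations by Cramer's rule. In the
    mediation context, x1 is the predictor (e.g., neuroticism) and x2
    the candidate mediator (e.g., liking); b1 is the direct effect
    after controlling for the mediator.
    """
    my, m1, m2 = mean(y), mean(x1), mean(x2)

    def cov(u, mu, v, mv):
        return sum((a - mu) * (b - mv) for a, b in zip(u, v))

    s11 = cov(x1, m1, x1, m1)
    s22 = cov(x2, m2, x2, m2)
    s12 = cov(x1, m1, x2, m2)
    s1y = cov(x1, m1, y, my)
    s2y = cov(x2, m2, y, my)
    det = s11 * s22 - s12 ** 2  # assumes predictors are not collinear
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    return my - b1 * m1 - b2 * m2, b1, b2
```

In the toy example in the test, the outcome is driven entirely by the mediator, so the predictor's coefficient collapses to zero once the mediator enters the model, mirroring the "full mediation" pattern in Figure 3.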
+
+### 4.7 Content Appraisal
+
+As an exploratory analysis, we examined how participants perceived the content of the videos (Information Novelty, Information Clarity, Information Usefulness; see Figure 4). Their content evaluation of one video correlated with their evaluation of the rest (.328
+
+Figure 1: Data Videos, or animated infographics, are short videos that present large amounts of data in an engaging narrative format. We selected nine Data Videos focusing on three health-related topics: physical activity, sleep, and diet. This figure shows representative frames from the videos used in the study. For the full list of videos, their links, and more detailed descriptions, refer to Appendices A and B.
+
+## Abstract
+
+Data Videos have the potential to promote healthy behaviors [21]. Using publicly available Data Videos addressing physical activity, sleep, and diet, we explored the persuasive capability of Data Videos through their narrative format and the affective connection they arouse in their viewers' minds. We asked four central questions: (1) do Data Videos increase negative affects in their viewers?; (2) are negative affective responses linked with individuals' personality traits?; (3) can negative affects predict any change in viewers' willingness to improve their health-related behavior?; and finally (4) can personality traits and/or video attributes predict such willingness? An M-Turk study was conducted, whereby participants $\left( {N = {102}}\right)$ watched Data Videos, answered questions about their perceptions, and completed a personality trait questionnaire. Overall, influencing participants' willingness to reconsider their health-related behaviors was more difficult (i.e., they were harder to persuade) when they scored higher on neuroticism. This was, at least partially, because these individuals liked the provided Data Videos less than those who scored low on neuroticism did. Perceived usefulness of the information, along with neuroticism, predicted our participants' willingness to reevaluate their health-related behaviors. Together, these findings show the importance of both leveraging personality traits (i.e., personalization) and improving the general content of Data Videos independent of personalization.
+
+Index Terms: Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI
+
+## 1 Introduction
+
+Behavioral health refers to how individuals' well-being and health are influenced by their own behaviors [2]. Simple and preventable as they sound, behavioral health issues like improper diet or physical inactivity could be linked to many serious consequences such as cardiovascular diseases, obesity, high blood pressure, and even some types of cancer $\left\lbrack {1,{61}}\right\rbrack$ . In fact, not engaging in a sufficient amount of physical activity is a key risk factor for death worldwide [1]. In order to find means to effectively address our behavior-related health challenges in a timely manner, research involving technology will be key.
+
+As behaviors are normally adjustable, people suffering from behavior-induced health issues would benefit if they understood how their seemingly minor behaviors are tied to their serious medical issues. Further, they could be motivated to reevaluate their behaviors if they were guided with concrete behavioral suggestions on how to improve their health. The availability of modern technologies in the form of wearable devices and mobile health apps (mHealth) allows us to collect diverse personal health-related data (e.g., steps taken, calories burnt, and sleep hours) [51]. However, users of these technologies often do not benefit fully from the data they obtain [24]. This is because current data representation approaches generally lack the ability to "convey" the insights to users so they can take action. While current mHealth technologies typically provide statistics in the form of charts and graphs [39, 57], research shows that such methods are less effective in triggering behavior change [46], which is the ultimate goal of such technologies. Statistical representation of personal health data is often passive, difficult to interpret, and not insightful. Thus, users often do not explore these representations fully. Some even claim the way such personal data is presented can be too frustrating or overwhelming for users to understand [57]. This frustration can then lower users' motivation to reexamine their health-related behaviors, and even their motivation to continue using the technology [57]. We believe that the role of modern health-tracking technology can move beyond data collection and presentation, towards presenting data in an insightful way that provides "actionable intelligence" to effectively guide users.
+
+For the effective delivery of health information, we focus on Data Videos in this study. Data Videos, or animated infographics, are short in length, typically under six minutes [36], and provide factual, data-driven information in an engaging narrative format $\left\lbrack {6,7,{47}}\right\rbrack$ . The narrative nature of Data Videos can make complex data easier to digest and act upon, because narratives are a natural way for people to communicate and gain knowledge [7,21]. As Data Videos take a storytelling format, they arouse emotional connections in their viewers' minds as they engage with the story. Research in advertising and marketing demonstrates that affect plays an important role in motivating and convincing viewers and consumers $\left\lbrack {8,{50},{52}}\right\rbrack$ . Based on these findings, we hypothesize that a potential strength of Data Videos can be attributed to their capability to rouse viewers' affect (e.g., people realize how much sugar they consume every day and become motivated to reduce their daily sugar intake because they fear negative consequences). Indeed, to alleviate negative affects, changing one's behavior is logical (e.g., "I don't want to get sick, so I will start exercising"). In this way, our affects can indirectly drive our behaviors. Further, personality differences can play an important role in individuals' emotional responses $\left\lbrack {{13},{35},{42}}\right\rbrack$ as well as their potential behavior modification in response to a persuasive system $\left\lbrack {9,{37},{40},{44},{63},{77}}\right\rbrack$ ; individuals' responses to an emotion-provoking persuasive message vary from person to person. Accordingly, we consider personality differences as a factor in our study.
+
+Another aspect that can play an important role in the persuasive capability of health Data Videos is how the viewers perceive the content of the video (i.e., their content value appraisal). Could they follow the content of the video easily and clearly? Did the video provide them with new and/or useful information?
+
+The problem we address in this paper is how to improve the persuasive potential of health-related Data Videos, to effectively influence viewers' willingness to alter their health-related behavior. Our focus is on two dimensions: 1) personalizing Data Videos to viewers' unique personalities, and 2) improving the overall quality of Data Videos in general.
+
+## 2 Related Work
+
+We discuss previous work investigating the effectiveness of Data Videos as a narrative form for communicating data. We highlight relevant behavioral theories as well as studies in HCI and related fields, then turn to the affects associated with narratives and how affects play a role in forming people's attitudes. Finally, as a factor influencing affects, we review studies on the role personality traits play in reactions to a persuasive message, describing studies and strategies used in the persuasive technologies literature.
+
+### 2.1 Data Videos as a Narrative
+
+Data videos are motion graphics that incorporate factual, data-driven information to tell informative and engaging stories with data [5,6]. Data videos are gaining popularity [6] in various fields such as journalism, education, advertising, mass communication, and political campaigns [29,38,47,70-72]. Due to their narrative nature, Data Videos are recognized as one of the seven forms of narrative visualization $\left\lbrack {{19},{71}}\right\rbrack$ . Baber et al. $\left\lbrack {10}\right\rbrack$ define a narrative as a formal structure that constitutes a "sharable" story, as opposed to informal stories, which can be "unstructured" and "ambiguous". A narrative is a series of connected events that constitute a story [71]. The order in which these events are presented in a medium constitutes its narrative structure [5]. Amini et al. [6] examined 50 professionally created Data Videos to learn about their narrative structure. They divided the videos into temporal sections and coded them based on Cohn's [23] theory of visual narrative structure, which categorizes a narrative into four stages: Establisher (E), Initial (I), Peak (P), and Release (R). Amini et al. [6] provided insights regarding the average duration (in percentage) each narrative stage consumes of the total video length, as well as the percentage of time spent on attention cues and data visualizations within each stage. They also pinpointed some narrative structure patterns that are commonly used in Data Videos.
+
+The power of Data Videos comes mainly from their narrative format. Stories can convey information in an engaging way that is more natural, seamless, and effective than text or even pictures [31,39]. A well-told story can convey a large amount of information in a way that viewers find interesting, easy to understand, trustworthy, readily recalled, and easy to make sense of $\left\lbrack {{17},{31},{57}}\right\rbrack$ . The advantage of visual narrative is its ability to present plenty of information in a compact form, compared to text or pictures alone [31]. According to Narrative Transportation Theory, videos can transport and immerse the viewer in a totally different world, with its own locale, characters, situations, and emotions, which can reflect on the viewers' own beliefs, emotions, and intentions $\left\lbrack {{34},{57},{58},{76}}\right\rbrack$ . Furthermore, a plethora of psychological theories support the persuasive power of narrative. The Extended Elaboration Likelihood Model (E-ELM) argues that as people indulge in a narrative, with all its cues and stimuli, their cognitive processing of the narrative obstructs any counterarguments to the presented message $\left\lbrack {{18},{74}}\right\rbrack$ , making the message more persuasive even for those who are difficult to persuade otherwise [73]. Furthermore, as per the Entertainment Overcoming Resistance Model (EORM), the entertaining aspect of a narrative also plays a role in reducing cognitive resistance to the message presented, and hence facilitates persuasion $\left\lbrack {{22},{56},{57}}\right\rbrack$ .
+
+Despite the great potential of, and the increasing demand for, Data Videos for information communication, it was not until recently that researchers began to empirically investigate their building blocks, components, and narrative characteristics [6]. In a recent study aimed at exploring the persuasive power of Data Videos, Choe et al. [21] introduced a new class of Data Videos called Persuasive Data Videos, or PDVs [21]. This genre of Data Videos incorporates persuasive elements inspired by and drawn from the Persuasive System Design Model [62]. The authors studied how incorporating such persuasive elements in a Data Video could improve its potential persuasion level [21]. Their study revealed that their PDVs had higher persuasive potential than regular Data Videos.
+
+Amini et al. [7] examined the effect of using pictographs and animation, two commonly used techniques in data videos [7]. They found that the use of such techniques enhanced the viewers' understanding of data insights while boosting their engagement. They concluded that the strength of pictographs can be attributed to their ability to trigger more emotions in the viewers, while the animation strengthens the intensity of such emotions.
+
+### 2.2 Affects and Data Videos
+
+This leads us to an important aspect of Data Videos: affects. Research shows that viewers' preference for multimedia, be it a performing art, an internet video, or even a music video, is highly dependent on their arousal level and the intensity of their affects towards the viewed media $\left\lbrack {{11},{75}}\right\rbrack$ . Past studies assumed that TV viewers liked to watch shows that elicit positive emotions as opposed to negative emotions [32]. However, later research showed this may be true for real-life events, but people enjoy watching TV shows that evoke fear, anger, or sadness [59]. Bardzel et al. [11] examined the intensity and valence of viewers' affects, as well as their ratings of internet videos. The results showed a correlation between the affects' intensity and the liking of the video [11]. As for the valence (i.e., positive or negative), the study showed that it is not the presence or absence of certain affects, be they negative or positive, that influenced the rating of the video. Rather, it is the emotional arc that leaves the viewer emotionally resolved, and hence liking the video, even if it started with negative emotions [11]. In fact, health-related videos and campaigns are often designed in a way that elicits negative affects such as fear, worry, and anxiety. This is because health-promoting messages normally present the negative consequences of not following a healthy behavior or of engaging in an unhealthy behavior (e.g., if you do not exercise you will become obese, look older, and be at risk of diabetes, high blood pressure, and cancer; or if you smoke, you will look like the picture on the tobacco box), as well as alarming statistics on how many people are suffering from those consequences. Such strategies have been studied, approved, and even recommended for public health campaigns such as anti-smoking campaigns $\left\lbrack {{14},{64}}\right\rbrack$ . Theoretical studies in health risk messaging design suggest that a certain level of threat is "required" for the message to be effective, while excessive levels of threat could backfire [55]. This is likely the reason that health-related Data Videos frequently contain alarming messages.
+
+#### 2.2.1 The Role of Affects in Attitude and Behavior Change
+
+Affects play an important role in the appeal, as well as the persuasive power, of media [3, 15, 50]. According to behavioral theories in psychology, some of our attitudes have a cognitive basis while others have an affective basis $\left\lbrack {{45},{65}}\right\rbrack$ . Affective attitudes emerge from our feelings towards certain topics or ideas. Some attitudes are influenced relatively easily through affects or emotions, while others through logic and facts $\left\lbrack {{45},{65}}\right\rbrack$ . The Dual Process Model suggests two routes to persuasion: central and peripheral. The central route is the cognitive route, in which the receiver of a message is willing and able to cognitively process its ideas $\left\lbrack {{18},{65}}\right\rbrack$ . In contrast, peripheral-route processing is triggered when the receiver lacks the motivation or ability to logically process cues in the message and decides to agree with the message based on its emotional appeal (e.g., emotions triggered by the look or smell, but not by the logic) $\left\lbrack {{18},{50}}\right\rbrack$ . For instance, one might purchase a car based on its gas emissions, cost, and functions (central route) or because of the way it looks (peripheral route). In sum, research indicates that both cognition and affect are heavily involved in persuasion.
+
+In the field of marketing, an area focusing primarily on persuading and guiding viewers to adopt a certain service or commodity, a wide array of studies has focused on the kinds of affects evoked by ads [41] and how they shape viewers' attitudes, in order to improve the persuasive power of ads [16]. Models for behavior change, such as the health belief model $\left\lbrack {{43},{68}}\right\rbrack$ , support the idea that negative affects (e.g., feeling worried or at risk) are the first step towards behavior modification. That is, people need to recognize they are at risk in order to be willing or motivated to change their behavior. Dunlop et al. [27] examined responses to health-promoting mass-media messages and found that feeling at risk was a significant predictor of participants' intention to change their attitude. Additionally, and partly related to this model, health-related Data Videos almost always include some negative information (e.g., negative outcomes of lack of sleep).
+
+#### 2.2.2 Measuring Affects
+
+There is a wealth of research in diverse fields such as psychology, advertising, political science, and HCI on measuring viewers' affective responses to videos, advertisements, or computing applications. When it comes to measuring affects, studies normally follow one of two approaches, or a mix of both. The first is the implicit approach, which relies on physiological recordings of individuals' biometric responses. The second is the explicit self-reporting of viewers' or users' affects during their exposure to the stimuli. While modern technologies in the form of sensors and specialized devices that can log biometric changes (e.g., heart rate, breath rate, respiration patterns, skin patterns, electroencephalogram (EEG), and galvanic skin response or GSR) are very promising $\left\lbrack {{49},{75}}\right\rbrack$ , they are often invasive and expensive, and the meaning of the data recorded remains unclear [49]. As for the explicit measurement methods, indicators such as final applause to a show, post-show surveys, or interviews are most commonly used. While less costly and invasive, the explicit approach can capture somewhat skewed responses, as viewers' responses are normally affected by their peak emotion and the emotions experienced at the end of the show (i.e., the 'peak-end' effect) [25, 48, 49, 67]. More recent research relies on participants' self-recordings of their affects using different forms of sliders. Latulipe et al. [49] developed two self-reporting scales: the Love-Hate scale (LH scale) and the Emotional Reaction scale (ER scale) [49]. The LH scale was implemented on a slider with the labels 'Love it' and 'Hate it' at the ends and neutral in the middle. The ER slider, on the other hand, ranged from 'No Emotional Reaction' to 'Strong Emotional Reaction'. The researchers related self-reported emotions, recorded by participants while watching a video using one of the LH and ER scales, to biometric data collected using GSR, and found a strong correlation between the ER scale and GSR (r = .43; p < .001). The absolute value of the LH scale was also strongly correlated with the GSR data. In this and similar studies, participants reported their emotions continuously throughout the video. Another approach is to chunk the video into meaningful segments and report emotions for each segment [49].
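The r values cited here are Pearson correlations between two time-aligned series (continuous slider ratings and a biometric trace). A minimal sketch of that computation follows; the data are hypothetical and both series are assumed to be sampled at the same time points.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length series,
    e.g., ER-slider values vs. a GSR trace sampled at the same times."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)
```

Note that correlating the *absolute value* of a bipolar scale like LH with arousal data, as described above, discards valence and keeps only intensity.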
+
+### 2.3 Personalization in Persuasive Technology
+
+Recent research indicates that the one-size-fits-all model of persuasive technology is not as effective for persuading users to change attitudes or behavior. Instead, the focus is shifting towards personalized persuasive systems, which often explore the effect of personality on persuasion level $\left\lbrack {9,{20},{26},{40},{44},{45},{77},{79}}\right\rbrack$ . The five-factor (or Big Five) model of personality offers five broad personality traits: extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. It is the most widely used model for personality assessment across diverse disciplines $\left\lbrack {{28},{30},{45},{53},{69},{77}}\right\rbrack$ ; it has been repeatedly validated, and its predictive power has been confirmed [82]. Halko et al. [37] explored the link between personality traits and people's perception of persuasive technologies that adopt different persuasive strategies. They categorized their participants based on the Big Five personality traits and studied the effect of eight persuasive techniques, grouped into four categories of complementary strategies: Instruction style included authoritative and non-authoritative, Social feedback included cooperative and competitive, Motivation type included extrinsic and intrinsic, and Reinforcement type included negative and positive reinforcement. They found correlations between personality traits and the persuasive strategies. For example, their results showed that people who score high in neuroticism tend to prefer negative reinforcement (i.e., removal of aversive stimuli) in the reinforcement category and, in terms of social feedback, do not prefer cooperating with others to achieve their goals.
+
+## 3 Study Design
+
+To reiterate, we explored the influence of existing health-related Data Videos on users' willingness to reconsider their health-related behavior. We focused on three health-related topics (physical activity, sleep, and diet). We examined personality differences as a factor and whether personality contributes to the affective experience of viewers watching Data Videos. We also examined some factors related to participants' appraisal of the videos' content in relation to their potential to change their attitude. For this exploration, we developed an online study which contained questions and the Data Video stimuli.
+
+### 3.1 Study Administration
+
+An online study was created using Qualtrics and administered through Amazon Mechanical Turk (M-Turk). To ensure data quality, we recruited participants with approval ratings higher than 95% who had completed a minimum of 1000 tasks prior to our study. All participants received monetary compensation ($2.24 US) in compliance with the study ethics approval and M-Turk payment terms. We restricted participant recruitment to the US and Canada to help ensure a good command of English. The study started with a consent form, an overview of the objective of the study, and the study instructions. It consisted of survey questions before and after the presentation of three Data Videos on health-related topics.
+
+### 3.2 Data Video Selection
+
+We collected Data Videos focusing on three general health-related topics: physical activity, healthy sleep, and healthy diet. Our aim was to collect generally good Data Videos, as our ultimate goal is to create guidelines for producing Data Videos. There are no established quality criteria for Data Videos yet, since empirical research in this area is scarce, so we systematically explored existing Data Videos with guidance from Amini et al.'s [6] study. Our careful video selection process, described below, yielded overall consistency across all the videos used (see Fig. 5).
+
+ * First, two researchers collected more than 100 Data Videos using relevant keywords such as 'healthy diet', 'dangers of not having enough sleep', 'importance of exercise', etc.
+
+Table 1: Big Five Inventory 10-item scale (BFI-10) developed by Rammstedt [66]
+
+| I see myself as someone who ... | Trait |
+| --- | --- |
+| ... is reserved (R) / ... is outgoing, sociable | Extraversion |
+| ... is generally trusting / ... tends to find fault with others (R) | Agreeableness |
+| ... tends to be lazy (R) / ... does a thorough job | Conscientiousness |
+| ... is relaxed, handles stress well (R) / ... gets nervous easily | Neuroticism |
+| ... has few artistic interests (R) / ... has an active imagination | Openness |
+
+$\left( R\right) =$ item is reverse-scored.
+
+A Likert scale (1: Strongly Disagree to 5: Strongly Agree) is used.
+
+ * We then removed videos that did not follow the Data Video definition found in [6] or contained erroneous information.
+
+ * The remaining videos were coded by two researchers for length, source credibility, information accuracy, etc. To ensure the quality and accuracy of the information provided, we scored videos higher when they were produced by professional and reputable health-related organizations, companies, magazines, research centers, and websites (e.g., WHO, Tylenol Official, The Guardian, British Heart Foundation, and UK Mental Health) and when they had high view counts (greater than 25,000) on YouTube.com or Vimeo.com.
+
+ * The final list consisted of nine videos; three on each topic. Videos were checked by three researchers for suitability for the study. See Appendix A for the list of Data Videos used.
+
+### 3.3 Data Collection Instruments
+
+#### 3.3.1 Demographics
+
+The first part of the study asked demographic questions (e.g., age, sex, first language) followed by questions about participants' interest levels on the three health topics (i.e., physical activity, diet, and sleep).
+
+#### 3.3.2 Personality Traits
+
+The next section assessed participants' personality traits. A 10-question version of the Big Five Inventory [66] was used (see Table 1), because speed was crucial given the online nature of the survey. This version of the scale is widely used in personalized technologies $\left\lbrack {9,{63}}\right\rbrack$ that tailor their contents based on users' personality and in which personality assessment needs to be quick. Although it is relatively short compared to the standard multi-item instruments, the 10-item version has been repeatedly examined and verified. According to Gosling et al. [33], it has "reached an adequate level" in terms of predictive power and convergence with full scales in self, observer, and peer responses. ${}^{1}$
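Scoring the BFI-10 involves reverse-scoring half of the items (marked (R) in Table 1) and averaging each trait's two items. The sketch below illustrates that arithmetic on a 5-point scale; the function names and the example scores are ours, not part of the published scoring key.

```python
def reverse(score, scale_max=5):
    """Reverse-score a Likert item: 1 <-> scale_max, 2 <-> scale_max-1, etc."""
    return scale_max + 1 - score

def trait_score(plain_item, reversed_item, scale_max=5):
    """A BFI-10 trait is the mean of one plain and one reverse-scored item
    (e.g., Extraversion = mean of "outgoing" and reversed "reserved")."""
    return (plain_item + reverse(reversed_item, scale_max)) / 2
```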
+
+#### 3.3.3 Perceptions of Own Health
+
+Participants were asked to answer questions about their own diet, sleep, and physical activity in general (e.g., "Generally speaking, I am physically active"), using a 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree).
+
+Table 2: Negative Affects Question Items
+
+Please read each statement carefully, and select the appropriate answer that best describes how you feel right now.
+
+1. I feel anxious.
+2. I am relaxed. (R)
+3. I am worried.
+
+(R) = item is reverse-scored. A Likert scale (1: Not at all to 8: Extremely) is used.
+
+§ 3.3.4 AFFECTIVE STATE SELF-REPORTS
+
+Participants' negative affects, focusing on their worry, anxiousness, and (not) being relaxed, were assessed four times using three questions: first prior to exposure to any of the videos (for a baseline value), and then right after viewing each video, to examine each video's affective influence. We used an 8-point Likert scale to report affect intensity (1 = not at all; 8 = extremely; see Table 2). Inspired by [80], we followed their approach of omitting a midpoint from the scale, as we focus on negative affects.
+
+We chose to focus on negative affects for two main reasons. First of all, as noted earlier, the model of behaviour change [34] suggests that feeling worried or at risk is the first step towards attitude change. Second, the majority of health Data Videos that we looked at contained unpleasant facts and threatening messages.
+
+§ 3.3.5 PERSUASIVE POTENTIAL QUESTIONNAIRE (PPQ)
+
+This study explored the effect of Data Videos at the perceptual level as a preliminary step in investigating Data Videos. More specifically, we focused on participants' motivation and willingness to change their behavior, as opposed to their actual behavior change. While exploring behavior change would have been useful, it requires a longitudinal study, which was not possible under our restrictions due to the pandemic. Therefore, to measure the potential of Data Videos, the Persuasive Potential Questionnaire (PPQ) [54] was adopted and adjusted to fit our context. The PPQ is a subjective measurement tool for assessing the potential of a persuasive system. The scale is composed of 15 question items, reported on a 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree), grouped under three dimensions: 1) individuals' susceptibility to persuasion (SP); 2) the general persuasive potential of the system (GPP), which measures participants' perception of the system's ability to persuade; and 3) the individual persuasive potential of the user (IPP), which measures participants' assessment of the persuasive potential of a system they tried (see Table 3). We did not include the IPP dimension, as this set of questions (e.g., "I think I will use such a program in the future") is irrelevant to our research goal; thus, we used the first two dimensions of the PPQ. Since the SP dimension measures personal traits that are independent of the system, participants responded to it prior to the video viewing. Participants responded to the GPP questions after each video viewing to report their perception of the video's persuasive potential.
+
+§ 3.4 OVERALL STUDY PROGRESSION
+
+First, participants answered demographic questions and SP questions in Table 3, followed by the 10-item personality measure (See Table 1).
+
+Participants then watched three Data Videos and answered questions after each. The videos covered the three health topics, randomly selected from the sets of three videos per topic (see Figure 2). The order in which the topics were presented was also randomized for two reasons: 1) to avoid any priming effect that might occur due to the topic relevance to the participant, 2) to cancel out potential effects associated with features of each video. Participants were given full control to replay or pause the video. Our instruction made it clear that the participants could not skip to the next section (i.e., question) unless sufficient time (i.e., the length of the video) had elapsed.
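The randomization scheme above (random topic order, plus a random pick among the three videos per topic) can be sketched as follows; the topic names come from the study, but the helper function and seeding are our own illustration:

```python
import random

# Topic names are from the study; the helper itself is illustrative.
TOPICS = ["Physical Activity", "Sleep", "Diet"]

def assign_videos(seed=None):
    """Return one participant's playlist: a random topic order,
    with one of the three videos picked at random per topic."""
    rng = random.Random(seed)
    order = TOPICS[:]
    rng.shuffle(order)                  # randomize the order of topics
    return [(topic, rng.randint(1, 3))  # randomize the video within each topic
            for topic in order]

playlist = assign_videos(seed=7)
# e.g., [('Sleep', 2), ('Diet', 1), ('Physical Activity', 3)] -- varies by seed
```

Seeding makes each participant's assignment reproducible while still covering every topic exactly once.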
+
+$^{1}$ According to a Google Scholar search, this 10-item scale had been cited in 2,902 articles at the time of writing (September 2020).
+
+Table 3: Adjusted Persuasive Potential Questionnaire
+
+SP
+1. When I hear others talking about something, I often re-evaluate my attitude toward it.
+2. I do not like to be influenced by others.
+3. Persuading me is hard even for my close friends.
+4. When I am determined, no one can tell me what to do.
+
+GPP (I feel that...)
+5. the video would make its viewer change their behaviors.
+6. the video has the potential to influence its viewer.
+7. the video gives the viewer a new behavioral guideline.
+
+A Likert scale (1: Strongly Disagree to 7: Strongly Agree) is used.
+
+| 3 Physical Activity videos | 3 Sleep videos | 3 Diet videos |
+| --- | --- | --- |
+| PA Video 1 | Sleep Video 1 | Diet Video 1 |
+| PA Video 2 | Sleep Video 2 | Diet Video 2 |
+| PA Video 3 | Sleep Video 3 | Diet Video 3 |
+
+Figure 2: We had 9 videos in total, 3 per topic. Each participant watched 3 videos, 1 on each topic (e.g., Diet Video 2, PA Video 1, then Sleep Video 2). Both the order of the topics and the selection of the video within each category were randomized.
+
+After watching each video:
+
+1. Participants answered the three Affect-related questions. This helped us to capture participants' affective state influenced by the video (see Table 2).
+
+2. Participants answered three questions regarding their appraisal of the video content (Novelty, Clarity, and Usefulness of the information; e.g., "The information provided by the video was useful to me") using a 7-point Likert scale (1: Strongly Disagree to 7: Strongly Agree). Their overall liking of the video was also assessed.
+
+3. Participants completed the questions for the General Persuasive Potential (GPP) of the video (see Table 3).
+
+4. Finally, participants indicated if they had any health issues that would prevent them from following the video's advice.
+
+After completing these four steps for each video, participants were asked to solve a one-minute, 12-piece jigsaw puzzle. This step was designed to help participants neutralize their affective state between videos by focusing on a task. After the puzzle, participants repeated the four steps for the next video. In total, each participant watched three videos and did two puzzles (one between the 1st and 2nd videos, and another between the 2nd and 3rd). After the final video, participants were directed to a final page that thanked them for their participation, and their work was submitted for review and payment following M-Turk standard practices.
+
+§ 3.5 HYPOTHESES
+
+In this study we had the following five hypotheses:
+
+$H_1$: Watching Data Videos will increase participants' negative affects.
+
+$H_2$: There are correlations between personality traits and negative affects.
+
+ * Neurotic people tend to be anxious and are more likely to feel threatened by ordinary situations [78]; thus, they could experience more negative affects.
+
+ * Extroverts and people open to experience are characterized by their happiness and optimism; thus, we do not expect a correlation between negative affects and these traits (i.e., lower susceptibility to threatening messages).
+
+ * Conscientious individuals tend to be cautious. Thus, they could become worried about their own health after watching the video. Alternatively, they might be inclined to process threatening information cognitively, and as a result, they might not experience intense negative affects.
+
+$H_3$: Negative affects predict potential attitude change, measured by PPQ.
+
+$H_4$: There is a link between personality traits and potential attitude change.
+
+$H_5$: There is a link between video appraisal factors and potential attitude change.
+
+§ 4 RESULTS
+
+On average, participants took 26 minutes to complete the study. Data-fitting assumptions for each analysis were checked and nonparametric options were used whenever appropriate.
+
+§ 4.1 PARTICIPANTS
+
+We recruited 102 participants (68 male, 33 female, and one who preferred not to say), with ages ranging between 21 and 70 (M = 37.29, SD = 12.01). 60% of the participants identified themselves as white, 20% preferred not to mention their ethnicity, and the rest were Hispanic, Black, Asian, and American. 100 participants reported that their first language was English, and 83.3% had an education level higher than a Bachelor's degree.
+
+§ 4.2 DATA QUALITY CONTROL
+
+A verifiable (i.e., "Gotcha") question was included in the survey. It was designed to be readily solvable as long as participants actually read it ("How many words do you see in this sentence?"); after this filter, 78 valid cases remained for the analyses. When appropriate, we further filtered out responses in which participants answered "Yes" to the statement "I have health issues that prevent me from following the advice provided in the video" for each of the three topics (i.e., Physical Activity, Sleep, Diet).$^{2}$
+
+§ 4.3 DATA VIDEOS AND NEGATIVE AFFECTS
+
+To explore $H_1$, a Wilcoxon Signed Ranks Test examined whether viewing Data Videos influenced participants' levels of negative affect, comparing scores prior to video viewing (Mdn = 2.67) with scores after the first video viewing (Mdn = 2.33). Contrary to our expectation, participants' negative affect was not heightened after viewing a video (Z = -.101, p = .919). Note that the video order was randomized; thus, this lack of effect cannot be attributed to one specific video.
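A minimal sketch of this paired comparison using SciPy's signed-rank test; the affect scores below are made up, since the study's raw data are not available to us:

```python
import numpy as np
from scipy.stats import wilcoxon

# Made-up 8-point negative-affect means (baseline vs. after the first video).
baseline = np.array([2.67, 3.00, 1.33, 2.00, 4.33, 2.67, 1.67, 3.33])
after_v1 = np.array([2.33, 3.33, 1.00, 2.33, 4.00, 2.67, 2.00, 3.00])

# Paired, nonparametric comparison of the two measurement points; a large
# p-value (the paper reports p = .919) means no detectable shift in affect.
stat, p = wilcoxon(baseline, after_v1)
```

The signed-rank test is the standard nonparametric alternative to a paired t-test when Likert-type scores cannot be assumed normal.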
+
+$^{2}$ This choice was made to reduce potential confounds (i.e., participants might not be willing to change their attitude in response to the video because of their health issues). Four participants responded "Yes" after watching a video related to physical activity, four different participants after a video related to sleep, and three after a video related to diet. A pairwise deletion method was applied to this selection throughout the analyses.
+
+§ 4.4 PERSONALITY TRAITS AND NEGATIVE AFFECTS
+
+To examine $H_2$, correlations between each personality trait and negative affects were explored. Negative affects were positively correlated with neuroticism, rho(78) = .594, p < .001, and negatively correlated with conscientiousness, rho(78) = -.363, p = .001; no other traits were correlated with negative affects.
+
+§ 4.5 NEGATIVE AFFECTS AND GPP
+
+We examined the link between negative affects and potential attitude change ($H_3$). For the analysis, we computed an index for affective responses. First, Cronbach's alphas were checked (.73 ≤ α ≤ .82) for participants' affective responses (anxious, relaxed, and worried) per topic (Physical Activity, Sleep, and Diet).$^{3}$ Since the alpha levels satisfied our standard of .70 [60], the mean of these three items was computed. The correlations between these means across topics were then investigated; they were all significantly correlated (.810 < rhos < .830, ps < .001; see [4]). This implies that if a participant's affective response to one video was negative, they likely experienced negative affects from the other videos as well (i.e., an underlying personal tendency). Thus, the mean across all topics was used as an index for Negative Affect. The index for GPP was created in the same manner: Cronbach's alphas ranged between .81 and .91 per topic, and GPP for each topic (e.g., Physical Activity) was significantly correlated with GPP for the other topics (e.g., Sleep and Diet; .555 < rhos < .719, ps < .001),$^{4}$ so the mean across all topics was used as a GPP index. Finally, a linear regression analysis explored whether overall GPP could be predicted by negative affects. Negative affects predicted GPP, F(1, 76) = 4.056, p = .048, R² change = .051, β = -.225.
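The index construction described above (check internal consistency, then average the items) can be illustrated with the standard Cronbach's alpha formula; the response matrix below is synthetic, not the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Standard Cronbach's alpha for a participants-by-items score matrix
    (reverse-scored items must already be flipped):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic 8-point responses to the three affect items
# (anxious, relaxed reverse-scored, worried).
scores = np.array([[2, 3, 2], [5, 6, 5], [1, 1, 2], [4, 4, 5],
                   [7, 6, 7], [3, 2, 3], [6, 5, 6], [2, 2, 1]])
alpha = cronbach_alpha(scores)

# If alpha clears the conventional .70 threshold, average the items
# into a single per-participant index, as done in the paper.
affect_index = scores.mean(axis=1)
```

For this toy matrix the items are highly consistent, so alpha lands well above the .70 criterion cited from [60].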
+
+§ 4.6 PERSONALITY TRAITS AND POTENTIAL ATTITUDE CHANGE
+
+To explore $H_4$, we first examined the link between personality traits and individuals' susceptibility to persuasion (SP). Since Cronbach's alpha for the four SP items was .60, we removed the first item (see Table 3) based on its low correlation with the other items (ps ≥ .248). The mean of the remaining three items (2, 3, and 4; see Table 3) was then used as an index of SP (Cronbach's alpha = .79; [60]).$^{5}$ Agreeableness was positively correlated with SP, rho(78) = .26, p = .047; no other links were found. The more agreeable participants were, the more susceptible to persuasion they were, and vice versa.
+
+Next, a linear regression analysis explored the traits as predictors of the GPP index using the stepwise method. Neuroticism was the only predictor of GPP, F(1, 76) = 8.179, p = .005, R² change = .097, β = -.306: the more neurotic individuals were, the harder it was to achieve higher GPP. Based on this, we focused on neuroticism to explore its effect on viewers' cognitive processing. Thus far, we had found that neuroticism correlated with negative affects; we further explored whether neuroticism predicted participants' general cognitive tendency even before they watched the videos. For this, neuroticism was used as a predictor while participants' own health perception for each topic was entered as the dependent variable. Participants' health-related perceptions for all topics were significantly predicted by neuroticism: Physical Activity, F(1, 65) = 5.04, p = .028, R² change = .072, β = -.268; Sleep, F(1, 64) = 6.37, p = .014, R² change = .091, β = -.301; Diet, F(1, 65) = 24.87, p < .001, R² change = .277, β = -.526. We suggest this could be explained by the link between thinking style and neuroticism discovered by Zhang [81], who found that neuroticism was linked to a conservative and risk-averse thinking style. In line with those findings, participants who scored high on neuroticism also revealed rather conservative views about their own health status and judged the extent of a video's persuasiveness rather conservatively.
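For a single predictor, the quantities reported above (F, R² change, and the standardized β) all follow directly from Pearson's r; a sketch with synthetic scores (not the study data):

```python
import numpy as np

def simple_regression(x, y):
    """One-predictor OLS summary matching the paper's reporting style:
    with a single predictor the standardized beta equals Pearson's r,
    R^2 = r^2, and F(1, n-2) = r^2 * (n-2) / (1 - r^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    r2 = r ** 2
    f = r2 * (n - 2) / (1 - r2)
    return {"beta": r, "R2": r2, "F": f, "df": (1, n - 2)}

# Synthetic neuroticism and GPP scores with a negative association,
# mimicking the direction (not the magnitude) of the reported effect.
neuro = [1, 2, 3, 4, 5, 6, 7, 2, 5, 3]
gpp   = [6, 6, 5, 4, 3, 2, 2, 5, 3, 5]
result = simple_regression(neuro, gpp)   # expect a negative beta
```

A stepwise procedure, as used in the paper, would repeat this while adding predictors one at a time and keeping only those that significantly improve R².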
+
+
+Figure 3: Liking as a mediator between neuroticism and GPP. Neuroticism originally predicted GPP (p < .005; Step 1). However, when we controlled for the mediator (Liking), this relationship disappeared (p > .05; Step 2). This indicates one potential way designers could incorporate personality traits into the persuasive design of Data Videos.
+
+$^{*}p < .05$; $^{\star}p < .005$.
+
+Finally, we explored a potential explanation of the underlying dynamics of how neuroticism predicted GPP in our data. Inspired by previous findings, we hypothesized that liking of the video could mediate the link between neuroticism and GPP: neurotic people might not like the video (i.e., judging it conservatively), which could, at least partially, explain why their potential behavior change is not expected. To explore this, we again followed Baron's mediation analysis [12], and a mediation effect was found. Although neuroticism originally predicted GPP, β = -.306, t(77) = -2.86, p < .005 (see Step 1 in Figure 3), this effect disappeared when liking of the video was controlled for, β = -.059, t(77) = -.781, p = .437. This result helps designers see how they should consider personality differences in their Data Video development (i.e., personalization). We found it particularly challenging to persuade individuals who are highly neurotic; to guide them toward altering their willingness to change their health-related behaviors, providing Data Videos that neurotic individuals like is key.
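The mediation logic (compare the total effect of the predictor with its direct effect once the mediator is added to the model) can be sketched with synthetic data in which liking fully mediates the neuroticism-GPP link; all variable names and coefficients here are illustrative:

```python
import numpy as np

def std_betas(X, y):
    """Standardized OLS coefficients (intercept fitted, then dropped)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(ys)), Xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef[1:]

# Synthetic data where Liking fully mediates Neuroticism -> GPP.
rng = np.random.default_rng(0)
neuro = rng.normal(size=500)
liking = -0.6 * neuro + rng.normal(scale=0.5, size=500)
gpp = 0.7 * liking + rng.normal(scale=0.5, size=500)

total_effect = std_betas(neuro[:, None], gpp)[0]                 # Step 1
with_mediator = std_betas(np.column_stack([neuro, liking]), gpp)  # Step 2
direct_effect = with_mediator[0]
# Mediation is indicated when the direct effect shrinks toward zero
# relative to the total effect once the mediator (liking) is controlled.
```

With full mediation, the predictor's direct effect in Step 2 is indistinguishable from zero, mirroring the pattern reported in Figure 3.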
+
+§ 4.7 CONTENT APPRAISAL
+
+For exploratory purposes, we examined how participants perceived the content of the videos (Information Novelty, Information Clarity, and Information Usefulness; see Figure 4). Their content evaluation of one video correlated with their evaluation of the rest (.328
+
+Figure 4: GPP, Negative Affect, Topic Interest and Content Appraisal means per video, shown for participants 1 SD below vs. 1 SD above baseline Negative Affect. Only Usefulness predicted GPP.
+
+§ 5 DISCUSSION
+
+Because common health Data Videos regularly contain fear-inducing messages, this study explored negative affects (i.e., anxiety, worry, and not being relaxed in response to those messages), specifically about physical activity, sleep, and diet. While we did not find evidence that Data Videos increased the levels of negative affects, the levels of negative affects did predict general persuasive potential. Further, and importantly, neuroticism predicted general persuasive potential: when individuals scored high in neuroticism (in comparison to those who scored low), their willingness to reconsider their health-related behaviors was rather low. How do we tackle this challenge? Can we influence individuals' perception when they score high on neuroticism? Is there a way to persuade neurotic individuals? Our mediation analysis shed some light on this neuroticism-persuasiveness link: the link disappears when we remove the effect of individuals' liking of the videos. That is, when we controlled for their liking of the videos, neuroticism no longer predicted general persuasiveness. This indicates that, in persuading highly neurotic individuals, focusing on improving the likability of the Data Videos should be beneficial. Understanding viewers' preferences and taste based on their personality type could therefore be fruitful for Data Video designers in achieving higher persuasiveness, at least at a perceptual level.
+
+Additionally, we explored the contents of the videos to find general guidelines for designing health-related Data Videos. While our exploration is limited to three aspects of the content (Novelty, Clarity, and Usefulness of the information), we found that perceived usefulness of the information significantly predicted general attitude change. Moreover, our model with neuroticism and perceived usefulness of the information together explained 67% of the variability in general potential attitude change at a significant level. This indicates the importance of considering both the audience (e.g., personality traits) and the video content itself to promote attitude change effectively. Improving the "Perceived Usefulness" of health-related Data Videos appears to be one of the general rules in developing them.
+
+Altogether, we found potential means to approach individuals who are high in neuroticism with Data Videos. While they might be harder to persuade, delivering Data Videos that they like might get us closer to the goal of health-related Data Videos. At the same time, this finding reinforces the argument for personalization of technology: our results indicated that personalizing Data Videos, at least for those who score high on neuroticism, could be useful, and personalization of health-related Data Videos could in fact be essential to alter viewers' perception effectively. On the other hand, regardless of the viewers' personality, focusing on improving perceived usefulness in Data Videos could enhance their general persuasive potential. In sum, we found means to improve general persuasive potential by targeting both the general population and specific individuals.
+
+
+Figure 5: General trends by individual video. Overall, selected videos induced generally comparable effects, implying that although we did not create videos for the study, we sufficiently controlled for the basic quality of the videos.
+
+§ 6 LIMITATIONS AND FUTURE WORK
+
+Due to the COVID-19 pandemic, we were not able to conduct the study in a laboratory setting; instead, we used M-Turk. Although M-Turk gave us the opportunity to recruit participants in a short period of time at a relatively low cost, it compromised the controllability of our study. For the same reason, we relied solely on participants' self-reports of their affective responses and were not able to use physiological measurements to verify the reported affects; future studies with physiological measurements will be useful to validate our findings. Furthermore, the study was limited to negative affective responses related to anxiety, and we acknowledge that the pandemic itself might have affected our results, as participants may have been experiencing higher levels of general anxiety than usual. Moreover, investigating positive affects such as excitement or hope, along with negative affects, would improve our model further. Finally, the stimuli used in this study were not created for its purpose; instead, we systematically selected nine existing videos. While this gave us relatively lower control over the stimuli (e.g., male vs. female voice-over, use of animation, font types), we chose existing Data Videos to help maximize the generalizability of our results. The general data trends by individual video (Fig. 5) show that the central qualities of the videos were consistent across videos, confirming that our video selection was successful.
+
+§ 7 CONCLUSION
+
+We found some evidence that neuroticism is an important trait to consider in personalizing persuasive technology, at least for health-related Data Videos (on physical activity, sleep, and diet) whose goal is to motivate viewers. When individuals score high in neuroticism, they are harder to persuade, but there is a potential solution via personalization. Our results further indicated that negative affects (feelings of anxiety/worry) did not aid the potential for attitude change; it is worth mentioning that our results implied these negative affects could ultimately be attributed to the participants' traits. Altogether, these findings encourage us to consider personality traits in designing Data Videos. As for video attributes, we found that perceived information usefulness was a key factor influencing the persuasive potential of a video. While our model with neuroticism and perceived usefulness of the information predicted general potential for attitude change, numerous other content-related factors remain to be explored in future studies (e.g., video length). We conclude that considering personality traits along with attributes of the video content would be beneficial in developing Data Videos that promote health-related attitude change.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/bK03tED1vj5/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/bK03tED1vj5/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..17048485e4bb02d29a25cfe4ce537381138c0ab2
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/bK03tED1vj5/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,361 @@
+§ ACCOUNTABILITY-AWARE DESIGN OF VOICE USER INTERFACES FOR HOME APPLIANCES
+
+Anonymous Author(s)
+
+§ ABSTRACT
+
+The availability of voice-user interfaces (VUIs) has grown dramatically in recent years. As more capable systems invite higher expectations, the conversational interactions that VUIs support introduce ambiguity in accountability: who (user or system) is understood to be responsible for the outcome of user-delegated tasks. When misconstrued, impact ranges from inconvenience to deadly harm. This project explores how users' accountability perceptions and expectations can be managed in voice interaction with smart home appliances. To explore links between degree of automation, system accountability and user satisfaction, we identified key design factors for VUI design through an exploratory study, articulated them in video prototypes of four new VUI mechanisms showing a user commanding an advanced appliance and encountering a problem, and deployed them in a second study. We found that participants perceived automated systems as more accountable, and were also more satisfied with them.
+
+Index Terms: Human-centered computing-Interface design prototyping; Auditory feedback; User Interface design;
+
+§ 1 INTRODUCTION
+
+Advances in artificial intelligence (AI) are changing how users interact with software agents. AI-infused systems vary in the level of automation they present to users: they can recommend options, make decisions, communicate with other agents, and adapt to their environments [6,62]. A Voice User Interface (VUI) is a type of user interface that relies on speech recognition to communicate with users, usually with a conversational style [15] that resembles natural verbal intercourse, rather than manual clicking or typing.
+
+Assistant-type VUIs are growing in popularity on personal devices. The majority of Americans own a smartphone (81%) or tablet (52%), which today come equipped with Siri, Google Assistant, or equivalent VUIs [61]. One billion devices worldwide running Windows 10 [43] provide access to Cortana, Microsoft's voice assistant. Access is different from use, but devices that exclusively accept voice input, e.g., Amazon Echo and Google Home, are on the rise: over 100 million Alexa-enabled units had been sold as of January 2019 [9]. It is clear that many consumers are newly choosing and trying voice interaction in their everyday life [2]. With this prevalence, we can go beyond 1st-order traits of this modality (hands-free, natural language) to examine factors such as social consequences.
+
+Often, VUI technologies are used to request information or to trigger applications [7]: failure is annoying but not dire. However, as voice recognition technology improves and systems become capable of greater automation, users hold VUI systems to human-like standards of behavior. During VUI conversations with smart devices, users seem to anticipate accountability along similar lines as they would with humans; e.g., Porcheron et al. describe users expecting appropriate responses, both verbal and actionable, from Alexa, and expressing dismay when these are not provided. If the system makes an utterance, they react as they would to a human utterance [52].
+
+However, AI systems that enable automation typically work under uncertainty, balancing false-negative and false-positive errors with potentially confusing and disruptive results [6]. Impact widens as standalone systems become platforms that control many home technologies through the Internet of Things (IoT) [7, 36, 45]. With Google Home, users can adjust lighting and set the thermostat [33], but also interact with systems invoking larger consequences. Smart washing machines can ruin clothes; personal assistant devices can spend money online. A semi-autonomous car can crash and kill in a moment of ambiguity over who is in charge.
+
+So, what happens in the case of a bad outcome? Does the user hold the system responsible, or themselves?
+
+Accountability can be defined as who (e.g., user or system) is obligated or willing to be responsible or accountable for the satisfactory execution of a task, including one that has been delegated [3]. It is fundamental to how people conceptualize their actions and react to outcomes in a social context, by considering risk, uncertainty, and trust when taking or delegating ownership of outcomes [13]. Both a societal and individual concept, accountability differs subtly from responsibility; it is possible to be responsible (in charge) without being accountable if you take action but not ownership of the results.
+
+Can interaction design mediate this balance, when it is important that the user retain accountability for a delegated task? We focus on perceived accountability (hereafter accountability), which varies with user experience and expectations as well as situation (e.g., degree of automation actually available), and therefore can vary by instance [62]. These factors impact a user's perception of system capability [18]. Because of these interlinked perceptions, we posit that through design we can manage user perception of a system's automation, and influence their notion of accountability.
+
+Research Questions: We consider two questions in the context of VUI interaction with advanced home appliances:
+
+RQ1: What design factors impact user perception of system accountability?
+
+RQ2: How does automation influence perceived accountability and user satisfaction? Can interface design mediate this influence?
+
+Approach: An invitation from industry colleagues to investigate user experience with voice-controlled smart appliances led us to consider VUIs in terms of user types, social roles, privacy and value added. In a first exploratory study, accountability perception emerged as an important and understudied factor. We used insights from this exploration to propose primary design factors that can influence accountability during the user's interaction with the system (RQ1).
+
+To go deeper, we studied more carefully how varying the level of system automation could influence user perception of system accountability. Our work is in the tradition of HCI community-proposed guidelines and recommendations for interaction with AI-infused systems. Early work by Norman [48] and Höök [30] sets out guidelines for avoiding unfavourable actions during interaction with intelligent systems, aiming to safeguard the final outcome by managing autonomy and requiring verification. Horvitz proposed a mixed-initiative method to balance automation with direct manipulation by users [29]. While these works discuss cautionary actions to avoid potential problems, their impact on users' perception of accountability in case of failure is unexplored.
+
+We constructed video prototypes [68] featuring different VUI interaction scenarios, as design probes to provoke open dialogues with participants about accountability for each interaction scenario. These prototypes present four types of VUI mechanisms that vary level of automation, all set in a home environment, enabling participants to compare accountability delegation.
+
+Through in-person interviews and online questionnaires, we obtained participants' rankings of accountability and their satisfaction with each mechanism in light of the illustrated task failure. We analyzed these data in relation to the represented system automation.
+
+§ IN CONTRIBUTION, WE:
+
+ * Propose primary design factors of accountability in voice user interactions with complex technology: task complexity, command ambiguity, and user classification.
+
+ * Demonstrate the ability to direct accountability perception through VUI design.
+
+ * Provide insights on the relationship between user satisfaction and system automation.
+
+§ 2 RELATED WORK
+
+§ 2.1 AUTOMATION, INTERACTION & ACCOUNTABILITY
+
+Internet of Things and Smart Home Appliances: IoT technology connects users and environmentally-embedded "smart" objects, from individual gadgets (like smartphones and smartwatches) [31] and communal appliances (smart speakers, thermostats, vacuum robots) to semi-autonomous systems and sensor networks [42, 67]. The exploding number of IoT devices and the complexity of controlling them can negatively impact user attitudes [49]. We note that while considerable smart technology is available today, our study scenarios imbue VUIs with slightly futuristic decision-making ability.
+
+Intelligent user interfaces (IUI): In addition to sensor capacity and the IoT, some smart home appliances benefit from embedded IUIs. IUIs simplify interactions through AI capabilities such as adaptation [32] and the ability to respond to natural commands and queries with apparent social intelligence. Suchman's discussion of situated action highlights the need for context-dependent responses in HCI [63]. Although situated-action models result in versatile and conversational systems, this approach is based on probabilistic behaviour which is prone to unexpected errors.
+
+Task delegation to AI: Studies in AI-human interaction have focused primarily on system capabilities such as reliability and cost of decisions (e.g., [51]). Considering human preferences and perceptions, Lubars et al. summarize the literature on shared control between humans and AI, and propose four factors for AI-task delegation: risk, trust, motivation, and difficulty. Emphasizing human perception, their research supports human-in-the-loop design and a low preference for automation [41]. However, the user's perception of accountability generally does not appear in these studies.
+
+Explainable-accountable AI: For both usability and ethical reasons, algorithmically derived decisions should be explainable [19, 34, 58]. Systems utilizing them should provide accounts of their behaviour and inform users about sensor information, resulting decisions, and likely consequences [8, 20]. Some argue for policies on automated decision making [22], and a few governments have established regulations that require AI systems to provide users with explanations of any algorithmic decisions [24]. Explainable AI enables humans to make sense of machine learning models and understand the rationales behind AI decisions. Abdul et al. reviewed over 12,000 papers from diverse communities on trends in explainable-accountable AI [4]. They highlight a lack of HCI research on practical, usable, and effective AI solutions. Other groups have found that classic UX design principles may be insufficient for AI-infused products, and that we need guidelines specifically for human-AI interaction [6] - a motive of the present work.
+
+Automation and accountability: Previous work on accountability mainly focused on social accountability among humans, and how people justify or explain their judgments [10, 65]. Some, however, investigate accountability during collaborative decision making between a human and an intelligent agent [18, 46, 57]. Skitka et al. show that holding users accountable for their performance reduces automation bias (too much trust in automated decision makers), improves performance and reduces errors [60]. Suchman shows how the agency attributed to a human or a machine is constructed during an interaction [63]. Others have investigated how users negotiate and interpret their agency while interacting with VUIs [38, 59].
+
+However, no study has yet shown how to direct the perception of accountability through design. Accountability research in HCI goes beyond usability and deserves significantly more attention from intelligent user interface designers, including VUI designers.
+
+Control Capabilities: Building on works by Dourish, Button and Suchman [12, 20, 63], Boos et al. propose that users feel they have control over a system based on its "control capabilities" - specifically, when it is transparent, predictable and can be influenced [10]. The authors further suggest that users who feel in control of an interaction are more likely to consider themselves accountable for the outcome. We aim to determine whether users can identify subtle differences in control capabilities, and whether that affects the perceived accountability of the system.
+
+§ 2.2 VOICE-USER INTERFACES
+
+We use Porcheron's definition of a VUI, which specifies interfaces that rely primarily on voice, such as Amazon's Alexa or the Google Assistant [52]. They are always on and can be accessed from room-level distances, which results in them being highly "embedded in the life of the home" compared to other technologies. The quality of their human-centered design is imperative.
+
+The union of VUIs and smart home appliances is largely unexplored despite its promise; e.g., IoT is one of the most frequent VUI command categories that users employ in their daily interactions with home assistance devices [7]. While we see this as a great opportunity to incorporate VUI into home appliance technology, we heed Dourish's advice to "take sociological insights into the heart of the process and the fabric of design" [21].
+
+Our work is distinguished from past efforts on VUI use in everyday life [52] by moving beyond understanding users' perception of accountability to directing it through design.
+
+§ 3 EXPLORATORY STUDY
+
+While some design factors have been identified at the boundary of automation and human interaction (trust, state learning, workload, machine accuracy, etc. [50]), we needed specific insights for semi-automated VUI systems. To answer RQ1, we investigated user experience with VUI products relative to non-VUI-controlled but "smart" home products. We did this through interviews (n = 10) and questionnaires (n = 43), recruiting through social media (Facebook, Twitter). The results, briefly summarized here, motivated using VUIs and suggested where accountability matters most.
+
+§ 3.1 METHODS
+
+Participants: We targeted past purchasers of smart home appliances. Of the 43 questionnaire respondents (20 female, 21 male, 2 unreported), ages ranged from 25 to 55 years; they came from Canada, the USA, Colombia, the UK, China and Australia. All owned or had owned smart home appliances.
+
+Questions: Participants reflected on their experiences with smart home appliances and voice-command technology, compared voice with other input modalities, and considered VUI integration for two hypothetical smart systems: lighting, and a washing machine. They were asked to imagine the functions these systems might fulfill through VUI commands, and explain any concerns.
+
+§ 3.2 RESULTS
+
+Accountability figured strongly in responses, emerging as an under-explored design lever. Results further exposed three factors framing the situational impact of accountability: User Classification, Task Complexity and Command Ambiguity.
+
+Motivations and De-motivations for VUIs: Our participants appreciated VUI speed, convenience (particularly hands-free use), multitasking, shallow learning curve, and natural language. In contrast to human conversations, they wished to minimize interactions. When describing envisioned VUI smart home appliances, we heard that they needed a "machine that can decide for [itself]." [P8] However, they were concerned about unreliability, hesitating to use VUIs for complex tasks with irreversible outcomes, and worried about misinterpretation, likely from prior experience.
+
+Factor I - User Classification: We observed primary users, in charge of choice and maintenance, and secondary users reliant on the primary. Consistent with this, [56] notes that management of home technology is not evenly distributed by gender or across the household.
+
+Factor II - Task Complexity: Participants categorized home appliances mainly by interface complexity, not underlying technology. We thus subsequently focused on home appliances with more complicated UIs and non-trivial consequences of failure.
+
+Factor III - Command Ambiguity: While positive overall, participants cited examples of concern which we categorize as naive access, hidden functionality and open-ended requests. Natural language is inherently ambiguous, requiring the system to make assumptions and decisions, as with human-human interactions. In so doing, accountability can be delegated - important to recognize should something go wrong. We seek design factors that influence this delegation.
+
+§ 4 FRAMEWORK
+
+§ 4.1 ACCOUNTABILITY VIA "CONTROL CAPABILITIES"
+
+We explored how VUIs can affect accountability using Boos et al.'s theoretical framework of Control Capabilities (Section 2.1), which is based on the premise that "in order to answer accountability demands [...], certain requirements of control need to be fulfilled" [10]. We framed our experimental study around an extension of this proposition and framework, seeking to verify or disprove it. To the Transparency and Predictability dimensions proposed by Boos et al. [12] we added Reliability because of its prominence in our exploratory study.
+
+Transparency: Transparency can be achieved through executing clear and understandable actions. Several studies recommend improving transparency by providing explanations about the behaviour of AI-empowered systems [28, 35, 40, 53].
+
+Predictability: Predictability can be obtained by producing desired and anticipated outcomes. Human-AI guidelines suggest two points where interactions with an AI should be shaped: over time and when wrong. They advise that during an interaction, a system should convey updates to users regarding future consequences of the system's behaviour, and support invocation of requests as needed [6].
+
+Reliability (added): Reliability can be achieved through delivery of desired outcomes based on given explanations. A well-studied construct in automated systems, trust is crucial in long-term adoption [27] and key for voice interaction [11]. To invoke trust, we chose reliability: the quality of performing the correct actions.
+
+§ 4.2 USER SATISFACTION AS A METRIC
+
+User satisfaction with automation generally improves with reduced cognitive effort. However, a system can avoid accountability by requesting detail, e.g., by providing choices or asking for confirmation. This increases user involvement, at the potential cost of satisfaction. Measuring user satisfaction as well as accountability perception indicates how well that balance is achieved.
+
+We defined this metric based on principles of measuring customer satisfaction level [39, 55], then designed a questionnaire to assess emotional satisfaction by asking about: (a) overall quality (Attitudinal), (b) the extent to which the user's needs are fulfilled (Affective and Cognitive), and (c) the user's feelings (Affective and Cognitive) [1].
+
+§ 5 PRIMARY STUDY: METHODS
+
+§ 5.1 OVERVIEW AND HYPOTHESES
+
+Since humans can manage accountability in their conversations, we surmise that designers should be able to enable this in human-machine interaction. We hypothesize a correlation between automation level (from fully machine-controlled to fully user-controlled), and system accountability. Our goal was to focus on how the level of automation influences both accountability and user satisfaction, which eventually inform the design of interactive systems.
+
+We chose laundry as our focus task because modern washing machines require more engagement than other appliances (like refrigerators or toasters) and have a plethora of complex settings that can seem impenetrable and can cause confusion and errors. This complexity is what opens possibilities for guiding accountability perception, in a dialogue-type interaction. This also aligned with current industry activity: Samsung and LG have both developed washing machines with voice assistants.
+
+We conducted a controlled experiment (15 survey respondents, of which 8 were also interviewed). Participants watched a series of video sketches [68] showing four levels of automation, where an individual uses a VUI with a smart washing machine for both simple and complex tasks. In every video, the washing machine fails to fulfill the user's expectations, since accountability is relevant primarily when the system fails.
+
+Participants were instructed to imagine themselves as the user. We surveyed their perceptions of the washing machine's accountability for each VUI mechanism to obtain quantitative data, followed by open dialogues on accountability, satisfaction and general thoughts about each scenario.
+
+Our hypotheses address the joint effect of system automation and task complexity on system accountability with the future goal of employing them in balance. We anticipated that:
+
+H1: Increasing users' involvement in decision-making (thereby decreasing system automation) will reduce their perception of system accountability, particularly for high-complexity tasks.
+
+H2: As we increasingly automate task decision-making, user satisfaction will increase.
+
+If these hypotheses are correct, then system automation creates a trade-off between system accountability and user satisfaction. This work explores user perceptions surrounding this trade-off in the context of a VUI interaction. We also investigate the effects of task complexity on accountability and user satisfaction.
+
+§ 5.2 VUI ACCOUNTABILITY-DIRECTING MECHANISMS
+
+To direct users' accountability perception, we conceptualized four VUI mechanisms representing levels of system accountability, based on guidelines for a progression of automation in AI systems [29, 30, 48]: automation, recommendation, instruction, and command. We created video dialogues by following highly cited guidelines [5, 17, 25, 26]. The levels differ primarily in the degree of direct manipulation, automation and information conveyed, and method of information delivery. We captured the mechanisms in walkthrough-style video prototypes for use in the study task.
+
+Automation presents a straightforward workflow: the user requests an outcome and the VUI notifies them of the action to be taken: e.g., after the user states they would like to wash their clothes as quickly as possible, the machine chooses to execute a quick wash cycle. Because of the system's take-charge approach, we anticipate that users will regard this largely as a delegation of accountability to the system. This delegation comes into play in failure cases: for example, when the quick wash cycle does not clean the clothes as effectively as a normal wash would.
+
+Recommendation provides options based on the user's description of the clothes. For example, after the user describes his clothes as colored, made of no special material, and medium load, the mechanism provides two suggestions with different temperatures and spin speeds. The user selects one. We posit that here, the system is accountable for the quality of recommendations, but the user who makes the choice is ultimately responsible for the outcome.
+
+Instruction provides the most information. It guides users in examining their clothes, and based on description and requirements, explains multiple washing suggestions. For example, after suggesting an extra rinse, the machine gives a detailed justification. If the user feels they have enough information, they can stop by saying 'Stop, I choose the first suggestion'. Here, we expect the user to hold the system accountable only for instruction accuracy.
+
+Although users have equivalent choices in Instruction and Recommendation, they differ in presentation of the choices. We expect this to be reflected in the Control Capabilities measures of Transparency, Reliability and Predictability.
+
+Command enables the user to set the washing cycle without any information from the machine. Users simply state their requirements instead of pushing buttons. This implies that the user knows what she wants. With this mechanism, we do not expect the user to hold the system accountable for the outcome.
+
+§ 5.3 DESIGN AND VARIABLES
+
+As we investigate the influence of control capabilities (Section 2.1), we note that rather than being measures of the system's actual controllability by the user, these are guidelines whereby a designer can increase a user's sense of control. They appear in two ways: informing the design of the mechanisms (Section 5.2); and as outcome measures (Section 5.5), confirming whether this design manipulation was impactful.
+
+The study itself uses a within-subject $2 \times 4$ design, with independent variables of complexity level (low/high, described below) and VUI mechanism (4 mechanisms). For each complexity level, we counterbalanced the order of VUI mechanisms.
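
Such counterbalancing of the four mechanism orders is commonly implemented with a balanced Latin square. The sketch below is our own illustration of one standard construction (the function name and labels are ours, not study materials):

```python
def balanced_latin_square(conditions):
    """Presentation orders for an even number of conditions.

    Each condition appears once per row (participant order), once per
    position, and immediately precedes every other condition exactly
    once, balancing first-order carryover effects.
    """
    n = len(conditions)
    # Canonical first-row index pattern: 0, 1, n-1, 2, n-2, ...
    seq, lo, hi = [0], 1, n - 1
    while lo <= hi:
        seq.append(lo)
        if lo != hi:
            seq.append(hi)
        lo, hi = lo + 1, hi - 1
    # Each subsequent row shifts the pattern by one condition.
    return [[conditions[(s + i) % n] for s in seq] for i in range(n)]

mechanisms = ["Automation", "Recommendation", "Instruction", "Command"]
for participant, order in enumerate(balanced_latin_square(mechanisms), 1):
    print(f"P{participant}: {order}")
```

With 15 participants and four orders, each order would be reused roughly three to four times per complexity level.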
+
+Task Complexity: With our exploratory study revealing the importance of task complexity and failure consequences on accountability perception, we varied task complexity for insight into H1 (whether design, via increased user involvement in decisions, can mediate perception of system accountability in high-complexity tasks). Low complexity - Routine laundry, common in a household. High complexity - A job involving special material (wool), extra requirements (stained fabric), and non-standard functions.
+
+In our videos, for the low complexity condition the system attempted to remove mud from clothing; for high complexity, to remove wine stains from a valuable sweater. For all conditions, the washing machine failed to completely clean the clothes.
+
+§ 5.4 PROCEDURE
+
+We recruited homeowners with purchasing power for home appliances, using social media advertising and referral of participants (similar to [37, 44]), necessary due to the inclusion criteria and lack of participant compensation. We recruited a subset of survey participants to be interviewed in person, immediately post-survey. The survey took an average of 30 minutes, and the follow-up interviews 20-45 minutes (average 27 minutes).
+
+Survey participants answered demographic questions and watched eight video prototypes: four distinct mechanisms, each performing a high- and a low-complexity task. Videos were labelled by numerical order of appearance, counterbalanced by participant. Participants were then asked to rank the mechanisms by "how accountable each one was for the failed laundry task". They then scored each mechanism on the Control Capabilities of Transparency, Predictability and Reliability, and on User Satisfaction. We asked the interviewee subset to verbally explain their responses.
+
+§ 5.5 DATA COLLECTED
+
+The pre-video questionnaire (28 questions) collected participants' demographic information and past experience with non-smart washing machines, including whether they tended to hold non-smart washing machines "responsible for failed laundry tasks". The post-video questionnaire collected participants' ratings of satisfaction and Control Capabilities for each video (i.e., VUI mechanism), while the interviews collected qualitative justifications of participants' survey responses.
+
+Accountability Ranking: After watching each mechanism fail to complete the washing task, participants ranked them from 1 (most accountable) to 4 (least accountable), with ties not allowed. Ranking facilitated direct comparisons between short lists of items [47].
+
+Control Capabilities: We sought participant opinions on Control Capabilities (Transparency, Predictability and Reliability) for each VUI mechanism as presented in the video prototypes. As with [14, 23], they scored each mechanism for Control Capabilities using a slider on a [0-100] point scale.
+
+User Satisfaction: Again with a [0-100] point scale and a slider, we asked participants to respond to three questions:
+
+ * How easy would it be to use the voice-assisted system?
+
+ * How confusing was the voice-assisted system?
+
+ * How satisfied would you be with this interaction?
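
As Section 5.6 notes, responses to these three questions were averaged into one satisfaction score. A minimal sketch of such a composite (the reverse-scoring of the negatively worded confusion item is our assumption, not stated in the text):

```python
def satisfaction_score(easy, confusing, satisfied):
    """Composite satisfaction on the study's [0-100] slider scale.

    `confusing` is negatively worded, so we reverse-score it
    (100 - score) before averaging -- an assumption on our part.
    """
    return (easy + (100 - confusing) + satisfied) / 3

# One participant's sliders for a single mechanism (illustrative):
print(satisfaction_score(easy=80, confusing=20, satisfied=70))  # -> 76.66...
```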
+
+§ 5.6 ANALYSIS
+
+Perceived Accountability Rankings: We performed Friedman tests (widely used for ranked data [47, 54]) on the mechanisms' accountability rankings to identify any correlation between accountability perception and system automation (which varied with the VUI mechanism in each video), for each level of task complexity (high or low). In post-hoc analysis, we used Bonferroni correction of confidence intervals to compare accountability rankings by VUI mechanism. For all statistical results, we report significance at $\alpha = 0.05$.
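
This pipeline can be sketched with SciPy on fabricated ranks (the paper's z statistics are consistent with pairwise signed-rank comparisons, but the exact post-hoc test and these data are our assumptions):

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

# Rows: participants; columns: accountability ranks (1 = most accountable)
# for Automation, Recommendation, Instruction, Command. Fabricated data.
ranks = [
    [1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4], [1, 2, 4, 3],
    [2, 3, 1, 4], [1, 2, 3, 4], [3, 1, 2, 4], [1, 2, 3, 4],
]
names = ["Automation", "Recommendation", "Instruction", "Command"]
cols = list(zip(*ranks))  # one tuple of ranks per mechanism

stat, p = friedmanchisquare(*cols)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Post-hoc: pairwise Wilcoxon signed-rank tests, Bonferroni-corrected.
pairs = list(combinations(range(len(names)), 2))
for i, j in pairs:
    _, p_raw = wilcoxon(cols[i], cols[j])
    p_adj = min(1.0, p_raw * len(pairs))
    print(f"{names[i]} vs {names[j]}: adjusted p = {p_adj:.3f}")
```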
+
+Control Capabilities & User Satisfaction: We analyzed each set of [0-100] scores with a repeated-measures ANOVA. Due to a violation of sphericity, we report Greenhouse-Geisser results. Post-hoc analysis included a Bonferroni alpha adjustment. User Satisfaction scores were computed by averaging participant responses to the three questions listed in Section 5.5, which provided a broad depiction of ease of use, clarity and interaction experience.
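
For reference, a one-way repeated-measures ANOVA with the Greenhouse-Geisser correction can be computed from scratch as below (our own illustration with fabricated scores; in practice a library such as pingouin, via `rm_anova` with `correction=True`, does this directly):

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova_gg(X):
    """One-way repeated-measures ANOVA, Greenhouse-Geisser corrected.

    X: (n_subjects, k_conditions) array of scores.
    Returns (F, corrected df1, corrected df2, epsilon, p-value).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj
    df1, df2 = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df1) / (ss_err / df2)

    # Greenhouse-Geisser epsilon from the double-centered covariance
    # matrix; epsilon ranges from 1/(k-1) (maximal violation) to 1.
    S = np.cov(X, rowvar=False, bias=True)
    S_dc = S - S.mean(axis=0)[None, :] - S.mean(axis=1)[:, None] + S.mean()
    eps = np.trace(S_dc) ** 2 / ((k - 1) * (S_dc ** 2).sum())

    p = f_dist.sf(F, eps * df1, eps * df2)
    return F, eps * df1, eps * df2, eps, p

# Fabricated [0-100] scores: 15 participants x 4 mechanisms.
rng = np.random.default_rng(7)
scores = np.clip(rng.normal(60, 15, size=(15, 4)) + [0, -5, 5, 10], 0, 100)
F, cdf1, cdf2, eps, p = rm_anova_gg(scores)
print(f"F({cdf1:.3f}, {cdf2:.3f}) = {F:.2f}, eps = {eps:.3f}, p = {p:.4f}")
```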
+
+Interviews: We used Braun and Clark's approach for thematic analysis [16]. In repeated passes, two investigators conducted open coding. Afterwards, two other team members checked the coding and brought disagreements to the full team for resolution. This division provided a broader perspective, deepened our understanding and generated multiple discussions around each theme.
+
+§ 6 RESULTS
+
+We recruited 15 survey participants (10 male, 5 female; age M = 34.97, SD = 7.86). Of these, we interviewed 8 (3 male, 5 female). Participants were from various ethnic backgrounds but all lived in North America at the time of recruitment.
+
+§ 6.1 QUANTITATIVE RESULTS: QUESTIONNAIRES
+
+Pre-Questionnaire Data: We surveyed participants on their past experiences with household technology, including their technology roles within their households. In our exploratory study, we identified two role classifications. Primary users are enthusiastic about initial setup and ongoing maintenance of home technology; we designated other users as secondary. All respondents indicated they had purchasing power within their households.
+
+About 73% of participants reported enthusiasm for exploring new features on their smart home appliances, and took responsibility for configuring home technology. This suggests that the majority of the participants were primary technology users by our definition.
+
+As an assessment of how participants related the notion of "accountability" to washing machines, we asked where they placed the blame when a non-smart washing machine damaged their clothes. 60% had had that experience and "mostly" or "completely" blamed their washing machine. This seems to dispel the notion that perceived accountability skews towards the self in such situations.
+
+Figure 1: Average responsibility rankings of the experiment's VUI mechanisms by task complexity, for the question "How accountable (responsible) is the system if something goes wrong?" Rank 1 (greatest) to 4. Error bars are standard error of the mean.
+
+Table 1: Relative VUI Mechanism Accountability (Bonferroni-adjusted).
+
+| Mechanisms Compared | Low-Complexity | High-Complexity |
+| --- | --- | --- |
+| Command-Automation | $z = -4.10, p < 0.001*$ | $z = -3.25, p = 0.007*$ |
+| Command-Recommendation | $z = -2.97, p = 0.018*$ | $z = -2.97, p = 0.018*$ |
+| Command-Instruction | $z = -2.83, p = 0.02*$ | $z = -2.546, p = 0.065$ |
+| Automation-Recommendation | $z = -1.131, p = 1.0$ | $z = -0.283, p = 1.0$ |
+| Automation-Instruction | $z = -1.273, p = 1.0$ | $z = -0.707, p = 1.0$ |
+| Recommendation-Instruction | $z = -0.141, p = 1.0$ | $z = -0.424, p = 1.0$ |
+
+Perceived Accountability Rankings: Figure 1 shows participants' rankings of mechanism accountability for the portrayed outcome.
+
+Friedman tests for each task complexity found automation level (varied through mechanism type) statistically significant for accountability ranking: for low-complexity tasks, $\chi^2(3, N = 15) = 18.28, p < 0.001*$; for high-complexity tasks, $\chi^2(3, N = 15) = 13.32, p = 0.004*$. Post-hoc analysis (Table 1) with Bonferroni correction of confidence intervals found that for both high- and low-complexity tasks, Command had significantly lower accountability than Automation and Recommendation. For low-complexity tasks, Command also had significantly lower accountability than Instruction.
+
+Control Capability scores: We analyzed participant scores for each mechanism in the Control Capability (CC) dimensions of Transparency, Predictability and Reliability. Differences between CC scores for the Recommendation and Instruction mechanisms (Figure 2) suggest that option delivery impacts experience of control over the interaction. For example, though Recommendation and Instruction offer similar choices to users, Recommendation was consistently seen as less transparent, predictable and accountable.
+
+Figure 2 reports average ratings for CC dimensions by VUI mechanism; it shows a trend suggesting that increased automation is linked to reduced perceived transparency and predictability. We found statistical significance only for predictability for the high complexity task. However, in our post-hoc test with Bonferroni alpha adjustment, we were not able to find any statistical significance between specific mechanisms for predictability.
+
+Accountability and user satisfaction: The trend of the average satisfaction scores in Figure 3 suggests that participants preferred the Automation mechanism to those requiring more user involvement, for both low- and high-complexity tasks. Participants also reported higher satisfaction with Instruction and Command for high task complexity. However, we did not find statistical significance for either task (High Complexity: F(2.075, 29.05), p = 0.576; Low Complexity: F(1.885, 26.391), p = 0.258).
+
+Figure 2: Participant scores (1-100) on control capabilities of VUI mechanisms (15 samples/bar). Error bars are standard error of the mean.
+
+Table 2: ANOVA results for Control Capabilities and User Satisfaction.
+
+| Ctrl Capability | Low Complexity | High Complexity |
+| --- | --- | --- |
+| Transparency | $F(1.848, 25.877), p = 1$ | $F(1.887, 26.423), p = 0.314$ |
+| Predictability | $F(1.514, 21.193), p = 0.079$ | $F(1.94, 27.164), p = 0.028*$ |
+| Reliability | $F(1.917, 26.834), p = 0.305$ | $F(2.044, 28.616), p = 0.275$ |
+| Satisfaction | $F(1.885, 26.391), p = 0.258$ | $F(2.075, 29.05), p = 0.576$ |
+
+Greenhouse-Geisser results are presented due to the violation of sphericity. A post-hoc test with Bonferroni alpha adjustment was not significant for predictability.
+
+§ 6.2 QUALITATIVE RESULTS: INTERVIEWS
+
+Eight questionnaire participants (4 female), selected through snowball sampling [66], were interviewed (Section 5.5). All were adults living with others, self-identified as primary or secondary users of home appliances, and had purchasing power for home appliances in their households. No compensation was provided.
+
+In the following we organize our analysis of the interview transcripts as laid out in Section 4. Mechanism names here replace the numerical labels that participants used to refer to the videos.
+
+When asked to revisit their ranking of accountability across the VUI mechanisms, the majority of participants identified Full Automation as the most accountable. However, this was not unanimous: P3 suggested all mechanisms were "completely accountable", and P8 found the Recommendation mechanism most accountable. These individual variations further justify the use of Control Capabilities to identify design factors that contribute to accountability.
+
+Automation is seen as most accountable: Automation was deemed by the majority of participants (both primary and secondary) as the most accountable, because the machine gives minimal information and selects the washing cycle by itself. "Not given a choice" [P4] and "machine do[es] whatever... it feels the best" [P7] are reasons that participants ranked the automation mechanism as most accountable.
+
+'I think in [Automation], the machine should take the most responsibility since it makes all the decisions...’[P2]
+
+Figure 3: Average user satisfaction score (1-100) for low complexity and high complexity tasks, by VUI mechanism. Error bars: standard error of the mean.
+
+Shared accountability: Recommendation and Instruction were viewed as generating a sense of shared accountability, by giving suggestions for participants to choose from. Participants are still accountable for the final decision: if "something goes wrong, it should be your fault instead of the machine" [P5]. However, "maybe the user will blame the machine for giving [...] the wrong recommendation" [P6] in the case of an error. Some suggested that since the system "understands the situation [...] it's more accountable" [P7].
+
+§ CONTROL CAPABILITY DIMENSIONS:
+
+A. Transparency - Participants generally agreed that all mechanisms were transparent enough for them to understand the interactions. As shown in Figure 2, most participants rated and mentioned Command and Instruction as "more transparent" [P3, P4, P5, P7]. Some viewed Command as "the most transparent" [P4], since the user has complete control in this traditional method of doing laundry.
+
+'...To me transparency relates to the extent to which they use or understand what's going on within the machine, so in video number one that I mentioned a machine is pretty much making the selections on behalf of the user [...] so the user doesn't really know what's going on. Whereas [Full Automation] for instructions, the machine just told me what the user is saying so the user one hundred percent still all the time what's going on... '[P7]
+
+Instruction was also viewed as transparent since it provides detailed information and participants understand the procedure.
+
+'... the [instruction-based] with lots of details. Also, it's transparent. Although it's a bit annoying but it is transparent... '[P4]
+
+Some responses seem to indicate that the participants found the amount of information excessive. Additionally, some participants found Automation clearer since the interaction process was less complex and the "machine takes care of everything" [P7].
+
+B. Predictability - Participants tended to consider Instruction as predictable since it is "the most specific" [P6] and it "give[s] explanations" [P1]. Participants claimed that they "trust it most" [P1] and described it as an expert guide:
+
+'It's so smart, the machine acts as a teacher to teach you like, uh, what should you do? I don't have to worry about anything. He just tells you everything... ' [P5]
+
+The Command mechanism received high predictability ratings. Some participants suggested that the user in the video must have been familiar with the system already:
+
+'[Command based mechanism]... cause the user knows what he or she wants and maybe that's because he/she has already tried it before. Then for the [instruction based mechanism] because you have all the descriptions of the options the results would also be predictable'. [P7]
+
+C. Reliability - Participants also tended to view Instruction as the most reliable for both simple and complicated tasks, since it gathers the most information and "seems to know a lot" [P1, P2].
+
+'[...] if something goes wrong, it should be your fault instead of the machine. Because the machine let you know all the consequences before.'[P5]
+
+Interviewer: 'Why do you think that the instruction-based mechanism is more reliable?' ... '... I believe that the machine knows what it is doing because it has all the information about the laundry process that I don't.' [P4]
+
+Satisfaction: Participants' satisfaction with the interaction depended on both the VUI mechanism and task complexity. The "concise [..] and very quick" [P3] Automation mechanism was the most satisfying for some, especially for routine tasks because "you don't have to think about what you're doing" [P6].
+
+For complex, critical or high-stakes washing tasks such as "really expensive clothes" [P6], the "detailed instructions" [P3] of instruction-based mechanisms were considered more satisfying. Participants appreciated additional information when the perceived cost of error was high (e.g., damaging expensive clothes).
+
'... it depends on what you're trying to wash like if I'm gonna wash really expensive clothes that I cannot mess up. Uh, then I would have done the third one because whatever I don't know what to do, it tells me exactly what's the stuff to take separate the clothes and all that stuff.' [P6]
+
+§ 7 DISCUSSION
+
This study demonstrates a difference in accountability between our designed mechanisms. The results of both qualitative and quantitative analysis support our first hypothesis of a positive relationship between system automation and accountability. This relationship is well represented by P2's interview response that "...the machine should take the most responsibility since it makes all the decisions." A similar trend has been reported by Sheridan et al.: "... individuals using the [automated] system may feel that the machine is in complete control, disclaiming personal accountability for any error or performance degradation" [57].
+
Though our results showed that Automation bears the highest accountability, two participants provided insights on other potential design factors that affect accountability. P5 argued that Instruction should be accountable when the system fails because users "wasted" their time listening to its verbose instructions. P6 indicated the importance of claims about the system: "Actually that really depends on what the machine says it can do, you know if it says like 'I'm gonna be able to distinguish colour clothes from regular clothes and I'm not gonna mess up.' and it messes up then it's the machine's fault." Setting realistic expectations about a system's abilities may help manage its accountability.
+
Our results echoed Suchman's prediction that automation can lead to shared accountability between humans and machines [64], and further showed that the level of automation impacts perceived accountability.
+
+Users did not experience the washing machines firsthand, a limitation imposed by the state of technology. However, participants empathized with the common experience of doing laundry sufficiently to report a projected level of satisfaction with the interaction. They showed no difficulty in bridging the gap between experienced and imagined scenarios, making comments such as "there is common ground between me and the machine" [P3].
+
Our results suggest that task complexity does influence user satisfaction. Our qualitative analysis made it clear that for the high-complexity laundry task, participants were more willing to ask for guidance and more likely to include the VUI agent in the decision-making process compared to the low-complexity task. However, they might prefer Command or Automation once they become comfortable with the system. Multiple users expressed the desire to transition to command-based systems once they had learned about the washing machine's hidden functionality through the instruction-based or recommendation-based mechanisms. This key finding motivates guided VUIs during first-time, naive use, which could be invaluable for secondary users of home appliances.
+
+The qualitative results also support our quantitative analysis outcomes. Some participants stated that only experienced users who found the washing machine predictable would use the command-based mechanism. This may have contributed to an inflated predictability rating for the command-based mechanism. Though results suggest Automation is perceived as the most accountable, and Command the least, it is difficult to make a conclusive judgment on shared accountability for Recommendation and Instruction. Each of these mechanisms was scored differently in one Control Capability; however, the difference was not significant.
+
+§ 7.1 IMPLICATIONS FOR DESIGN
+
+We encapsulate these findings in VUI design recommendations:
+
1. Accountability-aware design must consider context, specifically task complexity, type of user(s), and ambiguity of the interactions.
+
+2. Automation has opposing effects on accountability and user satisfaction. A highly automated system may be satisfying to use, but in case of failure, users are more likely to find it blameworthy. Designers should consider this duality in the unique context of their product and its anticipated use.
+
3. User perceptions of Reliability, Transparency and Predictability depend both on the available choices and on how those choices are presented, particularly for high-complexity tasks. Designers should consider providing justification for system-presented choices, especially for high-stakes tasks.
+
+4. Results suggest that detailed instruction- and recommendation-based mechanisms improve learnability, but could eventually be too repetitive. Designers should consider transitional mechanisms, in which system operation gradually provides less explanation and becomes more automated.
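The transitional mechanism proposed in recommendation 4 can be sketched as a small state machine. The sketch below is a hypothetical illustration, not part of any system we studied: the `Mechanism` levels mirror the guided mechanisms we compared, and the success-streak threshold is invented for demonstration. The controller reduces explanation (raises automation) after repeated unassisted successes and steps guidance back up after a failure.

```python
from enum import IntEnum

class Mechanism(IntEnum):
    # Ordered by increasing automation / decreasing explanation,
    # loosely mirroring the mechanisms compared in the study.
    INSTRUCTION = 0     # full step-by-step guidance
    RECOMMENDATION = 1  # suggests options, user decides
    AUTOMATION = 2      # system decides with minimal explanation

class TransitionalVUI:
    """Hypothetical controller that raises the automation level as the
    user demonstrates familiarity with the appliance."""

    def __init__(self, successes_per_step: int = 5):
        self.level = Mechanism.INSTRUCTION  # naive users start fully guided
        self.successes_per_step = successes_per_step
        self._streak = 0

    def record_outcome(self, success: bool) -> Mechanism:
        # A streak of successful tasks earns less explanation;
        # any failure resets the streak and restores some guidance.
        if success:
            self._streak += 1
            if (self._streak >= self.successes_per_step
                    and self.level < Mechanism.AUTOMATION):
                self.level = Mechanism(self.level + 1)
                self._streak = 0
        else:
            self._streak = 0
            if self.level > Mechanism.INSTRUCTION:
                self.level = Mechanism(self.level - 1)
        return self.level
```

Whether such gradual hand-off actually improves learnability without eroding trust is exactly the kind of question the longitudinal study proposed in Section 8 would need to answer.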
+
+§ 8 CONCLUSIONS AND FUTURE WORK
+
+We investigated the concept of accountability in home appliance VUIs. We examined automation level as a parameter that could impact accountability delegation, by designing and studying four mechanisms which varied automation and user involvement in decision-making, in simple and complex tasks.
+
Our primary study sought to characterize differences between these mechanisms. We found that our use of video prototypes provided a successful basis for initial discussions of design concepts, yielding non-obvious insights.
+
Qualitative and quantitative results support our first hypothesis of a positive relationship between automation and accountability, which held for both high- and low-complexity tasks.
+
Concerning our second hypothesis (that system automation increases user satisfaction), the quantitative result (N = 15) was not statistically significant, but trended towards users preferring the most automated system. Interviews consistently supported H2 in that increased user involvement reduced satisfaction. This creates a dilemma for designers of automated systems, who must minimize users' cognitive load without saddling the system with complete accountability for errors.
+
+We found participants more receptive to instructions and recommendations when they were concerned about the outcome of a process, or when they first used a system. We recommend that VUI designers implement guided interfaces as well as command-based ones. This gives users the freedom to transition from guided use to command-based use without leaving the system (and its designers) accountable for mistakes.
+
Future Work: From this foundation, we recommend next steps. Sample Size - Our study size was appropriate for this early stage of investigation, revealing clear trends that users' perceptions of accountability can be directed, and justifying greater investment in more realistic study approaches. However, increasing the size and diversity of even this exploratory approach would provide higher statistical power and more significant quantitative insights.
+
+Mechanism design - We examined four distinct mechanisms in isolation. As suggested in Section 7.1, we propose a mechanism that adjusts its automation as the user becomes more familiar with the device. The benefits of such a mechanism would need to be confirmed in a longitudinal study.
+
+Metrics - User satisfaction is a volatile metric. In this study it is especially so because participants did not interact with a physical prototype. To minimize this limitation on user empathy, we assessed common moderate failure outcomes instead of complete failures (i.e., a stained rather than a destroyed shirt). We will have more realistic results when users can reflect on their satisfaction level by observing the laundry process outcome on their own clothes.
+
Real results - A functional VUI system and, separately, a machine that truly enacts its instructions would make the participants' experience more realistic and their responses more reliable. A real system would succeed more often than fail, unlike our scenarios, which were designed to fit a short study session. A longitudinal study within real homes in actual use would let us follow the development of trust and familiarity over time.
\ No newline at end of file