Columns: text (string, lengths 17 to 3.36M) · source (string, lengths 3 to 333) · __index_level_0__ (int64, 0 to 518k)
To watch 360° videos on normal 2D displays, we need to project the selected part of the 360° image onto the 2D display plane. In this paper, we propose a fully-automated framework for generating content-aware 2D normal-view perspective videos from 360° videos. In particular, we focus on the projection step, which should preserve important image contents and reduce image distortion. Our projection method is based on the Pannini projection model. First, salient contents such as linear structures and salient regions in the image are preserved by optimizing a single Pannini projection model. Then, multiple Pannini projection models at salient regions are interpolated to suppress image distortion globally. Finally, temporal consistency of the image projection is enforced to produce temporally stable normal-view videos. Our proposed projection method does not require any user interaction and is much faster than previous content-preserving methods. It can be applied not only to images but also to videos, taking the temporal consistency of the projection into account. Experiments on various 360° videos show the superiority of the proposed projection method quantitatively and qualitatively.
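The optimization and interpolation stages are specific to the paper, but the underlying Pannini mapping is standard. A minimal sketch of that base projection (the function name and demo values are ours; `d` is the model parameter the paper optimizes):

```python
import numpy as np

def pannini_project(lon, lat, d=1.0):
    # Map a sphere direction (longitude, latitude in radians) to the 2D
    # image plane with the Pannini projection; d=0 reduces to rectilinear,
    # d=1 is the classic Pannini model.
    S = (d + 1.0) / (d + np.cos(lon))  # scale from the shifted projection center
    return S * np.sin(lon), S * np.tan(lat)

# Pannini keeps verticals straight: points sharing a longitude map to one x.
lon = np.full(5, np.deg2rad(40.0))
lat = np.deg2rad(np.linspace(-30.0, 30.0, 5))
x, y = pannini_project(lon, lat)
print(np.round(np.column_stack([x, y]), 3))
```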
Automatic Content-aware Projection for 360° Videos
10,100
This paper deals with a kind of design of a ruled surface. It combines concepts from the fields of computer aided geometric design and kinematics. A dual unit spherical Bézier-like curve on the dual unit sphere (DUS) is obtained with respect to the control points by a new method. Then, with the aid of Study's transference principle [1], a dual unit spherical Bézier-like curve corresponds to a ruled surface. Furthermore, closed ruled surfaces are determined via control points, and integral invariants of these surfaces are investigated. The results are illustrated by examples.
On the Design and Invariants of a Ruled Surface
10,101
This paper introduces a general method to approximate the convolution of an arbitrary program with a Gaussian kernel. This process has the effect of smoothing out a program. Our compiler framework models intermediate values in the program as random variables, using mean and variance statistics. Our approach breaks the input program into parts and relates the statistics of the different parts under the smoothing process. We give several approximations that can be used for the different parts of the program. These include the approximation of Dorn et al., a novel adaptive Gaussian approximation, Monte Carlo sampling, and compactly supported kernels. Our adaptive Gaussian approximation is accurate up to the second order in the standard deviation of the smoothing kernel, and mathematically smooth. We show how to construct a compiler that applies chosen approximations to given parts of the input program. Because each expression can have multiple approximation choices, we use a genetic search to automatically select the best approximations. We apply this framework to the problem of automatically bandlimiting procedural shader programs. We evaluate our method on a variety of complex shaders, including shaders with parallax mapping, animation, and spatially varying statistics. The resulting smoothed shader programs outperform previous approaches both numerically and aesthetically, due to the smoothing properties of our approximations.
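The compiler machinery is elaborate, but the core mean-variance idea is easy to illustrate. A toy sketch with our own class and names, assuming independent Gaussian inputs: smoothing f(x) = x² with a Gaussian kernel of standard deviation σ, evaluated at x = μ, equals E[X²] = μ² + σ² for X ~ N(μ, σ²), and the exact Gaussian fourth-moment identity gives the propagated variance:

```python
import numpy as np

class RV:
    # An intermediate program value, tracked by mean/variance statistics.
    def __init__(self, mean, var=0.0):
        self.mean, self.var = mean, var

    def __add__(self, other):  # X + Y, assuming independence
        return RV(self.mean + other.mean, self.var + other.var)

    def square(self):          # X^2 for Gaussian X: exact moments
        m, v = self.mean, self.var
        return RV(m * m + v, 2.0 * v * v + 4.0 * m * m * v)

# Band-limit f(x) = x^2 at x = 0.5 with a kernel of std 0.2:
x = RV(mean=0.5, var=0.2 ** 2)
print(x.square().mean)         # 0.29 = 0.5^2 + 0.2^2, the smoothed value
```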
Approximate Program Smoothing Using Mean-Variance Statistics, with Application to Procedural Shader Bandlimiting
10,102
QuickCSG computes the result for general N-polyhedron Boolean expressions without an intermediate tree of solids. We propose a vertex-centric view of the problem, which simplifies the identification of final geometric contributions and facilitates its spatial decomposition. The problem is then cast as a single KD-tree exploration, geared toward the result by early pruning of any region of space not contributing to the final surface. We assume strong regularity properties on the input meshes and that they are in general position. This simplifying assumption, in combination with our vertex-centric approach, improves the speed of the approach. Complemented with a task-stealing parallelization, the algorithm achieves breakthrough performance, with one- to two-order-of-magnitude speedups with respect to state-of-the-art CPU algorithms, on Boolean operations over two to dozens of polyhedra. The algorithm also outperforms GPU implementations with approximate discretizations, while producing an output without redundant facets. Despite the restrictive assumptions on the input, we show the usefulness of QuickCSG for applications with large CSG problems and strong temporal constraints, e.g. modeling for 3D printers, reconstruction from visual hulls, and collision detection.
QuickCSG: Fast Arbitrary Boolean Combinations of N Solids
10,103
We present a novel extension of the path tracing algorithm that is capable of treating highly scattering participating media in the presence of fluorescent structures. The extension is based on the formulation of the full radiative transfer equation when solved on a per-wavelength basis, resulting in an accurate model and an unbiased algorithm for rendering highly scattering fluorescent participating media. The model accounts for the intrinsic properties of fluorescent dyes, including their absorption and emission spectra, molar absorptivity, quantum yield, and concentration. Our algorithm is applied to render highly scattering isotropic fluorescent solutions under different illumination conditions. The spectral performance of the model is validated against emission spectra of different fluorescent dyes that are of significance in spectroscopy.
A Physically Plausible Model for Rendering Highly Scattering Fluorescent Participating Media
10,104
The game and movie industries always face the challenge of reproducing materials. This problem is tackled by combining illumination models and various textures (painted or procedural patterns). Generating stochastic wall patterns is crucial in the creation of a wide range of backgrounds (castles, temples, ruins...). A specific Wang tile set was previously introduced to tackle this problem in a non-procedural fashion, but long lines may appear as visual artifacts. We use this tile set in a new procedural algorithm to generate stochastic wall patterns. For this purpose, we introduce specific hash functions implementing a constrained Wang tiling. This technique makes possible the generation of boundless textures while giving control over the maximum line length. The algorithm is simple and easy to implement, and the wall structure we get from the tiles allows us to achieve visuals that reproduce all the small details of artist-painted walls.
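The essence of hash-based procedural Wang tiling is that a tile's edge colors come from hashing the shared lattice edges, so neighboring tiles agree by construction. A minimal sketch with our own hash (illustrative only; the paper's constrained hash functions additionally bound the line lengths):

```python
def hash2(i, j, seed=0):
    # Small integer hash on lattice coordinates (illustrative, not the
    # paper's constrained hash functions).
    h = (i * 73856093) ^ (j * 19349663) ^ (seed * 83492791)
    h = (h ^ (h >> 13)) * 0x5bd1e995
    return (h ^ (h >> 15)) & 0x7fffffff

def tile_at(i, j, n_colors=3):
    # Edge colors of cell (i, j). Neighboring cells hash the shared edge,
    # so adjacent tiles always match: the Wang tiling constraint.
    north = hash2(i, j + 1, seed=1) % n_colors
    south = hash2(i, j, seed=1) % n_colors
    west = hash2(i, j, seed=2) % n_colors
    east = hash2(i + 1, j, seed=2) % n_colors
    return north, east, south, west

print(tile_at(0, 0), tile_at(1, 0))  # east of (0,0) == west of (1,0)
```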
Procedural Wang Tile Algorithm for Stochastic Wall Patterns
10,105
We present a web application for the procedural generation of perturbations of 3D models. We generate the perturbations with vertex shaders that change the positions of the vertices that make up the 3D model. The vertex shaders are created with an interactive genetic algorithm, which displays to the user the visual effect caused by each vertex shader, allows the user to select the visual effect they like best, and produces a new generation of vertex shaders using the user feedback as the fitness measure of the genetic algorithm. We use genetic programming to represent each vertex shader as a computer program. This paper presents details of the requirements specification, software architecture, high- and low-level design, and prototype user interface. We discuss the project's current status and development challenges.
Interactive Shape Perturbation
10,106
NURBS curves are widely used in Computer Aided Design and Computer Aided Geometric Design. When a single weight approaches infinity, the NURBS curve tends to the corresponding control point. In this paper, a kind of control structure of a NURBS curve, called the regular control curve, is defined. We prove that the limit of the NURBS curve is exactly its regular control curve when all weights approach infinity, where each weight is multiplied by a certain one-parameter function tending to infinity, different for each control point. Moreover, some representative examples are presented to show this property and indicate its application to shape deformation.
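The classical single-weight limit that the paper generalizes is easy to see numerically. A small sketch (a rational Bézier curve rather than a full NURBS evaluator, with our own demo values): as the middle weight grows, the curve point is pulled onto the middle control point.

```python
import numpy as np
from math import comb

def rational_bezier(P, w, t):
    # Evaluate a rational Bezier curve with control points P and weights w.
    n = len(P) - 1
    B = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return (B * w) @ P / (B * w).sum()

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
w = np.array([1.0, 1.0, 1.0])
for wi in (1.0, 10.0, 1e6):      # crank up the middle weight
    w[1] = wi
    print(wi, rational_bezier(P, w, t=0.5))   # approaches P[1] = (1, 2)
```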
Degenerations of NURBS curves while all of weights approaching infinity
10,107
Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians find their way in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, as is the wayfinding design of a real-world architectural site. The many possible navigation scenarios, as well as the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed, considering human agents' visibility and possibility of making mistakes during navigation. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts such as a train station, a downtown area, and a canyon. We evaluate our results by comparing different wayfinding designs and show that our optimized wayfinding design can guide pedestrians to their destinations effectively and efficiently. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.
Way to Go! Automatic Optimization of Wayfinding Design
10,108
We live in a 3D world, performing activities and interacting with objects in indoor environments every day. Indoor scenes are the most familiar and essential environments in everyone's life. In the virtual world, 3D indoor scenes are also ubiquitous in 3D games and interior design. With the fast development of VR/AR devices and emerging applications, the demand for realistic 3D indoor scenes keeps growing rapidly. Currently, designing detailed 3D indoor scenes requires proficient 3D design and modeling skills and is often time-consuming. For novice users, creating realistic and complex 3D indoor scenes is even more difficult and challenging. Many efforts have been made in different research communities, e.g. computer graphics, vision, and robotics, to capture, analyze, and generate 3D indoor data. This report mainly focuses on the recent research progress in graphics on geometry, structure, and semantic analysis of 3D indoor data and on different modeling techniques for creating plausible and realistic indoor scenes. We first review works on understanding and semantic modeling of scenes from captured 3D data of the real world. Then, we focus on virtual scenes composed of 3D CAD models and study methods for 3D scene analysis and processing. After that, we survey various modeling paradigms for creating 3D indoor scenes and investigate human-centric scene analysis and modeling, which bridges indoor scene studies of graphics, vision, and robotics. Finally, we discuss open problems in indoor scene processing that might interest graphics and all related communities.
Analysis and Modeling of 3D Indoor Scenes
10,109
Colorization of gray-scale images relies on prior color information. Exemplar-based methods use a color image as the source of such information. The colors of the source image are then transferred to the gray-scale image. In the literature, this transfer is mainly guided by texture descriptors. Face images usually contain little texture, so the common approaches frequently fail. In this paper we propose a new method based on image morphing. This technique is able to compute a correspondence map between images with similar shapes. It is based on the geometric structure of the images rather than their textures, which is more reliable for faces. Our numerical experiments show that our morphing-based approach clearly outperforms state-of-the-art methods.
Examplar-Based Face Colorization Using Image Morphing
10,110
We present a novel approach enabling interactive visualization of volumetric Locally Refined B-splines (LR-splines). To this end, we propose a highly efficient algorithm for direct visualization of scalar and vector fields given by an LR-spline. In both cases, our main contribution to achieving interactive frame rates is an acceleration structure for fast element look-up and a change of basis for efficient evaluation. To further improve efficiency, we present a heuristic for an adaptive sampling distance for the numerical integration, and a comparison with existing adaptive approaches is performed. The algorithms are designed to fully utilize modern graphics processing unit (GPU) capabilities. Important applications where LR-spline volumes arise include the approximation of large-scale simulation and sensor data, and Isogeometric Analysis (IGA). We showcase interactive rendering achieved by our approach on different representative use cases, stemming from simulations of wind flow around a telescope, Magnetic Resonance (MR) imaging of a human brain, and simulations of a fluidized bed used for mixing and coating particles in industrial processes.
Direct interactive visualization of locally refined spline volumes for scalar and vector fields
10,111
Human motions (especially dance motions) are very noisy, and it is hard to analyze and edit them. To resolve this problem, we propose a new method to decompose and modify the motions using the Hilbert-Huang transform (HHT). First, the HHT decomposes a chromatic signal into "monochromatic" signals, the so-called Intrinsic Mode Functions (IMFs), using Empirical Mode Decomposition (EMD) [6]. After applying the Hilbert transform to each IMF, the instantaneous frequencies of the "monochromatic" signals can be obtained. The HHT has the advantage over the FFT or wavelet transform of being able to analyze non-stationary and nonlinear signals such as human joint motions. In the present paper, we propose a new framework to analyze and extract new features from the dance motions of a famous Japanese three-member pop group called "Perfume", and compare them with waltz and salsa dances. Using the EMD, their dance motions can be decomposed into motion (choreographic) primitives, or IMFs. We can therefore scale, combine, subtract, exchange, and modify those IMFs, and can blend them into new dance motions self-consistently. Our analysis and framework can lead to a motion editing and blending method to create a new dance motion from different dance motions.
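EMD itself is an iterative sifting procedure, but the Hilbert step that turns one extracted IMF into instantaneous frequencies is compact. A sketch on a synthetic chirp standing in for an IMF (the sampling rate and signal are our own choices):

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                    # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (3.0 * t + t ** 2))  # chirp standing in for an IMF

analytic = hilbert(imf)                       # analytic signal of the mode
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi) # instantaneous frequency (Hz)
print(inst_freq[50:55])                       # sweeps upward from ~3 Hz
```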
Nonlinear dance motion analysis and motion editing using Hilbert-Huang transform
10,112
The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer graphics and geometry processing applications. In particular, Laplacian eigenbases allow generalizing the classical Fourier analysis to manifolds. A key drawback of such bases is their inherently global nature, as the Laplacian eigenfunctions carry geometric and topological structure of the entire manifold. In this paper, we introduce a new framework for local spectral shape analysis. We show how to efficiently construct localized orthogonal bases by solving an optimization problem that in turn can be posed as the eigendecomposition of a new operator obtained by a modification of the standard Laplacian. We study the theoretical and computational aspects of the proposed framework and showcase our new construction on the classical problems of shape approximation and correspondence. We obtain significant improvement compared to classical Laplacian eigenbases as well as other alternatives for constructing localized bases.
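In the paper, the localized basis arises from eigendecomposing a modified Laplacian; one common form of such a modification adds a potential that penalizes energy outside a region of interest. A toy 1D sketch of that idea (a path-graph Laplacian stands in for a mesh's cotangent Laplacian, and we omit the paper's additional orthogonality constraints):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 200  # toy "mesh": a path graph with a combinatorial Laplacian
L = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1]).tolil()
L[0, 0] = L[-1, -1] = 1.0                    # Neumann-like ends
region = np.zeros(n); region[60:120] = 1.0   # where the basis should live
mu = 100.0                                   # localization strength
H = L.tocsr() + mu * sp.diags(1.0 - region)  # penalize energy outside region

vals, vecs = eigsh(H, k=6, sigma=0, which="LM")  # lowest-energy modes
print(np.abs(vecs[:60, 0]).max(), np.abs(vecs[60:120, 0]).max())
# the first mode is supported almost entirely inside the region
```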
Localized Manifold Harmonics for Spectral Shape Analysis
10,113
In geometry processing, smoothness energies are commonly used to model scattered data interpolation, dense data denoising, and regularization during shape optimization. The squared Laplacian energy is a popular choice of energy and has a corresponding standard implementation: squaring the discrete Laplacian matrix. For compact domains, when values along the boundary are not known in advance, this construction bakes in low-order boundary conditions. This causes the geometric shape of the boundary to strongly bias the solution. For many applications, this is undesirable. Instead, we propose using the squared Frobenius norm of the Hessian as a smoothness energy. Unlike the squared Laplacian energy, this energy's natural boundary conditions (those that best minimize the energy) correspond to meaningful high-order boundary conditions. These boundary conditions model free boundaries where the shape of the boundary should not bias the solution locally. Our analysis begins in the smooth setting and concludes with discretizations using finite differences on 2D grids or mixed finite elements for triangle meshes. We demonstrate the core behavior of the squared Hessian as a smoothness energy for various tasks.
Natural Boundary Conditions for Smoothing in Geometry Processing
10,114
Integration of scalar and vector visualization has been an interesting topic. This paper presents a technique to appropriately select and display multiple streamlines while overlaying them with isosurfaces, aiming at an integrated scalar and vector field visualization. The technique visualizes a scalar field by multiple semitransparent isosurfaces and a vector field by multiple streamlines, while adequately selecting the streamlines to reduce clutter among the isosurfaces and streamlines. The technique first selects and renders isosurfaces, and then generates a large number of streamlines from randomly selected seed points. It evaluates each of the streamlines according to its shape in the 2D display space, its distance to critical points of the given vector field, and its occlusion by isosurfaces. It then selects the specified number of highly evaluated streamlines. As a result, we can visualize both scalar and vector fields as a set of view-independently selected isosurfaces and view-dependently selected streamlines.
A Streamline Selection Technique Overlaying with Isosurfaces
10,115
The BRDF of most real-world materials has two components: the surface BRDF, due to light reflecting at the surface of the material, and the subsurface BRDF, due to light entering and going through many scattering events inside the material. Each of these events modifies the light's path, power, and polarization state. Computing the polarized subsurface BRDF of a material requires simulating the light transport inside the material. The transport of polarized light is modeled by the Vector Radiative Transfer Equation (VRTE), an integro-differential equation. Computing the solution to that equation is expensive. The Discrete Ordinate Method (DOM) is a common approach to solving the VRTE. Such solvers are very time consuming for complex uses such as BRDF computation, where one must solve the VRTE for the surface radiance distribution due to light incident from every direction of the hemisphere above the surface. In this paper, we present a GPU-based DOM solution of the VRTE to expedite the subsurface BRDF computation. As in other DOM-based solutions, our solution is based on Fourier expansions of the phase function and the radiance function. This allows us to independently solve the VRTE for each order of expansion. We take advantage of the repetitions across these solves and within each of the sub-steps of the solution process. Our solver is implemented to run mainly on graphics hardware using the OpenCL library and runs up to seven times faster than its CPU equivalent, allowing the computation of the subsurface BRDF in a matter of minutes. We compute and present the subsurface BRDF lobes due to powders and paints of a few materials. We also show the rendering of objects with the computed BRDF. The solver is available for public use through the authors' web site.
GPU accelerated computation of Polarized Subsurface BRDF for Flat Particulate Layers
10,116
We propose using the Dirichlet-to-Neumann operator as an extrinsic alternative to the Laplacian for spectral geometry processing and shape analysis. Intrinsic approaches, usually based on the Laplace-Beltrami operator, cannot capture the spatial embedding of a shape up to rigid motion, and many previous extrinsic methods lack theoretical justification. Instead, we consider the Steklov eigenvalue problem, computing the spectrum of the Dirichlet-to-Neumann operator of a surface bounding a volume. A remarkable property of this operator is that it completely encodes volumetric geometry. We use the boundary element method (BEM) to discretize the operator, accelerated by hierarchical numerical schemes and preconditioning; this pipeline allows us to solve eigenvalue and linear problems on large-scale meshes despite the density of the Dirichlet-to-Neumann discretization. We further demonstrate that our operators naturally fit into existing frameworks for geometry processing, making a shift from intrinsic to extrinsic geometry as simple as substituting the Laplace-Beltrami operator with the Dirichlet-to-Neumann operator.
Steklov Spectral Geometry for Extrinsic Shape Analysis
10,117
The colorful appearance of a physical painting is determined by the distribution of paint pigments across the canvas, which we model as a per-pixel mixture of a small number of pigments with multispectral absorption and scattering coefficients. We present an algorithm to efficiently recover this structure from an RGB image, yielding a plausible set of pigments and a low RGB reconstruction error. We show that under certain circumstances we are able to recover pigments that are close to ground truth, while in all cases our results are always plausible. Using our decomposition, we repose standard digital image editing operations as operations in pigment space rather than RGB, with interestingly novel results. We demonstrate tonal adjustments, selection masking, cut-copy-paste, recoloring, palette summarization, and edge enhancement.
Pigmento: Pigment-Based Image Analysis and Editing
10,118
We present a discrete theory for modeling developable surfaces as quadrilateral meshes satisfying simple angle constraints. The basis of our model is a lesser known characterization of developable surfaces as manifolds that can be parameterized through orthogonal geodesics. Our model is simple, local, and, unlike previous works, it does not directly encode the surface rulings. This allows us to model continuous deformations of discrete developable surfaces independently of their decomposition into torsal and planar patches or the surface topology. We prove and experimentally demonstrate strong ties to smooth developable surfaces, including a theorem stating that every sampling of the smooth counterpart satisfies our constraints up to second order. We further present an extension of our model that enables a local definition of discrete isometry. We demonstrate the effectiveness of our discrete model in a developable surface editing system, as well as computation of an isometric interpolation between isometric discrete developable shapes.
Discrete Geodesic Nets for Modeling Developable Surfaces
10,119
Establishing the matching (or correspondence) between two different 3D shapes is a classical problem. This paper focuses on shape mapping of 3D mesh models and proposes a shape mapping algorithm based on a hidden Markov random field (HMRF) and the EM algorithm, introducing a hidden-state random variable associated with the adjacent blocks of the shape matching when establishing the HMRF. This algorithm provides a new theory and method to ensure the consistency of the edge data of adjacent blocks, and the experimental results show that it greatly improves the shape mapping of 3D mesh models.
Research on Shape Mapping of 3D Mesh Models based on Hidden Markov Random Field and EM Algorithm
10,120
We introduce a continuous global optimization method to the field of surface reconstruction from a discrete, noisy cloud of points with weak orientation information. The proposed method uses an energy functional combining flux-based data-fit measures and a regularization term. A continuous convex relaxation scheme assures the global minimum of the geometric surface functional. The reconstructed surface is implicitly represented by the binary segmentation of the vertices of a 3D uniform grid, and a triangulated surface can be obtained by extracting an appropriate isosurface. Unlike discrete graph-cut solutions, the continuous global optimization entails advantages such as lower memory requirements and reduced metrication errors for geometric quantities, allowing globally optimal surface reconstruction at higher grid resolutions. We demonstrate the performance of the proposed method on several oriented point clouds captured by laser scanners. Experimental results confirm that our approach is robust to noise, large holes, and non-uniform sampling density under the condition of very coarse orientation information.
Continuous Global Optimization in Surface Reconstruction from an Oriented Point Cloud
10,121
The visual representation of concepts or ideas through the use of simple shapes has been explored throughout the history of humanity, and it is believed to be the origin of writing. We focus on the computational generation of visual symbols to represent concepts. We aim to develop a system that uses background knowledge about the world to find connections among concepts, with the goal of generating symbols for a given concept. We are also interested in exploring the system as an approach to visual dissociation and visual conceptual blending. This has great potential in the area of graphic design as a tool both to stimulate creativity and to aid brainstorming in projects such as logo, pictogram, or signage design.
Generation of concept-representative symbols
10,122
Inspired by kernel methods that have been used extensively to achieve efficient facial animation retargeting, this paper presents a solution for retargeting facial animation onto a virtual character's face model based on kernel projection of latent structures (KPLS) regression between semantically similar facial expressions. Specifically, a given number of corresponding semantically similar facial expressions are projected into the latent space. By using the Nonlinear Iterative Partial Least Squares (NIPALS) method, decomposition of the latent variables is achieved. Finally, the KPLS is obtained by solving a kernelized version of the eigenvalue problem. By evaluating our methodology against other kernel-based solutions, we demonstrate its efficiency in transferring facial animation to face models with different morphological variations.
Kernel Projection of Latent Structures Regression for Facial Animation Retargeting
10,123
Assembly-based tools provide a powerful modeling paradigm for non-expert shape designers. However, choosing a component from a large shape repository and aligning it to a partial assembly can become a daunting task. In this paper we describe novel neural network architectures for suggesting complementary components and their placement for an incomplete 3D part assembly. Unlike most existing techniques, our networks are trained on unlabeled data obtained from public online repositories, and do not rely on consistent part segmentations or labels. Absence of labels poses a challenge in indexing the database of parts for the retrieval. We address it by jointly training embedding and retrieval networks, where the first indexes parts by mapping them to a low-dimensional feature space, and the second maps partial assemblies to appropriate complements. The combinatorial nature of part arrangements poses another challenge, since the retrieval network is not a function: several complements can be appropriate for the same input. Thus, instead of predicting a single output, we train our network to predict a probability distribution over the space of part embeddings. This allows our method to deal with ambiguities and naturally enables a UI that seamlessly integrates user preferences into the design process. We demonstrate that our method can be used to design complex shapes with minimal or no user input. To evaluate our approach we develop a novel benchmark for component suggestion systems demonstrating significant improvement over state-of-the-art techniques.
ComplementMe: Weakly-Supervised Component Suggestions for 3D Modeling
10,124
In recent years, consumer-level depth cameras have been adopted for various applications. However, they often produce depth maps at only a moderately high frame rate (approximately 30 frames per second), preventing them from being used for applications such as digitizing human performance involving fast motion. On the other hand, low-cost, high-frame-rate video cameras are available. This motivates us to develop a hybrid camera that consists of a high-frame-rate video camera and a low-frame-rate depth camera and to allow temporal interpolation of depth maps with the help of auxiliary color images. To achieve this, we develop a novel algorithm that reconstructs intermediate depth maps and estimates scene flow simultaneously. We test our algorithm on various examples involving fast, non-rigid motions of single or multiple objects. Our experiments show that our scene flow estimation method is more precise than a tracking-based method and state-of-the-art techniques.
Temporal Upsampling of Depth Maps Using a Hybrid Camera
10,125
Foveation and focus cues are the two most discussed vision topics in the design of near-eye displays. Foveation reduces rendering load by omitting spatial details in the content that peripheral vision cannot appreciate, while providing richer focus cues can resolve the vergence-accommodation conflict, thereby lessening visual discomfort when using near-eye displays. We performed two psychophysical experiments to investigate the relationship between foveation and focus cues. The first study measured blur discrimination sensitivity as a function of visual eccentricity, where we found discrimination thresholds significantly lower than previously reported. The second study measured the depth discrimination threshold, where we found a clear dependency on visual eccentricity. We discuss the results from the two studies and suggest further investigation.
Eccentricity Effects on Blur and Depth Perception
10,126
This contribution describes the relationship between fractions, projective representation, duality, linear algebra, and geometry. Many problems lead to a system of linear equations. This paper presents the equivalence of the cross-product operation and the solution of a system of linear equations Ax = 0 or Ax = b using the projective space representation and homogeneous coordinates. This leads to the conclusion that the division operation is not required for solving a system of linear equations if the projective representation and homogeneous coordinates are used. An efficient solution on CPU- and GPU-based architectures is presented, with an application to barycentric coordinate computation as well.
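For a 2×2 system this equivalence is a one-liner: writing each equation as a homogeneous hyperplane (a, b, c) with ax + by + c = 0, the cross product of the two coefficient vectors is the solution in homogeneous coordinates, and no division is needed. A small sketch (the demo system is ours):

```python
import numpy as np

# Solve  2x + 1y = 5  and  1x - 1y = 1  without any division:
# each row becomes a homogeneous hyperplane (a, b, c) with ax + by + c = 0.
l1 = np.array([2.0, 1.0, -5.0])
l2 = np.array([1.0, -1.0, -1.0])

p = np.cross(l1, l2)     # homogeneous solution (X, Y, W); no division used
print(p)                 # [-6. -3. -3.]
print(p[:2] / p[2])      # only if a Euclidean result is needed: [2. 1.]
```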
Fractions, Projective Representation, Duality, Linear Algebra and Geometry
10,127
We describe a two-level method for computing a function whose zero-level set is the surface reconstructed from given points scattered over the surface and associated with surface normal vectors. The function is defined as a linear combination of compactly supported radial basis functions (CSRBFs). The method preserves the simplicity and efficiency of implicit surface interpolation with CSRBFs, and the reconstructed implicit surface has attributes previously associated only with globally supported or globally regularized radial basis functions, such as exhibiting fewer extra zero-level sets and being suitable for inside/outside tests. First, in the coarse-scale approximation, we choose basis function centers on a grid that covers the enlarged bounding box of the given point set and compute their signed distances to the underlying surface using local quadratic approximations of the nearest surface points. Then a fitting to the residual errors on the surface points and additional off-surface points is performed with fine-scale basis functions. The final function is the sum of the two intermediate functions and is a good approximation of the signed distance field to the surface in the bounding box. Examples of surface reconstruction and set operations between shapes are provided.
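A standard choice of CSRBF is one of Wendland's functions, and fitting a linear combination through given values reduces to a linear solve that stays sparse at scale thanks to the compact support. A minimal sketch with our own toy centers and values:

```python
import numpy as np

def wendland_c2(r, support):
    # Wendland's C^2 compactly supported RBF:
    # phi(r) = (1 - r/s)^4 (4 r/s + 1) for r < s, zero outside.
    q = r / support
    return np.where(q < 1.0, (1 - q) ** 4 * (4 * q + 1), 0.0)

# Fit f(x) = sum_i a_i phi(|x - c_i|) through given values at the centers:
centers = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
values = np.array([1.0, -1.0, -1.0, -1.0])       # signed-distance-like data
r = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
A = wendland_c2(r, support=2.0)     # compact support keeps A sparse at scale
coeffs = np.linalg.solve(A, values)
x = np.array([0.5, 0.0, 0.0])
print(wendland_c2(np.linalg.norm(x - centers, axis=-1), 2.0) @ coeffs)
```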
A two-level approach to implicit surface modeling with compactly supported radial basis functions
10,128
We present an easy-to-implement and efficient analytical inversion algorithm for the unbiased random sampling of a set of points on a triangle mesh whose surface density is specified by barycentric interpolation of non-negative per-vertex weights. The correctness of the inversion algorithm is verified via statistical tests, and we show that it is faster on average than rejection sampling.
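For contrast with the paper's analytical inversion, the rejection-sampling baseline it is compared against fits in a few lines: sample the triangle uniformly with the square-root trick, then accept with probability proportional to the interpolated density. A sketch (the function name and demo values are ours):

```python
import numpy as np

def sample_triangle_weighted(V, w, rng, n=1):
    # Rejection-sample points on a triangle whose density is the barycentric
    # interpolation of non-negative vertex weights w. (This is the rejection
    # baseline the paper compares against, not its analytical inversion.)
    out, wmax = [], max(w)
    while len(out) < n:
        r1, r2 = rng.random(), rng.random()
        s = np.sqrt(r1)                       # uniform barycentric sampling
        b = np.array([1 - s, s * (1 - r2), s * r2])
        if rng.random() * wmax <= b @ w:      # accept w.p. density / max
            out.append(b @ V)
    return np.array(out)

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts = sample_triangle_weighted(V, w=np.array([1.0, 5.0, 1.0]),
                               rng=np.random.default_rng(0), n=5)
print(pts)   # samples cluster toward the heavily weighted vertex (1, 0)
```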
Efficient barycentric point sampling on meshes
10,129
We present an efficient spacetime optimization method to automatically generate animations for a general volumetric, elastically deformable body. Our approach can model the interactions between the body and the environment and automatically generate active animations. We model the frictional contact forces using contact-invariant optimization and the fluid drag forces using a simplified model. To handle complex objects, we use a reduced deformable model and present a novel hybrid optimizer to search for the local minima efficiently. This allows us to use long-horizon motion planning to automatically generate animations such as walking, jumping, swimming, and rolling. We evaluate the approach on different shapes and animations, including deformable body navigation and combination with an open-loop controller for real-time forward simulation.
Active Animations of Reduced Deformable Models with Environment Interactions
10,130
Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions, and a standard observer, translucency (encoded by A) is linked to neither a measurable physical quantity nor a perceptual one. Thus, reproducing translucency encoded by A is open for interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, determining the value of A for a real (potentially non-homogeneous) material can be achieved by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
Redefining A in RGBA: Towards a Standard for Graphical 3D Printing
10,131
As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep learning-based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending the 2D image into a 3D volume, we suggest first learning a coarse, fixed-resolution volume, which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays with changes of illumination, view pose, or geometry. Our evaluation includes comparison to previous tomography work, previous learning methods using our data, a user study, and application to a set of real x-rays.
Single-image Tomography: 3D Volumes from 2D Cranial X-Rays
10,132
We present a robust method to find region-level correspondences between shapes, which are invariant to changes in geometry and applicable across multiple shape representations. We generate simplified shape graphs by jointly decomposing the shapes, and devise an adapted graph-matching technique, from which we infer correspondences between shape regions. The simplified shape graphs are designed to primarily capture the overall structure of the shapes, without reflecting precise information about the geometry of each region, which enables us to find correspondences between shapes that might have significant geometric differences. Moreover, due to the special care we take to ensure the robustness of each part of our pipeline, our method can find correspondences between shapes with different representations, such as triangular meshes and point clouds. We demonstrate that the region-wise matching that we obtain can be used to find correspondences between feature points, reveal the intrinsic self-similarities of each shape, and even construct point-to-point maps across shapes. Our method is both time and space efficient, leading to a pipeline that is significantly faster than comparable approaches. We demonstrate the performance of our approach through an extensive quantitative and qualitative evaluation on several benchmarks where we achieve comparable or superior performance to existing methods.
Robust Structure-based Shape Correspondence
10,133
Modeling subtractive color mixture (e.g., the way that paints mix) is difficult when working with colors described only by three-dimensional color space values, such as RGB. Although RGB values are sufficient to describe a specific color sensation, they do not contain enough information to predict the RGB color that would result from a subtractive mixture of two specified RGB colors. Methods do exist for accurately modeling subtractive mixture, such as the Kubelka-Munk equations, but require extensive spectrophotometric measurements of the mixed components, making them unsuitable for many computer graphics applications. This paper presents a strategy for modeling subtractive color mixture given only the RGB information of the colors being mixed, written for a general audience. The RGB colors are first transformed to generic, representative spectral distributions, and then this spectral information is used to perform the subtractive mixture, using the weighted arithmetic-geometric mean. This strategy provides reasonable, representative subtractive mixture colors with only modest computational effort and no experimental measurements. As such, it provides a useful way to model subtractive color mixture in computer graphics applications.
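The paper's strategy maps RGB colors to representative spectra and mixes with the weighted arithmetic-geometric mean; the geometric-mean step on its own is easy to demonstrate. A sketch on toy reflectance spectra (the function name, band count, and values are ours, ordered short to long wavelength), showing blue and yellow mixing toward green:

```python
import numpy as np

def mix_subtractive(R1, R2, t):
    # Subtractive mixture of two reflectance spectra via the weighted
    # geometric mean: R = R1^(1-t) * R2^t  (t = fraction of paint 2).
    R1 = np.clip(R1, 1e-4, 1.0)   # avoid zeros before taking powers
    R2 = np.clip(R2, 1e-4, 1.0)
    return R1 ** (1.0 - t) * R2 ** t

# Toy 8-band spectra: a "blue" and a "yellow" reflectance curve.
blue   = np.array([0.70, 0.80, 0.60, 0.30, 0.15, 0.08, 0.06, 0.05])
yellow = np.array([0.05, 0.08, 0.15, 0.45, 0.80, 0.90, 0.90, 0.85])
green = mix_subtractive(blue, yellow, t=0.5)
print(np.round(green, 3))   # peaks in the middle bands: blue + yellow = green
```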
Subtractive Color Mixture Computation
10,134
Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environments), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks, trained on synthetic images, to predict good gradients in parametric space given observations of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.
Joint Material and Illumination Estimation from Photo Sets in the Wild
10,135
We present Deep Illumination, a novel machine learning technique for approximating global illumination (GI) in real-time applications using a Conditional Generative Adversarial Network. Our primary focus is on generating indirect illumination and soft shadows with offline rendering quality at interactive rates. Inspired by recent advances in image-to-image translation problems using deep generative convolutional networks, we introduce a variant of this network that learns a mapping from G-buffers (depth map, normal map, and diffuse map) and direct illumination to any global illumination solution. Our primary contribution is showing that a generative model can be used to learn a density estimation from screen-space buffers to an advanced illumination model for a 3D environment. Once trained, our network can approximate global illumination for scene configurations it has never encountered before within the environment it was trained on. We evaluate Deep Illumination through a comparison with both a state-of-the-art real-time GI technique (VXGI) and an offline rendering GI technique (path tracing). We show that our method produces effective GI approximations and is also computationally cheaper than existing GI techniques. Our technique has the potential to replace existing precomputed and screen-space techniques for producing global illumination effects in dynamic scenes with physically-based rendering quality.
Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network
10,136
This paper presents a method for reconstructing full-body locomotion sequences for virtual characters in real time, using data from a single inertial measurement unit (IMU). This task is difficult because of the need to reconstruct a high number of degrees of freedom (DOFs) from a very low number of DOFs. To solve such a complex problem, the presented method is divided into several steps. The user's full-body locomotion and the IMU's data are recorded simultaneously. Then, the data is preprocessed so that it can be handled more efficiently. By developing a hierarchical multivariate hidden Markov model with reactive interpolation functionality, the system learns the structure of the motion sequences. Specifically, the phases of the locomotion sequence are assigned at the higher hierarchical level, and the frame structure of the motion sequences is assigned at the lower hierarchical level. At runtime, the forward algorithm is used to reconstruct the full-body motion of a virtual character. First, the method predicts the phase to which the input motion belongs (higher hierarchical level). Second, the method predicts the closest trajectories and their progression and interpolates the most probable of them to reconstruct the virtual character's full-body motion (lower hierarchical level). Evaluation of the proposed method shows that it runs at reasonable frame rates and minimizes the reconstruction errors compared with previous approaches.
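The forward algorithm at the core of the runtime phase prediction is the standard HMM recursion. A minimal sketch (all probabilities are toy values; the paper's model is hierarchical and multivariate rather than this discrete single-level form):

```python
import numpy as np

def forward(pi, A, B, obs):
    # HMM forward algorithm: alpha[i] = P(o_1..o_t, state_t = i).
    # pi: initial state probs (S,), A: transitions (S, S),
    # B: emission probs (S, O), obs: observation indices (T,).
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha            # summing gives the sequence likelihood

# Two locomotion "phases" (e.g. stance/swing), three quantized IMU symbols:
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
alpha = forward(pi, A, B, obs=[0, 1, 2])
print(alpha, alpha.sum())   # most probable current phase = argmax of alpha
```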
Full-Body Locomotion Reconstruction of Virtual Characters Using a Single IMU
10,137
We report results from an experiment on ranking visual markers and node-positioning techniques for network visualizations. Inspired by prior ranking studies, we rethink the ranking when the dataset size increases and when the markers are distributed in space. Centrality indices are visualized as node attributes. Our experiment studies nine visual markers and three positioning methods. Our results suggest that direct encoding of quantities improves accuracy by about 20% compared to previous results. Of the three positioning techniques, circular was always in the top group, while matrix and projection switch orders depending on two factors: whether or not the tasks demand symmetry, and whether the nodes are in close proximity. Among the most interesting results of ranking the visual markers for comparison tasks are that hue and area fall into the top groups for nearly all multi-scale comparison tasks; shape (ordered by curvature) is perhaps not as scalable as we had thought and can support more accurate answers only when two quantities are compared; and lightness and slope are least accurate for quantitative comparisons regardless of the scale of the comparison tasks. Our experiment is among the first to acquire a complete picture of ranking visual markers at different scales for comparison tasks.
Overlaying Quantitative Measurement on Networks: An Evaluation of Three Positioning and Nine Visual Marker Techniques
10,138
This paper presents a simple and effective two-stage mesh denoising algorithm. In the first stage, face normal filtering is done using bilateral normal filtering in a robust statistics framework. Tukey's bi-weight function is used as the similarity function in the bilateral weighting; it is a robust estimator that stops the diffusion at sharp edges to retain features and effectively removes noise from flat regions. In the second stage, an edge-weighted Laplace operator is introduced to compute a differential coordinate. This differential coordinate helps the algorithm produce a high-quality mesh without any face normal flips and makes the method robust against high-intensity noise.
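The key ingredient, Tukey's bi-weight, is a robust weight that falls to exactly zero beyond a cutoff, so faces across a sharp edge simply stop contributing. A sketch of one bilateral step on a face normal (the parameter values and the tiny three-face setup are ours):

```python
import numpy as np

def tukey(x, sigma):
    # Tukey's bi-weight: a smooth robust weight that is exactly zero beyond
    # sigma, so diffusion stops across sharp edges.
    w = 1.0 - (x / sigma) ** 2
    return np.where(np.abs(x) < sigma, w * w, 0.0)

def filter_normal(i, normals, centers, areas, nbrs, s_c=0.1, s_n=0.4):
    # One bilateral step on face i's normal, Tukey as the similarity term.
    acc = np.zeros(3)
    for j in nbrs:
        w = (areas[j]
             * np.exp(-np.sum((centers[j] - centers[i]) ** 2) / (2 * s_c ** 2))
             * tukey(np.linalg.norm(normals[j] - normals[i]), s_n))
        acc += w * normals[j]
    return acc / np.linalg.norm(acc)

normals = np.array([[0.0, 0, 1], [0.05, 0, 1], [1.0, 0, 0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
centers = np.array([[0.0, 0, 0], [0.05, 0, 0], [0.1, 0, 0]])
print(filter_normal(0, normals, centers, areas=np.ones(3), nbrs=[0, 1, 2]))
# the third face's very different normal gets Tukey weight 0 -> edge preserved
```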
Robust and High Fidelity Mesh Denoising
10,139
High-quality shadow anti-aliasing is a challenging problem in shadow mapping. Revectorization-based shadow mapping (RBSM) minimizes shadow aliasing by revectorizing the jagged shadow edges generated with shadow mapping, keeping a low memory footprint and real-time performance for the shadow computation. However, the current implementation of RBSM is not well optimized because its visibility functions are composed of a set of 43 cases, each of them handling a specific revectorization scenario and being implemented as a specific branch in the shader. Here, we take advantage of the shadow shape patterns to reformulate the RBSM visibility functions, simplifying the implementation of the technique and further providing an optimized version of RBSM. Our results indicate that our implementation runs faster than the original implementation of RBSM, while keeping the same visual quality and memory consumption. Furthermore, we provide GLSL source code to ease the implementation of our technique, provide a comparison between the optimized RBSM and related work, and discuss the limitations of shadow revectorization.
Optimized Visibility Functions for Revectorization-Based Shadow Mapping
10,140
Real-time live applications in virtual reality are increasing, and capturing and retargeting 3D human pose plays an important role in them. However, it is still challenging to estimate accurate 3D pose from consumer imaging devices such as depth cameras. This paper presents a novel cascaded 3D full-body pose regression method to estimate accurate pose from a single depth image at 100 fps. The key idea is to train cascaded regressors based on the gradient boosting algorithm from a pre-recorded human motion capture database. By incorporating a hierarchical kinematics model of human pose into the learning procedure, we can directly estimate accurate 3D joint angles instead of joint positions. The biggest advantage of this model is that bone lengths are preserved during the whole 3D pose estimation procedure, which leads to more effective features and higher pose estimation accuracy. Our method can be used as an initialization procedure when combined with tracking methods. We demonstrate the power of our method on a wide range of synthesized human motion data from the CMU mocap database, the Human3.6M dataset, and real human movement data captured in real time. In our comparison against previous 3D pose estimation methods and commercial systems such as the Kinect (2017), we achieve state-of-the-art accuracy.
Cascaded 3D Full-body Pose Regression from Single Depth Image at 100 FPS
10,141
We present a visual analytics system for exploring group differences in tensor fields with respect to all six degrees of freedom that are inherent in symmetric second-order tensors. Our framework closely integrates quantitative analysis, based on multivariate hypothesis testing and spatial cluster enhancement, with suitable visualization tools that facilitate interpretation of results, and forming of new hypotheses. Carefully chosen and linked spatial and abstract views show clusters of strong differences, and allow the analyst to relate them to the affected structures, to reveal the exact nature of the differences, and to investigate potential correlations. A mechanism for visually comparing the results of different tests or levels of smoothing is also provided. We carefully justify the need for such a visual analytics tool from a practical and theoretical point of view. In close collaboration with our clinical co-authors, we apply it to the results of a diffusion tensor imaging study of systemic lupus erythematosus, in which it revealed previously unknown group differences.
Visual Analytics of Group Differences in Tensor Fields: Application to Clinical DTI
10,142
Epidemiology aims at identifying subpopulations of cohort participants that share common characteristics (e.g. alcohol consumption) to explain risk factors of diseases in cohort study data. These data contain information about the participants' health status gathered from questionnaires, medical examinations, and image acquisition. Due to the growing volume and heterogeneity of epidemiological data, the discovery of meaningful subpopulations is challenging. Subspace clustering can be leveraged to find subpopulations in large and heterogeneous cohort study datasets. In our collaboration with epidemiologists, we realized their need for a tool to validate discovered subpopulations. For this purpose, identified subpopulations should be searched for in independent cohorts to check whether the findings apply there as well. In this paper we describe our interactive Visual Analytics framework S-ADVIsED for SubpopulAtion Discovery and Validation In Epidemiological Data. S-ADVIsED enables epidemiologists to explore and validate findings derived from subspace clustering. We provide a coordinated multiple view system, which includes a summary view of all subpopulations, detail views, and statistical information. Users can assess the quality of subspace clusters by considering different criteria via visualization. Furthermore, intervals for variables involved in a subspace cluster can be adjusted; this extension was suggested by epidemiologists. We investigated the replication of a selected subpopulation with multiple variables in another population by considering different measurements. As a specific result, we observed that study participants exhibiting high liver fat accumulation deviate strongly from other subpopulations and from the total study population with respect to age, body mass index, thyroid volume, and thyroid-stimulating hormone.
Visual Subpopulation Discovery and Validation in Cohort Study Data
10,143
Liquid simulations for computer animation often avoid simulating the air phase to reduce computational costs and ensure good conditioning of the linear systems required to enforce incompressibility. However, this free surface assumption leads to an inability to realistically treat bubbles: submerged gaps in the liquid are interpreted as empty voids that immediately collapse. To address this shortcoming, we present an efficient, practical, and conceptually simple approach to augment free surface flows with negligible density bubbles. Our method adds a new constraint to each disconnected air region that guarantees zero net flux across its entire surface, and requires neither simulating both phases nor reformulating into stream function variables. Implementation of the method requires only minor modifications to the pressure solve of a standard grid-based fluid solver, and yields linear systems that remain sparse and symmetric positive definite. In our evaluations, solving the modified pressure projection system took no more than 10% longer than the corresponding free surface solve. We demonstrate the method's effectiveness and flexibility by incorporating it into commercial fluid animation software and using it to generate a variety of dynamic bubble scenarios showcasing glugging effects, viscous and inviscid bubbles, interactions with irregularly-shaped and moving solid boundaries, and surface tension effects.
Constraint Bubbles: Adding Efficient Zero-Density Bubbles to Incompressible Free Surface Flow
10,144
We introduce the cause of the inefficiency of bivariate glyphs by defining the corresponding error. To recommend efficient and perceptually accurate bivariate-glyph designs, we present an empirical study of five bivariate glyphs based on three psychophysics principles (integral-separable dimensions, visual hierarchy, and pre-attentive pop-out), choosing one integral pair ($length_y-length_x$), three separable pairs ($length-color$, $length-texture$, $length_y-length_y$), and one redundant pair ($length_y-color/length_x$). Twenty participants performed four tasks: reading numerical values, estimating ratios, comparing two points, and looking for extreme values among a subset of points belonging to the same sub-group. The most surprising result was that $length-texture$ was among the most effective methods, suggesting that local spatial-frequency features can lead to global pattern detection that facilitates visual search in complex 3D structures. Our results also reveal the following: $length-color$ bivariate glyphs led to the most accurate answers and the least task execution time, while the $length_y-length_x$ (integral) dimensions were among the worst and are not recommended; they achieved high performance only when pop-out color was added.
Bivariate Separable-Dimension Glyphs can Improve Visual Analysis of Holistic Features
10,145
The joint bilateral filter, which enables feature-preserving signal smoothing according to the structural information from a guidance, has been applied for various tasks in geometry processing. Existing methods either rely on a static guidance that may be inconsistent with the input and lead to unsatisfactory results, or a dynamic guidance that is automatically updated but sensitive to noises and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique called static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters that are based on static or dynamic guidances. The filter can be applied to mesh face normals followed by vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.
Static/Dynamic Filtering for Mesh Geometry
10,146
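For the Static/Dynamic Filtering entry above, here is a minimal NumPy sketch of one joint bilateral step on face normals. It is a simplification of the paper's optimization-based scheme; all array shapes, parameter values, and helper names are hypothetical, and the vertex-position update that follows normal filtering is omitted.

```python
import numpy as np

def bilateral_normal_step(normals, centroids, areas, neighbors, guidance,
                          sigma_s=0.5, sigma_r=0.3):
    """One joint bilateral step on face normals.

    normals   : (F, 3) current face normals
    centroids : (F, 3) face centroids
    areas     : (F,)   face areas
    neighbors : list of index arrays (faces adjacent to each face)
    guidance  : (F, 3) guidance normals
    """
    out = np.empty_like(normals)
    for i, nbrs in enumerate(neighbors):
        d = np.linalg.norm(centroids[nbrs] - centroids[i], axis=1)
        r = np.linalg.norm(guidance[nbrs] - guidance[i], axis=1)
        w = areas[nbrs] * np.exp(-d**2 / (2 * sigma_s**2)) \
                        * np.exp(-r**2 / (2 * sigma_r**2))
        n = (w[:, None] * normals[nbrs]).sum(axis=0)
        out[i] = n / (np.linalg.norm(n) + 1e-12)
    return out
```

Keeping `guidance` fixed across iterations corresponds to a static guidance; re-assigning `guidance = out` after each step gives the dynamic variant. The paper's solver unifies the two.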
We present a system to convert any set of images (e.g., a video clip or a photo album) into a storyboard. We aim to create multiple pleasing graphic representations of the content at interactive rates, so the user can explore and find the storyboard (images, layout, and stylization) that best suits their needs and taste. The main challenges of this work are: selecting the content images, placing them into panels, and applying a stylization. For the latter, we propose an interactive design tool to create new stylizations using a wide range of filter blocks. This approach unleashes creativity by allowing the user to tune, modify, and intuitively design new sequences of filters. In parallel to this manual design, we propose a novel procedural approach that automatically assembles sequences of filters for innovative results. We aim to keep the algorithm complexity as low as possible such that it can run interactively on a mobile device. Our results include examples of styles designed using both our interactive and procedural tools, as well as their final composition into interesting and appealing storyboards.
Graphic Narrative with Interactive Stylization Design
10,147
A method is proposed for constructing a Bézier-type spline curve that is given by a piecewise polynomial function and is continuous together with its first derivative. Conditions for its existence and uniqueness are given. The constructed curve lies inside the convex hull of the control points, and the segments of the broken line connecting the control points are tangent to the curve. To construct the curve, we use the approach proposed earlier for constructing a parabolic spline. The idea is to use additional points with unknown values of some function. The additional points are used as spline nodes, and the function values are determined from the condition that the first derivative of the piecewise polynomial curve be continuous. At multiple interpolation nodes, the function takes the given values and the values of the first derivative, which are determined by the control points. Examples of constructing a spline curve are given.
On the one method of a third-degree bezier type spline curve construction
10,148
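The abstract above does not spell out the construction, but the stated properties (C^1 continuity, containment in the convex hull, tangency of the control-polygon edges) are shared by the classic piecewise-quadratic spline whose segments join at edge midpoints. A short NumPy sketch of that related construction, not the paper's exact method:

```python
import numpy as np

def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier segment at parameters t."""
    t = t[:, None]
    return (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2

def midpoint_spline(P, samples=32):
    """C^1 piecewise-quadratic curve tangent to the control polygon.

    Segment i runs between the midpoints of edges (i, i+1) and
    (i+1, i+2), with P[i+1] as the inner Bezier control point; this
    makes each polygon edge tangent to the curve and keeps the curve
    inside the convex hull of the control points.
    """
    P = np.asarray(P, float)
    t = np.linspace(0.0, 1.0, samples)
    pieces = []
    for i in range(len(P) - 2):
        m0 = 0.5 * (P[i] + P[i + 1])
        m1 = 0.5 * (P[i + 1] + P[i + 2])
        pieces.append(quad_bezier(m0, P[i + 1], m1, t))
    return np.vstack(pieces)

curve = midpoint_spline([[0, 0], [1, 2], [3, 2], [4, 0]])
```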
We present a palette-based framework for color composition for visual applications. Color composition is a critical aspect of visual applications in art, design, and visualization. The color wheel is often used to explain pleasing color combinations in geometric terms, and, in digital design, to provide a user interface to visualize and manipulate colors. We abstract relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels. Our framework provides a basis for a variety of color-aware image operations, such as color harmonization and color transfer, and can be applied to videos. To enable our approach, we introduce an extremely scalable and efficient yet simple palette-based image decomposition algorithm. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer decomposition tool. After preprocessing, our algorithm can decompose 6 MP images into layers in 20 milliseconds. We also conducted three large-scale, wide-ranging perceptual studies on the perception of harmonic colors and harmonization algorithms.
Palette-based image decomposition, harmonization, and color transfer
10,149
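A minimal SciPy sketch of the RGBXY geometry behind the decomposition described above. It assumes float RGB in [0, 1], treats the convex-hull vertices directly as the palette (the published method adds a hull-simplification step to reach a small palette, omitted here), and is intended for small images, since a 5D Delaunay tessellation gets expensive:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def rgbxy_layers(img):
    """Decompose an image into barycentric weights over RGBXY hull vertices."""
    h, w, _ = img.shape
    y, x = np.mgrid[0:h, 0:w]
    pts = np.concatenate([img.reshape(-1, 3),
                          np.stack([x.ravel() / w, y.ravel() / h], 1)], 1)
    hull = ConvexHull(pts)
    verts = pts[hull.vertices]          # palette candidates (5D)
    tri = Delaunay(verts)
    simp = tri.find_simplex(pts)        # may be -1 for boundary points;
                                        # a robust version nudges those inward
    T = tri.transform[simp]
    b = np.einsum('nij,nj->ni', T[:, :5], pts - T[:, 5])
    weights = np.concatenate([b, 1 - b.sum(1, keepdims=True)], 1)
    return verts, simp, weights
```

Each pixel is then a convex combination of at most six palette vertices, which is what makes the per-pixel layer weights sparse and fast to compute.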
Retinal projection is required for xR applications that can deliver an immersive visual experience throughout the day. If general-purpose retinal projection methods can be realized at a low cost, not only could the image be displayed on the retina using less energy, but it may also become possible to remove the weight of the projection unit itself from the AR goggles. Several retinal projection methods have been proposed previously; however, because the lens and iris of the eyeball sit in front of the retina, retinal projection generally suffers from narrow viewing angles and a small eyebox. In this short technical report, we introduce ideas and samples of an optical system for solving these common problems of retinal projection by using a metamaterial mirror (a plane-symmetric transfer optical system). Using this projection method, designing retinal projection systems becomes easier, and if appropriate optics are available, it would be possible to construct an optical system that allows retinal projection hardware to be realized quickly.
How could we ignore the lens and pupils of eyeballs: Metamaterial optics for retinal projection
10,150
Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage --- and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve.
Deep Painterly Harmonization
10,151
We propose a hardware and software pipeline to fabricate flexible wearable sensors and use them to capture deformations without line of sight. Our first contribution is a low-cost fabrication pipeline to embed multiple aligned conductive layers with complex geometries into silicone compounds. Overlapping conductive areas from separate layers form local capacitors that measure dense area changes. Contrary to existing fabrication methods, the proposed technique only requires hardware that is readily available in modern fablabs. While area measurements alone are not enough to reconstruct the full 3D deformation of a surface, they become sufficient when paired with a data-driven prior. A novel semi-automatic tracking algorithm, based on an elastic surface geometry deformation, makes it possible to capture ground-truth data with an optical mocap system, even under heavy occlusions or partially unobservable markers. The resulting dataset is used to train a regressor based on deep neural networks, directly mapping the area readings to global positions of surface vertices. We demonstrate the flexibility and accuracy of the proposed hardware and software in a series of controlled experiments, and design a prototype of wearable wrist, elbow and biceps sensors, which do not require line-of-sight and can be worn below regular clothing.
Deformation Capture via Soft and Stretchable Sensor Arrays
10,152
Since the history of display technologies began, people have dreamed of an ultimate 3D display system. To get closer to this dream, 3D displays should provide both psychological and physiological cues for the recognition of depth information. However, it is challenging to satisfy these essential features without sacrificing conventional technical qualities, including resolution, frame rate, and eye-box. Here, we present a new type of 3D display: the tomographic display. We claim that tomographic displays may support an extremely wide depth of field, quasi-continuous accommodation, omni-directional motion parallax, preserved resolution, full frame rate, and a moderate field of view within a sufficient eye-box. Tomographic displays consist of focus-tunable optics, a 2D display panel, and a fast spatially adjustable backlight. The synchronization of the focus-tunable optics and the backlight enables the 2D display panel to express depth information. Tomographic displays have various applications including tabletop 3D displays, head-up displays, and near-eye stereoscopes. In this study, we implement a near-eye display named TomoReal, which is one of the most promising applications of tomographic displays. We conclude with a detailed analysis and thorough discussion of tomographic displays, which would open a new research field.
TomoReal: Tomographic Displays
10,153
We introduce a normal-based bas-relief generation and stylization method, motivated by recent advances on this topic. Creating bas-reliefs from normal images has successfully facilitated bas-relief modeling in image space. However, the use of normal images in previous work is often restricted to certain types of operations only. This paper extends normal-based methods to construct bas-reliefs from normal images in a versatile way. Our method can not only generate a new normal image by combining various frequencies of existing normal images and transferring details, but can also build bas-reliefs from a single RGB image and its edge-based sketch image. In addition, we introduce an auxiliary function to represent a smooth base surface and generate a layered global shape. To integrate the above considerations into our framework, we formulate bas-relief generation as a variational problem which can be solved with a screened Poisson equation. Advantages of our method are that it expands the bas-relief shape space, generates diversified styles of results, and is capable of transferring details from one region to other regions. Our method is easy to implement and produces good-quality bas-relief models. We evaluate our method on a range of normal images, and it compares favorably to other popular classic and state-of-the-art methods.
Normal Image Manipulation for Bas-relief Generation with Hybrid Styles
10,154
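The variational formulation above leads to a screened Poisson equation. A minimal sketch of a standard screened Poisson solve on a rectangular domain with Neumann boundary (via the DCT), without the paper's layering and auxiliary base-surface machinery; `lam`, `h0`, and the gradient inputs are placeholders:

```python
import numpy as np
from scipy.fft import dctn, idctn

def screened_poisson_height(gx, gy, h0, lam=0.1):
    """Solve  min_h |grad h - g|^2 + lam |h - h0|^2  on a grid.

    Optimality condition: (lam*I - Laplacian) h = lam*h0 - div g,
    diagonalized by the DCT for reflective (Neumann) boundaries.
    """
    H, W = gx.shape
    div = np.zeros((H, W))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]    # backward-difference divergence
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    rhs = lam * h0 - div
    rhat = dctn(rhs, norm='ortho')
    eig = (2 * np.cos(np.pi * np.arange(W) / W) - 2)[None, :] + \
          (2 * np.cos(np.pi * np.arange(H) / H) - 2)[:, None]
    return idctn(rhat / (lam - eig), norm='ortho')

# For a normal image n = (nx, ny, nz), a target gradient field is
# gx = -nx / nz, gy = -ny / nz.
```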
Approximation of scattered geometric data is a common task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for large scattered (unordered) datasets in d-dimensional space. This method is especially useful in higher dimensions d>=2, because other methods require converting a scattered dataset to a semi-regular mesh using some tessellation technique, which is computationally expensive. The RBF approximation is non-separable, as it is based on the distance between two points. It leads to the solution of an overdetermined Linear System of Equations (LSE). In this paper a new RBF approximation method is derived and presented. The presented approach is applicable to the d-dimensional case in general.
A New Radial Basis Function Approximation with Reproduction
10,155
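A bare-bones NumPy version of the overdetermined least-squares setup described above, with a plain Gaussian basis; the paper's actual contribution (a formulation with reproduction) is omitted, and the centers, shape parameter, and test function are arbitrary:

```python
import numpy as np

def rbf_fit(X, f, centers, shape=1.0):
    """Least-squares RBF approximation of scattered data.

    X : (N, d) scattered samples, f : (N,) values, centers : (M, d), N >> M.
    Returns weights w with  f(x) ~= sum_j w_j * exp(-(shape*|x - c_j|)^2).
    """
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    A = np.exp(-(shape * r)**2)                # non-separable: distances only
    w, *_ = np.linalg.lstsq(A, f, rcond=None)  # overdetermined LSE
    return w

def rbf_eval(Y, centers, w, shape=1.0):
    r = np.linalg.norm(Y[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(shape * r)**2) @ w

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                      # works identically for any d
f = np.sin(4 * X[:, 0]) + X[:, 1] * X[:, 2]
C = X[rng.choice(len(X), 50, replace=False)]
w = rbf_fit(X, f, C)
```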
Extreme deformation can drastically morph a structure from one structural form into another. Programming such deformation properties into a structure is often challenging and in many cases an impossible task. The morphed forms do not hold and usually relapse to the original form, where the structure is in its lowest energy state. For example, a stick, when bent, resists its bent form and tends to go back to its initial straight form, where it holds the least amount of potential energy. In this project, we present a computational design method that can create fabricable planar structures that morph into two different bistable forms. Once the user provides the initial desired forms, the method automatically creates support structures (internal springs) such that the structure can not only morph but also hold the respective forms under external force. We achieve this through an iterative nonlinear optimization strategy that shapes the potential energy of the structure in the two forms simultaneously. Our approach guarantees first- and second-order stability with respect to the potential energy of the bistable structure.
Metamorphs: Bistable Planar Structures
10,156
We present an efficient, trivially parallelizable algorithm to compute offset surfaces of shapes discretized using a dexel data structure. Our algorithm is based on a two-stage sweeping procedure that is simple to implement and efficient, entirely avoiding volumetric distance field computations typical of existing methods. Our construction is based on properties of half-space power diagrams, where each seed is only visible by a half-space, which were never used before for the computation of surface offsets. The primary application of our method is interactive modeling for digital fabrication. Our technique enables a user to interactively process high-resolution models. It is also useful in a plethora of other geometry processing tasks requiring fast, approximate offsets, such as topology optimization, collision detection, and skeleton extraction. We present experimental timings, comparisons with previous approaches, and provide a reference implementation in the supplemental material.
Half-Space Power Diagrams and Discrete Surface Offsets
10,157
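The half-space power diagrams used above generalize a classic primitive: the lower envelope of parabolas, which is also the core of the two-pass squared-distance transform. A sketch of that 1D primitive (in the style of Felzenszwalb and Huttenlocher), shown here for intuition rather than as the paper's dexel algorithm:

```python
import numpy as np

def lower_envelope_sweep(f):
    """Lower envelope of parabolas y = f[q] + (x - q)^2 on a 1D grid.

    Each grid site q contributes one parabola; the envelope records
    which site is closest (in the power-distance sense) at every x.
    Running this once per axis yields 2D squared distances.
    Use large finite values in f (not np.inf) for absent sites.
    """
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)                # sites of visible parabolas
    z = np.full(n + 1, np.inf); z[0] = -np.inf
    k = 0
    for q in range(1, n):
        s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        k += 1
        v[k] = q; z[k] = s; z[k + 1] = np.inf
    k = 0
    for x in range(n):
        while z[k + 1] < x:
            k += 1
        d[x] = (x - v[k])**2 + f[v[k]]
    return d

seeds = np.zeros(64, dtype=bool); seeds[[10, 40]] = True
dist2 = lower_envelope_sweep(np.where(seeds, 0.0, 1e12))
```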
An ideal software system in computer graphics should be a combination of innovative ideas, solid software engineering and rapid development. However, in reality these requirements are seldom met simultaneously. In this paper, we present early results on an open-source library named Taichi (http://taichi.graphics), which alleviates this practical issue by providing an accessible, portable, extensible, and high-performance infrastructure that is reusable and tailored for computer graphics. As a case study, we share our experience in building a novel physical simulation system using Taichi.
Taichi: An Open-Source Computer Graphics Library
10,158
This work presents a halftoning technique for manufacturing 3D objects with the appearance of continuous grayscale imagery on Fused Deposition Modeling (FDM) printers. While droplet-based dithering is a common halftoning technique, it is not applicable to FDM printing, since FDM builds up objects by extruding material in semi-continuous paths. Instead, the line-based halftoning principle called 'hatching' is applied to the line patterns naturally occurring in FDM prints, which are built up in a layer-by-layer fashion. The proposed technique avoids the challenges existing FDM coloring techniques face: they either strongly affect the surface geometry and deteriorate as surface slopes deviate from vertical, or they alter the basic parameters of the printing process and thereby the structural properties of the resulting product. Furthermore, the proposed technique has little effect on printing time. Experiments on a dual-nozzle FDM printer show promising results. Future work is required to calibrate the perceived tone.
Hatching for 3D prints: line-based halftoning for dual extrusion fused deposition modeling
10,159
We introduce a non-exponential radiative framework that takes into account the local spatial correlation of scattering particles in a medium. Most previous works in graphics have ignored this, assuming uncorrelated media with a uniform, random local distribution of particles. However, positive and negative correlation lead to slower- and faster-than-exponential attenuation respectively, which cannot be predicted by the Beer-Lambert law. As our results show, this has a major effect on extinction, and thus appearance. From recent advances in neutron transport, we first introduce our Extended Generalized Boltzmann Equation, and develop a general framework for light transport in correlated media. We lift the limitations of the original formulation, including an analysis of the boundary conditions, and present a model suitable for computer graphics, based on optical properties of the media and statistical distributions of scatterers. In addition, we present an analytic expression for transmittance in the case of positive correlation, and show how to incorporate it efficiently into a Monte Carlo renderer. We show results with a wide range of both positive and negative correlation, and demonstrate the differences compared to classic light transport.
A Radiative Transfer Framework for Spatially-Correlated Materials
10,160
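To get a feel for why correlation matters, compare free-path sampling under the Beer-Lambert law with a slower-than-exponential power-law family. The family below is purely illustrative; the paper derives its transmittance from the statistical distribution of the scatterers:

```python
import numpy as np

def sample_free_path(u, sigma, gamma=None):
    """Sample flight distance t with P(path > t) = T(t), by inverting T.

    gamma=None   : classical exponential,  T(t) = exp(-sigma*t)
    gamma finite : power-law family,       T(t) = (1 + sigma*t/gamma)^-gamma,
                   heavier-tailed (positive correlation); tends to
                   Beer-Lambert as gamma -> infinity.
    """
    if gamma is None:
        return -np.log(u) / sigma
    return gamma / sigma * (u ** (-1.0 / gamma) - 1.0)

u = np.random.default_rng(1).random(100_000)
t_classic = sample_free_path(u, sigma=1.0)
t_correlated = sample_free_path(u, sigma=1.0, gamma=2.0)  # longer mean path
```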
We present a novel end-to-end framework for facial performance capture given a monocular video of an actor's face. Our framework comprises two parts. First, to extract the information in the frames, we optimize a triplet loss to learn an embedding space which ensures that semantically similar facial expressions lie closer together, and which lets the model transfer to distinguish expressions that are not present in the training dataset. Second, the embeddings are fed into an LSTM network to learn the deformation between frames. In our experiments, we demonstrate that, compared to other methods, our method can distinguish the delicate motion around the lips and significantly reduce jitter between the tracked meshes.
LSTM-Based Facial Performance Capture Using Embedding Between Expressions
10,161
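A minimal PyTorch sketch of the triplet-loss embedding stage described in the entry above; the feature dimension, network sizes, and batch contents are hypothetical:

```python
import torch
import torch.nn as nn

# Hypothetical embedding network for per-frame expression features.
embed = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 64))
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(embed.parameters(), lr=1e-4)

def triplet_step(anchor, positive, negative):
    """anchor/positive: similar expressions; negative: a dissimilar one."""
    loss = triplet(embed(anchor), embed(positive), embed(negative))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

a, p, n = (torch.randn(8, 2048) for _ in range(3))
triplet_step(a, p, n)
```

The learned embeddings would then be fed, frame by frame, into the LSTM stage.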
This paper considers a different aspect of anatomical face modeling: kinematic modeling of the jaw, i.e., the Temporo-Mandibular Joint (TMJ). Previous work often relies on simple models of jaw kinematics, even though the actual physiological behavior of the TMJ is quite complex, allowing not only for mouth opening, but also for some amount of sideways (lateral) and front-to-back (protrusion) motions. Fortuitously, the TMJ is the only joint whose kinematics can be accurately measured with optical methods, because the bones of the lower and upper jaw are rigidly connected to the lower and upper teeth. We construct a person-specific jaw kinematic model by asking an actor to exercise the entire range of motion of the jaw while keeping the lips open so that the teeth are at least partially visible. This performance is recorded with three calibrated cameras. We obtain highly accurate 3D models of the teeth with a standard dental scanner and use these models to reconstruct the rigid body trajectories of the teeth from the videos (markerless tracking). The relative rigid-transformation samples between the lower and upper teeth are mapped to the Lie algebra of rigid body motions in order to linearize the rotational motion. Our main contribution is to fit these samples with a three-dimensional nonlinear model parameterizing the entire range of motion of the TMJ. We show that standard Principal Component Analysis (PCA) fails to capture the nonlinear trajectories of the moving mandible. However, we found that these nonlinearities can be captured with a special modification of autoencoder neural networks known as Nonlinear PCA. By mapping back to the Lie group of rigid transformations, we obtain a parameterization of the jaw kinematics that provides an intuitive interface allowing animators to explore realistic jaw motions in a user-friendly way.
Building Anatomically Realistic Jaw Kinematics Model from Data
10,162
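A small sketch of the linearization step described above, using SciPy's rotation log map. Note this uses a simplified 6D chart (rotation vector plus raw translation) rather than the full se(3) logarithm, which couples rotation and translation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def linearize_poses(R_rel, t_rel):
    """Map relative teeth poses (N,3,3) + (N,3) to a linear 6D chart.

    Rotations go to so(3) via the rotation-vector (axis-angle) log map,
    which linearizes the rotational motion before dimensionality
    reduction is attempted on the samples.
    """
    rotvecs = Rotation.from_matrix(R_rel).as_rotvec()   # (N, 3)
    return np.hstack([rotvecs, t_rel])                  # (N, 6)
```

On such samples a plain linear PCA underfits the curved TMJ range of motion, which is the observation that motivates the paper's autoencoder-based Nonlinear PCA.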
We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and other levels (children) of the tree capture specific low-level details of the LFI. Our decompression algorithm corresponds to tree traversal operations and gathers the values stored at different levels of the tree. Furthermore, we use bounded integer sequence encoding which provides random access and fast hardware decoding for compressing the blocks of children of the tree. We have evaluated our method for 4D two-plane parameterized light fields. The compression rates vary from 0.08 - 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI are 1 - 3 microseconds per channel on an NVIDIA GTX-960 and we can render new views with a resolution of 512×512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations.
RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding
10,163
We present in this paper several improvements for computing shortest path maps using OpenGL shaders. The approach explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing shortest path maps which can efficiently be queried at run-time. Our improved method relies on Compute Shaders for improved performance, does not require any CPU pre-computation, and handles shortest path maps both with source points and with line segment sources. The produced path maps partition the input environment into regions sharing a same parent point along the shortest path to the closest source point or segment source. Our method produces paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps.
Improved Shortest Path Maps with GPU Shaders
10,164
We present new methods for uniformly sampling the solid angle subtended by a disk. To achieve this, we devise two novel area-preserving mappings from the unit square $[0,1]^2$ to a spherical ellipse (i.e. the projection of the disk onto the unit sphere). These mappings allow for low-variance stratified sampling of direct illumination from disk-shaped light sources. We discuss how to efficiently incorporate our methods into a production renderer and demonstrate the quality of our maps, showing significantly lower variance than previous work.
Area-preserving parameterizations for spherical ellipses
10,165
This paper describes a method for efficiently computing parallel transport of tangent vectors on curved surfaces, or more generally, any vector-valued data on a curved manifold. More precisely, it extends a vector field defined over any region to the rest of the domain via parallel transport along shortest geodesics. This basic operation enables fast, robust algorithms for extrapolating level set velocities, inverting the exponential map, computing geometric medians and Karcher/Fr\'{e}chet means of arbitrary distributions, constructing centroidal Voronoi diagrams, and finding consistently ordered landmarks. Rather than evaluate parallel transport by explicitly tracing geodesics, we show that it can be computed via a short-time heat flow involving the connection Laplacian. As a result, transport can be achieved by solving three prefactored linear systems, each akin to a standard Poisson problem. To implement the method we need only a discrete connection Laplacian, which we describe for a variety of geometric data structures (point clouds, polygon meshes, etc). We also study the numerical behavior of our method, showing empirically that it converges under refinement, and augment the construction of intrinsic Delaunay triangulations (iDT) so that they can be used in the context of tangent vector field processing.
The Vector Heat Method
10,166
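A sketch of the three-system structure described above, assuming the discrete operators are already built: a complex sparse connection Laplacian `Lconn` (tangent vectors encoded as one complex number per vertex), a scalar cotan Laplacian `L`, and a mass matrix `M`. In practice all three systems would be prefactored once:

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def vector_heat_transport(Lconn, L, M, Y0, t):
    """Parallel transport of a sparse vector field Y0 via short-time heat flow.

    One complex solve diffuses directions; two scalar solves produce a
    normalized magnitude (diffused magnitudes over a diffused indicator).
    """
    Y = spsolve((M + t * Lconn).tocsc(), M @ Y0)             # directions
    A = (M + t * L).tocsc()
    u = spsolve(A, M @ np.abs(Y0))                           # magnitudes
    phi = spsolve(A, M @ (np.abs(Y0) > 0).astype(float))     # indicator
    mag = u / np.maximum(phi, 1e-12)
    return mag * Y / np.maximum(np.abs(Y), 1e-12)
```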
We present a second-order gradient analysis of light transport in participating media and use this to develop an improved radiance caching algorithm for volumetric light transport. We adaptively sample and interpolate radiance from sparse points in the medium using a second-order Hessian-based error metric to determine when interpolation is appropriate. We derive our metric from each point's incoming light field, computed by using a proxy triangulation-based representation of the radiance reflected by the surrounding medium and geometry. We use this representation to efficiently compute the first- and second-order derivatives of the radiance at the cache points while accounting for occlusion changes. We also propose a self-contained two-dimensional model for light transport in media and use it to validate and analyze our approach, demonstrating that our method outperforms previous radiance caching algorithms both in terms of accurate derivative estimates and final radiance extrapolation. We generalize these findings to practical three-dimensional scenarios, where we show improved results while reducing computation time by up to 30\% compared to previous work.
Second-Order Occlusion-Aware Volumetric Radiance Caching
10,167
Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work we avoid these approaches, and propose a new technique that corrects errors in depth caused by MPI, requires no camera modifications, and corrects depth in just 10 milliseconds per frame. By observing that most MPI information can be expressed as a function of the captured depth, we pose MPI removal as a convolutional approach, and model it using a convolutional neural network. In particular, given that the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages: first, we use the encoder (convolution filters) to learn a suitable basis to represent corrupted range images; then, we train the decoder (deconvolution filters) to correct depth from the learned basis from synthetically generated scenes. This approach allows us to tackle the lack of reference data, by using a large-scale captured training set with corrupted depth to train the encoder, and a smaller synthetic training set with ground truth depth to train the corrector stage of the network, which we generate by using a physically-based, time-resolved rendering. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured incorrect depth as input.
DeepToF: Off-the-Shelf Real-Time Correction of Multipath Interference in Time-of-Flight Imaging
10,168
We present a novel deep-learning based approach to producing animator-centric speech motion curves that drive a JALI or standard FACS-based production face-rig, directly from input audio. Our three-stage Long Short-Term Memory (LSTM) network architecture is motivated by psycho-linguistic insights: segmenting speech audio into a stream of phonetic groups is sufficient for viseme construction; speech styles like mumbling or shouting are strongly correlated with the motion of facial landmarks; and animator style is encoded in viseme motion curve profiles. Our contribution is an automatic, real-time solution for lip synchronization from audio that integrates seamlessly into existing animation pipelines. We evaluate our results by: cross-validation against ground-truth data; animator critique and edits; visual comparison to recent deep-learning lip-synchronization solutions; and showing our approach to be resilient to diversity in speaker and language.
VisemeNet: Audio-Driven Animator-Centric Speech Animation
10,169
In this work we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend the steady-state beam radiance estimates to include the temporal domain. Then, we develop a progressive version of spatio-temporal density estimation that converges to the correct solution with finite memory requirements by iteratively averaging several realizations of independent renders with a progressively reduced kernel bandwidth. We derive the optimal convergence rates accounting for both space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
Progressive Transient Photon Beams
10,170
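The progressive estimator described above shrinks its kernel bandwidth over iterations so that averaging independent renders remains consistent. For intuition, the classic radius schedule from (2D, steady-state) progressive photon mapping is

```latex
r_{i+1}^{2} = \frac{i + \alpha}{i + 1}\, r_i^{2}, \qquad 0 < \alpha < 1,
```

which drives the bias to zero while keeping the accumulated variance bounded. The paper derives the analogous optimal rates for the transient setting, where a spatial and a temporal kernel shrink jointly, so the exponents differ from this steady-state schedule.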
Many Monte Carlo light transport simulations use multiple importance sampling (MIS) to weight between different path sampling strategies. We propose to use the path throughput to compute the MIS weights instead of the commonly used probability density per area measure. This new formulation is equivalent to the previous approach and results in the same weights as well as implementation. However, it is more intuitive and can help in understanding the effects of modifications to the weight function. We show some examples of required modifications which are often neglected in implementations. Also, our new perspective might help to derive MIS strategies for new samplers in the future.
Path Throughput Importance Weights
10,171
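For a concrete instance of the equivalence described above, consider the balance heuristic for a path with contribution $f$ sampled by strategies with densities $p_j$. Writing the throughput the path would have under strategy $j$ as $T_j = f / p_j$, the weight for strategy $i$ becomes

```latex
w_i = \frac{p_i}{\sum_j p_j}
    = \frac{f / T_i}{\sum_j f / T_j}
    = \frac{T_i^{-1}}{\sum_j T_j^{-1}},
```

so the common contribution $f$ cancels and the weight can be evaluated purely from per-strategy throughputs. (The same cancellation applies to the power heuristic, since raising $p_j$ to a power raises $T_j^{-1}$ to the same power.)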
We propose a novel shape representation useful for analyzing and processing shape collections, as well as for a variety of learning and inference tasks. Unlike most approaches that capture variability in a collection by using a template model or a base shape, we show that it is possible to construct a full shape representation by using the latent space induced by a functional map network, allowing us to represent shapes in the context of a collection without the bias induced by selecting a template shape. Key to our construction is a novel analysis of latent functional spaces, which shows that after proper regularization they can be endowed with a natural geometric structure, giving rise to a well-defined, stable and fully informative shape representation. We demonstrate the utility of our representation in shape analysis tasks, such as highlighting the most distorted shape parts in a collection or separating variability modes between shape classes. We further exploit our representation in learning applications by showing how it can naturally be used within deep learning and convolutional neural networks for shape classification or reconstruction, significantly outperforming existing point-based techniques.
Latent Space Representation for Shape Analysis and Learning
10,172
We propose a method for efficiently computing orientation-preserving and approximately continuous correspondences between non-rigid shapes, using the functional maps framework. We first show how orientation preservation can be formulated directly in the functional (spectral) domain without using landmark or region correspondences and without relying on external symmetry information. This allows us to obtain functional maps that promote orientation preservation, even when using descriptors, that are invariant to orientation changes. We then show how higher quality, approximately continuous and bijective pointwise correspondences can be obtained from initial functional maps by introducing a novel refinement technique that aims to simultaneously improve the maps both in the spectral and spatial domains. This leads to a general pipeline for computing correspondences between shapes that results in high-quality maps, while admitting an efficient optimization scheme. We show through extensive evaluation that our approach improves upon state-of-the-art results on challenging isometric and non-isometric correspondence benchmarks according to both measures of continuity and coverage as well as producing semantically meaningful correspondences as measured by the distance to ground truth maps.
Continuous and Orientation-preserving Correspondences via Functional Maps
10,173
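For context, here is the classic least-squares functional map baseline with a Laplacian-commutativity regularizer, the kind of system that orientation-preserving terms of the sort described above are added to; basis sizes and the regularization weight are arbitrary:

```python
import numpy as np

def functional_map(A, B, evals_src, evals_dst, lam=1e-2):
    """Solve  min_C |C A - B|^2 + lam |C D1 - D2 C|^2  row by row.

    A : (k1, n) descriptor coefficients in the source eigenbasis
    B : (k2, n) the same descriptors in the target eigenbasis
    evals_* : Laplace-Beltrami eigenvalues (diagonals of D1, D2)
    """
    AAt = A @ A.T
    C = np.empty((len(evals_dst), len(evals_src)))
    for i in range(len(evals_dst)):             # rows of C decouple
        Mreg = AAt + lam * np.diag((evals_src - evals_dst[i]) ** 2)
        C[i] = np.linalg.solve(Mreg, A @ B[i])
    return C
```

A pointwise map is then usually extracted by nearest-neighbor search between transformed basis columns, the stage that the paper's joint spectral/spatial refinement improves.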
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation.
Movie Editing and Cognitive Event Segmentation in Virtual Reality Video
10,174
Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, or novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction.
An intuitive control space for material appearance
10,175
We suggest a rasterization pipeline tailored towards the needs of head-mounted displays (HMD), where latency and field-of-view requirements pose new challenges beyond those of traditional desktop displays. Instead of rendering and warping for low latency, or using multiple passes for foveation, we show how both can be produced directly in a single perceptual rasterization pass. We do this with per-fragment ray-casting. This is enabled by derivations of tight space-time-fovea pixel bounds, introducing just enough flexibility for requisite geometric tests, but retaining most of the simplicity and efficiency of the traditional rasterization pipeline. To produce foveated images, we rasterize to an image with spatially varying pixel density. To reduce latency, we extend the image formation model to directly produce "rolling" images where the time at each pixel depends on its display location. Our approach overcomes limitations of warping with respect to disocclusions, object motion and view-dependent shading, as well as geometric aliasing artifacts in other foveated rendering techniques. A set of perceptual user studies demonstrates the efficacy of our approach.
Perceptual Rasterization for Head-mounted Display Image Synthesis
10,176
Memory and network bandwidth are decisive bottlenecks when handling high-resolution multidimensional data sets in visualization applications, and they increasingly demand suitable data compression strategies. We introduce a novel lossy compression algorithm for multidimensional data over regular grids. It leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to three dimensions and higher, together with bit-plane, run-length and arithmetic coding to compress the HOSVD transform coefficients. Our scheme degrades the data particularly smoothly and achieves lower mean squared error than other state-of-the-art algorithms at low-to-medium bit rates, as it is required in data archiving and management for visualization purposes. Further advantages of the proposed algorithm include very fine bit rate selection granularity and the ability to manipulate data at very small cost in the compression domain, for example to reconstruct filtered and/or subsampled versions of all (or selected parts) of the data set.
TTHRESH: Tensor Compression for Multidimensional Visual Data
10,177
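A compact NumPy sketch of the truncated HOSVD (Tucker) transform at the heart of the compressor above. The actual compression, the bit-plane, run-length and arithmetic coding of the core coefficients, is omitted:

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated higher-order SVD of a dense tensor T."""
    Us = []
    for mode, r in enumerate(ranks):            # factor matrix per mode
        unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        Us.append(U[:, :r])
    core = T
    for U in Us:                                # project onto factors;
        core = np.tensordot(core, U, axes=([0], [0]))
        # tensordot moves the contracted mode to the end, so after all
        # modes are processed the original mode order is restored
    return core, Us

def hosvd_reconstruct(core, Us):
    T = core
    for U in Us:
        T = np.tensordot(T, U, axes=([0], [1]))
    return T

vol = np.random.default_rng(0).random((32, 32, 32))
core, Us = hosvd_truncate(vol, (8, 8, 8))       # lossy: keep 8 per mode
approx = hosvd_reconstruct(core, Us)
```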
The Mumford-Shah functional approximates a function by a piecewise smooth function. Its versatility makes it ideal for tasks such as image segmentation or restoration, and it is now a widespread tool in image processing. Recent work has started to investigate its use for mesh segmentation and feature line detection, but we take the stance that the power of this functional could reach far beyond these tasks and earn a place in the everyday mesh processing toolbox. In this paper, we discretize an Ambrosio-Tortorelli approximation via a Discrete Exterior Calculus formulation. We show that, combined with a new shape optimization routine, several mesh processing problems can be readily tackled within the same framework. In particular, we illustrate applications in mesh denoising, normal map embossing, mesh inpainting and mesh segmentation.
Mumford-Shah Mesh Processing using the Ambrosio-Tortorelli Functional
10,178
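For reference, a standard form of the Ambrosio-Tortorelli approximation that the paper discretizes (constants and weighting conventions vary across the literature):

```latex
AT_\varepsilon(u, v) = \int_M \alpha\, (u - g)^2
    + v^{2}\, \lvert \nabla u \rvert^{2}
    + \lambda \left( \varepsilon\, \lvert \nabla v \rvert^{2}
    + \frac{(1 - v)^{2}}{4 \varepsilon} \right) \mathrm{d}A,
```

where $g$ is the input signal, $u$ its piecewise-smooth approximation, and $v$ is a smooth indicator function that drops toward $0$ along discontinuities; as $\varepsilon \to 0$ the functional $\Gamma$-converges to the Mumford-Shah functional.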
We introduce HexaLab: a WebGL application for real time visualization, exploration and assessment of hexahedral meshes. HexaLab can be used by simply opening www.hexalab.net. Our visualization tool targets both users and scholars. Practitioners who employ hexmeshes for Finite Element Analysis can readily check mesh quality and assess its usability for simulation. Researchers involved in mesh generation may use HexaLab to perform a detailed analysis of the mesh structure, isolating weak points and testing new solutions to improve on the state of the art and generate high quality images. To this end, we support a wide variety of visualization and volume inspection tools. Our system also offers immediate access to a repository containing all the publicly available meshes produced with the most recent techniques for hexmesh generation. We believe HexaLab, by providing a common tool for visualizing, assessing and distributing results, will push forward the recent drive for replicability in our scientific community.
HexaLab.net: an online viewer for hexahedral meshes
10,179
Sample patterns have many uses in Computer Graphics, ranging from procedural object placement over Monte Carlo image synthesis to non-photorealistic depiction. Their properties such as discrepancy, spectra, anisotropy, or progressiveness have been analyzed extensively. However, designing methods to produce sampling patterns with certain properties can require substantial hand-crafting effort, in coding, mathematical derivation, and compute time. In particular, there is no systematic way to derive the best sampling algorithm for a specific end-task. Tackling this issue, we suggest another level of abstraction: a toolkit to end-to-end optimize over all sampling methods to find the one producing user-prescribed properties such as discrepancy or a spectrum that best fit the end-task. A user simply implements the forward losses and the sampling method is found automatically -- without coding or mathematical derivation -- by making use of the back-propagation abilities of modern deep learning frameworks. While this optimization takes a long time, at deployment time the sampling method is quick to execute as iterated unstructured non-linear filtering using radial basis functions (RBFs) to represent high-dimensional kernels. Several important previous methods are special cases of this approach, which we compare to previous work and demonstrate its usefulness in several typical Computer Graphics applications. Finally, we propose sampling patterns with properties not shown before, such as high-dimensional blue noise with projective properties.
End-to-end Sampling Patterns
10,180
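A toy illustration of the optimize-the-sampler idea in PyTorch: here gradient descent runs directly on a point set, with a soft repulsion loss as a crude blue-noise surrogate. The paper instead optimizes the parameters of an RBF-kernel filtering *method*; only the idea of differentiating a loss on the pattern carries over:

```python
import torch

N = 256
pts = torch.rand(N, 2, requires_grad=True)
opt = torch.optim.Adam([pts], lr=1e-3)

for it in range(2000):
    d = torch.cdist(pts, pts) + 1e9 * torch.eye(N)   # mask self-distances
    loss = torch.exp(-(d / 0.05) ** 2).sum()         # penalize close pairs
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        pts.clamp_(0.0, 1.0)                         # stay in the unit square
```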
Coarse building mass models are now routinely generated at scales ranging from individual buildings through to whole cities. For example, they can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful semantic or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs to create plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output. We allow her to interactively specify style via images and manipulate style-adapted sliders to control style variability. We demonstrate our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of user studies and are found to be realistic, semantically-plausible, and style-consistent.
FrankenGAN: Guided Detail Synthesis for Building Mass-Models Using Style-Synchonized GANs
10,181
We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures and show reconstructed hair sequences from videos.
HairNet: Single-View Hair Reconstruction using Convolutional Neural Networks
10,182
To enhance depth perception and thus data comprehension, additional depth cues are often used in 3D visualizations of complex vascular structures. Accordingly, there is a variety of different approaches described in the literature, ranging from chromadepth color coding over depth of field to glyph-based encodings. Unfortunately, the majority of existing approaches suffers from the same problem. As these cues are directly applied to the geometry's surface, the display of additional information, such as other modalities or derived attributes, associated with a vessel is impaired. To overcome this limitation we propose Void Space Surfaces which utilize the empty space in between vessel branches to communicate depth and their relative positioning. This allows us to enhance the depth perception of vascular structures without interfering with the spatial data and potentially superimposed parameter information. Within this paper we introduce Void Space Surfaces, describe their technical realization, and show their application to various vessel trees. Moreover, we report the outcome of a user study which we have conducted in order to evaluate the perceptual impact of Void Space Surfaces as compared to existing vessel visualization techniques.
Void Space Surfaces to Convey Depth in Vessel Visualizations
10,183
This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training an RNN model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing for motion synthesis and control because it is compact, contact-aware, and can generate an unlimited number of natural-looking motions of arbitrary length. Our experiments show that motions generated by our deep learning model are highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, including random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models.
Combining Recurrent Neural Networks and Adversarial Training for Human Motion Synthesis and Control
10,184
For the classic aesthetic interpolation problem, we propose an entirely new idea: applying the golden section. To show how the golden section can be applied to interpolation methods, we present three examples: golden step interpolation, golden piecewise linear interpolation, and golden curve interpolation, which respectively apply the golden section to interpolation of degree 0, 1, and 2 in the plane. In each example, we present our basic ideas, the specific methods, comparative examples and applications, and relevant criteria. It is also worth mentioning that, for aesthetics, we propose two novel concepts: the golden cuspidal hill and the golden domed hill. This paper aims to provide a reference for combining the golden section with interpolation, and to stimulate further related research.
Golden interpolation
10,185
Designing real and virtual garments is becoming extremely demanding with rapidly changing fashion trends and increasing need for synthesizing realistic dressed digital humans for various applications. This necessitates creating simple and effective workflows to facilitate authoring sewing patterns customized to garment and target body shapes to achieve desired looks. Traditional workflow involves a trial-and-error procedure wherein a mannequin is draped to judge the resultant folds and the sewing pattern iteratively adjusted until the desired look is achieved. This requires time and experience. Instead, we present a data-driven approach wherein the user directly indicates desired fold patterns simply by sketching while our system estimates corresponding garment and body shape parameters at interactive rates. The recovered parameters can then be further edited and the updated draped garment previewed. Technically, we achieve this via a novel shared shape space that allows the user to seamlessly specify desired characteristics across multimodal input {\em without} requiring to run garment simulation at design time. We evaluate our approach qualitatively via a user study and quantitatively against test datasets, and demonstrate how our system can generate a rich quality of on-body garments targeted for a range of body shapes while achieving desired fold characteristics.
Learning a Shared Shape Space for Multimodal Garment Design
10,186
Rendering highly scattering participating media using brute force path tracing is a challenge. The diffusion approximation reduces the problem to solving a simple linear partial differential equation. Flux-limited diffusion introduces non-linearities to improve the accuracy of the solution, especially in low optical depth media, but introduces several ad-hoc assumptions. Both methods are based on a spherical harmonics expansion of the radiance field that is truncated after the first order. In this paper, we investigate the open question of whether going to higher spherical harmonic orders provides a viable improvement to these two approaches. Increasing the order introduces a set of complex coupled partial differential equations (the $P_N$-equations), whose growing number makes them difficult to work with at higher orders. We thus use a computer algebra framework for representing and manipulating the underlying mathematical equations, and use it to derive the real-valued $P_N$-equations for arbitrary orders. We further present a staggered-grid $P_N$-solver and generate its stencil code directly from the expression tree of the $P_N$-equations. Finally, we discuss how our method compares to prior work for various standard problems.
$P_N$-Method for Multiple Scattering in Participating Media
10,187
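The expansion underlying the $P_N$ hierarchy truncates the radiance field at spherical harmonic order $N$:

```latex
L(\mathbf{x}, \omega) \approx \sum_{l=0}^{N} \sum_{m=-l}^{l} L^{l,m}(\mathbf{x})\, Y^{l,m}(\omega).
```

Truncating at $N = 1$ recovers the diffusion-like models mentioned above; projecting the radiative transfer equation onto each basis function yields the coupled $P_N$-equations, whose number grows as $(N+1)^2$, which is why deriving and manipulating them symbolically becomes attractive.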
Our ability to understand data has always lagged behind our ability to collect it. This is particularly true in urban environments, where mass data capture is particularly valuable, but the objects captured are more varied, denser, and complex. To understand the structure and content of the environment, we must process the unstructured data into a structured form. BigSUR is an urban reconstruction algorithm which fuses GIS data, photogrammetric meshes, and street level photography to create clean, representative, semantically labelled geometry. However, we have identified three problems with the system: i) the street level photography is often difficult to acquire; ii) novel fa\c{c}ade styles often frustrate the detection of windows and doors; iii) the computational requirements of the system are large, processing a large city block can take up to 15 hours. In this paper we describe the process of simplifying and validating the BigSUR semantic reconstruction system. In particular, the requirement for street level images is removed, and greedy post-process profile assignment is introduced to accelerate the system. We accomplish this by modifying the binary integer programming (BIP) optimization, and re-evaluating the effects of various parameters. The new variant of the system is evaluated over a variety of urban areas. We objectively measure mean squared error (MSE) terms over the unstructured geometry, showing that BigSUR is able to accurately recover omissions from the input meshes. Further, we evaluate the ability of the system to label the walls and roofs of input meshes, concluding that our new BigSUR variant achieves highly accurate semantic labelling with shorter computational time and less input data.
Simplifying Urban Data Fusion with BigSUR
10,188
Many tasks in geometry processing are modeled as variational problems solved numerically using the finite element method. For solid shapes, this requires a volumetric discretization, such as a boundary conforming tetrahedral mesh. Unfortunately, tetrahedral meshing remains an open challenge and existing methods either struggle to conform to complex boundary surfaces or require manual intervention to prevent failure. Rather than create a single volumetric mesh for the entire shape, we advocate for solid geometry processing on deconstructed domains, where a large and complex shape is composed of overlapping solid subdomains. As each smaller and simpler part is now easier to tetrahedralize, the question becomes how to account for overlaps during problem modeling and how to couple solutions on each subdomain together algebraically. We explore how and why previous coupling methods fail, and propose a method that couples solid domains only along their boundary surfaces. We demonstrate the superiority of this method through empirical convergence tests and qualitative applications to solid geometry processing on a variety of popular second-order and fourth-order partial differential equations.
Solid Geometry Processing on Deconstructed Domains
10,189
Modeling relations between components of 3D objects is essential for many geometry editing tasks. Existing techniques commonly rely on labeled components, which requires substantial annotation effort and limits components to a dictionary of predefined semantic parts. We propose a novel framework based on neural networks that analyzes an uncurated collection of 3D models from the same category and learns two important types of semantic relations among full and partial shapes: complementarity and interchangeability. The former helps to identify which two partial shapes make a complete plausible object, and the latter indicates that interchanging two partial shapes from different objects preserves the object plausibility. Our key idea is to jointly encode both relations by embedding partial shapes as fuzzy sets in dual embedding spaces. We model these two relations as fuzzy set operations performed across the dual embedding spaces, and within each space, respectively. We demonstrate the utility of our method for various retrieval tasks that are commonly needed in geometric modeling interfaces.
Learning Fuzzy Set Representations of Partial Shapes on Dual Embedding Spaces
10,190
Kinetic approaches, i.e., methods based on the lattice Boltzmann equations, have long been recognized as an appealing alternative for solving incompressible Navier-Stokes equations in computational fluid dynamics. However, such approaches have not been widely adopted in graphics, mainly due to their underlying inaccuracy, instability and inflexibility. In this paper, we try to tackle these problems in order to make kinetic approaches practical for graphical applications. To achieve more accurate and stable simulations, we propose to employ the non-orthogonal central-moment-relaxation model, where we develop a novel adaptive relaxation method to retain both stability and accuracy in turbulent flows. To achieve flexibility, we propose a novel continuous-scale formulation that enables samples at arbitrary resolutions to easily communicate with each other in a more continuous sense and with loose geometrical constraints, which allows efficient and adaptive sample construction to better match the physical scale. Such a capability directly leads to an automatic sample construction which generates static and dynamic scales at initialization and during simulation, respectively. This effectively makes our method suitable for simulating turbulent flows with arbitrary geometrical boundaries. Our simulation results with applications to smoke animations show the benefits of our method, with comparisons for justification and verification.
Continuous-Scale Kinetic Fluid Simulation
10,191
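For readers unfamiliar with kinetic solvers, here is a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann step in NumPy: the textbook baseline whose stability and accuracy limits motivate the paper's non-orthogonal central-moment relaxation and adaptive sampling. Grid size and relaxation time below are arbitrary:

```python
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)
    uu = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

def lbm_step(f, tau=0.6):
    rho = f.sum(-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # collide (BGK)
    for q in range(9):                               # stream (periodic)
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
for _ in range(100):
    f = lbm_step(f)
```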
Assessing the quality of 3D printed models before they are printed remains a challenging problem, particularly when considering point cloud-based models. This paper introduces an approach to quality assessment, which uses techniques from the field of Topological Data Analysis (TDA) to compute a topological abstraction of the eventual printed model. Two main tools of TDA, Mapper and persistent homology, are used to analyze both the printed space and empty space created by the model. This abstraction enables investigating certain qualities of the model, with respect to print quality, and identifies potential anomalies that may appear in the final product.
Inferring Quality in Point Cloud-based 3D Printed Objects using Topological Data Analysis
10,192
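A minimal concrete instance of the persistent-homology side of the analysis above: 0-dimensional persistence of a point cloud tracks when connected components merge as a distance threshold grows, and a long-lived component can flag a part of the model that would print disconnected. This sketch covers only H0 with a Kruskal-style union-find, not Mapper or higher-dimensional homology.

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence under the Vietoris-Rips filtration:
    every component is born at scale 0 and dies at the edge length
    that merges it into another component."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    deaths = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(dist)  # one component dies at this scale
    return deaths                # a large final death hints at a disconnect

# Two well-separated blobs: the last merge happens at a large scale.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
print(max(h0_persistence(cloud)))  # roughly the gap between the blobs
```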
We present StyleBlit---an efficient example-based style transfer algorithm that can deliver high-quality stylized renderings in real-time on a single-core CPU. Our technique is especially suitable for style transfer applications that use local guidance, i.e., descriptive guiding channels containing large spatial variations. Local guidance encourages transfer of content from the source exemplar to the target image in a semantically meaningful way. Typical local guidance includes, e.g., normal values, texture coordinates or a displacement field. Contrary to previous style transfer techniques, our approach does not involve any computationally expensive optimization. We demonstrate that when local guidance is used, optimization-based techniques converge to solutions that can be well approximated by simple pixel-level operations. Inspired by this observation, we designed an algorithm that produces results visually similar to, if not better than, the state-of-the-art, and is several orders of magnitude faster. Our approach is suitable for scenarios with low computational budget such as games and mobile applications.
StyleBlit: Fast Example-Based Stylization with Local Guidance
10,193
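The claim that local guidance reduces style transfer to simple pixel-level operations can be illustrated by brute-force nearest-neighbor lookup in guidance space: for each target pixel, copy the style color of the source pixel with the most similar guidance value. This is a slow stand-in for StyleBlit's block-based transfer, with random toy data in place of real exemplars.

```python
import numpy as np

def guided_transfer(source_style, source_guide, target_guide):
    """Copy, for every target pixel, the style color of the source pixel
    whose guidance value (e.g., surface normal) is closest."""
    h, w, _ = target_guide.shape
    src_g = source_guide.reshape(-1, source_guide.shape[-1])
    tgt_g = target_guide.reshape(-1, target_guide.shape[-1])
    src_colors = source_style.reshape(-1, source_style.shape[-1])
    out = np.empty((h * w, source_style.shape[-1]))
    for i in range(0, len(tgt_g), 1024):  # chunked to bound memory use
        chunk = tgt_g[i:i + 1024]
        d = ((chunk[:, None] - src_g[None]) ** 2).sum(-1)
        out[i:i + 1024] = src_colors[d.argmin(1)]
    return out.reshape(h, w, -1)

# Toy data: guidance channels act as "normals", style is an RGB exemplar.
rng = np.random.default_rng(2)
style = rng.uniform(0, 1, (32, 32, 3))
guide_src = rng.uniform(-1, 1, (32, 32, 3))
guide_tgt = rng.uniform(-1, 1, (64, 64, 3))
print(guided_transfer(style, guide_src, guide_tgt).shape)  # (64, 64, 3)
```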
Viewpoint estimation from 2D rendered images is helpful for understanding how users select viewpoints for volume visualization and for guiding users to select better viewpoints based on previous visualizations. In this paper, we propose a viewpoint estimation method based on Convolutional Neural Networks (CNNs) for volume visualization. We first design an overfit-resistant image rendering pipeline to generate the training images with accurate viewpoint annotations, and then train a category-specific viewpoint classification network to estimate the viewpoint for the given rendered image. Our method can achieve good performance on images rendered with different transfer functions and rendering parameters in several categories. We apply our model to recover the viewpoints of the rendered images in publications, and show how experts look at volumes. We also introduce a CNN feature-based image similarity measure for similarity voting based viewpoint selection, which can suggest semantically meaningful optimal viewpoints for different volumes and transfer functions.
CNNs based Viewpoint Estimation for Volume Visualization
10,194
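A minimal sketch of the classification setup described above, assuming viewpoints are discretized into K bins; the bin count and architecture here are invented, and random tensors stand in for rendered volume images.

```python
import torch
import torch.nn as nn

# Stand-in for a category-specific viewpoint classifier: a rendered
# image goes in, one of K discretized viewpoint bins comes out.
K = 12  # e.g., azimuth in 30-degree bins (an assumption, not the paper's)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, K),
)

# One optimization step on a synthetic batch of "rendered" images.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, K, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
print(loss.item())
```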
The problem of polycube construction or deformation is an essential problem in computer graphics. In this paper, we present a robust, simple, efficient, and automatic algorithm to deform meshes of arbitrary shapes into their polycube counterparts. We derive a clear relationship between a mesh and its corresponding polycube shape. Our algorithm is edge-preserving, and works on surface meshes with or without boundaries. Our algorithm outperforms previous ones in speed, robustness, and efficiency, and our method is simple to implement. To demonstrate the robustness and effectiveness of our method, we apply it to hundreds of models of varying complexity and topology. We demonstrate that our method compares favorably to other state-of-the-art polycube deformation methods.
Robust Edge-Preserved Surface Mesh Polycube Deformation
10,195
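The first step in many polycube methods is labeling each face normal with the closest of the six axis directions; the edge-preserving deformation above is more involved than this, so the sketch below shows only that labeling step plus a simple axis-alignment residual.

```python
import numpy as np

AXES = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

def polycube_labels(normals):
    """Assign each unit face normal to the closest of the six axes."""
    return np.argmax(normals @ AXES.T, axis=1)

def axis_align_energy(normals, labels):
    # 1 - n . axis, summed: zero when every normal matches its axis.
    return np.sum(1.0 - np.einsum('ij,ij->i', normals, AXES[labels]))

# Toy normals: mostly +z with some tilt.
rng = np.random.default_rng(3)
n = rng.normal([0, 0, 1], 0.2, (100, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
labels = polycube_labels(n)
print(axis_align_energy(n, labels))
```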
In this paper, we introduce discrete Calabi flow to the graphics research community and present a novel conformal mesh parameterization algorithm. The Calabi energy has a succinct and explicit form, and its corresponding flow is conformal and convergent under certain conditions. Our method is based on the Calabi energy and Calabi flow, with a solid theoretical and mathematical foundation. We demonstrate our approach on dozens of models and compare it with other related flow based methods, such as the well-known Ricci flow and CETM. Our experiments show that the performance of our algorithm is comparable to that of other methods. The discrete Calabi flow in our method provides another perspective on conformal flow and conformal parameterization.
Conformal Mesh Parameterization Using Discrete Calabi Flow
10,196
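For reference, one standard discrete formulation of the quantities named above; the paper's exact operators and conventions may differ. Here u_i are per-vertex log conformal factors, K_i is the angle-defect Gaussian curvature at an interior vertex, K̄_i the prescribed target curvature, and Δ a discrete (e.g., cotangent) Laplacian.

```latex
K_i \;=\; 2\pi \;-\; \sum_{jk \in \mathrm{star}(i)} \theta_i^{jk},
\qquad
E_{\mathrm{Calabi}}(u) \;=\; \sum_i \bigl(K_i - \bar{K}_i\bigr)^2,
\qquad
\frac{du_i}{dt} \;=\; \Delta\bigl(K - \bar{K}\bigr)_i .
```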
We present a generative neural network which enables us to generate plausible 3D indoor scenes in large quantities and varieties, easily and highly efficiently. Our key observation is that indoor scene structures are inherently hierarchical. Hence, our network is not convolutional; it is a recursive neural network or RvNN. Using a dataset of annotated scene hierarchies, we train a variational recursive autoencoder, or RvNN-VAE, which performs scene object grouping during its encoding phase and scene generation during decoding. Specifically, a set of encoders are recursively applied to group 3D objects based on support, surround, and co-occurrence relations in a scene, encoding information about object spatial properties, semantics, and their relative positioning with respect to other objects in the hierarchy. By training a variational autoencoder (VAE), the resulting fixed-length codes roughly follow a Gaussian distribution. A novel 3D scene can be generated hierarchically by the decoder from a randomly sampled code from the learned distribution. We coin our method GRAINS, for Generative Recursive Autoencoders for INdoor Scenes. We demonstrate the capability of GRAINS to generate plausible and diverse 3D indoor scenes and compare with existing methods for 3D scene synthesis. We show applications of GRAINS including 3D scene modeling from 2D layouts, scene editing, and semantic scene segmentation via PointNet whose performance is boosted by the large quantity and variety of 3D scenes generated by our method.
GRAINS: Generative Recursive Autoencoders for INdoor Scenes
10,197
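A toy version of the recursive encoding idea above: leaves carry per-object codes and a shared merger collapses children into parent codes, yielding one fixed-length code per scene. GRAINS uses relation-specific encoders and a VAE on top, none of which appears in this sketch; the code size and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 32                               # latent code size (an assumption)
W = rng.normal(0, 0.1, (D, 2 * D))   # shared merge weights
b = np.zeros(D)

def encode(node):
    """Recursively encode a scene hierarchy: a leaf is an object code,
    an internal node merges its two children with a shared network."""
    if isinstance(node, np.ndarray):  # leaf: per-object feature vector
        return node
    left, right = (encode(c) for c in node)
    return np.tanh(W @ np.concatenate([left, right]) + b)

# A tiny hierarchy: ((bed, nightstand), lamp)
bed, nightstand, lamp = (rng.normal(0, 1, D) for _ in range(3))
root_code = encode(((bed, nightstand), lamp))
print(root_code.shape)  # (32,): one fixed-length code for the whole scene
```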
Time-based media (videos, synthetic animations, and virtual reality experiences) are used for communication, in applications such as manufacturers explaining the operation of a new appliance to consumers and scientists illustrating the basis of a new conclusion. However, authoring time-based media that are effective and personalized for the viewer remains a challenge. We introduce AniCode, a novel framework for authoring and consuming time-based media. An author encodes a video animation in a printed code, and affixes the code to an object. A consumer uses a mobile application to capture an image of the object and code, and to generate a video presentation on the fly. Importantly, AniCode presents the video personalized in the consumer's visual context. Our system is designed to be low cost and easy to use. By not requiring an internet connection, and through animations that decode correctly only in the intended context, AniCode enhances privacy of communication using time-based media. Animation schemes in the system include a series of 2D and 3D geometric transformations, color transformation, and annotation. We demonstrate the AniCode framework with sample applications from a wide range of domains, including product "how to" examples, cultural heritage, education, creative art, and design. We evaluate the ease of use and effectiveness of our system with a user study.
AniCode: Authoring Coded Artifacts for Network-Free Personalized Animations
10,198
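A sketch of the replay side of such a system, under invented assumptions: suppose the printed code decodes to a short list of 2D transform opcodes, which the app interpolates into animation frames applied to the captured region. The opcodes and encoding below are hypothetical, not AniCode's actual format.

```python
import numpy as np

# Hypothetical decoded "printed code": a compact (opcode, parameters) list.
code = [("translate", (2.0, 0.0)), ("rotate", np.pi / 4), ("scale", 1.5)]

def step_matrix(op, p):
    """Homogeneous 3x3 matrix for one decoded animation step."""
    if op == "translate":
        return np.array([[1, 0, p[0]], [0, 1, p[1]], [0, 0, 1]], float)
    if op == "rotate":
        c, s = np.cos(p), np.sin(p)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)
    if op == "scale":
        return np.array([[p, 0, 0], [0, p, 0], [0, 0, 1]], float)

def play(points, code, frames_per_step=10):
    """Yield animation frames by interpolating each transform in turn."""
    pts = np.column_stack([points, np.ones(len(points))])
    for op, p in code:
        M = step_matrix(op, p)
        for t in np.linspace(0, 1, frames_per_step):
            Mt = np.eye(3) * (1 - t) + M * t  # naive per-step matrix blend
            yield (pts @ Mt.T)[:, :2]
        pts = pts @ M.T  # commit the completed step

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
frames = list(play(square, code))
print(len(frames), frames[-1])  # 30 frames; final transformed square
```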
Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life, while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method requires neither precomputation of the whole camera-view space nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, being transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer, and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings.
On-the-Fly Power-Aware Rendering
10,199
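The runtime selection loop described above can be sketched as a small constrained discrete optimization: among candidate rendering configurations, pick the one with the lowest estimated error whose predicted power fits the budget. Both models below are made-up stand-ins for the paper's learned power prediction model and runtime quality-error estimate.

```python
# Hypothetical rendering configurations: (resolution scale, shading level).
configs = [(r, s) for r in (0.5, 0.75, 1.0) for s in (1, 2, 3)]

def predict_power(cfg):
    # Stand-in power model: cost grows with resolution and shading
    # complexity (coefficients are made up for illustration).
    r, s = cfg
    return 1.0 + 4.0 * r**2 + 0.8 * s

def estimate_error(cfg):
    # Stand-in quality-error estimate: error shrinks as resolution
    # and shading level grow.
    r, s = cfg
    return 1.0 / (r * s)

def best_config(budget):
    """Lowest estimated error among configs within the power budget;
    fall back to the cheapest config if nothing fits."""
    feasible = [c for c in configs if predict_power(c) <= budget]
    return (min(feasible, key=estimate_error) if feasible
            else min(configs, key=predict_power))

for budget in (3.0, 5.0, 8.0):
    print(budget, best_config(budget))
```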