We present LCollision, a learning-based method that synthesizes collision-free 3D human poses. At the crux of our approach is a novel deep architecture that simultaneously decodes new human poses from the latent space and predicts colliding body parts. These two components of our architecture are used as the objective function and surrogate hard constraints in a constrained optimization for collision-free human pose generation. A novel aspect of our approach is the use of a bilevel autoencoder that decomposes whole-body collisions into groups of collisions between localized body parts. By solving the constrained optimizations, we show that a significant amount of collision artifacts can be resolved. Furthermore, in a large test set of $2.5\times 10^6$ randomized poses from SCAPE, our architecture achieves a collision-prediction accuracy of $94.1\%$ with $80\times$ speedup over exact collision detection algorithms. To the best of our knowledge, LCollision is the first approach that accelerates collision detection and resolves penetrations using a neural network.
LCollision: Fast Generation of Collision-Free Human Poses using Learned Non-Penetration Constraints
10,500
We present Diffusion Structures, a family of resilient shell structures generated from the eigenfunctions of a pair of novel diffusion operators. This approach is based on Michell's theorem but avoids expensive non-linear optimization: the computation amounts to constructing and solving two generalized eigenvalue problems to generate two sets of stripe patterns. This structure family can be generated quickly and navigated in real time using a small number of tunable parameters.
Diffusion Structures for Architectural Stripe Pattern Generation
10,501
This paper describes a pipeline built with open-source tools for interpolating 3D facial expressions taken from images. The presented approach allows anyone to create 3D face animations from two input photos: one of the starting facial expression and the other of the final facial expression. Given the input photos, corresponding 3D face models are constructed and texture-mapped using the photos as textures aligned with facial features. Animations are then generated by morphing the models, interpolating both their geometries and their textures. This work was performed as an MS project at the University of California, Merced.
Assembling a Pipeline for 3D Face Interpolation
10,502
This paper proposes a novel framework to evaluate fluid simulation methods based on crowd-sourced user studies, which robustly gather large numbers of opinions. The key idea for a robust and reliable evaluation is to use a reference video from a carefully selected real-world setup in the user study. By conducting a series of controlled user studies and comparing their evaluation results, we observe various factors that affect the perceptual evaluation. Our data show that the availability of a reference video makes the evaluation consistent. We introduce this approach as a visual accuracy metric for computing scores of simulation methods. As an application of the proposed framework, a variety of popular simulation methods are evaluated.
Perceptual Evaluation of Liquid Simulation Methods
10,503
Recently there has been a significant effort to automate UV mapping, the process of mapping three-dimensional surfaces to the UV space while minimizing distortion and seam length. Although state-of-the-art methods such as Autocuts and OptCuts address this task via energy-minimization approaches, they fail to produce semantic seam styles, an essential factor for professional artists. The recent emergence of Graph Neural Networks (GNNs), and the fact that a mesh can be represented as a particular form of a graph, has opened a bridge to novel graph learning-based solutions in the computer graphics domain. In this work, we use the power of supervised GNNs for the first time to propose a fully automated UV mapping framework that enables users to replicate their desired seam styles while reducing distortion and seam length. To this end, we provide augmentation and decimation tools that enable artists to create their dataset and train the network to produce their desired seam style. We provide a complementary post-processing approach, based on graph algorithms, that reduces distortion by refining low-confidence seam predictions and reduces seam length (or the number of shells in our supervised case) using a skeletonization method.
GraphSeam: Supervised Graph Learning Framework for Semantic UV Mapping
10,504
Many real-world hand-crafted objects are decorated with elements that are packed onto the object's surface and deformed to cover it as much as possible. Examples are artisanal ceramics and metal jewelry. Inspired by these objects, we present a method to enrich surfaces with packed volumetric decorations. Our algorithm works by first determining the locations in which to add the decorative elements and then removing the non-physical overlap between them while preserving the decoration volume. For the placement, we support several strategies depending on the desired overall motif. To remove the overlap, we use an approach based on implicit deformable models creating the qualitative effect of plastic warping while avoiding expensive and hard-to-control physical simulations. Our decorative elements can be used to enhance virtual surfaces, as well as 3D-printed pieces, by assembling the decorations onto real surfaces to obtain tangible reproductions.
PAVEL: Decorative Patterns with Packed Volumetric Elements
10,505
We propose Blue Noise Plots, two-dimensional dot plots that depict data points of univariate data sets. While one-dimensional strip plots are often used to depict such data, one of their main problems is visual clutter resulting from overlap. To reduce this overlap, jitter plots were introduced, whereby an additional, non-encoding plot dimension is introduced, along which the dots representing the data points are randomly perturbed. Unfortunately, this randomness can suggest non-existent clusters and often leads to visually unappealing plots in which overlap might still occur. To overcome these shortcomings, we introduce Blue Noise Plots, where random jitter along the non-encoding plot dimension is replaced by optimizing all dots to keep a minimum distance in 2D, i.e., blue noise. We evaluate the effectiveness as well as the aesthetics of Blue Noise Plots through both a quantitative and a qualitative user study. The Python implementation of Blue Noise Plots is available here.
Blue Noise Plots
10,506
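The core idea of the Blue Noise Plots abstract above, replacing random jitter by an optimization that keeps all dots at a minimum 2D distance while moving them only along the non-encoding axis, can be illustrated with a naive pairwise-repulsion relaxation. This is a hypothetical sketch, not the paper's optimizer; the function name and parameters are invented for illustration.

```python
import random

def blue_noise_jitter(xs, r=0.08, iters=200, seed=0):
    """Place dots for 1D data xs at (x, y), y in [0, 1], pushing dots apart
    until they keep roughly a minimum 2D distance r (a blue-noise layout).
    Naive pairwise-repulsion sketch; only y (the non-encoding axis) moves,
    so the encoded x values are never altered."""
    rng = random.Random(seed)
    ys = [rng.random() for _ in xs]
    for _ in range(iters):
        for i in range(len(xs)):
            for j in range(len(xs)):
                if i == j:
                    continue
                dx, dy = xs[i] - xs[j], ys[i] - ys[j]
                d = (dx * dx + dy * dy) ** 0.5
                if 1e-12 < d < r:
                    # push dot i away from dot j along y only
                    step = dy / d if abs(dy) > 1e-12 else 1.0
                    ys[i] = min(1.0, max(0.0, ys[i] + 0.5 * (r - d) * step))
    return ys
```

A real implementation would use a spatial grid instead of the O(n²) inner loops and a proper energy minimization, but the jitter-as-optimization idea is the same.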
Point cloud upsampling is vital for the quality of the mesh in three-dimensional reconstruction. Recent research on point cloud upsampling has achieved great success due to the development of deep learning. However, the existing methods regard point cloud upsampling of different scale factors as independent tasks. Thus, the methods need to train a specific model for each scale factor, which is both inefficient and impractical for storage and computation in real applications. To address this limitation, in this work, we propose a novel method called ``Meta-PU", the first to support point cloud upsampling of arbitrary scale factors with a single model. In the Meta-PU method, besides the backbone network consisting of residual graph convolution (RGC) blocks, a meta-subnetwork is learned to adjust the weights of the RGC blocks dynamically, and a farthest sampling block is adopted to sample different numbers of points. Together, these two blocks enable our Meta-PU to continuously upsample the point cloud with arbitrary scale factors by using only a single model. In addition, the experiments reveal that training on multiple scales simultaneously is mutually beneficial. Thus, Meta-PU even outperforms the existing methods trained for a specific scale factor only.
Meta-PU: An Arbitrary-Scale Upsampling Network for Point Cloud
10,507
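The Meta-PU abstract above mentions a farthest sampling block used to draw different numbers of output points. Farthest-point sampling is a standard greedy procedure; the sketch below is a plain-Python illustration of that generic block under the assumption of Euclidean distances, not the paper's implementation.

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_sampling(points, k):
    """Greedy farthest-point sampling: repeatedly pick the point farthest
    from the already-selected set, yielding k well-spread indices."""
    selected = [0]  # start from the first point (choice is arbitrary)
    d2 = [dist2(p, points[0]) for p in points]  # distance to selected set
    for _ in range(k - 1):
        idx = max(range(len(points)), key=lambda i: d2[i])
        selected.append(idx)
        for i, p in enumerate(points):
            d2[i] = min(d2[i], dist2(p, points[idx]))
    return selected
```

For example, sampling 4 of the 5 points `[(0,0), (1,0), (0,1), (1,1), (0.5,0.5)]` keeps the four corners and drops the center.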
The standardized sizes used in the garment industry do not cover the range of individual differences in body shape for most people, leading to ill-fitting clothes, high return rates and overproduction. Recent research efforts in both industry and academia therefore focus on virtual try-on and on-demand fabrication of individually fitting garments. We propose an interactive design tool for creating custom-fit garments based on 3D body scans of the intended wearer. Our method explicitly incorporates transitions between various body poses to ensure a better fit and freedom of movement. The core of our method focuses on tools to create a 3D garment shape directly on an avatar without an underlying sewing pattern, and on the adjustment of that garment's rest shape while interpolating and moving through the different input poses. We alternate between cloth simulation and rest shape adjustment based on stretch to achieve the final shape of the garment. At any step in the real-time process, we allow for interactive changes to the garment. Once the garment shape is finalized for production, established techniques can be used to parameterize it into a 2D sewing pattern or transform it into a knitting pattern.
Designing Personalized Garments with Body Movement
10,508
Bézier curves provide the basic building blocks of graphic design in 2D. In this paper, we port Bézier curves to manifolds. We support the interactive drawing and editing of Bézier splines on manifold meshes with millions of triangles, relying only on repeated manifold averages. We show that direct extensions of the De Casteljau and Bernstein evaluation algorithms to the manifold setting are fragile and prone to discontinuities when control polygons become large. Conversely, approaches based on subdivision are robust and can be implemented efficiently. We define Bézier curves on manifolds by extending both the recursive De Casteljau bisection and a new open-uniform Lane-Riesenfeld subdivision scheme, which provide curves with different degrees of smoothness. For both schemes, we present algorithms for curve tracing, point evaluation, and point insertion. We test our algorithms for robustness and performance on all watertight, manifold models from the Thingi10k repository, without any pre-processing and with random control points. For interactive editing, we port all the basic user-interface interactions found in 2D tools directly to the mesh. We also support mapping complex SVG drawings to the mesh and editing them interactively.
B/Surf: Interactive Bézier Splines on Surfaces
10,509
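The De Casteljau recursion the abstract above builds on evaluates a Bézier curve by repeated averaging of control points; on a surface, the linear average is replaced by a manifold (geodesic) average while the recursion itself is unchanged. A Euclidean sketch of the recursion:

```python
def lerp(a, b, t):
    """Linear average of two points; on a mesh this would be a geodesic
    average instead, which is the only change needed in the recursion."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeatedly averaging
    adjacent control points until one point remains."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]
```

For a cubic with control points `[(0,0), (0,1), (1,1), (1,0)]`, the curve interpolates the endpoints and passes through `(0.5, 0.75)` at `t = 0.5`.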
Conformal Geometric Algebra (CGA) is a framework that allows the representation of objects, such as points, planes and spheres, and of deformations, such as translations, rotations and dilations, as uniform vectors called multivectors. In this work, we demonstrate the merits of multivector usage with a novel, integrated rigged character simulation framework based on CGA. In such a framework, and for the first time, one may perform real-time cuts and tears as well as drill holes on a rigged 3D model. These operations can be performed before and/or after model animation, while maintaining deformation topology. Moreover, our framework permits generation of intermediate keyframes on the fly based on user input, apart from the frames provided in the model data. We are motivated to use CGA as it is the lowest-dimension extension of dual-quaternion algebra that amends the shortcomings of the majority of existing animation and deformation techniques. Specifically, we no longer need to maintain objects of multiple algebras, such as matrices, quaternions and dual-quaternions, and constantly transmute between them, and we can effortlessly apply dilations. Using such an all-in-one geometric framework allows for better maintenance and optimization and enables easier interpolation and application of all native deformations. Furthermore, we present these three novel algorithms in a single CGA representation that enables cutting, tearing and drilling of the input rigged model, where the output model can be further re-deformed at interactive frame rates. These close-to-real-time cut, tear and drill algorithms can enable a new suite of applications, especially within the scope of medical VR simulation.
An All-In-One Geometric Algorithm for Cutting, Tearing, and Drilling Deformable Models
10,510
We present an in-depth analysis of the sources of variance in state-of-the-art unbiased volumetric transmittance estimators, and propose several new methods for improving their efficiency. These combine to produce a single estimator that is universally optimal relative to prior work, with up to several orders of magnitude lower variance at the same cost, and has zero variance for any ray with non-varying extinction. We first reduce the variance of truncated power-series estimators using a novel efficient application of U-statistics. We then greatly reduce the average expansion order of the power series and redistribute density evaluations to filter the optical depth estimates with an equidistant sampling comb. Combined with the use of an online control variate built from a sampled mean density estimate, the resulting estimator effectively performs ray marching most of the time while using rarely-sampled higher order terms to correct the bias.
An unbiased ray-marching transmittance estimator
10,511
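For context on the family of estimators the transmittance abstract above analyzes: ratio tracking is a classic unbiased estimator in which tentative collisions are sampled from a constant majorant extinction, and each collision multiplies the running estimate by one minus the ratio of true to majorant extinction. This is a textbook sketch of that baseline, not the paper's improved estimator.

```python
import math
import random

def ratio_tracking(sigma, sigma_bar, d, rng):
    """One unbiased transmittance sample along a ray of length d through a
    medium with extinction sigma(t), given a majorant sigma_bar >= sigma.
    Tentative collisions are drawn from an exponential with rate sigma_bar;
    each one scales the estimate by (1 - sigma/sigma_bar)."""
    T, t = 1.0, 0.0
    while True:
        t -= math.log(1.0 - rng.random()) / sigma_bar  # next tentative collision
        if t >= d:
            return T
        T *= 1.0 - sigma(t) / sigma_bar
```

Averaging many samples converges to the true transmittance; for a constant extinction of 0.5 over a unit-length ray, the mean approaches exp(-0.5) ≈ 0.607.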
Automated floorplanning or space layout planning has been a long-standing NP-hard problem in the field of computer-aided design, with applications in integrated circuits, architecture, urbanism, and operational research. In this paper, we introduce GenFloor, an interactive design system that takes geometrical, topological, and performance goals and constraints as input and provides optimized spatial design solutions as output. As part of our work, we propose three novel permutation methods for existing space layout graph representations, namely O-Tree and B*-Tree representations. We implement our proposed floorplanning methods as a package for Dynamo, a visual programming tool, with a custom GUI and additional evaluation functionalities to facilitate designers in their generative design workflow. Furthermore, we illustrate the performance of GenFloor in two sets of case-study experiments for residential floorplanning tasks by (a) measuring the ability of the proposed system to find a known optimal solution, and (b) observing how the system generates diverse floorplans for a fixed residential design problem. Our results indicate that convergence to the global optimum is achieved while offering a diverse set of residential floorplan solutions corresponding to the Pareto optima of the solution landscape.
GenFloor: Interactive Generative Space Layout System via Encoded Tree Graphs
10,512
We present a hybrid-driven trajectory prediction method based on group emotion. Data-driven and model-driven methods are combined to balance the controllability, generality, and efficiency of the method while simulating more realistic crowd movements. The hybrid approach improves the reliability of the calculated results based on real crowd data and ensures the controllability of the model; it reduces the dependence of our model on real data and realizes the complementary advantages of the two kinds of methods. In addition, we divide the crowd into groups based on human social relations, so our method can calculate movements at different scales. We predict individual movement trajectories from the group trajectories, fully considering the influence of the group movement state on individual movements. We also propose a group emotion calculation method, and our method further considers the effect of group emotion on crowd movements.
Hybrid-driven Trajectory Prediction Based on Group Emotion
10,513
Physical representations of data offer physical and spatial ways of looking at, navigating, and interacting with data. While digital fabrication has facilitated the creation of objects with data-driven geometry, rendering data as a physically fabricated object is still a daunting leap for many physicalization designers. Rendering in the scope of this research refers to the back-and-forth process from digital design to digital fabrication and its specific challenges. We developed a corpus of example data physicalizations from research literature and physicalization practice. This survey then unpacks the "rendering" phase of the extended InfoVis pipeline in greater detail through these examples, with the aim of identifying ways that researchers, artists, and industry practitioners "render" physicalizations using digital design and fabrication tools.
Data to Physicalization: A Survey of the Physical Rendering Process
10,514
Building systems capable of replicating global illumination models at interactive frame rates has long been one of the toughest conundrums facing computer graphics researchers. Voxel Cone Tracing, as proposed by Cyril Crassin et al. in 2011, makes use of mipmapped 3D textures containing a voxelized representation of an environment's direct light component to trace diffuse, specular and occlusion cones in linear time, extrapolating the indirect light a surface fragment emits towards a given photo-receptor. Seemingly providing a well-disposed balance between performance and physical fidelity, the algorithm is examined in this thesis both on its theoretical side, on the basis of the rendering equation, and on its practical side, in the context of a self-implemented, OpenGL-based variant. Whether it can compete with long-standing alternatives such as radiosity and ray tracing is determined in the subsequent evaluation.
Real-Time Global Illumination Using OpenGL And Voxel Cone Tracing
10,515
Biomarkers play an important role in early detection and intervention in Alzheimer's disease (AD). However, obtaining effective biomarkers for AD is still a big challenge. In this work, we propose to use the worst transportation cost as a univariate biomarker to index cortical morphometry for tracking AD progression. The worst transportation (WT) aims to find the least economical way to transport one measure to the other, in contrast to the optimal transportation (OT), which finds the most economical way between measures. To compute the WT cost, we generalize the Brenier theorem for the OT map to the WT map, and show that the WT map is the gradient of a concave function satisfying the Monge-Ampère equation. We also develop an efficient algorithm to compute the WT map based on computational geometry. We apply the algorithm to analyze cortical shape differences between individuals with dementia due to AD and normally aging individuals. The experimental results reveal the effectiveness of our proposed method, which yields better statistical performance than other competing methods, including the OT.
Cortical Morphometry Analysis based on Worst Transportation Theory
10,516
We propose to augment standard grid-based fluid solvers with pointwise divergence-free velocity interpolation, thereby ensuring exact incompressibility down to the sub-cell level. Our method takes as input a discretely divergence-free velocity field generated by a staggered grid pressure projection, and first recovers a corresponding discrete vector potential. Instead of solving a costly vector Poisson problem for the potential, we develop a fast parallel sweeping strategy to find a candidate potential and apply a gauge transformation to enforce the Coulomb gauge condition and thereby make it numerically smooth. Interpolating this discrete potential generates a pointwise vector potential whose analytical curl is a pointwise incompressible velocity field. Our method further supports irregular solid geometry through the use of level set-based cut-cells and a novel Curl-Noise-inspired potential ramping procedure that simultaneously offers strictly non-penetrating velocities and incompressibility. Experimental comparisons demonstrate that the vector potential reconstruction procedure at the heart of our approach is consistently faster than prior such reconstruction schemes, especially those that solve vector Poisson problems. Moreover, in exchange for its modest extra cost, our overall Curl-Flow framework produces significantly improved particle trajectories that closely respect irregular obstacles, do not suffer from spurious sources or sinks, and yield superior particle distributions over time.
Curl-Flow: Boundary-Respecting Pointwise Incompressible Velocity Interpolation for Grid-Based Fluids
10,517
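The key property the Curl-Flow abstract above relies on, that the analytical curl of a (smooth) potential is pointwise divergence-free, is easy to verify in 2D, where the vector potential reduces to a scalar stream function. The field below is a hypothetical test case chosen for illustration, not the paper's interpolant.

```python
import math

def velocity(x, y):
    """Pointwise divergence-free 2D velocity taken as the 2D curl of a
    stream function psi = sin(x)*sin(y): u = d(psi)/dy, v = -d(psi)/dx.
    Any field constructed this way is analytically incompressible."""
    u = math.sin(x) * math.cos(y)    # d psi / dy
    v = -math.cos(x) * math.sin(y)   # -d psi / dx
    return u, v

def divergence(f, x, y, h=1e-5):
    """Central-difference estimate of du/dx + dv/dy at (x, y)."""
    (u1, _), (u0, _) = f(x + h, y), f(x - h, y)
    (_, v1), (_, v0) = f(x, y + h), f(x, y - h)
    return (u1 - u0) / (2 * h) + (v1 - v0) / (2 * h)
```

The numerical divergence vanishes to finite-difference precision everywhere, which is exactly the sub-cell incompressibility the interpolation scheme targets (in 3D, with velocity = curl of a vector potential).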
We present a suite of techniques for jointly optimizing triangle meshes and shading models to match the appearance of reference scenes. This capability has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations. We follow and extend the classic analysis-by-synthesis family of techniques: enabled by a highly efficient differentiable renderer and modern nonlinear optimization algorithms, our results are driven to minimize the image-space difference to the target scene when rendered in similar viewing and lighting conditions. As the only signals driving the optimization are differences in rendered images, the approach is highly general and versatile: it easily supports many different forward rendering models such as normal mapping, spatially-varying BRDFs, displacement mapping, etc. Supervision through images only is also key to the ability to easily convert between rendering systems and scene representations. We output triangle meshes with textured materials to ensure that the models render efficiently on modern graphics hardware and benefit from, e.g., hardware-accelerated rasterization, ray tracing, and filtered texture lookups. Our system is integrated in a small Python code base, and can be applied at high resolutions and on large models. We describe several use cases, including mesh decimation, level of detail generation, seamless mesh filtering and approximations of aggregate geometry.
Appearance-Driven Automatic 3D Model Simplification
10,518
Secondary animation effects are essential for liveliness. We propose a simple, real-time solution for adding them on top of standard skinning, enabling artist-driven stylization of skeletal motion. Our method takes a standard skeleton animation as input, along with a skin mesh and rig weights. It then derives per-vertex deformations from the different linear and angular velocities along the skeletal hierarchy. We highlight two specific applications of this general framework, namely the cartoon-like "squashy" and "floppy" effects, achieved from specific combinations of velocity terms. As our results show, combining these effects makes it possible to mimic, enhance and stylize physical-looking behaviours within a standard animation pipeline, for arbitrary skinned characters. Interactive on the CPU, our method also allows for a GPU implementation, yielding real-time performance even on large meshes. Animator control is supported through a simple interface toolkit that enables refining the desired type and magnitude of deformation at relevant vertices by simply painting weights. The resulting rigged character automatically responds to new skeletal animation, without further input.
Velocity Skinning for Real-time Stylized Skeletal Animation
10,519
Modern 3D printing technologies and the upcoming mass-customization paradigm call for efficient methods to produce and distribute arbitrarily-shaped 3D objects. This paper introduces an original algorithm to split a 3D model into parts that can be efficiently packed within a box, with the objective of reassembling them after delivery. The first step consists in creating a hierarchy of possible parts that can be tightly packed within their minimum bounding boxes. In a second step, the hierarchy is exploited to extract the (single) segmentation whose parts can be most tightly packed. The fact that shape packing is an NP-complete problem justifies the use of heuristics and approximated solutions whose efficacy and efficiency must be assessed. Extensive experimentation demonstrates that our algorithm produces satisfactory results for arbitrarily-shaped objects while being comparable to ad-hoc methods when specific shapes are considered.
Shapes In A Box -- Disassembling 3D objects for efficient packing and fabrication
10,520
While commodity GPUs provide a continuously growing range of features and sophisticated methods for accelerating compute jobs, many state-of-the-art solutions for point cloud rendering still rely on the provided point primitives (GL_POINTS, POINTLIST, ...) of graphics APIs for image synthesis. In this paper, we present several compute-based point cloud rendering approaches that outperform the hardware pipeline by up to an order of magnitude and achieve significantly better frame times than previous compute-based methods. Beyond basic closest-point rendering, we also introduce a fast, high-quality variant to reduce aliasing. We present and evaluate several variants of our proposed methods with different flavors of optimization, in order to ensure their applicability and achieve optimal performance on a range of platforms and architectures with varying support for novel GPU hardware features. During our experiments, the observed peak performance was reached rendering 796 million points (12.7GB) at rates of 62 to 64 frames per second (50 billion points per second, 802GB/s) on an RTX 3090 without the use of level-of-detail structures. We further introduce an optimized vertex order for point clouds to boost the efficiency of GL_POINTS by a factor of five in cases where hardware rendering is compulsory. We compare different orderings and show that Morton-sorted buffers are faster for some viewpoints, while shuffled vertex buffers are faster in others. Combining both approaches, by first sorting according to Morton code and then shuffling the resulting sequence in batches of 128 points, leads to a vertex buffer layout with high rendering performance and low sensitivity to viewpoint changes.
Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
10,521
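The Morton ordering discussed in the abstract above sorts points along a Z-order space-filling curve by interleaving the bits of their quantized coordinates, so that spatial neighbours end up close together in memory. Below is a 2D sketch of the standard bit-interleaving trick (the paper works with 3D point clouds, where three coordinates are interleaved).

```python
def part1by1(n):
    """Spread the lower 16 bits of n so there is a zero bit between each
    (standard magic-number bit spreading)."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(x, y):
    """Interleave the bits of quantized x and y into one Morton (Z-order)
    key: x occupies the even bit positions, y the odd ones. Sorting points
    by this key yields a cache-friendly, spatially coherent vertex order."""
    return part1by1(x) | (part1by1(y) << 1)
```

For example, the four cells of a 2x2 grid map to keys 0, 1, 2, 3 in the familiar Z pattern.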
When walking on loose terrains, possibly covered with vegetation, the ground and grass should deform, but the character's gait should also change accordingly. We propose a method for modeling such two-ways interactions in real-time. We first complement a layered character model by a high-level controller, which uses position and angular velocity inputs to improve dynamic oscillations when walking on various slopes. Secondly, at a refined level, the feet are set to locally deform the ground and surrounding vegetation using efficient procedural functions, while the character's response to such deformations is computed through adapted inverse kinematics. While simple to set up, our method is generic enough to adapt to any character morphology. Moreover, its ability to generate in real time, consistent gaits on a variety of loose grounds of arbitrary slope, possibly covered with grass, makes it an interesting solution to enhance films and games.
Soft Walks: Real-Time, Two-Ways Interaction between a Character and Loose Grounds
10,522
This paper introduces a new method to stylize 3D geometry. The key observation is that the surface normal is an effective instrument to capture different geometric styles. Centered around this observation, we cast stylization as a shape analogy problem, where the analogy relationship is defined on the surface normal. This formulation can deform a 3D shape into different styles within a single framework. One can plug-and-play different target styles by providing an exemplar shape or an energy-based style description (e.g., developable surfaces). Our surface stylization methodology enables Normal Captures as a geometric counterpart to material captures (MatCaps) used in rendering, and the prototypical concept of Spherical Shape Analogies as a geometric counterpart to image analogies in image processing.
Normal-Driven Spherical Shape Analogies
10,523
Constrained by the limitations of learning toolkits engineered for other applications, such as those in image processing, many mesh-based learning algorithms employ data flows that would be atypical from the perspective of conventional geometry processing. As an alternative, we present a technique for learning from meshes built from standard geometry processing modules and operations. We show that low-order eigenvalue/eigenvector computation from operators parameterized using discrete exterior calculus is amenable to efficient approximate backpropagation, yielding spectral per-element or per-mesh features with similar formulas to classical descriptors like the heat/wave kernel signatures. Our model uses few parameters, generalizes to high-resolution meshes, and exhibits performance and time complexity on par with past work.
HodgeNet: Learning Spectral Geometry on Triangle Meshes
10,524
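The spectral features in the HodgeNet abstract above come from low-order eigenvalue/eigenvector computation of learned, parameterized operators. The fixed-operator analogue of that step, eigenvectors of a plain graph Laplacian built from vertex adjacency, can be sketched as follows; `spectral_features` is an illustrative name, not the paper's API.

```python
import numpy as np

def spectral_features(adj, k=3):
    """Per-vertex spectral features: the k lowest-frequency eigenpairs of
    the graph Laplacian L = D - A. HodgeNet instead learns the operator
    entries (via discrete exterior calculus) and backpropagates through
    this eigen-decomposition; the eigen-feature step itself looks like this."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]         # low-frequency basis per vertex
```

On any connected graph the smallest eigenvalue is zero with a constant eigenvector, and the next eigenvalues encode increasingly fine "vibration modes" of the shape, the same low-order spectrum that descriptors like the heat/wave kernel signatures are built from.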
This paper examines an algorithm using dual OpenCL image buffers to optimize data streaming for ensemble processing and visualization. Image buffers were utilized because they allow cached memory access, unlike simple data buffers, which are more commonly used. OpenCL image object performance was improved by allowing upload and mapping into one buffer to occur concurrently with mapping and/or processing of data in another buffer. This technique was applied in an interactive application allowing multiple flood extent maps to be combined into a single image, and allowing users to vary input image sets in real time. The efficiency of this technique was tested by varying both dimensions of input images and number of iterations; computation scaled linearly with number of input images, with best results achieved using ~4k images. Tests were performed to determine the rate at which data could be moved from data buffers to image buffers, examining a large range of possible image buffer dimensions. Additional tests examined kernel runtimes with different image and buffer variants. Limitations of the algorithm and possible applications are discussed.
Efficacy of Images Versus Data Buffers: Optimizing Interactive Applications Utilizing OpenCL for Scientific Visualization
10,525
We propose a novel deep learning framework for animation video resequencing. Our system produces new video sequences by minimizing a perceptual distance of images from an existing animation video clip. To measure perceptual distance, we utilize the activations of convolutional neural networks and learn a perceptual distance by training these features on a small network with data comprised of human perceptual judgments. We show that with this perceptual metric and graph-based manifold learning techniques, our framework can produce new smooth and visually appealing animation video results for a variety of animation video styles. In contrast to previous work on animation video resequencing, the proposed framework applies to a wide range of image styles and does not require hand-crafted feature extraction, background subtraction, or feature correspondence. In addition, we show that our framework has applications in appealingly arranging unordered collections of images.
Learning a perceptual manifold with deep features for animation video resequencing
10,526
We present a sample implementation of a Virtual and Augmented Reality immersive environment based on Free and Libre Open Source Hardware and Software and the HTC Vive system, used to enhance the immersive experience of the user and to track her/his movements. The sense of immersion is increased and stimulated by using a footplate and a Tibetan bridge, connected to the virtual world as Augmented Reality applications and implemented through an Arduino board, thereby adopting a low-cost, open-source hardware and software approach. The proposed architecture is relatively affordable from the cost point of view, and easy to implement, configure, and adapt to different contexts. It can be of great help for organizing laboratory classes in which young students tackle the implementation of virtual worlds and Augmented Reality applications.
An immersive Open Source environment using Godot
10,527
We present a novel two-stage approach for automated floorplan design in residential buildings with a given exterior wall boundary. Our approach has the unique advantage of being human-centric, that is, the generated floorplans can be geometrically plausible, as well as topologically reasonable to enhance resident interaction with the environment. From the input boundary, we first synthesize a human-activity map that reflects both the spatial configuration and human-environment interaction in an architectural space. We propose to produce the human-activity map either automatically by a pre-trained generative adversarial network (GAN) model, or semi-automatically by synthesizing it with user manipulation of the furniture. Second, we feed the human-activity map into our deep framework ActFloor-GAN to guide a pixel-wise prediction of room types. We adopt a re-formulated cycle-consistency constraint in ActFloor-GAN to maximize the overall prediction performance, so that we can produce high-quality room layouts that are readily convertible to vectorized floorplans. Experimental results show several benefits of our approach. First, a quantitative comparison with prior methods shows superior performance of leveraging the human-activity map in predicting piecewise room types. Second, a subjective evaluation by architects shows that our results have compelling quality as professionally-designed floorplans and much better than those generated by existing methods in terms of the room layout topology. Last, our approach allows manipulating the furniture placement, considers the human activities in the environment, and enables the incorporation of user-design preferences.
ActFloor-GAN: Activity-Guided Adversarial Networks for Human-Centric Floorplan Design
10,528
We present a neural network to estimate the visual importance of pixels in images and video, which is used in content-aware media retargeting applications. Existing techniques propose successful retargeting methods. Yet, serious distortion and shrinkage artifacts in the retargeted results still need to be addressed, due to limitations in the methods used to analyze visual attention. To accomplish this, we propose a network that defines an importance map, which is sufficient to describe the energy of the significant regions in an image or video. With this strategy, our system obtains better results. Besides, the objective evaluation presented in this paper shows that our media retargeting system can achieve better and more plausible results than those of other works. In addition, our proposed importance map performs well under the enlarging operator and on "difficult-to-resize" images.
Content-aware media retargeting based on deep importance map
10,529
Drawing a direct analogy with the well-studied vibration or elastic modes, we introduce an object's fracture modes, which constitute its preferred or most natural ways of breaking. We formulate a sparsified eigenvalue problem, which we solve iteratively to obtain the n lowest-energy modes. These can be precomputed for a given shape to obtain a prefracture pattern that can substitute the state of the art for realtime applications at no runtime cost but significantly greater realism. Furthermore, any realtime impact can be projected onto our modes to obtain impact-dependent fracture patterns without the need for any online crack propagation simulation. We not only introduce this theoretically novel concept, but also show its fundamental and practical superiority in a diverse set of examples and contexts.
Breaking Good: Fracture Modes for Realtime Destruction
10,530
In this paper we present a technique that improves rendering performance for real-time scenes with ray traced lighting in the presence of dynamic lights and objects. In particular we verify photon paths from the previous frame against dynamic objects in the current frame, and show how most photon paths are still valid. When using area lights, we use a data structure to store light distribution that tracks light paths allowing photons to be reused when the light source is moving in the scene. We also show that by reusing paths when the error in the reflected energy is below a threshold value, even more paths can be reused. We apply this technique to Indirect Illumination using a screen space photon splatting rendering engine. By reusing photon paths and applying our error threshold, our method can reduce the number of rays traced by up to 5x, and improve performance by up to 2x.
Path Verification for Dynamic Indirect Illumination
10,531
A single panorama can be drawn perspectively without distortions in arbitrary viewing directions and fields of view when the camera position is at the origin. This is a key advantage in VR and virtual tour applications because it enables the user to freely "look around" in a virtual world with just a single panorama, albeit at a fixed position. However, when the camera moves away from the center, barrel distortions appear and realism breaks. We propose modifications to the equirectangular-to-perspective (E2P) projection that significantly reduce distortions when the camera position is away from the origin. This enables users to not only "look around" but also "walk around" virtually in a single panorama with more convincing renderings. We compare with other techniques that aim to augment panoramas with 3D information, including 1) panoramas with depth information and 2) panoramas augmented with room layouts, and show that our approach provides more visually convincing results.
Distortion Reduction for Off-Center Perspective Projection of Panoramas
10,532
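The baseline equirectangular-to-perspective (E2P) mapping that the abstract above builds on can be sketched as follows; this is the standard origin-centered projection, not the paper's off-center modification, and the function names are illustrative assumptions:

```python
import math

def e2p_direction(px, py, w, h, fov):
    # View direction of perspective pixel (px, py) in a w x h image
    # with horizontal field of view fov (radians), camera at the origin.
    f = (w / 2) / math.tan(fov / 2)      # focal length in pixels
    x, y = px - w / 2, py - h / 2
    n = math.sqrt(x * x + y * y + f * f)
    return (x / n, y / n, f / n)

def dir_to_equirect(d, pano_w, pano_h):
    # Map a unit direction to pixel coordinates in an equirectangular
    # panorama: longitude -> u, latitude -> v.
    lon = math.atan2(d[0], d[2])                 # [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, d[1])))   # [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * pano_w
    v = (lat / math.pi + 0.5) * pano_h
    return (u, v)
```

Rendering a perspective view samples the panorama at `dir_to_equirect(e2p_direction(...))` for every output pixel; the paper's contribution modifies this mapping when the camera leaves the origin.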
The paper presents a novel method for interpolating rotations based on the non-Abelian Kuramoto model on the sphere S3. The algorithm, introduced in this paper, finds the shortest and most direct path between two rotations. We have found that it gives approximately the same results as the spherical linear interpolation (Slerp) algorithm. Simulation results of our algorithm are visualized on S2 using the Hopf fibration. In addition, to provide better insight, we include a short video illustrating the rotation of an object between two positions.
Interpolating Rotations with Non-abelian Kuramoto Model on the 3-Sphere
10,533
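For context, the Slerp baseline that the abstract compares against can be sketched in a few lines; this is the textbook construction on unit quaternions, not the Kuramoto-model method itself:

```python
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions q0 and q1,
    # each given as a 4-tuple (w, x, y, z), for parameter t in [0, 1].
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter of the two arcs
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)              # angle between the quaternions
    if theta < 1e-9:                    # nearly identical: fall back to lerp
        return tuple(a + t * (b - a) for a, b in zip(q0, q1))
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

`slerp(q0, q1, t)` traces the great-circle arc between the two rotations at constant angular speed, which is the shortest, most direct path the paper's method approximates.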
We propose and explore a new, general-purpose method for the implicit time integration of elastica. Key to our approach is the use of a mixed variational principle. In turn, its finite element discretization leads to an efficient alternating projections solver with a superset of the desirable properties of many previous fast solution strategies. This framework fits a range of elastic constitutive models and remains stable across a wide span of timestep sizes and material parameters, including problems that are quasi-static and approximately rigid. It is efficient to evaluate and easily applicable to volume, surface, and rod models. We demonstrate the efficacy of our approach on a number of simulated examples across all three codomains.
Mixed Variational Finite Elements for Implicit, General-Purpose Simulation of Deformables
10,534
Dr.Jit is a new just-in-time compiler for physically based rendering and its derivative. Dr.Jit expedites research on these topics in two ways: first, it traces high-level simulation code (e.g., written in Python) and aggressively simplifies and specializes the resulting program representation, producing data-parallel kernels with state-of-the-art performance on CPUs and GPUs. Second, it simplifies the development of differentiable rendering algorithms. Efficient methods in this area turn the derivative of a simulation into a simulation of the derivative. Dr.Jit provides fine-grained control over the process of automatic differentiation to help with this transformation. Specialization is particularly helpful in the context of differentiation, since large parts of the simulation ultimately do not influence the computed gradients. Dr.Jit tracks data dependencies globally to find and remove redundant computation.
Dr.Jit: A Just-In-Time Compiler for Differentiable Rendering
10,535
We propose a neural network-based approach for collision detection with deformable objects. Unlike previous approaches based on bounding volume hierarchies, our neural approach does not require an update of the spatial data structure when the object deforms. Our network is trained on the reduced degrees of freedom of the object, so that we can use the same network to query for collisions even when the object deforms. Our approach is simple to use and implement, and it can readily be employed on the GPU. We demonstrate our approach with two concrete examples: a haptics application with a finite element mesh, and cloth simulation with a skinned character.
Neural Collision Detection for Deformable Objects
10,536
We propose a simple and practical approach for incorporating the effects of muscle inertia, which has been ignored by previous musculoskeletal simulators in both graphics and biomechanics. We approximate the inertia of the muscle by assuming that muscle mass is distributed along the centerline of the muscle. We express the motion of the musculotendons in terms of the motion of the skeletal joints using a chain of Jacobians, so that at the top level, only the reduced degrees of freedom of the skeleton are used to completely drive both bones and musculotendons. Our approach can handle all commonly used musculotendon path types, including those with multiple path points and wrapping surfaces. For muscle paths involving wrapping surfaces, we use neural networks to model the Jacobians, trained using existing wrapping surface libraries, which allows us to effectively handle the Jacobian discontinuities that occur when musculotendon paths collide with wrapping surfaces. We demonstrate support for higher-order time integrators, complex joints, inverse dynamics, Hill-type muscle models, and differentiability. In the limit, as the muscle mass is reduced to zero, our approach gracefully degrades to traditional simulators without support for muscle inertia. Finally, it is possible to mix and match inertial and non-inertial musculotendons, depending on the application.
Differentiable Simulation of Inertial Musculotendons
10,537
We present a new approach that allows large time steps in dynamic simulations. Our approach, ConJac, is based on condensation, a technique for eliminating many degrees of freedom (DOFs) by expressing them in terms of the remaining degrees of freedom. In this work, we choose a subset of nodes to be dynamic nodes, and apply condensation at the velocity level by defining a linear mapping from the velocities of these chosen dynamic DOFs to the velocities of the remaining quasistatic DOFs. We then use this mapping to derive reduced equations of motion involving only the dynamic DOFs. We also derive a novel stabilization term that enables us to use complex nonlinear material models. ConJac remains stable at large time steps, exhibits highly dynamic motion, and displays minimal numerical damping. In marked contrast to subspace approaches, ConJac gives exactly the same configuration as the full space approach once the static state is reached. Furthermore, ConJac can automatically choose which parts of the object are to be simulated dynamically or quasistatically. Finally, ConJac works with a wide range of moderate to stiff materials, supports anisotropy and heterogeneity, handles topology changes, and can be combined with existing solvers including rigid body dynamics.
Condensation Jacobian with Adaptivity
10,538
Over the past decade, 3D graphics have become highly detailed to mimic the real world, exploding their size and complexity. Certain applications and device constraints necessitate their simplification and/or lossy compression, which can degrade their visual quality. Thus, to ensure the best Quality of Experience (QoE), it is important to evaluate the visual quality to accurately drive the compression and find the right compromise between visual quality and data size. In this work, we focus on subjective and objective quality assessment of textured 3D meshes. We first establish a large-scale dataset, which includes 55 source models quantitatively characterized in terms of geometric, color, and semantic complexity, and corrupted by combinations of 5 types of compression-based distortions applied on the geometry, texture mapping and texture image of the meshes. This dataset contains over 343k distorted stimuli. We propose an approach to select a challenging subset of 3000 stimuli for which we collected 148929 quality judgments from over 4500 participants in a large-scale crowdsourced subjective experiment. Leveraging our subject-rated dataset, a learning-based quality metric for 3D graphics was proposed. Our metric demonstrates state-of-the-art results on our dataset of textured meshes and on a dataset of distorted meshes with vertex colors. Finally, we present an application of our metric and dataset to explore the influence of distortion interactions and content characteristics on the perceived quality of compressed textured meshes.
Textured Mesh Quality Assessment: Large-Scale Dataset and Deep Learning-based Quality Metric
10,539
Highly accurate alpha blending can be performed entirely with integer operations, and no divisions. To reduce the number of integer multiplications, multiple color components can be blended in parallel in the same 32-bit or 64-bit register. This tutorial explains how to avoid division operations when alpha blending with 32-bit RGBA pixels. An RGBA pixel contains four 8-bit components (red, green, blue, and alpha) whose values range from 0 to 255. Alpha blending requires multiplication of the color components by an alpha value, after which (for greatest accuracy) each of these products is divided by 255 and then rounded to the nearest integer. This tutorial presents an approximate alpha-blending formula that replaces the division operation with an integer shift and add -- and also enables the number of multiplications to be reduced. When the same blending calculation is carried out to high precision using double-precision floating-point division operations, the results are found to exactly match those produced by this approximation. C++ code examples are included.
Alpha Blending with No Division Operations
10,540
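The shift-and-add identity described in the tutorial above can be sketched in Python (the tutorial itself provides C++); for any v in [0, 65535] the expression reproduces round(v / 255) exactly, so no division is needed:

```python
def div255_round(v):
    # Exact rounded division by 255 for 0 <= v <= 65535,
    # using only an add and two shifts -- no division.
    v += 128
    return (v + (v >> 8)) >> 8

def blend_channel(src, dst, alpha):
    # Standard "over" blend of one 8-bit color channel:
    # round((src * alpha + dst * (255 - alpha)) / 255).
    return div255_round(src * alpha + dst * (255 - alpha))
```

Checking all 65536 possible inputs confirms the bit-exact agreement with rounded division that the tutorial claims.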
Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) in a strict time budget. Recently, kernel-prediction methods use a neural network to predict each pixel's filtering kernel and have shown a great potential to remove Monte Carlo noise. However, the heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel fusion module to improve denoising quality. The proposed approach preserves kernel prediction methods' denoising quality while roughly halving its denoising time for 1-spp noisy inputs. In addition, compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network
10,541
To address the high cost of embedding annotation watermarks in small, narrow areas and the information distortion caused by changes in the resolution of the annotation watermark image, this paper proposes a color block code technique that combines location information and color codes into recognizable graphics, which both simplifies the annotation graphics and preserves recognition efficiency. We first design the constituent elements of the color block code and then propose its encoding and decoding methods. Experiments show that the color block code is highly robust to scaling and interference and can be widely used for labeling small object surfaces and low-resolution images.
Application of Color Block Code in Image Scaling
10,542
We propose a method for computing a sewing pattern of a given 3D garment model. Our algorithm segments an input 3D garment shape into patches and computes their 2D parameterization, resulting in pattern pieces that can be cut out of fabric and sewn together to manufacture the garment. Unlike the general state-of-the-art approaches for surface cutting and flattening, our method explicitly targets garment fabrication. It accounts for the unique properties and constraints of tailoring, such as seam symmetry, the usage of darts, fabric grain alignment, and a flattening distortion measure that models woven fabric deformation, respecting its anisotropic behavior. We bootstrap a recent patch layout approach developed for quadrilateral remeshing and adapt it to the purpose of computational pattern making, ensuring that the deformation of each pattern piece stays within prescribed bounds of cloth stress. While our algorithm can automatically produce the sewing patterns, it is fast enough to admit user input to creatively iterate on the pattern design. Our method can take several target poses of the 3D garment into account and integrate them into the sewing pattern design. We demonstrate results on both skintight and loose garments, showcasing the versatile application possibilities of our approach.
Computational Pattern Making from 3D Garment Models
10,543
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
Point Containment Queries on Ray Tracing Cores for AMR Flow Visualization
10,544
The many-light formulation provides a general framework for rendering various illumination effects using hundreds of thousands of virtual point lights (VPLs). To efficiently gather the contributions of the VPLs, lightcuts and its extensions cluster the VPLs, which implicitly approximates the lighting matrix with representative blocks, similar to vector quantization. In this paper, we propose a new approximation method based on the previous lightcut method and a low-rank matrix factorization model. As many researchers have pointed out, the lighting matrix is low rank, which implies that it can be completed from a small set of known entries. We first generate a conservative global light cut with bounded error and partition the lighting matrix into slices by the coordinates and normals of the surface points using the method of lightslice. Then we perform two passes of random sampling on each matrix slice. In the first pass, uniformly distributed random entries are sampled to coarsen the global light cut, further clustering similar lights for the spatially localized surface points of each slice. In the second pass, more entries are sampled according to the probability distribution function estimated from the first pass. Each matrix slice is then factorized into a product of two smaller low-rank matrices constrained by the sampled entries, which yields a completion of the lighting matrix. The factorized form provides an additional speedup when summing the matrix columns, which is more GPU friendly. Compared with previous lightcut-based methods, we approximate the lighting matrix with signal-specialized bases via factorization. The experimental results show that we achieve significant acceleration over state-of-the-art many-light methods.
Sparse Sampling and Completion for Light Transport in VPL-based Rendering
10,545
In this article, we provide a detailed survey of techniques for hexahedral mesh generation. We cover the whole spectrum of alternative approaches to mesh generation, as well as post processing algorithms for connectivity editing and mesh optimization. For each technique, we highlight capabilities and limitations, also pointing out the associated unsolved challenges. Recent relaxed approaches, aiming to generate not pure-hex but hex-dominant meshes, are also discussed. The required background, pertaining to geometrical as well as combinatorial aspects, is introduced along the way.
Hex-Mesh Generation and Processing: a Survey
10,546
We investigate the raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822× speedup over a Pentium IV on a 6144-processor Blue Gene/L. We measure the computational performance as a function of the number of processors and problem size to determine the scaling behavior of the raytracing calculation on the Blue Gene, and find nontrivial scaling behavior at large numbers of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.
Toward the Graphics Turing Scale on a Blue Gene Supercomputer
10,547
This article introduces the next version of MathPSfrag. MathPSfrag is a Mathematica package that during export automatically replaces all expressions in a plot by corresponding LaTeX commands. The new version can also produce LaTeX independent images; e.g., PDF files for inclusion in pdfLaTeX. Moreover from these files a preview is generated and shown within Mathematica.
MathPSfrag 2: Convenient LaTeX Labels in Mathematica
10,548
The parallel coordinates plot (PCP) is an excellent tool for multivariate visualization and analysis, but it may fail to reveal inherent structure for datasets with a large number of items. In this paper, we propose a suite of novel clustering, dimension-ordering, and visualization techniques based on PCPs to reveal and highlight hidden structures. First, we propose a continuous spline-based polycurve design to extract and classify different cluster aspects of the data. Then, we provide an efficient and optimal correlation-based sorting technique to reorder the coordinates, as a helpful visualization tool for data analysis. The results generated by our framework visually convey rich structure, trend, and correlation information to guide the user and improve the efficacy of analysis, especially for complex and noisy datasets.
Pattern Recognition and Revealing using Parallel Coordinates Plot
10,549
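A minimal sketch of correlation-based coordinate ordering in the spirit of the abstract above: greedily place next the axis with the highest absolute correlation to the last placed one, so related dimensions become adjacent. The greedy strategy and function names here are illustrative assumptions, not the paper's exact algorithm:

```python
import math
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length value lists.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def order_axes(data):
    # data: dict mapping axis name -> list of values (one per data item).
    # Start from the first axis; repeatedly append the unplaced axis with
    # the highest |correlation| to the last placed axis.
    names = list(data)
    order = [names.pop(0)]
    while names:
        last = data[order[-1]]
        best = max(names, key=lambda n: abs(pearson(last, data[n])))
        names.remove(best)
        order.append(best)
    return order
```

With strongly related axes placed side by side, the line segments between adjacent coordinates exhibit coherent slopes instead of visual clutter.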
We present the first triangle mesh-based technique for tracking the evolution of general three-dimensional multimaterial interfaces undergoing complex topology changes induced by deformations and collisions. Our core representation is a non-manifold triangle surface mesh with material labels assigned to each half-face to distinguish volumetric regions. We advect the vertices of the mesh in a Lagrangian manner, and employ a complete set of collision-safe mesh improvement and topological operations that track and update material labels. In particular, we develop a unified, collision-safe strategy for handling complex topological operations acting on evolving triple- and higher-valence junctions, and a flexible method to merge colliding multimaterial meshes. We demonstrate our approach with a number of challenging geometric flows, including passive advection, normal flow, and mean curvature flow.
Multimaterial Front Tracking
10,550
The paper addresses the following problem: given a set of man-made shapes, e.g., chairs, can we quickly rank and explore the set of shapes with respect to a given avatar pose? Answering this question requires identifying which shapes are more suitable for the given avatar and pose, and moreover providing a fast preview of how to alter the input geometry to better fit the shapes to the given avatar pose. The problem naturally links the physical proportions of the human body and its interaction with object shapes, in an attempt to connect ergonomics with shape geometry. We designed an interaction system that allows users to explore shape collections using the deformation of human characters while at the same time providing interactive previews of how to alter the shapes to better fit the user-specified character. We achieve this by first mapping ergonomics guidelines into a set of simultaneous multi-part constraints based on target contacts, and then proposing a novel contact-based deformation model to realize multi-contact constraints. We evaluate our framework on various chair models and validate the results via a small user study.
Ergonomic-driven Geometric Exploration and Reshaping
10,551
This paper presents a graph bundling algorithm that agglomerates edges taking into account both spatial proximity as well as user-defined criteria in order to reveal patterns that were not perceivable with previous bundling techniques. Each edge belongs to a group that may either be an input of the problem or found by clustering one or more edge properties such as origin, destination, orientation, length or domain-specific properties. Bundling is driven by a stack of density maps, with each map capturing both the edge density of a given group as well as interactions with edges from other groups. Density maps are efficiently calculated by smoothing 2D histograms of edge occurrence using repeated averaging filters based on integral images. A CPU implementation of the algorithm is tested on several graphs, and different grouping criteria are used to illustrate how the proposed technique can render different visualizations of the same data. Bundling performance is much higher than on previous approaches, being particularly noticeable on large graphs, with millions of edges being bundled in seconds.
3D Density Histograms for Criteria-driven Edge Bundling
10,552
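The smoothing primitive described above, an averaging (box) filter over a 2D edge-occurrence histogram evaluated in constant time per cell via an integral image (summed-area table), can be sketched as follows; repeated passes approximate Gaussian smoothing of the density map:

```python
def box_blur(grid, r):
    # One pass of a box filter of radius r over a 2D histogram.
    # Cost is O(1) per cell thanks to the integral image, regardless of r.
    h, w = len(grid), len(grid[0])
    # I[i][j] holds the sum of grid[0:i][0:j] (one-cell border of zeros).
    I = [[0.0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            I[i + 1][j + 1] = grid[i][j] + I[i][j + 1] + I[i + 1][j] - I[i][j]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            y0, y1 = max(0, i - r), min(h, i + r + 1)
            x0, x1 = max(0, j - r), min(w, j + r + 1)
            s = I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0]
            out[i][j] = s / ((y1 - y0) * (x1 - x0))  # window average
    return out
```

Applying `box_blur` several times in succession converges toward a Gaussian-smoothed density map, which is why a stack of cheap averaging passes suffices for bundling.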
This work introduces a novel tool for interactive, real-time transformations of two-dimensional IFS fractals. We assign barycentric coordinates (relative to an arbitrary affine basis of $\mathbb{R}^2$) to the points that constitute the image of a fractal. The tool exploits some of the nice properties of barycentric coordinates, enabling any affine transformation of the basis, performed by click-and-drag, to be immediately followed by the same affine transformation of the IFS fractal attractor. In order to have better control over the fractal, as the affine basis we use a kind of minimal simplex that contains the attractor. We give the theoretical grounds of the tool and then present the software application.
Real-time Tool for Affine Transformations of Two Dimensional IFS Fractals
10,553
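The core mechanism the abstract describes can be sketched as follows: compute the barycentric coordinates of each attractor point once, then re-evaluate them in the dragged basis, so that any affine transform of the basis transforms the fractal identically (a sketch with illustrative function names, not the tool's actual code):

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates of point p relative to triangle (a, b, c),
    # returned as weights (w_a, w_b, w_c) with w_a + w_b + w_c = 1.
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    u = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    v = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return (1 - u - v, u, v)

def apply_basis(coords, a, b, c):
    # Reconstruct the point from its barycentric coordinates in a
    # (possibly transformed) basis. Because barycentric coordinates are
    # affine-invariant, moving the basis moves the point the same way.
    w0, w1, w2 = coords
    return (w0 * a[0] + w1 * b[0] + w2 * c[0],
            w0 * a[1] + w1 * b[1] + w2 * c[1])
```

Dragging a basis vertex thus redraws the whole attractor under the corresponding affine map without re-running the IFS iteration.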
Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane, simulating high-quality global illumination at a heavy computational cost. Because of this high computational complexity, it cannot meet the requirements of real-time rendering. The emergence of many-core architectures makes it possible to significantly reduce the running time of the ray tracing algorithm by exploiting their powerful floating-point computation capability. In this paper, a new GPU implementation and optimization of ray tracing to accelerate the rendering process is presented.
Massively Parallel Ray Tracing Algorithm Using GPU
10,554
Wide-angle images gained huge popularity in the last years due to the development of computational photography and imaging technological advances. They present the information of a scene in a way which is more natural for the human eye but, on the other hand, they introduce artifacts such as bent lines. These artifacts become more and more unnatural as the field of view increases. In this work, we present a technique aimed at improving the perceptual quality of panorama visualization. The main ingredients of our approach are, on one hand, considering the viewing sphere as a Riemann sphere, which makes the application of Möbius (complex) transformations to the input image natural, and, on the other hand, a projection scheme that changes as a function of the field of view used. We also introduce an implementation of our method, compare it against images produced with other methods, and show that the transformations can be done in real time, which makes our technique very appealing for new settings as well as for existing interactive panorama applications.
Real-time correction of panoramic images using hyperbolic Möbius transformations
10,555
We pose the task of voxelizing a space curve in the video memory of a 3D display, and study an approach to the voxel resolution of an arbitrary space curve given in parametric form. Extensive experiments are conducted, and interesting results together with practical recommendations are presented.
Resolution Improvement of the Common Method for Presentating Arbitrary Space Curves Voxel
10,556
We present a new sound rendering pipeline that is able to generate plausible sound propagation effects for interactive dynamic scenes. Our approach combines ray-tracing-based sound propagation with reverberation filters using robust automatic reverb parameter estimation that is driven by impulse responses computed at a low sampling rate. We propose a unified spherical harmonic representation of directional sound in both the propagation and auralization modules and use this formulation to perform a constant number of convolution operations for any number of sound sources while rendering spatial audio. In comparison to previous geometric acoustic methods, we achieve a speedup of over an order of magnitude while delivering audio similar to high-quality convolution rendering algorithms. As a result, our approach is the first capable of rendering plausible dynamic sound propagation effects on commodity smartphones.
Interactive Sound Rendering on Mobile Devices using Ray-Parameterized Reverberation Filters
10,557
Multiple importance sampling (MIS) is employed to reduce the variance of estimators, but when the sampling and weighting are not suited to the integrand, the estimators incur extra variance. Therefore, robust light transport simulation algorithms based on Monte Carlo sampling for different types of scenes remain incomplete. In this paper, we address this problem by presenting a general method, named generalized multiple importance sampling (GMIS), to enhance the robustness of light transport simulation based on MIS. GMIS combines different sampling techniques and weighting functions, extending MIS to a more general framework. Meanwhile, we implement GMIS in common renderers and illustrate how it increases the robustness of light transport simulation. Experiments show that, by applying GMIS, we obtain better convergence and lower variance, and visibly improve the rendering of ambient lighting and specular shadow effects.
Light Transport Simulation via Generalized Multiple Importance Sampling
10,558
Terrains are a central part of an electronic game. To reduce human effort in game development, procedural techniques are used to generate synthetic terrains. However, rendering a terrain is not a trivial task: the rendering techniques must be optimized for gaming, especially for planetary terrains, which must account for precision and scale conversion. Multi-resolution models best fit planetary terrains; an observer can change their point of view without noticing any decrease in visual quality. There are several proposals regarding real-time terrain rendering with multi-resolution models, and there are game engines capable of generating large-scale terrains with fixed resolution. However, to the best of our knowledge, no techniques combine both aspects. In this paper we present a new technique capable of generating large-scale multi-resolution terrains, which can be rendered and viewed at different scales, rendering large-scale models with high definition and adding finer details to low-scale areas with the aid of procedural content generation.
Procedural Planetary Multi-resolution Terrain Generation for Games
10,559
We propose a robust normal estimation method for both point clouds and meshes using a low rank matrix approximation algorithm. First, we compute a local feature descriptor for each point and find similar, non-local neighbors that we organize into a matrix. We then show that a low rank matrix approximation algorithm can robustly estimate normals for both point clouds and meshes. Furthermore, we provide a new filtering method for point cloud data to smooth the position data to fit the estimated normals. We show applications of our method to point cloud filtering, point set upsampling, surface reconstruction, mesh denoising, and geometric texture removal. Our experiments show that our method outperforms current methods in both visual quality and accuracy.
Low Rank Matrix Approximation for Geometry Filtering
10,560
DeepWarp is an efficient and highly re-usable deep neural network (DNN) based nonlinear deformable simulation framework. Unlike other deep learning applications such as image recognition, where different inputs have a uniform and consistent format (e.g. an array of all the pixels in an image), the input for deformable simulation is quite variable, high-dimensional, and parametrization-unfriendly. Consequently, even though DNN is known for its rich expressivity of nonlinear functions, directly using DNN to reconstruct the force-displacement relation for general deformable simulation is nearly impossible. DeepWarp obviates this difficulty by partially restoring the force-displacement relation via warping the nodal displacement simulated using a simplistic constitutive model -- the linear elasticity. In other words, DeepWarp yields an incremental displacement fix based on a simplified (therefore incorrect) simulation result rather than returning the unknown displacement directly. We contrive a compact yet effective feature vector including geodesic, potential and digression to sort training pairs of per-node linear and nonlinear displacement. DeepWarp is robust under different model shapes and tessellations. With the assistance of deformation substructuring, one DNN training is able to handle a wide range of 3D models of various geometries including most examples shown in the paper. Thanks to the linear elasticity and its constant system matrix, the underlying simulator only needs to perform one pre-factorized matrix solve at each time step, and DeepWarp is able to simulate large models in real time.
DeepWarp: DNN-based Nonlinear Deformation
10,561
We present a novel spatial hashing based data structure to facilitate 3D shape analysis using convolutional neural networks (CNNs). Our method exploits the sparse occupancy of the 3D shape boundary and builds hierarchical hash tables for an input model at different resolutions. Based on this data structure, we design two efficient GPU algorithms, namely hash2col and col2hash, so that CNN operations like convolution and pooling can be efficiently parallelized. The spatial hashing is nearly minimal, and our data structure is almost of the same size as the raw input. Compared with state-of-the-art octree-based methods, our data structure significantly reduces the memory footprint during CNN training. As the input geometry features are more compactly packed, CNN operations also run faster with our data structure. Experiments show that, under the same network structure, our method yields comparable or better benchmarks than the state of the art while consuming only one-third of the memory. Such superior memory performance allows the CNN to handle high-resolution shape analysis.
H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis
10,562
Parallel coordinates are a popular technique to visualize multi-dimensional data. However, they face a significant problem influencing the perception and interpretation of patterns. The distance between two parallel lines differs based on their slope. Vertical lines are rendered longer and closer to each other than horizontal lines. This problem is inherent in the technique and has two main consequences: (1) clusters which have a steep slope between two axes are visually more prominent than horizontal clusters. (2) Noise and clutter can be perceived as clusters, as a few parallel vertical lines visually emerge as a ghost cluster. Our paper makes two contributions: First, we formalize the problem and show its impact. Second, we present a novel technique to reduce the effects by rendering the polylines of the parallel coordinates based on their slope: horizontal lines are rendered with the default width, lines with a steep slope with a thinner line. Our technique avoids density distortions of clusters, can be computed in linear time, and can be added on top of most parallel coordinate variations. To demonstrate the usefulness, we show examples and compare them to the classical rendering.
Slope-Dependent Rendering of Parallel Coordinates to Reduce Density Distortion and Ghost Clusters
10,563
We present a general sample reweighting scheme and its underlying theory for the integration of an unknown function with low dimensionality. Our method produces better results than standard weighting schemes for common sampling strategies, while avoiding bias. Our main insight is to link the weight derivation to the function reconstruction process during integration. The implementation of our solution is simple and results in an improved convergence behavior. We illustrate its benefit by applying our method to multiple Monte Carlo rendering problems.
Geometric Sample Reweighting for Monte Carlo Integration
10,564
Sample based ray marching is an effective method for direct volume rendering of unstructured meshes. However, sampling such meshes remains expensive, and strategies to reduce the number of samples taken have received relatively little attention. In this paper, we introduce a method for rendering unstructured meshes using a combination of a coarse spatial acceleration structure and hardware-accelerated ray tracing. Our approach enables efficient empty space skipping and adaptive sampling of unstructured meshes, and outperforms a reference ray marcher by up to 7x.
Efficient Space Skipping and Adaptive Sampling of Unstructured Volumes Using Hardware Accelerated Ray Tracing
10,565
A porous scaffold is a three-dimensional network structure composed of a large number of pores, and triply periodic minimal surfaces (TPMSs) are one of the conventional tools for designing porous scaffolds. However, discontinuity, incompleteness, and high storage space requirements are the three main shortcomings of TPMSs for porous scaffold design. In this study, we developed an effective method for heterogeneous porous scaffold generation to overcome the abovementioned shortcomings of TPMSs. The input of the proposed method is a trivariate B-spline solid (TBSS) with a cubic parameter domain. The proposed method first constructs a threshold distribution field (TDF) in the cubic parameter domain, and then produces a continuous and complete TPMS within it. Moreover, by mapping the TPMS in the parametric domain to the TBSS, a continuous and complete porous scaffold is generated in the TBSS. In addition, if the TBSS does not satisfy engineering requirements, the TDF can be locally modified in the parameter domain, and the porous scaffold in the TBSS can be rebuilt. We also defined a new storage space-saving file format based on the TDF to store porous scaffolds. The experimental results presented in this paper demonstrate the effectiveness and efficiency of the method using a TBSS as well as the superior space-saving of the proposed storage format.
Heterogeneous porous scaffold generation in trivariate B-spline solid with triply periodic minimal surface in the parametric domain
10,566
We propose a compute shader based point cloud rasterizer with up to 10 times higher performance than classic point-based rendering with the GL_POINT primitive. In addition to that, our rasterizer offers 5 byte depth-buffer precision with uniform or customizable distribution, and we show that it is possible to implement a high-quality splatting method that blends together overlapping fragments while still maintaining higher frame-rates than the traditional approach.
Rendering Point Clouds with Compute Shaders
10,567
We propose a new tetrahedral meshing method, fTetWild, to convert triangle soups into high-quality tetrahedral meshes. Our method builds on the TetWild algorithm, replacing the rational triangle insertion with a new incremental approach to construct and optimize the output mesh, interleaving triangle insertion and mesh optimization. Our approach makes it possible to maintain a valid floating-point tetrahedral mesh at all algorithmic stages, eliminating the need for costly constructions with rational numbers used by TetWild, while maintaining full robustness and similar output quality. This allows us to improve on TetWild in two ways. First, our algorithm is significantly faster, with running time comparable to less robust Delaunay-based tetrahedralization algorithms. Second, our algorithm is guaranteed to produce a valid tetrahedral mesh with floating-point vertex coordinates, while TetWild produces a valid mesh with rational coordinates which is not guaranteed to be valid after floating-point conversion. As a trade-off, our algorithm no longer guarantees that all input triangles are present in the output mesh, but in practice, as confirmed by our tests on the Thingi10k dataset, the algorithm always succeeds in inserting all input triangles.
Fast Tetrahedral Meshing in the Wild
10,568
In this paper we present a new deep learning-driven approach to image-based synthesis of animations involving humanoid characters. Unlike previous deep approaches to image-based animation, our method makes no assumptions about the type of motion to be animated, nor does it require dense temporal input to produce motion. Instead, we generate new animations by interpolating between user-chosen keyframes, arranged sparsely in time. Utilizing a novel configuration manifold learning approach, we interpolate suitable motions between these keyframes. In contrast to previous methods, ours requires less data (animations can be generated from a single YouTube video) and is broadly applicable to a wide range of motions including facial motion, whole body motion and even scenes with multiple characters. These improvements serve to significantly reduce the difficulty in producing image-based animations of humanoid characters, allowing even broader audiences to express their creativity.
Convolutional Humanoid Animation via Deformation
10,569
Field-guided parametrization methods have proven effective for quad meshing of surfaces; these methods compute smooth cross fields to guide the meshing process and then integrate the fields to construct a discrete mesh. A key challenge in extending these methods to three dimensions, however, is representation of field values. Whereas cross fields can be represented by tangent vector fields that form a linear space, the 3D analog---an octahedral frame field---takes values in a nonlinear manifold. In this work, we describe the space of octahedral frames in the language of differential and algebraic geometry. With this understanding, we develop geometry-aware tools for optimization of octahedral fields, namely geodesic stepping and exact projection via semidefinite relaxation. Our algebraic approach not only provides an elegant and mathematically-sound description of the space of octahedral frames but also suggests a generalization to frames whose three axes scale independently, better capturing the singular behavior we expect to see in volumetric frame fields. These new odeco frames, so-called as they are represented by orthogonally decomposable tensors, also admit a semidefinite program--based projection operator. Our description of the spaces of octahedral and odeco frames suggests computing frame fields via manifold-based optimization algorithms; we show that these algorithms efficiently produce high-quality fields while maintaining stability and smoothness.
Algebraic Representations for Volumetric Frame Fields
10,570
In this paper we extend the 2D circle average of [11] to a 3D binary average of point-normal pairs, and study its properties. We modify classical surface-generating linear subdivision schemes with this average, obtaining surface-generating schemes that refine point-normal pairs. The modified schemes make it possible to generate more geometries by editing the initial normals. For the case of input data consisting of a mesh only, we present a method for computing "naive" initial normals from the initial mesh. The performance of several modified schemes is compared to that of their linear variants when operating on the same initial mesh, and examples of the editing capabilities of the modified schemes are given. In addition we provide a link to our repository, where we store the initial and refined mesh files and the implementation code. Several videos demonstrating the editing capabilities of the initial normals are available on our YouTube channel.
Extending editing capabilities of subdivision schemes by refinement of point-normal pairs
10,571
Gupta et al. [1, 2] describe a very beautiful application of algebraic geometry to lattice structures composed of quadric of revolution (quador) implicit surfaces. However, the shapes created have concave edges where the stubs meet, and such edges can be stress-raisers which can cause significant problems with, for instance, fatigue under cyclic loading. This note describes a way in which quadric fillets can be added to these models, thus relieving this problem while retaining their computational simplicity and efficiency.
Adding quadric fillets to quador lattice structures
10,572
We present DeepSketchHair, a deep learning based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of hair contour and a few strokes indicating the hair growing direction within a hair region), and automatically generates a 3D hair model, which matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks, namely, S2ONet, which converts an input sketch to a dense 2D hair orientation field; and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by another deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool using a variety of hairstyles and also compare our method with prior art.
DeepSketchHair: Deep Sketch-based 3D Hair Modeling
10,573
Photo realism in computer generated imagery is crucially dependent on how well an artist is able to recreate real-world materials in the scene. The workflow for material modeling and editing typically involves manual tweaking of material parameters and uses a standard path tracing engine for visual feedback. A lot of time may be spent in iterative selection and rendering of materials at an appropriate quality. In this work, we propose a convolutional neural network based workflow which quickly generates high-quality ray traced material visualizations on a shaderball. Our novel architecture assists material selection and supports rendering of spatially-varying materials. Additionally, our network enables control over environment lighting, which gives an artist more freedom and provides better visualization of the rendered material. Comparison with state-of-the-art denoising and neural rendering techniques suggests that our neural renderer performs faster and better. We provide an interactive visualization tool and release our training dataset to foster further research in this area.
A Flexible Neural Renderer for Material Visualization
10,574
We present a fast framework for indoor scene synthesis, given a room geometry and a list of objects with learnt priors. Unlike existing data-driven solutions, which often extract priors by co-occurrence analysis and statistical model fitting, our method measures the strengths of spatial relations by tests for complete spatial randomness (CSR), and extracts complex priors based on samples with the ability to accurately represent discrete layout patterns. With the extracted priors, our method achieves both acceleration and plausibility by partitioning input objects into disjoint groups, followed by layout optimization based on the Hausdorff metric. Extensive experiments show that our framework is capable of measuring more reasonable relations among objects and simultaneously generating varied arrangements in seconds.
Fast 3D Indoor Scene Synthesis with Discrete and Exact Layout Pattern Extraction
10,575
The visualization of hierarchically structured data over time is an ongoing challenge and several approaches exist trying to solve it. Techniques such as animated or juxtaposed tree visualizations are not capable of providing a good overview of the time series and lack expressiveness in conveying changes over time. Nested streamgraphs provide a better understanding of the data evolution, but lack the clear outline of hierarchical structures at a given timestep. Furthermore, these approaches are often limited to static hierarchies or exclude complex hierarchical changes in the data, limiting their use cases. We propose a novel visual metaphor capable of providing a static overview of all hierarchical changes over time, as well as clearly outlining the hierarchical structure at each individual time step. Our method allows for smooth transitions between tree maps and nested streamgraphs, enabling the exploration of the trade-off between dynamic behavior and hierarchical structure. As our technique handles topological changes of all types, it is suitable for a wide range of applications. We demonstrate the utility of our method on several use cases, evaluate it with a user study, and provide its full source code.
SplitStreams: A Visual Metaphor for Evolving Hierarchies
10,576
What is the best representation for doing euclidean geometry on computers? These notes from a SIGGRAPH 2019 short course entitled "Geometric algebra for computer graphics" introduce projective geometric algebra (PGA) as a modern framework for this task. PGA features: uniform representation of points, lines, and planes; robust, parallel-safe join and meet operations; compact, polymorphic syntax for euclidean formulas and constructions; a single intuitive sandwich form for isometries; native support for automatic differentiation; and tight integration of kinematics and rigid body mechanics. PGA includes vector, quaternion, dual quaternion, and exterior algebras as sub-algebras, simplifying the learning curve and transition path for experienced practitioners. On the practical side, it can be efficiently implemented, while its rich syntax enhances programming productivity. The basic ideas are introduced in the 2D context and developed selectively for 3D. Advantages to traditional approaches are collected in a table at the end. The article aims to be a self-contained introduction for practitioners of euclidean geometry and includes numerous examples, formulas, figures, and tables.
Course notes Geometric Algebra for Computer Graphics, SIGGRAPH 2019
10,577
Point cloud filtering is a fundamental problem in geometry modeling and processing. Despite significant advances in recent years, existing methods still suffer from two issues: (1) they are either not designed to preserve sharp features or are less robust in feature preservation; and (2) they usually have many parameters and require tedious parameter tuning. In this paper, we propose a novel deep learning approach that automatically and robustly filters point clouds by removing noise and preserving their sharp features. Our point-wise learning architecture consists of an encoder and a decoder. The encoder directly takes points (a point and its neighbors) as input, and learns a latent representation vector which goes through the decoder to relate the ground-truth position with a displacement vector. The trained neural network can automatically generate a set of clean points from a noisy input. Extensive experiments show that our approach outperforms the state-of-the-art deep learning techniques in terms of both visual quality and quantitative error metrics. The source code and dataset can be found at https://github.com/dongbo-BUAA-VR/Pointfilter.
Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling
10,578
Dimensionality reduction methods are an essential tool for multidimensional data analysis, and many interesting processes can be studied as time-dependent multivariate datasets. There are, however, few studies and proposals that leverage the concise expressive power of projections in the context of dynamic/temporal data. In this paper, we aim at providing an approach to assess projection techniques for dynamic data and understand the relationship between visual quality and stability. Our approach relies on an experimental setup that consists of existing techniques designed for time-dependent data and new variations of static methods. To support the evaluation of these techniques, we provide a collection of datasets that has a wide variety of traits that encode dynamic patterns, as well as a set of spatial and temporal stability metrics that assess the quality of the layouts. We present an evaluation of 11 methods, 10 datasets, and 12 quality metrics, and elect the best-suited methods for projecting time-dependent multivariate data, exploring the design choices and characteristics of each method. All our results are documented and made available in a public repository to allow reproducibility of results.
Quantitative Evaluation of Time-Dependent Multidimensional Projection Techniques
10,579
Researchers have now achieved great success on dealing with 2D images using deep learning. In recent years, 3D computer vision and geometry deep learning have gained more and more attention. Many advanced techniques for 3D shapes have been proposed for different applications. Unlike 2D images, which can be uniformly represented by regular grids of pixels, 3D shapes have various representations, such as depth and multi-view images, voxel-based representation, point-based representation, mesh-based representation, implicit surface representation, etc. However, the performance for different applications largely depends on the representation used, and there is no unique representation that works well for all applications. Therefore, in this survey, we review recent development in deep learning for 3D geometry from a representation perspective, summarizing the advantages and disadvantages of different representations in different applications. We also present existing datasets in these representations and further discuss future research directions.
A Survey on Deep Geometry Learning: From a Representation Perspective
10,580
We propose a novel algorithm to efficiently generate hidden structures to support arrangements of floating rigid objects. Our optimization finds a small set of rods and wires between objects and each other or a supporting surface (e.g., wall or ceiling) that hold all objects in force and torque equilibrium. Our objective function includes a sparsity inducing total volume term and a linear visibility term based on efficiently pre-computed Monte-Carlo integration, to encourage solutions that are as-hidden-as-possible. The resulting optimization is convex and the global optimum can be efficiently recovered via a linear program. Our representation allows for a user-controllable mixture of tension-, compression-, and shear-resistant rods or tension-only wires. We explore applications to theatre set design, museum exhibit curation, and other artistic endeavours.
Levitating Rigid Objects with Hidden Rods and Wires
10,581
We introduce a modeling tool which can evolve a set of 3D objects in a functionality-aware manner. Our goal is for the evolution to generate large and diverse sets of plausible 3D objects for data augmentation, constrained modeling, as well as open-ended exploration to possibly inspire new designs. Starting with an initial population of 3D objects belonging to one or more functional categories, we evolve the shapes through part recombination to produce generations of hybrids or crossbreeds between parents from the heterogeneous shape collection. Evolutionary selection of offspring is guided both by a functional plausibility score derived from functionality analysis of shapes in the initial population and user preference, as in a design gallery. Since cross-category hybridization may result in offspring not belonging to any of the known functional categories, we develop a means for functionality partial matching to evaluate functional plausibility on partial shapes. We show a variety of plausible hybrid shapes generated by our functionality-aware model evolution, which can complement existing datasets as training data and boost the performance of contemporary data-driven segmentation schemes, especially in challenging cases. Our tool supports constrained modeling, allowing users to restrict or steer the model evolution with functionality labels. At the same time, unexpected yet functional object prototypes can emerge during open-ended exploration owing to structure breaking when evolving a heterogeneous collection.
FAME: 3D Shape Generation via Functionality-Aware Model Evolution
10,582
Local and global illumination were recently defined in Riemannian manifolds to visualize classical Non-Euclidean spaces. This work focuses on Riemannian metric construction in $\mathbb{R}^3$ to explore special effects like warping, mirages, and deformations. We investigate the possibility of using graphs of functions and diffeomorphisms to produce such effects. For these, we provide derivations of the Riemannian metrics and geodesics, as well as ways of accumulating such metrics. We visualize the resulting Riemannian manifolds in real time using a ray tracer implemented on top of Nvidia RTX GPUs.
Design and visualization of Riemannian metrics
10,583
In this paper, we propose a novel approach for the quality assessment of pork carcasses using 3D shape analysis. First, we make a 3D model of a pork half-carcass using a 3D scanner and then we take advantage of spectral graph wavelet signature (SGWS) to build a local spectral descriptor. Next, we aggregate the extracted features using the bag-of-geometric-words paradigm to globally represent the half-carcass shape. We then employ partial least-squares regression to predict the weight of pork cuts for the quality assessment of carcasses. Our results demonstrate that SpectralWeight can predict the weight of different pork cuts and tissues with high accuracy. Although in this study we evaluate the performance of SGWS for the weight prediction of pork dissection, our framework is fairly general and enables new ways to estimate the quality and economical value of carcasses of different animals.
SpectralWeight: a spectral graph wavelet framework for weight prediction of pork cuts
10,584
This work introduces progressive spatio-temporal filtering, an efficient method to build all-frequency approximations to the light transport distribution in a scene by filtering individual samples produced by an underlying path sampler, using online, iterative algorithms and data structures that exploit both the spatial and temporal coherence of the approximated light field. Unlike previous approaches, the proposed method is both more efficient, due to its use of an iterative temporal feedback loop that massively improves convergence to a noise-free approximant, and more flexible, due to its introduction of a spatio-directional hashing representation that allows encoding directional variations like those due to glossy reflections. We then introduce four different methods to employ the resulting approximations to control the underlying path sampler and/or modify its associated estimator, greatly reducing its variance and enhancing its robustness to complex lighting scenarios. The core algorithms are highly scalable and low-overhead, requiring only minor modifications to an existing path tracer.
Online path sampling control with progressive spatio-temporal filtering
10,585
We present a method for approximating data points in three- or higher-dimensional space with cubic B-spline curves. Representations for planar curves are merged and extended to the higher dimension. The curve is fitted either to the ordering of the data points, or with uniform parameter values assumed for the points. Tangents are assumed at the data points, following the property used in cardinal splines, to obtain a shape-preserving and visually pleasing fit. Control points of piecewise-continuous cubic Bézier curves, meeting the boundary conditions of cardinal spline segments, are used for the B-spline curve in the corresponding coordinate planes. We also present an approximation using the error computed in the least-squares sense, based on a fraction of the data points.
An error reduced and uniform parameter approximation in fitting of B-spline curves to data points
10,586
In a recent application of the Bokeh Python library for visualizing physico-chemical properties of chemical entities text-mined from the scientific literature, we found ourselves facing the task of smoothing hexagonally binned data in Cartesian coordinates. To the best of our knowledge, no documentation for how to do this exists in the public domain. This short paper shows how to accomplish this in general and for Bokeh in particular. We illustrate the method with a real-world example and discuss some potential advantages of using hexagonal bins in these and similar applications.
Non-Uniform Gaussian Blur of Hexagonal Bins in Cartesian Coordinates
10,587
This paper introduces a novel approach leveraging objective image quality assessment (IQA) metrics to optimize the outcomes of traditional bicubic (BIC) image interpolation and interpolated scan conversion algorithms. Specifically, feature selection through line chart data visualization and computing the IQA metrics scores are used to estimate the IQA-guided coefficient-k that updates the traditional BIC algorithm weighting function. The resulting optimized bicubic (OBIC) algorithm was subjectively and objectively evaluated using natural and ultrasound images. Results showed that the overall performance of the OBIC algorithm was equivalent to 92.22% of 180 occurrences when compared to the BIC algorithm, while it was 57.22% of 180 occurrences when compared to other algorithms. On top of that, the OBIC interpolated scan conversion algorithm generally produced crisper and better contrast cropped ultrasound sectored images than the BIC algorithm, as well as other interpolated scan conversion algorithms mentioned.
Software Implementation of Optimized Bicubic Interpolated Scan Conversion in Echocardiography
10,588
This paper presents a novel recurrent neural network-based method to construct a latent motion manifold that can represent a wide range of human motions in a long sequence. We introduce several new components to increase the spatial and temporal coverage in motion space while retaining the details of motion capture data. These include new regularization terms for the motion manifold, the combination of two complementary decoders for predicting joint rotations and joint velocities, and the addition of a forward kinematics layer to consider both joint rotation and position errors. In addition, we propose a set of loss terms that improve the overall quality of the motion manifold from various aspects, such as the capability of reconstructing not only the motion but also the latent manifold vector, and the naturalness of the motion through adversarial loss. These components contribute to creating a compact and versatile motion manifold that allows new motions to be created by performing random sampling and algebraic operations, such as interpolation and analogy, in the latent motion manifold.
Constructing Human Motion Manifold with Sequential Networks
10,589
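The forward kinematics layer mentioned in the motion-manifold abstract above turns predicted joint rotations into joint positions, so that both rotation and position errors can be penalized. A minimal 2D planar-chain sketch of that idea (the paper's skeleton, rotation parameterization, and exact losses are not reproduced; names here are illustrative):

```python
import math

def forward_kinematics_2d(joint_angles, bone_lengths):
    """Accumulate rotations along a planar chain, returning joint positions."""
    positions = [(0.0, 0.0)]
    total_angle, x, y = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, bone_lengths):
        total_angle += angle            # rotations compose along the chain
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

def position_loss(pred_angles, target_angles, bone_lengths):
    """Mean squared joint-position error, an FK-based loss term."""
    pred = forward_kinematics_2d(pred_angles, bone_lengths)
    target = forward_kinematics_2d(target_angles, bone_lengths)
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(pred, target)) / len(pred)
```

Training against such a position loss, in addition to a rotation loss, penalizes the way small rotation errors near the root amplify into large position errors at the end of the chain.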
Shaded relief is an effective method for visualising terrain on topographic maps, especially when the direction of illumination is adapted locally to emphasise individual terrain features. However, digital shading algorithms are unable to fully match the expressiveness of hand-crafted masterpieces, which are created through a laborious process by highly specialised cartographers. We replicate hand-drawn relief shading using U-Net neural networks. The deep neural networks are trained with manual shaded relief images of the Swiss topographic map series and terrain models of the same area. The networks generate shaded relief that closely resembles hand-drawn shaded relief art. The networks learn essential design principles from manual relief shading, such as removing unnecessary terrain details, locally adjusting the illumination direction to accentuate individual terrain features, and varying brightness to emphasise larger landforms. Neural network shadings are generated from digital elevation models in a few seconds, and a study with 18 relief shading experts found that they are of high quality.
Cartographic Relief Shading with Neural Networks
10,590
We present a novel autoregression network to generate virtual agents that convey various emotions through their walking styles or gaits. Given the 3D pose sequences of a gait, our network extracts pertinent movement features and affective features from the gait. We use these features to synthesize subsequent gaits such that the virtual agents can express and transition between emotions represented as combinations of happy, sad, angry, and neutral. We incorporate multiple regularizations in the training of our network to simultaneously enforce plausible movements and noticeable emotions on the virtual agents. We also integrate our approach with an AR environment using a Microsoft HoloLens and can generate emotive gaits at interactive rates to increase social presence. We evaluate how human observers perceive both the naturalness and the emotions from the generated gaits of the virtual agents in a web-based study. Our results indicate that around 89% of the users found the naturalness of the gaits satisfactory on a five-point Likert scale, and the emotions they perceived from the virtual agents are statistically similar to the intended emotions of the virtual agents. We also use our network to augment existing gait datasets with emotive gaits and will release this augmented dataset for future research in emotion prediction and emotive gait synthesis. Our project website is available at https://gamma.umd.edu/gen_emotive_gaits/.
Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression
10,591
This paper presents a framework that fully leverages the advantages of a deferred rendering approach for the interactive visualization of large-scale datasets. Geometry buffers (G-Buffers) are generated and stored in situ, and shading is performed post hoc in an interactive image-based rendering front end. This decoupled framework has two major advantages. First, the G-Buffers only need to be computed and stored once---which corresponds to the most expensive part of the rendering pipeline. Second, the stored G-Buffers can later be consumed in an image-based rendering front end that enables users to interactively adjust various visualization parameters---such as the applied color map or the strength of ambient occlusion---where suitable choices are often not known a priori. This paper demonstrates the use of Cinema Darkroom (CD) on several real-world datasets, highlighting CD's ability to effectively decouple the complexity and size of the dataset from its visualization.
Cinema Darkroom: A Deferred Rendering Framework for Large-Scale Datasets
10,592
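The decoupling described in the Cinema Darkroom abstract above — compute G-Buffers once, re-shade interactively — can be sketched with numpy arrays standing in for per-pixel buffers; re-shading touches only the stored normals and albedo, never the original geometry pass (the buffer layout and the Lambertian shading model here are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

# --- expensive pass, run once in situ: fill the G-Buffers ----------------
H, W = 4, 4
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0  # all pixels face +z
albedo = np.full((H, W, 3), 0.8)                      # uniform grey surface

# --- cheap post-hoc pass, re-run whenever the user tweaks a parameter ----
def shade(normals, albedo, light_dir, ambient=0.1):
    """Lambertian shading evaluated purely from stored G-Buffer data."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    diffuse = np.clip(normals @ l, 0.0, None)[..., None]
    return np.clip(albedo * (ambient + diffuse), 0.0, 1.0)

img_front = shade(normals, albedo, light_dir=(0.0, 0.0, 1.0))
img_side = shade(normals, albedo, light_dir=(1.0, 0.0, 0.0))  # no re-render
```

Changing the light direction, color map, or ambient term only re-runs `shade` over the stored buffers, which is the interactive half of the decoupled pipeline.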
Current GPU rasterization is limited to narrow fields of view in rectilinear perspective, while industries such as virtual reality and virtual film production demand curvilinear perspective for wide-angle views. This paper delivers a new rasterization method using industry-standard STMaps. Additionally, a new antialiasing rasterization method is proposed that outperforms MSAA in both quality and performance. This work improves upon the previous solutions presented in the author's earlier paper, Perspective picture from Visual Sphere.
Temporally-smooth Antialiasing and Lens Distortion with Rasterization Map
10,593
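An STMap, as used in the rasterization abstract above, is a lookup image whose first two channels give, for each output pixel, the normalized source coordinate to sample, so a single remap pass can encode an arbitrary lens distortion. A minimal nearest-neighbour remap sketch (production pipelines filter and antialias the lookup; this shows only the indirection itself):

```python
import numpy as np

def apply_stmap(src, stmap):
    """Warp `src` (H,W[,C]) by an STMap of normalized (s, t) coordinates."""
    h, w = src.shape[:2]
    s = np.clip((stmap[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    t = np.clip((stmap[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return src[t, s]          # fancy indexing: one gather per output pixel

# identity STMap: each output pixel samples its own source location
h, w = 3, 4
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
identity = np.stack([xs / (w - 1), ys / (h - 1)], axis=-1)

src = np.arange(h * w).reshape(h, w)
warped = apply_stmap(src, identity)   # reproduces src unchanged
```

Editing the map — e.g. flipping or barrel-distorting the stored coordinates — changes the warp without touching the sampling code, which is why STMaps are a convenient interchange format for lens distortion.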
The medial axis transform has applications in numerous fields, including visualization, computer graphics, and computer vision. Unfortunately, traditional medial axis transforms are usually brittle in the presence of outliers, perturbations, and/or noise along the boundary of objects. To overcome this limitation, we introduce a new formulation of the medial axis transform that is naturally robust in the presence of these artifacts. Unlike previous work, which has approached the medial axis from a computational geometry angle, we consider it from a numerical optimization perspective. In this work, we follow the definition of the medial axis transform as "the set of maximally inscribed spheres". We show how this definition can be formulated as a least-squares relaxation, where the transform is obtained by minimizing a continuous optimization problem. The proposed approach is inherently parallelizable, since each sphere is optimized independently using Gauss-Newton, and its least-squares form makes it significantly more robust than traditional computational geometry approaches. Extensive experiments on 2D and 3D objects demonstrate that our method provides superior results to the state of the art on both synthetic and real data.
LSMAT Least Squares Medial Axis Transform
10,594
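The definition quoted in the LSMAT abstract above, "the set of maximally inscribed spheres", can be illustrated with a toy 2D experiment: grow a circle inside a sampled boundary by repeatedly pushing its center away from the nearest boundary sample. This crude subgradient ascent merely stands in for — and is much weaker than — the paper's Gauss-Newton least-squares formulation:

```python
import numpy as np

def inscribed_circle(boundary, center, steps=200):
    """Locally maximize the inscribed radius (min distance to the boundary)
    by pushing the center away from its nearest boundary sample."""
    c = np.asarray(center, dtype=float)
    best_c, best_r = c.copy(), 0.0
    for k in range(steps):
        d = np.linalg.norm(boundary - c, axis=1)
        radius, nearest = d.min(), boundary[d.argmin()]
        if radius > best_r:                    # keep the best iterate seen
            best_c, best_r = c.copy(), radius
        direction = c - nearest
        n = np.linalg.norm(direction)
        if n > 1e-12:
            c = c + (0.5 / (k + 1)) * direction / n  # decaying ascent step
    return best_c, best_r

# boundary of the unit square, densely sampled
t = np.linspace(0.0, 1.0, 200)
square = np.concatenate([
    np.stack([t, np.zeros_like(t)], 1), np.stack([t, np.ones_like(t)], 1),
    np.stack([np.zeros_like(t), t], 1), np.stack([np.ones_like(t), t], 1)])

center, radius = inscribed_circle(square, center=(0.2, 0.3))
# center drifts toward the square's incenter (0.5, 0.5), radius toward 0.5
```

Inside the square the min-distance function is (approximately) concave, so this diminishing-step ascent converges to the maximally inscribed circle; the LSMAT relaxation solves an analogous problem per sphere, but with a smooth least-squares energy amenable to Gauss-Newton.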
We introduce Fat Pad cages for posing facial meshes. Our approach combines a cage representation with facial anatomical elements, and enables users with no artistic skill to quickly sketch realistic facial expressions. The model relies on one or several cages that deform the mesh following the human fat pads map. We propose a new function that filters Green Coordinates using geodesic distances, preventing global deformation while ensuring smooth deformations at the borders. Lips, nostrils, and eyelids are processed slightly differently to allow folding up and opening. Cages are created automatically and fit any new, unknown facial mesh. To validate our approach, we present a user study comparing our Fat Pad cages to regular Green Coordinates. Results show that Fat Pad cages bring a significant improvement in reproducing existing facial expressions.
Fat Pad Cages for Facial Posing
10,595
Producing convincing facial animation has garnered great interest for decades, and that interest has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite their high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion into designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios, such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.
Intuitive Facial Animation Editing Based On A Generative RNN Framework
10,596
We introduce TM-NET, a novel deep generative model for synthesizing textured meshes in a part-aware manner. Once trained, the network can generate novel textured meshes from scratch or predict textures for a given 3D mesh, without image guidance. Plausible and diverse textures can be generated for the same mesh part, while texture compatibility between parts in the same shape is achieved via conditional generation. Specifically, our method produces texture maps for individual shape parts, each as a deformable box, leading to a natural UV map with minimal distortion. The network separately embeds part geometry (via a PartVAE) and part texture (via a TextureVAE) into their respective latent spaces, so as to facilitate learning texture probability distributions conditioned on geometry. We introduce a conditional autoregressive model for texture generation, which can be conditioned on both part geometry and textures already generated for other parts to achieve texture compatibility. To produce high-frequency texture details, our TextureVAE operates in a high-dimensional latent space via dictionary-based vector quantization. We also exploit transparencies in the texture as an effective means to model complex shape structures including topological details. Extensive experiments demonstrate the plausibility, quality, and diversity of the textures and geometries generated by our network, while avoiding inconsistency issues that are common to novel view synthesis methods.
TM-NET: Deep Generative Networks for Textured Meshes
10,597
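The dictionary-based vector quantization that the TM-NET abstract above attributes to its TextureVAE replaces each latent vector with its nearest codebook entry. A minimal numpy sketch of that lookup (codebook contents and dimensions are toy values, not the paper's):

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each row of `latents` to the nearest row of `codebook`
    (L2 distance), returning indices and the quantized vectors."""
    # squared distances via ||a-b||^2 = ||a||^2 - 2 a.b + ||b||^2, shape (N, K)
    d2 = (np.square(latents).sum(1, keepdims=True)
          - 2.0 * latents @ codebook.T
          + np.square(codebook).sum(1))
    indices = d2.argmin(axis=1)
    return indices, codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
latents = np.array([[0.9, 1.2], [1.9, -0.1]])
idx, quantized = vector_quantize(latents, codebook)
# → idx = [1, 2]: each latent snaps to its closest dictionary entry
```

Restricting the latent space to a finite dictionary like this is what lets an autoregressive model predict texture codes as discrete tokens while still decoding high-frequency detail.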
Elastic geodesic grids (EGG) are lightweight structures that can be easily deployed to approximate designer-provided free-form surfaces. In the initial configuration the grids are perfectly flat; during deployment, curvature is induced in the structure as grid elements bend and twist. The layout of a grid is found geometrically: it is based on networks of geodesic curves on free-form design surfaces. Generating a layout with this approach encodes an elasto-kinematic mechanism into the grid that creates the curved shape during deployment. In the final state the grid can be fixed to supports and serve all kinds of purposes, such as free-form sub-structures, paneling, sun and rain protectors, pavilions, etc. However, so far these structures have only been investigated using small-scale desktop models. We investigate the scalability of such structures by presenting a medium-sized model. It was designed by an architecture student without expert knowledge of elastic structures or differential geometry, using only the elastic geodesic grids design pipeline. We further present a fabrication process for EGG models; they can be built quickly and on a small budget.
Design and Fabrication of Elastic Geodesic Grid Structures
10,598
We propose a real-time method to render high-quality images of a non-rotating black hole with an accretion disc and background stars. Our method is based on beam tracing, but uses precomputed tables to find the intersections of each curved light beam with the scene in constant time per pixel. It also uses a specific texture filtering scheme to integrate the contribution of the light sources to each beam. Our method is simple to implement and achieves high frame rates.
Real-time High-Quality Rendering of Non-Rotating Black Holes
10,599
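The constant-time-per-pixel trick in the black-hole abstract above amounts to tabulating an expensive light-bending computation once and replacing per-pixel integration with interpolation into the table. A toy sketch using the weak-field deflection angle of a non-rotating black hole, approximately 2·r_s/b for impact parameter b (illustrative only: the paper's tables map curved beams to scene intersections, not just deflection angles):

```python
import numpy as np

R_S = 1.0  # Schwarzschild radius in scene units (toy value)

def deflection_exact(b):
    """Weak-field deflection angle for impact parameter b >> r_s."""
    return 2.0 * R_S / b

# precompute once: a dense table of deflection vs. impact parameter
table_b = np.linspace(2.0, 100.0, 4096)
table_angle = deflection_exact(table_b)

def deflection_lookup(b):
    """Constant-time per-pixel approximation via linear interpolation."""
    return np.interp(b, table_b, table_angle)

b = np.array([5.0, 20.0, 80.0])   # one impact parameter per pixel
approx = deflection_lookup(b)
exact = deflection_exact(b)
# interpolation error is tiny because the table is dense
```

In a real renderer the tabulated function would be far more expensive (a numerical geodesic integration), which is exactly why amortizing it into a precomputed table pays off at interactive frame rates.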