| text | source | __index_level_0__ |
|---|---|---|
In this technical report, we document our attempt to visualize adaptive heightfields with smooth interpolation using ray casting in real time. The performance of ray casting depends strongly on the interpolant used and on its efficient evaluation. Unfortunately, analytical solutions for ray-surface intersections are given in the literature only for a few simple, piecewise polynomial surfaces. In our use case, we approximate the heightfield with radial basis functions defined on an adaptive grid, for which we propose a two-step solution: First, we reconstruct and discretize the currently visible portion of the smoothly approximated surface into a set of off-screen buffers. In a second step, we interpret these off-screen buffers as regular heightfields that can be rendered efficiently with ray casting using a simple bilinear interpolant. While our approach works, our quantitative evaluation shows that the performance depends strongly on the complexity and size of the heightfield. Real-time performance cannot be achieved for arbitrary heightfields, which is why we report our findings as a failed attempt to use ray casting for practical geospatial visualization in real time. | An Attempt of Adaptive Heightfield Rendering with Complex Interpolants Using Ray Casting | 10,700 |
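The second step of the pipeline above, ray casting a regular heightfield with a bilinear interpolant, can be sketched as follows. This is an illustrative Python sketch, not the authors' GPU implementation; the function names, the fixed-step-plus-bisection strategy, and the unit grid spacing are all assumptions:

```python
import numpy as np

def bilinear_height(H, x, y):
    """Sample heightfield H (2D array, unit cell spacing) bilinearly at (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, H.shape[1] - 1), min(y0 + 1, H.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = H[y0, x0] * (1 - fx) + H[y0, x1] * fx
    bot = H[y1, x0] * (1 - fx) + H[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def raycast_heightfield(H, origin, direction, t_max=100.0, dt=0.05):
    """March a ray until it dips below the bilinear surface; refine by bisection."""
    t, prev_t, prev_above = 0.0, 0.0, None
    while t < t_max:
        p = origin + t * direction
        if 0 <= p[0] < H.shape[1] - 1 and 0 <= p[1] < H.shape[0] - 1:
            above = p[2] > bilinear_height(H, p[0], p[1])
            if prev_above and not above:   # crossed the surface: bisect
                lo, hi = prev_t, t
                for _ in range(20):
                    mid = 0.5 * (lo + hi)
                    q = origin + mid * direction
                    if q[2] > bilinear_height(H, q[0], q[1]):
                        lo = mid
                    else:
                        hi = mid
                return 0.5 * (lo + hi)
            prev_above = above
        prev_t = t
        t += dt
    return None   # no intersection within t_max
```

A production version would traverse grid cells exactly (e.g., with a DDA) rather than taking fixed-size steps, which can skip over thin features.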
Enabling additive manufacturing to employ a wide range of novel, functional materials can be a major boost to this technology. However, making such materials printable requires painstaking trial-and-error by an expert operator, as they typically tend to exhibit peculiar rheological or hysteresis properties. Even in the case of successfully finding the process parameters, there is no guarantee of print-to-print consistency due to material differences between batches. These challenges make closed-loop feedback an attractive option where the process parameters are adjusted on-the-fly. There are several challenges for designing an efficient controller: the deposition parameters are complex and highly coupled, artifacts occur after long time horizons, simulating the deposition is computationally costly, and learning on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing using reinforcement learning. We show that approximate, but efficient, numerical simulation is sufficient as long as it allows learning the behavioral patterns of deposition that translate to real-world experiences. In combination with reinforcement learning, our model can be used to discover control policies that outperform baseline controllers. Furthermore, the recovered policies have a minimal sim-to-real gap. We showcase this by applying our control policy in-vivo on a single-layer, direct ink writing printer. | Closed-Loop Control of Direct Ink Writing via Reinforcement Learning | 10,701 |
We apply an iterative weighting scheme for additive light field synthesis. Unlike previous work, which optimizes the additive light field evenly over all viewpoints, we constrain the optimization to deliver a reconstructed light field of high image quality for viewpoints of large weight. | Weighted Simultaneous Algebra Reconstruction Technique (wSART) for Additive Light Field Synthesis | 10,702 |
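A minimal sketch of how per-viewpoint weights can be folded into a SART-style iteration (illustrative Python; the exact normalization used by wSART may differ):

```python
import numpy as np

def wsart(A, b, weights, n_iter=200, lam=0.5):
    """Weighted SART: rows of A (one per viewpoint ray) with large weight
    are fit more tightly than rows with small weight."""
    w = np.asarray(weights, float)
    x = np.zeros(A.shape[1])
    row_norm = np.abs(A).sum(axis=1)                 # SART row normalization
    col_norm = (w[:, None] * np.abs(A)).sum(axis=0)  # weighted column normalization
    col_norm[col_norm == 0] = 1.0
    for _ in range(n_iter):
        residual = w * (b - A @ x) / np.maximum(row_norm, 1e-12)
        x += lam * (A.T @ residual) / col_norm
    return x
```

Rows with large weight contribute more to both the residual and the normalization, so the reconstruction satisfies them more closely when the system is inconsistent.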
Image pixel aliasing caused by insufficient sampling is a long-standing problem in the field of computer graphics, and anti-aliasing algorithms that are both fast and effective have long been a goal of researchers. To address the deficiencies of local detection and reconstruction of sloping line boundaries, a morphological anti-aliasing method based on boundary slope prediction is proposed. The method uses the local slope of a line boundary to predict and test the end positions of that boundary in the global scope, thereby reconstructing boundary information that is more consistent with the actual boundary; a more accurate linear boundary shape is obtained with only a small increase in the amount of computation. Compared with previous morphological anti-aliasing algorithms, the proposed method operates on the global morphological boundary and can reconstruct straight-line boundaries more accurately. Applied to anti-aliasing, this further improves the color transition along straight-line boundaries, gives inclined boundaries higher continuity, and yields a better anti-aliasing effect. | Morphological Anti-Aliasing Method for Boundary Slope Prediction | 10,703 |
Recently, public displays such as liquid crystal displays (LCDs) are often used in urban green spaces; however, such display devices can spoil the green landscape because they look like artificial materials. We previously proposed a green landscape-friendly grass animation display method that dynamically controls grass color pixel by pixel. The grass color is changed by varying the length of green grass within yellow grass, and the grass animation display can play simple animations using grayscale images. In the previous research, the color scale was mapped to the green grass length subjectively; that method did not achieve displaying grass colors corresponding to the color scale based on objective evaluations. Here, we introduce a dynamic grass color scale display technique based on grass length. In this paper, we developed a grass color scale setting procedure that maps the grass length to a five-level color scale through image processing. Through an outdoor experiment with this procedure, the color scale was made to correspond to the green grass length for a given viewpoint. After the experiments, we demonstrated a grass animation display showing animations with the color scale using the experimental results. | Dynamic Grass Color Scale Display Technique Based on Grass Length for Green Landscape-Friendly Animation Display | 10,704 |
Motion capture from sparse inertial sensors has shown great potential compared to image-based approaches, since occlusions do not reduce tracking quality and the recording space is not restricted to the viewing frustum of a camera. However, capturing the motion and global position from only a sparse set of inertial sensors is inherently ambiguous and challenging. In consequence, recent state-of-the-art methods can barely handle very long-period motions, and unrealistic artifacts are common due to the unawareness of physical constraints. To this end, we present the first method that combines a neural kinematics estimator and a physics-aware motion optimizer to track body motions with only 6 inertial sensors. The kinematics module first regresses the motion status as a reference, and then the physics module refines the motion to satisfy the physical constraints. Experiments demonstrate a clear improvement over the state of the art in terms of capture accuracy, temporal stability, and physical correctness. | Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors | 10,705 |
The oriented bounding box tree (OBB-Tree for short) has a wide range of applications in collision detection, real-time rendering, and related areas. We study the construction of hierarchical oriented bounding boxes for solid mesh models and propose a new optimization-based solution. First, the volume of the bounding boxes that lies outside the solid mesh model is treated as the approximation error, and a hardware-accelerated method for computing this error is given. Second, the hierarchical bounding box construction problem is cast as a variational approximation problem, and the optimal hierarchical oriented bounding boxes are obtained by minimizing the global error. For the optimization, we propose combining Lloyd clustering iterations within a layer with MultiGrid-like reciprocating iterations between layers. Compared with previous results, this method produces hierarchical oriented bounding box approximations that fit the original solid mesh model more tightly. In practical collision detection applications, the hierarchies constructed with this method reduce the computation time of collision detection and improve detection efficiency. | Variational Hierarchical Directed Bounding Box Construction for Solid Mesh Models | 10,706 |
As shared, collaborative, networked virtual environments become increasingly popular, various challenges arise regarding the efficient transmission of model and scene transformation data over the network. As user immersion and real-time interactions heavily depend on VR stream synchronization, transmitting the entire data set does not seem a suitable approach, especially for sessions involving a large number of users. Session recording is another momentum-gaining feature of VR applications that faces the same challenge. The selection of a suitable data format can reduce the occupied volume, while it may also allow effective replication of the VR session and optimized post-processing for analytics and deep-learning algorithms. In this work, we propose two algorithms that can be applied in the context of a networked multiplayer VR session to efficiently transmit the displacement and orientation data from the users' hand-based VR HMDs. Moreover, we present a novel method describing effective VR recording of the data exchanged in such a session. Our algorithms, based on the use of dual quaternions and multivectors, impact the network consumption rate and are highly effective in scenarios involving multiple users. By sending less data over the network and interpolating the in-between frames locally, we manage to obtain better visual results than current state-of-the-art methods. Lastly, we prove that, for recording purposes, storing less data and interpolating it on demand yields a data set quantitatively close to the original one. | Less Is More: Efficient Networked VR Transformation Handling Using Geometric Algebra | 10,707 |
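The "send less, interpolate locally" scheme can be illustrated with plain quaternions standing in for the paper's dual quaternions and multivectors (hypothetical Python sketch; `reconstruct` and the keyframe layout are assumptions, not the paper's protocol):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    d = np.dot(q0, q1)
    if d < 0.0:                 # take the short arc
        q1, d = -q1, -d
    if d > 0.9995:              # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(d, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def reconstruct(keyframes, stride):
    """Rebuild a full pose stream from every `stride`-th transmitted
    (position, orientation) pair by interpolating on the receiver."""
    out = []
    for (p0, q0), (p1, q1) in zip(keyframes, keyframes[1:]):
        for i in range(stride):
            t = i / stride
            out.append(((1 - t) * np.asarray(p0) + t * np.asarray(p1),
                        slerp(q0, q1, t)))
    out.append(keyframes[-1])
    return out
```

Only the keyframes cross the network; the in-between frames are synthesized on the receiving client, which is where the bandwidth saving comes from.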
Multifield datasets are common in many research and engineering applications of computational science. Effective visualization of such datasets can facilitate their analysis by elucidating the complex and dynamic interactions that exist between the attributes that describe the physics of the phenomenon. We present in this paper a new hybrid Lagrangian-Eulerian model that extends existing Lagrangian visualization techniques to the analysis of multifield problems. In particular, our approach factors in the entire data space to reveal the structure of multifield datasets, thereby combining both Eulerian and Lagrangian perspectives. We evaluate our technique in the context of several fluid dynamics applications. Our results indicate that our proposed approach is able to characterize important structural features that are missed by existing methods. | A Hybrid Lagrangian-Eulerian Model for the Structural Analysis of Multifield Datasets | 10,708 |
This paper proposes a new hybrid algorithm for sampling virtual point lights (VPLs) so that the VPLs used in the scene's indirect lighting calculation are distributed reasonably. When generating VPLs, we divide the scene into two parts according to the camera position and orientation: a close-range part, which the camera attends to, and a distant-range part, which the camera does not attend to or rarely attends to. For the close-range part, we use a patch-based VPL sampling method to distribute the VPLs as evenly as possible over the patches in the near-field area; for the distant-range part, we use sparse instant radiosity (IR) sampling. It turns out that, compared with conventional instant radiosity-based VPL generation algorithms, the proposed method greatly improves the quality of the final image for the same number of VPLs, and greatly improves the rendering speed for the same rendering quality. | A Virtual Point Light Generation Method in Close-Range Area | 10,709 |
In this work, we introduce a novel method to render, in real time, Lambertian surfaces with a rough dielectric coating. We show that the appearance of such configurations is faithfully represented with two microfacet lobes accounting for direct and indirect interactions respectively. We numerically fit these lobes based on the first-order directional statistics (energy, mean and variance) of light transport using 5D tables and narrow them down to 2D + 1D with analytical forms and dimension reduction. We demonstrate the quality of our method by efficiently rendering rough plastics and ceramics, closely matching ground truth. In addition, we improve a state-of-the-art layered material model to include Lambertian interfaces. | Rendering Layered Materials with Diffuse Interfaces | 10,710 |
Linearly Transformed Cosines (LTCs) are a family of distributions that are used for real-time area-light shading thanks to their analytic integration properties. Modern game engines use an LTC approximation of the ubiquitous GGX model, but currently this approximation only exists for isotropic GGX and thus anisotropic GGX is not supported. While the higher dimensionality presents a challenge in itself, we show that several additional problems arise when fitting, post-processing, storing, and interpolating LTCs in the anisotropic case. Each of these operations must be done carefully to avoid rendering artifacts. We find robust solutions for each operation by introducing and exploiting invariance properties of LTCs. As a result, we obtain a small $8^4$ look-up table that provides a plausible and artifact-free LTC approximation to anisotropic GGX and brings it to real-time area-light shading. | Bringing Linearly Transformed Cosines to Anisotropic GGX | 10,711 |
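For reference, an LTC is evaluated by pulling a direction back through the inverse linear transform and applying the change-of-variables Jacobian to a clamped-cosine lobe. A hedged Python sketch (the real-time method works with the fitted $8^4$ table in shader code, not per-direction Python calls):

```python
import numpy as np

def ltc_pdf(M_inv, w):
    """Density of a linearly transformed cosine: transform direction w back
    through M^{-1}, evaluate the clamped-cosine lobe about +z, and apply
    the Jacobian of the spherical change of variables."""
    wo = M_inv @ w
    l = np.linalg.norm(wo)
    wo = wo / l
    D0 = max(wo[2], 0.0) / np.pi        # clamped cosine, normalized over hemisphere
    return D0 * abs(np.linalg.det(M_inv)) / l**3
```

With `M_inv` set to the identity the density reduces to the clamped cosine, which integrates to one over the sphere; anisotropy enters through off-axis scaling in the matrix.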
The photorealistic rendering of translucent objects has been a hot research topic in recent years. A real-time photorealistic rendering and dynamic material editing method for the diffuse scattering effects of translucent objects is proposed, based on the Dipole approximation of the bidirectional scattering-surface reflectance distribution function (BSSRDF). Through principal component analysis, the diffuse scattering material function in the Dipole approximation is decomposed into the product of a shape-related function and a translucent-material-related function; using this decomposition, real-time editing of translucent object materials under various light sources is realized within a precomputed radiance transfer rendering framework. In addition, a method for a second wavelet compression of the precomputed radiance transfer data in the spatial domain is proposed. By exploiting the correlation of surface points in their spatial distribution, the data is greatly compressed and rendering efficiency is improved while preserving rendering quality. The experimental results show that the method in this paper can generate highly realistic translucent effects while ensuring real-time rendering speed. | Real-time Rendering and Editing of Scattering Effects for Translucent Objects | 10,712 |
Precomputed Radiance Transfer (PRT) can achieve high quality renders of glossy materials at real-time framerates. PRT involves precomputing a k-dimensional transfer vector of Spherical Harmonic (SH) coefficients at specific points for a scene. Most prior art precomputes transfer at vertices of the mesh and interpolates color for interior points. They require finer mesh tessellations for high quality renderings. In this paper, we explore and present the use of textures for storing transfer. Using transfer textures decouples mesh resolution from transfer storage and sampling which is useful especially for glossy renders. We further demonstrate glossy inter-reflections by precomputing additional textures. We thoroughly discuss practical aspects of transfer textures and analyze their performance in real-time rendering applications. We show equivalent or higher render quality and FPS and demonstrate results on several challenging scenes. | PRTT: Precomputed Radiance Transfer Textures | 10,713 |
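The core PRT shading operation, at a vertex or at a texel of a transfer texture, is a dot product between the k-dimensional transfer vector and the SH coefficients of the lighting. A hypothetical NumPy sketch (function names are assumptions, not the paper's API):

```python
import numpy as np

def sh_shade(transfer, light_coeffs):
    """Per-point PRT shading: outgoing radiance is the dot product of the
    precomputed transfer vector with the SH coefficients of the lighting."""
    return float(np.dot(transfer, light_coeffs))

def shade_texture(transfer_tex, light_coeffs):
    """Transfer texture of shape (H, W, k) -> shaded image of shape (H, W).
    Storing transfer per texel decouples shading from mesh tessellation."""
    return np.einsum('hwk,k->hw', transfer_tex, light_coeffs)
```

Because the lighting coefficients are constant per frame, the whole texture shades with one contraction, which maps directly to a texture fetch plus dot product in a fragment shader.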
Holographic displays promise to deliver unprecedented display capabilities in augmented reality applications, featuring a wide field of view, wide color gamut, spatial resolution, and depth cues all in a compact form factor. While emerging holographic display approaches have been successful in achieving large etendue and high image quality as seen by a camera, the large etendue also reveals a problem that makes existing displays impractical: the sampling of the holographic field by the eye pupil. Existing methods have not investigated this issue due to the lack of displays with large enough etendue, and, as such, they suffer from severe artifacts with varying eye pupil size and location. We show that the holographic field as sampled by the eye pupil is highly varying for existing display setups, and we propose pupil-aware holography that maximizes the perceptual image quality irrespective of the size, location, and orientation of the eye pupil in a near-eye holographic display. We validate the proposed approach both in simulations and on a prototype holographic display and show that our method eliminates severe artifacts and significantly outperforms existing approaches. | Pupil-aware Holography | 10,714 |
We introduce the concept of a fiber bundle color space, which acts according to the psychophysiological rules of trichromatic color perception by a human. The image resides in the base space of the fiber bundle, and the fiber color space contains the color vectors. Further, we propose a decomposition of color vectors into spectral and achromatic parts. A homomorphism between a color image and a constructed two-dimensional vector field is demonstrated, which allows us to apply well-known advanced methods of vector analysis to a color image and ultimately obtain new numerical characteristics of the image. An appropriate forward mapping from image to vector field is constructed, and the proposed backward mapping algorithm converts a two-dimensional vector field into a color image. A type of image filter built from sequential forward and backward mappings is described. An example of color image formation based on the two-dimensional magnetic vector field scattered by a typical pipeline defect is given. | Forward and backward mapping of image to 2D vector field using fiber bundle color space | 10,715 |
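The proposed split into achromatic and spectral parts can be illustrated as an orthogonal projection onto the gray axis of RGB space (an assumed concrete realization; the paper's fiber-bundle construction is more general):

```python
import numpy as np

GRAY_AXIS = np.ones(3) / np.sqrt(3.0)   # achromatic direction in RGB space

def decompose(rgb):
    """Split a color vector into an achromatic part (projection onto the
    gray axis) and a spectral/chromatic remainder orthogonal to it."""
    rgb = np.asarray(rgb, float)
    achromatic = np.dot(rgb, GRAY_AXIS) * GRAY_AXIS
    return achromatic, rgb - achromatic
```

The remainder lives in a two-dimensional subspace orthogonal to the gray axis, which is what makes a mapping between color images and 2D vector fields possible.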
We propose a software rasterization pipeline for point clouds that is capable of brute-force rendering up to two billion points in real time (60fps). Improvements over the state of the art are achieved by batching points in a way that a number of batch-level optimizations can be computed before rasterizing the points within the same rendering pass. These optimizations include frustum culling, level-of-detail rendering, and choosing the appropriate coordinate precision for a given batch of points directly within a compute workgroup. Adaptive coordinate precision, in conjunction with visibility buffers, reduces the number of loaded bytes for the majority of points down to 4, thus making our approach several times faster than the bandwidth-limited state of the art. Furthermore, support for LOD rendering makes our software-rasterization approach suitable for rendering arbitrarily large point clouds, and to meet the increased performance demands of virtual reality rendering. | Software Rasterization of 2 Billion Points in Real Time | 10,716 |
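The adaptive coordinate precision idea, encoding each point relative to its batch's bounding box with only as many bits as the batch extent requires, can be sketched as follows (illustrative Python; the actual pipeline packs these bits into 4-byte words inside a compute shader):

```python
import numpy as np

def quantize_batch(points, bits):
    """Quantize a batch of points to `bits` bits per axis, relative to the
    batch's own bounding box; small batches therefore need few bits."""
    lo = points.min(axis=0)
    extent = points.max(axis=0) - lo
    extent[extent == 0] = 1.0            # avoid division by zero on flat axes
    scale = (2**bits - 1) / extent
    q = np.round((points - lo) * scale).astype(np.uint32)
    return q, lo, scale

def dequantize_batch(q, lo, scale):
    """Recover approximate world-space coordinates from batch-local codes."""
    return q.astype(float) / scale + lo
```

The reconstruction error is bounded by half a quantization step per axis, so the bit count per batch can be chosen from the batch extent and the target screen-space precision.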
The sensitivity of parameters in computational science problems is difficult to assess, especially for algorithms with multiple input parameters and diverse outputs. This work explores sensitivity analysis in the visualization domain, introducing novel techniques for the visual analysis of parameter sensitivity in multi-dimensional algorithms. First, the background of sensitivity analysis is revisited, covering its definition, approaches for analyzing global and local sensitivity, and the differences between sensitivity analysis and the more common uncertainty analysis. We introduce and explore parameter sensitivity using visualization techniques ranging from overviews to details on demand, covering the analysis of all aspects of sensitivity in a prototypical implementation. The respective visualization techniques outline the algorithmic inputs and outputs, including indications of how sensitive an input is with regard to the outputs. Detailed sensitivity information is communicated through constellation plots for the exploration of input and output spaces. A matrix view is discussed for localized information on the sensitivity of specific outputs to specific inputs. A 3D view links parameter sensitivity to the spatial domain in which the results of the multi-dimensional algorithms are embedded. The proposed sensitivity analysis techniques are implemented and evaluated in a prototype called Sensitivity Explorer. We show that Sensitivity Explorer reliably identifies the most influential parameters and provides insights into which output characteristics they affect and to what extent. | Sensitive vPSA -- Exploring Sensitivity in Visual Parameter Space Analysis | 10,717 |
In some entertainment and virtual reality applications, it is necessary to model and draw the real world realistically, so as to improve the fidelity of natural scenes and give users a better sense of immersion. However, the complexity and variety of the morphological structure of trees present many challenges for photorealistic modeling and rendering. This paper reviews the progress achieved in photorealistic modeling and rendering of tree branches, leaves, and bark over the past few decades. The main achievements are rule-based procedural tree modeling methods. | Rule-based Procedural Tree Modeling Approach | 10,718 |
We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, out-of-core management to load data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a position- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in GPU memory at any given time. The atomistic detail is populated into a uniform space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture generated by rendering the geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes at an atomistic resolution far beyond the limit of GPU memory, containing trillions of atoms. We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell. | Nanomatrix: Scalable Construction of Crowded Biological Environments | 10,719 |
There are plenty of excellent plotting libraries. Each excels at a different use case: one is good for printed 2D publication figures, another at interactive 3D graphics, a third has excellent LaTeX integration or is good for creating dashboards on the web. The aim of Plots.jl is to enable the user to use the same syntax to interact with many different plotting libraries, such that it is possible to change the library "backend" without needing to touch the code that creates the content -- and without having to learn yet another application programming interface (API). This is achieved by the separation of the plot specification from the implementation of the actual graphical backend. These plot specifications may be extended by a "recipe" system, which allows package authors and users to define how to plot any new type (be it a statistical model, a map, a phylogenetic tree or the solution to a system of differential equations) and create new types of plots -- without depending on the Plots.jl package. This supports a modular ecosystem structure for plotting and yields a high reuse potential across the entire julia package ecosystem. Plots.jl is publicly available at https://github.com/JuliaPlots/Plots.jl. | Plots.jl -- a user extendable plotting API for the julia programming language | 10,720 |
There currently exist two main approaches to reproducing visual appearance using Machine Learning (ML): The first is training models that generalize over different instances of a problem, e.g., different images of a dataset. As one-shot approaches, these offer fast inference, but often fall short in quality. The second approach does not train models that generalize across tasks, but rather over-fit a single instance of a problem, e.g., a flash image of a material. These methods offer high quality, but take long to train. We suggest to combine both techniques end-to-end using meta-learning: We over-fit onto a single problem instance in an inner loop, while also learning how to do so efficiently in an outer-loop across many exemplars. To this end, we derive the required formalism that allows applying meta-learning to a wide range of visual appearance reproduction problems: textures, BRDFs, svBRDFs, illumination or the entire light transport of a scene. The effects of meta-learning parameters on several different aspects of visual appearance are analyzed in our framework, and specific guidance for different tasks is provided. Metappearance enables visual quality that is similar to over-fit approaches in only a fraction of their runtime while keeping the adaptivity of general models. | Metappearance: Meta-Learning for Visual Appearance Reproduction | 10,721 |
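The inner/outer loop structure can be sketched with a first-order MAML-style update on a toy 1-D linear task (a deliberately tiny stand-in: the paper meta-learns neural appearance models, and full MAML also backpropagates through the inner step):

```python
import numpy as np

def fomaml_linear(tasks, alpha=0.1, beta=0.05, meta_steps=500, rng=None):
    """First-order MAML on 1-D linear tasks y = a * x: the inner loop
    over-fits one task from the shared init w0; the outer loop moves w0
    toward whatever made that adaptation easy."""
    if rng is None:
        rng = np.random.default_rng(0)
    w0 = 0.0
    for _ in range(meta_steps):
        a = rng.choice(tasks)                   # sample a task instance
        x = rng.uniform(-1, 1, 16)
        y = a * x
        grad = np.mean(2 * (w0 * x - y) * x)    # inner gradient at w0
        w_adapted = w0 - alpha * grad           # one inner over-fitting step
        x2 = rng.uniform(-1, 1, 16)             # fresh data from the same task
        grad_outer = np.mean(2 * (w_adapted * x2 - a * x2) * x2)
        w0 -= beta * grad_outer                 # first-order outer update
    return w0
```

The meta-learned initialization settles between the tasks, so a single inner step from it reaches any individual task quickly; this is the mechanism that trades the runtime of pure over-fitting for the adaptivity of a general model.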
In six-degrees-of-freedom light-field (LF) experiences, the viewer's freedom is limited by the extent to which the plenoptic function was sampled. Existing LF datasets represent only small portions of the plenoptic function: they either cover a small volume or have a limited field of view. Therefore, we propose a new LF image dataset, "SILVR", that allows six-degrees-of-freedom navigation in much larger volumes while maintaining a full panoramic field of view. We rendered three different virtual scenes in various configurations, where the number of views ranges from 642 to 2226. One of these scenes (called Zen Garden) is a novel scene and is made publicly available. We chose to position the virtual cameras closely together in large cuboid and spherical organisations ($2.2m^3$ to $48m^3$), equipped with 180° fish-eye lenses. Every view is rendered to a color image and a depth map of 2048px $\times$ 2048px. Additionally, we present the software used to automate the multi-view rendering process, as well as a lens-reprojection tool that converts images with panoramic or fish-eye projection to a standard rectilinear (i.e., perspective) projection. Finally, we demonstrate how the proposed dataset and software can be used to evaluate LF coding/rendering techniques (in this case for training NeRFs with instant-ngp). As such, we provide the first publicly available LF dataset for large volumes of light with a full panoramic field of view. | SILVR: A Synthetic Immersive Large-Volume Plenoptic Dataset | 10,722 |
Software libraries for Topological Data Analysis (TDA) offer limited support for interactive visualization. Most libraries only allow visualizing topological descriptors (e.g., persistence diagrams) and lose the connection with the original domain of the data. This makes it challenging for users to interpret the results of a TDA pipeline in an exploratory context. In this paper, we present TopoEmbedding, a web-based tool that simplifies the interactive visualization and analysis of persistence-based descriptors. TopoEmbedding allows non-experts in TDA to explore similarities and differences found by TDA descriptors with simple yet effective visualization techniques. | TopoEmbedding, a web tool for the interactive analysis of persistent homology | 10,723 |
Kinesthetic garments provide physical feedback on body posture and motion through tailored distributions of reinforced material. Their ability to selectively stiffen a garment's response to specific motions makes them appealing for rehabilitation, sports, robotics, and many other application fields. However, finding designs that distribute a given amount of reinforcement material to maximally stiffen the response to specified motions is a challenging problem. In this work, we propose an optimization-driven approach for the automated design of reinforcement patterns for kinesthetic garments. Our main contribution is to cast this design task as an on-body topology optimization problem. Our method allows designers to explore a continuous range of designs corresponding to various amounts of reinforcement coverage. Our model captures both tight contact and lift-off separation between cloth and body. We demonstrate our method on a variety of reinforcement design problems for different body sites and motions. Optimal designs lead to a two- to threefold improvement in performance in terms of energy density. A set of manufactured designs was consistently rated as providing more resistance than baselines in a comparative user study. | Computational Design of Kinesthetic Garments | 10,724 |
In this paper, we present a new workflow for the computer-aided generation of physicalizations, addressing nested configurations in anatomical and biological structures. Physicalizations are an important component of anatomical and biological education and edutainment. However, existing approaches have mainly revolved around creating data sculptures through digital fabrication. Only a few recent works proposed computer-aided pipelines for generating sculptures, such as papercrafts, with affordable and readily available materials. Papercraft generation remains a challenging topic by itself. Yet, anatomical and biological applications pose additional challenges, such as reconstruction complexity and the insufficiency of existing pipelines to account for multiple, nested structures, which are often present in anatomical and biological models. Our workflow comprises the following steps: (i) define the nested configuration of the model and detect its levels, (ii) calculate the viewpoint that provides optimal, unobstructed views on inner levels, (iii) perform cuts on the outer levels to reveal the inner ones based on the viewpoint selection, (iv) estimate the stability of the cut papercraft to ensure a reliable outcome, (v) generate textures at each level, as a smart visibility mechanism that provides additional information on the inner structures, and (vi) unfold each textured mesh guaranteeing reconstruction. Our novel approach exploits the interactivity of nested papercraft models for edutainment purposes. | Nested Papercrafts for Anatomical and Biological Edutainment | 10,725 |
The performance of applications that require frame rendering time estimation or dynamic frequency scaling relies on the accuracy of the workload model utilized within these applications. Existing models lack sufficient accuracy in their core model; hence, they require changes to the target application or the hardware to produce accurate results. This paper introduces a mathematical workload model for a rasterization-based graphics Application Programming Interface (API) pipeline, named GAMORRA, which works based on the load and complexity of each stage of the pipeline. First, GAMORRA models each stage of the pipeline based on its operation complexity and input data size. Then, the calculated workloads of the stages are fed to a Multiple Linear Regression (MLR) model as explanatory variables. A hybrid offline/online training scheme is proposed as well to train the model. A suite of benchmarks is also designed to tune the model parameters based on the performance of the target system. The experiments were performed on Direct3D 11 on two different rendering platforms, comparing GAMORRA to an AutoRegressive (AR) model, a Frame Complexity Model (FCM), and a frequency-based (FRQ) model. The experiments show an average frame rendering time estimation error of 1.27 ms (9.45%), compared to an average error of 1.87 ms (13.23%) for FCM, the best of the three baseline methods. However, this comes at the cost of a 0.54 ms (4.58%) increase in time complexity compared to FCM. Furthermore, GAMORRA reduces frame time underestimations by 1.1% compared to FCM. | GAMORRA: An API-Level Workload Model for Rasterization-based Graphics Pipeline Architecture | 10,726
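The MLR core of a workload model like the one this row describes can be sketched with NumPy least squares. All stage names, feature values, and cost coefficients below are hypothetical toy data, not GAMORRA's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame workloads for four pipeline stages (e.g. vertex,
# rasterization, fragment, output merge) and synthetic frame times that
# follow a linear cost model plus a fixed per-frame overhead.
stage_workloads = rng.uniform(1e5, 2e7, size=(8, 4))
true_costs = np.array([2e-6, 1e-6, 4e-7, 8e-7])   # ms per unit of work
frame_times_ms = stage_workloads @ true_costs + 1.5

# Offline training step: fit the MLR with an intercept column.
X = np.hstack([stage_workloads, np.ones((len(frame_times_ms), 1))])
coeffs, *_ = np.linalg.lstsq(X, frame_times_ms, rcond=None)

def predict_frame_time(workloads):
    """Estimate the frame rendering time (ms) for new stage workloads."""
    return float(np.append(workloads, 1.0) @ coeffs)
```

An online scheme, as the abstract suggests, would refit `coeffs` incrementally as new frame timings arrive.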
3D volume rendering is widely used to reveal insightful intrinsic patterns of volumetric datasets across many domains. However, the complex structures and varying scales of volumetric data can make efficiently generating high-quality volume rendering results a challenging task. Multivariate functional approximation (MFA) is a new data model that addresses some of the critical challenges: high-order evaluation of both value and derivative anywhere in the spatial domain, compact representation for large-scale volumetric data, and uniform representation of both structured and unstructured data. In this paper, we present MFA-DVR, the first direct volume rendering pipeline utilizing the MFA model, for both structured and unstructured volumetric datasets. We demonstrate improved rendering quality using MFA-DVR on both synthetic and real datasets through a comparative study. We show that MFA-DVR not only generates more faithful volume rendering than using local filters but also performs faster on high-order interpolations on structured and unstructured datasets. MFA-DVR is implemented in the existing volume rendering pipeline of the Visualization Toolkit (VTK) to be accessible by the scientific visualization community. | MFA-DVR: Direct Volume Rendering of MFA Models | 10,727 |
Rendering on conventional computers is capable of generating realistic imagery, but the computational complexity of these light transport algorithms is a limiting factor of image synthesis. Quantum computers have the potential to significantly improve rendering performance by reducing the underlying complexity of the algorithms behind light transport. This paper investigates hybrid quantum-classical algorithms for ray tracing, a core component of most rendering techniques. Through a practical implementation of quantum ray tracing in a 3D environment, we show quantum approaches provide a quadratic improvement in query complexity compared to the equivalent classical approach. Based on domain-specific knowledge, we then propose algorithms to significantly reduce the computation required for quantum ray tracing by exploiting image-space coherence and a principled termination criterion for quantum searching. We show results for both Whitted-style ray tracing and for accelerating ray tracing operations when performing classical Monte Carlo integration for area lights and indirect illumination. | Towards Quantum Ray Tracing | 10,728
Modeling perception is critical for many applications and developments in computer graphics to optimize and evaluate content generation techniques. Most of the work to date has focused on central (foveal) vision. However, this is insufficient for novel wide-field-of-view display devices, such as virtual and augmented reality headsets. Furthermore, the perceptual models proposed for the fovea do not readily extend to the off-center, peripheral visual field, where human perception is drastically different. In this paper, we focus on modeling the temporal aspect of visual perception in the periphery. We present new psychophysical experiments that measure the sensitivity of human observers to different spatio-temporal stimuli across a wide field of view. We use the collected data to build a perceptual model for the visibility of temporal changes at different eccentricities in complex video content. Finally, we discuss, demonstrate, and evaluate several problems that can be addressed using our technique. First, we show how our model enables injecting new content into the periphery without distracting the viewer, and we discuss the link between the model and human attention. Second, we demonstrate how foveated rendering methods can be evaluated and optimized to limit the visibility of temporal aliasing. | Perceptual Visibility Model for Temporal Contrast Changes in Periphery | 10,729 |
We consider the problem of multiple scattering on Smith microfacets. This problem is equivalent to computing volumetric light transport in a homogeneous slab. Although the symmetry of the slab allows for significant simplification, fully analytic solutions are scarce and not general enough for most applications. Standard Monte Carlo simulation, although general, is expensive and leads to variance that must be dealt with. We present the first unbiased, truly position-free path integral for evaluating the BSDF of a homogeneous slab. We collapse the spatially-1D path integral of previous works to a position-free form using an analytical preintegration of collision distances. Evaluation of the resulting path integral, which now contains only directions, reduces to simple recursive manipulation of exponential distributions. Applying Monte Carlo to solve the reduced integration problem leads to lower variance. Our new algorithm allows us to render multiple scattering on Smith microfacets with less variance than prior work, and, in the case of conductors, to do so without any bias. Additionally, our algorithm can also be used to accelerate the rendering of BSDFs containing volumetrically scattering layers, at reduced variance compared to standard Monte Carlo integration. | A Position-Free Path Integral for Homogeneous Slabs and Multiple Scattering on Smith Microfacets | 10,730
Polycube-maps are used as base-complexes in various fields of computational geometry, including the generation of regular all-hexahedral meshes free of internal singularities. However, the strict alignment constraints behind polycube-based methods make their computation challenging for CAD models used in numerical simulation via Finite Element Method (FEM). We propose a novel approach based on an evolutionary algorithm to robustly compute polycube-maps in this context. We address the labeling problem, which aims to precompute polycube alignment by assigning one of the base axes to each boundary face on the input. Previous research has described ways to initialize and improve a labeling via greedy local fixes. However, such algorithms lack robustness and often converge to inaccurate solutions for complex geometries. Our proposed framework alleviates this issue by embedding labeling operations in an evolutionary heuristic, defining fitness, crossover, and mutations in the context of labeling optimization. We evaluate our method on a thousand smooth and CAD meshes, showing Evocube converges to valid labelings on a wide range of shapes. The limitations of our method are also discussed thoroughly. | Evocube: a Genetic Labeling Framework for Polycube-Maps | 10,731 |
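The evolutionary loop this abstract describes — fitness, crossover, and mutation over per-face axis labels — can be illustrated with a generic skeleton. The fitness below is a stand-in (agreement with a known target labeling); Evocube's real fitness scores polycube validity and distortion:

```python
import random

AXES = ["+X", "-X", "+Y", "-Y", "+Z", "-Z"]

def fitness(labeling, target):
    # Stand-in fitness: agreement with a hypothetical per-face target.
    return sum(a == b for a, b in zip(labeling, target))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(labeling, rate=0.05):
    return [random.choice(AXES) if random.random() < rate else l
            for l in labeling]

def evolve(target, pop_size=40, generations=60, seed=1):
    random.seed(seed)
    n = len(target)
    pop = [[random.choice(AXES) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda l: fitness(l, target), reverse=True)
        elite = pop[: pop_size // 4]          # elitism keeps the best labelings
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=lambda l: fitness(l, target))
```

Because the elite set is carried over unchanged, the best fitness is non-decreasing across generations, mirroring how the framework converges toward valid labelings.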
We present a novel integration of a real-time continuous tearing algorithm for 3D meshes in VR, suitable for devices with low CPU/GPU specifications, along with a particle decomposition that allows soft-body deformations on both the original and the torn model. | Realistic soft-body tearing under 10ms in VR | 10,732
Mathematically representing the shape of an object is a key ingredient for solving inverse rendering problems. Explicit representations like meshes are efficient to render in a differentiable fashion but have difficulties handling topology changes. Implicit representations like signed-distance functions, on the other hand, offer better support of topology changes but are much more difficult to use for physics-based differentiable rendering. We introduce a new physics-based inverse rendering pipeline that uses both implicit and explicit representations. Our technique enjoys the benefit of both representations by supporting both topology changes and differentiable rendering of complex effects such as environmental illumination, soft shadows, and interreflection. We demonstrate the effectiveness of our technique using several synthetic and real examples. | Physics-Based Inverse Rendering using Combined Implicit and Explicit Geometries | 10,733
The ability to morph flat sheets into complex 3D shapes is extremely useful for fast manufacturing and saving materials, while also allowing volumetrically efficient storage, shipment, and functional use. Direct 4D printing is a compelling method to morph complex 3D shapes out of as-printed 2D plates. However, most direct 4D printing methods require multi-material systems involving costly machines. Moreover, most works have used an open-cell design for shape shifting by encoding a collection of 1D rib deformations, which cannot remain structurally stable. Here, we demonstrate the direct 4D printing of an isotropic single-material system to morph 2D continuous bilayer plates into doubly curved and multimodal 3D complex shapes whose geometry can also be locked after deployment. We develop an inverse-design algorithm that integrates extrusion-based 3D printing of a single-material system to directly morph a raw printed sheet into complex 3D geometries, such as a doubly curved surface with shape locking. Furthermore, our inverse-design tool encodes the localized shape-memory anisotropy during the process, providing the processing conditions for a target 3D morphed geometry. Our approach could be used with conventional extrusion-based 3D printing for various applications, including biomedical devices, deployable structures, smart textiles, and pop-up Kirigami structures. | Encoding of direct 4D printing of isotropic single-material system for double-curvature and multimodal morphing | 10,734
With the recent interest in virtual reality and augmented reality, there is a newfound demand for displays that can provide high resolution with a wide field of view (FOV). However, such displays incur significantly higher costs for rendering the larger number of pixels. This poses the challenge of rendering realistic real-time images that have a wide FOV and high resolution using limited computing resources. The human visual system does not need every pixel to be rendered at a uniformly high quality. Foveated rendering methods provide perceptually high-quality images while reducing computational workload and are becoming a crucial component for large-scale rendering. In this paper, we present key motivations, research directions, and challenges for leveraging the limitations of the human visual system as they relate to foveated rendering. We provide a taxonomy to compare and contrast various foveated techniques based on key factors. We also review aliasing artifacts arising due to foveation methods and discuss several approaches that attempt to mitigate such effects. Finally, we present several open problems and possible future research directions that can further reduce computational costs while generating perceptually high-quality renderings. | Foveated Rendering: Motivation, Taxonomy, and Research Directions | 10,735 |
We present a technique to reduce the dynamic range of an HDRI lighting environment map in an efficient, energy-preserving manner by spreading out the light of concentrated light sources. This allows us to display a reasonable approximation of the illumination of an HDRI map in a lighting reproduction system with limited dynamic range such as virtual production LED Stage. The technique identifies regions of the HDRI map above a given pixel threshold, dilates these regions until the average pixel value within each is below the threshold, and finally replaces each dilated region's pixels with the region's average pixel value. The new HDRI map contains the same energy as the original, spreads the light as little as possible, and avoids chromatic fringing. | HDR Lighting Dilation for Dynamic Range Reduction on Virtual Production Stages | 10,736
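The three steps in this abstract (threshold, dilate until the regional mean drops below the threshold, replace with the mean) map directly to a few lines of NumPy/SciPy. This single-channel sketch ignores the paper's color handling for avoiding chromatic fringing:

```python
import numpy as np
from scipy import ndimage

def dilate_hdr(image, threshold):
    """Energy-preserving dynamic range reduction, following the described
    steps: find regions above the threshold, grow each region until its
    mean drops below the threshold, then replace the region with its mean.
    Single-channel sketch of the idea.
    """
    result = image.astype(np.float64).copy()
    labels, n = ndimage.label(result > threshold)
    for i in range(1, n + 1):
        mask = labels == i
        # Grow the region until its energy can be spread below threshold.
        while result[mask].mean() > threshold and not mask.all():
            mask = ndimage.binary_dilation(mask)
        result[mask] = result[mask].mean()
    return result
```

Replacing each region with its own mean is what preserves the total energy of the map.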
Accurate frictional contact is critical in simulating the assembly of rod-like structures in the practical world, such as knots, hairs, flagella, and more. Due to their high geometric nonlinearity and elasticity, rod-on-rod contact remains a challenging problem tackled by researchers in both computational mechanics and computer graphics. Typically, frictional contact is regarded as a set of constraints on the equations of motion of a system. Such constraints are often computed independently at every time step in a dynamic simulation, thus slowing down the simulation and possibly introducing numerical convergence issues. This paper proposes a fully implicit penalty-based frictional contact method, the Implicit Contact Model (IMC), that efficiently and robustly captures accurate frictional contact responses. We showcase our algorithm's performance in achieving visually realistic results for the challenging and novel contact scenario of flagella bundling in a fluid medium, a significant phenomenon in biology that motivates novel engineering applications in soft robotics. In addition to this, we offer a side-by-side comparison with Incremental Potential Contact (IPC), a state-of-the-art contact handling algorithm. We show that IMC possesses comparable performance to IPC while converging at a faster rate. | A Fully Implicit Method for Robust Frictional Contact Handling in Elastic Rods | 10,737
Computation of bounding boxes is a fundamental problem in high performance rendering, as it is an input to visibility culling and binning operations. In a scene description structured as a tree, clip nodes and blend nodes entail intersection and union of bounding boxes, respectively. These are straightforward to compute on the CPU using a sequential algorithm, but an efficient, parallel GPU algorithm is more elusive. This paper presents a fast and practical solution, with a new algorithm for the classic parentheses matching problem at its core. The core algorithm is presented abstractly (in terms of a PRAM abstraction), then with a concrete mapping to the thread, workgroup, and dispatch levels of real GPU hardware. The algorithm is implemented portably using compute shaders, and performance results show a dramatic speedup over a sequential CPU version, and indeed a reasonable fraction of maximum theoretical throughput of the GPU hardware. The immediate motivating application is 2D rendering, but the algorithms generalize to other domains, and the core parentheses matching problem has other applications including parsing. | Fast GPU bounding boxes on tree-structured scenes | 10,738 |
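The sequential CPU algorithm this abstract contrasts against can be sketched as a recursive walk where blend nodes union their children's boxes and clip nodes intersect them. The scene encoding below is a hypothetical minimal form, not the paper's representation:

```python
def union(a, b):
    """Smallest box containing both boxes (x0, y0, x1, y1)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def intersect(a, b):
    """Overlap of two boxes, clamped to an empty box when disjoint."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, max(x0, x1), max(y0, y1))

def scene_bbox(node):
    """Sequential baseline: blend nodes union their children's boxes,
    clip nodes intersect them."""
    if "box" in node:
        return node["box"]
    boxes = [scene_bbox(c) for c in node["children"]]
    combine = union if node["op"] == "blend" else intersect
    out = boxes[0]
    for b in boxes[1:]:
        out = combine(out, b)
    return out
```

This sequential form is trivial on the CPU; the paper's contribution is mapping the same reduction onto GPU threads via parentheses matching.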
We describe a novel approach to decompose a single panorama of an empty indoor environment into four appearance components: specular, direct sunlight, diffuse and diffuse ambient without direct sunlight. Our system is weakly supervised by automatically generated semantic maps (with floor, wall, ceiling, lamp, window and door labels) that have shown success on perspective views and are trained for panoramas using transfer learning without any further annotations. A GAN-based approach supervised by coarse information obtained from the semantic map extracts specular reflection and direct sunlight regions on the floor and walls. These lighting effects are removed via a similar GAN-based approach and a semantic-aware inpainting step. The appearance decomposition enables multiple applications including sun direction estimation, virtual furniture insertion, floor material replacement, and sun direction change, providing an effective tool for virtual home staging. We demonstrate the effectiveness of our approach on a large and recently released dataset of panoramas of empty homes. | Semantically Supervised Appearance Decomposition for Virtual Staging from a Single Panorama | 10,739
We introduce a general differentiable solver for time-dependent deformation problems with contact and friction. Our approach uses a finite element discretization with a high-order time integrator coupled with the recently proposed incremental potential contact method for handling contact and friction forces to solve ODE- and PDE-constrained optimization problems on scenes with complex geometry. It supports static and dynamic problems and differentiation with respect to all physical parameters involved in the physical problem description, which include shape, material parameters, friction parameters, and initial conditions. Our analytically derived adjoint formulation is efficient, with a small overhead (typically less than 10% for nonlinear problems) over the forward simulation, and shares many similarities with the forward problem, allowing the reuse of large parts of existing forward simulator code. We implement our approach on top of the open-source PolyFEM library and demonstrate the applicability of our solver to shape design, initial condition optimization, and material estimation on both simulated results and physical validations. | Differentiable solver for time-dependent deformation problems with contact | 10,740
High-order bases provide major advantages over linear ones in terms of efficiency, as they provide (for the same physical model) higher accuracy for the same running time, and reliability, as they are less affected by locking artifacts and mesh quality. Thus, we introduce a high-order finite element (FE) formulation (high-order bases) for elastodynamic simulation on high-order (curved) meshes with contact handling based on the recently proposed Incremental Potential Contact (IPC) model. Our approach is based on the observation that each IPC optimization step used to minimize the elasticity, contact, and friction potentials leads to linear trajectories even in the presence of nonlinear meshes or nonlinear FE bases. It is thus possible to retain the strong non-penetration guarantees and large time steps of the original formulation while benefiting from the high-order bases and high-order geometry. We accomplish this by mapping displacements and resulting contact forces between a linear collision proxy and the underlying high-order representation. We demonstrate the effectiveness of our approach in a selection of problems from graphics, computational fabrication, and scientific computing. | High-Order Incremental Potential Contact for Elastodynamic Simulation on Curved Meshes | 10,741
We present angle-uniform parallel coordinates, a data-independent technique that deforms the image plane of parallel coordinates so that the angles of linear relationships between two variables are linearly mapped along the horizontal axis of the parallel coordinates plot. Despite being a common method for visualizing multidimensional data, parallel coordinates are ineffective for revealing positive correlations since the associated parallel coordinates points of such structures may be located at infinity in the image plane and the asymmetric encoding of negative and positive correlations may lead to unreliable estimations. To address this issue, we introduce a transformation that bounds all points horizontally using an angle-uniform mapping and shrinks them vertically in a structure-preserving fashion; polygonal lines become smooth curves and a symmetric representation of data correlations is achieved. We further propose a combined subsampling and density visualization approach to reduce visual clutter caused by overdrawing. Our method enables accurate visual pattern interpretation of data correlations, and its data-independent nature makes it applicable to all multidimensional datasets. The usefulness of our method is demonstrated using examples of synthetic and real-world datasets. | Angle-Uniform Parallel Coordinates | 10,742 |
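The central mapping idea — placing the angle of a linear relationship linearly along the horizontal axis — can be illustrated for a single slope value. This is a simplified guess at the mapping, not the paper's full structure-preserving image-plane deformation:

```python
import math

def angle_uniform_x(slope):
    """Map the slope m of a linear relation between two adjacent axes to
    a horizontal position in [0, 1], spacing the angle atan(m) uniformly.
    Note that positive correlations (m > 0), whose dual points lie toward
    or at infinity in standard parallel coordinates, stay bounded here.
    Simplified sketch of the mapping idea only.
    """
    return (math.atan(slope) + math.pi / 2) / math.pi
```

Under this mapping a negative correlation (m = -1) lands at 0.25, no correlation (m = 0) at 0.5, and a positive correlation (m = 1) at 0.75, giving the symmetric encoding the abstract argues for.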
Due to the increasing demand in films and games, synthesizing 3D avatar animation has attracted much attention recently. In this work, we present a production-ready text/speech-driven full-body animation synthesis system. Given the text and corresponding speech, our system synthesizes face and body animations simultaneously, which are then skinned and rendered to obtain a video stream output. We adopt a learning-based approach for synthesizing facial animation and a graph-based approach to animate the body, which generates high-quality avatar animation efficiently and robustly. Our results demonstrate the generated avatar animations are realistic, diverse and highly text/speech-correlated. | Text/Speech-Driven Full-Body Animation | 10,743 |
In this technical report, we improve the DVGO framework (called DVGOv2), which is based on PyTorch and uses the simplest dense grid representation. First, we re-implement part of the PyTorch operations in CUDA, achieving a 2-3x speedup. The CUDA extension is automatically compiled just in time. Second, we extend DVGO to support forward-facing and unbounded inward-facing capturing. Third, we improve the space and time complexity of the distortion loss proposed by mip-NeRF 360 from O(N^2) to O(N). The distortion loss improves our quality and training speed. Our efficient implementation could allow more future works to benefit from the loss. | Improved Direct Voxel Grid Optimization for Radiance Fields Reconstruction | 10,744
Recent differentiable rendering techniques have become key tools to tackle many inverse problems in graphics and vision. Existing models, however, assume steady-state light transport, i.e., infinite speed of light. While this is a safe assumption for many applications, recent advances in ultrafast imaging leverage the wealth of information that can be extracted from the exact time of flight of light. In this context, physically-based transient rendering allows to efficiently simulate and analyze light transport considering that the speed of light is indeed finite. In this paper, we introduce a novel differentiable transient rendering framework, to help bring the potential of differentiable approaches into the transient regime. To differentiate the transient path integral we need to take into account that scattering events at path vertices are no longer independent; instead, tracking the time of flight of light requires treating such scattering events at path vertices jointly as a multidimensional, evolving manifold. We thus turn to the generalized transport theorem, and introduce a novel correlated importance term, which links the time-integrated contribution of a path to its light throughput, and allows us to handle discontinuities in the light and sensor functions. Last, we present results in several challenging scenarios where the time of flight of light plays an important role such as optimizing indices of refraction, non-line-of-sight tracking with nonplanar relay walls, and non-line-of-sight tracking around two corners. | Differentiable Transient Rendering | 10,745 |
We present a novel smart visibility system for visualizing crowded volumetric data containing many object instances. The presented approach allows users to form groups of objects through membership predicates and to individually control the visibility of the instances in each group. Unlike previous smart visibility approaches, our approach controls the visibility on a per-instance basis and decides which instances are displayed or hidden based on the membership predicates and the current view. Thus, cluttered and dense volumes that are notoriously difficult to explore effectively are automatically sparsified so that the essential information is extracted and presented to the user. The proposed system is generic and can be easily integrated into existing volume rendering applications and applied to many different domains. We demonstrate the use of the volume conductor for visualizing fiber-reinforced polymers and intracellular organelle structures. | Volume Conductor: Interactive Visibility Management for Crowded Volumes | 10,746 |
Additive manufacturing is a process that has facilitated the cost-effective production of complicated designs. Objects fabricated via additive manufacturing technologies often suffer from dimensional accuracy issues and other part-specific problems, such as thin-part robustness, overhang geometries that may collapse, support structures that cannot be removed, and engraved and embossed details that are indistinguishable. In this work, we present an approach to predict the dimensional accuracy per vertex and per part. Furthermore, we provide a framework for estimating the probability that a model is fabricated correctly via an additive manufacturing technology for a specific application. This framework can be applied to several 3D printing technologies and applications. In the context of this paper, a thorough experimental evaluation is presented for binder jetting technology and its applications. | Predicting Geometric Errors and Failures in Additive Manufacturing | 10,747
We present an efficient raycasting algorithm for rendering Volumetric Depth Images (VDIs), and we show how it can be used in a remote visualization setting with VDIs generated and streamed from a remote server. VDIs are compact view-dependent volume representations that enable interactive visualization of large volumes at high frame rates by decoupling viewpoint changes from expensive rendering calculations. However, current rendering approaches for VDIs struggle with achieving interactive frame rates at high image resolutions. Here, we exploit the properties of perspective projection to simplify intersections of rays with the view-dependent frustums in a VDI and leverage spatial smoothness in the volume data to minimize memory accesses. Benchmarks show that responsive frame rates can be achieved close to the viewpoint of generation for HD display resolutions, providing high-fidelity approximate renderings of Gigabyte-sized volumes. We also propose a method to subsample the VDI for preview rendering, maintaining high frame rates even for large viewpoint deviations. We provide our implementation as an extension of an established open-source visualization library. | Efficient Raycasting of Volumetric Depth Images for Remote Visualization of Large Volumes at High Frame Rates | 10,748
In parametric design, the geometric model is edited by changing relevant parameters in the parametric model, which is commonly done sequentially on multiple parameters. Without guidance on allowable parameter ranges that can guarantee the solvability of the geometric constraint system, the user could assign improper values to the model's parameters, which would further lead to a failure in model updating. However, current commercial CAD systems provide little support for proper parameter assignment. Although existing methods can compute allowable ranges for individual parameters, they face difficulties in handling multi-parameter situations. In particular, these methods could miss some feasible parameter values and provide incomplete allowable parameter ranges. To solve this problem, an automatic approach is proposed in this paper to compute complete parameter ranges in multi-parameter editing. In the approach, a set of variable parameters is first selected to be sequentially edited by the user; before each editing operation, the one-dimensional ranges of the variable parameters are presented as guidance. To compute the one-dimensional ranges, each variable parameter is expressed as an equality-constrained function, and its one-dimensional allowable range is obtained by calculating the function range. To effectively obtain the function range, which is difficult to calculate directly, the function range problem is converted into a constrained optimization problem, which is then solved by the Lagrange multiplier method and the niching particle swarm optimization algorithm (NichePSO). The effectiveness and efficiency of the proposed approach are verified by several experimental results. | Towards computing complete parameter ranges in parametric modeling | 10,749
In this work, we explore a change of paradigm to build Precomputed Radiance Transfer (PRT) methods in a data-driven way. This paradigm shift allows us to alleviate the difficulties of building traditional PRT methods such as defining a reconstruction basis, coding a dedicated path tracer to compute a transfer function, etc. Our objective is to pave the way for Machine Learned methods by providing a simple baseline algorithm. More specifically, we demonstrate real-time rendering of indirect illumination in hair and surfaces from a few measurements of direct lighting. We build our baseline from pairs of direct and indirect illumination renderings using only standard tools such as Singular Value Decomposition (SVD) to extract both the reconstruction basis and transfer function. | A Data-Driven Paradigm for Precomputed Radiance Transfer | 10,750 |
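The baseline as described — SVD for the reconstruction basis, least squares for the transfer function — can be sketched with NumPy on toy data. The sizes and the linear ground-truth operator below are synthetic stand-ins for real rendered image pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: columns are flattened renderings under
# different lighting conditions (toy sizes; real data are images).
n_pixels, n_lights = 256, 40
direct = rng.standard_normal((n_pixels, n_lights))
# Toy ground truth: indirect light as a fixed linear operator on direct.
T_true = rng.standard_normal((n_pixels, n_pixels)) / n_pixels
indirect = T_true @ direct

# Reconstruction basis: leading left singular vectors of the indirect set.
U, _, _ = np.linalg.svd(indirect, full_matrices=False)
k = 16
basis = U[:, :k]

# Transfer function: least-squares map from direct measurements to the
# basis coefficients of the indirect illumination.
coeffs = basis.T @ indirect                       # (k, n_lights)
transfer, *_ = np.linalg.lstsq(direct.T, coeffs.T, rcond=None)

def relight(direct_new):
    """Predict indirect illumination from a direct-light measurement."""
    return basis @ (transfer.T @ direct_new)
```

At render time, `relight` is just two small matrix-vector products, which is what makes this kind of precomputed pipeline real-time.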
We present a parallel compositing algorithm for Volumetric Depth Images (VDIs) of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. VDIs are view-dependent piecewise constant representations of volume data that offer a potential solution. They are more compact and less expensive to render than the original data. So far, however, there is no method for generating VDIs from distributed data. We propose an algorithm that enables this by sort-last parallel generation and compositing of VDIs with automatically chosen content-adaptive parameters. The resulting composited VDI can then be streamed for remote display, providing responsive visualization of large, distributed volume data. | Parallel Compositing of Volumetric Depth Images for Interactive Visualization of Distributed Volumes at High Frame Rates | 10,751
Despite the ubiquity of material maps in modern rendering pipelines, their editing and control remains a challenge. In this paper, we present an example-based material control method to augment input material maps based on user-provided material photos. We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro- and meso-structure textures of user-provided target photograph(s), while preserving the structure of the input and the quality of the input material. We show our method can control existing material maps, increasing realism or generating new, visually appealing materials. | Controlling Material Appearance by Examples | 10,752
We introduce a statistical extension of the classic Poisson Surface Reconstruction algorithm for recovering shapes from 3D point clouds. Instead of outputting an implicit function, we represent the reconstructed shape as a modified Gaussian Process, which allows us to conduct statistical queries (e.g., the likelihood of a point in space being on the surface or inside a solid). We show that this perspective: improves PSR's integration into the online scanning process, broadens its application realm, and opens the door to other lines of research such as applying task-specific priors. | Stochastic Poisson Surface Reconstruction | 10,753 |
Multi-sided surfaces are often defined by side interpolants (also called ribbons), i.e., the surface has to connect to the ribbons with a prescribed degree of smoothness. The I-patch is such a family of implicit surfaces capable of interpolating an arbitrary number of ribbons, and it can be used in design and approximation. While describing ribbons for parametric surfaces is a well-discussed problem, defining implicit ribbons is a different task. This paper introduces corner I-patches, a new representation that describes implicit surfaces based on corner interpolants. These may be defined with much simpler surfaces, while the shape of the patch depends on a handful of scalar parameters. Continuity between patches is enforced via constraints on these parameters. Corner I-patches have several favorable properties that can be exploited, for example, in volume rendering or approximation. | Corner-based implicit patches | 10,754
Procedural material graphs are a compact, parametric, and resolution-independent representation that is a popular choice for material authoring. However, designing procedural materials requires significant expertise, and publicly accessible libraries contain only a few thousand such graphs. We present MatFormer, a generative model that can produce a diverse set of high-quality procedural materials with complex spatial patterns and appearance. While procedural materials can be modeled as directed (operation) graphs, they contain arbitrary numbers of heterogeneous nodes with unstructured, often long-range node connections, and functional constraints on node parameters and connections. MatFormer addresses these challenges with a multi-stage transformer-based model that sequentially generates nodes, node parameters, and edges, while ensuring the semantic validity of the graph. In addition to generation, MatFormer can be used for the auto-completion and exploration of partial material graphs. We qualitatively and quantitatively demonstrate that our method outperforms alternative approaches, in both generated graph and material quality. | MatFormer: A Generative Model for Procedural Materials | 10,755
Quantization has proven effective in high-resolution and large-scale simulations, which benefit from bit-level memory saving. However, identifying a quantization scheme that meets the requirements of both precision and memory efficiency requires trial and error. In this paper, we propose a novel framework that allows users to obtain a quantization scheme by simply specifying either an error bound or a memory compression rate. Based on error propagation theory, our method takes advantage of auto-diff to estimate the contributions of each quantization operation to the total error. We formulate the task as a constrained optimization problem, which can be efficiently solved with analytical formulas derived for the linearized objective function. Our workflow extends the Taichi compiler and introduces dithering to improve the precision of quantized simulations. We demonstrate the generality and efficiency of our method via several challenging examples of physics-based simulation, which achieves up to 2.5x memory compression without noticeable degradation of visual quality in the results. Our code and data are available at https://github.com/Hanke98/AutoQuantizer. | Automatic Quantization for Physics-Based Simulation | 10756
We introduce per-halfedge texturing (Htex), a GPU-friendly method for texturing arbitrary polygon meshes without an explicit parameterization. Htex builds upon the insight that halfedges encode an intrinsic triangulation for polygon meshes, where each halfedge spans a unique triangle with direct adjacency information. Rather than storing a separate texture per face of the input mesh, as is done by previous parameterization-free texturing methods, Htex stores a square texture for each halfedge and its twin. We show that this simple change from face to halfedge induces two important properties for high-performance parameterization-free texturing. First, Htex natively supports arbitrary polygons without requiring dedicated code for, e.g., non-quad faces. Second, Htex leads to a straightforward and efficient GPU implementation that uses only three texture fetches per halfedge to produce continuous texturing across the entire mesh. We demonstrate the effectiveness of Htex by rendering production assets in real time. | Htex: Per-Halfedge Texturing for Arbitrary Mesh Topologies | 10757
Graph-based procedural materials are ubiquitous in content production industries. Procedural models allow the creation of photorealistic materials with parametric control for flexible editing of appearance. However, designing a specific material is a time-consuming process in terms of building a model and fine-tuning parameters. Previous work [Hu et al. 2022; Shi et al. 2020] introduced material graph optimization frameworks for matching target material samples. However, these previous methods were limited to optimizing differentiable functions in the graphs. In this paper, we propose a fully differentiable framework which enables end-to-end gradient based optimization of material graphs, even if some functions of the graph are non-differentiable. We leverage the Differentiable Proxy, a differentiable approximator of a non-differentiable black-box function. We use our framework to match structure and appearance of an output material to a target material, through a multi-stage differentiable optimization. Differentiable Proxies offer a more general optimization solution to material appearance matching than previous work. | Node Graph Optimization Using Differentiable Proxies | 10,758 |
Technological advances for measuring or simulating volume data have led to large data sizes in many research areas such as biology, medicine, physics, and geoscience. Here, large data can refer to individual data sets with high spatial and/or temporal resolution as well as collections of data sets in the sense of cohorts or ensembles. Therefore, general-purpose and customizable volume visualization and processing systems have to provide out-of-core mechanisms that allow for handling and analyzing such data. Voreen is an open-source rapid-prototyping framework that was originally designed to quickly create custom visualization applications for volumetric imaging data using the now quite common data flow graph paradigm. In recent years, Voreen has been used in various interdisciplinary research projects with an increasing demand for large data processing capabilities without relying on cluster compute resources. In its latest release, Voreen has thus been extended by out-of-core techniques for processing and visualization of volume data with very high spatial resolution as well as collections of volume data sets including spatio-temporal multi-field simulation ensembles. In this paper, we compare state-of-the-art volume processing and visualization systems and conclude that Voreen is the first system combining out-of-core processing and rendering capabilities for large volume data on consumer hardware with features important for interdisciplinary research. We describe how Voreen achieves these goals and showcase its use, performance, and capability to support interdisciplinary research by presenting typical workflows within two large volume data case studies. | Voreen -- An Open-source Framework for Interactive Visualization and Processing of Large Volume Data | 10759
Mechanical interactions between rigid rings and flexible cables find broad application in both daily life (hanging clothes) and engineering systems (closing a tether-net). A reduced-order method for the dynamic analysis of sliding rings on a deformable one-dimensional (1D) rod-like object is proposed. In contrast to the conventional approach of discretizing joint rings into multiple nodes and edges for contact detection and numerical simulation, a single point is used to reduce the order of the model. To ensure that the sliding ring and flexible rod do not deviate from their desired positions, a new barrier function is formulated using the incremental potential theory. Subsequently, the interaction between tangent frictional forces is obtained through a delayed dissipative approach. The proposed barrier functional and the associated frictional functional are C2 continuous, hence the nonlinear elastodynamic system can be solved variationally by an implicit time-stepping scheme. The numerical framework is initially applied to simple examples where the analytical solutions are available for validation. Then, multiple complex practical engineering examples are considered to showcase the effectiveness of the proposed method. The simplified ring-to-rod interaction model has the capacity to enhance the realism of visual effects in image animations, while simultaneously facilitating the optimization of designs for space debris removal systems. | Dynamic modeling of a sliding ring on an elastic rod with incremental potential formulation | 10760
Simulation of forest environments has applications from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation. | Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool | 10761
In this paper, we present a powerful differentiable surface fitting technique to derive a compact surface representation for a given dense point cloud or mesh, with application in the domains of graphics and CAD/CAM. We have chosen the Loop subdivision surface, which in the limit yields the smooth surface underlying the point cloud, and can handle complex surface topology better than other popular compact representations, such as NURBS. The principal idea is to fit the Loop subdivision surface not directly to the point cloud, but to the IMLS (implicit moving least squares) surface defined over the point cloud. As both Loop subdivision and IMLS have analytical expressions, we are able to formulate the problem as an unconstrained minimization problem of a completely differentiable function that can be solved with standard numerical solvers. Differentiability enables us to integrate the subdivision surface into any deep learning method for point clouds or meshes. We demonstrate the versatility and potential of this approach by using it in conjunction with a differentiable renderer to robustly reconstruct compact surface representations of spatial-temporal sequences of dense meshes. | Differentiable Subdivision Surface Fitting | 10,762 |
UV parameterization is a core task in computer graphics, with applications in mesh texturing, remeshing, mesh repair, mesh editing, and more. It is thus an active area of research, which has led to a wide variety of parameterization methods that excel according to different measures of quality. There is no single metric capturing parameterization quality in practice, since the quality of a parameterization heavily depends on its application; hence, parameterization methods can best be judged by the actual users of the computed result. In this paper, we present a dataset of meshes together with UV maps collected from various sources and intended for real-life use. Our dataset can be used to test parameterization methods in realistic environments. We also introduce a benchmark to compare parameterization methods with artist-provided UV parameterizations using a variety of metrics. This strategy enables us to evaluate the performance of a parameterization method by computing the quality indicators that are valued by the designers of a mesh. | A Dataset and Benchmark for Mesh Parameterization | 10,763 |
The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still mainly focuses on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. Here, the problem is approached by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. The semantics and the variable geometry of a class of shapes are represented through the parametric template: an annotated 3D model whose geometry can be deformed provided that some semantic constraints remain satisfied. In this work, the design and development of a framework for the semantics-aware modelling of shapes is presented, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry, and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes into account the semantic constraints and provides an optimal solution. The framework is modular so that new tools can be continuously added. | Report on the software "SemanticModellingFramework" | 10764
The axes ordering in a parallel coordinates plot (PCP) presents a particular story from the data based on the user's perception of PCP polylines. Existing works focus on directly optimizing PCP axes ordering based on some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization of PCP axes based on these common properties is restrictive because it does not account for multiple properties occurring between the axes, and for local properties that occur in small regions in the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the user goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line-pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which they want to work with their detection scheme in the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user goals. | PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays | 10765
Many algorithms are based on geometric computation. There are several criteria for selecting an appropriate algorithm from those already known. Until recently, the fastest algorithms were preferred; nowadays, algorithms with high stability are preferred. Technology and computer architecture, such as GPUs, also play a significant role in large data processing. However, some algorithms are ill-conditioned due to the numerical representation used, a consequence of the floating-point representation. In this paper, relations between projective representation, duality and Plücker coordinates are explored and demonstrated on simple geometric examples. The presented approach is especially convenient for application on GPUs or vector-vector computational architectures. | Projective Geometry, Duality and Plücker Coordinates for Geometric Computations with Determinants on GPUs | 10766
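The abstract above leaves the actual coordinate machinery implicit. As a minimal, library-free sketch of the underlying objects (function names `plucker_line` and `side` are my own, not the paper's, and this omits the paper's determinant/GPU formulation), the Plücker coordinates of a line through two points and the permuted inner product that tests whether two lines are coplanar can be written as:

```python
def cross(a, b):
    """3D cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """3D dot product."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def plucker_line(a, b):
    """Plücker coordinates (d, m) of the line through points a and b:
    direction d = b - a, moment m = a x b."""
    d = tuple(b[i] - a[i] for i in range(3))
    m = cross(a, b)
    return d, m

def side(l1, l2):
    """Permuted inner product of two Plücker lines; zero iff the lines
    are coplanar (i.e., they intersect or are parallel)."""
    (d1, m1), (d2, m2) = l1, l2
    return dot(d1, m2) + dot(d2, m1)
```

Two intersecting lines give `side == 0`, two skew lines a nonzero value, which is the basic predicate behind line-line intersection tests in this representation.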
There are many applications in which a bounding sphere containing a given triangle in E3 is needed, e.g., fast collision detection, ray-triangle intersection in ray tracing, etc. This is a typical geometrical problem in E3, and it also has applications in computational problems in general. In this paper, a new fast and robust algorithm for circumscribed sphere computation in n-dimensional space is presented, and a specification for the E3 space is given, too. The presented method is convenient for use on GPUs or with SSE or Intel AVX instructions on a standard CPU. | A New Robust Algorithm for Computation of a Triangle Circumscribed Sphere in E3 and a Hypersphere Simplex | 10767
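For reference alongside the abstract above, the classic closed-form circumscribed sphere of a triangle in E3 can be sketched as follows (this is the textbook formula, not the paper's new robust algorithm; helper names are my own):

```python
import math

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def scale(a, s): return tuple(a[i] * s for i in range(3))
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def circumsphere(a, b, c):
    """Center and radius of the sphere through triangle (a, b, c) in E3."""
    ab, ac = sub(b, a), sub(c, a)
    n = cross(ab, ac)          # unnormalized triangle normal
    n2 = dot(n, n)
    if n2 == 0.0:
        raise ValueError("degenerate (collinear) triangle")
    # closed-form circumcenter relative to vertex a
    u = scale(cross(n, ab), dot(ac, ac))
    v = scale(cross(ac, n), dot(ab, ab))
    center = add(a, scale(add(u, v), 1.0 / (2.0 * n2)))
    radius = math.sqrt(dot(sub(center, a), sub(center, a)))
    return center, radius
```

Note the division by `2 * |n|^2`: for near-degenerate triangles this term vanishes, which is exactly the ill-conditioning a robust reformulation has to address.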
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, typically using hierarchical data structures, e.g., BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways for actual speed-up. In the case of a convex polygon in E2, a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(lg N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon, line clipping, can be solved similarly. | Point-in-Convex Polygon and Point-in-Convex Polyhedron Algorithms with O(1) Complexity using Space Subdivision | 10768
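The O(lg N) baseline that the abstract refers to is the standard fan binary search from a fixed vertex; a minimal sketch is below (this is the classic algorithm the paper improves upon, not the paper's O(1) space-subdivision method; it assumes vertices in counter-clockwise order and treats boundary points as inside):

```python
def cross2(o, a, b):
    """2D cross product of (a - o) and (b - o); > 0 means b is left of o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(p, poly):
    """O(lg N) point-in-convex-polygon test; poly is CCW-ordered."""
    n = len(poly)
    # reject points outside the fan spanned at vertex 0
    if cross2(poly[0], poly[1], p) < 0:
        return False
    if cross2(poly[0], poly[n - 1], p) > 0:
        return False
    # binary search for the fan triangle (v0, v_lo, v_lo+1) containing p
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross2(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    # final half-plane test against the polygon edge
    return cross2(poly[lo], poly[lo + 1], p) >= 0
```

The paper's idea is to replace the binary search with a precomputed subdivision lookup so that the run-time cost becomes constant.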
Finding a low-dimensional parametric representation of measured BRDF remains challenging. Currently available solutions are either not interpretable, rely on limited analytical solutions, or require expensive test-subject-based investigations. In this work, we strive to establish a parametrization space that affords the data-driven representation variance of measured BRDF models while still offering the artistic control of parametric analytical BRDFs. We present a machine learning approach that generates an interpretable disentangled parameter space. A disentangled representation is one in which each parameter is responsible for a unique generative factor and is insensitive to the ones encoded by the other parameters. To that end, we resort to a $\beta$-Variational AutoEncoder ($\beta$-VAE), a specific Deep Neural Network (DNN) architecture. After training our network, we analyze the parametrization space and interpret the learned generative factors using our visual perception. Note that, unlike most other existing methods, where perceptual analysis is used upfront to elaborate the parametrization, here it is called upon downstream of the system, for interpretation purposes only. In addition, we do not need a test-subject investigation. A novel feature of our interpretable disentangled parametrization is the post-processing capability to incorporate new parameters along with the learned ones, thus expanding the richness of producible appearances. Furthermore, our solution allows more flexible and controllable material editing possibilities than manifold exploration. Finally, we provide a rendering interface for real-time material editing and interpolation based on the presented new parametrization system. | Interpretable Disentangled Parametrization of Measured BRDF with $\beta$-VAE | 10769
Modeling arbitrarily large deformations of surfaces smoothly embedded in three-dimensional space is challenging. The difficulties come from two aspects: the existing geometry processing or forward simulation methods penalize the difference between the current status and the rest configuration to maintain the initial shape, which will lead to sharp spikes or wiggles for large deformations; the co-dimensional nature of the problem makes it more complicated because the deformed surface has to locally satisfy compatibility conditions on fundamental forms to guarantee a feasible solution exists. To address these two challenges, we propose a rotation-strain method to modify the fundamental forms in a compatible way, and model the large deformation of surface meshes smoothly using plasticity. The user prescribes the positions of a few vertices, and our method finds a smooth strain and rotation field under which the surface meets the target positions. We demonstrate several examples whereby triangle meshes are smoothly deformed to large strains while meeting user constraints. | A Rotation-Strain Method to Model Surfaces using Plasticity | 10,770 |
We present a method for transferring the style from a set of images to a 3D object. The texture appearance of an asset is optimized with a differentiable renderer in a pipeline based on losses using pretrained deep neural networks. More specifically, we utilize a nearest-neighbor feature matching loss with CLIP-ResNet50 to extract the style from images. We show that a CLIP-based style loss provides a different appearance than a VGG-based loss by focusing more on texture than on geometric shapes. Additionally, we extend the loss to support multiple images and enable loss-based control over the color palette, combined with automatic color palette extraction from style images. | CLIP-based Neural Neighbor Style Transfer for 3D Assets | 10771
Human motion synthesis and editing are essential to many applications like film post-production. However, they often introduce artefacts in motions, which can be detrimental to the perceived realism. In particular, footskating is a frequent and disturbing artefact requiring foot contact knowledge to be cleaned up. Current approaches to obtain foot contact labels rely either on unreliable threshold-based heuristics or on tedious manual annotation. In this article, we address foot contact label detection from motion with a deep learning approach. To this end, we first publicly release UnderPressure, a novel motion capture database labelled with pressure insole data serving as reliable knowledge of foot contact with the ground. Then, we design and train a deep neural network to estimate ground reaction forces exerted on the feet from motion data and then derive accurate foot contact labels. The evaluation of our model shows that we significantly outperform heuristic approaches based on height and velocity thresholds and that our approach is much more robust on motion sequences suffering from perturbations like noise or footskate. We further propose a fully automatic workflow for footskate cleanup: foot contact labels are first derived from estimated ground reaction forces. Then, footskate is removed by solving foot constraints through an optimisation-based inverse kinematics (IK) approach that ensures consistency with the estimated ground reaction forces. Beyond footskate cleanup, both the database and the method we propose could help to improve many approaches based on foot contact labels or ground reaction forces, including inverse dynamics problems like motion reconstruction and learning of deep motion models in motion synthesis or character animation. Our implementation, pre-trained model as well as links to the database can be found at https://github.com/InterDigitalInc/UnderPressure. | UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup | 10772
Algorithms for line intersection with convex and non-convex polygons or polyhedra are well known as line clipping algorithms and are very often used in computer graphics. Rendering of geometrical problems often leads to ray-tracing techniques, where the intersection of many lines with spheres or quadrics is a critical issue due to ray-tracing algorithm complexity. A new formulation of detection and computation of the intersection of a line (ray) with a quadric surface is presented, which separates the geometric properties of the line and the quadric and thereby enables pre-computation. The presented approach is especially convenient for implementation with SSE instructions or on GPUs. | A New Approach to Line-Sphere and Line-Quadrics Intersection Detection and Computation | 10773
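As background for the abstract above, the standard line-sphere intersection reduces to a quadratic in the ray parameter t; a minimal sketch (the textbook formulation, not the paper's separated pre-computation scheme; the function name is my own) is:

```python
import math

def ray_sphere(o, d, c, r):
    """Smallest t >= 0 with |o + t*d - c| = r, or None if the ray misses.
    o: ray origin, d: ray direction (need not be unit), c: center, r: radius."""
    oc = tuple(o[i] - c[i] for i in range(3))
    a = sum(x * x for x in d)
    b = 2.0 * sum(d[i] * oc[i] for i in range(3))
    k = sum(x * x for x in oc) - r * r
    disc = b * b - 4.0 * a * k
    if disc < 0.0:
        return None                      # no real root: ray misses the sphere
    s = math.sqrt(disc)
    t0 = (-b - s) / (2.0 * a)            # nearer root first
    t1 = (-b + s) / (2.0 * a)
    if t0 >= 0.0:
        return t0
    return t1 if t1 >= 0.0 else None
```

The coefficients mix line and sphere terms; the paper's contribution is a reformulation that keeps the per-line and per-quadric parts separate so parts of this arithmetic can be precomputed.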
This paper presents a new approach to the computation of geometric continuity for parametric bi-cubic patches, based on a simple mathematical reformulation that leads to simple additional conditions to be applied in the patching computation. The paper presents a Hermite formulation of a bicubic parametric patch, but reformulations can also be made for Bézier and B-spline patches. The presented approach is convenient for cases where corner valencies differ from the value 4, in general. | New Geometric Continuity Solution of Parametric Surfaces | 10774
Airfoil shape design is a classical problem in engineering and manufacturing. In this work, we combine principled physics-based considerations for the shape design problem with modern computational techniques using a data-driven approach. Modern and traditional analyses of 2D and 3D aerodynamic shapes reveal a flow-based sensitivity to specific deformations that can be represented generally by affine transformations (rotation, scaling, shearing, translation). We present a novel representation of shapes that decouples affine-style deformations over a submanifold and a product submanifold principally of the Grassmannian. As an analytic generative model, the separable representation, informed by a database of physically relevant airfoils, offers (i) a rich set of novel 2D airfoil deformations not previously captured in the data, (ii) an improved low-dimensional parameter domain for inferential statistics informing design/manufacturing, and (iii) consistent 3D blade representation and perturbation over a sequence of nominal 2D shapes. | Separable Shape Tensors for Aerodynamic Design | 10,775 |
Creating 3D shapes from 2D drawings is an important problem with applications in content creation for computer animation and virtual reality. We introduce a new sketch-based system, CreatureShop, that enables amateurs to create high-quality textured 3D character models from 2D drawings with ease and efficiency. CreatureShop takes an input bitmap drawing of a character (such as an animal or other creature), depicted from an arbitrary descriptive pose and viewpoint, and creates a 3D shape with plausible geometric details and textures from a small number of user annotations on the 2D drawing. Our key contributions are a novel oblique view modeling method, a set of systematic approaches for producing plausible textures on the invisible or occluded parts of the 3D character (as viewed from the direction of the input drawing), and a user-friendly interactive system. We validate our system and methods by creating numerous 3D characters from various drawings, and compare our results with related works to show the advantages of our method. We perform a user study to evaluate the usability of our system, which demonstrates that our system is a practical and efficient approach to create fully-textured 3D character models for novice users. | CreatureShop: Interactive 3D Character Modeling and Texturing from a Single Color Drawing | 10776
Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews. | MosaicSets: Embedding Set Systems into Grid Graphs | 10,777 |
This work proposes a framework for the patient-specific characterization of the spine, which integrates information on the tissues with geometric information on the spine morphology. Key elements are the extraction of 3D patient-specific models of each vertebra and the intervertebral space from 3D CT images, the segmentation of each vertebra in its three functional regions, and the analysis of the tissue condition in the functional regions based on geometrical parameters. The localization of anomalies obtained in the results and the proposed visualization support the applicability of our tool for quantitative and visual evaluation of possible damages, for surgery planning, and early diagnosis or follow-up studies. Finally, we discuss the main properties of the proposed framework in terms of characterisation of the morphology and pathology of the spine on benchmarks of the spine district. | 3D Anatomical Representations and Analysis: an Application to the Spine | 10,778 |
Projection matrices are necessary for a large portion of rendering in computer graphics. There are primarily two types of projection matrices -- perspective and orthographic -- which are used frequently and are traditionally treated as mutually incompatible in how they are defined. Here, we bridge the gap between the two forms of projection matrices to present a single generalized projection matrix that can represent both. | Generalized Projection Matrices | 10779
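The abstract does not spell out the two standard forms it unifies. For reference, minimal OpenGL-style constructions are sketched below (column-vector, right-handed convention; this reproduces the two classical matrices only, not the paper's generalized matrix, and the function names are my own):

```python
def perspective(l, r, b, t, n, f):
    """OpenGL-style perspective frustum matrix (near plane z = -n, far z = -f)."""
    return [[2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
            [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0]]

def orthographic(l, r, b, t, n, f):
    """OpenGL-style orthographic matrix over the same view volume."""
    return [[2 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
            [0.0, 2 / (t - b), 0.0, -(t + b) / (t - b)],
            [0.0, 0.0, -2 / (f - n), -(f + n) / (f - n)],
            [0.0, 0.0, 0.0, 1.0]]

def project(m, p):
    """Apply a 4x4 matrix to point p = (x, y, z) and perform the w-divide."""
    q = (p[0], p[1], p[2], 1.0)
    x, y, z, w = (sum(row[i] * q[i] for i in range(4)) for row in m)
    return (x / w, y / w, z / w)
```

Both matrices map the near plane to NDC z = -1 and the far plane to z = +1; they differ mainly in the bottom row and the z-column, which is precisely the structural overlap a single generalized matrix can exploit.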
The overdraw problem of scatterplots seriously interferes with the visual tasks. Existing methods, such as data sampling, node dispersion, subspace mapping, and visual abstraction, cannot guarantee the correspondence and consistency between the data points that reflect the intrinsic original data distribution and the corresponding visual units that reveal the presented data distribution, thus failing to obtain an overlap-free scatterplot with unbiased and lossless data distribution. A dual space coupling model is proposed in this paper to represent the complex bilateral relationship between data space and visual space theoretically and analytically. Under the guidance of the model, an overlap-free scatterplot method is developed through integration of the following: a geometry-based data transformation algorithm, namely DistributionTranscriptor; an efficient spatial mutual exclusion guided view transformation algorithm, namely PolarPacking; an overlap-free oriented visual encoding configuration model and a radius adjustment tool, namely $f_{r_{draw}}$. Our method can ensure complete and accurate information transfer between the two spaces, maintaining consistency between the newly created scatterplot and the original data distribution on global and local features. Quantitative evaluation proves our remarkable progress on computational efficiency compared with the state-of-the-art methods. Three applications involving pattern enhancement, interaction improvement, and overdraw mitigation of trajectory visualization demonstrate the broad prospects of our method. | Dual Space Coupling Model Guided Overlap-Free Scatterplot | 10,780 |
Environment maps with high dynamic range lighting, such as daylight sky maps, require importance sampling to keep the balance between noise and the number of samples per pixel manageable. Typically, importance sampling schemes for environment maps are based directly on the map parameterization, e.g. equirectangular maps, and do not work with alternative parameterizations that might provide better sampling quality. In this paper, an importance sampling scheme based on an equal-area projection of the sphere is proposed that is easy to implement and works independently of the environment map parameterization or resolution. This makes it possible to apply the same scheme to equirectangular maps, cube map variants, or any other map representation, and to adapt the importance sampling granularity to the requirements of the map contents. | Parameterization-Independent Importance Sampling of Environment Maps | 10781
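The paper's specific equal-area projection is not given in the abstract; as a simple stand-in illustrating the idea, the Lambert cylindrical equal-area map (my choice of example, not necessarily the paper's mapping) sends uniformly distributed points of the unit square to uniformly distributed directions on the sphere:

```python
import math

def square_to_sphere(u, v):
    """Lambert cylindrical equal-area map: (u, v) in [0,1)^2 -> unit direction.
    Equal patches of the square map to equal solid angles on the sphere."""
    z = 2.0 * v - 1.0                     # uniform in z preserves area
    phi = 2.0 * math.pi * u
    s = math.sqrt(max(0.0, 1.0 - z * z))  # radius of the latitude circle
    return (s * math.cos(phi), s * math.sin(phi), z)
```

Because the map is area-preserving, a discrete importance table built over the square carries over directly to solid angles on the sphere, independent of how the environment map itself is parameterized.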
In this paper, we propose a wavelet-based video codec specifically designed for VR displays that enables real-time playback of high-resolution 360° videos. Our codec exploits the fact that only a fraction of the full 360° video frame is visible on the display at any time. To load and decode the video viewport-dependently in real time, we make use of the wavelet transform for intra- as well as inter-frame coding. Thereby, the relevant content is directly streamed from the drive, without the need to hold the entire frames in memory. With an average of 193 frames per second at 8192x8192-pixel full-frame resolution, the conducted evaluation demonstrates that our codec's decoding performance is up to 272% higher than that of the state-of-the-art video codecs H.265 and AV1 for typical VR displays. By means of a perceptual study, we further illustrate the necessity of high frame rates for a better VR experience. Finally, we demonstrate how our wavelet-based codec can also directly be used in conjunction with foveation for further performance increase. | Wavelet-Based Fast Decoding of 360-Degree Videos | 10782
The dynamic effects of smoke are impressive in illustration design, but designing a smoke effect without domain knowledge of fluid simulations is a troublesome and challenging issue for common users. In this work, we propose DualSmoke, a two-stage global-to-local generation framework for interactive smoke illustration design. For the global stage, the proposed approach utilizes fluid patterns to generate a Lagrangian coherent structure from the user's hand-drawn sketches. For the local stage, the detailed flow patterns are obtained from the generated coherent structure. Finally, we apply the guiding force field to the smoke simulator to design the desired smoke illustration. To construct the training dataset, DualSmoke generates flow patterns using the finite-time Lyapunov exponents of the velocity fields. The synthetic sketch data is generated from the flow patterns by skeleton extraction. Our user study verifies that the proposed design interface can produce a variety of smoke illustration designs with good usability. Our code is available at: https://github.com/shasph/DualSmoke | DualSmoke: Sketch-Based Smoke Illustration Design with Two-Stage Generative Model | 10,783
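The training data above relies on finite-time Lyapunov exponents (FTLE); a sketch of the standard FTLE computation, using the classic double-gyre test flow as an assumed stand-in for the paper's velocity fields:

```python
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, omega=2 * np.pi / 10):
    """Analytic double-gyre flow, a common test case for LCS extraction."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=64, ny=32, T=5.0, steps=50):
    """FTLE field: advect a particle grid, then measure the stretching
    (largest eigenvalue of the Cauchy-Green tensor) of the flow map."""
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    px, py = x.copy(), y.copy()
    dt = T / steps
    for i in range(steps):                       # forward-Euler advection
        u, v = velocity(px, py, i * dt)
        px, py = px + dt * u, py + dt * v
    # flow-map gradient F by finite differences over the seeding grid
    dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]
    f11 = np.gradient(px, dx, axis=1)
    f12 = np.gradient(px, dy, axis=0)
    f21 = np.gradient(py, dx, axis=1)
    f22 = np.gradient(py, dy, axis=0)
    # largest eigenvalue of C = F^T F (symmetric 2x2)
    c11 = f11**2 + f21**2
    c12 = f11 * f12 + f21 * f22
    c22 = f12**2 + f22**2
    lam = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22) ** 2 + c12**2)
    return np.log(np.sqrt(np.maximum(lam, 1e-12))) / T

field = ftle()
```

Ridges of the resulting FTLE field approximate the Lagrangian coherent structures that the global stage learns to predict from sketches.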
Numerical simulation has become omnipresent in the automotive domain, posing new challenges such as high-dimensional parameter spaces and large as well as incomplete and multi-faceted data. In this design study, we show how interactive visual exploration and analysis of high-dimensional, spectral data from noise simulation can facilitate design improvements in the context of conflicting criteria. Here, we focus on structure-borne noise, i.e., noise from vibrating mechanical parts. Detecting problematic noise sources early in the design and production process is essential for reducing a product's development costs and its time to market. In a close collaboration of visualization and automotive engineering, we designed a new, interactive approach to quickly identify and analyze critical noise sources, also contributing to an improved understanding of the analyzed system. Several carefully designed, interactive linked views enable the exploration of noises, vibrations, and harshness at multiple levels of detail, both in the frequency and spatial domain. This enables swift and smooth changes of perspective; selections in the frequency domain are immediately reflected in the spatial domain, and vice versa. Noise sources are quickly identified and shown in the context of their neighborhood, both in the frequency and spatial domain. We propose a novel drill-down view, especially tailored to noise data analysis. Split boxplots and synchronized 3D geometry views support comparison tasks. With this solution, engineers iterate over design optimizations much faster, while maintaining a good overview at each iteration. We evaluated the new approach in the automotive industry, studying noise simulation data for an internal combustion engine. | Interactive Visual Analysis of Structure-borne Noise Data | 10,784 |
Animating an avatar that reflects a user's action in the VR world enables natural interactions with the virtual environment. It has the potential to allow remote users to communicate and collaborate as if they had met in person. However, a typical VR system provides only a very sparse set of up to three positional sensors, including a head-mounted display (HMD) and optionally two hand-held controllers, making the estimation of the user's full-body movement a difficult problem. In this work, we present a data-driven, physics-based method for predicting the realistic full-body movement of the user according to the transformations of these VR trackers, and for simulating an avatar character that mimics such user actions in the virtual world in real time. We train our system using reinforcement learning with carefully designed pretraining processes to ensure the success of the training and the quality of the simulation. We demonstrate the effectiveness of the method with an extensive set of examples. | Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users | 10,785
Transfer functions (TFs) play a key role in the generation of direct volume rendering (DVR) by enabling accurate interactive identification of structures of interest (SOIs) as well as ensuring their appropriate visibility. Attempts at mitigating the repetitive manual process of TF design have led to approaches that make use of a knowledge database consisting of TFs pre-designed by domain experts. In these approaches, a user navigates the knowledge database to find the most suitable pre-designed TF for their input volume to visualize the SOIs. Although these approaches potentially reduce the workload to generate the TFs, they require manual TF navigation of the knowledge database, as well as likely fine-tuning of the selected TF to suit the input. In this work, we propose a TF design approach in which we introduce a new content-based retrieval (CBR) method to automatically navigate the knowledge database. Instead of pre-designed TFs, our knowledge database contains image volumes with SOI labels. Given an input image volume, our CBR approach retrieves relevant image volumes (with SOI labels) from the knowledge database; the retrieved labels are then used to generate and optimize TFs of the input. This approach does not need any manual TF navigation or fine-tuning. To improve SOI retrieval performance, we propose a two-stage CBR scheme that uses local intensity and regional deep image feature representations in a complementary manner. We demonstrate the capabilities of our approach in comparison to a conventional CBR approach in visualization, where an intensity profile matching algorithm is used, and also with potential use cases in medical image volume visualization, where DVR plays an indispensable role for different clinical usages. | A Transfer Function Design Using A Knowledge Database based on Deep Image and Primitive Intensity Profile Features Retrieval | 10,786
We present an algorithm that allows a user within a virtual environment to perform real-time unconstrained cuts or consecutive tears, i.e., progressive, continuous fractures on a deformable rigged and soft-body mesh model, within 10 ms. In order to recreate realistic results for different physically principled materials such as sponges and hard or soft tissues, we incorporate a novel soft-body deformation via a particle system layered on top of a linear-blend skinning model. Our framework allows the simulation of realistic, surgical-grade cuts and continuous tears, especially valuable in the context of medical VR training. In order to achieve high performance in VR, our algorithms are based on Euclidean geometric predicates on the rigged mesh, without requiring any specific model pre-processing. The contribution of this work lies in the fact that current frameworks supporting similar kinds of model tearing either do not operate in high-performance real time or only apply to predefined tears. The framework presented allows the user to freely cut or tear a 3D mesh model in a consecutive way, in under 10 ms, while preserving its soft-body behaviour and/or allowing further animation. | Progressive tearing and cutting of soft-bodies in high-performance virtual reality | 10,787
In this work, we propose MAGES 4.0, a novel Software Development Kit (SDK) to accelerate the creation of collaborative medical training applications in VR/AR. Our solution is essentially a low-code metaverse authoring platform for developers to rapidly prototype high-fidelity and high-complexity medical simulations. MAGES breaks the authoring boundaries across extended reality, since networked participants can also collaborate using different virtual/augmented reality as well as mobile and desktop devices in the same metaverse world. With MAGES we propose an upgrade to the outdated 150-year-old master-apprentice medical training model. Our platform incorporates, in a nutshell, the following novelties: a) a 5G edge-cloud remote rendering and physics dissection layer, b) realistic real-time simulation of organic tissues as soft bodies under 10 ms, c) a highly realistic cutting and tearing algorithm, d) neural network assessment for user profiling, and e) a VR recorder to record, replay, and debrief the training simulation from any perspective. | MAGES 4.0: Accelerating the world's transition to VR training and democratizing the authoring of the medical metaverse | 10,788
Poisson surface reconstruction (PSR) remains a popular technique for reconstructing watertight surfaces from 3D point samples thanks to its efficiency, simplicity, and robustness. Yet, the existing PSR method and subsequent variants work only for oriented points. This paper demonstrates that an improved PSR, called iPSR, can completely eliminate the requirement of point normals and proceed in an iterative manner. In each iteration, iPSR takes as input point samples with normals directly computed from the surface obtained in the preceding iteration, and then generates a new surface with better quality. Extensive quantitative evaluation confirms that the new iPSR algorithm converges in 5-30 iterations even with randomly initialized normals. If initialized with a simple visibility-based heuristic, iPSR can further reduce the number of iterations. We conduct comprehensive comparisons with PSR and other powerful implicit-function based methods. Finally, we confirm iPSR's effectiveness and scalability on the AIM@SHAPE dataset and challenging (indoor and outdoor) scenes. Code and data for this paper are at https://github.com/houfei0801/ipsr. | Iterative Poisson Surface Reconstruction (iPSR) for Unoriented Points | 10,789
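The per-iteration normal update can be illustrated as a nearest-face normal lookup against the previously reconstructed mesh; a toy sketch using face centroids as proxies (the paper's actual transfer scheme may differ):

```python
import numpy as np

def face_normals_and_centroids(verts, faces):
    """Per-face unit normals and centroids of a triangle mesh."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return n, (v0 + v1 + v2) / 3.0

def update_normals(points, verts, faces):
    """iPSR-style normal update (sketch): each point sample inherits the
    normal of the nearest face of the previously reconstructed mesh."""
    normals, centroids = face_normals_and_centroids(verts, faces)
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return normals[np.argmin(d, axis=1)]

# toy example: samples near the z = 0 plane, previous mesh = unit quad
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = np.array([[0.2, 0.2, 0.05], [0.8, 0.9, -0.03]])
n = update_normals(pts, verts, faces)
```

Feeding the updated normals back into a standard PSR solve and repeating gives the iteration loop described in the abstract.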
Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a flexible model for upper-body interactions, based on the anticipation of the character's surroundings, and on antagonistic controllers to adapt the amount of muscular stiffness and response time to better deal with obstacles. Our solution relies on a hybrid method for character animation that couples a keyframe sequence with kinematic constraints and lightweight physics. The dynamic response of the character's upper-limbs leverages antagonistic controllers, allowing us to tune tension/relaxation in the upper-body without diverging from the reference keyframe motion. A new sight model, controlled by procedural rules, enables high-level authoring of the way the character generates interactions by adapting its stiffness and reaction time. As results show, our real-time method offers precise and explicit control over the character's behavior and style, while seamlessly adapting to new situations. Our model is therefore well suited for gaming applications. | Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments | 10,790
When we move on snow, sand, or mud, the ground deforms under our feet, immediately affecting our gait. We propose a physically based model for computing such interactions in real time, from only the kinematic motion of a virtual character. The force applied by each foot on the ground during contact is estimated from the weight of the character, its current balance, the foot speed at the time of contact, and the nature of the ground. We rely on a standard stress-strain relationship to compute the dynamic deformation of the soil under this force, where the amount of compression and lateral displacement of material are, respectively, parameterized by the soil's Young modulus and Poisson ratio. The resulting footprint is efficiently applied to the terrain through procedural deformations of refined terrain patches, while the addition of a simple controller on top of a kinematic character enables capturing the effect of ground deformation on the character's gait. As our results show, the resulting footprints greatly improve visual realism, while ground compression results in consistent changes in the character's motion. Readily applicable to any locomotion gait and soft soil material, our real-time model is ideal for enhancing the visual realism of outdoor scenes in video games and virtual reality applications. | Real-Time Locomotion on Soft Grounds With Dynamic Footprints | 10,791 |
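The stress-strain relation described above reduces, in the linear-elastic case, to a few lines of arithmetic; the parameter values below are illustrative, not taken from the paper:

```python
def footprint_deformation(force_n, foot_area_m2, layer_depth_m,
                          young_modulus_pa, poisson_ratio):
    """Linear-elastic estimate of a footprint: vertical compression of a
    soil layer under foot pressure, plus lateral displacement of material
    governed by the Poisson ratio."""
    stress = force_n / foot_area_m2              # pressure under the foot (Pa)
    axial_strain = stress / young_modulus_pa     # Hooke's law
    depth = axial_strain * layer_depth_m         # vertical sinking
    lateral = poisson_ratio * axial_strain * layer_depth_m  # side bulge
    return depth, lateral

# ~80 kg character on one foot over a 5 cm soft soil layer
# (force, area, and material constants are illustrative assumptions)
depth, lateral = footprint_deformation(
    force_n=800.0, foot_area_m2=0.02, layer_depth_m=0.05,
    young_modulus_pa=4.0e5, poisson_ratio=0.3)
```

A softer material (lower Young modulus) sinks more for the same contact force, and a higher Poisson ratio pushes more material sideways into the footprint rim.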
Fluidic devices are crucial components in many industrial applications involving fluid mechanics. Computational design of a high-performance fluidic system faces multifaceted challenges regarding its geometric representation and physical accuracy. We present a novel topology optimization method to design fluidic devices in a Stokes flow context. Our approach is distinguished by its capability to accommodate a broad spectrum of boundary conditions at the solid-fluid interface. Our key contribution is an anisotropic and differentiable constitutive model that unifies the representation of different phases and boundary conditions in a Stokes model, enabling a topology optimization method that can synthesize novel structures with accurate boundary conditions from a background grid discretization. We demonstrate the efficacy of our approach by conducting several fluidic system design tasks with over four million design parameters. | Fluidic Topology Optimization with an Anisotropic Mixture Model | 10,792
Computational fluid dynamic simulations often produce large clusters of finite elements with non-trivial, non-convex boundaries and uneven distributions among compute nodes, posing challenges to compositing during interactive volume rendering. Correct, in-place visualization of such clusters becomes difficult because viewing rays straddle domain boundaries across multiple compute nodes. We propose a GPU-based, scalable, memory-efficient direct volume visualization framework suitable for in situ and post hoc usage. Our approach reduces the memory usage of the unstructured volume elements by leveraging an exclusive-or-based index reduction scheme and provides fast ray-marching-based traversal without requiring large external data structures built over the elements themselves. Moreover, we present a GPU-optimized deep compositing scheme that allows correct-order compositing of intermediate color values accumulated across different ranks and works even for non-convex clusters. Our method scales well on large data-parallel systems and achieves interactive frame rates during visualization. We can interactively render both the Fun3D Small Mars Lander (14 GB / 798.4 million finite elements) and Huge Mars Lander (111.57 GB / 6.4 billion finite elements) data sets at 14 and 10 frames per second using 72 and 80 GPUs, respectively, on TACC's Frontera supercomputer. | GPU-based Data-parallel Rendering of Large, Unstructured, and Non-convexly Partitioned Data | 10,793
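The "exclusive or-based index reduction" above is not detailed here; the following sketch shows the classic XOR-linking idea it presumably builds on, storing a single `prev ^ next` value per element while still supporting forward traversal of an element chain:

```python
def xor_links(order):
    """Replace per-element (prev, next) index pairs by a single
    prev ^ next value, halving the link storage."""
    links = []
    for i in range(len(order)):
        prev = order[i - 1] if i > 0 else 0
        nxt = order[i + 1] if i + 1 < len(order) else 0
        links.append(prev ^ nxt)
    return links

def traverse(order, links):
    """March through the chain, recovering each next index from the
    stored XOR value and the previously visited index."""
    by_id = {e: l for e, l in zip(order, links)}
    prev, cur, out = 0, order[0], []
    while cur != 0:                      # 0 is the end sentinel,
        out.append(cur)                  # so element ids must be nonzero
        prev, cur = cur, by_id[cur] ^ prev
    return out

chain = [7, 3, 9, 4, 8]                  # element indices along a ray
links = xor_links(chain)
```

During ray marching, the current and previous element indices are already in registers, so only the XOR value needs to be fetched per step.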
Signed Distance Fields (SDFs) for surface representation are commonly generated offline and subsequently loaded into interactive applications like games. Since they are not updated every frame, they only provide a rigid surface representation. While there are methods to generate them quickly on GPU, the efficiency of these approaches is limited at high resolutions. This paper showcases a novel technique that combines jump flooding and ray tracing to generate approximate SDFs in real-time for soft shadow approximation, achieving prominent shadow penumbras while maintaining interactive frame rates. | RTSDF: Generating Signed Distance Fields in Real Time for Soft Shadow Rendering | 10,794
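Jump flooding propagates nearest-seed information in O(log n) passes with halving step sizes; a minimal CPU sketch of the algorithm on a 2D grid (the paper runs it on the GPU and combines it with ray tracing):

```python
import numpy as np

def jump_flood(seeds, n):
    """Jump Flooding Algorithm on an n x n grid: log2(n) passes with
    halving step sizes leave every cell holding (approximately) the
    coordinates of its nearest seed."""
    nearest = np.full((n, n, 2), -1, dtype=int)
    for sx, sy in seeds:
        nearest[sy, sx] = (sx, sy)
    step = n // 2
    while step >= 1:
        src = nearest.copy()             # ping-pong between passes
        for y in range(n):
            for x in range(n):
                best = src[y, x]
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        ny, nx2 = y + dy, x + dx
                        if not (0 <= ny < n and 0 <= nx2 < n):
                            continue
                        cand = src[ny, nx2]
                        if cand[0] < 0:  # neighbor has no seed yet
                            continue
                        if best[0] < 0 or (
                                (x - cand[0])**2 + (y - cand[1])**2
                                < (x - best[0])**2 + (y - best[1])**2):
                            best = cand
                nearest[y, x] = best
        step //= 2
    return nearest

def distance_field(nearest):
    """Unsigned distance to the recorded nearest seed per cell."""
    n = nearest.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    return np.hypot(xs - nearest[..., 0], ys - nearest[..., 1])

near = jump_flood([(2, 2), (13, 10)], 16)
dist = distance_field(near)
```

The same scheme extends to 3D with 27 neighbor offsets per pass; seeding surface voxels and adding a sign from inside/outside tests yields an approximate SDF.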
Depth of Field (DoF) in games is usually achieved as a post-process effect by blurring pixels in the sharp rasterized image based on the defined focus plane. This paper describes a novel real-time DoF technique that uses ray tracing with image filtering to achieve more accurate partial occlusion semi-transparencies on edges of blurry foreground geometry. This hybrid rendering technique leverages ray tracing hardware acceleration as well as spatio-temporal reconstruction techniques to achieve interactive frame rates. | Hybrid DoF: Ray-Traced and Post-Processed Hybrid Depth of Field Effect for Real-Time Rendering | 10,795
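Screen-space DoF passes typically derive the per-pixel blur radius from the thin-lens circle of confusion; a sketch of that standard formula (lens parameters below are illustrative, not from the paper):

```python
def circle_of_confusion(depth_m, focus_m, focal_length_m, aperture_m):
    """Thin-lens circle-of-confusion diameter on the sensor for a point
    at depth_m when the lens is focused at focus_m."""
    return (aperture_m * focal_length_m * abs(depth_m - focus_m)
            / (depth_m * (focus_m - focal_length_m)))

# 50 mm lens at f/2 (25 mm aperture), focused at 2 m
coc_far = circle_of_confusion(8.0, 2.0, 0.05, 0.025)
coc_in_focus = circle_of_confusion(2.0, 2.0, 0.05, 0.025)
```

Pixels on the focus plane get a zero radius and stay sharp, while foreground and background pixels are blurred proportionally to their CoC; the hybrid technique then ray-traces only the problematic semi-transparent foreground edges.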
For a foreground object in motion, details of its background which would otherwise be hidden are uncovered through its inner blur. This paper presents a novel hybrid motion blur rendering technique combining post-process image filtering and hardware-accelerated ray tracing. In each frame, we advance rays recursively into the scene to retrieve background information for inner blur regions and apply a post-process filtering pass on the ray-traced background and rasterized colour before compositing them together. Our approach achieves more accurate partial occlusion semi-transparencies for moving objects while maintaining interactive frame rates. | Hybrid MBlur: Using Ray Tracing to Solve the Partial Occlusion Artifacts in Real-Time Rendering of Motion Blur Effect | 10,796
We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices have limited computational power to achieve ray tracing in real-time. Hence, hardware-accelerated cloud servers can perform ray tracing instead and have their output streamed to clients in remote rendering. Applying the approach of distributed hybrid rendering, we leverage the computational capabilities of both the thin client and powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce a high-quality output while maintaining interactive frame rates. Our approach can achieve better visuals as compared to local rendering but faster performance as compared to remote rendering. | Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications | 10,797
Real-time depth of field in game cinematics tends to approximate the semi-transparent silhouettes of out-of-focus objects through post-processing techniques. We leverage ray tracing hardware acceleration and spatio-temporal reconstruction to improve the realism of such semi-transparent regions through hybrid rendering, while maintaining interactive frame rates for immersive gaming. This paper extends our previous work with a complete presentation of our technique and details on its design, implementation, and future work. | A Hybrid System for Real-time Rendering of Depth of Field Effect in Games | 10,798
Motion blur is commonly used in game cinematics to achieve photorealism by modelling the behaviour of the camera shutter and simulating its effect associated with the relative motion of scene objects. A common real-time post-process approach is spatial sampling, where the directional blur of a moving object is rendered by integrating its colour based on velocity information within a single frame. However, such screen space approaches typically cannot produce accurate partial occlusion semi-transparencies. Our real-time hybrid rendering technique leverages hardware-accelerated ray tracing to correct post-process partial occlusion artifacts by advancing rays recursively into the scene to retrieve background information for motion-blurred regions, with reasonable additional performance cost for rendering game contents. We extend our previous work with details on the design, implementation, and future work of the technique as well as performance comparisons with post-processing. | Hybrid MBlur: A Systematic Approach to Augment Rasterization with Ray Tracing for Rendering Motion Blur in Games | 10,799