text | source | __index_level_0__ |
|---|---|---|
We revisit the design space of visualizations, aiming to identify and relate its components. To this end, we establish a model to examine the process through which visualizations become expressive for users. This model has led us to a taxonomy oriented to human visual perception, a conceptualization that provides natural criteria for delineating a novel understanding of the visualization design space. The new organization of concepts that we introduce is our main contribution: a grammar for visualization design based on a review of former works and of classical and state-of-the-art techniques. Accordingly, the paper is presented as a survey whose structure introduces a new conceptualization for the space of visual analysis techniques. | The Spatial-Perceptual Design Space: a new comprehension for Data
Visualization | 10,000 |
In this report, we revisit the work of Pilleboue et al. [2015], providing a representation-theoretic derivation of the closed-form expression for the expected value and variance in homogeneous Monte Carlo integration. We show that the results obtained for the variance estimation of Monte Carlo integration on the torus, the sphere, and Euclidean space can be formulated as specific instances of a more general theory. We review the related representation theory and show how it can be used to derive a closed-form solution. | Variance Analysis for Monte Carlo Integration: A
Representation-Theoretic Perspective | 10,001 |
We propose a new gradient-domain technique for processing registered EM image stacks to remove inter-image discontinuities while preserving intra-image detail. To this end, we process the image stack by first performing anisotropic smoothing along the slice axis and then solving a Poisson equation within each slice to re-introduce the detail. The final image stack is continuous across the slice axis and maintains sharp details within each slice. Adapting existing out-of-core techniques for solving the linear system, we describe a parallel algorithm with time complexity that is linear in the size of the data and space complexity that is sub-linear, allowing us to process datasets as large as five teravoxels with a 600 MB memory footprint. | Gradient-Domain Fusion for Color Correction in Large EM Image Stacks | 10,002 |
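A minimal single-slice sketch of the detail re-introduction step, phrased as a screened Poisson problem (an illustrative small-scale stand-in; the paper's out-of-core, parallel solver is not reproduced here, and `lam` is a hypothetical blending weight):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def reintroduce_detail(original, smoothed, lam=0.1):
    """Solve (L + lam*I) u = L @ original + lam*smoothed for one slice,
    where L is the grid Laplacian: u keeps the in-slice gradients of
    `original` while staying close to the axially `smoothed` slice."""
    h, w = original.shape
    n = h * w
    idx = lambda i, j: i * w + j
    rows, cols, vals = [], [], []
    for i in range(h):
        for j in range(w):
            # 5-point Laplacian stencil with natural boundary handling
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            rows.append(idx(i, j)); cols.append(idx(i, j)); vals.append(float(len(nbrs)))
            for ni, nj in nbrs:
                rows.append(idx(i, j)); cols.append(idx(ni, nj)); vals.append(-1.0)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    A = L + lam * sp.identity(n)
    b = L @ original.ravel() + lam * smoothed.ravel()
    return spla.spsolve(A.tocsc(), b).reshape(h, w)
```

This minimizes ||grad u - grad original||^2 + lam*||u - smoothed||^2 per slice, which is the essence of re-introducing intra-slice detail on top of the axially smoothed stack.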
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details. | Pushing the Limits of 3D Color Printing: Error Diffusion with
Translucent Materials | 10,003 |
This paper presents an analytical taxonomy that can suitably describe, rather than simply classify, techniques for data presentation. Unlike previous works, we do not consider particular aspects of visualization techniques, but rather their mechanisms and perceptual foundations. Instead of just fitting visualization research into a classification system, our aim is to better understand its process. To do so, we start from elementary concepts and build up to a model that can describe how visualization techniques work and how they convey meaning. | Reviewing Data Visualization: an Analytical Taxonomical Study | 10,004 |
In this paper, we present a data-driven approach to generate realistic steering behaviors for virtual crowds in crowd simulation. We take advantage of both rule-based models and data-driven models by applying the interaction patterns discovered from crowd videos. Unlike existing example-based models, in which current states are matched directly to states extracted from crowd videos, our approach adopts a hierarchical mechanism to generate the steering behaviors of agents. First, each agent is classified into one of the interaction patterns that are automatically discovered from crowd videos before simulation. Then the best-matched action is selected from the associated interaction pattern to generate the steering behaviors of the agent. By doing so, agents avoid performing simple state matching as in traditional example-based approaches, and can perform a wider variety of steering behaviors as well as mimic the cognitive process of pedestrians. Simulation results on scenarios with different crowd densities and main motion directions demonstrate that our approach performs better than two state-of-the-art simulation models in terms of prediction accuracy. Moreover, our approach is efficient enough to run at interactive rates in real-time simulation. | A Clustering Based Approach for Realistic and Efficient Data-Driven
Crowd Simulation | 10,005 |
There is an increasing need for efficient image retargeting techniques to adapt content to various forms of digital media. With the rapid growth of mobile communications and dynamic web page layouts, one often needs to resize media content to fit the desired display sizes. Across varied web page layouts and the typically small screens of handheld portable devices, important content in the original image becomes obscured when the image is resized by uniform scaling. There is thus a need to resize images in a content-aware manner that automatically discards irrelevant information from the image and presents the salient features with greater prominence. Several image retargeting techniques have been proposed with the content of the input image in mind. However, these techniques fail to be globally effective across various kinds of images and desired sizes. The major problem is the inability of these algorithms to process images with minimal visual distortion while also retaining the meaning conveyed by the image. In this dissertation, we present a novel perspective on content-aware image retargeting that can be implemented in real time. We introduce a novel method of analysing semantic information within the input image while also maintaining the important and visually significant features. We present the various nuances of our algorithm mathematically and logically, and show that our results surpass those of state-of-the-art techniques. | A Novel Semantics and Feature Preserving Perspective for Content Aware
Image Retargeting | 10,006 |
The "Ajeijadinho 3D" project is an initiative supported by the University of S\~ao Paulo (Museum of Science and Dean of Culture and Extension), which involves the 3D digitization of art works of Brazilian sculptor Antonio Francisco Lisboa, better known as Aleijadinho. The project made use of advanced acquisition and processing of 3D meshes for preservation and dissemination of the cultural heritage. The dissemination occurs through a Web portal, so that the population has the opportunity to meet the art works in detail using 3D visualization and interaction. The portal address is http://www.aleijadinho3d.icmc.usp.br. The 3D acquisitions were conducted over a week at the end of July 2013 in the cities of Ouro Preto, MG, Brazil and Congonhas do Campo, MG, Brazil. The scanning was done with a special equipment supplied by company Leica Geosystems, which allowed the work to take place at distances between 10 and 30 meters, defining a non-invasive procedure, simplified logistics, and without the need for preparation or isolation of the sites. In Ouro Preto, we digitized the churches of Francisco of Assis, Our Lady of Carmo, and Our Lady of Mercy; in Congonhas do Campo we scanned the entire Sanctuary of Bom Jesus de Matosinhos and his 12 prophets. Once scanned, the art works went through a long process of preparation, which required careful handling of meshes done by experts from the University of S\~ao Paulo in partnership with company Imprimate. | The 12 prophets dataset | 10,007 |
This paper proposes a simple and efficient method for the reconstruction and extraction of geometric parameters from 3D tubular objects. Our method constructs an image that accumulates surface normal information; peaks within this image are then located by tracking, and finally their positions are optimized to lie precisely on the centerline of the tubular shape. This method is very versatile and is able to process various input data types, such as full or partial meshes acquired from 3D laser scans, 3D height maps or discrete volumetric images. The proposed algorithm is simple to implement, contains few parameters and can be computed in linear time with respect to the number of surface faces. Since the extracted tube centerline is accurate, we are able to decompose the tube into rectilinear parts and torus-like parts. This is done with a new linear-time 3D torus detection algorithm, which follows the same principle as previous work on 2D circular arc recognition. Detailed experiments show the versatility, accuracy and robustness of our new method. | 3D Geometric Analysis of Tubular Objects based on Surface Normal
Accumulation | 10,008 |
3D shape creation and modeling remains a challenging task, especially for novice users. Many methods in the field of computer graphics have been proposed to automate the often repetitive and precise operations needed during the modeling of detailed shapes. This report surveys different approaches to shape modeling and correspondence, especially for shapes exhibiting topological complexity. We focus on methods designed to help generate or process shapes with a large number of interconnected components, as often found in man-made shapes. We first discuss a variety of modeling techniques that leverage existing shapes in easy-to-use creative modeling systems. We then discuss possible correspondence strategies for topologically different shapes, since correspondence is a requirement for such systems. Finally, we look at different shape representations and tools that facilitate the modification of shape topology, focusing on those particularly useful in free-form 3D modeling. | Modeling and Correspondence of Topologically Complex 3D Shapes | 10,009 |
In recent years, Distributed Visualization over Personal Computer (PC) clusters has become important for research and industrial communities, as such clusters have made large-scale visualizations practical and more accessible. In this work we survey Distributed Visualization techniques, compiling the last decade's literature on the use of PC clusters as suitable alternatives to high-end workstations. We review the topic by defining basic concepts, enumerating system requirements and implementation challenges, and presenting up-to-date methodologies. Our work serves newcomers as an introductory compilation while also helping experienced professionals by organizing ideas. | A Survey on Distributed Visualization Techniques over Clusters of
Personal Computers | 10,010 |
Designing programming environments for physical simulation is challenging because simulations rely on diverse algorithms and geometric domains. These challenges are compounded when we try to run efficiently on heterogeneous parallel architectures. We present Ebb, a domain-specific language (DSL) for simulation that runs efficiently on both CPUs and GPUs. Unlike previous DSLs, Ebb uses a three-layer architecture to separate (1) simulation code, (2) definition of data structures for geometric domains, and (3) runtimes supporting parallel architectures. Different geometric domains are implemented as libraries that use a common, unified, relational data model. By structuring the simulation framework in this way, programmers implementing simulations can focus on the physics and algorithms for each simulation without worrying about their implementation on parallel computers. Because the geometric domain libraries are all implemented using a common runtime based on relations, new geometric domains can be added as needed, without specifying the details of memory management, mapping to different parallel architectures, or having to expand the runtime's interface. We evaluate Ebb by comparing it to several widely used simulations, demonstrating comparable performance to hand-written GPU code where available, and surpassing existing CPU performance optimizations by up to 9$\times$ when no GPU code exists. | Ebb: A DSL for Physical Simulation on CPUs and GPUs | 10,011 |
One of the most useful techniques to support visual data analysis systems is interactive filtering (brushing). However, visualization techniques often suffer from overlapping graphical items and the complexity of multiple attributes, making visual selection inefficient. In these situations, the benefits of data visualization are not fully realized because the graphical items do not stand out as comprehensible patterns. In this work we propose the use of content-based data retrieval technology combined with visual analytics. The idea is to use the similarity query functionalities provided by metric space systems to select regions of the data domain according to user guidance and interests. The data found in such regions then feed multiple visualization workspaces, so that the user can inspect the corresponding datasets. Our experiments showed that the methodology can break the visual analysis process into smaller problems (views) and that the views match the analyst's expectations according to his/her similarity query selection, improving data perception and analytical possibilities. Our contribution introduces a principle that can be used in all sorts of visualization techniques and systems; this principle can be extended with different kinds of visualization-metric-space integration and with different metrics, expanding the possibilities of visual data analysis in aspects such as semantics and scalability. | Combining Visual Analytics and Content Based Data Retrieval Technology
for Efficient Data Analysis | 10,012 |
Solving large-scale optimization problems on the fly is often difficult for real-time computer graphics applications. To tackle this challenge, model reduction is a well-adopted technique. Despite its usefulness, model reduction often requires a handcrafted subspace that spans a domain that hypothetically embodies desirable solutions. For many applications, obtaining such subspaces case by case is either impossible or requires extensive human labor, and hence does not readily offer a scalable solution for a growing number of tasks. We propose linear variational subspace design for large-scale constrained quadratic programming, which can be computed automatically without any human intervention. We provide a meaningful approximation error bound that substantiates the quality of the calculated subspace, and demonstrate its empirical success in interactive deformable modeling for triangular and tetrahedral meshes. | On the Approximation Theory of Linear Variational Subspace Design | 10,013 |
In this paper, we present a hybrid graph-drawing algorithm (GDA) for laying out large, naturally-clustered, disconnected graphs. We call it a hybrid algorithm because it is an implementation of a series of already known graph-drawing and graph-theoretic procedures. This hybrid remedies the inability of current force-based GDAs to scale to large, naturally-clustered, and disconnected graphs. These kinds of graphs usually model the complex inter-relationships among entities in social, biological, natural, and artificial networks. Expectedly, the hybrid runs longer than current GDAs. By using two extreme cases of graphs as inputs, we present in this paper the derivation of the time complexity of the hybrid, which we found to be $O(|V|^3)$. | A Hybrid Graph-drawing Algorithm for Large, Naturally-clustered,
Disconnected Graphs | 10,014 |
Handle-driven deformation based on linear blending is widely used in many applications because of its intuitiveness, efficiency and ease of implementation. We provide a meshfree method to compute the smooth weights of linear blending for shape deformation. The C2-continuity of the weighting is guaranteed by carefully formulated basis functions, with which the computation of weights is in closed form. Criteria that ensure the quality of deformation are preserved by the basis functions after decomposing the shape domain according to the Voronoi diagram of the handles. The cost of inserting a new handle is only the time to evaluate the distances from the new handle to all sample points in the space of deformation. Moreover, a virtual handle insertion algorithm has been developed to allow users to freely place handles while preserving the criteria on the weights. Experimental examples of real-time 2D/3D deformations demonstrate the effectiveness of this method. | Meshfree C^2-Weighting for Shape Deformation | 10,015 |
Hermite radial basis function (HRBF) implicits have been used to reconstruct surfaces from scattered Hermite data points. In this work, we propose a closed-form formulation to construct HRBF-based implicits by a quasi-solution approximating the exact solution. A scheme is developed to automatically adjust the support sizes of the basis functions to hold the error bound of a quasi-solution. Our method can generate an implicit function from the positions and normals of scattered points without any global operation. Working together with an adaptive sampling algorithm, the HRBF-based implicits can also reconstruct surfaces from point clouds with non-uniformity and noise. Robust and efficient reconstruction has been observed in our experimental tests on real data captured from a variety of scenes. | A Closed-Form Formulation of HRBF-Based Surface Reconstruction | 10,016 |
We analyze current methods that generate smooth frame fields in both 2D and 3D. We formalize the 2D problem by representing frames as functions (as was done in 3D), and show that the derived optimization problem is the one that previous work obtains via "representation vectors." We show (in 2D) why this non-linear optimization problem is easier to solve than directly minimizing the rotation angle of the field, and observe that the 2D algorithm is able to find good fields. Since the 2D and 3D optimization problems are derived from the same formulation (based on representing frames by functions), and their energies share some similarities from an optimization point of view (smoothness, local minima, bounds on partial derivatives, etc.), we applied the 2D resolution mechanism to the 3D problem. Our evaluation of all existing 3D methods suggests initializing the field with this new algorithm, but possibly using another method for further smoothing. | On Smooth 3D Frame Field Design | 10,017 |
In this paper, a de Casteljau algorithm to compute (p,q)-Bernstein Bézier curves based on (p,q)-integers is introduced. We study the nature of degree elevation and degree reduction for (p,q)-Bernstein Bézier functions. The new curves have some properties similar to q-Bézier curves. Moreover, we construct the corresponding tensor product surfaces over the rectangular domain (u, v) \in [0, 1] \times [0, 1] depending on four parameters. We also study the de Casteljau algorithm and degree elevation properties of the surfaces for these generalizations over the rectangular domain. Furthermore, some fundamental properties of (p,q)-Bernstein Bézier curves are discussed. We recover q-Bézier curves and surfaces for (u, v) \in [0, 1] \times [0, 1] when we set the parameters p1 = p2 = 1. | A de Casteljau Algorithm for Bernstein type Polynomials based on
(p,q)-integers | 10,018 |
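For reference, the (p,q)-integers on which such constructions rest are standard objects of post-quantum calculus (these definitions are textbook material and not specific to this paper):

```latex
[k]_{p,q} = \frac{p^{k}-q^{k}}{p-q}, \qquad
[k]_{p,q}! = \prod_{j=1}^{k}[j]_{p,q}, \qquad
\binom{m}{k}_{p,q} = \frac{[m]_{p,q}!}{[k]_{p,q}!\,[m-k]_{p,q}!}.
```

Setting $p=1$ recovers the q-integers $[k]_q = 1+q+\dots+q^{k-1}$, and $p=q=1$ recovers the ordinary integers, which is why the construction reduces to q-Bézier and classical Bézier curves in those limits.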
Good parametrisations of affine transformations are essential to interpolation, deformation, and analysis of shape, motion, and animation, and have been one of the central research topics in computer graphics. However, there is no single perfect method, and each has both advantages and disadvantages. In this paper, we propose a novel parametrisation of affine transformations, which generalises or improves upon existing methods. Our method adds another choice to the existing toolbox and shows better performance in some applications. A C++ implementation is available to make our framework ready to use in various applications. | A concise parametrisation of affine transformation | 10,019 |
L-BFGS is a hill-climbing method that is guaranteed to converge only for convex problems. In computer graphics, it is often used as a black-box solver for a more general class of non-linear problems, including problems having many local minima. Some works obtain very nice results by solving such difficult problems with L-BFGS. Surprisingly, the method is able to escape local minima: our interpretation is that the approximation of the Hessian is smoother than the real Hessian, making it possible to evade the local minima. We analyse the behavior of L-BFGS on the design of 2D frame fields. This involves an energy function that is infinitely differentiable, strongly non-linear, and has many local minima. Moreover, the local minima have a clear visual interpretation: they correspond to different frame field topologies. We observe that the performance of L-BFGS is almost unpredictable: it is very competitive when the field is sampled on the primal graph, but really poor when it is sampled on the dual graph. | Inappropriate use of L-BFGS, Illustrated on frame field design | 10,020 |
The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and limited vocabulary borrowed from working practices in traditional movie-making, and is intended to be readable both by machines and humans. The language is designed to serve as a high-level user interface for intelligent cinematography and editing systems. | The Prose Storyboard Language: A Tool for Annotating and Directing
Movies | 10,021 |
In digital painting software, layers organize paintings. However, layers are not explicitly represented, transmitted, or published with the final digital painting. We propose a technique to decompose a digital painting into layers. In our decomposition, each layer represents a coat of paint of a single paint color applied with varying opacity throughout the image. Our decomposition is based on the painting's RGB-space geometry. In RGB-space, a geometric structure is revealed due to the linear nature of the standard Porter-Duff "over" pixel compositing operation. The vertices of the convex hull of pixels in RGB-space suggest paint colors. Users choose the degree of simplification to perform on the convex hull, as well as a layer order for the colors. We solve a constrained optimization problem to find maximally translucent, spatially coherent opacity for each layer, such that the composition of the layers reproduces the original image. We demonstrate the utility of the resulting decompositions for re-editing. | Decomposing Digital Paintings into Layers via RGB-space Geometry | 10,022 |
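A minimal sketch of the first step, suggesting paint colors from the convex hull of the pixels in RGB-space (hull simplification, layer ordering and the opacity optimization are not shown):

```python
import numpy as np
from scipy.spatial import ConvexHull
from PIL import Image

def candidate_palette(image_path):
    """Return the RGB colors at the vertices of the convex hull of all
    pixels in RGB-space; by the geometry of "over" compositing, these
    suggest the painting's underlying paint colors."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float) / 255.0
    pixels = rgb.reshape(-1, 3)          # one row per pixel in RGB-space
    hull = ConvexHull(pixels)
    return pixels[hull.vertices]         # candidate paint colors
```

In practice the raw hull has many vertices, which is why the paper lets users simplify it before solving for per-layer opacities.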
Many colour maps provided by vendors have highly uneven perceptual contrast over their range. It is not uncommon for colour maps to have perceptual flat spots that can hide a feature as large as one tenth of the total data range. Colour maps may also have perceptual discontinuities that induce the appearance of false features. Previous work in the design of perceptually uniform colour maps has mostly failed to recognise that CIELAB space is only designed to be perceptually uniform at very low spatial frequencies. The most important factor in designing a colour map is to ensure that the magnitude of the incremental change in perceptual lightness of the colours is uniform. The specific requirements for linear, diverging, rainbow and cyclic colour maps are developed in detail. To support this work two test images for evaluating colour maps are presented. The use of colour maps in combination with relief shading is considered and the conditions under which colour can enhance or disrupt relief shading are identified. Finally, a set of new basis colours for the construction of ternary images is presented. Unlike the RGB primaries, these basis colours produce images in which the salience of structures is consistent irrespective of the assignment of basis colours to data channels. | Good Colour Maps: How to Design Them | 10,023 |
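The central uniformity test can be sketched in a few lines (assuming matplotlib and scikit-image are available; this is illustrative, not the author's evaluation code):

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2lab

def lightness_increments(cmap_name="viridis", n=256):
    """Return the per-step change in CIELAB lightness L* along a colour map.
    A perceptually uniform map should have near-constant increments;
    near-zero steps are the "flat spots" that can hide features."""
    rgb = plt.get_cmap(cmap_name)(np.linspace(0.0, 1.0, n))[:, :3]
    lab = rgb2lab(rgb.reshape(1, n, 3))[0]   # rows of (L*, a*, b*)
    return np.diff(lab[:, 0])                # incremental lightness steps
```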
CODYRUN is a software package for computational aeraulic (airflow) and thermal simulation in buildings, developed by the Laboratory of Building Physics and Systems (L.P.B.S). Numerical simulation codes for artificial lighting have been introduced to extend the tool's capabilities. These calculation codes are able to predict the amount of light received at any point of a given working plane from one or more sources installed on the ceiling of the room. The model used for these calculations is original and semi-detailed (simplified). The reference test cases of task 3 of Technical Committee TC-33 of the International Commission on Illumination (CIE) were applied to the software to verify that it handles this photometric aspect reliably. This gave a precise idea of the reliability of the results of the numerical simulations. | Elements of Validation of Artificial Lighting through the Software
CODYRUN: Application to a Test Case of the International Commission on
Illumination (CIE) | 10,024 |
We present a technique for designing 3D-printed perforated lampshades, which project continuous grayscale images onto the surrounding walls. Given the geometry of the lampshade and a target grayscale image, our method computes a distribution of tiny holes over the shell, such that the combined footprints of the light emanating through the holes form the target image on a nearby diffuse surface. Our objective is to approximate the continuous tones and the spatial detail of the target image, to the extent possible within the constraints of the fabrication process. To ensure structural integrity, there are lower bounds on the thickness of the shell, the radii of the holes, and the minimal distances between adjacent holes. Thus, the holes are realized as thin tubes distributed over the lampshade surface. The amount of light passing through a single tube may be controlled by the tube's radius and by its direction (tilt angle). The core of our technique thus consists of determining a suitable configuration of the tubes: their distribution across the relevant portion of the lampshade, as well as the parameters (radius, tilt angle) of each tube. This is achieved by computing a capacity-constrained Voronoi tessellation over a suitably defined density function, and embedding a tube inside the maximal inscribed circle of each tessellation cell. The density function for a particular target image is derived from a series of simulated images, each corresponding to a different uniform density tube pattern on the lampshade. | Printed Perforated Lampshades for Continuous Projective Images | 10,025 |
In this paper, we present a surface remeshing method with high approximation quality based on Principal Component Analysis. Given a triangular mesh and a user-assigned polygon/vertex budget, traditional methods usually require an extra curvature metric field to obtain the anisotropy that best approximates the surface, even though the estimated curvature metric is known to be imperfect and the information it encodes is already contained in the surface itself. In our approach, this anisotropic control is achieved through an optimal geometry partition, without an explicit metric field. The minimization of our proposed partition energy has the following properties. First, on a C2 surface, it is theoretically guaranteed to yield the optimal aspect ratio and cluster size specified in approximation theory for L1 piecewise linear approximation. Second, it captures sharp features on practical models without any pre-tagging. We develop an effective merging-swapping framework to seek the optimal partition and construct the polygonal/triangular mesh afterwards. The effectiveness and efficiency of our method are demonstrated through comparisons with other state-of-the-art remeshing methods. | Surface Approximation via Asymptotic Optimal Geometric Partition | 10,026 |
Screen content coding (SCC) is becoming increasingly important in various applications, such as desktop sharing, video conferencing, and remote education. Compared to natural camera-captured content, screen content has different characteristics, in particular sharper edges. In this paper, we propose a novel intra prediction scheme for screen content video. In the proposed scheme, bilinear interpolation in HEVC angular intra prediction is selectively replaced by nearest-neighbor intra prediction to preserve the sharp edges in screen content video. We present three different variants of the proposed nearest-neighbor prediction algorithm: two implicit methods where both the encoder and the decoder derive whether to perform nearest-neighbor prediction based on either (a) the sum of absolute differences, or (b) the difference between the boundary pixels from which prediction is performed; and a third variant where a Rate-Distortion-Optimization (RDO) search is performed at the encoder to decide whether or not to use the nearest-neighbor interpolation, with the decision explicitly signaled to the decoder. We also discuss the underlying trade-offs in terms of the complexity of the three variants. All three proposed variants provide significant gains over HEVC, and simulation results show that average gains of 3.3% BD-bitrate in intra-frame coding are achieved by the RDO variant for screen content video. To the best of our knowledge, this is the first paper that 1) points out that the current HEVC intra prediction scheme with bilinear interpolation does not work efficiently for screen content video and 2) uses different filters adaptively in HEVC intra prediction interpolation. | On Intra Prediction for Screen Content Video Coding | 10,027 |
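The contrast between the two predictors can be sketched as follows, using HEVC's 1/32-sample fractional accuracy (an illustrative sketch, not the HM reference implementation):

```python
def predict_sample(ref, i, frac):
    """Interpolate a predicted sample between reference pixels ref[i] and
    ref[i+1] at fractional position frac in [0, 32)."""
    # HEVC-style two-tap bilinear filter: averages the neighbours,
    # which blurs the sharp edges typical of screen content
    bilinear = ((32 - frac) * ref[i] + frac * ref[i + 1] + 16) >> 5
    # Nearest-neighbour variant: snaps to the closer reference pixel,
    # preserving the edge at the cost of staircase artifacts elsewhere
    nearest = ref[i] if frac < 16 else ref[i + 1]
    return bilinear, nearest
```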
Existing bidirectional reflectance distribution function (BRDF) models are capable of capturing the distinctive highlights produced by the fibrous nature of wood. However, capturing parameter textures for even a single specimen remains a laborious process requiring specialized equipment. In this paper we take a procedural approach to generating parameters for the wood BSDF. We characterize the elements of trees that are important for the appearance of wood, discuss techniques appropriate for representing those features, and present a complete procedural wood shader capable of reproducing the growth patterns responsible for the distinctive appearance of highly prized ``figured'' wood. Our procedural wood shader is random-access, 3D, modular, and is fast enough to generate a preview for design. | Procedural wood textures | 10,028 |
In this paper, we use the blending functions of Bernstein polynomials with shifted knots for the construction of Bézier curves and surfaces. We study the nature of degree elevation and degree reduction for Bernstein Bézier functions with shifted knots. Parametric curves are represented using this modified Bernstein basis, and the concept of total positivity is applied to investigate the shape properties of the curve. We recover the Bézier curve defined on [0, 1] when we set the parameters \alpha=\beta to the value 0. We also present a de Casteljau algorithm to compute Bernstein Bézier curves and surfaces with shifted knots. The new curves have some properties similar to Bézier curves. Furthermore, some fundamental properties of Bernstein Bézier curves and surfaces are discussed. | Bezier curves and surfaces based on modified Bernstein polynomials | 10,029 |
Task mapping in modern high performance parallel computers can be modeled as a graph embedding problem, which simulates the mapping as embedding one graph into another and tries to find the minimum wirelength for the mapping. Though embedding problems have been considered for several regular graphs, such as hypercubes into grids and binary trees into grids, the problem remains open for hypercubes into cylinders. In this paper, we consider the problem of embedding hypercubes into cylinders to minimize the wirelength. We obtain the exact wirelength formula for embedding the hypercube $Q^r$ into the cylinder $C_{2^3}\times P_{2^{r-3}}$ with $r\ge3$. | Embedding of Hypercube into Cylinder | 10,030 |
Biopsy is commonly used to confirm cancer diagnosis when radiologically indicated. Given the ability of PET to localize malignancies in heterogeneous tumors and tumors that do not have a CT correlate, PET/CT guided biopsy may improve the diagnostic yield of biopsies. To facilitate PET/CT guided needle biopsy, we developed a workflow that allows us to bring PET image guidance into the interventional CT suite. In this abstract, we present SlicerPET, a user-friendly workflow-based module developed using open source software libraries to guide needle biopsy in the interventional suite. | SlicerPET: A workflow based software module for PET/CT guided needle
biopsy | 10,031 |
The idea of style similarity metrics has been recently developed for various media types such as 2D clip art and 3D shapes. We explore this style metric problem and improve existing style similarity metrics of 3D shapes in four novel ways. First, we consider the color and texture of 3D shapes, which are important properties that have not been previously considered. Second, we explore the effect of clustering a dataset of 3D models by comparing style metrics for a single object type with style metrics that combine clusters of object types. Third, we explore the idea of user-guided learning for this problem. Fourth, we introduce an iterative approach that can learn a metric from a general set of 3D models. We demonstrate these contributions with various classes of 3D shapes and with applications such as style-based similarity search and scene composition. | Improving Style Similarity Metrics of 3D Shapes | 10,032 |
Tone-mapping operators (TMO) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of fifteen TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment and the setups were designed to emphasise different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance between physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduce the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments. | Which tone-mapping operator is the best? A comparative study of
perceptual quality | 10,033 |
The reassembly of broken archaeological ceramic pottery is an open and complex problem, and remains a scientific process of extreme interest to the archaeological community. Usually, the solutions suggested by various research groups and universities depend on aspects such as the matching of the broken surfaces, the outlines of the sherds or their colors and geometric characteristics, their axis of symmetry, the corners of their contours, the theme portrayed on the surface, or the concentric circular rills left on the inner side of the pottery by the fingers of the potter during base construction. In this work the reassembly process is based on a different and more reliable idea: the thickness profile, which is appropriately identified in every fragment. Specifically, our approach is based on information encapsulated in the inner part of the sherd (i.e. thickness), which is not (or at least not heavily) affected by the presence of harsh environmental conditions, but is safely kept within the sherd itself. Our method is verified in various use-case experiments, using cutting-edge technologies such as 3D representations and precise measurements on surfaces from the acquired 3D models. | 3D digital reassembling of archaeological ceramic pottery fragments
based on their thickness profile | 10,034 |
Targeted user studies are often employed to measure how well artists can perform specific tasks. But these studies cannot properly describe editing workflows as wholes, since they guide the artists both by choosing the tasks and by using simplified interfaces. In this paper, we investigate digital sculpting workflows used to produce detailed models. In our experiment design, artists can choose freely what and how to model. We recover whole-workflow trends with sophisticated statistical analyses and validate these trends with goodness-of-fit measures. We record brush strokes and mesh snapshots by instrumenting a sculpting program and analyze the distribution of these properties and their spatial and temporal characteristics. We hired expert artists that can produce relatively sophisticated models in a short time, since their workflows are representative of best practices. We analyze 13 meshes corresponding to roughly 25 thousand strokes in total. We found that artists work mainly with short strokes, with average stroke length dependent on model features rather than on the artist. Temporally, artists do not work coarse-to-fine but rather in bursts. Spatially, artists focus on some selected regions by dedicating different amounts of edits and by applying different techniques. Spatio-temporally, artists return to work on the same area multiple times without any apparent periodicity. We release the entire dataset and all code used for the analyses as reference for the community. | SculptStat: Statistical Analysis of Digital Sculpting Workflows | 10,035 |
The generalized winding number function measures insideness for arbitrary oriented triangle meshes. Exploiting this, I similarly generalize binary boolean operations to act on such meshes. The resulting operations for union, intersection, difference, etc. avoid volumetric discretization or pre-processing. | Boolean Operations using Generalized Winding Numbers | 10,036 |
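A direct dense sketch of the quantity involved, using the Van Oosterom-Strackee signed solid angle formula (an illustrative implementation, not the paper's code; points with w near 1 lie inside, near 0 outside):

```python
import numpy as np

def winding_number(q, V, F):
    """Generalized winding number of point q w.r.t. triangle mesh (V, F):
    the sum of signed solid angles subtended by each triangle, over 4*pi."""
    a = V[F[:, 0]] - q
    b = V[F[:, 1]] - q
    c = V[F[:, 2]] - q
    la, lb, lc = (np.linalg.norm(x, axis=1) for x in (a, b, c))
    det = np.einsum('ij,ij->i', a, np.cross(b, c))        # a . (b x c)
    denom = (la * lb * lc
             + np.einsum('ij,ij->i', a, b) * lc
             + np.einsum('ij,ij->i', a, c) * lb
             + np.einsum('ij,ij->i', b, c) * la)
    # solid angle per triangle is 2*atan2(det, denom); divide the sum by 4*pi
    return np.arctan2(det, denom).sum() / (2.0 * np.pi)
```

Thresholding w at 1/2 then yields a robust inside/outside classification even for open or self-intersecting inputs, which is what makes the boolean generalization possible.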
We present a new method for the interpolation of given data points and associated normals by parametric surface patches with rational normal fields. We give some arguments why a dual approach is the most convenient for these surfaces, which are traditionally called Pythagorean normal vector (PN) surfaces. Our construction is based on the isotropic model of the dual space, into which the original data are pushed. Bicubic Coons patches are then constructed in the isotropic space and pulled back to the standard three-dimensional space. As a result we obtain a patch construction that is completely local and produces surfaces with global G1 continuity. | Smooth surface interpolation using patches with rational offsets | 10,037 |
Bezigons, i.e., closed paths composed of B\'ezier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality. | Effective Clipart Image Vectorization Through Direct Optimization of
Bezigons | 10,038 |
Game level editing is the process of constructing a full game level starting from 3D asset libraries, e.g. 3D models, textures, shaders, and scripts. In level editing, designers define the look and behavior of the whole level by placing objects, assigning materials and lighting parameters, setting animations and physics properties, and customizing the objects' AI and behavior by editing scripts. The heterogeneity of the task usually translates into a workflow where a team of people, each expert in separate aspects, cooperates to edit the game level, often working on the same objects (e.g. a programmer working on the AI of a character while an artist works on its 3D model or its materials). Today this collaboration is established by using version control systems designed for text documents, such as Git, to manage different versions and share them amongst users. The merge algorithms used in these systems, though, do not perform well in our case, since they do not respect the relations between game objects necessary to maintain the semantics of the game level's behavior and look. This is a known problem, and commercial systems for game level merging exist, e.g. PlasticSCM, but these are only slightly more robust than text-based ones. This causes designers to often merge scenes manually, essentially reapplying others' edits in the game level editor. | LevelMerge: Collaborative Game Level Editing by Merging Labeled Graphs | 10,039 |
This paper presents a new method for modelling the dynamic behaviour of developable ribbons: two-dimensional strips with much smaller width than length. Instead of approximating such a surface with a general triangle mesh, we characterize it by a set of creases and the bending angles across them. This representation allows developability to be satisfied everywhere while still leaving enough degrees of freedom to represent salient global deformation. We show how the potential and kinetic energies can be properly discretized in this configuration space and time-integrated in a fully implicit manner. The result is a dynamic simulator with several desirable features: we can model non-trivial deformation using far fewer elements than conventional FEM methods; it is stable under extreme deformation, external force, or large timestep sizes; and we can readily handle various user constraints in Euclidean space. | Modelling Developable Ribbons Using Ruling Bending Coordinates | 10,040 |
This paper deals with the problem of multi-degree reduction of a composite B\'ezier curve with parametric continuity constraints at the endpoints of the segments. We present a novel method which is based on the idea of using constrained dual Bernstein polynomials to compute the control points of the reduced composite curve. In contrast to other methods, ours minimizes the $L_2$-error for the whole composite curve instead of minimizing the $L_2$-errors for each segment separately. As a result, an additional optimization is possible. Examples show that the new method gives much better results than multiple application of the degree reduction of a single B\'ezier curve. | Degree reduction of composite Bézier curves | 10,041 |
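The objective can be sketched as minimizing the total squared $L_2$ distance over all $s$ segments at once, subject to continuity constraints at the joins (a generic statement of the formulation; the paper's exact constraint handling is not reproduced here):

```latex
\min_{\{Q_i\}} \; \sum_{i=1}^{s} \int_{0}^{1} \left\| P_i(t) - Q_i(t) \right\|^{2} \,\mathrm{d}t
\quad \text{s.t.} \quad
Q_i^{(j)}(1) = Q_{i+1}^{(j)}(0), \qquad j = 0,\dots,k,\; i = 1,\dots,s-1,
```

where $P_i$ are the original segments, $Q_i$ the reduced-degree segments, and $k$ the prescribed order of parametric continuity. Coupling the segments in one objective is what distinguishes this from reducing each segment independently.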
In this report we describe a mesh editing system that we implemented, which uses a natural stretching and bending energy defined over smooth surfaces. As such, this energy behaves uniformly under various mesh resolutions. All of the elements of our approach already exist in the literature. We hope that our discussion of these energies helps to shed light on the behaviors of these methods and provides a unified treatment of them. | A Report on Shape Deformation with a Stretching and Bending Energy | 10,042 |
Many problems can be presented in an abstract form through a wide range of binary objects and relations defined over the problem domain. In such problems, graphical representation of the defined binary objects and of solutions is the most suitable approach. In this regard, the graph drawing problem concerns methods for transforming combinatorial graphs into geometric drawings in order to visualize them. This paper studies force-directed algorithms and multi-surface techniques for drawing general undirected graphs. In particular, this research describes a force-directed approach that models the drawing of a general graph as a numerical optimization problem, so that it can draw on the rich body of knowledge established in numerical optimization. Moreover, this research proposes the multi-surface approach as an efficient tool for overcoming local minima in standard force-directed algorithms. Finally, we introduce a new method for the multi-surface approach based on fuzzy clustering algorithms. | Graphs Drawing through Fuzzy Clustering | 10,043 |
Man-made objects usually exhibit descriptive curved features (i.e., curve networks). The curve network of an object conveys its high-level geometric and topological structure. We present a framework for extracting feature curve networks from unstructured point cloud data. Our framework first generates a set of initial curved segments fitting highly curved regions. We then optimize these curved segments to respect both data fitting and structural regularities. Finally, the optimized curved segments are extended and connected into curve networks using a clustering method. To remain effective in cases of severely missing data and to resolve ambiguities, we develop a user interface for completing the curve networks. Experiments on various imperfect point cloud data validate the effectiveness of our curve network extraction framework. We demonstrate the usefulness of the extracted curve networks for surface reconstruction from incomplete point clouds. | Curve Networks for Surface Reconstruction | 10,044 |
In traditional design, shapes are first conceived and then fabricated. While this decoupling simplifies the design process, it can result in inefficient material usage, especially where off-cut pieces are hard to reuse. The designer, in the absence of explicit feedback on material usage, remains unable to effectively adapt the design -- even though design variations exist. In this paper, we investigate {\em waste minimizing furniture design}, wherein, based on the current design, the user is presented with design variations that result in more effective usage of materials. Technically, we dynamically analyze the material space layout to determine {\em which} parts to change and {\em how}, while maintaining the original design intent specified in the form of design constraints. We evaluate the approach on simple and complex furniture design scenarios, and demonstrate effective material usage that is difficult, if not impossible, to achieve without computational support. | Towards Zero-Waste Furniture Design | 10,045 |
In this technical report we derive the analytic form of the Hessian matrix of the shape matching energy. Shape matching is a useful technique for meshless deformation and can easily be combined with multiple techniques in real-time dynamics. Nevertheless, it has rarely been applied in scenarios where implicit integrators are required, and hence the strong viscous damping effect, though popular in today's simulation systems, has been unavailable for shape matching. The reason lies in the difficulty of deriving the Hessian matrix of the shape matching energy. Computing the Hessian matrix correctly and stably is the key to broader application of shape matching in implicitly-integrated systems. | On the Hessian of Shape Matching Energy | 10,046 |
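For context, the energy in question is the standard shape matching energy of Müller et al. [2005] (stated here in its common form; the report's derivation details are not reproduced):

```latex
E(\mathbf{x}) = \sum_{i} m_i \left\| \mathbf{g}_i - \mathbf{x}_i \right\|^{2},
\qquad
\mathbf{g}_i = R\,(\mathbf{x}_i^{0} - \mathbf{c}^{0}) + \mathbf{c},
```

where $\mathbf{x}_i^{0}$ are the rest positions, $\mathbf{c}^{0}$ and $\mathbf{c}$ the rest and current centers of mass, and $R$ the rotation extracted by polar decomposition from the covariance matrix $\sum_i m_i (\mathbf{x}_i-\mathbf{c})(\mathbf{x}_i^{0}-\mathbf{c}^{0})^{\mathsf T}$. The difficulty of the Hessian stems from $R$ depending non-linearly on the current positions $\mathbf{x}$.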
In this paper, a new analogue of the blossom based on post quantum calculus is introduced. The post quantum blossom has been adapted for developing identities and algorithms for Bernstein bases and B\'ezier curves. By applying the post quantum blossom, various new identities and formulae expressing the monomials in terms of the post quantum Bernstein basis functions, as well as a post quantum variant of Marsden's identity, are investigated. For each post quantum B\'ezier curve of degree $m,$ a collection of $m!$ new, affine invariant, recursive evaluation algorithms is derived. | Algorithms and Identities for B\'ezier curves via Post Quantum
Blossom | 10,047 |
We present a new method for real-time physics-based simulation supporting many different types of hyperelastic materials. Previous methods such as Position Based or Projective Dynamics are fast, but support only a limited selection of materials; even classical materials such as the Neo-Hookean elasticity are not supported. Recently, Xu et al. [2015] introduced new "spline-based materials" which can be easily controlled by artists to achieve desired animation effects. Simulation of these types of materials currently relies on Newton's method, which is slow, even with only one iteration per timestep. In this paper, we show that Projective Dynamics can be interpreted as a quasi-Newton method. This insight enables very efficient simulation of a large class of hyperelastic materials, including the Neo-Hookean, spline-based materials, and others. The quasi-Newton interpretation also allows us to leverage ideas from numerical optimization. In particular, we show that our solver can be further accelerated using L-BFGS updates (Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm). Our final method is typically more than 10 times faster than one iteration of Newton's method without compromising quality. In fact, our result is often more accurate than the result obtained with one iteration of Newton's method. Our method is also easier to implement, implying reduced software development costs. | Towards Real-time Simulation of Hyperelastic Materials | 10,048 |
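The L-BFGS acceleration relies on the standard two-loop recursion, sketched below (a textbook implementation for illustration, not the authors' solver):

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: approximates -H^{-1} @ grad from
    the stored position differences s_k and gradient differences y_k
    (histories ordered oldest to newest)."""
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_hist, y_hist)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_hist, y_hist, rhos))):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_hist:
        # scale by gamma = s.y / y.y as the initial Hessian guess
        s, y = s_hist[-1], y_hist[-1]
        q *= s.dot(y) / y.dot(y)
    for (s, y, rho), a in zip(zip(s_hist, y_hist, rhos), reversed(alphas)):
        beta = rho * y.dot(q)
        q += (a - beta) * s
    return -q   # descent direction
```

In the paper's setting, the fixed Projective Dynamics system matrix plays the role of the initial Hessian approximation, which is what makes the quasi-Newton view both fast and accurate.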
This paper covers the whole process of developing an Augmented Reality Stereoscopic Render Engine for the Oculus Rift. To capture the real world in the form of a camera stream, two cameras with fish-eye lenses had to be installed on the Oculus Rift DK1 hardware. The idea was inspired by Steptoe \cite{steptoe2014presence}. After the introduction, a theoretical part covers all the elements necessary to achieve an AR system for the Oculus Rift, followed by an implementation part where the code of the AR Stereo Engine is explained in more detail. A short conclusion section shows some results and reflects on some experiences, and in the final chapter future work is discussed. The project can be accessed via the git repository https://github.com/MaXvanHeLL/ARift.git. | Augmented Reality Oculus Rift | 10,049 |
We present a new method for performing Boolean operations on volumes represented as triangle meshes. In contrast to existing methods which treat meshes as 3D polyhedra and try to partition the faces at their exact intersection curves, we treat meshes as adaptive surfaces which can be arbitrarily refined. Rather than depending on computing precise face intersections, our approach refines the input meshes in the intersection regions, then discards intersecting triangles and fills the resulting holes with high-quality triangles. The original intersection curves are approximated to a user-definable precision, and our method can identify and preserve creases and sharp features. Advantages of our approach include the ability to trade speed for accuracy, support for open meshes, and the ability to incorporate tolerances to handle cases where large numbers of faces are slightly inter-penetrating or near-coincident. | Adaptive Mesh Booleans | 10,050 |
Empirically validating new 3D-printing related algorithms and implementations requires testing data representative of inputs encountered \emph{in the wild}. An ideal benchmarking dataset should not only draw from the same distribution of shapes people print in terms of class (e.g., toys, mechanisms, jewelry), representation type (e.g., triangle soup meshes) and complexity (e.g., number of facets), but should also capture problems and artifacts endemic to 3D printing models (e.g., self-intersections, non-manifoldness). We observe that the contextual and geometric characteristics of 3D printing models differ significantly from those used for computer graphics applications, not to mention standard models (e.g., Stanford bunny, Armadillo, Fertility). We present a new dataset of 10,000 models collected from an online 3D printing model-sharing database. Via analysis of both geometric (e.g., triangle aspect ratios, manifoldness) and contextual (e.g., licenses, tags, classes) characteristics, we demonstrate that this dataset represents a more concise summary of real-world models used for 3D printing compared to existing datasets. To facilitate future research endeavors, we also present an online query interface to select subsets of the dataset according to project-specific characteristics. The complete dataset and per-model statistical data are freely available to the public. | Thingi10K: A Dataset of 10,000 3D-Printing Models | 10,051 |
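One of the geometric checks mentioned, edge-manifoldness, can be sketched as follows (illustrative; not the dataset's actual analysis code):

```python
from collections import Counter

def is_edge_manifold(faces):
    """Return True if every undirected edge of a triangle mesh is used by
    at most two triangles; violations are one common 3D-printing defect."""
    edges = Counter()
    for f in faces:  # each f is a triple of vertex indices
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edges[(min(a, b), max(a, b))] += 1
    return all(count <= 2 for count in edges.values())
```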
We address the problem of texturing flat surfaces by spray-painting through 3D printed stencils. We propose a system that (1) decomposes an image into alpha-blended layers; (2) computes a stippling given a transparency channel; (3) generates a 3D printed stencil given a stippling and (4) simulates the effects of spray-painting through the stencil. | 3D Printed Stencils for Texturing Flat Surfaces | 10,052 |
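Step (2) can be sketched with classic Floyd-Steinberg error diffusion over the transparency channel (a plausible stand-in; the paper's exact stippling method may differ):

```python
import numpy as np

def stipple(alpha):
    """Binarize a [0,1] transparency channel by Floyd-Steinberg error
    diffusion; True entries become hole positions in the stencil."""
    img = alpha.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            new = img[y, x] >= 0.5
            out[y, x] = new
            err = img[y, x] - float(new)
            # push the quantization error onto unvisited neighbours
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out
```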
We present SurfCuit, a novel approach to the design and construction of electric circuits on the surface of 3D prints. Our surface mounting technique allows durable construction of circuits on the surface of 3D prints. SurfCuit does not require tedious circuit casing design or expensive set-ups; thus, we can expedite the process of circuit construction for 3D models. Our technique allows the user to construct complex circuits for consumer-level desktop fused deposition modeling (FDM) 3D printers. The key idea behind our technique is that FDM plastic forms a strong bond with metal when it is melted. This observation enables the construction of robust circuit traces using copper tape and soldering. We also present an interactive tool to design such circuits on arbitrary 3D geometry. We demonstrate the effectiveness of our approach through various actual construction examples. | SurfCuit: Surface Mounted Circuits on 3D Prints | 10,053 |
A method to visualize polytopes in a four-dimensional Euclidean space $(x,y,z,w)$ is proposed. A polytope is sliced by multiple hyperplanes that are parallel to each other and separated by uniform intervals. Since the hyperplanes are perpendicular to the $w$ axis, the resulting slices appear in the three-dimensional $(x,y,z)$ space, where they are shown by standard computer graphics. The polytope is rotated extrinsically in the four-dimensional space by means of a simple input method based on keyboard typing. The multiple slices are placed on a parabolic curve in the three-dimensional world coordinates, and the slices in a view window form an oval arrangement. Both simple and double rotations in the four-dimensional space are applied to the polytope. All slices synchronously change their shapes when a rotation is applied to the polytope. The compact display of many slices in the oval, together with quick rotations, facilitates a grasp of the four-dimensional configuration of the polytope. | A Visualization Method of Four Dimensional Polytopes by Oval Display of
Parallel Hyperplane Slices | 10,054 |
During medical studies, visualization of certain elements is common and indispensable for understanding how they work. Currently, we resort to the use of photographs, which are insufficient because they are static, or to tests on patients, which can be invasive or even risky. Therefore, a low-cost approach based on 3D visualization is proposed. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where students interact using voice and gestures. Our solution, which we call HoloMed, is focused on the projection of a eutocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided into three essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed on several interconnected computers using different network protocols. Tests used for determining the user's position, illumination factors, and response times demonstrate HoloMed's effectiveness as a low-cost system for teaching, using a natural user interface and 3D images. | HoloMed: A Low-Cost Gesture-Based Holographic | 10,055
We present a novel algorithm to control the physically-based animation of smoke. Given a set of keyframe smoke shapes, we compute a dense sequence of control force fields that can drive the smoke shape to match several keyframes at certain time instances. Our approach formulates this control problem as a PDE constrained spacetime optimization and computes locally optimal control forces as the stationary point of the Karush-Kuhn-Tucker conditions. In order to reduce the high complexity of multiple passes of fluid resimulation, we utilize the coherence between consecutive fluid simulation passes and update our solution using a novel spacetime full approximation scheme (STFAS). We demonstrate the benefits of our approach by computing accurate solutions on 2D and 3D benchmarks. In practice, we observe more than an order of magnitude improvement over prior methods. | Efficient Optimal Control of Smoke using Spacetime Multigrid | 10,056 |
We introduce geoplotlib, an open-source Python toolbox for visualizing geographical data. geoplotlib supports the development of hardware-accelerated interactive visualizations in pure Python, and provides implementations of dot maps, kernel density estimation, spatial graphs, Voronoi tessellation, shapefiles and many other common spatial visualizations. We describe geoplotlib's design, functionalities and use cases. | Geoplotlib: a Python Toolbox for Visualizing Geographical Data | 10,057
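As a usage sketch, the canonical dot-map example looks like the following; the CSV file name is a placeholder, and we assume a data file with 'lat' and 'lon' columns:

```python
import geoplotlib
from geoplotlib.utils import read_csv

# 'bus.csv' is a placeholder; any CSV with 'lat' and 'lon' columns works
data = read_csv('bus.csv')
geoplotlib.dot(data)   # add a dot-map layer for the points
geoplotlib.show()      # launch the hardware-accelerated interactive viewer
```

The other layers mentioned in the abstract (kernel density, spatial graphs, Voronoi) follow the same pattern: add one or more layers, then call show().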
Subdivision is a well-known and established method for generating smooth curves and surfaces from discrete data by repeated refinements. The typical input for such a process is a mesh of vertices. In this work we propose to refine 2D data consisting of vertices of a polygon and a normal at each vertex. Our core refinement procedure is based on a circle average, which is a new non-linear weighted average of two points and their corresponding normals. The ability to locally approximate curves by the circle average is demonstrated. With this ability, the circle average is a candidate for modifying linear subdivision schemes refining points, to schemes refining point-normal pairs. This is done by replacing the weighted binary arithmetic means in a linear subdivision scheme, expressed in terms of repeated binary averages, by circle averages with the same weights. Here we investigate the modified Lane-Riesenfeld algorithm and the 4-point scheme. For the case that the initial data consists of a control polygon only, a naive method for choosing initial normals is proposed. An example demonstrates the superiority of the above two modified schemes, with the naive choice of initial normals over the corresponding linear schemes, when applied to a control polygon with edges of significantly different lengths. | A weighted binary average of point-normal pairs with application to
subdivision schemes | 10,058 |
Porous structures such as trabecular bone are widely seen in nature. These structures exhibit superior mechanical properties whilst being lightweight. In this paper, we present a method to generate bone-like porous structures as lightweight infill for additive manufacturing. Our method builds upon and extends voxel-wise topology optimization. In particular, for the purpose of generating sparse yet stable structures distributed in the interior of a given shape, we propose upper bounds on the localized material volume in the proximity of each voxel in the design domain. We then aggregate the local per-voxel constraints by their p-norm into an equivalent global constraint, in order to facilitate an efficient optimization process. Implemented on a high-resolution topology optimization framework, our results demonstrate mechanically optimized, detailed porous structures which mimic those found in nature. We further show variants of the optimized structures subject to different design specifications, and analyze the optimality and robustness of the obtained structures. | Infill Optimization for Additive Manufacturing -- Approaching Bone-like
Porous Structures | 10,059 |
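The localized volume constraints and their p-norm aggregation described above can be sketched concretely as follows; the neighborhood filter, parameter values, and the exact normalization are our assumptions rather than the paper's formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def aggregated_local_volume(rho, radius=3, p=16):
    """p-mean aggregation of per-voxel local volume fractions.

    rho    : 3D array of design densities in [0, 1]
    radius : neighborhood half-width in voxels (assumed value)
    Keeping the returned scalar below the target bound approximates
    bounding every local volume fraction, since the p-mean approaches
    the maximum as p grows.
    """
    size = 2 * radius + 1
    v_local = uniform_filter(rho, size=size, mode='nearest')  # local averages
    return float(np.mean(v_local ** p) ** (1.0 / p))

rho = np.random.rand(32, 32, 32)        # toy density field
print(aggregated_local_volume(rho))
```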
The Position Based Fluids (PBF) method is a state-of-the-art approach for fluid simulations in the context of real-time applications like games. It uses an iterative solver concept that tries to maintain a constant fluid density (incompressibility) to realize incompressible fluids like water. However, larger fluid volumes that consist of several hundred thousand particles (e.g. for the simulation of oceans) require many iterations and a lot of simulation power. We present a lightweight and easy-to-integrate extension to PBF that adaptively adjusts the number of solver iterations on a fine-grained basis. Using a novel adaptive-simulation approach, we are able to achieve significant improvements in performance on our evaluation scenarios while maintaining high-quality results in terms of visualization quality, which makes it a perfect choice for game developers. Furthermore, our method does not weaken the advantages of prior work and seamlessly integrates into other position-based methods for physically-based simulations. | Adaptive Position-Based Fluids: Improving Performance of Fluid
Simulations for Real-Time Applications | 10,060 |
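A minimal sketch of the adaptive solver loop suggested by the abstract above; the particle interface, thresholds, and budget policy are our own placeholders, not the paper's API:

```python
def pbf_density_solve(particles, max_iters, tol):
    """One PBF constraint-projection phase with an adaptive iteration count.

    particles : object exposing density_error() and project_density(),
                placeholders for the usual lambda / position-delta update
    max_iters : iteration budget adapted from the previous frame
    tol       : acceptable average density error
    """
    iters = 0
    while iters < max_iters and particles.density_error() > tol:
        particles.project_density()   # compute lambdas, apply position deltas
        iters += 1
    return iters

def next_budget(iters_used, max_iters, lo=2, hi=16):
    """Raise the budget when it was exhausted, lower it when slack remains."""
    if iters_used >= max_iters:
        return min(hi, max_iters + 1)
    return max(lo, iters_used)
```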
We present a web application for the procedural generation of transformations of 3D models. We generate the transformations by algorithmically generating the vertex shaders of the 3D models. The vertex shaders are created with an interactive genetic algorithm, which displays to the user the visual effect caused by each vertex shader, allows the user to select the visual effect the user likes best, and produces a new generation of vertex shaders using the user feedback as the fitness measure of the genetic algorithm. We use genetic programming to represent each vertex shader as a computer program. This paper presents details of requirements specification, software architecture, high and low-level design, and prototype user interface. We discuss the project's current status and development challenges. | Design and Implementation of a Procedural Content Generation Web
Application for Vertex Shaders | 10,061 |
A wide variety of color schemes have been devised for mapping scalar data to color. Some use the data value to index a color scale. Others assign colors to different, usually blended disjoint materials, to handle areas where materials overlap. A number of methods can map low-dimensional data to color; however, these methods do not scale to higher-dimensional data. Likewise, schemes that take a more artistic approach through color mixing and the like also face limits in the number of variables they can encode. We address the challenge of mapping multivariate data to color while avoiding these limitations. Our data-driven method first gauges the similarity of the attributes and then arranges them along the periphery of a convex 2D color space, such as HSL. The color of a multivariate data sample is then obtained via generalized barycentric coordinate (GBC) interpolation. | A Data-Driven Approach for Mapping Multivariate Data to Color | 10,062
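The final interpolation step lends itself to a small illustration. The sketch below is ours: the paper's similarity-based attribute ordering is omitted, and we assume normalized attribute values act as the barycentric weights on anchors placed around an HSL-style disk:

```python
import colorsys
import numpy as np

def multivariate_to_rgb(values, lightness=0.5):
    """Map one multivariate sample to a color on an HSL disk.

    values : non-negative attribute values, one per variable; attribute k
             is anchored at angle 2*pi*k/n on the disk boundary.
    """
    v = np.asarray(values, dtype=float)
    w = v / v.sum()                              # barycentric weights
    angles = 2 * np.pi * np.arange(len(v)) / len(v)
    x = np.sum(w * np.cos(angles))               # weighted anchor average
    y = np.sum(w * np.sin(angles))
    hue = (np.arctan2(y, x) / (2 * np.pi)) % 1.0  # angle -> hue
    sat = min(1.0, float(np.hypot(x, y)))         # radius -> saturation
    return colorsys.hls_to_rgb(hue, lightness, sat)

print(multivariate_to_rgb([0.9, 0.1, 0.3]))
```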
We present a novel method to interpolate smoke and liquid simulations in order to perform data-driven fluid simulations. Our approach calculates a dense space-time deformation using grid-based signed-distance functions of the inputs. A key advantage of this implicit Eulerian representation is that it allows us to use powerful techniques from the optical flow area. We employ a five-dimensional optical flow solve. In combination with a projection algorithm, and residual iterations, we achieve a robust matching of the inputs. Once the match is computed, arbitrary in between variants can be created very efficiently. To concatenate multiple long-range deformations, we propose a novel alignment technique. Our approach has numerous advantages, including automatic matches without user input, volumetric deformations that can be applied to details around the surface, and the inherent handling of topology changes. As a result, we can interpolate swirling smoke clouds, and splashing liquid simulations. We can even match and interpolate phenomena with fundamentally different physics: a drop of liquid, and a blob of heavy smoke. | Interpolations of Smoke and Liquid Simulations | 10,063 |
This German paper was written entirely at the University of Duisburg-Essen in 2011 for a 3D modeling master's course in applied computer science. We publish this paper so that interested people can acquire a first impression of the topic "volume raycasting". In addition to writing this paper, we developed a functioning open-source OpenCL raycaster. A video of this raycaster is available: http://www.youtube.com/watch?v=VMMsQnf4zEY. Additionally, we archived and published the complete source code of the raycaster in the Google Code Archive: http://code.google.com/p/gputracer/. If this is no longer the case, those who are interested can also write an email to the author so that we can provide the source code. This paper provides an introduction and overview of the topic "volume ray casting with OpenCL". We show how volume data can be loaded, manipulated, and visualized by modern GPUs in real time. In addition, we present basic algorithms and data structures that are necessary for building such a raycaster. Then, we describe how we built a rudimentary raycaster using OpenCL and .NET C#. Furthermore, we analyze different gradient operators (CentralDifference, Sobel3D and Zucker-Hummel) for surface detection and evaluate them with respect to their performance. Finally, we present optimization techniques (hitpoint refinement, adaptive sampling, octrees, and empty-space-skipping) for improving a raycaster. | Volume Raycasting mit OpenCL | 10,064
Sub-surface scattering is key to our perception of translucent materials. Models based on diffusion theory are used to render such materials in a realistic manner by evaluating an approximation of the material BSSRDF at any two points of the surface. Under the assumption of perpendicular incidence, this BSSRDF approximation can be tabulated over 2 dimensions to provide fast evaluation and importance sampling. However, accounting for non-perpendicular incidence with the same approach would require to tabulate over 4 dimensions, making the model too large for practical applications. In this report, we present a method to efficiently evaluate and importance sample the multi-scattering component of diffusion based BSSRDFs for non-perpendicular incidence. Our approach is based on tabulating a compressed angular model of Photon Beam Diffusion. We explain how to generate, evaluate and sample our model. We show that 1 MiB is enough to store a model of the multi-scattering BSSRDF that is within $0.5\%$ relative error of Photon Beam Diffusion. Finally, we present a method to use our model in a Monte Carlo particle tracer and show results of our implementation in PBRT. | Sampling BSSRDFs with non-perpendicular incidence | 10,065 |
Layered manufacturing inherently suffers from staircase defects along surfaces that are gently sloped with respect to the build direction. Reducing the slice thickness improves the situation but never resolves it completely, as flat layers remain a poor approximation of the true surface in these regions. In addition, reducing the slice thickness significantly increases the print time. In this work we focus on a simple yet effective technique to improve the print accuracy for layered manufacturing by filament deposition. Our method works with standard three-axis 3D filament printers (e.g. the typical, widely available 3D printers), using standard extrusion nozzles. It better reproduces the geometry of sloped surfaces without increasing the print time. Our key idea is to perform a local anti-aliasing, working at a sub-layer accuracy to produce slightly curved deposition paths and reduce approximation errors. This is inspired by Computer Graphics anti-aliasing techniques which consider sub-pixel precision to treat aliasing effects. We show that the necessary deviation in height compared to standard slicing is bounded by half the layer thickness. Therefore, the height changes remain small and plastic deposition remains reliable. We further split and order paths to minimize defects due to the extruder nozzle shape, avoiding any change to the existing hardware. We apply and analyze our approach on 3D printed examples, showing that our technique greatly improves surface accuracy and silhouette quality while keeping the print time nearly identical. | Anti-aliasing for fused filament deposition | 10,066
Volumetric cloudscapes are prohibitively expensive to render in real time without extensive optimisations. Previous approaches render the clouds to an offscreen buffer at one quarter resolution and update a fraction of the pixels per frame, drawing the remaining pixels by temporal reprojection. We present an alternative approach, reducing the number of raymarching steps and adding a randomly jittered offset to the raymarch. We use an analytical integration technique to make results consistent with a lower number of raymarching steps. To remove noise from the resulting image we apply a temporal anti-aliasing implementation. The result is a technique producing visually similar results with 1/16 the number of steps. | Optimisations for Real-Time Volumetric Cloudscapes | 10,067 |
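A sketch of how jittered stepping can be combined with per-step analytic transmittance integration; function names, constants, and structure are our assumptions rather than the paper's code:

```python
import math
import random

def march_clouds(density_at, light_at, ray_o, ray_d, t_max, n_steps=32):
    """Jittered raymarch with closed-form per-step scattering integration.

    density_at(p) -> extinction sigma at point p
    light_at(p)   -> in-scattered radiance estimate at p
    The random offset decorrelates banding between pixels; the analytic
    integral keeps results consistent even with few steps.
    """
    ds = t_max / n_steps
    t = random.random() * ds          # jittered start offset
    radiance, transmittance = 0.0, 1.0
    while t < t_max and transmittance > 1e-3:
        p = [o + d * t for o, d in zip(ray_o, ray_d)]
        sigma = density_at(p)
        if sigma > 1e-6:
            s = light_at(p) * sigma
            step_trans = math.exp(-sigma * ds)
            # closed-form integral of s * exp(-sigma * x) over the step
            radiance += transmittance * (s - s * step_trans) / sigma
            transmittance *= step_trans
        t += ds
    return radiance
```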
We present an unsupervised method for the co-segmentation of a set of 3D shapes from the same class, with the aim of segmenting the input shapes into consistent semantic parts and establishing their correspondence across the set. Starting from a meaningful pre-segmentation of each given shape individually, we construct correspondences between candidate parts and obtain labels via functional maps. We then use these labels to mark the input shapes and obtain the co-segmentation results. The core of our algorithm is to seek an optimal correspondence between semantically similar parts through functional maps and to mark such shape parts. Experimental results on benchmark datasets show the efficiency of this method and accuracy comparable to state-of-the-art algorithms. | Unsupervised Co-segmentation of 3D Shapes via Functional Maps | 10,068
We introduce the {\em polygon cloud}, also known as a polygon set or {\em soup}, as a compressible representation of 3D geometry (including its attributes, such as color texture) intermediate between polygonal meshes and point clouds. Dynamic or time-varying polygon clouds, like dynamic polygonal meshes and dynamic point clouds, can take advantage of temporal redundancy for compression, if certain challenges are addressed. In this paper, we propose methods for compressing both static and dynamic polygon clouds, specifically triangle clouds. We compare triangle clouds to both triangle meshes and point clouds in terms of compression, for live captured dynamic colored geometry. We find that triangle clouds can be compressed nearly as well as triangle meshes, while being far more robust to noise and other structures typically found in live captures, which violate the assumption of a smooth surface manifold, such as lines, points, and ragged boundaries. We also find that triangle clouds can be used to compress point clouds with significantly better performance than previously demonstrated point cloud compression methods. In particular, for intra-frame coding of geometry, our method improves upon octree-based intra-frame coding by a factor of 5-10 in bit rate. Inter-frame coding improves this by another factor of 2-5. Overall, our dynamic triangle cloud compression improves over the previous state-of-the-art in dynamic point cloud compression by 33\% or more. | Dynamic Polygon Clouds: Representation and Compression for VR/AR | 10,069 |
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats. | Inverse Diffusion Curves using Shape Optimization | 10,070 |
A new method is presented, allowing for the generation of 3D terrain and texture from coherent noise. The method is significantly faster than prevailing fractal Brownian motion approaches, while producing results of equivalent quality. The algorithm is derived through a systematic approach that generalizes to an arbitrary number of spatial dimensions and gradient smoothness. The results are compared, in terms of performance and quality, to fundamental and efficient gradient noise methods widely used in the domain of fast terrain generation: Perlin noise and OpenSimplex noise. Finally, to objectively quantify the degree of realism of the results, a fractal analysis of the generated landscapes is performed and compared to real terrain data. | Polynomial methods for Procedural Terrain Generation | 10,071
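For context, the gradient-noise baseline the paper compares against can be sketched with Perlin's quintic fade polynomial; the hashing and gradient choices below are our own:

```python
import math
import random

random.seed(0)
perm = list(range(256)); random.shuffle(perm); perm += perm  # hash table
grads = [(math.cos(a), math.sin(a))
         for a in (i * 2 * math.pi / 16 for i in range(16))]

def fade(t):
    # Perlin's quintic polynomial: zero 1st and 2nd derivative at 0 and 1
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad_dot(ix, iy, x, y):
    g = grads[perm[perm[ix & 255] + (iy & 255)] % 16]
    return g[0] * (x - ix) + g[1] * (y - iy)

def noise(x, y):
    x0, y0 = math.floor(x), math.floor(y)
    u, v = fade(x - x0), fade(y - y0)
    n00, n10 = grad_dot(x0, y0, x, y), grad_dot(x0 + 1, y0, x, y)
    n01, n11 = grad_dot(x0, y0 + 1, x, y), grad_dot(x0 + 1, y0 + 1, x, y)
    nx0, nx1 = n00 + u * (n10 - n00), n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)

print(noise(3.7, 1.2))  # deterministic sample value
```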
We present a novel algorithm to compute multi-scale curvature fields on triangle meshes. Our algorithm is based on finding robust mean curvatures using the ball neighborhood, where the radius of a ball corresponds to the scale of the features. The essential problem is to find a good radius for each ball to obtain a reliable curvature estimation. We propose an algorithm that finds suitable radii in an automatic way. In particular, our algorithm is applicable to meshes produced by image-based reconstruction systems. These meshes often contain geometric features at various scales, for example if certain regions have been captured in greater detail. We also show how such a multi-scale curvature field can be converted to a density field and used to guide applications like mesh simplification. | Simplification of Multi-Scale Geometry using Adaptive Curvature Fields | 10,072 |
The problem of mesh matching is addressed in this work. For a given n-sided planar region bounded by one loop of n polylines, we select an optimal quadrilateral mesh from an existing catalogue of meshes. The matching between a planar shape and a quadrilateral mesh from the catalogue is formulated as the problem of finding a longest common subsequence (LCS). A theoretical foundation of the mesh matching method is provided. The suggested method represents a viable technique for selecting the best mesh for a planar region and a stepping stone for further parametrization of the region. | Selecting the Best Quadrilateral Mesh for Given Planar Shape | 10,073
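The LCS formulation at the core of the matching is the textbook dynamic program; a minimal sketch follows, where encoding boundaries as symbol sequences is our assumption:

```python
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic program for the longest common
    subsequence. Here a and b would be discretized boundary codes of the
    planar shape and of a catalogue mesh (the encoding is ours).
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def best_mesh(shape_code, catalogue):
    """Score every catalogue mesh against the shape, keep the best match."""
    return max(catalogue, key=lambda mesh: lcs_length(shape_code, mesh))
```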
A new taxonomy of issues related to CAD model quality is presented, which distinguishes between explicit and procedural models. For each type of model, morphologic, syntactic, and semantic errors are characterized. The taxonomy was validated successfully when used to classify quality testing tools, which are aimed at detecting and repairing data errors that may affect the simplification, interoperability, and reusability of CAD models. The study shows that low semantic level errors that hamper simplification are reasonably covered in explicit representations, although many CAD quality testers are still unaffordable for Small and Medium Enterprises, both in terms of cost and training time. Interoperability has been reasonably solved by standards like STEP AP 203 and AP214, but model reusability is not feasible in explicit representations. Procedural representations are promising, as interactive modeling editors automatically prevent most morphologic errors derived from unsuitable modeling strategies. Interoperability problems between procedural representations are expected to decrease dramatically with STEP AP242. Higher semantic aspects of quality such as assurance of design intent, however, are hardly supported by current CAD quality testers. | A Survey on 3D CAD model quality assurance and testing tools | 10,074 |
Fractal image generation algorithms exhibit extreme parallelizability. Using general purpose graphics processing unit (GPU) programming to implement escape-time algorithms for Julia sets of functions, parallel methods generate visually attractive fractal images much faster than traditional methods. Vastly improved speeds are achieved using this method of computation, allowing real-time generation and display of images. A comparison is made between sequential and parallel implementations of the algorithm. An application created by the authors demonstrates using the increased speed to create dynamic imaging of fractals, where the user may explore paths of parameter values corresponding to a given function's Mandelbrot set. Examples are given of artistic and mathematical insights gained by experiencing fractals interactively and from the ability to sample the parameter space quickly and comprehensively. | Fractal Art Generation using GPUs | 10,075
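A CPU stand-in for the escape-time kernel makes explicit the per-pixel independence that the GPU exploits; numpy replaces the GPU here, and resolution and parameters are ours:

```python
import numpy as np

def julia_escape(c, width=800, height=600, max_iter=256, bound=2.0):
    """Escape-time counts for the Julia set of f(z) = z^2 + c.

    Every pixel iterates independently, which is what makes the GPU
    version embarrassingly parallel.
    """
    xs = np.linspace(-2.0, 2.0, width)
    ys = np.linspace(-1.5, 1.5, height)
    z = xs[None, :] + 1j * ys[:, None]       # one complex seed per pixel
    counts = np.zeros(z.shape, dtype=np.int32)
    alive = np.ones(z.shape, dtype=bool)
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c         # iterate only surviving pixels
        escaped = alive & (np.abs(z) > bound)
        alive &= ~escaped                    # freeze escaped pixels
        counts[alive] += 1
    return counts

img = julia_escape(-0.8 + 0.156j)            # classic Julia parameter
```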
We apply a novel optimization scheme from the image processing and machine learning areas, a fast Primal-Dual method, to achieve controllable and realistic fluid simulations. While our method is generally applicable to many problems in fluid simulations, we focus on the two topics of fluid guiding and separating solid-wall boundary conditions. Each problem is posed as an optimization problem and solved using our method, which contains acceleration schemes tailored to each problem. In fluid guiding, we are interested in partially guiding fluid motion to exert control while preserving fluid characteristics. With our method, we achieve explicit control over both large-scale motions and small-scale details which is valuable for many applications, such as level-of-detail adjustment (after running the coarse simulation), spatially varying guiding strength, domain modification, and resimulation with different fluid parameters. For the separating solid-wall boundary conditions problem, our method effectively eliminates unrealistic artifacts of fluid crawling up solid walls and sticking to ceilings, requiring few changes to existing implementations. We demonstrate the fast convergence of our Primal-Dual method with a variety of test cases for both model problems. | Primal-Dual Optimization for Fluids | 10,076 |
Immersive, stereoscopic viewing enables scientists to better analyze the spatial structures of visualized physical phenomena. However, their findings cannot be properly presented in traditional media, which lack these core attributes. Creating a presentation tool that captures this environment poses unique challenges, namely related to poor viewing accessibility. Immersive scientific renderings often require high-end equipment, which can be impractical to obtain. We address these challenges with our authoring tool and navigational interface, which is designed for affordable head-mounted displays. With the authoring tool, scientists can show salient data features as connected 360{\deg} video paths, resulting in a "choose-your-own-adventure" experience. Our navigational interface features bidirectional video playback for added viewing control when users traverse the tailor-made content. We evaluate our system's benefits by authoring case studies on several data sets and conducting a usability study on the navigational interface's design. In summary, our approach provides scientists an immersive medium to visually present their research to the intended audience--spanning from students to colleagues--on affordable virtual reality headsets. | Navigable videos for presenting scientific data on head-mounted displays | 10,077 |
We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations are used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy. | A Qualitative and Quantitative Evaluation of 8 Clear Sky Models | 10,078
The EditLens is an interactive lens technique that supports the editing of graphs. The user can insert, update, or delete nodes and edges while maintaining an already existing layout of the graph. For the nodes and edges that are affected by an edit operation, the EditLens suggests suitable locations and routes, which the user can accept or adjust. For this purpose, the EditLens requires an efficient routing algorithm that can compute results at interactive framerates. Existing algorithms cannot fully satisfy the needs of the EditLens. This paper describes a novel algorithm that can compute orthogonal edge routes for incremental edit operations of graphs. Tests indicate that, in general, the algorithm is better than alternative solutions. | Orthogonal Edge Routing for the EditLens | 10,079 |
In this manuscript, inspired by a simpler reformulation of primary sample space Metropolis light transport, we derive a novel family of general Markov chain Monte Carlo algorithms called charted Metropolis-Hastings, which introduces the notion of sampling charts to extend a given sampling domain, making it easier to sample the desired target distribution and to escape from local maxima through coordinate changes. We further apply the novel algorithms to light transport simulation, obtaining a new type of algorithm called charted Metropolis light transport, which can be seen as a bridge between primary sample space and path space Metropolis light transport. The new algorithms require only right inverses of the sampling functions to be provided, a property we believe crucial to making them practical in the context of light transport simulation. We further propose a method to integrate density estimation into this framework through a novel scheme that uses it as an independence sampler. | Charted Metropolis Light Transport | 10,080
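For orientation, the plain primary-sample-space Metropolis loop that such charted variants build on can be sketched as follows; the chart/coordinate-change machinery itself is not shown, and all names are ours:

```python
import random

def pss_metropolis(f, dim, n_samples, sigma=0.05):
    """Plain primary-sample-space Metropolis sampling of a target f.

    f maps a vector u in [0,1)^dim (the 'random numbers' driving a
    sampling technique) to a non-negative contribution.
    """
    u = [random.random() for _ in range(dim)]
    fu, out = f(u), []
    for _ in range(n_samples):
        # small wrapped Gaussian mutation of the primary sample
        v = [(x + random.gauss(0.0, sigma)) % 1.0 for x in u]
        fv = f(v)
        if fu == 0.0 or random.random() < min(1.0, fv / fu):
            u, fu = v, fv                    # Metropolis acceptance
        out.append(list(u))
    return out
```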
This paper proposes a shoulder inverse kinematics (IK) technique. The shoulder complex is comprised of the sternum, clavicle, ribs, scapula, humerus, and four joints. The shoulder complex shows specific motion patterns, such as the scapulohumeral rhythm. As a result, if a shoulder motion is generated without knowledge of kinesiology, it will appear unnatural. The proposed technique generates motion of the shoulder complex from the orientation of the upper arm by interpolating measurement data. The shoulder IK method allows novice animators to generate natural shoulder motions easily. As a result, this technique improves the quality of character animation. | Data-driven Shoulder Inverse Kinematics | 10,081
This article introduces a new notion of optimal transport (OT) between tensor fields, which are measures whose values are positive semidefinite (PSD) matrices. This "quantum" formulation of OT (Q-OT) corresponds to a relaxed version of the classical Kantorovich transport problem, where the fidelity between the input PSD-valued measures is captured using the geometry of the Von-Neumann quantum entropy. We propose a quantum-entropic regularization of the resulting convex optimization problem, which can be solved efficiently using an iterative scaling algorithm. This method is a generalization of the celebrated Sinkhorn algorithm to the quantum setting of PSD matrices. We extend this formulation and the quantum Sinkhorn algorithm to compute barycenters within a collection of input tensor fields. We illustrate the usefulness of the proposed approach on applications to procedural noise generation, anisotropic meshing, diffusion tensor imaging and spectral texture synthesis. | Quantum Optimal Transport for Tensor Field Processing | 10,082 |
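The scalar algorithm being generalized here is the standard entropic-regularization Sinkhorn loop; a minimal sketch with toy marginals (our own) follows. Per the abstract, the quantum variant replaces these scalar scalings with PSD-matrix analogues:

```python
import numpy as np

def sinkhorn(mu, nu, C, epsilon=0.05, n_iters=500):
    """Classical Sinkhorn iterations for entropy-regularized optimal
    transport between histograms mu and nu with cost matrix C.
    """
    K = np.exp(-C / epsilon)               # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)                   # alternate marginal scalings
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]     # transport plan

n = 50
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2         # squared-distance cost
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.01); nu /= nu.sum()
P = sinkhorn(mu, nu, C)                    # couples the two toy histograms
```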
Exploring and editing colors in images is a common task in graphic design and photography. However, allowing for interactive recoloring while preserving smooth color blends in the image remains a challenging problem. We present LayerBuilder, an algorithm that decomposes an image or video into a linear combination of colored layers to facilitate color-editing applications. These layers provide an interactive and intuitive means for manipulating individual colors. Our approach reduces color layer extraction to a fast iterative linear system. LayerBuilder uses locally linear embedding, which represents pixels as linear combinations of their neighbors, to reduce the number of variables in the linear solve and extract layers that better preserve color blending effects. We demonstrate our algorithm on recoloring a variety of images and videos, and show its overall effectiveness in recoloring quality and time complexity compared to previous approaches. We also show how this representation can benefit other applications, such as automatic recoloring suggestion, texture synthesis, and color-based filtering. | LayerBuilder: Layer Decomposition for Interactive Image and Video Color
Editing | 10,083 |
This paper presents Poisson vector graphics, an extension of the popular first-order diffusion curves, for generating smooth-shaded images. Armed with two new types of primitives, namely Poisson curves and Poisson regions, PVG can easily produce photorealistic effects such as specular highlights, core shadows, translucency and halos. Within the PVG framework, users specify color as the Dirichlet boundary condition of diffusion curves and control tone by offsetting the Laplacian, where both controls are done simply by mouse clicks and slider dragging. The separation of color and tone not only follows the basic drawing principle widely adopted by professional artists, but also brings three unique features to PVG, i.e., local hue change, ease of extrema control, and support for intersections among geometric primitives, making PVG an ideal authoring tool. To render PVG, we develop an efficient method to solve 2D Poisson's equations with piecewise constant Laplacians. In contrast to the conventional finite element method, which computes numerical solutions only, our method expresses the solution using harmonic B-splines, whose basis functions can be constructed locally, with the control coefficients obtained by solving a small sparse linear system. Our closed-form solver is numerically stable and supports random-access evaluation, zooming-in at arbitrary resolution, and anti-aliasing. Although the harmonic B-spline based solutions are approximate, computational results show that the relative mean error is less than 0.3%, which cannot be distinguished by the naked eye. | Poisson Vector Graphics (PVG) and Its Closed-Form Solver | 10,084
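For intuition about the rendering step, a finite-difference stand-in for the Poisson solve can be sketched as below; this Jacobi relaxation and grid setup are our own, not the paper's closed-form harmonic B-spline solver:

```python
import numpy as np

def solve_poisson(colors, mask, laplacian, n_iters=2000):
    """Jacobi relaxation for a 2D Poisson equation on a pixel grid.

    colors    : grid with Dirichlet values already written on the boundary
    mask      : True where the solution is unknown (interior pixels only)
    laplacian : piecewise-constant right-hand side (the 'tone' offset);
                all zeros recovers ordinary diffusion-curve filling.
    """
    u = colors.astype(float).copy()
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        # discrete Poisson update: u = neighbor average - f/4 (unit spacing)
        u[mask] = avg[mask] - 0.25 * laplacian[mask]
    return u
```

For RGB images the same solve would run once per channel; the interior mask keeps the boundary pixels fixed as Dirichlet data.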
In this paper a new system for piecewise primitive surface recovery on point clouds is presented, which allows a novice user to sketch areas of interest in order to guide the fitting process. The algorithm is demonstrated against a benchmark technique for autonomous surface fitting and contrasted with the existing literature on user-guided surface recovery, supported by empirical evidence. We conclude that the system improves on the documented literature in its visual quality when modelling objects composed of piecewise primitive shapes, and in its ability to fill large holes on occluded surfaces using free-form input. | User-guided free-form asset modelling | 10,085
Photographers routinely compose multiple manipulated photos of the same scene (layers) into a single image, which is better than any individual photo could be alone. Similarly, 3D artists set up rendering systems to produce layered images in which each layer contains only an individual aspect of the light transport, and these are composed into the final result in post-production. Regrettably, both approaches either take considerable time to capture, or remain limited to synthetic scenes. In this paper, we suggest a system that allows decomposing a single image into a plausible shading decomposition (PSD) that approximates effects such as shadow, diffuse illumination, albedo, and specular shading. This decomposition can then be manipulated in any off-the-shelf image manipulation software and recomposited back. We perform such a decomposition by learning a convolutional neural network trained using synthetic data. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use it for common photo manipulations that would otherwise be nearly impossible to perform on single images. | Plausible Shading Decomposition For Layered Photo Retouching | 10,086
Dealing with visualizations containing large data sets is a challenging issue and, in the field of Information Visualization, almost every visual technique reveals its drawbacks when visualizing a large number of items. To deal with this problem we introduce a formal environment, modeling in a virtual space the image features we are interested in (e.g., absolute and relative density, clusters, etc.), and we define some metrics able to characterize the image decay. Such metrics drive our automatic techniques (i.e., non-uniform sampling), rescuing the image features and making them visible to the user. In this paper we focus on 2D scatter-plots, devising a novel non-uniform data sampling strategy able to preserve relative densities in an effective way. | By chance is not enough: Preserving relative density through non uniform
sampling | 10,087 |
This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) have shown that approximately 60% of the time is spent on solving the computational problem, while the remaining part (40%) is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. A study of how increasing the GPU grid size affects computation speed showed the importance of correctly structuring the parallel computing network and of the general parallelization mechanism. Keywords: Volumetric 3D visualization, stereo 3D visualization, ray tracing, parallel computing on GPU, CUDA | Ray tracing method for stereo image synthesis using CUDA | 10,088
The article presents a general concept for organizing pseudo-3D visualization of graphics and video content for 3D visualization systems. The steps of algorithms for synthesizing 3D stereo images from 2D images are introduced. The features of organizing the synthesis of a standard-format 3D stereo frame are presented. Moreover, experimental simulations for generating complete stereo frames were performed, and the results of their time complexity are shown. Keywords: Three dimension visualization, pseudo three dimension stereo, a stereo pair, three dimension stereo format, algorithm, modeling, time complexity. | Conceptual and algorithmic development of Pseudo 3D Graphics and Video
Content Visualization | 10,089 |
A general concept of 3D volumetric visualization systems is described, based on the representation of 3D discrete voxel scenes (worlds). Definitions of the basic elements of a 3D discrete voxel scene (world) and the main steps of the image synthesis algorithm are formulated. An algorithm for synthesizing 3D images of the voxelized world, intended for volumetric spatial visualization systems, is proposed. A computer-based architecture for 3D volumetric visualization of a 3D discrete voxel world is presented. On the basis of the proposed overall concept of discrete voxel representation, the proposed architecture successfully adapts the ray tracing technique to the synthesis of 3D volumetric images. Since it is algorithmically simple and effectively supports parallelism, it can be implemented efficiently. Keywords: Volumetric spatial visualization, 3D volumetric image synthesis, discrete voxel world, ray tracing. | Generalized 3D Voxel Image Synthesis Architecture for Volumetric Spatial
Visualization | 10,090 |
This paper presents the three scripting commands and main functionalities of a novel character animation environment called CHASE. CHASE was developed to enable inexperienced programmers, animators, artists, and students to animate virtual reality characters in meaningful ways. This is achieved by scripting simple commands within CHASE. The commands, which are associated with simple parameters, are responsible for generating a number of predefined motions and actions of a character. Hence, the virtual character is able to animate within a virtual environment and to interact with tasks located within it. CHASE additionally provides the ability to assign multiple tasks to a character, giving the user the ability to generate scenario-related animated sequences. Moreover, since multiple characters may require simultaneous animation, the ability to script actions of different characters at the same time is also provided. | Towards Developing an Easy-To-Use Scripting Environment for Animating
Virtual Characters | 10,091 |
We present here the first systematic treatment of the problems posed by the visualization and analysis of large-scale, parallel adaptive mesh refinement (AMR) simulations on an Eulerian grid. When compared to those obtained by constructing an intermediate unstructured mesh with fully described connectivity, our primary results indicate a gain of at least 80\% in terms of memory footprint, with a better rendering while retaining similar execution speed. In this article, we describe the key concepts that allow us to obtain these results, together with the methodology that facilitates the design, implementation, and optimization of algorithms operating directly on such refined meshes. This native support for AMR meshes has been contributed to the open source Visualization Toolkit (VTK). This work pertains to a broader long-term vision, with the dual goal to both improve interactivity when exploring such data sets in 2 and 3 dimensions, and optimize resource utilization. | Visualization and Analysis of Large-Scale, Tree-Based, Adaptive Mesh
Refinement Simulations with Arbitrary Rectilinear Geometry | 10,092 |
We present here the result of continuation work, performed to further fulfill the vision we outlined in [Harel,Lekien,P\'eba\"y-2017] for the visualization and analysis of tree-based adaptive mesh refinement (AMR) simulations, using the hypertree grid paradigm which we proposed. The first filter presented hereafter implements an adaptive approach in order to accelerate the rendering of 2-dimensional AMR grids, thereby solving the problem posed by the loss of interactivity that occurs when dealing with large and/or deeply refined meshes. Specifically, view parameters are taken into account in order to, on the one hand, avoid creating surface elements that are outside of the view area and, on the other hand, utilize level-of-detail properties to cull those cells that are deemed too small to be visible with respect to the given view parameters. This adaptive approach often results in a massive increase in rendering performance. In addition, two new selection filters provide data analysis capabilities by allowing for the extraction of those cells within a hypertree grid that are deemed relevant in some sense, either geometrically or topologically. After a description of these new algorithms, we illustrate their use within the Visualization Toolkit (VTK), in which we implemented them. This note ends with some suggestions for subsequent work. | Two New Contributions to the Visualization of AMR Grids: I. Interactive
Rendering of Extreme-Scale 2-Dimensional Grids II. Novel Selection Filters in
Arbitrary Dimension | 10,093 |
A challenge in isogeometric analysis is constructing analysis-suitable volumetric meshes which can accurately represent the geometry of a given physical domain. In this paper, we propose a method to derive a spline-based representation of a domain of interest from voxel-based data. We show an efficient way to obtain a boundary representation of the domain by a level-set function. Then, we use the geometric information from the boundary (the normal vectors and curvature) to construct a matching C1 representation with hierarchical cubic splines. The approximation is done by a single template and linear transformations (scaling, translations and rotations) without the need for solving an optimization problem. We illustrate our method with several examples in two and three dimensions, and show good performance on some standard benchmark test problems. | Volumetric parametrization from a level set boundary representation with
PHT Splines | 10,094 |
With virtual reality, digital painting on 2D canvases is now being extended to 3D spaces. Tilt Brush and Oculus Quill are widely accepted among artists as tools that pave the way to a new form of art - 3D immersive painting. Current 3D painting systems are only a start, emitting textured triangular geometries. In this paper, we advance this new art of 3D painting to 3D volumetric painting, which enables an artist to draw a huge scene with full control of spatial color fields. Inspired by the fact that 2D paintings often use vast space for the background and small but detailed space for the foreground, we claim that supporting a large canvas in varying detail is essential for 3D painting. In order to help artists focus and audiences navigate the large canvas space, we provide small artist-defined areas, called rooms, that serve as beacons for artist-suggested scales, spaces, and locations for the intended appreciation view of the painting. Artists and audiences can easily transport themselves between different rooms. Technically, our canvas is represented as an array of deep octrees of depth 24 or higher, built on the CPU for volume painting and on the GPU for volume rendering using accurate ray casting. On the CPU side, we design an efficient iterative algorithm to refine or coarsen the octree, as a result of volumetric painting strokes, at highly interactive rates, and update the corresponding GPU textures. We then use GPU-based ray casting algorithms to render the volumetric painting result. We explore precision issues stemming from ray-casting octrees of high depth, and provide a new analysis and verification. From our experimental results as well as the positive feedback from the participating artists, we strongly believe that our new 3D volume painting system can open up new possibilities for a VR-driven digital art medium for professional artists as well as novice users. | CanvoX: High-resolution VR Painting in Large Volumetric Canvas | 10,095
This invited talk will present recent projection mapping technologies for augmented reality. First, fundamental technologies are briefly explained, which have been proposed to overcome the technical limitations of ordinary projectors. Second, augmented reality (AR) applications using projection mapping technologies are introduced. | Projection Mapping Technologies for AR | 10,096 |
Surface reconstruction from an unorganized point cloud is an important problem due to its widespread applications. White noise, possibly clustered outliers, and noisy perturbation may be generated when a point cloud is sampled from a surface. Most existing methods handle limited amount of noise. We develop a method to denoise a point cloud so that the users can run their surface reconstruction codes or perform other analyses afterwards. Our experiments demonstrate that our method is computationally efficient and it has significantly better noise handling ability than several existing surface reconstruction codes. | Denoising a Point Cloud for Surface Reconstruction | 10,097 |
We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample space and their interactions with multiple sampling techniques. We observe that incorporating the sampling technique into the state of the Markov Chain, as done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of the chain to properly explore the path space, as transitions between sampling techniques lead to disruptive alterations of path samples. To address this issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC framework (RJMCMC) and introduce inverse sampling techniques that turn light paths into the random numbers that would produce them. This allows us to formulate a novel perturbation that can locally transition between sampling techniques without changing the geometry of the path, and we derive the correct acceptance probability using RJMCMC. We investigate how to generalize this concept to non-invertible sampling techniques commonly found in practice, and introduce probabilistic inverses that extend our perturbation to cover most sampling methods found in light transport simulations. Our theory reconciles the inverses with RJMCMC yielding an unbiased algorithm, which we call Reversible Jump MLT (RJMLT). We verify the correctness of our implementation in canonical and practical scenarios and demonstrate improved temporal coherence, decrease in structured artifacts, and faster convergence on a wide variety of scenes. | Reversible Jump Metropolis Light Transport using Inverse Mappings | 10,098 |
A conformal flattening maps a curved surface to the plane without distorting angles---such maps have become a fundamental building block for problems in geometry processing, numerical simulation, and computational design. Yet existing methods provide little direct control over the shape of the flattened domain, or else demand expensive nonlinear optimization. Boundary first flattening (BFF) is a linear method for conformal parameterization which is faster than traditional linear methods, yet provides control and quality comparable to sophisticated nonlinear schemes. The key insight is that the boundary data for many conformal mapping problems can be efficiently constructed via the Cherrier formula together with a pair of Poincare-Steklov operators; once the boundary is known, the map can be easily extended over the rest of the domain. Since computation demands only a single factorization of the real Laplace matrix, the amortized cost is about 50x less than any previously published technique for boundary-controlled conformal flattening. As a result, BFF opens the door to real-time editing or fast optimization of high-resolution maps, with direct control over boundary length or angle. We show how this method can be used to construct maps with sharp corners, cone singularities, minimal area distortion, and uniformization over the unit disk; we also demonstrate for the first time how a surface can be conformally flattened directly onto any given target shape. | Boundary First Flattening | 10,099 |