diff --git "a/20240318/2310.04152v2.json" "b/20240318/2310.04152v2.json" new file mode 100644--- /dev/null +++ "b/20240318/2310.04152v2.json" @@ -0,0 +1,562 @@ +{ + "title": "Improving Neural Radiance Fields Using Near-Surface Sampling With Point Cloud Generation", + "abstract": "Neural radiance field (NeRF) is an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities.\nThe disadvantage of NeRF is that it requires a long training time since it samples many 3D points.\nIn addition, if one samples points from occluded regions or in the space where an object is unlikely to exist,\nthe rendering quality of NeRF can be degraded.\nThese issues can be solved by estimating the geometry of 3D scene.\nThis paper proposes a near-surface sampling framework to improve the rendering quality of NeRF.\nTo this end, the proposed method estimates the surface of a 3D object using depth images of the training set and performs sampling only near the estimated surface.\nTo obtain depth information on a novel view, the paper proposes a 3D point cloud generation method and a simple refining method for projected depth from a point cloud.\nExperimental results show that the proposed near-surface sampling NeRF framework can significantly improve the rendering quality,\ncompared to the original NeRF and three different state-of-the-art NeRF methods.\nIn addition, one can significantly accelerate the training time of a NeRF model with the proposed near-surface sampling framework.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recently, metaverse and virtual reality applications are rapidly drawing attention.\nIn such applications, it is important to generate novel views accurately.\nOne way to achieve this goal is to generate a three-dimensional (3D) model first and follow a conventional rendering pipeline [1 ###reference_b1###].\nHowever, 
generating a 3D model requires substantial time and effort.\nImage-based rendering (IBR) is another approach that generates novel views without explicitly generating a 3D model.\nSeveral methods generate a novel view using image morphing [4 ###reference_b4###].\nThe Layered Depth Images method [24 ###reference_b24###] stores multiple depth and color values for each pixel to effectively fill the hole behind the foreground object in a novel view.\nLight fields [11 ###reference_b11###] and Lumigraph [7 ###reference_b7###] that express light rays as a function were also proposed.\nRecently, among IBR methods, neural radiance field (NeRF) [17 ###reference_b17###] has been rapidly gaining attention.\nA ray, a core concept of NeRF, is a straight line cast from the camera position toward an object.\nA NeRF network predicts the color and density of each point using 3D points sampled along each ray.\nThen a novel view is obtained by performing a line integral using these colors and densities.\nThe original NeRF [17 ###reference_b17###] performs sampling within a range that includes the entire 3D object.\nThis paper proposes to use depth information to sample 3D points only around the surface of an object in NeRF,\nwhere we consider the practical scenario that depth information is only available at hand (from depth cameras) in a training dataset.\nTo consider that measured/estimated depth maps may be inaccurate due to capturing environments,\nwe propose to generate a 3D point cloud using available (inaccurate) depth information in training,\nand to use this 3D point cloud to estimate a depth image for each novel view in test (i.e., inference).\nFigure 1 ###reference_### illustrates a brief overview of the proposed NeRF framework.\nSimply projecting a 3D point cloud onto a novel view generates a rather rough depth image.\nTo obtain more accurate depth images, we additionally propose a refining method that removes unnecessary 3D points in generating a point cloud and fills the 
hole of the projected depth image.\nSimply put, to improve NeRF, the paper proposes an advanced sampling method around the surface of an object/a scene using estimated depth images from a generated point cloud.\nOur experimental results with different datasets demonstrate that the proposed framework outperforms\noriginal NeRF and three different state-of-the-art NeRF methods.\n###figure_1### The rest of the paper is organized as follows.\nSection 2 reviews NeRF and its follow-up works, focusing on those closely related to ours, and presents differences between the proposed NeRF and existing depth-based NeRFs.\nSection 3 provides the motivation and details of the proposed method,\nSection 4 reports experiments and analysis, and\nSection 5 discusses conclusions, limitations and future work." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "NeRF", + "text": "NeRF [17 ###reference_b17###] is a state-of-the-art view synthesis technology that samples points on rays and synthesizes views through differentiable volume rendering.\nThe input of this algorithm is a single continuous five-dimensional (5D) coordinate consisting of a 3D spatial location and a two-dimensional viewing direction.\nThe output is a volume density and view-dependent emitted radiance at the corresponding spatial location.\nIn other words,\nthe key idea of NeRF is to train a neural network that predicts a view-dependent color value and a volume probability value by taking a 5D coordinate.\nUsing those two predicted values, a final rendered color value is determined by performing a line integral with classical volume rendering.\nTo further improve the rendering quality, NeRF uses the following two techniques: positional encoding and hierarchical volume sampling.\nPositional encoding increases the dimension of input data; the hierarchical volume sampling technique allocates more 
samples to regions that are expected to include visible content.\nHierarchical volume sampling is so named because it performs sampling with two different networks, a \u201ccoarse\u201d one and a \u201cfine\u201d one.\nFor each ray, a coarse network gives a view-dependent emitted color and volume density using points that are sampled with a stratified sampling method along the ray.\nA piecewise-constant probability density function (PDF) is generated (along each ray) by normalizing contribution weights that are calculated with volume densities and the distances between adjacent samples of points.\nAfter integrating the generated PDF to calculate the cumulative distribution function, points are sampled through inverse transform sampling.\nA fine network gives a view-dependent color value and volume density using both the initial points and these more informed points.\nFinally, one calculates the final rendering of the corresponding ray with all these points.\nThrough this process, NeRF can represent a 3D object (in 360 degrees) and forward-facing scenes with continuous views.\nHowever, NeRF in its original form has several limitations.\nFor example, it can represent only static scenes;\nits training and inference are slow;\none NeRF network represents only one object/scene." 
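The volume-rendering quadrature and the inverse-transform resampling step described above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; all function and variable names are our own.

```python
import numpy as np

def volume_render(colors, sigmas, deltas):
    """Classical volume-rendering quadrature along one ray.
    colors: (N, 3) per-sample colors, sigmas: (N,) volume densities,
    deltas: (N,) distances between adjacent samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # accumulated transmittance
    weights = trans * alphas                                         # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)                    # final pixel color
    return rgb, weights

def sample_pdf(bin_edges, weights, n_fine, rng):
    """Inverse transform sampling from the piecewise-constant PDF defined
    by per-bin weights (the 'fine' stage of hierarchical volume sampling)."""
    pdf = weights / weights.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_fine)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(weights) - 1)
    denom = np.where(pdf[idx] > 0, pdf[idx], 1.0)
    frac = (u - cdf[idx]) / denom                  # linear placement inside the chosen bin
    return bin_edges[idx] + frac * (bin_edges[idx + 1] - bin_edges[idx])
```

With this resampling, bins whose coarse weights are large receive proportionally more fine samples, which is the intended behavior of hierarchical volume sampling.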
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Follow-up works of NeRF", + "text": "Researchers have been improving the original NeRF model [17 ###reference_b17###] in various aspects.\nThe first aspect is to reduce training time of NeRF models while maintaining rendering accuracy [9 ###reference_b9###, 5 ###reference_b5###, 27 ###reference_b27###, 18 ###reference_b18###].\n[9 ###reference_b9###]\nreduces training time by proposing a new sampling method to use fewer samples per ray.\n[5 ###reference_b5###]\nsupervises depth to use a smaller number of views in training.\n[27 ###reference_b27###] can accelerate training by quickly generating an initial rough point cloud and refining it in an iterative manner.\n[18 ###reference_b18###] uses a learnable encoding method instead of positional encoding, and updates only the parameters related to sampling positions instead of updating all parameters.\nThe second aspect is to improve inference time of NeRF models [23 ###reference_b23###, 14 ###reference_b14###, 13 ###reference_b13###, 19 ###reference_b19###, 18 ###reference_b18###].\n[23 ###reference_b23###] and [14 ###reference_b14###] reduce inference time by spatially decomposing and processing the scene:\n[23 ###reference_b23###] uses a spatially decomposed scene and a small network for each space;\n[14 ###reference_b14###] skips spaces with irrelevant scenes among the decomposed spaces during inference.\n[13 ###reference_b13###] uses a volume integral calculation network instead of the classical integral calculation method to shorten inference.\n[19 ###reference_b19###] uses a rendering pipeline that includes a network to predict the optimal sample locations on rays to reduce inference time.\nUsing a learnable encoding method instead of positional encoding,\n[18 ###reference_b18###] accelerates inference.\nThe third aspect is to consider different scenarios with NeRF models [29 ###reference_b29###, 12 ###reference_b12###, 26 
###reference_b26###, 20 ###reference_b20###, 10 ###reference_b10###, 21 ###reference_b21###, 22 ###reference_b22###, 25 ###reference_b25###, 2 ###reference_b2###, 16 ###reference_b16###].\n[29 ###reference_b29###] additionally estimates camera pose.\n[12 ###reference_b12###] considers the case where camera poses are imperfect or unknown.\n[26 ###reference_b26###, 20 ###reference_b20###, 10 ###reference_b10###] consider multi-object/scene representation.\nIn particular,\n[26 ###reference_b26###] disentangles foreground and background.\nDynamic scene representation [21 ###reference_b21###, 22 ###reference_b22###] and relighting [25 ###reference_b25###, 2 ###reference_b2###, 16 ###reference_b16###] make NeRF applicable to changing scenes rather than static scenes." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Depth-based NeRFs and their relations with the proposed NeRF framework", + "text": "Depth oracle neural radiance field (DONeRF) [19 ###reference_b19###] uses ground-truth depth images of the training set to train ideal sample locations on rays, and performs sampling at the estimated locations.\nHowever, DONeRF works only on forward-facing scenes where all camera poses belong to a bounding box called the view cell.\nDepth supervised neural radiance field (DSNeRF) [5 ###reference_b5###] uses a sparse depth map estimated with the structure from motion technique and adds an optimization process to the original NeRF using estimated depth information, to achieve the best rendering performance of original NeRF with fewer training iterations and images.\nSimilar to DONeRF, we aim to improve the quality of rendered images by using depth images available at hand in a training dataset.\nNote, however, that different from DONeRF, the proposed method does not use the view cell information that is required in DONeRF, and is applicable with less restricted camera positions.\nSimilar to DSNeRF, we use depth information by leveraging a 
point cloud.\nHowever, the proposed framework and DSNeRF use a point cloud in different ways.\nDSNeRF uses a point cloud to adjust the volume density function of NeRF.\nDifferent from this, the proposed framework uses a point cloud to directly estimate the distance to the object surface from a camera." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Motivation", + "text": "###figure_2### In NeRF [17 ###reference_b17###], there is room for improving rendering accuracy.\nNeRF uses a hierarchical volume sampling method that performs sampling twice:\n\u201crough\u201d sampling with a stratified sampling approach and \u201cfine\u201d sampling in the space where an object is likely to exist.\nSee details in Section 2.1 ###reference_###.\nThe stratified sampling approach in NeRF divides a specified range into many bins and selects a sample uniformly at random from each bin.\nIn the stratified sampling process, sampling is performed not only in the space where the object exists, but also in free space or occluded regions.\nSampling in free space and occluded regions may degrade rendering quality.\nIf one can sample points only around an object in the rough sampling stage, the rendering performance might improve even without the fine sampling process.\nTo show the effects of the sampling density around an object on the rendering quality, we ran simple experiments with different sampling ranges around the surface of an object.\nFigure 2 ###reference_### shows the rendering accuracy with peak signal-to-noise ratio (PSNR) values with different sampling ranges, where we increased the default sampling range of NeRF by a factor of , , and by increasing distances between two samples.\nAs the sampling range increases, i.e., sampling density around an object decreases, the rendering accuracy rapidly degrades.\nWe observed from these 
experiments that narrowing the sampling range around an object can improve the rendering quality in NeRF.\nThis corresponds to the hierarchical volume sampling scheme of original NeRF that re-extracts samples with high volume density values to increase rendering efficiency.\nRecently, diverse low-cost depth cameras with high accuracy have been proposed [15 ###reference_b15###, 6 ###reference_b6###].\nDepth cameras (using multi-view) can measure the distance between an object and the device, giving additional 3D information of an object.\nWe conjecture that if we sample points on a 3D ray only around the surface of an object, the rendering quality of NeRF improves." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Overview", + "text": "###figure_3### Figure 3 ###reference_### illustrates the overall process of the proposed framework.\nA training set consists of color images and depth images, and at the training stage we use both.\nIn particular, we use depth images to sample in the area close to the surface of the object in a 3D space, and we refer to this sampling strategy as surface-based sampling.\nBy using those sample points obtained through surface-based sampling, we train the NeRF model.\nAt the offline stage,\nwe use depth images of the training set to generate a point cloud and save this point cloud for inference.\nAt the test stage,\nwe use the point cloud saved at the offline stage to generate a depth image corresponding to a novel view.\nWe further refine depth images through computationally efficient hole filling for surface-based sampling.\nUsing sampled points only around the surface of an object that is estimated with a refined depth,\nwe render images of novel views with a single NeRF network." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Surface-based sampling", + "text": "###figure_4### Figure 4 ###reference_### illustrates the difference between the sampling range of the original NeRF\u2019s sampling method (blue) and that of the proposed surface-based sampling method (orange).\nDifferent from original NeRF that samples 3D points over a wide range that includes the entire 3D object, the proposed surface-based sampling method mainly samples points around the surface of the object.\nWe now describe the geometry of the proposed surface-based sampling method for each ray of each view.\nAs in the original NeRF, we assume that each ray is propagated from the location of a camera (see Figure 4 ###reference_###).\nWe define the location of a camera in each ray as .\nThe distance between the locations of a camera and an object is the depth value from a depth image, and we denote it as .\nLet half of the specified sampling range be .\nThen, the location of a point nearest to the camera within the sampling range can be calculated as follows:\nNow, we determine the location of the th sample for each ray (considering that a ray originates from the camera location, ) by\nwhere is the number of sample points for each ray, and is a random number generated between and .\nWe perform stratified sampling near the surface of an object, where we determine the sample locations by (2 ###reference_###).\nIn (2 ###reference_###), is the length of each bin in stratified sampling of the original NeRF method.\nHere, the parameter determines the sampling range; if is fixed, ultimately affects the sampling density around the surface.\nAs decreases, the length of each bin becomes shorter and distances between sample points are expected to become closer, so the sampling density near the surface increases.\nAs increases, the length of each bin becomes longer and distances between sample points are expected to become farther, so the sampling density near the surface 
decreases.\nDifferent from the two-step network sampling scheme in original NeRF,\nthe proposed framework directly samples points near the surface of an object by using depth information in the near-surface sampling scheme (2 ###reference_###) in a single step, i.e., it uses a single network.\nWe expect that if the depth to the surface of a 3D object is accurately estimated,\nthe rendering quality improves by using a small , i.e., densely sampling 3D points.\nIf it is poorly estimated, we expect that a small value instead degrades the rendering quality.\nWith fixed , we recommend setting considering the accuracy of depth images.\n###figure_5###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Depth image generation for novel views", + "text": "In the training stage, we perform surface-based sampling without any additional process, assuming that a depth image for each view is available.\nIn the test stage, however, we assume that depth images are unavailable, so we perform depth estimation for a novel view for surface-based sampling.\nFor depth estimation, in the offline stage, we generate and save a point cloud as shown in Figure 3 ###reference_###.\nIn the test stage, we use this point cloud to estimate depth images for novel views.\nWith this depth estimation process, surface-based sampling can be performed without a ground truth depth image in the test stage." 
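The near-surface stratified sampling rule of Section 3.3 can be sketched as follows. This is a minimal sketch of Eqs. (1)-(2) under assumed names (origin, direction, depth, half_range), since the paper's symbols are not reproduced in this text.

```python
import numpy as np

def near_surface_samples(origin, direction, depth, half_range, n_samples, rng):
    """Stratified sampling confined to [depth - half_range, depth + half_range]
    along a ray; names are assumptions, not the paper's notation."""
    t_near = depth - half_range                # nearest point of the range, cf. Eq. (1)
    bin_len = 2.0 * half_range / n_samples     # length of each stratified bin
    i = np.arange(n_samples)
    u = rng.uniform(size=n_samples)            # one uniform draw per bin, cf. Eq. (2)
    t = t_near + (i + u) * bin_len             # sample depths along the ray
    return origin[None, :] + t[:, None] * direction[None, :]
```

Shrinking `half_range` with `n_samples` fixed increases the sampling density near the surface, which is the trade-off discussed above.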
+ }, + { + "section_id": "3.4.1", + "parent_section_id": "3.4", + "section_name": "3.4.1 Point cloud generation and refinement in the offline stage", + "text": "Figure 5 ###reference_### illustrates the key concept of the proposed point cloud generation and refinement method.\nTo improve the accuracy of depth estimation, we generate 3D points with a subset of training images, by repeatedly eliminating inaccurate points.\nIn constructing a subset of training images, we\ngive a sufficient and uniform distance between adjacent viewpoints.\nThis setup is more efficient in constructing a 3D point cloud,\ncompared to the setup that uses all training views.\nSee details of this experimental setup later in Section 4.2 ###reference_###.\nEach iteration consists of the following four steps and we repeat them as many times as the cardinality of the subset of training images, where we sequentially follow the trajectory of viewpoints in the subset of training data:\nWe generate a point cloud using a depth image from a viewpoint.\nWe project 3D points of the generated point cloud onto an image plane of the next viewpoint, and obtain the distance between each 3D point and the camera location of the next viewpoint by using the multiple view geometry calculation method [8 ###reference_b8###].\nWe compare each calculated distance to a ground-truth depth value from the depth image at the next viewpoint, and check whether the following condition is satisfied:\nwhere denotes the distance calculated in the second step above, and\n denotes the ground-truth depth value of a pixel position where the 3D point is projected, and denotes some specified threshold.\nIf the condition (3 ###reference_###) is not satisfied, we generate a new 3D point by back-projecting a pixel of the value .\nSetting appropriately is important to generate an accurate point cloud.\nIf is too large, 3D points with similar locations will be considered the same point.\nConsequently, fewer 3D points are generated, 
leading to faster rendering times;\nhowever, estimated depth images may contain many holes.\nConversely, if is too small, the number of 3D points increases since point clouds can be generated with overlapping points.\nThis decreases the number of holes in depth images, but it takes a long time for the rendering process.\nThroughout the paper, we use a subset of training views for point cloud generation and refinement.\nDifference from multi-view stereo (MVS) in point cloud generation.\nMVS is a standard approach for generating a point cloud or mesh from a set of images captured from many different views.\nWe observed that the proposed point cloud generation method can generate more points than the standard MVS method [3 ###reference_b3###] for similar computational time111With a standard graphics processing unit (GPU), the processing time of standard MVS is seconds (sec.) and that of the proposed point cloud generation is sec., both with views..\nAs a consequence, a point cloud generated by the proposed method above can improve rendering quality compared to that generated by MVS.\nWithin the proposed NeRF framework,\na point cloud generated by the proposed point cloud generation method and that given by the standard MVS method resulted in dB and dB in PSNR, respectively (for the Pavillon dataset [19 ###reference_b19###]; , )." 
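One refinement pass of the four-step procedure above might look as follows. This is a sketch under assumptions: a pinhole camera model with intrinsics `K` and pose `(R, t)`, and that the paper's (elided) condition (3) is an absolute-difference depth test with threshold `eps`; none of these names are the paper's.

```python
import numpy as np

def refine_points(points, K, R, t, gt_depth, eps):
    """One pass of offline point-cloud refinement (illustrative sketch).
    Projects existing 3D points into the next view, keeps pixels whose
    projected distance agrees with the ground-truth depth within eps,
    and back-projects new 3D points for the remaining valid pixels."""
    h, w = gt_depth.shape
    cam = R @ points.T + t[:, None]              # points in the next camera's frame
    z = cam[2]                                    # distance along the optical axis
    uv = (K @ cam)[:2] / z                        # pixel coordinates of projections
    covered = np.zeros((h, w), dtype=bool)
    for (u, v), d in zip(uv.T.round().astype(int), z):
        if 0 <= v < h and 0 <= u < w and abs(d - gt_depth[v, u]) <= eps:
            covered[v, u] = True                  # an existing point explains this pixel
    # back-project uncovered pixels with valid (non-zero) depth into new points
    vs, us = np.nonzero(~covered & (gt_depth > 0))
    d = gt_depth[vs, us]
    pix = np.stack([us, vs, np.ones_like(us)]) * d
    new_cam = np.linalg.inv(K) @ pix
    new_world = R.T @ (new_cam - t[:, None])
    return np.vstack([points, new_world.T])
```

Repeating this pass along the trajectory of the selected training viewpoints grows the cloud only where existing points disagree with the measured depth, matching the intent of the elimination-and-back-projection loop described above.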
+ }, + { + "section_id": "3.4.2", + "parent_section_id": "3.4", + "section_name": "3.4.2 Depth estimation from a point cloud in the test stage", + "text": "###figure_6### To obtain a depth image at a novel viewpoint using a point cloud,\nwe calculate the distance from a 3D point to the camera location by projecting the point cloud generated in Section 3.4.1 ###reference_.SSS1### onto the image plane.\nIf more than one 3D point is projected onto the same pixel location, we use the closest 3D point to the camera location for distance calculations.\nAt a novel viewpoint, a projected depth image from a point cloud could have \u201choles\u201d, i.e., pixels with zero values, if they do not have corresponding 3D point(s) in a point cloud.\nIn projected depth images, however, one cannot identify if such holes correspond to background areas or are missing information on the surface of a foreground object due to limited 3D points.\nIn this section, we aim to fill in missing information on the object surface while maintaining background areas.\nTo distinguish whether holes in projected depth images correspond to background area(s) or missing information on the surface of a foreground object, we use the following condition for a pixel of value :\nwhere and are the average and the standard deviation calculated from neighboring pixels in a projected depth image \u2013 whose center is the pixel of value \u2013 respectively, and is some specified threshold.\nIf the condition (4 ###reference_###) is satisfied, we determine that a hole is missing information on the surface, and fill that hole by applying a moving average filter with a kernel of size .\nIf is too large, there may still exist many holes with missing information on the surface of an object (not in background area(s)) even after the hole filling process.\nIf is too small, however, one may even fill holes in background area(s).\nSelecting an appropriate value can generate more accurate/useful depth images by minimizing 
missing information on the object surface and mitigating hole filling of the background areas.\nFigure 6 ###reference_### shows examples of estimated depth images without and with the proposed hole filling process, and the ground-truth depth image.\nWe observed that the proposed hole filling method estimates missing depth information for a foreground object, giving more appropriate depth maps.\nHowever, a few parts of the background that are supposed to have zero values are filled with some non-zero values.\nThis is suboptimal from the perspective of depth estimation, but it is a simple method that can provide sufficiently useful information for the proposed near-surface sampling in Section 3.3 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results and discussion", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "###figure_7### We used the synthetic Lego and Ship datasets in original NeRF [17 ###reference_b17###],222\nEach original synthetic dataset consists of training images and test images; viewpoints are sampled on the upper hemisphere (with fixed diameter) around an object.\n, the real dataset with the identifier 5a8aa0fab18050187cbe060e\nin BlendedMVS [28 ###reference_b28###], and the Pavillon scene dataset.\nFigure 7 ###reference_### shows these datasets.\nFor each synthetic dataset, we used training images and test images, all with the spatial resolution of .\nIn generating a point cloud (Section 3.4.1 ###reference_.SSS1###) for each synthetic dataset, we used of training images from the original dataset.333\nWe generated a point cloud with viewpoints by sequentially using every fifth viewpoint from viewpoints.\nWe repeated the point cloud generation process times, where each iteration consists of the four steps in Section 3.4.1 ###reference_.SSS1###.\n(Section 3.4.1 ###reference_.SSS1### describes the relation between the numbers of viewpoints and 
repetitions.)\nIn constructing a training dataset for each synthetic dataset,\nwe selected of original test images by skipping every other view and added them to the original training dataset.\nFor the real dataset,\nwe used training images and test images, all with the resolution of .\nIn generating a point cloud, we used of training images.\\@footnotemark\nFor all datasets, each instance has a different viewpoint.\nIf not further specified, we used the above experimental setup throughout all experiments.\nThe chosen real data contains multi-view images taken around an object, with several images captured from viewpoints closer to the object.\nIn our experiments, we used the depth images included in [28 ###reference_b28###], and used blended color images reflecting view-dependent lighting [28 ###reference_b28###] as the ground truth color images.\nWe compared the proposed NeRF framework using near-surface sampling with a point cloud, with original NeRF, DONeRF [19 ###reference_b19###], DSNeRF [5 ###reference_b5###], and Instant-NGP [18 ###reference_b18###].\nFor comparing performances between all five methods, we used the re-rendered Lego dataset and Pavillon scene dataset to better fit the view cell methodology of DONeRF that uses additional configurations for view cell generation, and is forward-facing.\nWe used training images and test images for these comparison experiments.\nFor point cloud generation, we used training images.\nFor comparing performances between the proposed and original NeRF, we used all three different datasets (Lego, Ship, and BlendedMVS) that are not necessarily forward-facing." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experimental setup", + "text": "Throughout experiments with different sampling ranges of the proposed surface-based sampling method, we assumed that the full sampling range of original NeRF [17 ###reference_b17###], i.e., the radius of the blue fan-shape in Figure 4 ###reference_###, is (unitless).\nFor synthetic datasets, we set half of the sampling range of proposed NeRF, i.e., in (1 ###reference_###)\u2013(2 ###reference_###), as , , , and .\nFor the real dataset, we set as , , , and .\n(We used larger sampling ranges in real dataset experiments compared to synthetic dataset experiments, since the depth quality of the real dataset is poorer than that of the synthetic dataset.444\nFor the synthetic Lego and Ship datasets and real BlendedMVS dataset, the PSNR value (in dB) for estimated depth in inference is , , and , respectively.)\nTo see the effects of depth estimation accuracy in the proposed NeRF framework,\nwe also ran experiments with ground-truth depth images and estimated depth images via the proposed method.\nWe set the number of sample points , except for experiments using different \u2019s.\nIn experiments comparing different NeRF methods, we used different numbers of sampling points, i.e., in (2 ###reference_###).\nFor fair comparisons, the total number of sampling points per ray of original NeRF is set identical to those of proposed NeRF, DONeRF [19 ###reference_b19###], DSNeRF [5 ###reference_b5###], and Instant-NGP [18 ###reference_b18###].\nIn the original NeRF approach, for each coarse and fine network,\nwe set the number of sample points per ray to , , , and .\nFor the proposed NeRF, DONeRF, and DSNeRF, we set as , , , and , and used only one rendering network.\nDifferent from original NeRF that uses samples with different locations for two different networks,\nInstant-NGP uses two networks that estimate color and density respectively, but uses samples with the same 
locations.\nFor Instant-NGP, we set the number of samples per ray to , , , and .\nThat is, in comparing different NeRF methods, we set the total number of sample points per ray as , , , and consistently for all the NeRF methods.\nThe remaining hyperparameters of the proposed NeRF approach are listed as follows.\nIn determining sampling locations (2 ###reference_###), we randomly sampled via the uniform distribution between and .\nIn the point cloud refinement condition (3 ###reference_###), we set as .\nIn the hole filling condition (4 ###reference_###), we set and as and , respectively.\nWe used the following hyperparameters throughout all experiments.\nWe set the total number of training iterations as , as the training losses tend to converge after iterations.\nFor each iteration, we set the batch size of input rays as .\nWe used the learning rate of until iterations, and reduced it to after iterations.\nWe used the ADAM optimizer.\nFor quantitative comparisons, we used the most representative measure, PSNR in dB, excluding the background area (if available).\nWe used an NVIDIA GeForce RTX 4090 GPU (24GB GDDR6X VRAM, 2.31GHz), an Intel(R) Xeon(R) Gold 6326 CPU (2.90GHz), and 503GB of main memory." 
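As an illustration of the hole-filling step of Section 3.4.2, whose thresholds are set above: the paper's exact condition (4) is not reproduced in this text, so the surface-hole test below (relative standard deviation of the non-zero neighborhood) is an assumption, as are the kernel size `k` and threshold `thresh`.

```python
import numpy as np

def fill_surface_holes(depth, k=3, thresh=0.5):
    """Sketch of hole filling for a projected depth image. A zero-valued
    pixel is treated as a surface hole (assumed test) when the standard
    deviation of its non-zero k-by-k neighborhood is small relative to the
    neighborhood mean; it is then filled with the neighborhood average."""
    out = depth.copy()
    h, w = depth.shape
    r = k // 2
    for v in range(h):
        for u in range(w):
            if depth[v, u] != 0:
                continue                          # not a hole
            win = depth[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1]
            vals = win[win > 0]
            if vals.size == 0:
                continue                          # isolated hole: keep as background
            mean, std = vals.mean(), vals.std()
            if std <= thresh * mean:              # assumed surface-hole condition
                out[v, u] = mean                  # moving-average fill
    return out
```

Consistent with the discussion in Section 3.4.2, a larger threshold fills more surface holes but risks also filling background pixels near the object boundary.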
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Comparisons with different sampling ranges in the proposed NeRF framework", + "text": "###figure_8### ###figure_9### ###figure_10### ###table_1### ###figure_11### ###figure_12### ###figure_13### ###figure_14### Using the proposed surface-based sampling method,\nwe compared results between different sampling ranges, either with ground-truth or estimated depth images.\nFirst, we compare performances between different sampling ranges, with the ground-truth depth images.\nFigure 8 ###reference_### with dotted lines compares the rendering quality of proposed NeRF with different sampling ranges, for three different datasets.\nIt demonstrates that with ground-truth depth information, as the sampling range becomes narrower, the rendering quality of NeRF improves.\nThis is natural: the narrower the sampling range, the more sample points are located near the surface of an object.\nNext, we compare performances between different sampling ranges, with the estimated depth images via the proposed point cloud generation and hole filling approaches.\nFigures 8 ###reference_### (solid lines)\u20139 ###reference_### compare the rendering quality of proposed NeRF with different sampling ranges, for three different datasets.\nIn Figure 9 ###reference_###,\ndifferent columns show rendered images with different sampling ranges;\nin the last column, the ground truth images are presented;\ndifferent rows show rendered images with different datasets.\nFigures 8 ###reference_###\u20139 ###reference_### demonstrate that the rendering quality of proposed NeRF improves as the sampling range becomes narrower, but only up to a certain sampling range, e.g., and of the full sampling range of original NeRF for synthetic data and real data, respectively.\nIf the sampling range is too narrow,\ne.g., and for synthetic data and real data, 
respectively, the rendering accuracy degraded.\nThis is because some estimated depth information is inaccurate, so points are sampled near regions where actual surfaces do not exist.\nFinally, we compare the rendering accuracy between the two proposed NeRF variants using ground-truth and estimated depth images, respectively.\nFigure 8 ###reference_### demonstrates that in the proposed NeRF framework, using estimated depth images degrades the overall rendering accuracy compared to using the ground truth depth, as one may expect.\nIn particular, points sampled around the inaccurately estimated surface of an object degrade the rendering accuracy." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Rendering quality comparisons between different NeRF models", + "text": "" + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Comparisons between five different NeRF models", + "text": "###table_2### ###table_3### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###table_4### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###table_5### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###table_6### ###figure_35### ###figure_36### ###figure_37### ###table_7### Table 1 ###reference_### and Figures 10 ###reference_###\u201311 ###reference_### compare the rendering quality between the five different NeRF models, with different numbers of samples.\nThey demonstrate that the proposed NeRF outperforms the original NeRF, DONeRF, DSNeRF, and Instant-NGP, regardless of the number of sample points per ray.\nFigures 10 ###reference_###\u201311 ###reference_### show that the proposed NeRF framework produces significantly better details of a 3D object, compared to the original NeRF, DONeRF, DSNeRF, and Instant-NGP.\nTable 1 
###reference_### with two different datasets shows that the rendering accuracy decreases as the number of sample points per ray decreases.\nThis is similarly observed in all five NeRF models.\nThis is because as the number of sample points decreases, we have less information to model a 3D object via networks." + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 A closer look at original NeRF vs. proposed NeRF", + "text": "Figure 12 ###reference_### compares the rendering performance particularly between the original and proposed NeRFs, with different numbers of samples per ray.\nThe figure demonstrates for the three different datasets that the proposed NeRF framework gives significantly better rendering accuracy compared to the original NeRF, regardless of the number of sample points per ray.\nMore importantly,\nFigure 12 ###reference_### shows that in the proposed NeRF framework, the performance degradation caused by reducing the number of samples per ray is significantly smaller, compared to the original NeRF.\nIn other words, the proposed NeRF can maintain the rendering quality, while reducing the number of samples per ray.\nConsequently, we conclude that only with a limited number of samples per ray, the proposed NeRF model can achieve significantly better rendering accuracy,\ncompared to the original NeRF model using many samples per ray.\nFor the synthetic datasets, the proposed framework using samples per ray outperformed the original NeRF using samples per ray;\nfor the real data, the rendering accuracy of the proposed NeRF model using samples per ray is comparable with that of the original NeRF using samples per ray.\nWe expect that the smaller the error in estimated depth at a novel view, the narrower the sampling range that can be used while reducing the number of samples.\nFigure 13 ###reference_### shows rendered images by the proposed framework for different numbers of sample points per ray, with three different datasets.\nExcept for the 
extreme case of using only eight samples per ray (), the image quality of rendered images by the proposed framework gradually degrades as the number of samples per ray decreases.\n(When , the rendering quality significantly degraded.)\nThis, together with the above results from Figure 12 ###reference_###, underscores the importance of the near-surface sampling approach.\nFigure 14 ###reference_### compares rendered images by the original and proposed NeRF methods when .\nParticularly in the proposed NeRF framework, we used the worst sampling range for the BlendedMVS dataset.\nThe proposed surface-based sampling method significantly improves the overall rendering quality of NeRF, but there exist some dot artifacts.\nThis is because, after the hole filling, some information is still missing or the filled holes have inaccurate depth values.\nWe conjecture that if one uses a more sophisticated depth estimation method than the proposed simple hole filling scheme, one can remove those artifacts.\nTable 2 ###reference_### summarizes PSNR values of the original and proposed NeRF models, for different numbers of samples per ray () and different sampling ranges ().\nFor each setup using an identical value,\nthe proposed NeRF framework outperformed the original NeRF model, regardless of .\n###table_8###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Training time comparisons between different NeRF models", + "text": "Table 3 ###reference_### compares the training time between the five different NeRF methods,\nwith different numbers of samples.\nThe Instant-NGP model showed the fastest training time among the five NeRF models \u2013 note, however, that its rendering accuracy is significantly worse than that of the proposed NeRF method (see Table 1 ###reference_###).\nExcept for Instant-NGP, the proposed NeRF method showed the fastest training time.\nParticularly compared to the original NeRF, the proposed NeRF was about two times faster.\nThe reason is that we trained 
a single fully-connected network in the proposed NeRF framework, whereas the original NeRF approach trained two fully-connected networks.\nTraining DONeRF and DSNeRF took longer than training the proposed NeRF model (with the same number of iterations).\nThis is natural because DONeRF and DSNeRF train an extra depth estimation network.\nRegardless of the model, the smaller the number of sample points, the less training time was required." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In NeRF methods, it is important to reduce the number of sample points per ray while maintaining the rendering quality, as using fewer samples can reduce training/inference time.\nBased on the assumption that the closer a sample point is to the surface of an object, the more important it is for rendering, we propose a near-surface sampling method for NeRF.\nThe proposed framework samples 3D points only near the surface of an object, by estimating depth images from a 3D point cloud generated with a subset of training data and a simple hole filling method.\nFor different datasets, the proposed NeRF framework significantly improves upon the original NeRF [17 ###reference_b17###] and three state-of-the-art NeRF methods, DONeRF [19 ###reference_b19###], DSNeRF [5 ###reference_b5###], and Instant-NGP [18 ###reference_b18###].\nParticularly compared to the original NeRF method,\nthe proposed framework can achieve significantly better rendering accuracy, with only a quarter of the sample points per ray.\nIn addition, the proposed near-surface sampling framework can make NeRF training about twice as fast, while improving the rendering quality with an appropriate sampling range parameter.\nThe proposed method would be particularly useful for applications/technologies where visualizing details in novel views is important.\nThere are a number of avenues for future work to improve the proposed framework.\nFirst, the proposed framework takes a longer 
inference time compared to the original NeRF model, because projecting many 3D points to a view plane and estimating a depth image is slower than inference via the coarse network in the original NeRF.\nWe expect to reduce rendering time by speeding up the point cloud projection process.\nSecond, the proposed NeRF framework is not completely end-to-end.\nIn particular, the point cloud generation and refinement process is performed in an offline stage and is not yet optimized for rendering.\nTherefore, we expect to improve the performance of the NeRF model by making it fully end-to-end, incorporating the point cloud generation and refinement processes into training.\nFinally, we expect to further improve the rendering performance of the proposed method by using a more accurate depth estimation method." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: PSNR (dB) comparisons with different numbers of samples per ray for different NeRF methods\n( and for the Lego and Pavillon datasets in [19], respectively).
(a) The Lego dataset
Method | NeRF | DSNeRF | Instant-NGP | DONeRF | Proposed NeRF
64 | 27.84 | 29.24 | 30.40 | 31.25 | 33.57
32 | 26.82 | 28.67 | 30.31 | 31.13 | 32.13
16 | 24.58 | 25.81 | 29.13 | 30.08 | 31.36
8 | 22.72 | 23.92 | 28.94 | 29.13 | 30.25
(b) The Pavillon scene dataset
Method | NeRF | DSNeRF | Instant-NGP | DONeRF | Proposed NeRF
64 | 25.13 | 27.61 | 32.09 | 32.00 | 33.21
32 | 22.26 | 26.15 | 31.71 | 31.99 | 32.40
16 | 19.25 | 23.89 | 30.60 | 31.71 | 31.97
8 | 17.52 | 21.18 | 28.42 | 31.29 | 31.44
\\botrule
\n
", + "capture": "Table 1: PSNR (dB) comparisons with different numbers of samples per ray for different NeRF methods\n( and for the Lego and Pavillon datasets in [19], respectively)." + }, + "2": { + "table_html": "
\n
Table 2: PSNR (dB) comparisons between the proposed method and the original NeRF with different numbers of samples and sampling ranges. The numbers in parentheses denote PSNR differences between the proposed and original NeRF models.
Method | Sampling range () | Lego | Ship | BlendedMVS
64 | Original NeRF | 4 | 25.43 | 24.37 | 19.96
Proposed NeRF | 1 | 26.19 (+0.76) | 24.85 (+0.48) | 20.92 (+0.96)
1/2 | 28.38 (+2.95) | 25.49 (+1.12) | 20.81 (+0.85)
1/4 | 28.87 (+3.44) | 25.63 (+1.26) | 19.81 (-0.15)
1/8 | 27.86 (+2.43) | 25.19 (+0.82)
32 | Original NeRF | 4 | 22.06 | 22.75 | 18.79
Proposed NeRF | 1 | 23.50 (+1.44) | 23.57 (+0.82) | 19.92 (+1.13)
1/2 | 26.05 (+3.99) | 24.64 (+1.89) | 20.29 (+1.50)
1/4 | 27.45 (+5.39) | 25.09 (+2.34) | 19.54 (+0.75)
1/8 | 27.55 (+5.49) | 25.04 (+2.29)
16 | Original NeRF | 4 | 19.15 | 21.07 | 16.99
Proposed NeRF | 1 | 21.10 (+1.95) | 22.59 (+1.52) | 18.30 (+1.31)
1/2 | 23.50 (+4.35) | 23.65 (+2.58) | 19.37 (+2.38)
1/4 | 25.16 (+6.01) | 24.23 (+3.16) | 18.99 (+2.00)
1/8 | 26.78 (+7.63) | 24.70 (+3.63)
8 | Original NeRF | 4 | 16.75 | 19.11 | 14.42
Proposed NeRF | 1 | 19.51 (+2.76) | 21.48 (+2.37) | 16.73 (+2.31)
1/2 | 21.12 (+4.37) | 22.63 (+3.52) | 17.74 (+3.32)
1/4 | 22.67 (+5.92) | 23.21 (+4.10) | 18.06 (+3.64)
1/8 | 25.44 (+8.69) | 24.11 (+5.00)
\\botrule
\n
", + "capture": "Table 2: PSNR (dB) comparisons between the proposed method and the original NeRF with different numbers of samples and sampling ranges. The numbers in parentheses denote PSNR differences between the proposed and original NeRF models." + }, + "3": { + "table_html": "
\n
Table 3: Training time (hour) comparisons between the proposed method and four different NeRF models with different numbers of samples (the Pavillon scene dataset).\nWe used iterations throughout the experiments.
Method | NeRF | DSNeRF | Instant-NGP | DONeRF | Proposed NeRF
64 | 21.27 | 16.76 | 1.54 | 16.16 | 12.50
32 | 17.58 | 14.16 | 0.65 | 13.66 | 9.34
16 | 13.27 | 12.44 | 0.62 | 11.63 | 7.52
8 | 12.14 | 11.86 | 0.57 | 11.04 | 7.47
\\botrule
\n
", + "capture": "Table 3: Training time (hour) comparisons between the proposed method and four different NeRF models with different numbers of samples (the Pavillon scene dataset).\nWe used iterations throughout the experiments." + } + }, + "image_paths": { + "1": { + "figure_path": "2310.04152v2_figure_1.png", + "caption": "Figure 1: The brief overview of the proposed NeRF framework that samples points near the estimated surface from a point cloud.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig1.png" + }, + "2": { + "figure_path": "2310.04152v2_figure_2.png", + "caption": "Figure 2: The NeRF rendering accuracy comparisons with different sampling ranges. Here, d\ud835\udc51ditalic_d denotes the default sampling range of NeRF.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig2.png" + }, + "3": { + "figure_path": "2310.04152v2_figure_3.png", + "caption": "Figure 3: The overall diagram of the proposed NeRF framework.\nThe red words highlight proposed modules.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig3.png" + }, + "4": { + "figure_path": "2310.04152v2_figure_4.png", + "caption": "Figure 4: Sampling range comparisons between the original NeRF (blue) and the proposed surface-based sampling scheme (orange).\nThe solid line represents the surface of an object, and the dotted lines inside the blue fan represent rays.\nThe area within two dotted lines outside the blue region corresponds to the field of view of a camera.\nUnlike the original NeRF, the proposed method samples only around the surface of an object.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig4.png" + }, + "5": { + "figure_path": "2310.04152v2_figure_5.png", + "caption": "Figure 5: An example of the proposed point cloud refinement.\nIn the first step, we generate a point cloud from a depth image of a viewpoint.\nIn the second step,\nwe project the generated point cloud onto the next viewpoint.\nIn 
the third step,\nwe use the depth thresholding scheme (3) using projected points in the next viewpoint and ground-truth depth values.\nIf a projected point in the next viewpoint has a similar value to the ground-truth, we consider that the corresponding 3D point is redundant to generate.\nWe then generate new 3D points in the next viewpoint if they are determined to be necessary.\nWe repeat the above steps.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig5.png" + }, + "6": { + "figure_path": "2310.04152v2_figure_6.png", + "caption": "Figure 6: A depth image by projecting a point cloud (left);\na depth image by projecting a refined point cloud with hole filling (middle);\nthe ground truth depth image (right).", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig6.png" + }, + "7": { + "figure_path": "2310.04152v2_figure_7.png", + "caption": "Figure 7: The Lego (1111st), Ship (2222nd), BlendedMVS (3333rd), and Pavillon (4444th) datasets.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig7.png" + }, + "8(a)": { + "figure_path": "2310.04152v2_figure_8(a).png", + "caption": "Figure 8: PSNR (dB) comparisons with different sampling ranges, for three different datasets (N=64\ud835\udc4164N=64italic_N = 64).\nThe dotted and solid lines denote the rendering accuracy in PSNR values of proposed NeRF, with the ground-truth and estimated depth images, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig8_a.png" + }, + "8(b)": { + "figure_path": "2310.04152v2_figure_8(b).png", + "caption": "Figure 8: PSNR (dB) comparisons with different sampling ranges, for three different datasets (N=64\ud835\udc4164N=64italic_N = 64).\nThe dotted and solid lines denote the rendering accuracy in PSNR values of proposed NeRF, with the ground-truth and estimated depth images, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig8_b.png" + }, + "8(c)": { + 
"figure_path": "2310.04152v2_figure_8(c).png", + "caption": "Figure 8: PSNR (dB) comparisons with different sampling ranges, for three different datasets (N=64\ud835\udc4164N=64italic_N = 64).\nThe dotted and solid lines denote the rendering accuracy in PSNR values of proposed NeRF, with the ground-truth and estimated depth images, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig8_c.png" + }, + "9(a)": { + "figure_path": "2310.04152v2_figure_9(a).png", + "caption": "Figure 9: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different sampling ranges (we used estimated depth images via the proposed method; N=64\ud835\udc4164N=64italic_N = 64).\nThe sampling ranges are scaled versions of the original NeRF\u2019s with \u03b1\ud835\udefc\\alphaitalic_\u03b1\u2019s in (2).\nImages in the 4444th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig9_a.png" + }, + "9(b)": { + "figure_path": "2310.04152v2_figure_9(b).png", + "caption": "Figure 9: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different sampling ranges (we used estimated depth images via the proposed method; N=64\ud835\udc4164N=64italic_N = 64).\nThe sampling ranges are scaled versions of the original NeRF\u2019s with \u03b1\ud835\udefc\\alphaitalic_\u03b1\u2019s in (2).\nImages in the 4444th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig9_b.png" + }, + "9(c)": { + "figure_path": "2310.04152v2_figure_9(c).png", + "caption": "Figure 9: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different sampling ranges (we used estimated depth images via the proposed method; N=64\ud835\udc4164N=64italic_N = 
64).\nThe sampling ranges are scaled versions of the original NeRF\u2019s with \u03b1\ud835\udefc\\alphaitalic_\u03b1\u2019s in (2).\nImages in the 4444th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig9_c.png" + }, + "9(d)": { + "figure_path": "2310.04152v2_figure_9(d).png", + "caption": "Figure 9: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different sampling ranges (we used estimated depth images via the proposed method; N=64\ud835\udc4164N=64italic_N = 64).\nThe sampling ranges are scaled versions of the original NeRF\u2019s with \u03b1\ud835\udefc\\alphaitalic_\u03b1\u2019s in (2).\nImages in the 4444th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig9_d.png" + }, + "10(a)": { + "figure_path": "2310.04152v2_figure_10(a).png", + "caption": "Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_a.png" + }, + "10(b)": { + "figure_path": "2310.04152v2_figure_10(b).png", + "caption": "Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_b.png" + }, + "10(c)": { + "figure_path": "2310.04152v2_figure_10(c).png", + "caption": "Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_c.png" + }, + "10(d)": { + "figure_path": "2310.04152v2_figure_10(d).png", + "caption": 
"Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_d.png" + }, + "10(e)": { + "figure_path": "2310.04152v2_figure_10(e).png", + "caption": "Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_e.png" + }, + "10(f)": { + "figure_path": "2310.04152v2_figure_10(f).png", + "caption": "Figure 10: \nComparisons of rendered images with different NeRFs (the Lego dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig10_f.png" + }, + "11(a)": { + "figure_path": "2310.04152v2_figure_11(a).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_a.png" + }, + "11(b)": { + "figure_path": "2310.04152v2_figure_11(b).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_b.png" + }, + "11(c)": { + "figure_path": "2310.04152v2_figure_11(c).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": 
"http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_c.png" + }, + "11(d)": { + "figure_path": "2310.04152v2_figure_11(d).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_d.png" + }, + "11(e)": { + "figure_path": "2310.04152v2_figure_11(e).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_e.png" + }, + "11(f)": { + "figure_path": "2310.04152v2_figure_11(f).png", + "caption": "Figure 11: \nComparisons of rendered images with different NeRFs (the Pavillon scene dataset [19]; N=8\ud835\udc418N=8italic_N = 8, \u03b1=1/2\ud835\udefc12\\alpha=1/2italic_\u03b1 = 1 / 2)", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig11_f.png" + }, + "12(a)": { + "figure_path": "2310.04152v2_figure_12(a).png", + "caption": "Figure 12: PSNR (dB) comparisons with different numbers of samples per ray, for three different datasets (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4).\nThe green line with squares and yellow line with triangles denote the rendering accuracy of proposed and original NeRF, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig12_a.png" + }, + "12(b)": { + "figure_path": "2310.04152v2_figure_12(b).png", + "caption": "Figure 12: PSNR (dB) comparisons with different numbers of samples per ray, for three different datasets (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, 
\u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4).\nThe green line with squares and yellow line with triangles denote the rendering accuracy of proposed and original NeRF, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig12_b.png" + }, + "12(c)": { + "figure_path": "2310.04152v2_figure_12(c).png", + "caption": "Figure 12: PSNR (dB) comparisons with different numbers of samples per ray, for three different datasets (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4).\nThe green line with squares and yellow line with triangles denote the rendering accuracy of proposed and original NeRF, respectively.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig12_c.png" + }, + "13(a)": { + "figure_path": "2310.04152v2_figure_13(a).png", + "caption": "Figure 13: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different numbers of samples per ray (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4). Images in the 5555th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig13_a.png" + }, + "13(b)": { + "figure_path": "2310.04152v2_figure_13(b).png", + "caption": "Figure 13: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different numbers of samples per ray (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4). 
Images in the 5555th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig13_b.png" + }, + "13(c)": { + "figure_path": "2310.04152v2_figure_13(c).png", + "caption": "Figure 13: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different numbers of samples per ray (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4). Images in the 5555th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig13_c.png" + }, + "13(d)": { + "figure_path": "2310.04152v2_figure_13(d).png", + "caption": "Figure 13: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different numbers of samples per ray (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4). Images in the 5555th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig13_d.png" + }, + "13(e)": { + "figure_path": "2310.04152v2_figure_13(e).png", + "caption": "Figure 13: \nComparisons of rendered images via proposed NeRF for the Lego (1111st row), Ship (2222nd row), and BlendedMVS (3333rd row) datasets, with different numbers of samples per ray (for Lego and Ship, \u03b1=1/16\ud835\udefc116\\alpha=1/16italic_\u03b1 = 1 / 16; for BlendedMVS, \u03b1=1/4\ud835\udefc14\\alpha=1/4italic_\u03b1 = 1 / 4). 
Images in the 5555th column are ground truths.", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig13_e.png" + }, + "14(a)": { + "figure_path": "2310.04152v2_figure_14(a).png", + "caption": "Figure 14: A closer look at rendered images by the original NeRF and proposed NeRF method for a real dataset in BlendedMVS [28] (N=64\ud835\udc4164N=64italic_N = 64; we used the worst sampling range for the N=64\ud835\udc4164N=64italic_N = 64 case, \u03b1=1/8\ud835\udefc18\\alpha=1/8italic_\u03b1 = 1 / 8).", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig14_a.png" + }, + "14(b)": { + "figure_path": "2310.04152v2_figure_14(b).png", + "caption": "Figure 14: A closer look at rendered images by the original NeRF and proposed NeRF method for a real dataset in BlendedMVS [28] (N=64\ud835\udc4164N=64italic_N = 64; we used the worst sampling range for the N=64\ud835\udc4164N=64italic_N = 64 case, \u03b1=1/8\ud835\udefc18\\alpha=1/8italic_\u03b1 = 1 / 8).", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig14_b.png" + }, + "14(c)": { + "figure_path": "2310.04152v2_figure_14(c).png", + "caption": "Figure 14: A closer look at rendered images by the original NeRF and proposed NeRF method for a real dataset in BlendedMVS [28] (N=64\ud835\udc4164N=64italic_N = 64; we used the worst sampling range for the N=64\ud835\udc4164N=64italic_N = 64 case, \u03b1=1/8\ud835\udefc18\\alpha=1/8italic_\u03b1 = 1 / 8).", + "url": "http://arxiv.org/html/2310.04152v2/extracted/5477193/figs/Fig14_c.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Boss M, Braun R, Jampani V, et al (2021) NeRD: Neural reflectance decomposition from image collections. 
In: IEEE/CVF International Conference on Computer Vision, pp 12664\u201312674, 10.1109/ICCV48922.2021.01245\n\nCernea [2020]\n\nCernea D (2020) OpenMVS: Multi-view stereo reconstruction library, URL https://cdcseacave.github.io/openMVS\n\nChen and Williams [1993]\n\nChen SE, Williams L (1993) View interpolation for image synthesis. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 279\u2013288, 10.1145/166117.166153\n\nDeng et al [2022]\n\nDeng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254\n\nDraelos et al [2015]\n\nDraelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 
10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He LW, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3D object category modelling. In: International Conference on 3D Vision, pp 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Alan W (1993) 3D Computer Graphics. Addison-Wesley
", + "venue": "Cernea D (2020) OpenMVS: Multi-view stereo reconstruction library, URL https://cdcseacave.github.io/openMVS
", + "url": null + } + }, + { + "2": { + "title": "Cernea D (2020) OpenMVS: Multi-view stereo reconstruction library, URL https://cdcseacave.github.io/openMVS
", + "author": "Boss M, Braun R, Jampani V, et al (2021) NeRD: Neural reflectance decomposition from image collections. In: IEEE/CVF International Conference on Computer Vision, pp 12664\u201312674, 10.1109/ICCV48922.2021.01245
", + "venue": "Chen SE, Williams L (1993) View interpolation for image synthesis. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 279\u2013288, 10.1145/166117.166153
", + "url": null + } + }, + { + "3": { + "title": "Chen SE, Williams L (1993) View interpolation for image synthesis. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 279\u2013288, 10.1145/166117.166153
In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. 
doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Cernea D (2020) OpenMVS: Multi-view stereo reconstruction library, URL https://cdcseacave.github.io/openMVS\n\nChen and Williams [1993]\n\nChen SE, Williams L (1993) View interpolation for image synthesis. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 279\u2013288, 10.1145/166117.166153\n\nDeng et al [2022]\n\nDeng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254\n\nDraelos et al [2015]\n\nDraelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Deng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254\n\nDraelos et al [2015]\n\nDraelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "4": { + "title": "Deng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254\n\nDraelos et al [2015]\n\nDraelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Chen SE, Williams L (1993) View interpolation for image synthesis. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 279\u2013288, 10.1145/166117.166153\n\nDeng et al [2022]\n\nDeng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254\n\nDraelos et al [2015]\n\nDraelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256\n\nGortler et al [1996]\n\nGortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Draelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256", + "url": null + } + }, + { + "5": { + "title": "Draelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256", + "author": "Deng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: Fewer views and faster training for free. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12872\u201312881, 10.1109/CVPR52688.2022.01254", + "venue": "Gortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200", + "url": null + } + }, + { + "6": { + "title": "Gortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200", + "author": "Draelos M, Qiu Q, Bronstein A, et al (2015) Intel realsense = real low cost gaze. In: IEEE International Conference on Image Processing, pp 2520\u20132524, 10.1109/ICIP.2015.7351256", + "venue": "Hartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Gortler SJ, Grzeszczuk R, Szeliski R, et al (1996) The lumigraph. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 43\u201354, 10.1145/237170.237200\n\nHartley and Zisserman [2003]\n\nHartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Hu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "8": { + "title": "Hu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. 
doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Hartley R, Zisserman A (2003) Multiple view geometry in computer vision, Second Edition. Cambridge University Press\n\n\nHu et al [2022]\n\nHu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Johari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. 
In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "9": { + "title": "Johari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Hu T, Liu S, Chen Y, et al (2022) EfficientNeRF efficient neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12902\u201312911, 10.1109/CVPR52688.2022.01256\n\nJohari et al [2022]\n\nJohari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782\n\nLevoy and Hanrahan [1996]\n\nLevoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Levoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "10": { + "title": "Levoy M, Hanrahan P (1996) Light field rendering. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 0.1145/237170.237199\n\nLin et al [2021]\n\nLin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Johari MM, Lepoittevin Y, Fleuret F (2022) GeoNeRF: Generalizing nerf with geometry priors. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 18344\u201318347, 10.1109/CVPR52688.2022.01782
In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Lin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "11": { + "title": "Lin CH, Ma WC, Torralba A, et al (2021) BARF: Bundle-adjusting neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569\n\nLindell et al [2021]\n\nLindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Levoy M, Hanrahan P (1996) Light field rendering. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 31\u201342, 10.1145/237170.237199
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Lindell DB, Martel JNP, Wetzstein G (2021) AutoInt: Automatic integration for fast neural volume rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432\n\nLiu et al [2020]\n\nLiu L, Gu J, Lin KZ, et al (2020) Neural sparse voxel fields. In: Proceedings of the International Conference on Neural Information Processing Systems, p 15651\u201315663\n\n\nMankoff and Russo [2013]\n\nMankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 
", + "url": null + } + }, + { + "12": { + "title": "BARF: Bundle-adjusting neural radiance fields
", + "author": "Lin CH, Ma WC, Torralba A, et al (2021)
", + "venue": "In: IEEE/CVF International Conference on Computer Vision, pp 5721\u20135731, 10.1109/ICCV48922.2021.00569
", + "url": null + } + }, + { + "13": { + "title": "AutoInt: Automatic integration for fast neural volume rendering
", + "author": "Lindell DB, Martel JNP, Wetzstein G (2021)
", + "venue": "In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 14551\u201314560, 10.1109/CVPR46437.2021.01432
", + "url": null + } + }, + { + "14": { + "title": "Neural sparse voxel fields
", + "author": "Liu L, Gu J, Lin KZ, et al (2020)
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. 
In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Martin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "15": { + "title": "Martin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 
10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. 
In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Mankoff K, Russo T (2013) The kinect: A low-cost, high-resolution, short-range 3d camera. Earth Surface Processes and Landforms 38:926\u2013936. doi.org/10.1002/esp.3332\n\nMartin-Brualla et al [2021]\n\nMartin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 
10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Mildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "16": { + "title": "Mildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. 
In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Martin-Brualla R, Radwan N, Sajjadi MSM, et al (2021) NeRF in the Wild: Neural radiance fields for unconstrained photo collections. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7206\u20137215, 10.1109/CVPR46437.2021.00713\n\nMildenhall et al [2020]\n\nMildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. 
Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "M\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "17": { + "title": "M\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 
10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018\n\nRebain et al [2021]\n\nRebain D, Jiang W, Yazdani S, et al (2021) DeRF: Decomposed radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393\n\nShade et al [1998]\n\nShade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Mildenhall B, Srinivasan PP, Tancik M, et al (2020) NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of the European Conference on Computer Vision, pp 405\u2013421, 10.1007/978-3-030-58452-8_24\n\nM\u00fcller et al [2022]\n\nM\u00fcller T, Evans A, Schied C, et al (2022) Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans Graph 41. 10.1145/3528223.3530127\n\nNeff et al [2021]\n\nNeff T, Stadlbauer P, Parger M, et al (2021) DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks. Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340\n\nNiemeyer and Geiger [2021]\n\nNiemeyer M, Geiger A (2021) GIRAFFE: Representing scenes as compositional generative neural feature fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129\n\nPark et al [2021]\n\nPark K, Sinha U, Barron JT, et al (2021) Nerfies: Deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581\n\nPumarola et al [2021]\n\nPumarola A, Corona E, Pons-Moll G, et al (2021) D-NeRF: Neural radiance fields for dynamic scenes. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018",
        "venue": "ACM Trans Graph 41. 10.1145/3528223.3530127",
        "url": null
      }
    },
    {
      "18": {
        "title": "DONeRF: Towards real-time rendering of neural radiance fields using depth oracle networks",
        "author": "Neff T, Stadlbauer P, Parger M, et al (2021)",
        "venue": "Computer Graphics Forum 40:45\u201349. 10.1111/cgf.14340",
        "url": null
      }
    },
    {
      "19": {
        "title": "GIRAFFE: Representing scenes as compositional generative neural feature fields",
        "author": "Niemeyer M, Geiger A (2021)",
        "venue": "In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 11448\u201311459, 10.1109/CVPR46437.2021.01129",
        "url": null
      }
    },
    {
      "20": {
        "title": "Nerfies: Deformable neural radiance fields",
        "author": "Park K, Sinha U, Barron JT, et al (2021)",
        "venue": "In: IEEE/CVF International Conference on Computer Vision, pp 5845\u20135854, 10.1109/ICCV48922.2021.00581",
        "url": null
      }
    },
    {
      "21": {
        "title": "D-NeRF: Neural radiance fields for dynamic scenes",
        "author": "Pumarola A, Corona E, Pons-Moll G, et al (2021)",
        "venue": "In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 10313\u201310322, 10.1109/CVPR46437.2021.01018",
        "url": null
      }
    },
    {
      "22": {
        "title": "DeRF: Decomposed radiance fields",
        "author": "Rebain D, Jiang W, Yazdani S, et al (2021)",
        "venue": "In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 14148\u201314156, 10.1109/CVPR46437.2021.01393",
        "url": null
      }
    },
    {
      "23": {
        "title": "Layered depth images",
        "author": "Shade J, Gortler S, He Lw, et al (1998)",
        "venue": "In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. 
In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "24": { + "title": "Srinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Shade J, Gortler S, He Lw, et al (1998) Layered depth images. In: Proceedings of the Conference on Computer Graphics and Interactive Techniques, pp 231\u2013242, 10.1145/280814.280882\n\nSrinivasan et al [2021]\n\nSrinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Xie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "25": { + "title": "Xie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Srinivasan PP, Deng B, Zhang X, et al (2021) NeRV: Neural reflectance and visibility fields for relighting and view synthesis. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 7491\u20137500, 10.1109/CVPR46437.2021.00741\n\nXie et al [2021]\n\nXie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Xu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "26": { + "title": "Xu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. 
In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Xie C, Park K, Martin-Brualla R, et al (2021) Fig-NeRF: Figure-ground neural radiance fields for 3d object category modelling. In: International Conference on 3D Vision, p 962\u2013971, 10.1109/3DV53792.2021.00104\n\nXu et al [2022]\n\nXu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Yao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "27": { + "title": "Yao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Xu Q, Xu Z, Philip J, et al (2022) Point-NeRF: Point-based neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 5438\u20135448, 10.1109/CVPR52688.2022.00536\n\nYao et al [2020]\n\nYao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": "Yen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "url": null + } + }, + { + "28": { + "title": "Yen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. 
In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "author": "Yao Y, Luo Z, Li S, et al (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 1787\u20131796, 10.1109/cvpr42600.2020.00186\n\nYen-Chen et al [2021]\n\nYen-Chen L, Florence P, Barron JT, et al (2021) iNeRF: Inverting neural radiance fields for pose estimation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), p 1323\u20131330, 10.1109/IROS51168.2021.9636708", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2310.04152v2" +} \ No newline at end of file