arxiv:2411.18966

SVGS: Enhancing Gaussian Splatting Using Primitives with Spatially Varying Colors

Published on May 4 · Submitted by Rui Xu on May 6

Abstract

AI-generated summary: Spatially varying Gaussian splatting improves multi-view reconstruction by enhancing Gaussian primitives with spatially varying color and opacity functions, achieving better novel view synthesis and geometric reconstruction.

Gaussian Splatting demonstrates impressive results in multi-view reconstruction based on explicit Gaussian representations. However, current Gaussian primitives carry only a single view-dependent color and a single opacity to represent the appearance and geometry of the scene, resulting in a non-compact representation. In this paper, we introduce SVGS (Spatially Varying Gaussian Splatting), which uses spatially varying colors and opacity within a single Gaussian primitive to improve its representational ability. We implement bilinear interpolation, movable kernels, and tiny neural networks as the spatially varying functions. SVGS employs 2D Gaussian surfels as primitives, which significantly enhances novel view synthesis while maintaining high-quality geometric reconstruction. This approach is particularly effective in practice, since scenes that combine complex textures with relatively simple geometry occur frequently in real-world environments. Quantitative and qualitative experiments demonstrate that all three functions outperform the baseline, with the best-performing variant, movable kernels, achieving superior novel view synthesis on multiple datasets, highlighting the strong potential of spatially varying functions. Project page: https://ruixu.me/html/SuperGaussians/index.html

Community

Paper submitter

Today we're releasing SVGS, a new approach for Gaussian Splatting that makes each primitive far more expressive by giving it spatially varying colors and opacity.

Gaussian Splatting has become a powerful paradigm for novel view synthesis, but existing Gaussian primitives are still limited in how they represent appearance. In standard 3DGS and 2DGS, each Gaussian is typically assigned only a single learnable color and opacity, which makes the representation less compact and can limit fine detail reconstruction.
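For context, standard splatting alpha-composites depth-ordered Gaussians, each contributing one constant color c_i and opacity alpha_i (notation ours, but this is the usual formulation):

C(x) = \sum_i c_i \, \alpha_i G_i(x) \prod_{j<i} \left( 1 - \alpha_j G_j(x) \right)

where G_i(x) is the i-th projected Gaussian's falloff at pixel x. SVGS keeps this compositing scheme but lets c_i and alpha_i become functions of the local coordinates of the ray-primitive intersection, rather than per-primitive constants.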

SVGS changes this by letting the appearance vary across each primitive itself. Instead of assigning every Gaussian a single color, SVGS equips each Gaussian surfel with spatially varying color and opacity, so different regions within a single primitive can represent different visual patterns.
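As a minimal sketch of the bilinear variant (the function name, tensor shapes, and the [0, 1]^2 local parameterization of the surfel are our assumptions, not the paper's exact implementation), in PyTorch:

import torch

def bilinear_color(corner_colors: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # corner_colors: (4, 3) learnable RGB values at the surfel's local corners,
    # ordered [c00, c10, c01, c11]; u, v: (N,) local coordinates in [0, 1]
    # of the ray-surfel intersection points.
    c00, c10, c01, c11 = corner_colors
    u = u.unsqueeze(-1)  # (N, 1), broadcasts against the RGB channels
    v = v.unsqueeze(-1)
    top = (1 - u) * c00 + u * c10      # interpolate along u at v = 0
    bottom = (1 - u) * c01 + u * c11   # interpolate along u at v = 1
    return (1 - v) * top + v * bottom  # (N, 3) interpolated colors

The same scheme applies to opacity, with four scalar corner values instead of RGB triples.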

This simple idea greatly improves the expressive power of the representation. In practice, it means that one Gaussian can model much richer local appearance, capture sharper details, and reconstruct challenging regions more faithfully.

SVGS supports several ways to model these spatially varying functions, including bilinear interpolation, movable kernels, and even tiny neural networks. Across extensive experiments, all of them outperform the standard Gaussian baseline, with movable kernels achieving the strongest overall performance.
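One plausible reading of the movable-kernels variant (hypothetical names; the paper's exact kernel and weighting functions may differ) is a soft, distance-weighted mixture of K kernels whose 2D positions in the surfel's local plane are themselves learnable, hence "movable":

import torch

def movable_kernel_color(kernel_pos, kernel_colors, uv, beta=10.0):
    # kernel_pos: (K, 2) learnable kernel positions in the surfel's local plane
    # kernel_colors: (K, 3) learnable RGB value per kernel
    # uv: (N, 2) local query points; beta controls how sharply weights fall off
    d2 = ((uv[:, None, :] - kernel_pos[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    w = torch.softmax(-beta * d2, dim=-1)                          # (N, K) soft assignment weights
    return w @ kernel_colors                                       # (N, 3) mixed colors

In this sketch, the kernel positions receive gradients just like the colors, so the optimizer can slide them toward high-detail regions within the primitive.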

Built on top of 2DGS, SVGS inherits its strong geometric reconstruction ability while further improving rendering quality through a more powerful appearance model. The gains become especially clear when the number of Gaussians is limited, where stronger per-primitive expressiveness yields better quality from a more compact representation.

We believe the next step for Gaussian Splatting is not just adding more primitives, but making each primitive smarter. SVGS is a step in that direction.

📄 Paper: http://arxiv.org/abs/2411.18966

💻 Code: https://github.com/Xrvitd/SVGS
