Information
- Publication Type: Journal Paper (without talk)
- Workgroup(s)/Project(s): not specified
- Date: May 2025
- Article Number: e70032
- DOI: 10.1111/cgf.70032
- ISSN: 1467-8659
- Journal: Computer Graphics Forum
- Number: 2
- Pages: 12
- Volume: 44
- Publisher: WILEY
- Keywords: Rasterization, Ray tracing, Volumetric models
- CCS Concepts: • Computing methodologies → Image-based rendering
Abstract
Since its introduction, 3D Gaussian Splatting (3DGS) has become an important reference method for learning 3D representations of a captured scene, allowing real-time novel-view synthesis with high visual quality and fast training times. Neural Radiance Fields (NeRFs), which preceded 3DGS, are based on a principled ray-marching approach for volumetric rendering. In contrast, while sharing a similar image formation model with NeRF, 3DGS uses a hybrid rendering solution that builds on the strengths of volume rendering and primitive rasterization. A crucial benefit of 3DGS is its performance, achieved through a set of approximations, in many cases with respect to volumetric rendering theory. A naturally arising question is whether replacing these approximations with more principled volumetric rendering solutions can improve the quality of 3DGS. In this paper, we present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution. We demonstrate that, while more accurate volumetric rendering can help for low numbers of primitives, the power of efficient optimization and the large number of Gaussians allows 3DGS to outperform volumetric rendering despite its approximations.
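For context on the shared image formation model mentioned in the abstract, the sketch below shows the standard emission-absorption volume rendering equation and the discrete alpha-compositing approximation used by both NeRF and 3DGS; this is general background, not a formulation taken from this paper, and the symbols (o_i for the learned opacity, G_i for the projected 2D Gaussian) follow common 3DGS notation.

  C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(t)\,\mathbf{c}(t)\,\mathrm{d}t,
  \qquad T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(s)\,\mathrm{d}s\Big)

  % discrete approximation over ordered samples / primitives i = 1..N
  C(\mathbf{r}) \approx \sum_{i=1}^{N} T_i\,\alpha_i\,\mathbf{c}_i,
  \qquad T_i = \prod_{j=1}^{i-1} (1 - \alpha_j)

  % NeRF (ray marching):  \alpha_i = 1 - \exp(-\sigma_i\,\delta_i)
  % 3DGS (splatting):     \alpha_i = o_i\,G_i(\mathbf{x}), the opacity times the
  %                       projected 2D Gaussian evaluated at the pixel \mathbf{x}

The paper's analysis concerns how the splatting-style evaluation of \alpha_i and the associated sorting and blending approximations deviate from the ray-marched volumetric model.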
Additional Files and Images
No additional files or images.
BibTeX
@article{celarek-2025-d3g,
title = "Does 3D Gaussian Splatting Need Accurate Volumetric
Rendering?",
author = "Adam Celarek and Georgios Kopanas and George Drettakis and
          Michael Wimmer and Bernhard Kerbl",
year = "2025",
abstract = "Since its introduction, 3D Gaussian Splatting (3DGS) has
become an important reference method for learning 3D
representations of a captured scene, allowing real-time
novel-view synthesis with high visual quality and fast
training times. Neural Radiance Fields (NeRFs), which
preceded 3DGS, are based on a principled ray-marching
approach for volumetric rendering. In contrast, while
sharing a similar image formation model with NeRF, 3DGS uses
a hybrid rendering solution that builds on the strengths of
volume rendering and primitive rasterization. A crucial
benefit of 3DGS is its performance, achieved through a set
of approximations, in many cases with respect to volumetric
rendering theory. A naturally arising question is whether
replacing these approximations with more principled
volumetric rendering solutions can improve the quality of
3DGS. In this paper, we present an in-depth analysis of the
various approximations and assumptions used by the original
3DGS solution. We demonstrate that, while more accurate
volumetric rendering can help for low numbers of primitives,
the power of efficient optimization and the large number of
Gaussians allows 3DGS to outperform volumetric rendering
despite its approximations.",
month = may,
articleno = "e70032",
doi = "10.1111/cgf.70032",
issn = "1467-8659",
journal = "Computer Graphics Forum",
number = "2",
pages = "12",
volume = "44",
publisher = "WILEY",
keywords = "Rasterization, Ray tracing, Volumetric models, CCS Concepts:
            • Computing methodologies → Image-based rendering",
URL = "https://www.cg.tuwien.ac.at/research/publications/2025/celarek-2025-d3g/",
}