Progressive Real-Time Rendering of Unprocessed Point Clouds

Markus Schütz, Michael Wimmer
Poster shown at ACM SIGGRAPH 2018 (12–16 August 2018)

Information

Abstract

Rendering tens of millions of points in real time usually requires either high-end graphics cards, or the use of spatial acceleration structures. We introduce a method to progressively display as many points as the GPU memory can hold in real time by reprojecting what was visible and randomly adding additional points to uniformly converge towards the full result within a few frames.

Our method greatly reduces the number of points that have to be rendered each frame, and it converges quickly and in a visually pleasing way, which makes it suitable even for notebooks with low-end GPUs. The data structure consists of a randomly shuffled array of points that is incrementally generated on-the-fly while points are being loaded.
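The per-frame loop described above can be illustrated with a minimal sketch (not the authors' implementation; names and batch sizes are illustrative): previously visible points are carried over by reprojection, and each frame consumes the next batch of a pre-shuffled point array, so coverage grows uniformly toward the full point cloud within a few frames.

```python
import random

def progressive_coverage(points, batch_size, frames):
    """Simulate progressive convergence: each frame keeps the previously
    visible points (standing in for reprojection) and adds the next batch
    from a shuffled array, so coverage grows uniformly toward the full set."""
    shuffled = points[:]           # hypothetical pre-shuffled point array
    random.shuffle(shuffled)
    visible = []                   # points carried over between frames
    cursor = 0
    coverage = []
    for _ in range(frames):
        batch = shuffled[cursor:cursor + batch_size]
        cursor += batch_size
        visible.extend(batch)      # reprojected points + newly added batch
        coverage.append(len(visible) / len(points))
    return coverage

# With 1M points and 250k new points per frame, the full cloud is
# covered after four frames.
cov = progressive_coverage(list(range(1_000_000)), 250_000, 4)
```

Because the array is uniformly shuffled, each batch samples the whole cloud evenly, which is why the partial results already resemble the final image.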

Due to this, it can be used to directly view point clouds in common sequential formats such as LAS or LAZ while they are being loaded and without the need to generate spatial acceleration structures in advance, as long as the data fits into GPU memory.
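One simple way to build such a shuffled array incrementally while points stream in from a sequential file is an online ("inside-out") variant of the Fisher–Yates shuffle: each arriving point is placed at a uniformly random index, and the displaced point is moved to the end. This is a hedged sketch of the general technique, not the poster's actual loader code.

```python
import random

def append_shuffled(buffer, new_points):
    """Incrementally maintain a uniformly shuffled array as points arrive:
    insert each new point at a random index and move the displaced point
    to the end (online Fisher-Yates). After any prefix of the stream has
    been processed, `buffer` is a uniform random permutation of it."""
    for p in new_points:
        j = random.randrange(len(buffer) + 1)
        if j == len(buffer):
            buffer.append(p)       # new point lands at the end
        else:
            buffer.append(buffer[j])  # displaced point moves to the end
            buffer[j] = p             # new point takes the random slot
    return buffer
```

Since the buffer is uniformly shuffled after every chunk, rendering can begin immediately: any prefix of the array is already a uniform subsample of everything loaded so far.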


BibTeX

@misc{schuetz-2018-PPC,
  title =      "Progressive Real-Time Rendering of Unprocessed Point Clouds",
  author =     "Markus Sch\"{u}tz and Michael Wimmer",
  year =       "2018",
  abstract =   "Rendering tens of millions of points in real time usually
               requires either high-end graphics cards, or the use of
               spatial acceleration structures. We introduce a method to
               progressively display as many points as the GPU memory can
               hold in real time by reprojecting what was visible and
               randomly adding additional points to uniformly converge
               towards the full result within a few frames. Our method
               heavily limits the number of points that have to be rendered
               each frame and it converges quickly and in a visually
               pleasing way, which makes it suitable even for notebooks
               with low-end GPUs. The data structure consists of a randomly
               shuffled array of points that is incrementally generated
               on-the-fly while points are being loaded. Due to this, it
               can be used to directly view point clouds in common
               sequential formats such as LAS or LAZ while they are being
               loaded and without the need to generate spatial acceleration
               structures in advance, as long as the data fits into GPU
               memory.",
  month =      aug,
  doi =        "10.1145/3230744.3230816",
  event =      "ACM SIGGRAPH 2018",
  isbn =       "978-1-4503-5817-0/18/08",
  location =   "Vancouver, Canada",
  publisher =  "ACM",
  note =       "Poster presented at ACM SIGGRAPH 2018
               (2018-08-12--2018-08-16)",
  pages =      "Article 41--",
  keywords =   "point based rendering, point cloud, LIDAR",
  URL =        "https://www.cg.tuwien.ac.at/research/publications/2018/schuetz-2018-PPC/",
}