Point clouds are a quintessential 3D geometry representation, and often the first model obtained from reconstruction efforts such as LiDAR scans. IVILPC aims for fast, faithful, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will apply to unstructured point clouds and enable instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We also explore GPU-accelerated editing of point clouds and address common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is the efficient transfer of data and knowledge from sensor to user, with use cases ranging from image-based rendering and virtual reality (VR) technology to architecture, the geospatial industry, and cultural heritage.
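As a rough illustration of the kind of GPU software rasterization such approaches build on, the CUDA sketch below lets each thread project one point and write it with a single 64-bit atomicMin, packing depth into the high bits and color into the low bits so that the closest point per pixel survives. The kernel name, buffer layouts, and matrix convention are assumptions made for this sketch, not the project's actual implementation.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// One 64-bit word per pixel: depth bits in the high half, packed RGBA8 color
// in the low half. For positive depths, the float bit pattern orders the same
// way as the float value, so a 64-bit atomicMin keeps the closest point.
__global__ void rasterizePoints(const float3* positions,          // world-space points
                                const uint32_t* colors,           // packed RGBA8
                                int numPoints,
                                const float* viewProj,            // 4x4 row-major view-projection
                                int width, int height,
                                unsigned long long* framebuffer)  // width*height entries
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    float3 p = positions[i];

    // Project to clip space (row-major matrix times (p, 1)).
    float cx = viewProj[0]*p.x  + viewProj[1]*p.y  + viewProj[2]*p.z  + viewProj[3];
    float cy = viewProj[4]*p.x  + viewProj[5]*p.y  + viewProj[6]*p.z  + viewProj[7];
    float cz = viewProj[8]*p.x  + viewProj[9]*p.y  + viewProj[10]*p.z + viewProj[11];
    float cw = viewProj[12]*p.x + viewProj[13]*p.y + viewProj[14]*p.z + viewProj[15];
    if (cw <= 0.0f) return;                          // behind the camera

    // Perspective divide and viewport transform to integer pixel coordinates.
    int px = (int)((cx / cw * 0.5f + 0.5f) * width);
    int py = (int)((cy / cw * 0.5f + 0.5f) * height);
    if (px < 0 || px >= width || py < 0 || py >= height) return;

    // Pack depth above color and keep the minimum per pixel.
    unsigned long long depthBits = __float_as_uint(cz / cw);
    unsigned long long packed    = (depthBits << 32) | colors[i];
    atomicMin(&framebuffer[py * width + px], packed);
}
```

Before launch, the framebuffer would be cleared to 0xFFFFFFFFFFFFFFFF (farthest possible depth); a small resolve pass would then unpack the low 32 bits of each entry into a displayable color image.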

Funding

  • WWTF – Vienna Science and Technology Fund (Wiener Wissenschafts-, Forschungs- und Technologiefonds)

Team

News

Research Areas

  • In this area, we concentrate on algorithms that synthesize images to depict 3D models or scenes, often by simulating or approximating the physics of light.
  • This area uses concepts from applied mathematics and computer science to design efficient algorithms for the reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Example applications include collision detection, reconstruction, compression, occlusion-aware surface handling, and improved sampling conditions; a small GPU culling sketch after this list illustrates one such building block.
  • In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods to navigate and collaborate in VR, foveated rendering, techniques that exploit human perception, and the simulation of visual deficiencies.
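The culling and level-of-detail techniques targeted by the project typically operate on a spatial hierarchy built over the points. As a minimal sketch under assumed data layouts (the Node struct, plane encoding, and visibility buffer are hypothetical, not taken from the project), the CUDA kernel below tests each octree node's bounding box against the six view-frustum planes and flags nodes whose points can be skipped entirely:

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// Hypothetical octree node: an axis-aligned bounding box over a subset of points.
struct Node {
    float3 boxMin;
    float3 boxMax;
};

// frustumPlanes: six planes as (nx, ny, nz, d) with normals pointing into the
// frustum. A node is culled if its box lies entirely outside any single plane.
__global__ void cullNodes(const Node* nodes, int numNodes,
                          const float4* frustumPlanes,
                          uint32_t* visible)              // 1 = render, 0 = skip
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numNodes) return;

    Node n = nodes[i];
    bool inside = true;

    for (int p = 0; p < 6 && inside; ++p) {
        float4 pl = frustumPlanes[p];
        // Test the box corner that lies furthest along the plane normal;
        // if even that corner is behind the plane, the whole box is outside.
        float x = pl.x >= 0.0f ? n.boxMax.x : n.boxMin.x;
        float y = pl.y >= 0.0f ? n.boxMax.y : n.boxMin.y;
        float z = pl.z >= 0.0f ? n.boxMax.z : n.boxMin.z;
        if (pl.x * x + pl.y * y + pl.z * z + pl.w < 0.0f)
            inside = false;
    }
    visible[i] = inside ? 1u : 0u;
}
```

A level-of-detail pass could reuse the same node layout and additionally compare each node's projected screen-space size against a point budget to decide how deep to descend into the hierarchy.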

Publications

3 Publications found:
2024
Annalena Ulschmid, Bernhard Kerbl, Katharina Krösl, Michael Wimmer
Real-Time Editing of Path-Traced Scenes with Prioritized Re-Rendering
In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP and VISIGRAPP, pages 46-57. 2024.
Conference Paper
2023
[Image: rendered point cloud (left); points/voxels colored by the containing octree node (right)]
Markus Schütz, Lukas Herzberger, Michael Wimmer
SimLOD: Simultaneous LOD Generation and Rendering
Source Code: https://github.com/m-schuetz/SimLOD
Miscellaneous Publication
Philip Voglreiter, Bernhard Kerbl, Alexander Weinrauch, Joerg Hermann Mueller, Thomas Neff, Markus Steinberger, Dieter Schmalstieg
Trim Regions for Online Computation of From-Region Potentially Visible Sets
ACM Transactions on Graphics, 42(4):1-15, August 2023.
Journal Paper (without talk)

Details

Project Leader

Start Date

End Date