Point clouds are a quintessential 3D geometry representation format, and often the first model obtained from reconstructive efforts such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds for instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is efficient data and knowledge transfer from sensor to user, with use cases ranging from image-based rendering and virtual reality (VR) technology to architecture, the geospatial industry, and cultural heritage.
- WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds (Vienna Science and Technology Fund)
- In this area, we concentrate on algorithms that synthesize images to depict 3D models or scenes, often by simulating or approximating the physics of light.
- In this area, we use concepts from applied mathematics and computer science to design efficient algorithms for the reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Example applications include collision detection, reconstruction, compression, occlusion-aware surface handling, and improved sampling conditions.
- In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods for navigating and collaborating in VR, foveated rendering, exploiting human perception, and simulating visual deficiencies.
|Bib Reference|Publication Type|
|---|---|
|Markus Schütz, Lukas Herzberger, Michael Wimmer: *SimLOD: Simultaneous LOD Generation and Rendering*. Source code: https://github.com/m-schuetz/SimLOD||
|Philip Voglreiter, Bernhard Kerbl, Alexander Weinrauch, Joerg Hermann Mueller, Thomas Neff, Markus Steinberger, Dieter Schmalstieg: *Trim Regions for Online Computation of From-Region Potentially Visible Sets*. ACM Transactions on Graphics, 42(4):1–15, August 2023.|Journal Paper (without talk)|