- Duration: September 1, 2019 – August 31, 2022
- Project leader: Stefan Ohrhallinger
- Funding: FWF P32418-N31, €332,780.70
The combination of these two technologies – displays and sensors – promises applications in which users can be directly immersed into an experience of 3D data captured live. However, the captured data needs to be processed and structured before it can be displayed: for example, sensor noise needs to be removed and normals need to be estimated for local surface reconstruction. The challenge is that these operations involve large amounts of data, and to ensure a lag-free user experience they must run in real time, i.e., in just a few milliseconds per frame.
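As a concrete illustration of one of the per-frame operations mentioned above, the following sketch estimates per-point normals via PCA over the k nearest neighbors. It is not part of the proposal itself: the brute-force neighbor search is O(N²) and far from real-time; it only shows the kind of computation that the project aims to accelerate.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals as the eigenvector of the smallest
    covariance eigenvalue over each point's k nearest neighbors.
    Illustrative only: brute-force O(N^2) neighbor search."""
    normals = np.empty_like(points)
    for i in range(len(points)):
        # k nearest neighbors by Euclidean distance (the point included)
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # covariance of the centered neighborhood (3x3 matrix)
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)   # eigenvalues sorted ascending
        normals[i] = v[:, 0]         # direction of least variance
    return normals
```

Note that the estimated normals are only defined up to sign; consistent orientation would require an extra propagation step.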
In this proposal, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g., for collision detection. Classifying occlusions comes at no extra cost, which will allow quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, going beyond interaction methods that rely on physics-based metaphors such as walking or flying and lifting interaction with 3D environments to a "superhuman" level.
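To make the idea of a view-dependent structure with free occlusion classification more tangible, here is a minimal, hypothetical sketch (not the proposal's actual data structure): camera-space points are bucketed by the pixel they project to, and each bucket is sorted by depth, so index 0 is the visible sample and the remaining entries are the occluded layers behind it.

```python
def bucket_by_pixel(points, width, height, focal):
    """Bucket camera-space points (x, y, z) by the pixel they project to
    under a simple pinhole model. Hypothetical illustration: per pixel,
    entries are sorted near-to-far, so buckets[(u, v)][0] is the visible
    sample and the rest are the occluded layers."""
    buckets = {}
    for p in points:
        x, y, z = p
        if z <= 0:
            continue  # behind the camera
        u = int(focal * x / z + width / 2)
        v = int(focal * y / z + height / 2)
        if 0 <= u < width and 0 <= v < height:
            buckets.setdefault((u, v), []).append(p)
    for px in buckets:
        buckets[px].sort(key=lambda q: q[2])  # near to far
    return buckets
```

Since the depth ordering falls out of the projection pass, separating visible from occluded samples requires no additional work, which is the "no extra cost" property mentioned above.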
|Bib Reference|Publication Type|
|---|---|
|Philipp Erler, Paul Guerrero, Stefan Ohrhallinger, Michael Wimmer, Niloy Mitra: *Points2Surf: Learning Implicit Surfaces from Point Clouds*. todo, todo:todo-todo, August 2020. [Github Repo] [Arxiv Pre-Print]|Journal Paper with Conference Talk|
|Kurt Leimer, Andreas Winkler, Stefan Ohrhallinger, Przemyslaw Musialski: *Pose to Seat: Automated design of body-supporting surfaces*. Computer Aided Geometric Design, 79:1-1, April 2020. [image] [paper]|Journal Paper (without talk)|