Given an existing 3D scan or model of a scene, we want to point a smartphone at it and immediately see any changes as an augmented-reality overlay.
To do this, we detect which objects have been removed, added, or simply moved around. In many environments it is useful to observe such changes, e.g. for taking inventory of offices, warehouses, or urban spaces. Furthermore, this information can be used to train deep-learning algorithms for semantic understanding of such environments.
- Register the currently scanned view against an existing scan (point cloud or 3D model).
- (Taken) Build on a modified InfiniTAM (infinitam.org) implementation that uses a Kinect 3D scanner to compare an existing scan with a live scan in real time.
- (Taken) Store the sensor's view frusta (the observed space) in a flattened data structure to differentiate observed changes from unknown regions: https://www.cg.tuwien.ac.at/research/publications/2014/Radwan-2014-CDR/
- (Taken) Extend the comparison of octree nodes between the old and the new model, based on a permitted distance (points + uncertainty ellipsoids), to classify geometry as changed or unchanged.
- (Taken) Evaluate the robustness of the algorithm using a BlenSor virtual scan of a ground-truth 3D model.
- Classify detected changes with a CNN: which objects are affected, and whether they were added, removed, or moved.
- Implement a fast parallel version in CUDA and compare its run time to the state of the art.
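To illustrate the core idea behind the octree-based comparison step, here is a minimal C++ sketch. It replaces the octree and the uncertainty-ellipsoid test with a single-level voxel hash and a 27-neighbourhood lookup (both simplifying assumptions, not the actual project implementation): a point of the live scan is flagged as changed if no reference point occupies the same or an adjacent voxel.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point { float x, y, z; };

// Hash a point into a voxel index at a fixed resolution. This stands in
// for one octree level; a real implementation would walk octree nodes.
static uint64_t voxelKey(const Point& p, float voxel) {
    auto q = [voxel](float v) {
        return static_cast<uint64_t>(
            static_cast<int64_t>(std::floor(v / voxel)) & 0x1FFFFF);
    };
    return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
}

// Classify each live-scan point as changed (true) if no reference point
// occupies the same or a neighbouring voxel.
std::vector<bool> classifyChanges(const std::vector<Point>& reference,
                                  const std::vector<Point>& live,
                                  float voxel) {
    std::unordered_set<uint64_t> occupied;
    for (const auto& p : reference) occupied.insert(voxelKey(p, voxel));

    std::vector<bool> changed;
    changed.reserve(live.size());
    for (const auto& p : live) {
        bool hit = false;
        // Check the 27-neighbourhood to tolerate points near voxel
        // borders; a crude substitute for the permitted-distance test
        // with uncertainty ellipsoids described above.
        for (int dx = -1; dx <= 1 && !hit; ++dx)
            for (int dy = -1; dy <= 1 && !hit; ++dy)
                for (int dz = -1; dz <= 1 && !hit; ++dz) {
                    Point s{p.x + dx * voxel, p.y + dy * voxel,
                            p.z + dz * voxel};
                    hit = occupied.count(voxelKey(s, voxel)) > 0;
                }
        changed.push_back(!hit);
    }
    return changed;
}
```

The per-point loop is embarrassingly parallel, which is what makes the CUDA task above attractive: each live point can be classified by an independent thread against a read-only hash of the reference scan.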
C++ programming skills and an interest in geometry processing are required. Experience with 3D data structures such as octrees and point clouds, or with CUDA, will speed up the development tasks.
A bonus of €500 / €1,000 is offered if the project is completed to satisfaction within an agreed time frame of 6 / 12 months (PR/BA or DA, respectively).