When we scan an interior room or 3D scene, e.g. with the InfiniTAM system (http://www.robots.ox.ac.uk/~victor/infinitam/), we want to visit all holes and niches so that the entire scene is captured. Tools like Google Photo Sphere indicate where the user should point the camera next, but they are limited to pictures on a sphere. By analyzing the scan boundaries and silhouettes of the acquired RGB-D images, we can compute the missing areas and generate flying instructions for a drone, automating the scanning process.
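As a first step toward locating missing areas, one could mark the scan-boundary pixels of a single depth frame: valid pixels that border invalid (zero-depth) measurements. The following is a minimal sketch under our own assumptions (row-major float depth buffer, 0 marking unmeasured pixels); the function name and data layout are illustrative, not part of InfiniTAM's API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A depth value of 0 marks a pixel the sensor could not measure (a hole).
// A pixel is a scan-boundary pixel if it has valid depth but at least one
// 4-neighbour is invalid. Out-of-image neighbours count as invalid, so
// pixels on the image border (the view frustum edge) are also marked.
std::vector<std::uint8_t> scanBoundaryMask(const std::vector<float>& depth,
                                           int width, int height) {
    std::vector<std::uint8_t> mask(depth.size(), 0);
    auto valid = [&](int x, int y) {
        return x >= 0 && x < width && y >= 0 && y < height &&
               depth[static_cast<std::size_t>(y) * width + x] > 0.0f;
    };
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (valid(x, y) &&
                (!valid(x - 1, y) || !valid(x + 1, y) ||
                 !valid(x, y - 1) || !valid(x, y + 1)))
                mask[static_cast<std::size_t>(y) * width + x] = 1;
    return mask;
}
```

Clusters of such boundary pixels, back-projected into 3D with the camera pose, are candidate targets for the next view.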
- Connect a depth sensor to a Raspberry Pi mounted on a drone and stream the RGB and depth data to a server
- Load the depth images into our modified InfiniTAM engine to generate Next-Best-View instructions
- Determine the locations of missing data from the scan boundaries and silhouettes
- Generate flying paths for the drone that visit the missing locations and cover the full scene geometry
- Generate the next-best poses for a human operator, similar to the indicators shown when taking a panorama or hemisphere photo
- Compute the percentage of overlap with an existing scan
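The overlap computation in the last task could be sketched as a set intersection over voxel coordinates: discretize the surface points seen by the new frame into voxels and count how many already exist in the map. The key packing and function names below are illustrative assumptions, not InfiniTAM's API:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>

// Pack integer voxel coordinates into one 64-bit key (21 bits per axis,
// two's-complement wrapped). Hypothetical helper for this sketch.
std::uint64_t voxelKey(int x, int y, int z) {
    auto enc = [](int v) { return static_cast<std::uint64_t>(v) & 0x1FFFFF; };
    return enc(x) | (enc(y) << 21) | (enc(z) << 42);
}

// Percentage (0..100) of the new frame's voxels already present in the map.
// A low value means the frame mostly sees unscanned geometry.
double overlapPercent(const std::unordered_set<std::uint64_t>& mapVoxels,
                      const std::unordered_set<std::uint64_t>& frameVoxels) {
    if (frameVoxels.empty()) return 0.0;
    std::size_t shared = 0;
    for (auto k : frameVoxels)
        if (mapVoxels.count(k)) ++shared;
    return 100.0 * static_cast<double>(shared) /
           static_cast<double>(frameVoxels.size());
}
```

A threshold on this percentage can decide whether a candidate view adds enough new geometry to be worth flying to.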
- Knowledge of C++, 3D data structures (point clouds, voxels), and client-server communication
- Some hands-on technical skills, e.g. for connecting a Raspberry Pi
- Good command of English (source code comments and the final report must be written in English)
A bonus of €500 / €1,000 is offered if the project is completed to satisfaction within an agreed time frame of 6 / 12 months (PR/BA or DA).