For visualizing changes, static images are often used, from which regions are cut out and renderings of new objects are inserted. However, this does not permit exploring the changed scene from other viewpoints. Environments such as street blocks or interior rooms can nowadays easily be acquired as point clouds with commodity devices. This makes it possible to detect and remove objects, and to insert new objects in their place.


These 3D scenes can be displayed in a point cloud viewer, such as the one from the Point Cloud Library (PCL) or Potree. Several deep-learning methods that detect and classify objects in point clouds are available as open source, along with suitable data sets. These can be used to segment the scene and to remove an object by selecting its point cluster. New objects can then be chosen as point clouds, e.g. from the ground truth of the respective data set, and inserted into the scene. For the ground, planar regions can be removed by marking rectangles and filled in using image inpainting methods.
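As a rough illustration of the removal-and-insertion step, a minimal NumPy sketch is shown below. The per-point instance labels are assumed to come from a segmentation network; all names (`remove_object`, `insert_object`) and the toy data are illustrative, not an existing API.

```python
import numpy as np

# Hypothetical input: an N x 3 point array and per-point instance labels,
# e.g. produced by an open-source point cloud segmentation network.
points = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0],   # ground
    [1.0, 1.0, 0.5], [1.1, 1.0, 0.6],   # object instance 1 (e.g. a car)
    [3.0, 2.0, 0.4],                    # object instance 2
])
labels = np.array([0, 0, 1, 1, 2])      # 0 = ground, >0 = object instances

def remove_object(points, labels, instance_id):
    """Drop all points belonging to the selected instance cluster."""
    keep = labels != instance_id
    return points[keep], labels[keep]

def insert_object(points, labels, obj_points, position, new_id):
    """Place a replacement object (a point cloud, e.g. from a data set's
    ground truth) centred at the given position."""
    moved = obj_points - obj_points.mean(axis=0) + position
    new_labels = np.full(len(moved), new_id)
    return np.vstack([points, moved]), np.concatenate([labels, new_labels])

# Remember where the old object stood, then swap it out.
old_centroid = points[labels == 1].mean(axis=0)
scene, scene_labels = remove_object(points, labels, instance_id=1)
replacement = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.1]])  # toy stand-in
scene, scene_labels = insert_object(scene, scene_labels, replacement,
                                    old_centroid, new_id=9)
```

In practice the labels would come from the chosen segmentation method, and the replacement would be an object point cloud browsed from the data set's ground truth.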

Tasks (depending on PR/BA/DA and number of students)

- Classify a point cloud (from an existing data set) and visualize the detected objects in the viewer for selection and removal

- GUI for browsing the ground-truth point clouds by class and positioning a chosen object in the scene

- Adapt image inpainting to flat point cloud regions (ground surfaces such as floors, sidewalks, green spaces)
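For the inpainting task, one simple approach is to rasterize the flat ground points into a 2D height grid, fill the hole left by a removed rectangle with an image-style method, and convert the filled cells back to 3D points. The sketch below uses an iterative neighbour-averaging fill as a crude stand-in for a real inpainting algorithm; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def inpaint_ground(points, cell=0.5, iterations=8):
    """Rasterize ground points into a height grid, fill empty cells from
    their known 4-neighbours, and return points for all filled cells."""
    xy = points[:, :2]
    lo = xy.min(axis=0)
    idx = np.floor((xy - lo) / cell).astype(int)
    w, h = idx.max(axis=0) + 1
    grid = np.full((w, h), np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        grid[i, j] = z          # keep last height per cell (good enough here)
    for _ in range(iterations):
        known = ~np.isnan(grid)
        padded = np.pad(grid, 1, constant_values=np.nan)
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        count = np.sum(~np.isnan(neigh), axis=0)
        fill = np.where(count > 0,
                        np.nansum(neigh, axis=0) / np.maximum(count, 1),
                        np.nan)
        grid = np.where(known, grid, fill)  # only empty cells get filled
    # Convert every filled cell back to a 3D point at the cell centre.
    ii, jj = np.nonzero(~np.isnan(grid))
    xs = lo[0] + (ii + 0.5) * cell
    ys = lo[1] + (jj + 0.5) * cell
    return np.column_stack([xs, ys, grid[ii, jj]])
```

A proper solution would substitute a stronger image inpainting method (e.g. diffusion- or patch-based) for the neighbour-averaging step, and could also inpaint per-point colour the same way.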


Requirements

  • Knowledge of English (source code comments and the final report should be written in English)
  • Knowledge of C++, Python, or deep learning is advantageous, but not necessary


The project should be implemented in a platform-independent way (Linux, Windows).

A bonus of €500/€1000 is offered if the project is completed to satisfaction within an agreed time frame of 6/12 months (PR/BA or DA).


For more information please contact Stefan Ohrhallinger.



Bachelor Thesis
Student Project
Master Thesis