The sensors we carry every day will design future digital worlds for our virtual selves.

Have you ever used Waze, the famous crowdsourced navigation app? When driving down a road that the app does not recognise, you are teaching it that a new street must be added. In a similar way, the sensors we carry every day in our smartphones and fitness gadgets could not just map the world but also create a virtual copy of it, more quickly and cheaply than any technique we have today.

This new paradigm of effortlessly creating virtual worlds is enabled by the work of the European project Harvest4D. The project's concept of ‘incidental data capture’ has led to a new method for acquiring 3D images that can produce a 4D model: a 3D reconstruction plus the detection of changes occurring over time. It thus becomes possible to navigate these 3D models as well as visualise their evolution.
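A 4D model in this sense is a sequence of 3D snapshots compared over time. As a minimal illustration of the change-detection idea (not the project's actual algorithm), the sketch below flags points in a newer scan that have no close counterpart in an earlier scan; the distance threshold and the use of SciPy's cKDTree are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def changed_points(scan_old, scan_new, threshold=0.05):
    """Flag points in scan_new with no nearby counterpart in scan_old.

    scan_old, scan_new: (N, 3) arrays of 3D points from two epochs.
    threshold: distance (in scene units) beyond which a point counts
    as 'changed'. Both the method and the value are illustrative.
    """
    tree = cKDTree(scan_old)                 # index the earlier epoch
    dist, _ = tree.query(scan_new, k=1)      # nearest neighbour per point
    return scan_new[dist > threshold]        # points with no old counterpart

# Toy usage: a box of points, with a new cluster appearing in epoch 2.
rng = np.random.default_rng(0)
epoch1 = rng.uniform(0, 1, size=(1000, 3))
epoch2 = np.vstack([epoch1, rng.uniform(2.0, 2.1, size=(50, 3))])
print(len(changed_points(epoch1, epoch2)))   # ~50 newly appeared points
```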

This proof-of-concept project has completed two main prototype implementations to validate its approach, one for data processing and one for visualisation. The Multi-View Environment (MVE) prototype reconstructs undistorted images and depth maps from incidentally acquired photographs. MVE's graphical user interface, the Ultimate Multi-View Environment (UMVE), then visualises the different kinds of data sets in a user-friendly manner.
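MVE itself is a pipeline of C++ tools, not reproduced here. As a language-neutral sketch of the core step that turns a reconstructed depth map into 3D geometry, the function below back-projects per-pixel depths through a pinhole camera model; the image size and intrinsic parameters are placeholder values, not MVE's.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud.

    depth: (H, W) array of per-pixel depths (0 = no estimate).
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    Returns an (N, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                       # skip pixels without a depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx            # inverse pinhole projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z])

# Toy usage with a flat synthetic depth map and made-up intrinsics.
depth = np.full((480, 640), 2.0)
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3)
```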

The project team, coordinated by Technische Universität Wien in Vienna, Austria, has developed several algorithms that demonstrate the versatility of this approach. For instance, one algorithm performs material classification in incidentally captured data sets, exploiting the richness of appearance variation found in real-world data under natural illumination. Another aligns sequences of hundreds of range maps (images that record, for each pixel, the distance from the sensor to the scene) in a few minutes and with minimal error.
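The project's own alignment method is not reproduced here. As a minimal sketch of the underlying idea of rigid registration, the function below computes the best-fit rotation and translation between two corresponded point sets via the standard Kabsch/Procrustes solution, the usual building block of ICP-style range-map alignment.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t with R @ p + t ≈ q.

    src, dst: (N, 3) arrays of corresponding points p, q (e.g. matched
    samples from two overlapping range maps). Kabsch/Procrustes solution.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: recover a known rotation about the z-axis plus a shift.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(200, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 0.5 -0.2  1. ]
```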

Read the article on FETFX
