Vision is one of the most potent sources of information about the world we interact with. That is especially true for precise actions such as reaching for, touching, and grasping an object. Visual cues precede most of our actions and can be used for their anticipation by automated systems such as robots. In our setup, a mobile robot presents a haptic object to users when they want to touch it. For a safe and realistic experience, the intention to touch should be predicted as early and as accurately as possible.
You are expected to rely on relevant, well-cited sources, such as books and international research publications, to make a well-argued suggestion of a practical solution for gaze-based action prediction in our setup.
Different sub-topics are available:
- saliency evaluation of the VR environment: predicting which objects in the environment will be most interesting to the user,
- gaze-to-action prediction: exploring how gaze behavior relates to the actions performed,
- gaze manipulation: steering the user's attention to subtly provoke the desired behavior.
The chosen algorithm(s) will then be implemented for testing in Unity 3D (C#).
- Knowledge of the English language (source code comments and the final report should be in English)
- Familiarity with Unity3D is advantageous
- Programming languages: C#, C++
- Game engine: Unity3D.