Abstract

In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods for navigating and collaborating in VR, foveated rendering, techniques that exploit human perception, and the simulation of visual deficiencies.

Publications

23 Publications found:
2023
Marina Medeiros, Eduard Doujak, Franz Josef Haller, Constantin Königswieser, Johanna Schmidt
Going with the flow: using immersive analytics to support lifetime predictions of hydropower turbines
In Proceedings SUI 2023 ACM : Symposium on Spatial User Interaction. October 2023.
[paper]
Conference Paper
2022
Jozef Hladky, Michael Stengel, Nicholas Vining, Bernhard Kerbl, Hans-Peter Seidel, Markus Steinberger
QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction
ACM Transactions on Graphics, 41(6), December 2022.
Journal Paper with Conference Talk
Iana Podkosova, Julia Reisinger, Hannes Kaufmann, Iva Kovacic
BIMFlexi-VR: A Virtual Reality Framework for Early-Stage Collaboration in Flexible Industrial Building Design
Frontiers in Virtual Reality, 3:1-13, February 2022.
Journal Paper (without talk)
2021
Ruwayda Alharbi, Ondrej Strnad, Laura R. Luidolt, Manuela Waldner, David Kouřil, Ciril Bohak, Tobias Klein, Eduard Gröller, Ivan Viola
Nanotilus: Generator of Immersive Guided-Tours in Crowded 3D Environments
IEEE Transactions on Visualization and Computer Graphics:1-16, December 2021. [Image] [Paper]
Journal Paper (without talk)
Johannes Sorger, Alessio Arleo, Peter Kán, Wolfgang Knecht, Manuela Waldner
Egocentric Network Exploration for Immersive Analytics
Computer Graphics Forum, 40:241-252, October 2021. [paper] [video] [online egocentric network]
Journal Paper with Conference Talk
Soroosh Mortezapoor, Khrystyna Vasylevska
Safety and Security Challenges for Collaborative Robotics in VR
In Proceedings of the 1st International Workshop on Security for XR and XR for Security (VR4Sec) at Symposium On Usable Privacy and Security (SOUPS) 2021, pages 1-4. August 2021.
Conference Paper
Dennis Reimer, Iana Podkosova, Daniel Scherzer, Hannes Kaufmann
Colocation for SLAM-Tracked VR Headsets with Hand Tracking
Computers, 10(5):1-17, April 2021.
Journal Paper (without talk)
Sebastian Pirch, Felix Müller, Eugenia Iofinova, Julia Pazmandi, Christiane Hütter, Martin Chiettini, Celine Sin, Kaan Boztug, Iana Podkosova, Hannes Kaufmann, Jörg Menche
The VRNetzer platform enables interactive network analysis in Virtual Reality
Nature Communications, 12(2432):1-14, April 2021.
Journal Paper (without talk)
Emanuel Vonach, Christoph Schindler, Hannes Kaufmann
StARboard & TrACTOr: Actuated Tangibles in an Educational TAR Application
Multimodal Technologies and Interaction, 5(2):1-22, February 2021.
Journal Paper (without talk)
Peter Kán, Andrija Kurtic, Mohamed Radwan, Jorge M. Loáiciga Rodríguez
Automatic Interior Design in Augmented Reality Based on Hierarchical Tree of Procedural Rules
Electronics, 10(3):1-17, 2021.
Journal Paper (without talk)
2020
Laura R. Luidolt, Michael Wimmer, Katharina Krösl
Gaze-Dependent Simulation of Light Perception in Virtual Reality
IEEE Transactions on Visualization and Computer Graphics, Volume 26, Issue 12:3557-3567, December 2020. [paper]
Journal Paper with Conference Talk
Katharina Krösl
Simulating Vision Impairments in Virtual and Augmented Reality
Supervisor: Michael Wimmer
Duration: April 2016 — October 2020
[thesis]
PhD-Thesis
Katharina Krösl, Carmine Elvezio, Laura R. Luidolt, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer
CatARact: Simulating Cataracts in Augmented Reality
In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 1-10. November 2020.
[Paper]
Conference Paper
Anna Sebernegg, Peter Kán, Hannes Kaufmann
Motion Similarity Modeling - A State of the Art Report
TR-193-02-2020-5, August 2020 [arXiv]
Technical Report
Mohammadreza Mirzaei, Peter Kán, Hannes Kaufmann
EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People
IEEE Transactions on Visualization and Computer Graphics, 26(05):2084-2093, May 2020. [TVCG]
Journal Paper with Conference Talk
Katharina Krösl, Carmine Elvezio, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer
XREye: Simulating Visual Impairments in Eye-Tracked XR
In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). March 2020.
[extended abstract] [image] [poster] [video]
Other Reviewed Publication
Dennis Reimer, Eike Langbehn, Hannes Kaufmann, Daniel Scherzer
The Influence of Full-Body Representation on Translation and Curvature Gain
In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pages 154-159. March 2020.
Conference Paper
Khrystyna Vasylevska, Bálint Istvan Kovács, Hannes Kaufmann
VR Bridges: An Approach to Uneven Surfaces Simulation in VR
In Proceedings of IEEE Conference on Virtual Reality 2020, pages 388-397. March 2020.
Conference Paper
2019
Katharina Krösl, Harald Steinlechner, Johanna Donabauer, Daniel Cornel, Jürgen Waser
Master of Disaster: Virtual-Reality Response Training in Disaster Management
Poster shown at VRCAI 2019 (14-16 November 2019)
[extended abstract] [poster] [video]
Poster
Jindřich Adolf, Peter Kán, Benjamin Outram, Hannes Kaufmann, Jaromír Doležal, Lenka Lhotská
Juggling in VR: Advantages of Immersive Virtual Reality in Juggling Learning
In 25th ACM Symposium on Virtual Reality Software and Technology, pages 1-5. November 2019.
[ACM]
Conference Paper
Johannes Göllner, Andreas Peer, Christian Meurers, Gernot Wurzer, Christian Schönauer, Hannes Kaufmann
Virtual Reality CBRN Defence
In Meeting Proceedings of the Simulation and Modelling Group Symposium 171, pages 1-25. October 2019.
Conference Paper
Peter Kán, Hannes Kaufmann
DeepLight: Light Source Estimation for Augmented Reality using Deep Learning
The Visual Computer, 35(6):873-883, June 2019.
Journal Paper with Conference Talk
2018
Katharina Krösl, Dominik Bauer, Michael Schwärzler, Henry Fuchs, Michael Wimmer, Georg Suter
A VR-based user study on the effects of vision impairments on recognition distances of escape-route signs in buildings
The Visual Computer, 34(6-8):911-923, April 2018. [Paper]
Journal Paper with Conference Talk

Funded Projects

Point clouds are a quintessential 3D geometry representation format, and often the first model obtained from reconstructive efforts, such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds for instant rendering of billions of points. Our research targets minimally-invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is efficient knowledge transfer from sensor to user, with use cases ranging from image-based rendering and virtual reality (VR) technology to architecture, the geospatial industry, and cultural heritage.
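The culling methods mentioned above can be illustrated with a minimal sketch: brute-force view-frustum culling of a point cloud in clip space. This is only an illustration of the idea (the project itself targets GPU implementations); the function and variable names are hypothetical, and NumPy stands in for GPU code.

```python
import numpy as np

def cull_points(points, view_proj):
    """Keep only the points inside the view frustum.

    points:    (N, 3) world-space positions
    view_proj: (4, 4) combined view-projection matrix
    Returns the surviving points.
    """
    n = points.shape[0]
    # Homogeneous coordinates: append w = 1 to every point.
    homo = np.hstack([points, np.ones((n, 1))])
    clip = homo @ view_proj.T              # transform to clip space
    w = clip[:, 3:4]
    # A point is inside the frustum if w > 0 and -w <= x, y, z <= w.
    inside = w[:, 0] > 0
    inside &= np.all(np.abs(clip[:, :3]) <= w, axis=1)
    return points[inside]

# Toy usage: with an identity "projection", only points in the
# unit cube survive.
pts = np.array([[0.0, 0.0, 0.5],   # inside
                [2.0, 0.0, 0.5],   # outside in x
                [0.0, 0.0, -3.0]]) # outside in z
visible = cull_points(pts, np.eye(4))
```

A real pipeline would combine such a visibility test with hierarchical level-of-detail structures so that billions of points never reach the per-point stage.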


no funding
Contact: Eduard Gröller
Superhumans - Walking Through Walls (1 September 2019 - 31 August 2023)

In recent years, virtual and augmented reality have gained widespread attention because of newly developed head-mounted displays. For the first time, mass-market penetration seems plausible. Also, range sensors are on the verge of being integrated into smartphones, evidenced by prototypes such as the Google Tango device, making ubiquitous on-line acquisition of 3D data a possibility. The combination of these two technologies – displays and sensors – promises applications where users can directly be immersed into an experience of 3D data that was just captured live. However, the captured data needs to be processed and structured before being displayed. For example, sensor noise needs to be removed, normals need to be estimated for local surface reconstruction, etc. The challenge is that these operations involve a large amount of data, and in order to ensure a lag-free user experience, they need to be performed in real time, i.e., in just a few milliseconds per frame.

In this proposal, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g. for collision detection. Classifying occlusions comes at no extra cost, which will allow quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, overcoming interaction methods that rely on physics-based metaphors like walking or flying, lifting interaction with 3D environments to a “superhuman” level.
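The normal-estimation step mentioned above is commonly done by a local principal-component analysis of each point's neighbourhood: the eigenvector of the neighbourhood covariance with the smallest eigenvalue approximates the surface normal. A minimal, deliberately non-real-time sketch of that standard technique (all names hypothetical, not the project's actual data structure):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate one unit normal per point via PCA over its k nearest
    neighbours. points: (N, 3). Returns (N, 3) unit normals."""
    n = points.shape[0]
    normals = np.empty_like(points)
    # Brute-force pairwise squared distances; a real-time system would
    # use a spatial acceleration structure instead.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)                    # 3x3 covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]              # least-variance direction
    return normals

# Sanity check: points sampled on the z = 0 plane should all get
# normals of ±(0, 0, 1).
rng = np.random.default_rng(0)
plane = np.column_stack([rng.random(50), rng.random(50), np.zeros(50)])
nrm = estimate_normals(plane)
```

The sign of each normal is arbitrary here; consistent orientation (e.g. towards the sensor) is a separate step.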


FWF P32418-N31 - 332.780,70 €
Virtual Reality Tennis Trainer (1 February 2020 - 31 October 2022)

This research project focuses on 3D motion analysis and motion-learning methodologies. We design novel methods for the automated analysis of human motion using machine learning. These methods are applicable both in real training scenarios and in VR training setups. The results of our motion analysis can help players better understand the errors in their motion and improve their performance. Our motion analysis methods are based on professional knowledge from tennis experts at our partner company VR Motion Learning GmbH & Co KG. We use numerous motion features, including rotations, positions, and velocities, to analyze the motion.
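As an illustration of such motion features, the sketch below derives per-frame velocities from a joint trajectory and compares two feature sequences with a plain cosine similarity. This is a simplified stand-in for the project's learned analysis models, and every name in it is hypothetical.

```python
import numpy as np

def velocity_features(positions, dt):
    """Finite-difference velocities for one joint trajectory.
    positions: (T, 3) joint positions over T frames; dt: frame time in s.
    Returns (T-1, 3) per-frame velocity vectors."""
    return np.diff(positions, axis=0) / dt

def motion_similarity(a, b):
    """Mean cosine similarity between two equally long feature
    sequences -- a crude stand-in for a learned similarity model."""
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9
    return float(np.mean(num / den))

# Toy usage: a straight-line "swing" compared with itself is
# (numerically) maximally similar.
t = np.linspace(0.0, 1.0, 30)[:, None]
traj = t * np.array([1.0, 0.5, 0.0])    # 30 frames of one joint
v = velocity_features(traj, dt=1.0 / 30.0)
score = motion_similarity(v, v)
```

In practice, sequences of different lengths would first need temporal alignment (e.g. dynamic time warping) before such a frame-wise comparison is meaningful.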



Our goal is to use virtual reality as a setting for learning correct tennis technique that transfers to the real game. For this purpose, we plan to combine our motion analysis with 3D error-visualization techniques and with novel motion-learning methodologies. These methodologies may lead to correct sport technique, improved performance, and the prevention of injuries.


no funding
Contact: Hannes Kaufmann

Industrial building design is a process in which the successful implementation of each project depends on collaborative decision making by multiple domain specialists: architects, engineers, production-system planners, and building owners. Traditionally, such multi-stakeholder workflows are subject to conflicting goals, and frequent changes in production processes inevitably result in lengthy planning periods. This design process needs novel approaches to decision-making support that combine the ability to communicate design intent with real-time feedback on the impact of design decisions.

The BIMFlexi project aims to accelerate BIM design processes for industrial buildings by using parametric modelling, multi-parameter optimization, and collaborative VR-based exploration and modification of models at early stages of building planning.


FFG
Contact: Hannes Kaufmann
The goal of this project is the development of a systematic evaluation methodology, the evaluation of AR controllers for industrial tasks using the developed methodology, and the publication of guidelines for developers of AR controllers, user-interface designers, AR developers in general, and the AR research community.
FFG
Contact: Hannes Kaufmann
U-CREATE is initiated by Alterface, Imagination, and ion2s, three SMEs primarily active in the field of edutainment, i.e. the joining of education and entertainment (customers are museums, cultural institutions, entertainment parks, etc.). They share a common and important problem: efficient content creation. Be it interactive setups, Mixed Reality experiences, or location-based services, all these technologies are worthless without content: content always has to be tackled or delivered at the same time as the technology. However, content creation is a long process that can turn into a nightmare in large-scale projects. The solution can be summed up in two words: authoring tool. A powerful, graphical, beyond-the-state-of-the-art authoring tool is needed that allows one to create elaborate content in a fast and easy way. Due to the highly innovative products commercialized by the SMEs, no such tool exists to date; it will be created by this project.

The authoring tool will increase competitiveness because it significantly shortens production time (50% reduction of integration time) and effort (a creation process affordable to non-specialists) for content development. It will also enable other people to create content for the intended systems: the SMEs can then sell more software while subcontracting or licensing the content production. It will also strengthen the European position in an authoring market dominated by US companies.

SMEs alone cannot afford such a task, in terms of expertise but also in terms of resources. This project gathers the highly specialized expertise of ZGDV, TUW, and DIST, which allows for the delivery of a prototype authoring tool. HadroNet will be the end user, serving the consortium and helping it gather a larger community of end users in order to assess requirements, validate results, and construct the basis of a commercial distribution system. In doing so, the project will lay the foundation of a longer-term collaboration amongst all partners.
EU IST - 6th Framework Program
Mobile Augmented Reality Museum Guide (1 February 2005 - 31 January 2007)

This project aims at the creation and real-world deployment of a handheld computer guide for museum visitors, based on Augmented Reality.


FWF