Abstract
In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods for navigating and collaborating in VR, foveated rendering, exploiting human perception, and simulating visual deficiencies.
Publications
2023
Marina Medeiros, Eduard Doujak, Franz Josef Haller, Constantin Königswieser, Johanna Schmidt. Going with the flow: using immersive analytics to support lifetime predictions of hydropower turbines. In Proceedings of SUI 2023: ACM Symposium on Spatial User Interaction, October 2023. [paper] (Conference Paper)
2022
Jozef Hladky, Michael Stengel, Nicholas Vining, Bernhard Kerbl, Hans-Peter Seidel, Markus Steinberger. QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction. ACM Transactions on Graphics, 41(6), December 2022. (Journal Paper with Conference Talk)
Iana Podkosova, Julia Reisinger, Hannes Kaufmann, Iva Kovacic. BIMFlexi-VR: A Virtual Reality Framework for Early-Stage Collaboration in Flexible Industrial Building Design. Frontiers in Virtual Reality, 3:1-13, February 2022. (Journal Paper without talk)
2021
Ruwayda Alharbi, Ondrej Strnad, Laura R. Luidolt, Manuela Waldner, David Kouřil, Ciril Bohak, Tobias Klein, Eduard Gröller, Ivan Viola. Nanotilus: Generator of Immersive Guided-Tours in Crowded 3D Environments. IEEE Transactions on Visualization and Computer Graphics:1-16, December 2021. [image] [paper] (Journal Paper without talk)
Johannes Sorger, Alessio Arleo, Peter Kán, Wolfgang Knecht, Manuela Waldner. Egocentric Network Exploration for Immersive Analytics. Computer Graphics Forum, 40:241-252, October 2021. [paper] [video] [online egocentric network] (Journal Paper with Conference Talk)
Soroosh Mortezapoor, Khrystyna Vasylevska. Safety and Security Challenges for Collaborative Robotics in VR. In Proceedings of the 1st International Workshop on Security for XR and XR for Security (VR4Sec) at the Symposium On Usable Privacy and Security (SOUPS) 2021, pages 1-4, August 2021. (Conference Paper)
Dennis Reimer, Iana Podkosova, Daniel Scherzer, Hannes Kaufmann. Colocation for SLAM-Tracked VR Headsets with Hand Tracking. Computers, 10(5):1-17, April 2021. (Journal Paper without talk)
Sebastian Pirch, Felix Müller, Eugenia Iofinova, Julia Pazmandi, Christiane Hütter, Martin Chiettini, Celine Sin, Kaan Boztug, Iana Podkosova, Hannes Kaufmann, Jörg Menche. The VRNetzer platform enables interactive network analysis in Virtual Reality. Nature Communications, 12(2432):1-14, April 2021. (Journal Paper without talk)
Emanuel Vonach, Christoph Schindler, Hannes Kaufmann. StARboard & TrACTOr: Actuated Tangibles in an Educational TAR Application. Multimodal Technologies and Interaction, 5(2):1-22, February 2021. (Journal Paper without talk)
Peter Kán, Andrija Kurtic, Mohamed Radwan, Jorge M. Loáiciga Rodríguez. Automatic Interior Design in Augmented Reality Based on Hierarchical Tree of Procedural Rules. Electronics, 10(3):1-17, 2021. (Journal Paper without talk)
2020
Laura R. Luidolt, Michael Wimmer, Katharina Krösl. Gaze-Dependent Simulation of Light Perception in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics, 26(12):3557-3567, December 2020. [paper] (Journal Paper with Conference Talk)
Katharina Krösl. Simulating Vision Impairments in Virtual and Augmented Reality. PhD thesis, supervised by Michael Wimmer, April 2016 to October 2020. [thesis] (PhD Thesis)
Katharina Krösl, Carmine Elvezio, Laura R. Luidolt, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer. CatARact: Simulating Cataracts in Augmented Reality. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 1-10, November 2020. [paper] (Conference Paper)
Anna Sebernegg, Peter Kán, Hannes Kaufmann. Motion Similarity Modeling - A State of the Art Report. Technical Report TR-193-02-2020-5, August 2020. [arXiv] (Technical Report)
Mohammadreza Mirzaei, Peter Kán, Hannes Kaufmann. EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People. IEEE Transactions on Visualization and Computer Graphics, 26(5):2084-2093, May 2020. [TVCG] (Journal Paper with Conference Talk)
Katharina Krösl, Carmine Elvezio, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer. XREye: Simulating Visual Impairments in Eye-Tracked XR. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2020. [extended abstract] [image] [poster] [video] (Other Reviewed Publication)
Dennis Reimer, Eike Langbehn, Hannes Kaufmann, Daniel Scherzer. The Influence of Full-Body Representation on Translation and Curvature Gain. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pages 154-159, March 2020. (Conference Paper)
Khrystyna Vasylevska, Bálint Istvan Kovács, Hannes Kaufmann. VR Bridges: An Approach to Uneven Surfaces Simulation in VR. In Proceedings of IEEE Conference on Virtual Reality 2020, pages 388-397, March 2020. (Conference Paper)
2019
Katharina Krösl, Harald Steinlechner, Johanna Donabauer, Daniel Cornel, Jürgen Waser. Master of Disaster: Virtual-Reality Response Training in Disaster Management. Poster shown at VRCAI 2019 (14-16 November 2019). [extended abstract] [poster] [video] (Poster)
Jindřich Adolf, Peter Kán, Benjamin Outram, Hannes Kaufmann, Jaromír Doležal, Lenka Lhotská. Juggling in VR: Advantages of Immersive Virtual Reality in Juggling Learning. In 25th ACM Symposium on Virtual Reality Software and Technology, pages 1-5, November 2019. [ACM] (Conference Paper)
Johannes Göllner, Andreas Peer, Christian Meurers, Gernot Wurzer, Christian Schönauer, Hannes Kaufmann. Virtual Reality CBRN Defence. In Meeting Proceedings of the Simulation and Modelling Group Symposium 171, pages 1-25, October 2019. (Conference Paper)
Peter Kán, Hannes Kaufmann. DeepLight: Light Source Estimation for Augmented Reality using Deep Learning. The Visual Computer, 35(6):873-883, June 2019. (Journal Paper with Conference Talk)
2018
Katharina Krösl, Dominik Bauer, Michael Schwärzler, Henry Fuchs, Michael Wimmer, Georg Suter. A VR-based user study on the effects of vision impairments on recognition distances of escape-route signs in buildings. The Visual Computer, 34(6-8):911-923, April 2018. [paper] (Journal Paper with Conference Talk)
Funded Projects
2 July 2023 - 1 July 2026
Instant Visualization and Interaction for Large Point Clouds
Point clouds are a quintessential 3D geometry representation format, and often the first model obtained from reconstruction efforts such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds for instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is efficient knowledge transfer from sensor to user, with use cases ranging from image-based rendering and virtual reality (VR) technology to architecture, the geospatial industry, and cultural heritage.
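To make the culling and level-of-detail idea concrete, here is a minimal sketch (an illustration under assumptions, not IVILPC's actual pipeline: the chunk layout, pinhole camera model, and pixel-error threshold are all hypothetical) of deciding which point-cloud chunks warrant full-resolution rendering based on their projected point spacing:

```python
import numpy as np

def visible_chunks(chunk_centers, chunk_radii, chunk_point_counts,
                   cam_pos, fov_y, screen_h, max_px_error=1.5):
    """Pick chunks whose projected point spacing exceeds max_px_error pixels.

    chunk_centers:      (N, 3) chunk bounding-sphere centers
    chunk_radii:        (N,)   bounding-sphere radii
    chunk_point_counts: (N,)   points stored per chunk
    Returns indices of chunks to render at full resolution.
    """
    dists = np.linalg.norm(chunk_centers - cam_pos, axis=1)
    in_front = dists > 1e-6                      # skip degenerate chunks at the eye
    # Pixels per world unit at each chunk's distance (pinhole camera model)
    px_per_unit = screen_h / (2.0 * dists * np.tan(fov_y / 2.0))
    # Approximate point spacing inside a chunk from its density
    spacing = chunk_radii / np.cbrt(np.maximum(chunk_point_counts, 1))
    projected_spacing_px = spacing * px_per_unit
    # Render densely only where individual points are distinguishable on screen
    keep = in_front & (projected_spacing_px > max_px_error)
    return np.flatnonzero(keep)
```

In a real renderer, a test of this kind would run per frame on the GPU against a hierarchical structure such as an octree, substituting coarser levels for chunks that fall below the threshold rather than skipping them.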
no funding
Contact: Eduard Gröller
1 September 2019 - 31 August 2023
Superhumans - Walking Through Walls
In recent years, virtual and augmented reality have gained widespread attention because of newly developed head-mounted displays. For the first time, mass-market penetration seems plausible. Also, range sensors are on the verge of being integrated into smartphones, evidenced by prototypes such as the Google Tango device, making ubiquitous online acquisition of 3D data a possibility. The combination of these two technologies – displays and sensors – promises applications where users can directly be immersed into an experience of 3D data that was just captured live. However, the captured data needs to be processed and structured before being displayed. For example, sensor noise needs to be removed, normals need to be estimated for local surface reconstruction, etc. The challenge is that these operations involve a large amount of data, and to ensure a lag-free user experience, they need to be performed in real time, i.e., in just a few milliseconds per frame. In this project, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g., for collision detection. Classifying occlusions comes at no extra cost, which will allow quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, overcoming interaction methods that rely on physics-based metaphors like walking or flying and lifting interaction with 3D environments to a “superhuman” level.
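As a rough illustration of the view-dependent idea (a simplified sketch, not the proposed data structure; the grid resolution and depth tolerance are made-up parameters), projected points can be binned into a screen-aligned grid and split per cell into a visible front layer and occluded back layers:

```python
import numpy as np

def depth_layers(px, py, depth, grid_w, eps=0.05):
    """Classify points as visible (front layer) or occluded, per grid cell.

    px, py: integer cell coordinates after projection, arrays of shape (N,)
    depth:  camera-space depth per point, shape (N,)
    Returns a boolean mask 'visible' of shape (N,); False entries belong to
    occluded back layers, which remain addressable for later interaction.
    """
    cell = py * grid_w + px                      # flattened cell index
    visible = np.zeros(len(depth), dtype=bool)
    order = np.argsort(cell, kind="stable")      # group points by cell
    for group in np.split(order, np.flatnonzero(np.diff(cell[order])) + 1):
        d = depth[group]
        front = d.min()
        # Points within eps of the nearest point form the visible front layer
        visible[group[d <= front + eps]] = True
    return visible
```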
FWF
P32418-N31 - 332.780,70 €
Contact: Stefan Ohrhallinger
1 February 2020 - 31 October 2022
Virtual Reality Tennis Trainer
This research project focuses on 3D motion analysis and motion learning methodologies. We design novel methods for the automated analysis of human motion by machine learning. These methods are applicable in real training scenarios as well as in VR training setups. The results of our motion analysis can help players better understand the errors in their motion and improve their performance. Our motion analysis methods build on professional knowledge from tennis experts at our partner company VR Motion Learning GmbH & Co KG. We use numerous motion features, including rotations, positions, and velocities, to analyze the motion. Our goal is to use virtual reality as a setting for learning correct tennis technique that transfers to the real game. For this purpose, we plan to combine our motion analysis with 3D error visualization techniques and with novel motion learning methodologies. These methodologies may lead to learning correct sport technique, improved performance, and the prevention of injuries.
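One ingredient mentioned above, velocity features derived from tracked joint positions, can be sketched as follows (the frame rate, joint layout, and the simple truncation-based comparison are illustrative assumptions, not the project's actual method):

```python
import numpy as np

def velocity_features(positions, fps=90.0):
    """positions: (frames, joints, 3) array of tracked joint positions.
    Returns per-frame joint velocities of shape (frames - 1, joints, 3)."""
    return np.diff(positions, axis=0) * fps

def swing_error(trial, reference, fps=90.0):
    """Mean per-frame difference of joint velocities (lower means closer).
    Alignment here is naive truncation; a real system would warp in time."""
    v_t = velocity_features(trial, fps)
    v_r = velocity_features(reference, fps)
    n = min(len(v_t), len(v_r))
    return float(np.mean(np.linalg.norm(v_t[:n] - v_r[:n], axis=2)))
```

A score like this could feed a 3D error visualization by coloring each joint according to its contribution to the overall deviation.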
no funding
Contact: Hannes Kaufmann
1 March 2020 - 31 August 2022
BIM-based digital platform for design and optimisation of flexible facilities for Industry 4.0
Industrial building design is a design process in which the successful implementation of each project rests on collaborative decision making by multiple domain specialists: architects, engineers, production system planners, and building owners. Traditionally, such multi-stakeholder workflows are subject to conflicting goals, and frequent changes in production processes inevitably result in lengthy planning periods. This design process needs novel approaches to decision-making support that combine the ability to communicate design intent with real-time feedback on the impact of design decisions.
FFG
Contact: Hannes Kaufmann
1 August 2017 - 31 July 2019
EvaluArte: Systematic Evaluation for AR Controllers
The goal of this project is to develop a systematic evaluation methodology, to evaluate AR controllers for industrial tasks using that methodology, and to publish guidelines for developers of AR controllers, user interface designers, AR developers in general, and the AR research community.
FFG
Contact: Hannes Kaufmann
15 June 2005 - 31 December 2007
EU U-CREATE: Creative Authoring Tools for Edutainment Applications
U-CREATE is initiated by Alterface, Imagination, and ion2s, three SMEs primarily active in the field of edutainment, i.e. the joining of education and entertainment (customers are museums, cultural institutions, entertainment parks, etc.). They share a common and important problem: efficient content creation. Be it interactive setups, Mixed Reality experiences, or location-based services, all these technologies are worthless without content: content always has to be tackled and delivered at the same time as the technology. However, content creation is a long process that can turn into a nightmare in large-scale projects. The solution lies in two words: authoring tool. A powerful, graphical, beyond-the-state-of-the-art authoring tool is needed that allows one to create elaborate content in a fast and easy way. Owing to the highly innovative products commercialized by the SMEs, no such tool exists to date. Such a tool will be created by the project.

The authoring tool will increase competitiveness, because it significantly shortens production time (50% reduction of integration time) and effort (making the creation process affordable to non-specialists) for content development. It will also enable other people to create content for the intended systems: the SMEs can then sell more software while subcontracting or licensing the content production. It will also strengthen the European position in an authoring market dominated by US companies.

SMEs alone cannot afford such a task, in terms of expertise but also in terms of resources. This project gathers highly specialized expertise from ZGDV, TUW, and DIST, which allows for the delivery of a prototype authoring tool. HadroNet will be the end user, serving the consortium and helping it to gather a larger community of end users in order to assess requirements, validate results, and construct the basis of a commercial distribution system. In doing so, the project will lay the first basis of a longer-term collaboration among all partners.
EU IST - 6th Framework Program
Contact: Werner Purgathofer
1 February 2005 - 31 January 2007
Mobile Augmented Reality Museum Guide
This project aims at the creation and real-world deployment of a handheld computer guide for museum visitors, based on Augmented Reality.
FWF
Contact: Werner Purgathofer
|