Publications
99 Projects found:
started 1. December 1993
X-Mas Cards
Every year, a Christmas card showing aspects of our research projects is produced and sent out.
no funding
Contact: Werner Purgathofer |
started 1. January 2000
VRVis Competence Center
The VRVis K1 Research Center is the leading application-oriented research center for virtual reality (VR) and visualization (Vis) in Austria and is internationally recognized. You can find extensive information about the VRVis Center here.
Contact: Werner Purgathofer |
started 1. February 2018
Smart Communities and Technologies: 3D Spatialization
The Research Cluster "Smart Communities and Technologies" (Smart CT) at TU Wien will provide the scientific underpinnings for next-generation complex smart city and community infrastructures. Cities are ever-evolving, complex cyber-physical systems of systems covering a multitude of different areas. The initial concept of smart cities and communities started with cities utilizing communication technologies to deliver services to their citizens and evolved to using information technology to be smarter and more efficient about the utilization of their resources. In recent years, however, information technology has changed significantly, and with it the resources and areas addressable by a smart city have broadened considerably. They now cover areas like smart buildings, smart products and production, smart traffic systems and roads, autonomous driving, smart grids for managing energy hubs and electric car utilization, and urban environmental systems research. 3D spatialization creates the link between the internet-of-cities infrastructure and the actual 3D world in which a city is embedded in order to perform advanced computation and visualization tasks. Sensors, actuators, and users are embedded in a complex 3D environment that is constantly changing. Acquiring, modeling, and visualizing this dynamic 3D environment are the challenges we need to face using methods from visual computing and computer graphics. 3D spatialization aims to make a city aware of its 3D environment, allowing it to perform spatial reasoning to solve problems like visibility, accessibility, lighting, and energy efficiency.
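The spatial reasoning mentioned above (visibility, accessibility) can be illustrated with a toy example. This is a minimal sketch, not project code: it checks line-of-sight between two cells on a 2D occupancy grid by sampling the connecting line; the function name and grid encoding (`True` = blocked) are illustrative assumptions.

```python
def line_of_sight(grid, a, b):
    """Check whether cell b is visible from cell a on an occupancy
    grid (True = blocked) by sampling the straight line between them.
    A toy instance of the spatial visibility reasoning described above."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(1, steps):
        t = i / steps
        x = round(x0 + t * (x1 - x0))  # interpolate along the line
        y = round(y0 + t * (y1 - y0))
        if grid[y][x]:
            return False  # an occupied cell blocks the view
    return True

# A 5x5 map with a wall in the middle column blocking the view.
grid = [[False] * 5 for _ in range(5)]
for row in range(5):
    grid[row][2] = True
print(line_of_sight(grid, (0, 2), (4, 2)))  # False: the wall intervenes
print(line_of_sight(grid, (0, 0), (0, 4)))  # True: same column, no wall
```

Real city-scale visibility queries would of course run on 3D geometry and GPU acceleration structures; the sketch only shows the shape of the query.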
no funding
Contact: Michael Wimmer |
1. January 2024 - 31. December 2028
Health virtual twins for the personalised management of stroke related to atrial fibrillation
no funding
Contact: Renata Raidou |
1. March 2020 - 29. February 2028
Advanced Computational Design
no funding
|
1. October 2023 - 30. September 2027
Visual Analytics and Computer Vision meet Cultural Heritage
no funding
|
1. February 2023 - 31. January 2027
Toward Optimal Path Guiding for Photorealistic Rendering
no funding
|
2. July 2023 - 1. July 2026
Instant Visualization and Interaction for Large Point Clouds
Point clouds are a quintessential 3D geometry representation format and often the first model obtained from reconstruction efforts such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds for instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is efficient data and knowledge transfer from sensor to user, with a wide range of use cases in image-based rendering, virtual reality (VR) technology, architecture, the geospatial industry, and cultural heritage.
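The level-of-detail idea mentioned above can be sketched in a few lines. This is a hedged stand-in, not the project's method: it keeps one centroid per grid cell, so a larger cell size yields fewer points and cheaper rendering; the function name and parameters are illustrative.

```python
import math
from collections import defaultdict

def lod_subsample(points, cell_size):
    """Keep one representative point (the centroid) per grid cell.
    A coarse stand-in for level-of-detail selection: larger
    cell_size -> fewer points -> cheaper rendering."""
    cells = defaultdict(list)
    for (x, y, z) in points:
        key = (math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))
        cells[key].append((x, y, z))
    reps = []
    for pts in cells.values():
        n = len(pts)
        reps.append((sum(p[0] for p in pts) / n,
                     sum(p[1] for p in pts) / n,
                     sum(p[2] for p in pts) / n))
    return reps

# Usage: a dense cluster collapses to a single representative.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.1, 0.2, 0.2), (5.0, 5.0, 5.0)]
coarse = lod_subsample(cloud, cell_size=1.0)
print(len(coarse))  # 2: one representative per occupied cell
```

Production point-cloud renderers build hierarchical (e.g. octree-based) LOD structures on the GPU instead of a flat grid, but the selection principle is the same.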
no funding
Contact: Eduard Gröller |
1. May 2020 - 30. April 2026
Modeling the World at Scale
Vision: reconstruct a model of the world that permits online level-of-detail extraction.
Contact: Stefan Ohrhallinger |
1. May 2023 - 30. April 2026
Joint Human-Machine Data Exploration
Wider research context
In many domains, such as biology, chemistry, medicine, and the humanities, large amounts of data exist. Visual exploratory analysis of these data is often not practicable due to their size and their unstructured nature. Traditional machine learning (ML) requires large-scale labeled training data and a clear target definition, which is typically not available when exploring unknown data. For such large-scale, unstructured, open-ended, and domain-specific problems, we need an interactive approach combining the strengths of ML and human analytical skills into a unified process that helps users to "detect the expected and discover the unexpected".
Hypotheses
We hypothesize that humans and machines can learn jointly from the data and from each other during exploratory data analysis. We further hypothesize that this joint learning enables a new visual analytics approach that reveals how users' incrementally growing insights fit the data, which will foster questioning and reframing.
Approach
We integrate interactive ML and interactive visualization to learn about data and from data in a joint fashion. To this end, we propose a data-agnostic joint human-machine data exploration (JDE) framework that supports users in the exploratory analysis and the discovery of meaningful structures in the data. In contrast to existing approaches, we investigate data exploration from a new perspective that focuses on the discovery and definition of complex structural information from the data rather than primarily on the model (as in ML) or on the data itself (as in visualization).
Innovation
First, the conceptual framework of JDE introduces a novel knowledge modeling approach for visual analytics based on interactive ML that incrementally captures potentially complex, yet interpretable concepts that users expect or have learned from the data.
Second, it proposes an intelligent agent that elicits information fitting the users' expectations and discovers what may be unexpected for the users. Third, it relies on a new visualization approach focusing on how the large-scale data fits the users' knowledge and expectations, rather than solely on the data. Fourth, this leads to novel exploratory data analysis techniques: an interactive interplay between knowledge externalization, machine-guided data inspection, questioning, and reframing.
Primary researchers involved
The project is a joint collaboration between researchers from TU Wien (Manuela Waldner) and the University of Applied Sciences St. Pölten (Matthias Zeppelzauer), Austria, who contribute and join their complementary expertise on information visualization, visual analytics, and interactive ML.
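The joint loop described above (machine proposes, user labels, model updates) can be sketched minimally. Everything here is illustrative, not the JDE framework's API: a nearest-centroid model queries the item it is least certain about (smallest distance margin between the two closest class centroids), and an `oracle` function stands in for the user.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class NearestCentroid:
    """Tiny incrementally updatable classifier (illustrative)."""
    def __init__(self):
        self.sums = {}    # label -> component-wise sum
        self.counts = {}  # label -> sample count

    def update(self, x, label):
        s = self.sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def centroid(self, label):
        n = self.counts[label]
        return [v / n for v in self.sums[label]]

    def margin(self, x):
        """Gap between the two closest centroids (low = uncertain)."""
        ds = sorted(dist(x, self.centroid(l)) for l in self.sums)
        return ds[1] - ds[0] if len(ds) > 1 else float("inf")

    def predict(self, x):
        return min(self.sums, key=lambda l: dist(x, self.centroid(l)))

def explore(model, pool, oracle, budget):
    """Machine picks the most ambiguous item; the user labels it."""
    for _ in range(budget):
        x = min(pool, key=model.margin)
        pool.remove(x)
        model.update(x, oracle(x))
    return model

# Toy usage: two seed labels, then two machine-guided queries.
model = NearestCentroid()
model.update((0.0, 0.0), "a")
model.update((1.0, 1.0), "b")
pool = [(0.5, 0.5), (0.1, 0.0), (0.9, 1.0)]
oracle = lambda p: "a" if p[0] < 0.5 else "b"  # stands in for the user
explore(model, pool, oracle, budget=2)
print(model.predict((0.05, 0.05)))  # "a"
```

The real project couples such incremental learning with interactive visualization of how the data fits the user's evolving concepts; the sketch only shows the query-label-update cycle.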
FWF Stand-alone project P 36453 DOI: 10.55776/P36453
|
1. January 2025 - 31. December 2025
Visual Analytics and Visualization
no funding
Contact: Eduard Gröller |
1. January 2024 - 30. June 2025
Bringing Point Clouds to WebGPU
no funding
|
1. April 2021 - 31. March 2025
ECOlogical building enveLOPES: a game-changing design approach for regenerative urban ecosystems
no funding
|
1. March 2021 - 28. February 2025
xCTing - Enabling X-ray CT based Industry 4.0 process chains by training Next Generation research experts
no funding
Contact: Eduard Gröller |
1. December 2023 - 30. November 2024
Massive Geographic Data Visualization with WebGPU
Geographic data, such as movement data or geolocated measurements over time, are often several gigabytes in size and can therefore no longer be analyzed and presented with classical online tools. We want to help make these data accessible to data scientists, data journalists, and the general public through interactive real-time visualization on the web.
no funding
|
1. September 2021 - 31. August 2024
Smart automated check of BIM models with real buildings
no funding
Contact: Hannes Kaufmann |
2. January 2024 - 1. July 2024
Building Worlds with Mathematics - An Online Tool for Parametric Real-Time Modeling
no funding
Contact: Eduard Gröller |
1. July 2022 - 30. June 2024
Ecosystem Modeling Using Rendering Methods
no funding
Contact: Michael Wimmer |
1. November 2021 - 31. October 2023
Multi-User Mixed Reality System for flexible First Responder Training
no funding
Contact: Hannes Kaufmann |
1. September 2019 - 31. August 2023
Superhumans - Walking Through Walls
In recent years, virtual and augmented reality have gained widespread attention because of newly developed head-mounted displays. For the first time, mass-market penetration seems plausible. Also, range sensors are on the verge of being integrated into smartphones, evidenced by prototypes such as the Google Tango device, making ubiquitous on-line acquisition of 3D data a possibility. The combination of these two technologies – displays and sensors – promises applications where users can directly be immersed into an experience of 3D data that was just captured live. However, the captured data needs to be processed and structured before being displayed. For example, sensor noise needs to be removed, normals need to be estimated for local surface reconstruction, etc. The challenge is that these operations involve a large amount of data, and in order to ensure a lag-free user experience, they need to be performed in real time, i.e., in just a few milliseconds per frame. In this proposal, we exploit the fact that dynamic point clouds captured in real time are often only relevant for display and interaction in the current frame and inside the current view frustum. In particular, we propose a new view-dependent data structure that permits efficient connectivity creation and traversal of unstructured data, which will speed up surface recovery, e.g. for collision detection. Classifying occlusions comes at no extra cost, which will allow quick access to occluded layers in the current view. This enables new methods to explore and manipulate dynamic 3D scenes, overcoming interaction methods that rely on physics-based metaphors like walking or flying, lifting interaction with 3D environments to a “superhuman” level.
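The observation above, that captured points only matter inside the current view frustum, can be illustrated with a minimal culling sketch. This is an assumption-laden toy, not the proposed view-dependent data structure: a frustum is given as planes `(nx, ny, nz, d)` with a point inside when `n·p + d >= 0` for every plane.

```python
def point_in_frustum(p, planes):
    """True if p lies on the inner side of every frustum plane."""
    x, y, z = p
    return all(nx * x + ny * y + nz * z + d >= 0
               for nx, ny, nz, d in planes)

def cull(points, planes):
    """Drop points outside the current view frustum: off-frustum
    data need not be processed this frame."""
    return [p for p in points if point_in_frustum(p, planes)]

# A toy "frustum": the slab 0 <= z <= 10 bounded by two planes.
planes = [(0, 0, 1, 0), (0, 0, -1, 10)]
pts = [(0, 0, 5), (0, 0, -1), (0, 0, 12)]
print(cull(pts, planes))  # only (0, 0, 5) survives
```

A real frustum has six planes derived from the camera matrix, and culling is done hierarchically per node rather than per point; the plane test itself is the standard one shown here.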
Contact: Stefan Ohrhallinger |
1. January 2019 - 31. December 2022
Visual History of the Holocaust: Rethinking Curation in the Digital Age
Contact: Hannes Kaufmann |
1. December 2020 - 30. November 2022
Efficient workflow transforming large 3D point clouds to Building Information Models with user-assisted automatization
no funding
Contact: Michael Wimmer |
1. February 2020 - 31. October 2022
Virtual Reality Tennis Trainer
This research project focuses on 3D motion analysis and motion learning methodologies. We design novel methods for the automated analysis of human motion by machine learning. These methods are applicable in real training scenarios or in a VR training setup. The results of our motion analysis can help players better understand the errors in their motion and lead to improved motion performance. Our motion analysis methods are based on professional knowledge from tennis experts at our partner company VR Motion Learning GmbH & Co KG. We use numerous motion features, including rotations, positions, velocities, and others, to analyze the motion. Our goal is to use virtual reality as a scenario for learning correct tennis technique that will be applicable in a real tennis game. For this purpose, we plan to combine our motion analysis with 3D error visualization techniques and with novel motion learning methodologies. These methodologies may lead to learning correct sport technique, improved performance, and the prevention of injuries.
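One of the simple motion features named above, velocity, can be derived from sampled joint positions by finite differences. A minimal sketch under stated assumptions (uniform sampling at time step `dt`; the function name is illustrative, not the project's API):

```python
def velocity_features(positions, dt):
    """Central-difference velocity magnitudes for a sampled joint
    trajectory. `positions` is a list of (x, y, z) samples taken
    at a fixed time step dt."""
    feats = []
    for i in range(1, len(positions) - 1):
        (x0, y0, z0), (x1, y1, z1) = positions[i - 1], positions[i + 1]
        vx = (x1 - x0) / (2 * dt)  # central difference per axis
        vy = (y1 - y0) / (2 * dt)
        vz = (z1 - z0) / (2 * dt)
        feats.append((vx * vx + vy * vy + vz * vz) ** 0.5)
    return feats

# Uniform motion along x at 2 units/s, sampled at 10 Hz:
traj = [(0.2 * i, 0.0, 0.0) for i in range(5)]
print(velocity_features(traj, dt=0.1))  # ~[2.0, 2.0, 2.0]
```

In practice, such per-joint features (alongside rotations and positions) would be fed to the learned error-analysis models described above.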
no funding
Contact: Hannes Kaufmann |
1. October 2018 - 30. September 2022
Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication
This Marie-Curie project creates a leading European-wide doctoral college for research in Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication.
Contact: Michael Wimmer |
1. October 2020 - 30. September 2022
A hyBRID (physical-diGital) multi-user Extended reality platform as a stimulus for industry uptake of interactive technologieS
The BRIDGES project aims at “bridging” the gap between interactive technologies and industries by bringing XR to the real world! For more information please refer to: https://www.bridges-horizon.eu/
no funding
Contact: Hannes Kaufmann |
1. October 2020 - 30. September 2022
Digital Urban Mining Platform: Assessing the material composition of building stocks through coupling of BIM to GIS
no funding
Contact: Michael Wimmer |
1. March 2020 - 31. August 2022
BIM-based digital Platform for design and optimisation of flexible facilities for Industry 4.0
Industrial building design is a design process in which the successful implementation of each project is based on collaborative decision making by multiple domain specialists: architects, engineers, production system planners, and building owners. Traditionally, such multi-collaborator workflows are subject to conflicting stakeholder goals and frequent changes in production processes, inevitably resulting in lengthy planning periods. This design process needs novel approaches to decision-making support that combine the ability to communicate design intent with real-time feedback on the impact of design decisions.
Contact: Hannes Kaufmann |
1. January 2021 - 31. July 2022
Denoising for Real-Time Ray Tracing
no funding
Contact: Hannes Kaufmann |
1. December 2021 - 30. June 2022
Detection of Vehicle Hiding Places with Augmented Reality
no funding
Contact: Hannes Kaufmann |
1. January 2021 - 30. September 2021
Green and Smart Buildings: Solar Irradiation Analysis for Early Design Phases
no funding
Contact: Michael Wimmer |
1. March 2016 - 31. August 2021
Computational Design of Geometric Materials
In this project we want to research novel materials whose mechanical behavior is described by the complexity of their geometry. Such “geometric materials” are cellular structures whose properties depend on the shape and the connectivity of their cells, while the actual physical substance they are built of is constant across the entire object.
Contact: Przemyslaw Musialski |
1. September 2019 - 31. August 2021
Wohnen 4.0 - Digital Platform for Affordable Housing
This is a joint project with the civil engineering faculty and several companies. Its aim is the development of the integrated framework “Housing 4.0”: a digital platform supporting integrated planning and project delivery by coupling various digital tools and databases, such as Building Information Modeling (BIM) for Design to Production and the Parametric Habitat Designer. Our goal is to exploit the potential of BIM for modular, off-site housing assembly in order to improve planning and construction processes, reduce cost and construction time, and allow for mass customization. The novel approach in this project is user involvement, which has been neglected in recent national and international projects on off-site, modular construction supported by digital technologies. A parametric design tool should allow different stakeholders to explore both high-level and low-level options and their impact on the construction project so that mutually optimal solutions can be found more easily.
Contact: Michael Wimmer |
1. January 2013 - 31. December 2020
Visual Computing: Illustrative Visualization
The central focus of our research is to understand visual abstraction. Understanding means 1. to identify meaningful visual abstractions, 2. to assess their effectiveness for human perception and cognition, and 3. to formalize them to be executable on computational machinery. The outcome of the investigation is useful for designing visualizations for a given scenario or need, whose effectiveness can be quantified so that the most understandable visualization design can be effortlessly determined. The science of visualization has already gained some understanding of structural visual abstraction. When, for example, illustrators, artists, and visualization designers convey certain structure, or visually express how things look, we can often provide a scientifically founded argument for whether and why their expression is effective for human cognitive processing. What has not been given sufficient scientific attention is advancing the understanding of procedural visual abstraction, in other words, investigating visual means that convey what things do or how things work. This missing piece of knowledge would be very useful for the visual depiction of processes and dynamics that are omnipresent in science and technology, but also in our everyday lives. The project therefore investigates theoretical foundations for the visualization of processes. Physiological processes that describe the complex machinery of biological life are picked as the target scenario. The reason for this choice is twofold. Firstly, these processes are immensely complex, are carried out on various spatial and temporal levels simultaneously, and can be sufficiently understood only if all scales are considered. Secondly, physiological processes have been modeled as a result of intensive research in biology, systems biology, and biochemistry, and are available in the form of digital data.
The goal will be to visually communicate how physiological processes participate in life while considering the limitations of human perceptual and cognitive capabilities. By solving individual visualization problems of this challenging target scenario, the research will provide first pieces of understanding of procedural visual abstractions that are generally applicable beyond the chosen target domain. A prototype implementation of the developed technology is available at the GitHub repository: https://github.com/illvisation/
cellVIEW
cellVIEW is a new tool that provides fast rendering of very large biological macromolecular scenes and is inspired by state-of-the-art computer graphics techniques. Click here for additional information.
Invited Talks
18.11.2016: Arthur J. Olson, Envisioning the Visible Molecular Cell
Contact: Ivan Viola |
1. December 2015 - 30. November 2020
Real-Time Shape Acquisition with Sensor-Specific Precision
Acquiring shapes of physical objects in real time, with precision guaranteed with respect to the noise model of the sensor devices.
Contact: Michael Wimmer |
1. November 2015 - 31. October 2020
Path-Space Manifolds for Noise-Free Light Transport
The project aims to develop new statistical and algorithmic methods to improve light-transport simulation for offline rendering.
Contact: Michael Wimmer |
1. October 2018 - 30. September 2020
Use of Augmented Reality for Building Inspection and Quality Assurance on Construction Sites
The aim of this research project is the development of a construction-site-suitable augmented reality (AR) system, including a Remote-Expert-System and a BIM closed-loop data transfer system, for improving construction quality, building security, and energy efficiency, as well as increasing the efficiency of building inspection.
Contact: Hannes Kaufmann |
1. September 2018 - 31. August 2020
Scanning and data capturing for Integrated Resources and Energy Assessment using Building Information Modelling
The aim of the project is to increase resource and energy efficiency through the coupling of various digital technologies and methods for data capturing (geometry and material composition) and modelling (as-built BIM), as well as through gamification. Collaborative project with several companies and institutes.
Contact: Michael Wimmer |
1. July 2019 - 30. June 2020
Procedural Keyframe Animation for 3D Mesoscale Models
Contact: Eduard Gröller |
1. May 2017 - 30. April 2020
A Test Suite for Photorealistic Rendering and Filtering
This project will research methods to test and compare global-illumination algorithms as well as filtering algorithms, and also develop test data sets for this purpose.
Contact: Michael Wimmer |
1. April 2015 - 31. March 2020
MAKE-IT-FAB: Modeling of Shapes for Personal Fabrication
The aim of this project is to investigate and contribute to shape modeling and geometry processing for personal fabrication, a trend that currently receives intensified attention in science and industry. Our goal is to contribute novel algorithmic solutions for fabrication-aware shape processing and interactive modeling.
Contact: Przemyslaw Musialski |
1. January 2017 - 31. December 2019
ILLUSTRARE: Integrative Visual Abstraction of Molecular Data
FWF - I 2953-N31 Integrative Visual Abstraction of Molecular Data
Contact: Ivan Viola |
14. November 2016 - 31. December 2019
Animated Cell Tab development
no funding
Contact: Ivan Viola |
1. August 2017 - 31. July 2019
EvaluArte: Systematic Evaluation for AR Controllers
The goal of this project is the development of a systematic evaluation methodology, the evaluation of AR controllers for industrial tasks by utilizing the developed methodology, and the publication of guidelines for developers of AR controllers, user interface designers, AR developers in general, and the AR research community.
Contact: Hannes Kaufmann |
1. July 2017 - 30. June 2019
Smile Analytics: Visual Analytics for Realistic and Aesthetic Smile Design
The aim of the project is to improve the digital fabrication of dental prosthetic devices. We employ state-of-the-art visualization techniques to enable a dental pretreatment preview for patients.
Contact: Gabriel Mistelbauer |
21. June 2017 - 20. June 2019
BioNetIllustration: User Centric Illustrations of Biological Networks
In living systems, one molecule is commonly involved in several distinct physiological functions. The roles of molecules are commonly summarized in pathway diagrams, which, however, are abstract and hierarchically nested, and are thus difficult to comprehend, especially by a non-expert audience. The primary goal of this visualization research is to intuitively support the comprehensive understanding of relationships among biological networks using interactively computed illustrations. Illustrations, especially in biology textbooks, are carefully designed to clearly present reactions between organs as well as interactions within cells. The automatic generation of illustrative visualizations of biological networks is thus the technical content of this proposal. Automatic generation of hand-drawn illustrations has been a challenging task due to the difficulty of algorithmically describing a human creative process, such as evaluating and selecting significant information and composing meaningful explanations in a visually plausible manner. The project also involves experts from several disciplines, including network and medical visualization, data mining, systems biology, and perceptual psychology. The result will provide a new direction for physiological process analysis and accelerate knowledge transfer not only among experts but also to the public. Acknowledgment: This project has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 747985.
Contact: Hsiang-Yun Wu |
1. December 2015 - 31. December 2018
Visual Information Foraging on the Desktop
The goal of this project is to design and develop novel interactive visualization techniques to support knowledge workers in making sense of their unstructured, dynamic information collections.
Contact: Manuela Waldner |