Konversatorium im WS 2012/2013

The Konversatorium takes place on Fridays at 10:30 in seminar room E186 (Favoritenstraße, Stiege 1, 5th floor).
Here you can find the upcoming speakers and topics of the Konversatorium in WS 2012/2013.

Content:

05 October 2012

Speaker Topic Supervisor Duration
Joaquim Jorge Introduction Talk (Guest Professor) WP 20+5
Artem Amirkhanov PhD Defense Test Talk MEG 40+25

12 October 2012

Speaker Topic Supervisor Duration
Philip Limbeck Interactive Tracking of Markers for Facial Palsy Analysis (Epilogue rehearsal talk) PRIP 9+10
The human face provides a rich source of information, from muscular movement and nerve actuation to properties of the skin and facial characteristics. This information can be exploited to diagnose and quantify facial impairments. Facial palsy is one of these impairments; it is caused by restrictions of the nerve actuation of the muscles responsible for facial expressions. The main symptoms of this condition are asymmetrical facial movement and partial facial paralysis.

To measure its progress and to compare pre-surgical with post-surgical conditions, medical physicians require different clinical measures extracted from those locations of the face which provide the most information about facial expression. These locations are indicated by small artificial markers placed on the patient's face before an evaluation session. A video of the patient is then recorded and used to localize these markers. This task is currently performed manually by an operator and can take up to five hours for a single video. Object tracking is a research field which deals with estimating the position of one or more objects in an image sequence. Its methods have been applied successfully to various applications, ranging from video surveillance to robotics.

Traditionally, illumination, changes in pose, and occlusion are considered the main problems in artificial tracking scenarios. While the associated tracking methods have proven able to deal with these problems in recent years, tracking scenarios from the medical perspective are still partly unexplored. Like all natural objects, the human face has a high potential to deform and is characterized by an irregular texture. Additionally, not just one but multiple objects have to be tracked simultaneously, which imposes the additional difficulty of ensuring that markers can be uniquely identified in every frame.

The thesis explores the possibility of tracking the artificial facial markers semi-automatically by applying different state-of-the-art tracking schemes to the presented problem. The tracking schemes are based on a sequential Bayes estimation technique, the so-called particle filter, which assesses a set of hypotheses using their congruence with the target model. Hence, the location of each marker can be estimated accurately and occlusions can be handled efficiently.
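As a rough illustration of the particle-filter idea (not the implementation used in the thesis), the sketch below performs one resample-predict-update step for a single 2D marker; the `measure_likelihood` callback, which plays the role of the congruence with the target model, is an assumed placeholder. Running one such filter per marker, combined with the operator interaction described below, corresponds to the semi-automatic setup outlined in this abstract.

```python
import numpy as np

def particle_filter_step(particles, weights, measure_likelihood, motion_noise=2.0):
    """One sequential Bayes update for a set of 2D position hypotheses.

    particles: (N, 2) array of candidate marker positions
    weights:   (N,) array of normalized particle weights
    measure_likelihood: callable mapping an (N, 2) array of positions to
                        per-particle likelihoods (e.g. template similarity)
    """
    n = len(particles)

    # Resample proportionally to the current weights (systematic resampling).
    positions = (np.arange(n) + np.random.rand()) / n
    indices = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[np.minimum(indices, n - 1)]

    # Predict: diffuse the hypotheses with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)

    # Update: weight each hypothesis by its agreement with the target model.
    weights = measure_likelihood(particles)
    weights = weights / weights.sum()

    # The state estimate is the weighted mean of the hypotheses.
    estimate = (particles * weights[:, None]).sum(axis=0)
    return particles, weights, estimate
```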

To improve the accuracy and to reset lost markers, the clinical operator can interact with the tracking system. The results showed that the chosen methods were superior in both the number of interactions and accuracy when compared with trackers that use only a single hypothesis about the marker locations. Additionally, it was shown that the evaluated schemes are able to replace manual tracking while preserving high accuracy. As a result, the time to locate the markers was decreased by around two thirds, with an accuracy of around 3-4 pixels with respect to the available ground truth. Additionally, only around 2% of the evaluated frames required operator intervention.

Georg Zankl Semi-automatic Annotation on Image Segmentation Hierarchies (Epilogue rehearsal talk) PRIP 9+10
We study the task of interactive semantic labeling of a segmentation hierarchy. To this end we propose a framework interleaving two components: an automatic labeling step, based on a Conditional Random Field whose dependencies are defined by the inclusion tree of the segmentation hierarchy, and an interaction step that integrates incremental input from a human user. Evaluated on two distinct datasets, the proposed interactive approach efficiently integrates human interventions and illustrates the advantages of structured prediction in an interactive framework.
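The interleaving of the automatic CRF labeling step and the user interaction step can be pictured as the following loop; this is only a schematic sketch, and `crf_label` and `ask_user` are assumed placeholders for the inference over the inclusion tree and the user interface, respectively.

```python
def interactive_labeling(hierarchy, crf_label, ask_user, max_rounds=20):
    """Alternate automatic labeling with incremental user corrections.

    hierarchy: the segmentation hierarchy (e.g. its inclusion tree)
    crf_label: callable(hierarchy, constraints) -> {region_id: label}
    ask_user:  callable(labeling) -> (region_id, label), or None when satisfied
    """
    constraints = {}                               # user-provided hard labels
    labeling = crf_label(hierarchy, constraints)   # automatic step
    for _ in range(max_rounds):
        correction = ask_user(labeling)            # interaction step
        if correction is None:                     # user accepts the labeling
            break
        region_id, label = correction
        constraints[region_id] = label             # integrate incremental input
        labeling = crf_label(hierarchy, constraints)
    return labeling
```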
Andrej Varchola Live Fetoscopic Visualization of 4D Ultrasound Data (PhD Defense Test Talk) MEG 40+10
Due to its real-time character, low cost, non-invasive nature, high availability, and many other factors, ultrasound imaging is considered a standard diagnostic procedure during pregnancy.

The quality of diagnostics depends on many factors, including scanning protocol, data characteristics and visualization algorithms. In this work, several problems of ultrasound data visualization for obstetric ultrasound imaging are discussed and addressed.

19 October 2012

Speaker Topic Supervisor Duration
Johannes Kopf A Blast from the Past --- Digital Reconstruction and Vectorization of Classic Comic Books and Old School Pixel Art (Guest Talk) MW 45+15
Lukas Rössler Rendering Interactive Maps on Mobile Devices Using Graphics Hardware (DAEV) MW 10+10
Mapping and navigation applications on mobile devices such as smartphones or tablets are increasingly popular. Modern maps are often rendered directly from vector data. Since the performance of a previous CPU-based map renderer was unsatisfactory, a hardware-accelerated map rendering prototype for mobile devices based on OpenGL ES 2.0 was created. A novel hybrid rendering architecture is introduced to combine the advantages of tile-based and true real-time rendering solutions. The architecture consists of a tile server that renders base map tile images and a client that displays them. The new third component, the post-processor, draws dynamic map features such as icons and text above the tiles in real time, enabling a 3D fly-over mode. All components run inside the same process directly on the device. For the rendering of lines, an important map feature, a new rendering algorithm was developed that can draw lines of arbitrary width with one of three different line cap styles. Additionally, the line can be stippled with a user-defined pattern, where each line dash is rendered with the selected cap style. Antialiasing of the line is supported with an arbitrary circularly symmetric filter kernel of user-definable radius. To accelerate icon rendering, a texture atlas is used to store the icons, and a simple but effective packing algorithm has been developed to generate the atlas online.
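The online atlas generation mentioned at the end could, for instance, be realized with a shelf packer of the following kind; this is a generic sketch under the assumption of axis-aligned rectangular icons, not necessarily the algorithm developed in the thesis.

```python
def shelf_pack(icon_sizes, atlas_width):
    """Place icons left to right on horizontal shelves; open a new shelf when full.

    icon_sizes: list of (width, height) tuples, processed in arrival order
    Returns the (x, y) position of each icon and the total atlas height used.
    """
    positions = []
    x = y = shelf_height = 0
    for w, h in icon_sizes:
        if x + w > atlas_width:      # icon does not fit on the current shelf
            y += shelf_height        # open a new shelf below
            x = 0
            shelf_height = 0
        positions.append((x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return positions, y + shelf_height
```

Sorting icons by height before packing wastes less space, but an online packer has to take them in the order they arrive.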

09 November 2012

Speaker Topic Supervisor Duration
Camillo Dellmour Gaze-responsive Stereoscopic Rendering (DAAV) MW 10+5
Abstract:

Current stereoscopic technology is tiring for users and is often reported as uncomfortable even after short periods of use. The reason for this is the so-called vergence/accommodation conflict, i.e., a mismatch between the actual focusing point of the eyes (the display) and the virtual focusing point in the scene. One way to overcome these problems is to adapt the focal plane in the rendering system so that it better matches the current state of the human visual system, i.e., where the user is currently focusing in the scene. The goal of this thesis is to develop a proof of concept for an interactive application (i.e., a simple computer game) which senses the user's gaze with an eye tracker to dynamically adjust the configuration of the stereo-3D rendering.
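A minimal sketch of the intended coupling between eye tracker and renderer is given below; `gaze_point` and `depth_at` are hypothetical interfaces (tracker output and a depth-buffer lookup), not part of the thesis.

```python
def update_focal_distance(gaze_point, depth_at, prev_focal=None, smoothing=0.2):
    """Move the convergence/focal plane towards the scene depth the user looks at.

    gaze_point: (x, y) screen coordinates reported by the eye tracker
    depth_at:   callable(x, y) -> linear scene depth at that pixel
    smoothing:  exponential smoothing factor to keep the focal plane from jittering
    """
    target = depth_at(*gaze_point)
    if prev_focal is None:
        return target
    return (1.0 - smoothing) * prev_focal + smoothing * target
```

The returned distance would then drive the convergence (and, if desired, a depth-of-field blur) of the stereo camera pair each frame.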

Michael Hecher Introduction Talk MW 10+5
Abstract:

In my introduction talk I will tell you a few things about myself, e.g., where I come from and how I ended up as a university assistant, my hobbies, and some other interesting things that are somehow related to my life.

Cristian Rotariu Introduction Talk MW 10+5
Abstract:

In my introductory talk I will tell you a few things about myself: educational background, professional experience, future goals, and some other things related to my family and my hobbies.

Michael Schwärzler Fast Accurate Soft Shadows with Adaptive Light Source Sampling (VMV Test Talk) MW 15+30
Abstract:

Physically accurate soft shadows in 3D applications can be simulated by taking multiple samples from all over the area light source and accumulating them. Due to the unpredictability of the size of the penumbra regions, the required sampling density has to be high in order to guarantee smooth shadow transitions in all cases. Hence, several hundred shadow maps have to be evaluated in any scene configuration, making the process computationally expensive. We therefore suggest an adaptive light source subdivision approach to select the sampling points adaptively. The main idea is to start with a few samples on the area light, evaluate their differences using hardware occlusion queries, and add more sampling points if necessary. Our method is capable of selecting and rendering only the samples which contribute to an improved shadow quality, and hence generates shadows whose quality and accuracy are comparable to those of dense sampling. Even though additional calculation time is needed for the comparison step, this method saves valuable rendering time and achieves interactive to real-time frame rates in many cases where a brute-force sampling method does not.
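The adaptive subdivision can be pictured as a recursion over the rectangular area light; the sketch below is schematic, and `shadow_from` and `difference` are assumed placeholders for shadow-map rendering and the occlusion-query-based comparison (a real implementation would also cache corner samples between levels).

```python
def adaptive_light_samples(light_min, light_max, shadow_from, difference,
                           threshold, depth=0, max_depth=4):
    """Recursively subdivide an area light until neighbouring samples
    produce sufficiently similar shadows.

    light_min, light_max: opposite corners (x, y) of the light rectangle
    shadow_from: callable(sample_pos) -> shadow result for that light sample
    difference:  callable(shadow_a, shadow_b) -> scalar dissimilarity
    """
    (x0, y0), (x1, y1) = light_min, light_max
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    shadows = [shadow_from(c) for c in corners]

    # If the corner shadows already agree (or the recursion limit is reached),
    # the patch is represented by its corner samples only.
    max_diff = max(difference(shadows[i], shadows[j])
                   for i in range(4) for j in range(i + 1, 4))
    if max_diff < threshold or depth >= max_depth:
        return corners

    # Otherwise split the patch into four quadrants and recurse.
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    quads = [((x0, y0), (cx, cy)), ((cx, y0), (x1, cy)),
             ((x0, cy), (cx, y1)), ((cx, cy), (x1, y1))]
    samples = []
    for qmin, qmax in quads:
        samples += adaptive_light_samples(qmin, qmax, shadow_from, difference,
                                          threshold, depth + 1, max_depth)
    return samples
```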

23 November 2012

Speaker Topic Supervisor Duration
Ilyana Kirkova CGC Application Talk RP 15+10
Abstract:

In my application talk I will introduce myself by saying a few words about my cultural background and education, as well as some interests and hobbies of mine. I will also outline my work experience and projects at the university. I will tell you about my motivation to apply to the Computer Graphics Club and how I imagine I could contribute to the CG community. My talk will conclude with a short question and answer session.

Heinrich Fink Building a real-time renderer for TV broadcasting using the OpenGL 4.3 pipeline (DAAV) MW 10+10
Abstract:

Broadcasting studios used to employ highly specialized and expensive equipment to inscribe graphics into video material for live TV. As commodity hardware became more powerful and as file-based workflows became widely adopted, a recent trend in broadcasting hardware is to operate one or more TV channels with only a single PC-like machine. All stages of the broadcasting workflow are performed by one or more software components on this single computer. This talk presents a thesis that will focus on the renderer component of such a system and that will propose an implementation with the OpenGL API.

While OpenGL has built-in support for targeting the image formats of consumer devices, working with the image standards used in broadcast video requires special attention: higher bit rates, studio color spaces, and specialized image coding have to be considered. It is suggested that compute shaders, now available in the OpenGL 4.3 pipeline, can address these special requirements and render studio material more efficiently than before. This talk gives an outlook on the proposed thesis and summarizes the key challenges that are expected to arise when implementing a renderer for a broadcast video pipeline.
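As an illustration of the kind of per-pixel work such a compute shader would take over (not code from the thesis), the NumPy sketch below converts full-range BT.709 RGB to 8-bit limited-range Y'CbCr, one of the conversions studio material typically requires.

```python
import numpy as np

KR, KB = 0.2126, 0.0722          # BT.709 luma coefficients
KG = 1.0 - KR - KB

def rgb_to_ycbcr_bt709(rgb):
    """rgb: float array of shape (..., 3) with values in [0, 1] (full range).
    Returns 8-bit limited-range Y'CbCr (Y in [16, 235], Cb/Cr in [16, 240])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))          # chroma in [-0.5, 0.5]
    cr = (r - y) / (2.0 * (1.0 - KR))
    out = np.empty_like(rgb)
    out[..., 0] = 16.0 + 219.0 * y             # limited-range quantization
    out[..., 1] = 128.0 + 224.0 * cb
    out[..., 2] = 128.0 + 224.0 * cr
    return np.round(out).astype(np.uint8)
```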

Michael Hecher A Comparative Perceptual Study of Soft Shadow Algorithms (Epilog Test Talk) MW 10+5
Abstract:

While a huge body of soft shadow algorithms has been proposed, there has been no methodical study comparing different real-time shadowing algorithms with respect to their plausibility and visual appearance. Therefore, a study was designed to identify and evaluate scene properties with respect to their relevance to shadow quality perception. Since there are many factors that might influence the perception of soft shadows (e.g., complexity of objects, movement, and textures), the study was designed and executed in a way on which future work can build. The novel evaluation concept captures not only the predominant case of an untrained user experiencing shadows without comparing them to a reference solution, but also the cases of trained and experienced users. We achieve this by reusing the knowledge users gain during the study. Moreover, we expected that the common approach of a two-option forced-choice study can be frustrating for participants when both choices are so similar that people think they are the same. To tackle this problem, a neutral option was provided. For time-consuming studies, where frustrated participants tend to make arbitrary choices, this is a useful concept. Speaking with participants after the study and evaluating the results supports our choice of a third option. The results are helpful to guide the design of future shadow algorithms and allow researchers to evaluate algorithms more effectively. They also allow developers to make better performance versus quality decisions for their applications. One important result of this study is that we can scientifically verify that, without comparison to a reference solution, human perception is relatively indifferent to whether a soft shadow is correct. Hence, a simple but robust soft shadow algorithm is the better choice in real-world situations. Another finding is that approximating contact hardening in soft shadows is sufficient for the average user and not significantly worse for experts.

30 November 2012

Speaker Topic Supervisor Duration
Silvana Podaras CGC Application Talk RP 15+10
Abstract:

With her talk, the speaker applies for membership in the famous Computer Graphics Club of our institute. A short introduction of her personal and professional background is given, upon which a decision on her admission is made.

Clemens Arbesser Visualisation of Noise Distribution (DAAV) MEG 10+10
Abstract:

Noise pollution is an ever-increasing problem, not just in urban environments but also in more rural areas such as small villages, along country roads, or even in very sparsely populated regions. The purpose of this master's thesis is to propose ways to simulate and visualize noise pollution in large-scale, non-urban environments in order to help communicate the impact of new sound emitters on affected neighbors. Knowledge of noise propagation, the influence of the terrain and other obstacles, and how different emitters add up can provide valuable insights and help in the decision-making process. The developed tool uses NVIDIA's CUDA architecture and the European norm "ISO 9613-2: Attenuation of sound during propagation outdoors" to create real-time visualizations in both 2D and 3D.
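Two of the building blocks of such a simulation are the per-source attenuation over distance and the energetic summation of several emitters at a receiver. The sketch below covers geometric divergence and a rough atmospheric absorption term only; the full ISO 9613-2 model additionally accounts for ground effects, barriers, and further terms.

```python
import math

def attenuated_level(source_level_db, distance_m, alpha_db_per_km=3.0):
    """Sound pressure level at the receiver for a single point source.

    Geometric divergence (ISO 9613-2): A_div = 20*log10(d / 1 m) + 11 dB.
    Atmospheric absorption is approximated as alpha * d.
    """
    a_div = 20.0 * math.log10(max(distance_m, 1.0)) + 11.0
    a_atm = alpha_db_per_km * distance_m / 1000.0
    return source_level_db - a_div - a_atm

def combined_level(levels_db):
    """Energetic (incoherent) summation of several sources at the receiver."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Hypothetical example: two 100 dB emitters at 50 m and 200 m from the receiver
print(round(combined_level([attenuated_level(100.0, 50.0),
                            attenuated_level(100.0, 200.0)]), 1))
```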

Manuel Hochmayr Parameter Space Visualization (DAAV) MEG 10+10
Abstract:

Users have to adjust a large number of parameters in a sensible way when visualizing objects. Finding a useful presentation of the object can be a cumbersome process, depending on the user's speed and experience.

Normally, users already have an idea of what the final visualization should look like, but the user interface does not support them enough in choosing the right parameters to achieve this look. The aim of the master's thesis is to develop a program that speeds up the process of creating the desired final image for a scientific visualization problem.

Users should not have to adjust every single parameter manually on their own. Instead, the program makes a sensible pre-selection of parameters and offers the users different candidate visualizations, from which they select the best one. If necessary, the parameters can be refined in further steps. This semi-automatic process should reduce the time needed to find sensible parameter settings.
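One generic way to realize such a semi-automatic loop is a gallery-style refinement: sample a handful of candidate parameter sets, let the user pick the best preview, and resample around that pick. The sketch below is only an assumed illustration of this idea, not the approach of the thesis.

```python
import random

def gallery_refine(param_ranges, render_preview, pick_best, rounds=3, candidates=8):
    """param_ranges:  {name: (lo, hi)} numeric parameter intervals
    render_preview: callable(params) -> preview image shown to the user
    pick_best:      callable(list of (params, preview)) -> chosen params
    """
    ranges = dict(param_ranges)
    chosen = None
    for _ in range(rounds):
        samples = [{k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
                   for _ in range(candidates)]
        chosen = pick_best([(p, render_preview(p)) for p in samples])
        # Shrink every interval around the chosen value for the next round.
        ranges = {k: (max(lo, chosen[k] - 0.25 * (hi - lo)),
                      min(hi, chosen[k] + 0.25 * (hi - lo)))
                  for k, (lo, hi) in ranges.items()}
    return chosen
```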

07 December 2012

Speaker Topic Supervisor Duration
Christoph Winklhofer Reflective and Refractive Objects for Mixed Reality (DAEV) MW 10+10
Abstract:

Mixed reality is the idea of merging real and virtual objects in a scene. The visual appearance of such an augmented environment depends on a plausible lighting simulation. Reflective and refractive objects are ubiquitous in the real world, but most mixed reality systems neglect them. The reason is that these objects require a global illumination approach, and such a complex method is hard to embed in a mixed reality system that demands real-time frame rates to handle user interaction.

This thesis describes the integration of reflective and refractive objects in a mixed reality environment. The aim is to create a realistic light distribution that simulates reflection and refraction between real and virtual objects. Caustics, the focusing of light caused by scattering from reflective or refractive objects, are another important aspect of a believable perception.

The proposed rendering method extends differential instant radiosity with three additional image-space rendering techniques capable of handling reflection, refraction, and caustics in real time. These techniques link billboard impostors with relief mapping to produce convincing reflections and refractions. Combined with deferred shading and instant radiosity, it is possible to capture indirectly lit surfaces. For caustics, a buffer stores the photon concentration in screen space and maps it onto objects, analogous to a light map. Finally, differential rendering merges real and virtual objects: the occurring light paths are analyzed, and the differential effect is also applied to reflected and refracted objects.

By combining these techniques, our method successfully simulates the various lighting effects of reflective and refractive objects and is able to handle user interaction at real-time frame rates. This offers a practical way to greatly improve the appearance of a mixed reality environment.
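The final compositing step of differential rendering is compact enough to be sketched directly; the snippet below shows only that step and assumes two synthetic renderings of the modelled scene, with and without the virtual objects, are already available.

```python
import numpy as np

def differential_composite(camera_image, with_virtual, without_virtual, virtual_mask):
    """camera_image:   captured real frame, float RGB in [0, 1]
    with_virtual:      synthetic rendering including the virtual objects
    without_virtual:   synthetic rendering of the modelled real scene only
    virtual_mask:      1 where a virtual object covers the pixel, else 0
    """
    # Outside the virtual objects, add only the difference they cause
    # (shadows, reflections, caustics) on top of the real image.
    differential = camera_image + (with_virtual - without_virtual)
    mask = virtual_mask[..., None]
    # Where virtual objects are directly visible, show the full synthetic result.
    return np.clip(mask * with_virtual + (1.0 - mask) * differential, 0.0, 1.0)
```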

Florian Felberbauer Games with a Purpose - Improving 3D Model and Land Cover Data using Crowdsourcing (DAEV) MW 10+10
Abstract:

A variety of 3D-model databases are available on the internet, but the process of finding the right models is often tiresome. This is because the majority of the available models are barely annotated or of low quality. Annotations are often ambiguous, vague, or too specialized. Besides 3D-model annotations, remote sensing data can be ambiguous too: global land cover maps like GlobCover, MODIS, and GLC2000 show large differences in certain areas of the world. This lack of correct data is a problem, because such data is a basic requirement for a variety of research areas and applications.

Consequently, this thesis aims at tackling both of the aforementioned problems. The task of recognizing and classifying images as well as 3D models is easy for human beings, but even today rather hard for computer systems. For that reason, this thesis makes use of the concept of crowdsourcing. The quality of user annotations can be improved by collecting annotations from a variety of users and extracting those with the highest frequency. To achieve this, a game has been implemented that unifies crowdsourcing and social game mechanics. This game consists of game rounds which lead the user through the process of annotating 3D models as well as land cover data. In addition, a drawing round has been implemented to enable the user to classify a given land cover area using a pre-defined set of categories. As crowdsourcing relies on a large number of users, the focus is on implementing a game that provides incentives for users to spend their free time playing while solving useful tasks.
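The frequency-based aggregation of user annotations described above boils down to a few lines; the example data here is hypothetical.

```python
from collections import Counter

def top_annotations(annotations, min_votes=3, top_k=5):
    """annotations: free-text tags collected from many users for one model or map."""
    normalized = [a.strip().lower() for a in annotations if a.strip()]
    counts = Counter(normalized)
    return [(tag, n) for tag, n in counts.most_common(top_k) if n >= min_votes]

# Hypothetical tags entered for one 3D model
tags = ["church", "Church", "chapel", "church ", "building", "chapel", "tower"]
print(top_annotations(tags, min_votes=2))   # [('church', 3), ('chapel', 2)]
```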

To reach as many users as possible, the game has been implemented using only HTML5 and JavaScript to circumvent limitations due to missing plugins or external players and to support all systems, including mobile devices. It is also integrated into Facebook to further enlarge the number of reachable users. The potential of the approach is demonstrated on the basis of a user study.

The results show that the annotations with the highest frequency are good descriptors for the underlying 3D models as well as for the land cover maps. None of the top annotations are incorrect for any model or map. Analyzing the user paintings also shows very good results: the majority of maps were classified correctly, and even the distribution of categories over the maps is correct to a high degree. We thus show that the combination of crowdsourcing and social games can improve land cover data and 3D-model annotations. These insights contribute to the ongoing “Landspotting” project, which is further explained in this thesis.


14 December 2012

Speaker Topic Supervisor Duration
Joaquim Jorge How to do a good presentation WP 75+15

21 December 2012

CANCELLED

11 January 2013

Speaker Topic Supervisor Duration
Michael Wörister A Caching System for a Dependency-Aware Scene Graph (DAEV) WP 10+10
Abstract:

Scene graphs are a common way of representing three-dimensional scenes in graphical applications. A scene is represented as a hierarchical structure of nodes which represent 3D geometry, spatial transformations, surface properties, and other, possibly application-specific aspects. Scene graph systems can be designed to be very generic and flexible, e.g., by allowing users to implement custom node types and traversals or by providing facilities to dynamically create subgraphs during a traversal. This flexibility comes at the cost of increased time spent in pure traversal logic; especially for CPU-bound applications this causes a performance drop. This thesis proposes a scene graph caching system that automatically creates an alternative representation of selected subgraphs. This alternative representation constitutes a render cache in the form of a so-called instruction stream, which allows the cached subgraph to be rendered at lower CPU cost and thus more quickly than with a regular render traversal. Additionally, a number of optimizations for render caches were implemented to further increase the performance gain with respect to uncached rendering.
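The core idea of the instruction stream can be pictured as flattening a render traversal into a linear list of state changes and draw calls that is replayed without any traversal logic. The sketch below is schematic; the node types and fields are assumptions, and matrices are assumed to be NumPy 4x4 arrays.

```python
def build_instruction_stream(node, current_transform, stream=None):
    """Flatten a subgraph into a replayable list of (command, payload) pairs.

    node: hypothetical scene graph node with .kind, .children and
          type-specific fields (.matrix for transforms, .mesh for geometry)
    """
    if stream is None:
        stream = []
    if node.kind == "transform":
        current_transform = current_transform @ node.matrix
    elif node.kind == "geometry":
        stream.append(("set_model_matrix", current_transform))
        stream.append(("draw", node.mesh))
    for child in getattr(node, "children", []):
        build_instruction_stream(child, current_transform, stream)
    return stream

def replay(stream, backend):
    """Replaying skips the traversal logic entirely, which is where the
    CPU-time saving of the render cache comes from."""
    for command, payload in stream:
        getattr(backend, command)(payload)
```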

In order to be able to update render caches incrementally in reaction to certain scene graph changes, a dependency system was developed. This system provides a model for describing and tracking changes in the scene graph and enables the caching system to update only those parts of the render cache that need to be updated, without necessitating a full rebuild of the cache.

The actual performance characteristics of the scene graph caching system were investigated using a number of synthetic test scenes in different configurations. These tests showed that the caching system is most useful in scenes with a high structural complexity (high geometry count and/or deep scene graph hierarchies) and moderate primitive count per geometry. In this kind of scene the scene graph caching system, with all optimizations enabled, reduced average frame times by a factor of 5 to 8 with all objects in the scene changing their transformation each frame. This performance gain could be achieved at the cost of startup times increased by 3 to 4 seconds for scenes with 3000 to 8000 geometry nodes. The additional main memory consumption was measured at 4 MiB for the scene with 3000 geometries and a flat transformation hierarchy and 20 MiB for the scene with 8000 geometries and a deep transformation hierarchy.


18 January 2013

Speaker Topic Supervisor Duration
Martin Knecht RESHADE MW 30+10
Abstract:

The aim of the RESHADE project is to simulate the mutual influence between real and virtual objects in mixed reality applications. Virtual objects in such applications appear disturbingly artificial because rendering completely ignores the real environment. Yet the term mixed reality suggests that virtual and real objects blend harmoniously into one visual perception and cannot be distinguished easily. The ambitious goal of this project was to provide users with a perfect illusion, so that they cannot perceive a difference between virtual and real objects. After more than three years of research, the project will finally end at the beginning of February. This talk will give an overview of the challenges faced and the methods developed during this project.

Peter Mindek, Gabriel Mistelbauer IEEE Visualization Report + interesting papers MEG 20+10

25 January 2013

Speaker Topic Supervisor Duration
Ivan Viola Introduction Talk WP, MEG 30
Abstract:

Visual Computing in Ultrasonography

Medical ultrasound has in recent years experienced rapid development in the quality of real-time 3D ultrasound imaging. Image quality of a 3D volume that could previously only be achieved within a few seconds can now be achieved in a fraction of a second. This technological advance offers entirely new opportunities for the use of ultrasound in clinics. In my talk, I will discuss several enabling visual computing technologies, such as image registration, filtering, segmentation, and visualization, developed in the course of the research project, which together give ultrasound new potential for use in the clinical environment.

Manuela Waldner Introduction Talk MEG 10+5
Abstract:

Information Management in Emerging Display Environments

With affordable large-scale monitors and powerful projector hardware, it is possible to combine multiple heterogeneous display devices of different size, resolution, and orientation into a common interaction space. However, commonly used user interfaces and information management techniques fail to take the complexity of such emerging display environments into account and therefore cannot exploit their full potential.

In this talk, I will present user interface and information management techniques for emerging display environments which we designed and developed at the Institute for Computer Graphics and Vision at Graz University of Technology. I will first demonstrate our technical infrastructure for constructing spatially aware display environments. Based on this infrastructure, new interaction and information presentation techniques for window managers in emerging display environments were implemented and evaluated. I will report on the lessons learned and outline potential future research directions.

Zoltan Konyha Interactive Visual Analysis in Automotive Engineering Design (PhD Defense Test Talk) HH, MEG 40+5
Abstract:

Computational simulation has become instrumental in the design process in automotive engineering. Simulations can be repeated with varied parameter settings, representing many possible design choices. The engineers' goal is to generate useful knowledge from the simulations' results. Computational analysis is widely used and necessary, but not always sufficient. This thesis presents techniques and methods for the interactive visual analysis (IVA) of simulation data sets. Compared to computational methods, IVA offers new and different analysis opportunities.

We introduce a data model that represents the results of repeated simulations as families of function graphs. Well-known InfoVis plots and visualization techniques for families of function graphs are integrated into a coordinated multiple views framework. Focus+context visualization and iteratively defined compositions of brushes promote information drill-down. We propose glyph-based spatio-temporal visualizations for rigid and elastic multibody systems. We integrate the on-demand computation of derived data attributes of families of function graphs into the analysis workflow to facilitate the selection of deeply hidden data features. The system supports interactive knowledge discovery: analysts can explore data features and relations, and generate, verify, or reject hypotheses with visual tools, thereby gaining more insight into the data. They can solve complex tasks such as parameter sensitivity analysis and optimization. We discuss common tasks in the analysis of data containing families of function graphs.
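The data model of families of function graphs, together with a brush on a derived attribute, can be pictured as follows; the attribute and the example data are purely illustrative.

```python
import numpy as np

def brush_family(family, derived, threshold):
    """family:  {run_id: (t, y)} one function graph per simulation run,
               with t and y as equally long 1D NumPy arrays
    derived:   callable(t, y) -> scalar derived attribute (e.g. overshoot, slope)
    threshold: runs whose derived attribute exceeds it form the focus set
    """
    focus, context = [], []
    for run_id, (t, y) in family.items():
        (focus if derived(t, y) > threshold else context).append(run_id)
    return focus, context

# Hypothetical family of five runs with increasing amplitude
t = np.linspace(0.0, 1.0, 100)
family = {i: (t, (0.5 + 0.2 * i) * np.sin(2.0 * np.pi * t)) for i in range(5)}
focus, context = brush_family(family, lambda t, y: y.max(), 1.0)   # focus: runs 3 and 4
```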