
Previous Talks

Speaker: Knut Hartmann (University of Magdeburg)

Scientific and technical textbooks, documentation, and visual dictionaries should convey fairly complex subject matter in an easy-to-understand way. These types of learning materials rely heavily on illustrations to achieve several tasks in parallel: 1. introduce a large number of unknown terms, either in a domain-specific or a foreign language; 2. explain complicated spatial configurations; 3. provide classifications and descriptions for domain entities; and 4. pinpoint the reader's attention to important features in the illustration. These illustrations are therefore carefully tuned to the above communicative functions. Moreover, the illustrations have to be coordinated with the associated text segments, which is mainly achieved by establishing links between visual and textual elements. In practice, human illustrators employ a number of techniques: labels, legends, and figure captions, which provide denotations, technical terms, and descriptions for visual elements. An interactive 3D browser is well suited to exploring complex spatial configurations (Task 2) and can ease the mental integration of visual and textual information (Tasks 1 and 3) through a synchronized object selection and highlighting mechanism (Task 4). Moreover, the properties of the visual elements (e.g., viewing direction, graphical emphasis techniques) as well as the layout of the visual and textual elements can be adjusted to user-specific requirements. This scenario raises a challenging dynamic layout problem which has to be solved by automated, real-time layout algorithms. This talk presents a novel system that integrates 3D information with dynamic textual annotations, along with real-time layout algorithms for annotation placement. The demo compares several implemented layout styles with their hand-drawn counterparts in order to demonstrate the flexibility of the system.
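The dynamic annotation-placement problem can be sketched as a greedy search over candidate label positions around each anchor point. This is only an illustrative toy, not the talk's actual real-time algorithm; the function name, candidate offsets, and overlap test are assumptions.

```python
def place_labels(anchors, label_w, label_h, candidates=None):
    """Greedy 2D label placement: for each anchor, try candidate offsets and
    keep the first label box that overlaps no previously placed box."""
    if candidates is None:
        # four standard positions: right, left, above, below the anchor
        candidates = [(5, 0), (-5 - label_w, 0), (0, 5), (0, -5 - label_h)]

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    placed = []
    for (x, y) in anchors:
        for (dx, dy) in candidates:
            box = (x + dx, y + dy, label_w, label_h)
            if not any(p is not None and overlaps(box, p) for p in placed):
                placed.append(box)
                break
        else:
            placed.append(None)  # no conflict-free slot found for this anchor
    return placed
```

A real-time system would additionally score candidates (distance to anchor, leader-line crossings) and re-solve every frame as the view changes.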

Details

Category

Duration

45
Host: K. Matkovic

Speaker: Jan Meseth (Universität Bonn)

Rendering highly realistic images requires, among other things, suitable materials for covering modelled objects. Despite the big advantages of parametric material models (e.g., compact storage and efficient and/or intuitive modification by adjusting, interpolating, or otherwise mixing parameters), the best results are currently achieved with material representations derived from sampling real materials. Due to physical limitations, the spatial extent of sampled materials is limited. This limitation can be overcome by texture synthesis methods, which today handle most types of materials in a satisfying way. In this talk a method for capturing and reproducing regular structures in materials is presented, based on fractional Fourier analysis. The approach is applied to texture synthesis to enable fully automatic handling of the special yet highly relevant class of near-regular textures, which was not possible previously. Since regular structures are captured by a parametric model, the approach marks a step towards automatic high-quality modelling of real-world materials with parametric models.
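The core idea of capturing regular structure in the frequency domain can be illustrated with a plain FFT. The talk's fractional Fourier analysis is considerably more involved; this 1D sketch only shows the underlying principle of recovering the period of a near-regular signal from its dominant spectral peak.

```python
import numpy as np

def dominant_period(signal):
    """Estimate the period of a (nearly) regular 1D structure from the
    strongest non-DC peak of its Fourier magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return len(signal) / k                 # period in samples

# a noisy signal repeating every 16 samples
x = (np.sin(2 * np.pi * np.arange(256) / 16)
     + 0.1 * np.random.default_rng(0).normal(size=256))
```

For a 2D near-regular texture, the analogous peaks in the 2D spectrum give the lattice vectors of the repeating structure.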

Details

Category

Duration

45
Host: T. Fuhrmann

Speaker: David Luebke (University of Virginia)

The ultimate display will not show images. To drive the display of the future, we must abandon our traditional concepts of pixels, of images as grids of coherent pixels, and of imagery as a sequence of images. So what is this ultimate display? One thing is obvious: the display of the future will have incredibly high resolution. A typical monitor today has 100 dpi, far below a satisfactory printer. Several technologies offer the prospect of much higher resolutions; even today you can buy a 300 dpi e-book. Accounting for hyperacuity, one can make the argument that a "perfect" desktop-sized monitor would require about 6000 dpi; call it 11 gigapixels. Even if we don't seek a perfect monitor, we do want large displays. The very walls of our offices should be active display surfaces, addressable at a resolution comparable to or better than current monitors. It's not just spatial resolution, either. We need higher temporal resolution: hardcore gamers already use single buffering to reduce delays. The human factors literature justifies this: even 15 ms of delay can harm task performance. Exotic technologies (holographic, autostereoscopic...) only increase the spatial, temporal, and directional resolution required. Suppose we settle for 1 gigapixel displays that can refresh at 240 Hz, roughly 4000x typical display bandwidths today. Recomputing and refreshing every pixel every time is a Bad Idea, for power and thermal reasons if nothing else. We will present an alternative: discard the frame. Send the display streams of samples (location+color) instead of sequences of images. Build hardware into the display to buffer and reconstruct images from these samples. Exploit temporal coherence: send samples less often where imagery is changing slowly. Exploit spatial coherence: send fewer samples where imagery is low-frequency. Without the rigid sampling patterns of framed renderers, sampling and reconstruction can adapt with very fine granularity to spatio-temporal image change.
Sampling uses closed-loop feedback to guide sampling toward edges or motion in the image. A temporally deep buffer stores all the samples created over a short time interval for use in reconstruction. Reconstruction responds both to sampling density and spatio-temporal color gradients. We argue that this will reduce bandwidth requirements by 1-2 orders of magnitude, and show results from our preliminary experiments.
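The "send samples less often where imagery is changing slowly" policy can be sketched as a per-tile sample budget proportional to temporal change. This is a deliberately simple stand-in for the closed-loop feedback controller described above; the tile decomposition and the epsilon floor are illustrative assumptions.

```python
def sample_budget(prev, curr, total_samples):
    """Distribute a fixed sample budget over image tiles in proportion to
    how much each tile changed since the last reconstruction pass."""
    # per-tile temporal change, with a small floor so no tile starves entirely
    change = [abs(c - p) + 1e-6 for p, c in zip(prev, curr)]
    total_change = sum(change)
    return [int(w / total_change * total_samples) for w in change]
```

A full frameless renderer would also weight by spatial frequency (fewer samples in low-frequency regions) and feed the reconstruction error back into the next round of sampling.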

Speaker: Prof. Charles Hansen (University of Utah, USA)

Is it ridiculous to think of the world as nothing but plastic? That is precisely the assumption most volume renderers make by using the Phong illumination model. Direct volume rendering has proven to be an effective and flexible visualization method for interactive exploration and analysis of 3D scalar fields. While widely used, most if not all applications render (semi-transparent) surfaces lit by an approximation to the Phong local surface shading model. This model renders surfaces simplistically (as plastic objects) and does not provide sufficient lighting information for good spatial acuity. In fact, the constant ambient term leads to misperception of information that limits the effectiveness of visualizations. Furthermore, the Phong shading model was developed for surfaces, not volumes. The model does not work well for volumetric media where sub-surface scattering dominates the visual appearance (e.g. tissue, bone, marble, and atmospheric phenomena). As a result, it is easy to miss interesting phenomena during data exploration and analysis. Worse, these types of materials occur often in modeling and simulation of the physical world. Physically correct lighting has been studied in the context of computer graphics, where it has been shown that the transport of light is computationally expensive for even simple scenes. Yet for visualization, interactivity is necessary for effective understanding of the underlying data. We seek increased insight into volumetric data through the use of more faithful rendering methods that take into consideration the interaction of light with the volume itself.
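For reference, the Phong model being criticized is easy to state: a constant ambient term plus diffuse and specular lobes, which is exactly why everything it shades looks like shiny plastic. A minimal sketch of the standard formulation (not code from the talk):

```python
import math

def _norm(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Classic Phong local illumination: constant ambient term plus diffuse
    and specular lobes. Note there is no scattering inside the medium at all."""
    n, l, v = _norm(normal), _norm(light_dir), _norm(view_dir)
    ndotl = _dot(n, l)
    r = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(n, l))  # mirrored light dir
    diffuse = kd * max(ndotl, 0.0)
    specular = ks * max(_dot(r, v), 0.0) ** shininess
    return ka + diffuse + specular
```

The constant `ka` is the ambient term the abstract singles out: it adds the same brightness everywhere, independent of the data, which is what leads to the misperception mentioned above.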

Details

Category

Duration

45+15
Host: MEG

Speaker: Baoquan Chen (University of Minnesota, Minneapolis)

Capturing and animating real-world scenes has attracted increasing research interest. To offer unconstrained navigation of the scenes, 3D representations are first needed. Advancement in laser scanning technology is making 3D acquisition feasible for objects of ever larger scales. However, outdoor environment scans exhibit the following properties: (1) incompleteness - a complete scan of every object in the environment is impossible to obtain due to self- and inter-object occlusion and the constrained accessibility of the scanner; (2) complexity - natural objects such as trees and plants are complex in terms of their geometric shapes; (3) inaccuracy - data can be unreliable due to scanning hardware limitations and the movement of objects, such as plants and trees, during the scanning process; and (4) large data size. These properties raise unprecedented challenges for existing methods. In this talk, I will describe our solutions towards addressing these challenges. They fall into two directions of approach: the first is artistic abstraction and depiction of point clouds, and the second is constructing full geometry out of limited scans.

Speaker: Prof. Reinhard Klein (Universität Bonn, Institut für Informatik II, Germany)

Despite recent advances in finding effective LOD-Representations for gigantic 3D objects, rendering of complex, gigabyte-sized models and environments is still a challenging task, especially under real-time constraints and high demands on the visual accuracy. In the first part of this talk I will give an overview over our recent results on the simplification and efficient hybrid rendering of complex meshes and point clouds. After introducing the general hierarchical concept I will present two hybrid LOD algorithms for real-time rendering of complex models and environments. In the first approach we use points and triangles as the basic rendering primitives. To preserve the appearance of an object a special error measure for simplification was developed which allows us to steer the LOD generation in such a way that the geometric as well as the appearance deviation is bounded in image space. A novel hierarchical approach supports the efficient computation of the Hausdorff distance between the simplified and original mesh during simplification. In the second approach we refrain from using triangles in combination with points. Instead we replace most of the points by planes. Using these planes the filtering and therefore the rendering quality is comparable to elaborate point rendering methods but significantly faster since it is supported in hardware. In the second part we concentrate on efficient GPU based rendering of Trimmed Non-Uniform Rational B-Spline surfaces (NURBS). Due to the irregular mesh data structures required for trimming there were no algorithms that exploit the GPU for tessellation so far. Instead, all recent approaches perform a pre-tessellation and use level-of-detail techniques in order to deal with complex Trimmed NURBS models. In contrast to a simple API these methods require tedious preparation of the models before rendering. In addition this pre-processing hinders interactive editing. 
With our new method the trimming region can be defined by a trim-texture that is dynamically adapted to the required resolution and allows for efficient trimming of surfaces on the GPU. Combining this new method with a GPU-based tessellation of cubic rational surfaces allows a new rendering algorithm for arbitrary trimmed NURBS and even T-Spline surfaces with prescribed error in screen space on the GPU. The performance exceeds current CPU-based techniques by a factor of about 200 and makes real-time visualization of trimmed NURBS and T-Spline surfaces possible on consumer-level graphics cards.
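The role of a trim-texture can be illustrated by rasterizing a trimming loop into a boolean mask over the (u, v) parameter domain: a shaded surface point is discarded when its parametric coordinates fall outside the mask. This CPU sketch shows only the concept; the talk's method builds the texture on the GPU at dynamically adapted resolution, and the even-odd test here is an illustrative choice.

```python
def point_in_polygon(u, v, poly):
    """Even-odd rule point-in-polygon test in the (u, v) parameter domain."""
    inside = False
    n = len(poly)
    for i in range(n):
        (u1, v1), (u2, v2) = poly[i], poly[(i + 1) % n]
        if (v1 > v) != (v2 > v):  # edge spans the scanline through v
            if u < u1 + (v - v1) * (u2 - u1) / (v2 - v1):
                inside = not inside
    return inside

def trim_texture(poly, res):
    """Rasterize the trimming region into a res x res boolean mask,
    sampled at texel centres of the unit parameter square."""
    return [[point_in_polygon((i + 0.5) / res, (j + 0.5) / res, poly)
             for i in range(res)] for j in range(res)]
```

At render time the per-fragment work reduces to one texture lookup at (u, v), which is why the trimming test becomes cheap on graphics hardware.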

Details

Category

Duration

45+10
Host: MW

Speaker: David Laidlaw (Brown University)

The speaker will present the results of several experiments to evaluate visualization environments. Together, the results help to explain some of the tradeoffs between large-format 3D virtual-reality displays (e.g., a Cave) and other display formats. All of the results are motivated by the belief that immersive virtual reality has the potential to accelerate the pace of scientific discovery for scientists studying large complicated 3D problems. The results the speaker will present come from experiments, which represent a number of different approaches: first, anecdotal reports about scientists using visualization applications; second, performance measurements of non-expert subjects on abstracted tasks; third, evidence about the impact of the virtual environment on performance; and fourth, subjective evaluations by visual design experts. As might be expected when asking which displays performed better, the answer is it depends on the scientific application, on the tasks used in evaluations, and on the details of the display technologies. The speaker will conclude with some thoughts on how the different evaluation approaches complement each other to give a more complete picture.

Speaker: Dr. Christopher Giertsen (Christian Michelsen Research Bergen, Norway)

The process of locating oil reserves and positioning new oil wells involves many complex data types and many professional disciplines. The data sets are often extremely large, irregular, three-dimensional, and dynamic, and may include many associated measured or simulated parameters. It is a great challenge to visualize and analyze such data, particularly when data sets from different disciplines need to be combined and manipulated simultaneously in real time.
This talk presents an overview of a long-term research project, where the aim has been to make use of large screen visualization and virtual reality interaction in order to improve critical oil company work processes. First, the project idea and the most important data types will be described. Then, some of the new visualization methodology and interaction techniques developed in the project will be reviewed. This also includes an outline of unsolved visualization research issues. Finally, the business impact of the project results will be summarized.

Details

Category

Duration

45+15
Host: MEG

Speaker: Timo Aila (Helsinki University of Technology)

This talk will cover two new algorithms for rendering physically-based soft shadows. The first method replaces the hundreds of shadow rays commonly used in stochastic ray tracers with a single shadow ray and a local reconstruction of the visibility function. Compared to tracing the shadow rays, our algorithm produces exactly the same image while executing one to two orders of magnitude faster in the test scenes used. Our first contribution is a two-stage method for quickly determining the silhouette edges that overlap an area light source, as seen from the point to be shaded. Secondly, we show that these partial silhouettes of occluders, along with a single shadow ray, are sufficient for reconstructing the visibility function between the point and the light source.
The second method does not cast shadow rays. Instead, we place both the points to be shaded and the samples of an area light source into separate hierarchies, and compute hierarchically the shadows caused by each occluding triangle. This yields an efficient algorithm with memory requirements independent of the complexity of the scene.
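The quantity both methods compute is the visibility function: the fraction of area-light samples reachable from the shaded point by an unblocked shadow ray. A brute-force 2D reference makes this concrete (this is the baseline the silhouette-based reconstruction replaces, not the authors' algorithm; occluders here are assumed to be 2D line segments):

```python
def segments_intersect(p, q, a, b):
    """2D segment intersection via orientation tests (colinear cases ignored)."""
    def cross(o, u, w):
        return (u[0] - o[0]) * (w[1] - o[1]) - (u[1] - o[1]) * (w[0] - o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visibility(point, light_samples, occluders):
    """Fraction of area-light samples visible from `point`: for each sample,
    cast a shadow ray and test it against every occluder segment."""
    seen = sum(1 for s in light_samples
               if not any(segments_intersect(point, s, a, b)
                          for (a, b) in occluders))
    return seen / len(light_samples)
```

Stochastic ray tracers evaluate exactly this with hundreds of rays per shaded point; the talk's first method reproduces the same value from the occluders' partial silhouettes plus a single shadow ray.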

Details

Category

Duration

45+15
Host: MW

Speaker: Denis Gracanin (Virginia Tech University)

Details

Category

Duration

45+15
Host: H. Hauser

Speaker: Matthias Teschner (University of Freiburg)

The realistic simulation of complex deformable objects at interactive rates comprises a number of challenging problems, including deformable modeling, collision detection, and collision response.
1. The deformable modeling approach has to provide interactive update rates, while guaranteeing a stable simulation. Furthermore, the approach has to represent objects with varying elasto-mechanical properties.
2. The collision detection algorithm has to handle geometrically complex objects, and also large numbers of potentially colliding objects. In particular, the algorithm has to consider the dynamic deformation of all objects.
3. The collision response method has to handle colliding and resting contacts among multiple deformable objects in a robust and consistent way. The method has to consider the fact that only sampled collision information is available due to the discretized object representations and the discrete-time simulation.

The presentation discusses solutions to the aforementioned simulation aspects. Interactive software demonstrations illustrate all models, algorithms, and their potential for applications such as surgery simulation.
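A deformable model of the general kind discussed in point 1 can be sketched as one explicit time step of a mass-spring system. This is illustrative only: mass-spring with explicit Euler is the simplest choice, and the stability requirement stated above would in practice demand a more robust formulation and integrator.

```python
def step(positions, velocities, springs, masses, dt, k=100.0, damping=0.98):
    """One explicit integration step of a 3D mass-spring deformable model.
    springs is a list of (i, j, rest_length) triples over particle indices."""
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    for (i, j, rest) in springs:
        d = [positions[j][a] - positions[i][a] for a in range(3)]
        length = sum(c * c for c in d) ** 0.5
        f = k * (length - rest)            # Hooke's law along the spring axis
        for a in range(3):
            forces[i][a] += f * d[a] / length
            forces[j][a] -= f * d[a] / length
    for p in range(len(positions)):
        for a in range(3):
            velocities[p][a] = damping * (velocities[p][a]
                                          + dt * forces[p][a] / masses[p])
            positions[p][a] += dt * velocities[p][a]
    return positions, velocities
```

Varying `k` and `masses` per region is one simple way to represent the varying elasto-mechanical properties mentioned in point 1.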

Speaker: Prof. Heidrun Schumann (University of Rostock, Germany)

Visual data mining denotes the combination of automatic and visual methods for the effective exploration of complex data sets. The automatic methods are drawn in particular from knowledge discovery and statistics; the visual methods come from information visualization. Previous approaches assume abstract data. This talk discusses specific extensions to this approach.

The first question is to what extent these concepts can be carried over to the exploration of structures. To this end, several methods from graph theory are selected with the goal of supporting the automatic computation of structural properties within visual data mining. A framework implementing this functionality is presented.

The second topic is improving the usability of visual data mining. Two problem areas are addressed:

  • The design of a history management that enables undo, redo, and the reuse of analysis sessions,
  • The design of special lens techniques that operate at different stages of the visualization pipeline and, as needed, display additional information or aggregate and hide information.

Details

Category

Duration

45+15
Host:  

Speaker: Janos Schanda (University of Veszprém, Hungary)

The Colour and Multimedia Laboratory is part of the Image Processing and Neurocomputing Department of the Faculty of Technical Informatics of the University of Veszprém. Its task is to teach the fundamentals of computer image capture and display devices, to offer multimedia and virtual reality courses, and to tutor on visual fundamentals such as visual ergonomics and colour science, covering both physical processes and psychophysical, human-related issues. Accordingly, the Laboratory works in three sub-groups:

  • Physical and visual fundamentals, such as spectral sensitivity investigations of the human observer under large-field and mesopic conditions, the study of the spectral and spatial dependence of glare, colour rendering (especially for LED lighting), and some colour technology issues.
  • Technology and application of multimedia and virtual reality, especially for handicapped persons, both as tutorial help for children and in rehabilitation (mainly for stroke patients).
  • Colour memory effects: both short- and long-term memory effects, colour preference, and the influence of the background (induction effects).

Details

Category

Duration

45+15
Host: WP

Speaker: Zsolt Toth (University of Comenius at Bratislava, Slovakia)

Visually pleasant image reconstruction plays an important role in computer graphics. In our work we explore the applicability of triangulations for image reconstruction. Two new algorithms are introduced for the generation of data-dependent triangulations. The new deterministic algorithm, called the image partitioning algorithm (IPA), shifts this reconstruction method closer to practical use. We also present a new modification of simulated annealing with a generalized look-ahead process (SALA). In addition, a new way of utilizing color information is presented to achieve high-quality reconstruction of color images. Results show both theoretical and practical advantages over other methods. This work is part of the APVT project Virtual Bratislava.

Speaker: Xavier Decoret (Grenoble)

Over the past years, impostors and image-based simplification have been proposed to replace complex geometry with simpler meshes and appropriate replacement textures. Along this path, the Billboard Clouds approach approximates the global shape of an object with a small set of planes and uses semi-transparent textures to capture finer details such as silhouettes. The problem is cast as a geometric cover, where a minimal set of planes is sought that intersects the "regions of validity" of the model's faces. In view-independent billboard clouds (where the BC simplification must be usable from any viewpoint), those regions are defined by spheres around the vertices indicating the maximum displacement allowed during simplification. In view-dependent cases (where the BC simplification is to be used for a given viewcell), the definition of the validity region involves an accurate computation of the reprojection error. In this talk, we will briefly present view-independent billboard clouds and then introduce our recent results on their extension to the view-dependent case.
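The view-independent cover criterion described above (a candidate plane is valid for a face when it passes through the validity sphere of each of its vertices) reduces to point-plane distance tests. A minimal sketch, with illustrative names; the plane is given as a unit normal n and offset d with n.x = d:

```python
def plane_covers_face(plane, face_vertices, epsilon):
    """True when the plane intersects the validity sphere (radius epsilon)
    of every vertex of the face, i.e. the plane-to-vertex distance stays
    within the maximum displacement allowed during simplification."""
    (nx, ny, nz), d = plane
    for (x, y, z) in face_vertices:
        dist = abs(nx * x + ny * y + nz * z - d)
        if dist > epsilon:
            return False
    return True
```

The geometric cover then searches for a small set of planes such that every face of the model is covered by at least one of them.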

Details

Category

Duration

45 + 15

Speaker: Pekka Pehkonen (University of Oulu, Finland)

Hypermedia enables associative modeling and management of information. One of the topics studied in the ATELIER project was the potential of hypermedia for location-based data. A special tool called E-Diary was developed for collecting multimedia data and location information during remote visits and for storing them in a hypermedia database. Hypermedia combined with physical input devices provided interesting new ways to browse the collected data, including gesture-based navigation. This presentation illustrates the framework and tools used to study the use of hypermedia with location-based data.

Details

Category

Duration

45 + 15

Speaker: Cecilia Sikne Lanyi (University of Veszprém, Hungary)

The multimedia and virtual reality projects of our Laboratory during the last ten years can be summarized as follows:

  • Tutorial and entertainment programs for handicapped children
  • Rehabilitation programs for stroke patients and persons with phobias

We have developed multimedia software for handicapped children with different impairments: partial vision, hearing difficulties, loco-motive difficulties, mental retardation, dyslexia, etc. We show the advantages of multimedia software for developing handicapped children's skills.

What are the advantages of multimedia software to develop handicapped children's skills?

  • It is an audiovisual medium.
  • It is interactive.
  • The treatment or situation can be reproduced; the same condition can be repeated several times.
  • The display presentation can be set according to the patient's visual acuity: the size, form, contrast, colour, and line width of the objects and the background can be selected to best suit the patient.
  • It can be adjusted to individual needs.
  • Multimedia systems address more than one sense and can therefore be more effective.
  • It can foster creativity and can be varied.
  • It is like a game: the child does not experience the exercise as a chore; he/she enjoys it.
  • The child feels the success.
  • One can use motivating audio feedback.
  • It can be used both in individual and small-group therapy.
  • Parents can also use it successfully.

Most important is that the child gets interested and that this interest is kept over long periods of time. This is not an easy task, but multimedia presentations are very effective in this respect too. "Games" can be included in the multimedia programs. We show the special needs of handicapped children that have to be considered when developing multimedia software.

What are the user interface design questions in developing multimedia software for handicapped children?

  • Draw pictures with thick contour lines for children with low vision,
  • Use short sentences for mentally handicapped children,
  • Design the navigation tools for children with loco-motive difficulties,
  • Provide sounds as well: hearing-impaired children want them too.

Rehabilitation programs for stroke patients and persons with phobias.

We developed a computer-controlled method which, in contrast to methods used internationally, enables not only the establishment of a diagnosis but also the measurement of the effectiveness of the therapy. It allows us to produce a database of the patients that contains not only their personal data but also the results of the tests, their drawings, and audio recordings. It is an intensive therapeutic test and contains tutorial programs. We are now collecting the test results in this project. We developed several virtual worlds for treating phobias: a virtual balcony, a ten-storey building with an external glass elevator, and an internal glass elevator in the virtual Atrium Hyatt hotel. We also developed virtual environments for claustrophobia: a closed lift and a room whose walls can be moved. For specific phobias (fear of travelling) we modelled underground travel in Budapest. For education we also developed virtual shopping software for autistic children. I will show the advantages of virtual reality in the investigation, evaluation, and therapy of perception, behaviour, and neuropsychological studies.

Details

Category

Duration

35+10
Host: WP

Speaker: Georg Glaeser (Universität für Angewandte Kunst, Wien)

Kepler's laws and concepts such as precessional motion or the equation of time are presented didactically by means of computer animations, making them intuitively clear within a short time. Why is the winter half-year five days shorter than the summer half-year? How are the equinoxes characterized geometrically? Why are we just now entering the Age of Aquarius? Finally, a realistic simulation of a minute-accurate sundial is shown, whose functioning would be completely incomprehensible without the aforementioned concepts.

Details

Category

Duration

30+10
Host: MEG

Speaker: Francois Faure (University Joseph Fourier, Grenoble)

We present a new framework to efficiently trade accuracy for speed in collision detection between deformable objects. It combines conventional proximity detection based on hierarchical bounding volumes with a stochastic method for collision detection. The hierarchical method selects regions of possible collisions. The stochastic method randomly selects pairs of geometric primitives in these regions and makes them iteratively converge to local distance minima. By tuning the number of active pairs, a trade-off between complete detection and computation speed is obtained. Preliminary results exhibit significant speedups over previous approaches.
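The stochastic step, a randomly chosen primitive pair descending to a local distance minimum, can be sketched on two polyline "surfaces". This is a 2D toy under an assumed neighbour structure (adjacent polyline indices), not the paper's implementation:

```python
def converge_pair(surface_a, surface_b, i, j, iterations=50):
    """Local descent on one primitive pair: repeatedly move each index to
    the neighbour that shrinks the pair distance, converging to a local
    minimum of the distance between the two polylines."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    for _ in range(iterations):
        moved = False
        for ni in (i - 1, i + 1):
            if 0 <= ni < len(surface_a) and \
                    d2(surface_a[ni], surface_b[j]) < d2(surface_a[i], surface_b[j]):
                i, moved = ni, True
        for nj in (j - 1, j + 1):
            if 0 <= nj < len(surface_b) and \
                    d2(surface_a[i], surface_b[nj]) < d2(surface_a[i], surface_b[j]):
                j, moved = nj, True
        if not moved:
            break
    return i, j
```

Running many such pairs in parallel, seeded randomly inside the regions the bounding-volume hierarchy flags, is what lets the method trade completeness for speed: more active pairs find more local minima.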

Details

Category

Duration

20+10
Host:  

Speaker: Wolfgang Stürzlinger (York University Toronto, Canada)

The dynamic range of many real-world environments exceeds the capabilities of current display technology by several orders of magnitude. In this talk I present the results of a collaborative research project, namely the design of two different display systems that are capable of displaying images with a dynamic range much closer to that encountered in the real world. One system can be built from off-the-shelf components; the other relies on a custom backlighting system. Software issues, as well as the advantages and disadvantages of the two designs, are discussed together with potential applications.

Details

Category

Duration

30+5
Host: MEG

Speaker: Ragnar Bade (Universität Magdeburg, Deutschland)

In this talk we will first discuss a case-based educational system for treatment decision-making and intervention planning of liver tumors. We will focus on the appropriateness and development of visualization techniques for exploring patient specific data in a problem-oriented learning environment. In this framework, NPR-techniques such as silhouettes and hatching lines are discussed. In the second part, we outline a method for visualization of anatomic tree structures, such as vascular and bronchial trees by means of convolution surfaces. We will go into detail of the filter design to achieve a correct visualization of the vessel diameter and avoid irritating bulges and unwanted blending. Afterwards examples and validation details are presented and discussed.

Details

Category

Duration

45+10
Host: EG

Speaker: Gudrun Klinker (Technische Universität München, Deutschland)

When people hear the term "Augmented Reality" (AR) they currently first think of a head-mounted display and a local tracking system which superimposes virtual information onto a user's field of view. Recently, research into AR setups has started to move away from this primary setup. Information is presented not only in head-mounted displays but also on world-registered monitors that can be attached to portable instruments or carefully arranged within a ubiquitous environment. Furthermore, the increasing need for mobile AR applications requires tracking arrangements to be laid out in a more global scheme integrating a number of heterogeneous trackers. To this end, a ubiquitously available tracking system needs to let mobile users establish dynamic connections to various tracking services. This talk will present several examples which show the confluence of concepts from ubiquitous computing and augmented reality.

Details

Category

Duration

45+15
Host: DS

Speaker: Karol Myszkowski (Universität Saarbrücken, Deutschland)

Due to rapid technological progress in high dynamic range (HDR) video capture and display, the efficient storage and transmission of such data is crucial for the completeness of any HDR imaging pipeline. We propose a new approach for inter-frame encoding of HDR video, which is embedded in the well-established MPEG-4 video compression standard. The key component of our technique is luminance quantization that is optimized for the contrast threshold perception in the human visual system. The quantization scheme requires only 10--11 bits to encode 12 orders of magnitude of visible luminance range and does not lead to perceivable contouring artifacts. Besides video encoding, the proposed quantization provides perceptually-optimized luminance sampling for fast implementation of any global tone mapping operator using a lookup table. To improve the quality of synthetic video sequences, we introduce a coding scheme for discrete cosine transform (DCT) blocks with high contrast. We demonstrate the capabilities of HDR video in a player, which enables decoding, tone mapping, and applying post-processing effects in real-time. The tone mapping algorithm as well as its parameters can be changed interactively while the video is playing. We can simulate post-processing effects such as glare, night vision, and motion blur, which appear very realistic due to the usage of HDR data.