Find the upcoming dates on this page.

Previous Talks

Speaker: Ole Ciliox (ZKM Karlsruhe, Germany)

The development of virtual and augmented reality applications has traditionally relied on expensive workstation hardware and custom-developed software. Today, this approach no longer seems reasonable: the development of a modern rendering pipeline alone can consume several man-months. Modern game engines, however, already incorporate highly efficient rendering pipelines that could support application development at a higher level. This talk introduces important terms and definitions, presents examples such as the adaptation of game engines to spatially immersive projector-based displays (SIDs), and discusses problems that are inherent to game engines.

Speaker: Yun Jang (Swiss National Supercomputing Centre)

Functional approximation of scattered data is a popular technique for compactly representing various types of datasets in computer graphics, including surface, volume, and vector datasets. Typically, sums of Gaussians or similar radial basis functions are used in the functional approximation, and PC graphics hardware is used to quickly evaluate and render these datasets. While truncated radially symmetric basis functions are quick to evaluate and simple to optimize during encoding, they are not the most appropriate choice for data that is not radially symmetric, and they are especially problematic for representing linear, planar, and many non-spherical structures. We therefore extend the functional approximation system to more general basis functions, such as ellipsoidal basis functions (EBFs), which provide greater compression and visually more accurate encodings of volumetric scattered datasets. In addition to static data approximation, temporal data is encoded using the results from the previous timestep to speed up the encoding. Moreover, as part of our visual analytics work, we have developed tools for zoonotic syndromic surveillance, linked animal and human visual analytics for healthcare surveillance, network visualization, and more. In this talk, we will introduce these visual analytics tools and discuss their applications.
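
As a rough illustration of the kind of encoding described above (not the speaker's actual system; all names and values here are illustrative assumptions), a sum of anisotropic Gaussian (ellipsoidal) basis functions can be evaluated as follows, with weights, centers, and inverse-covariance matrices assumed to come from a prior optimization step:

```python
import numpy as np

def evaluate_ebf(points, centers, inv_covs, weights):
    """Evaluate a sum of ellipsoidal (anisotropic Gaussian) basis functions.

    points   : (P, 3) sample positions
    centers  : (N, 3) basis function centers
    inv_covs : (N, 3, 3) inverse covariance matrices (ellipsoid shape/orientation)
    weights  : (N,)   basis function amplitudes
    """
    values = np.zeros(len(points))
    for c, A, w in zip(centers, inv_covs, weights):
        d = points - c                     # offsets to this center
        # Mahalanobis-like distance d^T A d for every sample point
        r2 = np.einsum('pi,ij,pj->p', d, A, d)
        values += w * np.exp(-0.5 * r2)    # ellipsoidal Gaussian contribution
    return values

# toy usage: one elongated basis function approximating a linear structure
pts = np.random.rand(1000, 3)
ctr = np.array([[0.5, 0.5, 0.5]])
inv_cov = np.array([np.diag([1.0, 50.0, 50.0])])   # stretched along x
print(evaluate_ebf(pts, ctr, inv_cov, np.array([1.0])).shape)
```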

Duration: 45+15
Host: EV

Speaker: Francois Faure (University Joseph Fourier)

SOFA is a new open source framework primarily targeted at medical simulation research. Based on an advanced software architecture, it allows one to (1) create complex and evolving simulations by combining new algorithms with algorithms already included in SOFA; (2) modify most parameters of the simulation (deformable behavior, surface representation, solver, constraints, collision algorithm, etc.) by simply editing an XML file; (3) build complex models from simpler ones using a scene-graph description; (4) efficiently simulate the dynamics of interacting objects using abstract equation solvers; and (5) reuse and easily compare a variety of available methods. This talk highlights the key concepts of the SOFA architecture and illustrates its potential through a series of examples.

Duration: 30+15
Host: MEG

Speaker: David Williams (City University London)

Curved Planar Reformation (CPR) has proved to be a practical and widely used tool for the visualization of curved tubular structures within the human body. It has been useful in medical procedures involving the examination of blood vessels and the spine. However, it is more difficult to apply to large tubular structures such as the trachea and the colon, because abnormalities may be smaller relative to the size of the structure and may not have such distinct density and shape characteristics.
Our new approach, which we call 'Volumetric CPR', improves on this situation by using volume rendering for hollow regions and standard CPR for the surrounding tissue. This effectively combines grayscale contextual information with detailed color information from the area of interest. The approach is successfully used with each of the standard CPR types, and the resulting images are promising as an alternative to virtual flythroughs.
We show that lighting is non-trivial because of the deformation which occurs when the three-dimensional curved tubular structure is mapped to a two-dimensional image plane. We show how lighting and shading are computed in this scenario and how they can be used to maximize the user's understanding of the surface.
Lastly, we demonstrate that our new approach is a useful tool for displaying additional information not typically available during a flythrough, such as real-time surface coverage data or translucency rendering. We also show that, because Volumetric CPR provides an alternative view of the colon, it increases surface coverage from 86.8% (for a flythrough in each direction) to 99.2%, significantly improving the chances of detecting abnormalities.
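
Returning to the shading issue mentioned above: a minimal sketch of one plausible approach, assuming surface normals are estimated from the gradient of the original (undeformed) volume and only the resulting shading is carried over to the reformatted image. This is an illustration under those assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def shade_cpr_sample(volume, pos, light_dir, spacing=1.0):
    """Lambertian shading for one CPR sample.

    The surface normal is estimated by central differences in the original
    (undeformed) volume, so it is unaffected by the 3D-to-2D CPR mapping.
    """
    x, y, z = pos
    g = np.array([
        volume[x + 1, y, z] - volume[x - 1, y, z],
        volume[x, y + 1, z] - volume[x, y - 1, z],
        volume[x, y, z + 1] - volume[x, y, z - 1],
    ]) / (2.0 * spacing)
    n = g / (np.linalg.norm(g) + 1e-8)         # normalized gradient as normal
    l = light_dir / np.linalg.norm(light_dir)
    return max(np.dot(n, l), 0.0)              # diffuse term in [0, 1]

vol = np.random.rand(32, 32, 32)
print(shade_cpr_sample(vol, (16, 16, 16), np.array([0.0, 0.0, 1.0])))
```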

Duration: 30+10
Host: MEG

Speaker: Josef Neumüller (Zentrum für Anatomie und Zellbiologie)

Electron tomography (ET) is a powerful tool for visualizing the highly dynamic structures of organelles in living cells. Although investigation using transmission electron microscopy requires fixed and embedded material that is stable in high vacuum and under a high-voltage electron beam, the modern preparation method of high pressure fixation (HPF) allows snapshots of the arrangement of organelles in living cells to be obtained in relation to a particular experimental condition.
In order to obtain appropriate 3D data, tilt series from semithin sections (200-300 nm), cut parallel to the plane of the monolayer of cell cultures, are acquired using a Tecnai-20 200 kV transmission electron microscope (FEI, Eindhoven, The Netherlands) equipped with a eucentric goniometer. In addition, a rotation holder (Gatan, Inc., Pleasanton, USA) is used to orient rod-like structures parallel to the tilt axis and also for dual-axis acquisition. Series of tilted images (range: -70° to +70°) are acquired with a tilt increment of 1°. After holder calibration, displacements along the x, y and z axes are corrected by the Explore 3D acquisition software (FEI). The volume of the semithin sections is reconstructed by the back projection method into serial slices using the software package Inspect 3D (FEI). This software also includes an alignment tool based on cross correlation, which is a prerequisite for an appropriate reconstruction. Dual-axis reconstruction requires the acquisition of perpendicularly oriented tilt series. It is performed using the Matlab software platform and an advanced version of the "Tomo Toolbox", kindly provided by Dr. Jürgen Plitzko, Dept. of Molecular Structural Biology (Head: Prof. Dr. Wolfgang Baumeister), Max Planck Institute of Biochemistry in Martinsried near Munich.
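
For illustration only, a highly simplified, unfiltered back projection over a single tilt axis might look like the sketch below; the actual reconstruction is done with Inspect 3D, and real pipelines add filtering, weighting, and precise alignment.

```python
import numpy as np

def backproject(tilt_images, angles_deg):
    """Naive unfiltered back projection of a single-axis tilt series.

    tilt_images : (T, H, W) projections, tilt axis assumed along the image rows
    angles_deg  : (T,) tilt angles in degrees (e.g. -70 ... +70)
    Returns a (H, W, W) volume; purely illustrative, no filtering or weighting.
    """
    T, H, W = tilt_images.shape
    vol = np.zeros((H, W, W))
    zs, xs = np.meshgrid(np.arange(W) - W / 2, np.arange(W) - W / 2, indexing='ij')
    for img, a in zip(tilt_images, np.deg2rad(angles_deg)):
        # detector position for each (x, z) voxel column at this tilt angle
        u = (xs * np.cos(a) + zs * np.sin(a) + W / 2).round().astype(int)
        valid = (u >= 0) & (u < W)
        for y in range(H):                 # smear each projection row back
            vol[y][valid] += img[y, u[valid]]
    return vol / T

series = np.random.rand(141, 16, 64)       # 141 tilts from -70 to +70 degrees
print(backproject(series, np.linspace(-70, 70, 141)).shape)
```
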
3D models are generated by tracing the structures of interest in every slice with colored contours that are merged along the z axis with the help of the Amira 3.0 software (Mercury Computer Systems, Merignac Cedex, France). Models generated in this way can be rotated in space and presented as a movie.
The aim of this presentation is to introduce interesting applications from cell biology and to discuss problems and limitations in 3D visualization using the commercial software described above.

Duration: 40+10
Host: WP

Speaker: Marco Tarini (Università degli Studi di Pisa)

QuteMol is an open source (GPL), interactive, high-quality molecular visualization system. It exploits current GPU capabilities through OpenGL shaders to offer an array of innovative visual effects. QuteMol's visualization techniques are aimed at improving the clarity and ease of understanding of the 3D shape and structure of large molecules or complex proteins.
In this talk, the individual techniques implemented in QuteMol are presented (a small illustrative sketch of the ambient-occlusion idea follows the list below). These include:

  • Real Time Ambient Occlusion
  • Depth Aware Silhouette Enhancement
  • Ball and Sticks, Space-Fill and Liquorice visualization modes
  • High resolution antialiased snapshots for creating publication quality renderings
  • Automatic generation of animated GIFs of rotating molecules for web page animations
  • Interactive rendering of large molecules and proteins (>100k atoms)
  • Standard PDB input
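
As a toy illustration of the ambient-occlusion idea above (QuteMol's real implementation is GPU-based; the function, cutoff, and scale factor below are illustrative assumptions, not the actual algorithm), per-atom occlusion can be crudely approximated from nearby atoms:

```python
import numpy as np

def approx_atom_occlusion(centers, radii, cutoff=8.0):
    """Crude per-atom ambient-occlusion factor in [0, 1].

    Each neighbouring atom within `cutoff` contributes occlusion roughly
    proportional to the solid angle it subtends; 1.0 = fully lit.
    """
    n = len(centers)
    ao = np.ones(n)
    for i in range(n):
        d = np.linalg.norm(centers - centers[i], axis=1)
        mask = (d > 1e-6) & (d < cutoff)
        # solid-angle-like term (r_j / d_ij)^2, clamped for overlapping atoms
        occl = np.minimum((radii[mask] / d[mask]) ** 2, 1.0).sum()
        ao[i] = max(0.0, 1.0 - 0.25 * occl)   # 0.25 is an arbitrary scale
    return ao

centers = np.random.rand(500, 3) * 30.0       # fake atom positions (Angstrom)
radii = np.full(500, 1.5)
print(approx_atom_occlusion(centers, radii)[:5])
```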

Duration: 50+10
Host: SJ

Speaker: Robert S Laramee (University of Wales Swansea, UK)

Swirl and tumble motion are two important, common fluid flow patterns in computational fluid dynamics (CFD) simulations, typical of automotive engine simulation. We study and visualize swirl and tumble flow using several advanced flow visualization techniques: direct, geometric, texture-based, and feature-based. When illustrating these methods, we describe the relative strengths and weaknesses of each approach across the multiple spatio-temporal domains typical of an engineer's analysis. The result is the most comprehensive, systematic investigation of swirl and tumble motion ever performed. Based on this investigation we offer perspectives on where and when these techniques are best applied in order to visualize the behavior of swirl and tumble motion.

Duration: 40+5
Host: MEG

Speaker: Dr. Renate Sitte (Griffith University)

Duration: 40+10
Host: WP

Speaker: Sergi Grau (University of Barcelona)

In this presentation I introduce myself and briefly describe my research activities.

Duration: 15+5
Host: MEG

Speaker: Philip Willis (University of Bath)

Image compositing combines two or more images by overlaying them. For this to be meaningful, some of the image areas need to be less than perfectly opaque, so that the rearmost images can be seen. When Porter and Duff wrote their 1984 paper on image compositing, they used a four-channel colour model (r,g,b,a). The extra channel, called alpha, represents the opacity of the colour (r,g,b). We have recently shown that this is mathematically a projective space, which extends the range of use of the alpha colour model, including to applications beyond compositing. This published work will be described. We also have some very recent unpublished results and these too will be presented.
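
For reference, the classic Porter-Duff "over" operator for (r,g,b,a) pixels is sketched below using premultiplied colours, where the arithmetic is simplest (a non-premultiplied variant divides the colour result by the composite alpha); this is background material, not the new projective-space result of the talk.

```python
def over_premultiplied(fg, bg):
    """Porter-Duff 'over': composite foreground over background.

    fg, bg : (r, g, b, a) tuples with colour channels premultiplied by alpha.
    """
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    k = 1.0 - fa                      # how much of the background shows through
    return (fr + k * br, fg_ + k * bg_, fb + k * bb, fa + k * ba)

# 50% opaque red over opaque blue (premultiplied)
print(over_premultiplied((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# -> (0.5, 0.0, 0.5, 1.0)
```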

Duration: 45+15
Host: MEG

Speaker: Jean Pierre Charalambos (Universitat Politècnica de Catalunya)

We present a coherent hierarchical level of detail (HLOD) culling algorithm that employs a novel metric, taking visibility information into account, to refine an HLOD-based system. The information is gathered from the result of a hardware occlusion query (HOQ) performed on the bounding volume of a given node in the hierarchy. Although the advantages of doing this are clear, previous approaches treat refinement criteria and HOQs as independent subjects; for this reason, HOQs have been used restrictively, as if their result were boolean. In contrast, we fully exploit the results of the queries in order to take visibility information into account within the refinement conditions. We do this by interpreting the result of a given HOQ as the virtual resolution of a screen space in which the refinement decision takes place. In order to use our proposed metric both to refine the HLOD hierarchy and to schedule HOQs, we exploit the spatial and temporal coherence inherent to hierarchical representations. Despite the simplicity of our approach, in our experiments we obtained a substantial frame-rate boost compared to previous approaches, with minimal loss in image quality.
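
A rough sketch of the kind of refinement test described above (the function name, parameters, and exact weighting are assumptions for illustration, not the paper's actual formulation): the pixel count returned by the occlusion query discounts the screen-space error, so largely occluded nodes are refined less aggressively.

```python
def should_refine(screen_space_error, visible_pixels, projected_pixels,
                  tolerance, has_children):
    """Decide whether to refine an HLOD node, weighting its screen-space
    error by the fraction the occlusion query reports as visible
    (the 'virtual resolution' idea). Illustrative only."""
    if visible_pixels == 0:
        return False                       # fully occluded: render nothing
    visibility = visible_pixels / max(projected_pixels, 1)
    effective_error = screen_space_error * visibility
    return has_children and effective_error > tolerance

# node covering 10000 pixels but only 500 visible: error is heavily discounted
print(should_refine(screen_space_error=8.0, visible_pixels=500,
                    projected_pixels=10000, tolerance=2.0, has_children=True))
```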

Duration: 45+15
Host: J. Bittner

Speaker: Torsten Möller (Simon Fraser University, Canada)

In this talk we investigate the effects of function composition in the form g(f(x)) = h(x) by means of a spectral analysis of h. We decompose the spectral description of h(x) into a scalar product of the spectral description of g(x) and a term that solely depends on f(x) and that is independent of g(x). We then use the method of stationary phase to derive the essential maximum frequency of g(f(x)) bounding the main portion of the energy of its spectrum. This limit is the product of the maximum frequency of g(x) and the maximum derivative of f(x). This leads to a proper sampling of the composition h of the two functions g and f. We apply our theoretical results to a fundamental open problem in volume rendering -- the proper sampling of the rendering integral after the application of a transfer function.
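
In symbols (restating the result quoted above, not new material): if $\nu_g$ denotes the essential maximum frequency of $g$, the essential maximum frequency of the composition $h = g \circ f$ is bounded by

```latex
\nu_h \;\le\; \nu_g \cdot \max_x \lvert f'(x) \rvert ,
\qquad\text{suggesting a Nyquist-style sampling rate}\qquad
s \;\ge\; 2\,\nu_g \,\max_x \lvert f'(x) \rvert .
```

For volume rendering, with $g$ the transfer function and $f$ the scalar field along a viewing ray, this suggests adapting the ray step size to the product of the transfer function's maximum frequency and the local gradient magnitude of the data.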

Duration: 45+10
Host: MEG

Speaker: Klaus Müller (State University of New York at Stony Brook)

Fully 3D datasets have become ubiquitous in a wide range of disciplines, such as science, engineering, medicine, and even entertainment. There is a vast demand to efficiently create these data, as well as fuse, relate, and visualize them. In this talk I will report on our efforts in all of these domains. First I will discuss techniques that utilize GPUs for rapid tomographic volume reconstruction and even direct volume visualization from X-ray projection data. Then I will describe our Magic Volume Lens framework which fuses and augments different types of volumetric data at different scales into one composite representation, providing a variety of zoom lenses for focus+context GPU-accelerated viewing with semantic context.

Duration: 45+10
Host: MEG

Speaker: Leif Kobbelt (RWTH-Aachen, Germany)

Today the generation of raw 3D models has become quite easy. Typical sources for geometric data are 3D scanning, CAD system output, reconstructions from images and video, and so on. However, while these models usually have sufficient quality at first glance, the removal of inconsistencies and other optimizations are still necessary to make these raw models useful for downstream applications beyond mere display. Besides this basic mesh repair, one would also like to convert unstructured polygonal models into meshes in which the individual faces are of high quality in terms of aspect ratio and the degrees of freedom (i.e. vertices) are aligned to major geometric features. These are the global and the local aspects of remeshing techniques, respectively. In my talk I will present a number of mesh repair and mesh optimization techniques which are numerically robust and sufficiently efficient to process large datasets of realistic input quality.
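
As a small, generic illustration of the face-quality aspect mentioned above (a standard normalized aspect-ratio measure, not necessarily the one used in the talk): the quantity below is 1 for an equilateral triangle and approaches 0 for the degenerate triangles that remeshing tries to avoid.

```python
import math

def triangle_quality(a, b, c):
    """Normalized shape quality: q = 4*sqrt(3) * area / sum of squared edges."""
    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1], p[2] - q[2])
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    def norm2(u):
        return u[0]**2 + u[1]**2 + u[2]**2

    ab, ac, bc = sub(b, a), sub(c, a), sub(c, b)
    area = 0.5 * math.sqrt(norm2(cross(ab, ac)))
    edge_sq = norm2(ab) + norm2(ac) + norm2(bc)
    return 4.0 * math.sqrt(3.0) * area / edge_sq if edge_sq > 0 else 0.0

print(triangle_quality((0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3)/2, 0)))  # ~1.0
print(triangle_quality((0, 0, 0), (1, 0, 0), (2, 0.01, 0)))              # near 0
```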

Duration: 45+15
Host: MEG

Speaker: Timo Ropinski (Westfälische Wilhelms-Universität, Germany)

In this talk I will describe approaches towards user-oriented exploration of volumetric datasets. A visualization technique is presented which allows the user to emphasize certain regions of interest by interactively applying different visual appearances. In order to give better insight into these regions, occluding parts can be removed or visualized differently so that better focusing is possible. Since applying these strategies modifies the overall structure of the dataset, spatial comprehension may become more difficult. To diminish this effect, a visualization technique that supports depth perception is proposed.

Duration: 45+15
Host: IV

Speaker: Marie-Paule Cani (GRAVIR lab, INRIA & INP Grenoble, France)

Modeling convincing clothes and hair is essential for achieving realistic virtual humans. They are, however, among the most difficult features to achieve: modeling garments is currently very tedious with standard software (the user has to specify 2D patterns, position and assemble them in 3D around the character's body, and then run a costly physically based simulation, even if only a rest shape is needed). Hair styling either uses purely geometric approaches, which may lead to unrealistic results, or costly simulations. This talk presents some recent advances on both problems.
We first introduce a system that models realistic worn garments (i.e. locally developable surfaces, with the adequate folds and wrinkles caused by wrapping around the human body) from a single contour sketched by the user above a mannequin model. We validate the results by comparing the generated virtual garment with a real replica sewn from the 2D patterns we output. The second part of the talk covers hair modeling: we introduce a new Lagrangian, reduced-coordinates model called "Super-Helices", which is used to accurately discretize Cosserat's continuous model for elastic rods. We show that a static implementation of this model makes it possible to achieve very realistic hairstyles for arbitrary ethnic groups, and present the extension of the method to dynamic hair simulation.


Short bio:
Marie-Paule Cani is a Professor of Computer Science at the Institut National Polytechnique de Grenoble (INPG), France. She graduated from the Ecole Normale Supérieure in Paris and was awarded membership of the Institut Universitaire de France in 1999. She was paper co-chair of EUROGRAPHICS 2004, conference co-chair of IEEE Shape Modeling and Applications (SMI) 2005, and is paper co-chair of the ACM-EG Symposium on Computer Animation (SCA) 2006.
Her main research interests cover physically-based simulation, implicit surfaces applied to interactive modelling and animation, and the design of layered models incorporating alternative representations and LODs. Recent applications include pattern-based texturing, the animation of natural phenomena such as lava flows, ocean, vegetation and human hair, real-time virtual surgery, and interactive modeling techniques based on sculpting and sketching systems.

Duration: 45+15
Host: CL

Speaker: Prof. Bamler (TU München, Germany)

Synthetic Aperture Radar (SAR) is an active microwave imaging technique that delivers images of the Earth's surface from a satellite, independent of cloud cover and daylight. For more than a decade, SAR images have been routinely acquired by satellites for Earth observation. A pixel in a SAR image is characterized not only by its brightness; because of the coherent nature of the imaging process, it also carries information about the phase of the received radar wave. Combining several SAR images and comparing their phases pixel by pixel yields so-called SAR interferograms. Depending on the acquisition constellation, digital elevation models can be computed from these. Likewise, movements of the Earth's surface (volcanism, earthquakes, subsidence) or of glaciers between two acquisition times can be derived from SAR interferograms with accuracies down to the millimetre level.
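
For orientation, the millimetre-level sensitivity rests on the standard two-way differential phase relation (a textbook formula, not specific to this talk):

```latex
\Delta\varphi_{\text{defo}} \;=\; \frac{4\pi}{\lambda}\, d_{\text{LOS}}
\qquad\Longrightarrow\qquad
d_{\text{LOS}} \;=\; \frac{\lambda}{4\pi}\,\Delta\varphi_{\text{defo}} ,
```

where $\lambda$ is the radar wavelength (about 3.1 cm for the X-band TerraSAR-X) and $d_{\text{LOS}}$ the surface displacement along the line of sight; one full phase cycle thus corresponds to half a wavelength of motion.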

The talk introduces the technique of SAR interferometry, gives an overview of today's range of applications, and points out the potential of future SAR satellites such as the German TerraSAR-X.

Speaker: Erald Vucini (Istanbul Technical University, Turkey)

Speaker: Florian Schulze (VRVis, Austria)

Speaker: Raphael Bürger (Philipps-Universität Marburg, Germany)

Speaker: Anders Strand Vestbø (Nordic Neuro Lab, Norway)

Diffusion Tensor Imaging in MRI enables a non-invasive study of the three-dimensional architecture of axonal tracts in the central nervous system of the human brain. Efficient analysis and intuitive visualization of such structures becomes increasingly important as the technique is advancing from an experimental tool to a frequently used method for clinical evaluation.
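
For context, one standard scalar measure derived from the diffusion tensor's eigenvalues $\lambda_1,\lambda_2,\lambda_3$ and commonly used in such visualizations (a textbook definition, not necessarily part of this talk) is the fractional anisotropy:

```latex
\mathrm{FA} \;=\; \sqrt{\tfrac{3}{2}}\;
\frac{\sqrt{(\lambda_1-\bar\lambda)^2 + (\lambda_2-\bar\lambda)^2 + (\lambda_3-\bar\lambda)^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}},
\qquad \bar\lambda = \tfrac{1}{3}(\lambda_1+\lambda_2+\lambda_3).
```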

Speaker: Miguel Feixas (Universitat de Girona, Spain)

Viewpoint selection is an emerging area in computer graphics with applications in fields such as scene understanding, volume visualization, image-based modeling, and molecular visualization. We present an integrated framework for viewpoint selection and mesh saliency based on the definition of an information channel between a set of viewpoints and the polygons of an object. The mutual information of this channel is a powerful tool to deal with viewpoint selection and to represent the visibility of a mesh. In addition, the Jensen-Shannon divergence, closely related to mutual information, gives us a measure of viewpoint similarity and permits us to obtain the saliency of an object.
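
A minimal numeric sketch of the viewpoint-polygon information channel described above, assuming p(z|v) is taken as the normalized projected area of polygon z seen from viewpoint v and p(v) is uniform; this is a simplified reading of the framework, not the authors' code.

```python
import numpy as np

def viewpoint_mutual_information(area):
    """Mutual information I(V;Z) of the viewpoint-polygon channel.

    area : (V, Z) matrix, area[v, z] = projected area of polygon z from
           viewpoint v (0 if hidden). Rows are normalized into p(z|v).
    Returns I(V;Z) in bits and the per-viewpoint contributions, whose low
    values indicate 'representative' viewpoints in this framework.
    """
    p_v = np.full(area.shape[0], 1.0 / area.shape[0])        # uniform p(v)
    p_z_given_v = area / area.sum(axis=1, keepdims=True)
    p_z = p_v @ p_z_given_v                                   # marginal p(z)
    ratio = np.where(p_z_given_v > 0, p_z_given_v / p_z, 1.0)
    per_view = (p_z_given_v * np.log2(ratio)).sum(axis=1)     # KL(p(z|v) || p(z))
    return float(p_v @ per_view), per_view

areas = np.random.rand(6, 200)          # 6 candidate viewpoints, 200 polygons
mi, per_view = viewpoint_mutual_information(areas)
print(mi, per_view.argmin())            # lowest contribution ~ best viewpoint
```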

Duration: 30+10
Host: MEG

Speaker: Peter Kohlmann (Universität Siegen)

This presentation examines statistical models for transfer functions based on an initially generated set of manually assigned transfer functions, with respect to a very specific type of data set and a strictly delimited type of application. The process of transfer function design is thereby decoupled from specialized knowledge about the transfer function domain (intensity, gradient magnitude, etc.).
Transfer function design is difficult because of the high degrees of freedom and the lack of a truly goal-directed process. Existing approaches for automatic or semi-automatic transfer function design can be categorized into image-driven and data-driven techniques. To concentrate on the anatomical or functional structures which are interesting for the user, an application-driven method is needed. For a well-defined application scenario it is possible to reduce the complexity of transfer function generation by restricting the classification process to the structures of interest for a specific examination procedure.
At first, transfer functions are manually generated for an initial collection of volume data sets that has been recorded for one specific clinical purpose. A single transfer function is represented by a set of parameters of geometric primitives (ramps or trapezoids). Each of these individually assigned transfer functions can be regarded as a point sample in the (high-dimensional) parameter space of the transfer function model. From this set of point samples a statistical shape model is created by applying Principal Component Analysis. Based on this analysis, a higher-level transfer function model with only a very limited set of parameters is established, making the process of transfer function setup simple and intuitive.
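
A small sketch of the statistical-model step described above, assuming each manually designed transfer function has already been flattened into a fixed-length parameter vector of primitive positions, widths, and opacities; the function names and sizes are illustrative assumptions.

```python
import numpy as np

def build_tf_model(samples, n_modes=2):
    """PCA model over manually designed transfer function parameter vectors.

    samples : (N, D) matrix, one row per hand-tuned transfer function,
              D = number of primitive parameters (e.g. ramp/trapezoid knots).
    Returns the mean vector and the first n_modes principal components.
    """
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:n_modes]

def synthesize_tf(mean, modes, weights):
    """New transfer function = mean + weighted sum of principal modes."""
    return mean + np.asarray(weights) @ modes

# 12 hand-made transfer functions, each described by 20 primitive parameters
training = np.random.rand(12, 20)
mean, modes = build_tf_model(training, n_modes=2)
print(synthesize_tf(mean, modes, [0.5, -0.2]).shape)   # (20,)
```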

Duration: 30+10
Host: MEG