Find the upcoming dates on this page.

Previous Talks

Speaker: Alan Chalmers (University of Bristol)

The computer graphics industry, and in particular those involved with films, games and virtual reality, continues to demand more realistic multi-sensory computer-generated environments. In addition, there is an ever-increasing desire for multi-user networked interaction. Despite the ready availability of modern high-performance graphics cards, the complexity of the scenes being modelled, the need for interaction and the high fidelity required of the images and sound mean that synthesising such scenes is still simply not possible in a reasonable time, let alone in real time, on a single computer. Two approaches do, however, appear to offer the possibility of achieving high-fidelity virtual environments in real time: parallel processing and visual perception. Parallel processing has a number of computers working together to render a single image, which appears to provide almost unlimited performance; however, enabling many processors to work efficiently together is a significant challenge. Visual perception, on the other hand, takes into account that it is a human who will ultimately be looking at the resultant images, and while the human eye is good, it is not that good. Exploiting knowledge of the human visual system can save significant rendering time by simply not computing those parts of a scene which the human will fail to notice. This talk will consider how parallel processing and visual perception may be combined to achieve perceptual realism in real time. The application considered for this approach is the high-fidelity reconstruction of archaeological sites.

Details

Category

Duration

45+15
Host:  

Speaker: Omaira Rodriguez (Universidad Central de Venezuela)

Speaker: Dieter Fellner (Universität Braunschweig, Germany)

As if large collections of purely textual documents did not still pose a rich set of research challenges (e.g., robust and reliable algorithms for structuring, content extraction and information filtering) for generations of researchers, this presentation advocates a change in the interpretation of the term 'document': rather than seeing a document in the classical context of a 'paper' predominantly compiled of text with a few figures interspersed, we recommend adopting a more general view which considers a 'document' as an entity consisting of any media type appropriate to store or exchange information in a given context. Only this shift in the document paradigm will open new application fields to Digital Library (DL) technology, for the mutual benefit of DLs and application domains: DLs offering an unprecedented level of functionality, and (new) application domains (e.g., digital mock-up in engineering) benefiting from a more powerful DL technology. According to a study by Lyman et al. [3], the world produces between 1 and 2 exabytes (i.e., 10^18 bytes, or a billion gigabytes) of unique information per year. Of that vast amount of data, printed documents of all kinds comprise only 0.003%, the major share being taken by images, animations, sound, 3D models and other numeric data. Of course, a large and increasing proportion of the produced material is created, stored and exchanged in digital form, currently about 90% of the total. Yet little of this information is accessible through Digital Library collections. This presentation gives a motivation for a 'generalized view' on the term document and raises several issues stimulating research work in the field of Computer Graphics to make Digital Libraries of the future more accessible.

Details

Category

Duration

45 + 15

Speaker: Balazs Csebfalvi (Budapest University of Technology and Economics, Hungary)

In this paper a novel volume-rendering technique based on Monte Carlo integration is presented. As a result of preprocessing, a point cloud of random samples is generated, using a normalized continuous reconstruction of the volume as a probability density function. This point cloud is projected onto the image plane, and each pixel is assigned an intensity value proportional to the number of samples projected onto the corresponding pixel area. In this way a simulated X-ray image of the volume can be obtained. Theoretically, for a fixed image resolution, there exists a number of samples M such that the average standard deviation of the estimated pixel intensities is below the level of the quantization error, regardless of the number of voxels. Therefore Monte Carlo Volume Rendering (MCVR) is mainly proposed to efficiently visualize large volume data sets. Furthermore, network applications are also supported, since the trade-off between image quality and interactivity can be adapted to the bandwidth of the client/server connection by using progressive refinement.
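The sampling-and-projection idea can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a toy Gaussian-blob density on an assumed 32^3 grid and an orthographic projection along one axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy volume: a 32^3 density grid (here a soft Gaussian blob).
n = 32
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
vol = np.exp(-4 * (x**2 + y**2 + z**2))

# Draw M random samples with probability proportional to the density.
p = vol.ravel() / vol.sum()
M = 200_000
idx = rng.choice(vol.size, size=M, p=p)
ix, iy, iz = np.unravel_index(idx, vol.shape)

# Orthographic projection along z: count samples landing in each pixel.
img = np.zeros((n, n))
np.add.at(img, (ix, iy), 1.0)
img /= M  # pixel intensity proportional to the projected sample count

assert abs(img.sum() - 1.0) < 1e-9  # intensities form a distribution
```

Increasing M reduces the per-pixel Monte Carlo noise, which is exactly the quality/interactivity trade-off the abstract exploits for progressive refinement.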

Details

Category

Duration

30+10
Host:  

Speaker: László Neumann (Universitat de Girona, Spain)

In closed environments, especially in brightly colored interiors, a significant change of saturation and some shift of hue of the originally selected colors occurs. This is due to multiple light interreflections. The human visual mechanism partly reduces this effect thanks to the change of the reference white. We can use multispectral radiosity or other multispectral global-illumination models to compute the physical effects. A color appearance model, the new and powerful CIECAM02, will be used to compute the perceptual aspects.

CIECAM02 includes luminance and chromatic adaptation effects, and it has compact forward and inverse transformation formulas. The input data for the color appearance model is provided by computing the multispectral radiosity solution, whereby both the spectral radiance for every viewpoint and view direction and the spectral irradiance on every patch of the scene are known. Nearly all earlier global-illumination approaches ignored the often strong changes of the originally selected colors. Using the presented method, paints can be selected or mixed so that, after physical and perceptual effects, their color appearance matches the one previously selected under standard viewing conditions in a color atlas.

Finally, some questions of perceptual metamerism, used to ensure highly constant color appearance under different viewing conditions, and some aesthetic rules of color design will be discussed.

Details

Category

Duration

45+15
Host: EG

Speaker: Dirk Bartz (Universität Tübingen, Germany)

Medical imaging is one of the most established practical fields of visualization. While most methods in use deal with individual images from 3D scanners (volumes are treated as stacks of images), 3D visualizations are slowly moving into the daily practice of research hospitals.

Major challenges in this process are the difficult specification of how features in volume datasets are visualized (transfer functions, etc.), the occlusion of interesting features by others, and the rapidly increasing size of datasets. While a few years ago 256^3 datasets were the standard size in radiology, the current standard has already increased to 512^2 x 1000 volumes. Soon, high-field MRI scanners will even produce volumes of 2048^2 x 1000 in research applications.
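For a sense of scale, the raw sizes of the resolutions mentioned above can be computed directly. The 16-bit sample depth below is an assumption (common for CT/MRI raw data, but not stated here):

```python
# Raw size of a volume at the resolutions mentioned above,
# assuming 16-bit (2-byte) voxels.
def volume_mb(dx, dy, dz, bytes_per_voxel=2):
    return dx * dy * dz * bytes_per_voxel / 2**20  # MiB

print(volume_mb(256, 256, 256))     # 256^3        -> 32.0 MiB
print(volume_mb(512, 512, 1000))    # 512^2 x 1000 -> 500.0 MiB
print(volume_mb(2048, 2048, 1000))  # 2048^2 x 1000 -> 8000.0 MiB (~7.8 GiB)
```

The jump from tens of megabytes to several gigabytes per volume is what drives the large-data techniques discussed in the talk.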

In this talk, I will discuss several techniques for dealing with large medical data. In particular, I will present work in the context of virtual endoscopy, a visualization technique oriented toward medical procedures that provides an environment familiar to physicians.

Details

Category

Duration

45+15
Host: EG

Speaker: Alessandro Rizzi (Università di Milano, Italy), Alessandro Artusi (ICGA), Carlo Gatta (Università di Milano, Italy)

Current output devices such as displays and printers are limited in their ability to correctly visualize or print High Dynamic Range images. Tone mapping helps to resolve this problem, but when accurate visualization is requested, local operators are required. Local operators are able to achieve this goal, but they incur high computational costs that limit their use in real applications. In this talk we propose a speed-up technique that reduces the computational cost of an existing local operator derived from retinex. It consists of extracting both global and local information from the existing operator and extrapolating it over the whole image. We show how to extract the global information by sampling the input image and using singular value decomposition (SVD), while the local information is extracted by selecting a small number of samples for each pixel of the input image and applying the local operator directly. We show the efficiency of our method on several images, and compare its time performance with that of the original local operator.
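As an illustration of the SVD step only (not the authors' operator), the following sketch shows how a low-rank SVD approximation captures the global structure of a sampled image; the random stand-in image and the ranks chosen are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))  # stand-in for a subsampled HDR luminance channel

# "Global" information via a rank-k SVD approximation of the sampled image.
U, s, Vt = np.linalg.svd(img, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation (in the least-squares sense)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

def rel_err(k):
    """Relative Frobenius-norm reconstruction error."""
    return np.linalg.norm(img - rank_k(k)) / np.linalg.norm(img)

# Retaining more singular values monotonically improves the approximation.
print(rel_err(4), rel_err(16))
```

A small rank already summarizes the dominant global variation, which is why a few singular components can stand in for the operator's global term.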

Speaker: Stefan Guthe (University of Tübingen, Germany)

Many areas in medicine, computational physics and various other disciplines have to deal with large or animated volumetric data sets that demand adequate visualization. An important visualization technique for the exploration of volumetric data sets is direct volume rendering: each point in space is assigned a density for the emission and absorption of light, and the volume renderer computes the light reaching the eye along viewing rays. The rendering can be implemented efficiently using texture-mapping hardware: the volume is discretized into textured slices that are blended over each other using alpha blending. However, the huge amount of data to be processed for rendering large and animated volumes also demands compression schemes that are efficient both in terms of compression ratio and decompression speed.
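The slice-blending step is repeated application of the "over" operator. A minimal software sketch of back-to-front compositing (the GPU performs the equivalent blend per textured slice; array shapes here are illustrative):

```python
import numpy as np

def composite(colors, alphas):
    """Back-to-front alpha compositing of axis-aligned slices.

    colors, alphas: arrays of shape (nslices, H, W); slice 0 is farthest
    from the eye. Returns the composited (H, W) image.
    """
    out = np.zeros(colors.shape[1:])
    for c, a in zip(colors, alphas):
        out = c * a + out * (1.0 - a)  # the "over" operator
    return out
```

A fully opaque slice (alpha 1) hides everything behind it, while alpha 0 leaves the accumulated image unchanged, matching the hardware blend equation.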

Speaker: Hamish Carr (University of British Columbia, Canada)

Geometric algorithms for analyzing and interpreting volumetric data draw on established methods in computational geometry. These algorithms often assume simple geometric primitives, such as tetrahedra. In practice, however, data commonly comes sampled on a cubic grid. Several responses to this are possible, such as modifying the experimental procedure, modifying the data, or modifying the algorithm.

I shall discuss various approaches for dealing with this problem: where relevant, I shall use the problem of computing contour trees as a sample algorithm. These approaches include non-cubic sampling, correct analysis of the trilinear interpolant, working directly with marching cubes, and subdividing cubes into tetrahedra.

In particular, I will discuss the side-effects of simplicial subdivision on the final isosurfaces, and how to track the connectivity of the standard marching cubes cases. 
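One common cube-to-tetrahedra subdivision is the Kuhn (Freudenthal) split into six tetrahedra sharing the cube's main diagonal; whether this is the exact split discussed in the talk is an assumption. A small check that the six pieces exactly fill the unit cube:

```python
import itertools
import numpy as np

# Vertices of the unit cube, indexed by bit pattern (bit b = coordinate b).
corners = {i: np.array([(i >> b) & 1 for b in range(3)], float) for i in range(8)}

def kuhn_tetrahedra():
    """Split the unit cube into 6 tetrahedra sharing the diagonal 000-111.

    Each permutation of the axes gives one tetrahedron: walk from corner 0
    to corner 7, setting one coordinate bit per step.
    """
    tets = []
    for perm in itertools.permutations(range(3)):
        idx, v = [0], 0
        for axis in perm:
            v |= 1 << axis
            idx.append(v)
        tets.append(idx)
    return tets

def tet_volume(idx):
    """Volume of the tetrahedron with the given corner indices."""
    p = [corners[i] for i in idx]
    return abs(np.linalg.det(np.array([p[1] - p[0], p[2] - p[0], p[3] - p[0]]))) / 6.0

tets = kuhn_tetrahedra()
assert abs(sum(tet_volume(t) for t in tets) - 1.0) < 1e-12
```

Each tetrahedron has volume 1/6, and because every isosurface inside a tetrahedron is planar per cell, the subdivision changes the extracted surface relative to marching cubes, which is the side-effect the talk examines.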

Details

Category

Duration

45 min
Host: TT

Speaker: Torsten Möller (Simon Fraser University, Canada)

Volume Graphics is the part of Computer Graphics whose main subjects of study are points and objects made of points. This seeming lack of descriptiveness turns out to be very powerful in describing many natural and complex phenomena, from weather patterns to fuel cells to the human body. Besides the creation of 2D images of complex objects, the goal of Volume Graphics, or Scientific Visualization at large, is the creation of tools that enhance the understanding of the objects under investigation. This typically requires the user to interact with the object in real time, extracting only features of interest and creating images that are accurate and reliable.

This talk will give an overview of the research in Scientific Visualization at the Graphics, Usability and Visualization (GrUVi) Lab at Simon Fraser University. The second half of the talk will focus on recent results of utilizing colour phenomena, such as metamers and colour constancy, for novel data exploration algorithms. 

Details

Category

Duration

45 min
Host: TT

Speaker: François Faure (University of Grenoble, France)

Details

Category

Duration

15 min
Host: WP

Speaker: Matthias Teschner (ETH Zürich, Switzerland)

Methods for simulating surgical interventions offer a wide range of possibilities for complementing and improving conventional training methods in medicine. An important requirement for the simulation is realistic-looking, interactive behavior.

Surgery-simulation scenarios typically consist of deformable and rigid objects that interact with each other. This leads to two essential components of the simulation: efficient and robust methods for computing deformable objects, and fast methods for collision detection.

The first part of the talk presents models and methods for interactively computing the dynamic behavior of complex deformable objects. It is shown how mass-spring models can be used to interactively compute elastic and plastic deformation of objects with a complexity of up to ten thousand tetrahedra.

The second part of the talk presents a collision-detection method that is particularly suited to deformable objects. In contrast to methods based on hierarchies of bounding volumes or on spatial subdivision, a new approach is presented that exploits the strengths of today's graphics hardware. The presented method enables interactive volumetric collision detection for deformable objects with a total complexity of up to one hundred thousand surface triangles.

Finally, further components of surgery simulation are summarized and potential medical applications are explained.
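The mass-spring model class mentioned in the talk can be illustrated with a minimal example: two particles joined by one damped spring, integrated with symplectic Euler. The stiffness, damping, and time-step values are arbitrary assumptions, not values from the talk:

```python
import numpy as np

# One damped spring between two unit-mass particles, integrated with
# symplectic Euler (velocity first, then position).
k, rest, mass, damping, dt = 100.0, 1.0, 1.0, 2.0, 1e-3

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # spring starts stretched
vel = np.zeros((2, 3))

def step():
    global pos, vel
    d = pos[1] - pos[0]
    length = np.linalg.norm(d)
    f = k * (length - rest) * d / length        # Hooke's law along the spring
    forces = np.array([f, -f]) - damping * vel  # equal/opposite + damping
    vel += dt * forces / mass
    pos += dt * vel

for _ in range(5000):
    step()  # the damped spring settles toward its rest length
```

Real simulators connect thousands of such springs over a tetrahedral mesh; the per-spring update above is the inner loop of that computation.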

Details

Category

Duration

60 min
Host: KB

Speaker: Andres Kecskemethy (Universität Duisburg, Germany)

The computer simulation of the human musculoskeletal system is playing an increasingly important role in medical diagnosis as well as in the planning of corrections, physiotherapeutic programs and prosthetic implants. The basic goal of computer simulation is to reproduce mechanical motion within the musculoskeletal system in a biofidelic manner based on individual patient parameters. This makes it possible for physicians to compare functional properties of patients prior and after medical treatment, or even, as a long-term objective, to predict therapeutic effects before grasping the scalpel. In this seminar, the foundations of simulation of mechanical systems based on multibody dynamics and their application to the dynamics of the human leg are presented. Multibody dynamics is a well-known field of research of mechanical engineering that has been developed in the last twenty years and has been applied to a great variety of systems, such as road and rail vehicles, robots, tool machines, etc. In the technical setting, it has become the primary environment of development for innovative systems, as virtual reality methods have proved to reduce costs and design-cycle time significantly. Our approach for multibody dynamics consists of employing object-oriented methods that allow the user to build dynamic models as executable programs, which are then open for extensions and linking to other existing software packages, such as computer graphics, control theory, signal analysis, etc. This is in contrast to existing methods, which use the monolithic, all-inclusive program structure. The basic idea of the object-oriented approach is to mimic real-world mechanical parts by corresponding software objects that transmit motion and forces as in the real system. 
In this way, a model can be built as an assembly of individual "kinetostatic transmission elements" that can be triggered intuitively at the generic level, i.e., whose transmission properties can be accessed without regard to their internal structure. We show how with these basic functions it is possible to solve all problems of dynamics. The ideas are then applied to the mechanical model of the human lower extremity, displaying a model of hip, upper and lower leg, and foot, consisting of 15 degrees of freedom and 43 individual muscles. Parameters for bones and muscles are taken for a generic case from literature. Simulations involve geometry (muscle extensions during walking), inverse dynamics (joint torques computed from motion capturing systems and force plate output describing contact force at the feet), as well as preliminary results for the dynamics (trajectories of the lower extremity based on muscle activation profiles). The developed software has been extended by a 3D user interface that allows the user to perform simulations online and hence to assess the physical parameters directly at the computer monitor. The software is being applied at the Children's Hospital of the University of Graz for treatment of children with spastic diplegia. Comparison of simulations and measurements at the gait lab show a good agreement of the computed inverse dynamics and experimental data. Further illustrative examples for the concepts developed in this talk are taken from mechanism analysis, rail vehicles, and biomechanics of neck and forearm. 

Details

Category

Duration

60 min
Host: KB

Speaker: Oliver Bimber (Bauhaus University, Weimar)

 The Virtual Showcase is a new projection-based Augmented Reality display that offers an imaginative and innovative way of accessing, presenting, and interacting with scientific and cultural content. Almost three years after the development of the first proof-of-concept prototype, the Virtual Showcase is turning into an efficient multi-user display that effectively addresses several shortcomings of today's Augmented Reality displays. I will give an overview of the different Virtual Showcase prototypes built so far, their technical components, and their recent application to the field of digital story-telling for education and scientific visualization.

References
http://www.nsf.gov/od/lpa/news/02/tip021022.htm
http://www.wissenschaft-online.de/artikel/605814&template=d_bnt_n_inhalt&_stempel_datum_bis=1034719199
http://cgw.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTICLE_ID=125448
http://www.vrnews.com/invited/fa20011207.html

Curriculum Vitae
Oliver Bimber is currently a scientist at the Bauhaus University Weimar, Germany. He received a Ph.D. in Engineering at the Technical University of Darmstadt, Germany under supervision of Prof. Dr. Encarnação (TU Darmstadt) and Prof. Dr. Fuchs (UNC at Chapel Hill). From 2001 to 2002 Bimber worked as a senior researcher at the Fraunhofer Center for Research in Computer Graphics in Providence, RI/USA, and from 1998 to 2001 he was a scientist at the Fraunhofer Institute for Computer Graphics in Rostock, Germany. He initiated the Virtual Showcase project in Europe and the Augmented Paleontology project in the U.S.A. He received the degree of Dipl. Inform. (FH) in Scientific Computing from the University of Applied Science Giessen and a B.Sc. degree in Commercial Computing from the Dundalk Institute of Technology. In his career, Bimber received several scientific achievement awards and is author of more than thirty technical papers and journal articles. He was guest editor of the Computers & Graphics special issue on "Mixed Realities - Beyond Conventions", and has served as session chair and review committee member for several international conferences. Bimber also gave a number of guest lectures at recognized institutions. Among them were Brown University, Princeton University, the IBM T.J. Watson Research Center, the DaimlerChrysler Virtual Reality Competence Center, and Mitsubishi Electric Research Laboratories (MERL). His research interests include display technologies, rendering and human-computer interaction for Mixed Realities. Bimber is a member of IEEE, ACM and ACM Siggraph.

Details

Category

Duration

60 min
Host: DS

Speaker: Martin Wagner (Lehrstuhl für angewandte Softwaretechnik, Institut für Informatik, TU München, Germany)

Details

Category

Duration

45 + 15

Speaker: Anna Vilanova Bartroli (TU Eindhoven, the Netherlands)

The group Biomedical Image Analysis (BMIA) is part of the Master program Biomedical Imaging and Informatics (BMI2), one of the four Master programs in the Department of Biomedical Engineering at Eindhoven University of Technology. In this talk I will present the structure and the ongoing research projects of BMIA.

Details

Category

Duration

45 + 15

Speaker: Jiri Bittner (Prague, Czech Republic)

Details

Category

Duration

30 min+ disc.
Supervisor:  

Speaker: Klaus Dorfmueller-Ulhaas (Germany)

Tracking user movements is one of the major low-level tasks which every Virtual Reality (VR) system needs to fulfill. There are different methods by which this tracking may be performed. Common tracking systems use magnetic or ultrasonic trackers in different variations, as well as mechanical devices. All of these systems have drawbacks caused by their principles of operation. Typically, the user has to be linked to a measurement instrument, either by cable or, even more restraining for the user, by a mechanical linkage. Furthermore, while mechanical tracking systems are extremely precise, magnetic and acoustic tracking systems suffer from different sources of distortion. For this reason, an optical tracking system has been developed which overcomes many of the drawbacks of conventional tracking systems. This work is focused on stereoscopic tracking, which provides an effective way to enhance the accuracy of optical trackers. Vision-based trackers in general facilitate wireless interaction with 3D worlds for the users of a virtual reality system. Additionally, the proposed tracker is very economical through the use of standard sensor technology, which further reduces cost. The proposed tracker provides accuracy in the sub-millimeter range, and thus meets the requirements of most virtual reality applications. The presented optical tracker works with low-frequency light and is based on retro-reflective spherical markers illuminated with infrared light, so as not to interfere with the user's perception of a virtual scene on projection-based display systems in environments with dim light. In contrast to commercial optical tracking systems, the outcome of this work operates in real time. Furthermore, the presented system can make use of very small cameras, making it applicable for inside-out tracking. This work presents novel approaches to calibrating a stereoscopic camera setup.
It utilizes the standard equipment used for commercial optical trackers in computer animation but, in contrast to calibration methods available today, it calibrates internal and external camera parameters simultaneously, including lens distortion parameters. The calibration is easy to use, fast and precise. To provide the robustness required by most virtual reality applications, human motion needs to be tracked over time. This has often been done with a Kalman filter, whose motion prediction may not only enhance the update rate of the tracking system but may also compensate for the display lag of complex virtual scenes and for acquisition or communication delays. A new filter formulation is presented that may also be used with non-optical trackers providing the pose of an object with six degrees of freedom. Finally, some extensions to natural-landmark tracking are presented using a contour-tracking approach. First experimental results of an early implementation are shown, detecting a human pointing gesture in environments with different lighting conditions and backgrounds. Perspectives are given on how this method could be extended to 3D model-based hand tracking using stereoscopic vision.
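The Kalman-filter prediction mentioned above can be sketched for a single tracked coordinate with a constant-velocity model; the frame time and noise covariances below are illustrative assumptions, not values from this work:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked coordinate.
# State x = [position, velocity]; only position is measured.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-2 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[1e-4]])                  # measurement noise covariance (assumed)

x = np.zeros(2)
P = np.eye(2)

def step(z):
    """One predict/update cycle; returns the filtered position estimate."""
    global x, P
    x = F @ x                        # predict state forward one frame
    P = F @ P @ F.T + Q              # predict covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x[0]

# Feed noiseless measurements of a target moving at velocity 0.5;
# the position (and velocity) estimates converge toward the truth.
for t in range(300):
    est = step(0.5 * t * dt)
```

The predict step alone (x = F @ x) is what a tracker uses to bridge display lag: it extrapolates the pose one frame ahead of the last measurement.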

Details

Category

Duration

20+10 min
Host: DS

Speaker: Martin Martinov ("St. Kliment Ohridski" University, Sofia, Bulgaria)

The possibilities of Java 2D, Java 3D, Java Swing and Advanced Java Imaging for doing Computer Graphics will be presented, demonstrated in a custom-developed program implementing the following algorithms:

  • Line-drawing: Simple method, Bresenham method, Portion method
  • Line-smoothing: area-based, distance-based
  • Circle-drawing: Simple method, Bresenham method, by second-order differences
  • Ellipse-drawing: Bresenham method
  • Line-clipping: Cohen-Sutherland method, Liang-Barsky method 
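For reference, the Bresenham line method from the list above can be written compactly (an all-octant integer variant; the talk's Java program is not reproduced here):

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line from (x0, y0) to (x1, y1), all octants."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return points
```

The algorithm uses only integer additions and comparisons per pixel, which is what distinguishes it from the "simple method" of evaluating the line equation directly.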

Details

Category

Duration

30 min
Host: Meister

Speaker: Ronan Boulic (EPFL Lausanne)

In this talk, I will present an IK architecture allowing the enforcement of multiple constraints distributed over an arbitrary number of priority levels. The cost of the IK algorithm building the projection operators is linear in the number of priority levels, thus allowing interactive postural control of human characters. The talk will end with a presentation of our ongoing work evaluating how to apply this technique to motion editing.
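The general recipe behind priority-level IK is pseudoinverse resolution with nullspace projection: each lower-priority task is solved only within the nullspace of the tasks above it. A two-level sketch of this textbook scheme (not necessarily the exact architecture presented in the talk):

```python
import numpy as np

def two_priority_dq(J1, dx1, J2, dx2):
    """Joint velocity dq satisfying task 1 (J1 dq = dx1) in the least-squares
    sense, with task 2 resolved only within the nullspace of task 1."""
    J1p = np.linalg.pinv(J1)
    P1 = np.eye(J1.shape[1]) - J1p @ J1   # projector onto null(J1)
    dq = J1p @ dx1                        # primary task
    J2n = J2 @ P1                         # task-2 Jacobian restricted to null(J1)
    dq += np.linalg.pinv(J2n) @ (dx2 - J2 @ dq)  # secondary task, priority-safe
    return dq
```

Because the secondary correction lives in the nullspace of J1, it can never perturb the primary constraint; when the two tasks conflict completely, the secondary one is simply dropped.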

Details

Category

Duration

45 min
Host: WP

Speaker: Raphael Grasset (iMAGIS/GRAVIR, Grenoble, France)

Details

Category

Duration

30 + 10 min
Host: DS

Speaker: Michael Haller (FHS Hagenberg, Austria)

Media Technology and Design (MTD) is a 4-year engineering program focused on technical and creative aspects of digital media. MTD is one of several IT-related programs offered by the College of Engineering at Hagenberg as part of the Upper Austrian University of Applied Sciences (Europe).

The engineering part of the MTD curriculum includes audio/video technology, computing, networking and multi-media programming as core subjects, with a careful balance of basic and selected in-depth material.

The design part of the program starts out with elementary drafting techniques, study of shape and color, art and media history, typography, sound, and music; this is followed by advanced subjects such as multi-media design, audio and video design, 3D modeling and animation, media production and experimental media.

The Media Technology and Design (MTD) curriculum focuses on three broad areas:

  • Multimedia
  • 3D computer graphics and animation
  • Internet/WWW

The talk describes activities of MTD concerning computer graphics, Virtual Reality, and Augmented Reality. We will give a short overview of the courses and show some student projects. Finally, we will describe the European-funded project AMIRE (Authoring Mixed Reality).

Related links:

Details

Category

Duration

30 + 15 min
Host: DS