Find the upcoming dates on this page.

Previous Talks

Speaker: Ann McNamara (University of Bristol)

Advances in image synthesis techniques allow us to simulate the distribution of light energy in a scene with great precision. Unfortunately, this does not ensure that the displayed image will have an authentic visual appearance. Reasons for this include the limited dynamic range of displays and any residual shortcomings of the rendering process. Furthermore, it is unclear to what extent human vision will encode such departures from perfect physical realism. This leads to a need to include the human observer in any process that attempts to evaluate the perceptual significance of errors in reproduction. Our psychophysical studies address this need.

This talk provides an introduction to the application of psychophysics to the evaluation and advancement of computer graphics with respect to the real scenes they are intended to depict. It covers the fundamentals of the design and organisation of psychophysical experiments, data collection and analysis, and the application of results to rendering algorithms. The emphasis of this seminar is on the practical issues that must be addressed so that human subjects can easily make perceptual comparisons between real and synthetic scenes. Case studies, in which a test environment consisting of a small room containing complex objects is compared with its rendered counterpart, will also be discussed.

Duration: 45 min + 15 min
Host: WP

Speaker: Mark Billinghurst (Human Interface Technology Lab, University of Washington)

Virtual Reality (VR) appears to be a natural medium for computer supported collaborative work (CSCW). However, immersive Virtual Reality separates users from the real world and their traditional tools. An alternative approach is Augmented Reality (AR), the overlaying of virtual objects on the real world. This allows users to see each other and the real world at the same time as the virtual images, facilitating high-bandwidth communication between users and intuitive manipulation of the virtual information. We review AR techniques for developing CSCW interfaces and describe lessons learned from developing a variety of collaborative Augmented Reality interfaces for both face-to-face and remote collaboration. Our recent work involves the use of computer vision techniques for accurate AR registration. We describe this and identify areas for future research.
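
As a hedged illustration of vision-based registration in general (not the specific system from this talk), the sketch below estimates the camera pose from the four corners of a square marker using OpenCV's solvePnP; the marker size, camera intrinsics and detected corner pixels are invented placeholder values.

    # Minimal sketch of vision-based AR registration: estimate the camera pose
    # from the four corners of a square marker of known size. All numbers here
    # (marker size, intrinsics, detected pixel corners) are illustrative only.
    import numpy as np
    import cv2

    MARKER_SIZE = 0.08  # marker edge length in metres (assumed)

    # 3D corner positions in the marker's own coordinate frame (z = 0 plane).
    object_points = np.array([
        [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
        [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
        [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
        [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    ], dtype=np.float64)

    # Corner pixel coordinates as a marker detector might report them.
    image_points = np.array([
        [310.0, 220.0], [390.0, 225.0], [385.0, 300.0], [305.0, 295.0],
    ], dtype=np.float64)

    # Pinhole camera intrinsics (placeholder values from a prior calibration).
    camera_matrix = np.array([[800.0,   0.0, 320.0],
                              [  0.0, 800.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(5)  # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the marker in camera space
        print("marker->camera rotation:\n", R)
        print("marker->camera translation:", tvec.ravel())

The recovered rotation and translation can then be used to draw virtual geometry so that it appears attached to the marker in the live camera image.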

Duration: 50 min + 15 min

Speaker: Jean-Dominique Gascuel (iMAGIS, France)

Duration: 30 min

Speaker: em.o.Univ.-Prof. Dr. H. Petsche (Institut für Neurophysiologie, Universität Wien)

Electrical potential fluctuations of the brain were first recorded from the intact human skull by Hans Berger around the turn of the century. Since then, the "EEG" has earned an important place in the diagnosis of brain diseases, in particular epilepsies. Nevertheless, its nature is still largely unknown. What is certain, however, is that it is the expression of complex cooperative electrical processes in the brain. Given the still widespread opinion that, electrically, the brain is nothing more than a volume conductor, the EEG was long regarded as a kind of "electrical brain noise" without any functional significance. But as more and more indications have emerged in recent years that mental processes are reflected in the EEG, it has attracted increasing interest for the study of thought processes. The talk presents the protean phenomenology of the EEG when it is studied at the macro level (on the skull), at the micro level (within assemblies of nerve cells), and at the level of individual nerve cells. It further shows that the maximum amount of information can be extracted from the EEG when it is understood as the expression of the functional relationships within a complex electrical network.

Duration: 60 min + 15 min

Speaker: Vlastimil Havran (Czech Technical University)

Ray shooting is one of the most important problems in computer graphics. The efficiency of ray shooting algorithms has a great impact on the performance of many global illumination algorithms. In this talk we give a short survey of various methods developed for ray shooting, from the perspectives of computational geometry and computer graphics. In particular, we will focus on the concept of orthogonal kd-trees, more often referred to as BSP trees in computer graphics. Recent developments in ray shooting will be discussed.
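
As a minimal, hedged sketch of the kd-tree (axis-aligned BSP) idea referred to above, the following Python fragment recursively traverses such a tree so that a ray is tested only against the primitives stored in the leaves it actually passes through. The node layout and the intersect() callback are illustrative assumptions, not the data structures from the talk.

    # Sketch of recursive ray traversal of a kd-tree (axis-aligned BSP tree).
    # A node is either a Leaf holding primitives or an Interior node splitting
    # space by the plane x[axis] = split. intersect(prim, orig, dirn) is
    # assumed to return a hit distance along the ray, or None.

    class Leaf:
        def __init__(self, primitives):
            self.primitives = primitives

    class Interior:
        def __init__(self, axis, split, left, right):
            self.axis, self.split = axis, split
            self.left, self.right = left, right   # left: x[axis] < split

    def traverse(node, orig, dirn, t_min, t_max, intersect):
        if isinstance(node, Leaf):
            best = None
            for prim in node.primitives:
                t = intersect(prim, orig, dirn)
                if t is not None and t_min <= t <= t_max and (best is None or t < best):
                    best = t
            return best

        # Parametric distance along the ray to the splitting plane.
        if dirn[node.axis] != 0.0:
            t_split = (node.split - orig[node.axis]) / dirn[node.axis]
        else:
            t_split = float("inf")

        # The near child is the one containing the ray origin.
        if orig[node.axis] < node.split:
            near, far = node.left, node.right
        else:
            near, far = node.right, node.left

        if t_split > t_max or t_split < 0.0:
            return traverse(near, orig, dirn, t_min, t_max, intersect)
        if t_split < t_min:
            return traverse(far, orig, dirn, t_min, t_max, intersect)

        # The ray crosses the plane inside [t_min, t_max]: visit the near side first.
        hit = traverse(near, orig, dirn, t_min, t_split, intersect)
        if hit is not None:
            return hit
        return traverse(far, orig, dirn, t_split, t_max, intersect)

The near/far ordering visits leaves front to back along the ray, so traversal can stop at the first valid hit.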

Duration: 45 min
Host: MEG

Speaker: Jiri Bittner (Czech Technical University)

I will introduce a classification of visibility problems in three dimensions that is based on the dimension of the space of lines involved in the problem. In particular, the following three classes will be discussed: visibility along a line, visibility from a point, and visibility from a region. Further, I will present a conservative hierarchical visibility algorithm for a moving viewpoint that is suitable for real-time visibility culling. The algorithm uses an occlusion tree, which is a modification of the shadow volume BSP tree. Finally, I will mention some general refinements that exploit spatial and temporal coherence in the scope of hierarchical visibility algorithms.
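
A hedged sketch of the conservative hierarchical culling idea follows; the classify() callback stands in for the occlusion-tree query described in the talk and is an assumption here, as is the node layout.

    # Hedged sketch of conservative hierarchical visibility culling over a
    # spatial hierarchy of the scene. classify(box) is an assumed callback
    # returning one of the three states below; in the approach outlined above
    # it would query an occlusion tree built from the shadow volumes of
    # selected occluders.

    INVISIBLE, FULLY_VISIBLE, PARTIALLY_VISIBLE = range(3)

    class SceneNode:
        def __init__(self, box, children=(), objects=()):
            self.box = box            # bounding box of the whole subtree
            self.children = children
            self.objects = list(objects)

    def collect_visible(node, classify, visible):
        state = classify(node.box)
        if state == INVISIBLE:
            return                     # cull the entire subtree
        if state == FULLY_VISIBLE:
            gather_all(node, visible)  # accept the subtree without further tests
            return
        visible.extend(node.objects)   # partially visible: refine into children
        for child in node.children:
            collect_visible(child, classify, visible)

    def gather_all(node, visible):
        visible.extend(node.objects)
        for child in node.children:
            gather_all(child, visible)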

Duration: 45 min
Host: MEG

Speaker: Bernd Eberhardt (Universität Tübingen)

Duration: 45 min

Speaker: Martin Kompast (TU Wien), Ina Wagner (TU Wien)

The Wunderkammer is intended as part of a collaborative electronic working environment for architects, landscape planners and other design-oriented professions. It is currently being developed within the Esprit LTR project DESARTE. The Wunderkammer is a multimedia archive that supports the collection and discovery of inspirational objects and their presentation. Its topography and appearance are meant to match the visual culture of the respective design discipline, and might, for example, be laid out as a modularly built, symbolic urban space or as a flowing sequence of landscape formations. Fieldwork in architectural offices sheds light on the importance of inspirational objects (these may be images, sketches, metaphorical descriptions, film clips, etc.) both for the design work itself and for communicating project ideas to the outside world. Users should be able to build their own collections in the Wunderkammer and share them with others. Various modes of travelling and discovery, as well as of narratively combining objects (as collage, animation, film, etc.), are to be supported. Ultimately, users should be able to design their own Wunderkammer world.

Duration: 1 hr
Host: WP

Speaker: Leo Budin (University of Zagreb, Croatia)

A mathematical model for shading analysis, developed for purposes of solar engineering, is described. Closed-form expressions giving the position of the shadow of an isolated point as a function of time are derived. It was found that these expressions define second-order planar curves. Furthermore, the developed equations can be expressed in a generalized parametric form enabling a smooth transition between different curves of the same class. The applicability of these expressions to computer graphics remains to be investigated.
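
As a hedged reconstruction of the basic geometric step (standard solar-shading geometry, not necessarily the talk's derivation): for a point P = (p_x, p_y, p_z) above the ground plane z = 0 and a time-dependent unit vector s(t) pointing from the scene towards the sun, the shadow of P is

    % Shadow of a point P on the ground plane z = 0, cast by sunlight arriving
    % from the direction s(t):
    \mathbf{P}_{\text{shadow}}(t) \;=\; \mathbf{P} \;-\; \frac{p_z}{s_z(t)}\,\mathbf{s}(t),
    \qquad s_z(t) > 0 .

As s(t) sweeps out the sun's daily path, the shadow point traces a planar curve of second order, consistent with the result stated above.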

Duration: 1 hr
Host: Colloquy Cycle

Speaker: Philippe Bekaert (Katholieke Universiteit Leuven, Belgium)

Since the introduction of the radiosity method for image synthesis in 1984, many improvements to it have been proposed. Some of these, such as the computation of form factors using the hemicube algorithm and Southwell iterations for solving the radiosity system of equations, have helped the radiosity method find its way into commercial rendering software systems. These techniques appeared in the scientific literature more than 10 years ago.

In this talk, I will give an overview of techniques that have appeared since then to make the radiosity method more efficient, user-friendly and reliable. Two of these, hierarchical refinement based on wavelet theory, and the solution of the radiosity system of equations by Monte Carlo simulation, will be discussed in more detail. A combination of hierarchical refinement and Monte Carlo promises to make radiosity feasible for complex models, even on low-cost platforms.
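
As background for the techniques mentioned above, here is a hedged sketch of the classical progressive ("shooting") formulation, in which the patch currently holding the most unshot power distributes it to all other patches; it illustrates the standard method, not Bekaert's own algorithms. The radiosity system referred to is B_i = E_i + rho_i * sum_j F_ij B_j, with form factors F_ij, reflectances rho_i and emitted radiosities E_i.

    # Hedged sketch of progressive ("shooting") radiosity with a Southwell-style
    # selection rule: always shoot from the patch holding the most unshot power.
    # emission, reflectance and area are per-patch NumPy arrays; form_factor(i, j)
    # is an assumed callback returning F_ij.
    import numpy as np

    def progressive_radiosity(emission, reflectance, area, form_factor,
                              tolerance=1e-3, max_steps=10000):
        n = len(emission)
        radiosity = emission.astype(float)  # current estimate of B_i
        unshot = emission.astype(float)     # radiosity not yet distributed
        for _ in range(max_steps):
            i = int(np.argmax(unshot * area))      # patch with most unshot power
            if unshot[i] * area[i] < tolerance:
                break
            for j in range(n):
                if j == i:
                    continue
                # Radiosity gained by patch j (uses reciprocity F_ji = F_ij * A_i / A_j).
                delta = reflectance[j] * unshot[i] * form_factor(i, j) * area[i] / area[j]
                radiosity[j] += delta
                unshot[j] += delta
            unshot[i] = 0.0                        # patch i has shot its energy
        return radiosity

Roughly speaking, replacing the explicit form-factor evaluation by randomly shot rays from the selected patch is where the Monte Carlo variants discussed in the talk come in.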

Duration: 45 min + 15 min

Speaker: William Ribarsky (Georgia Institute of Technology, USA)

I will describe work to depict, explore, and understand large-scale to very-large-scale data. For data of this size, one cannot consider visualization techniques alone, but must consider them in conjunction with issues of data organization, interactivity, data paging and memory management, efficient visual representation, overall detail management, and techniques for exploration and discovery. Efficiency becomes predominant, time is of the essence, and exploration is key (since nobody will know, in detail, what a very large dataset contains). I will show that these issues are not just important for the applications presented but have much wider applicability.

Speaker: László Szirmay-Kalos (Technical University of Budapest)

Duration: 45 min + 15 min

Speaker: Hans-Peter Seidel (Universität Erlangen-Nürnberg)

A central problem in computer graphics is the ever-growing volume of data. With the increasing availability of complex modelling and simulation tools and the growing spread of high-resolution 3D scanners, this problem will become even more acute in the future.

For this reason, hierarchical methods, multiresolution representations and wavelets currently appear to be developing into a key technology for 3D graphics applications. Their use makes it possible to approximate complex functions and large data sets well with only a few coefficients. This leads to novel compression algorithms and to efficient computations that exploit smoothness and coherence. The talk discusses fundamental applications of hierarchical methods in the following subareas of computer graphics:

  • curve and surface modelling,
  • efficient polygon meshes,
  • global illumination computation.

Concrete examples from ongoing implementations at the Universität Erlangen illustrate the underlying concepts and demonstrate the viability of the presented approach.
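
As a minimal, hedged illustration of the compression idea (approximating a signal with a few wavelet coefficients), the sketch below performs a one-dimensional Haar transform and keeps only the largest coefficients. It is a generic textbook example, not code from the Erlangen implementations mentioned above; the input length is assumed to be a power of two.

    # Minimal 1D Haar wavelet compression: transform, keep only the largest
    # coefficients, reconstruct. The input length is assumed to be a power of two.
    import numpy as np

    def haar_forward(signal):
        coeffs = signal.astype(float)       # astype copies, so the input is untouched
        n = len(coeffs)
        while n > 1:
            half = n // 2
            even, odd = coeffs[0:n:2].copy(), coeffs[1:n:2].copy()
            coeffs[:half] = (even + odd) / np.sqrt(2.0)   # coarse averages
            coeffs[half:n] = (even - odd) / np.sqrt(2.0)  # detail coefficients
            n = half
        return coeffs

    def haar_inverse(coeffs):
        signal = coeffs.astype(float)
        n = 2
        while n <= len(signal):
            half = n // 2
            avg, det = signal[:half].copy(), signal[half:n].copy()
            signal[0:n:2] = (avg + det) / np.sqrt(2.0)
            signal[1:n:2] = (avg - det) / np.sqrt(2.0)
            n *= 2
        return signal

    def compress(signal, keep_fraction=0.1):
        coeffs = haar_forward(signal)
        threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
        coeffs[np.abs(coeffs) < threshold] = 0.0          # drop small coefficients
        return haar_inverse(coeffs)

Smooth regions of the input produce many near-zero detail coefficients, which is why discarding most of them changes the reconstruction only slightly.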

Duration: 45 min + 15 min

Speaker: Hans-Christian Hege (Konrad-Zuse-Zentrum für Informationstechnik Berlin)

Cancer treatment can be supported by heating the tumour tissue (hyperthermia). Deep-seated tissue can be heated by targeted irradiation with radio waves. To achieve a therapeutically optimal temperature distribution in the patient's body, an optimal setting of the antenna parameters must be found through individual planning.

The talk presents a software system under development that supports all steps of hyperthermia planning: construction of an individual, anatomically faithful patient model, simulation of the electromagnetic and thermal processes, and optimisation of the free parameters. Adaptive multilevel finite element methods are used for the fast and reliable solution of the partial differential equations. A particular focus of the development is on modern visualisation techniques for the combined display of all the data involved (e.g. medical image data, segmentation results, FE grids, scalar and vector fields) and on suitable interaction methods. Only such techniques enable the medical user to carry out the complex planning steps.

Speaker: Roberto Scopigno (CNUCE-C.N.R., Pisa)

Because many applications produce surface meshes of ever-increasing complexity, interest in efficient simplification algorithms and multiresolution representations is very high. An enhanced simplification approach together with a general multiresolution data scheme will be presented in the seminar.

JADE, a new simplification solution based on the Mesh Decimation approach, has been designed to provide both increased approximation precision, based on global error management, and multiresolution output. Results will be presented on JADE's empirical time complexity, approximation quality, and simplification power, together with a comparison with other simplification solutions.

Moreover, on the basis of these simplification techniques, we foresee a modeling framework based on three separate stages (shape modeling, multiresolution encoding and resolution modeling), and propose a new approach to the last stage, resolution modeling, which is highly general, user-driven and not strictly tied to a particular simplification method. The proposed approach is based on a multiresolution representation scheme for triangulated, 2-manifold meshes, the Hypertriangulation Model (HyT). This scheme allows one to selectively "walk" along the multiresolution surface, moving efficiently between adjacent faces. A prototype resolution modeling system, Zeta, has been implemented to allow interactive modeling of surface details. It supports: efficient extraction of fixed-resolution representations; unified management of selective refinement and selective simplification; easy composition of the selective refinement/simplification actions, with no cracks in the resulting variable-resolution mesh; multiresolution editing; and interactive response times.
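
As a hedged sketch of the extraction step only (the node layout is an illustrative assumption, not the HyT structure itself), a fixed-resolution representation can be pulled out of a multiresolution hierarchy by descending only where the stored approximation error still exceeds the requested tolerance:

    # Hedged sketch of extracting a fixed-resolution mesh from a multiresolution
    # hierarchy: descend only where the stored approximation error is still above
    # the requested tolerance. The node layout is an assumption for illustration.

    class MRNode:
        def __init__(self, patch, error, children=()):
            self.patch = patch        # the piece of surface this node approximates
            self.error = error        # approximation error of that piece
            self.children = children  # finer representations of the same region

    def extract(node, tolerance, out):
        """Collect the coarsest patches whose error does not exceed tolerance."""
        if node.error <= tolerance or not node.children:
            out.append(node.patch)    # coarse enough, or already at full resolution
            return
        for child in node.children:
            extract(child, tolerance, out)

    def extract_mesh(root, tolerance):
        patches = []
        extract(root, tolerance, patches)
        return patches

Selective refinement would simply make the tolerance depend on the region, for example smaller near the area currently being edited.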

Speaker: David Hedley (University of Bristol)

The Scope rendering system has been developed over the course of the last three years to facilitate research into the fundamentals of computer graphics, with particular emphasis on discontinuity meshing. We discuss the design and architecture of the system, highlighting several software engineering issues that arise when using C and the balance between robustness and efficiency. We then discuss the research currently being undertaken with the system: hierarchical radiosity and dynamic radiosity, and the implications they have for the underlying system.

Speaker: Martin Faust (Universität Bremen)

In various projects in the areas of production and logistics it became clear that, despite the availability of computer-based modelling systems with graphical output, physical, construction-kit-oriented models retain great value for illustration and communication.

At the Research Center for Work and Technology (artec) of the Universität Bremen, a new approach has been developed that connects these previously strictly separated modelling worlds. At the centre of the concept is the user's hand, which performs interactions on physical objects.

The ubiquitous computer disappears into the background and is given the task of recognising the actions of the hand and mapping them onto the virtual model. To recognise the hand's actions, data are captured with a data glove and evaluated by a gesture-recognition component. In this way, physical modelling simultaneously produces an abstract computer model that can be used for further analysis and simulation purposes.

With a touch of satire, we call this concept Real Reality.
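
As a hedged, generic illustration of the gesture-recognition step (not the artec system itself), the sketch below classifies a frame of normalised finger-flex readings from a data glove by comparing it against a few stored posture templates; all sensor values and gesture names are invented placeholders.

    # Generic sketch of classifying data-glove postures by nearest template.
    # The five numbers per posture stand for normalised flex-sensor readings
    # (0.0 = finger straight, 1.0 = fully bent); all values are invented.
    import numpy as np

    GESTURE_TEMPLATES = {
        "grasp":   np.array([0.9, 0.9, 0.9, 0.9, 0.9]),
        "point":   np.array([0.8, 0.1, 0.9, 0.9, 0.9]),
        "release": np.array([0.1, 0.1, 0.1, 0.1, 0.1]),
    }

    def classify_posture(flex_values, max_distance=0.6):
        """Return the name of the closest template, or None if nothing is close."""
        sample = np.asarray(flex_values, dtype=float)
        best_name, best_dist = None, float("inf")
        for name, template in GESTURE_TEMPLATES.items():
            dist = np.linalg.norm(sample - template)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= max_distance else None

    # Example: a mostly bent hand with an extended index finger reads as "point".
    print(classify_posture([0.75, 0.15, 0.85, 0.9, 0.88]))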

Speaker: Gonzalo Besuievsky (Universität Girona)

We present two global Monte Carlo-based algorithms for accurate illumination simulation of motion-blur effects in radiosity scenes. Our first results will be presented.

Speaker: Mateu Sbert (Universität Girona)

In this talk we will deal with two questions in shooting random walk radiosity. The first is which of the different estimators available to us is the best. The second is, given several sources, what the optimal probability is for a path to begin at each source.
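
A hedged illustration of the second question, using standard importance-sampling reasoning rather than the talk's actual answer: if source i emits power \Phi_i, a natural choice is to start a path at source i with probability

    % Source-selection probability proportional to emitted power:
    p_i \;=\; \frac{\Phi_i}{\sum_{j} \Phi_j},
    \qquad
    \text{so that each path carries the constant weight}\quad
    \frac{\Phi_i}{p_i} \;=\; \sum_{j} \Phi_j .

With this choice the variance contributed by the source selection itself vanishes; whether it is truly optimal overall is precisely the kind of question the talk addresses.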

Speaker: Carlos Urena (Universität Girona)

We present some results on the analysis of variance of Monte Carlo algorithms, and show how they can be applied to improve two-step algorithms for global illumination.

Speaker: Francois Faure (iMAGIS/IMAG, Grenoble, France)

Duration: 45 min + 15 min

Speaker: Roberto Grosso (Universität Erlangen-Nürnberg)

Multilevel representations and mesh reduction techniques have been used for accelerating the processing and the rendering of large datasets representing scalar- or vector-valued functions defined on complex two- or three-dimensional meshes. We present a method based on finite elements and hierarchical bases which combines these two approaches in a new and unique way that is conceptually simple and theoretically sound. Starting with a very coarse triangulation of the functional domain, a hierarchy of highly non-uniform tetrahedral (or triangular in 2D) meshes is generated adaptively by local refinement. This process is driven by controlling the local error of the piecewise linear finite element approximation of the function (in the least-squares sense) on each mesh element. Flexibility in choosing the underlying error norm allows gradient information to be included. A reliable and efficient a posteriori estimate of the global approximation error combined with a preconditioned conjugate gradient solver are the key components of the implementation. Many areas where the proposed method can be applied successfully are envisioned, such as mesh reduction of parameterized grids, visualization of scalar and vector volume data, physically based computer animation of extended bodies, and global illumination algorithms. The example application we implemented in order to analyze the properties and advantages of the generated tetrahedral meshes is an iso-surface algorithm which combines the so far separate tasks of extraction acceleration and polygonal decimation in one single processing step. The quality of the iso-surface is measured with a special geometric norm which does not require the full-resolution surface.
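
A hedged sketch of the refinement loop described above; local_error() and subdivide() are simplified placeholders for the least-squares finite element error estimate and the tetrahedron/triangle split, and the mesh is represented as a plain list of elements.

    # Hedged sketch of error-driven adaptive mesh refinement: on each pass,
    # estimate a local error per element and subdivide only the elements whose
    # error exceeds the tolerance. local_error(elem) and subdivide(elem) are
    # assumed callbacks; the mesh is a plain list of elements.

    def refine_adaptively(mesh, local_error, subdivide, tolerance, max_passes=10):
        for _ in range(max_passes):
            marked = [elem for elem in mesh if local_error(elem) > tolerance]
            if not marked:
                break                          # error criterion met everywhere
            for elem in marked:
                mesh.remove(elem)              # replace the element ...
                mesh.extend(subdivide(elem))   # ... by its refined children
        return mesh

Because only the marked elements are split on each pass, the resulting hierarchy is highly non-uniform: fine where the function varies strongly, coarse where it is smooth.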

Duration: 45 min + 15 min

Speaker: Martti Mäntylä

Virtual engineering refers to various scenarios where several independent companies or partners must perform product development or production engineering tasks in co-operation - that is, engineering in a virtual enterprise. Common examples of virtual engineering include the producer-supplier scenario, where a product and its subcontractor-made components must be designed simultaneously; the mass customization scenario, where a customized product is composed of basic modules and components from several companies; and the multi-supplier project scenario, where several manufacturers contribute to the engineering and construction of a large design such as an industrial plant.

Similarly to the related concept of "concurrent engineering", virtual engineering must support the inclusion of all life-cycle issues of a product during its design. However, virtual engineering explicitly recognises the fundamental differences between the life-cycle viewpoints on a product, and aims at solutions that can also work on the basis of distributed, heterogeneous information and systems. The term is also intended to cover activities denoted by the related expressions "virtual prototyping" and "virtual manufacturing".

From the viewpoint of information technology, the research challenge of virtual engineering is to identify and develop computational tools that support co-operation, information sharing, and coordination of activities for engineering teams in a virtual organization. These tools must be capable of operating in a distributed, heterogeneous environment where direct data sharing by means of jointly used data repositories or the like is impossible or impractical. Further challenges are posed by the potentially large volume of shared data and the ill-structuredness of some of the necessary information.

In the presentation, I will first analyse the basic requirements that computational tools for virtual engineering should satisfy. Next, industrial requirements of virtual engineering are discussed on the basis of studying the business processes that underlie and delineate virtual engineering activities. Finally, three application case studies of virtual engineering are briefly discussed to study the suitability of the above techniques. They are:

  • Design Process and Rationale Capture and Deployment, where the focus is on sharing product information across design teams during novel design
  • Computational Infrastructure for Life-Cycle Assessment, where the focus is on sharing and reusing product information across different engineering applications (including legacy systems) and company borders during variant design
  • Virtual Engineering and Construction of Multi-Supplier Projects, the focus of which is on sharing engineering process information to coordinate the activities of cooperating companies and on using the shared models to study and visualise the process.