Find the upcoming dates on this page.

Previous Talks

Speaker: Raphael Grasset (iMAGIS/GRAVIR, Grenoble, France)

Details

Category

Duration

30 + 10 min
Host: DS

Speaker: Michael Haller (FHS Hagenberg, Austria)

Media Technology and Design (MTD) is a 4-year engineering program focused on technical and creative aspects of digital media. MTD is one of several IT-related programs offered by the College of Engineering at Hagenberg as part of the Upper Austrian University of Applied Sciences (Europe).

The engineering part of the MTD curriculum includes audio/video technology, computing, networking and multi-media programming as core subjects, with a careful balance of basic and selected in-depth material.

The design part of the program starts out with elementary drafting techniques, study of shape and color, art and media history, typography, sound, and music; this is followed by advanced subjects such as multi-media design, audio and video design, 3D modeling and animation, media production and experimental media.

The Media Technology and Design (MTD) curriculum focuses on three broad areas:

  • Multimedia
  • 3D computer graphics and animation
  • Internet/WWW

The talk describes the activities of MTD concerning computer graphics, Virtual Reality, and Augmented Reality. We will give a short overview of the courses and show some student projects. Finally, we will describe the EU-funded project AMIRE (Authoring Mixed Reality).


Details

Category

Duration

30 + 15 min
Host: DS

Speaker: MARCO ANTONIO GÓMEZ-MARTÍN (Universidad Complutense de Madrid, Spain), PEDRO PABLO GÓMEZ-MARTÍN (Universidad Complutense de Madrid, Spain)

The increase in the number of tourists who visit fragile monuments endangers their preservation. In other cases, an interested person cannot afford the long trip required to reach the place she wants to see. Virtual archaeology seems to be the solution to both problems, because the user can view the monument on the computer and walk through it without any physical contact. Additionally, these virtual reconstructions allow the user to ask for information about the different elements she finds there. We will show three examples of this kind of application, such as the virtual visit to Nefertari's tomb, which we rebuilt using plans and pictures. We will describe some of the computer graphics techniques we have used to develop them. Finally, we will talk about our current research area: the addition of avatars or agents that guide the visitor through these environments.

Speaker: Soeren Grimm (Germany)

Today's real-world medical visualization systems are much more than just the visualization. Such systems have a back-end that stores the medical data and reports, and a front-end that assists the user in analyzing and examining the data. The front-end provides means to segment manually and automatically, carve, measure, annotate, etc., and to view the data in 2D or 3D. The visualization is a small but very crucial part of such a system: it is the visual feedback the user receives after every operation, and therefore it has to be interactive and of high quality. The 3D view is essentially volume rendering, which demands enormous computational power and memory bandwidth to achieve both high quality and interactivity. There are several ways to do volume rendering, yet it is still not clear which is best suited to a real-world visualization system. This talk presents four different approaches to volume rendering - based on SIMD, VolumePro, texture mapping, and finally the pure CPU - their underlying volume memory layouts, and their usability in real-world visualization systems.

Keywords: Volume rendering, Ray casting, Texture Mapping, Multithreading, Hyperthreading, OpenGL, VolumePro, Parallel processing.
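The four rendering paths differ mainly in where the sampling and compositing work happens; the pure-CPU path boils down to casting one ray per pixel and compositing samples front to back. The sketch below illustrates only that basic idea, with orthographic rays and an invented transfer function; it is not code from any of the systems discussed.

```python
import numpy as np

def raycast(volume, transfer, step=0.5):
    """Minimal orthographic ray caster: rays travel along the z axis.

    volume  : 3D array of scalar densities in [0, 1]
    transfer: maps a density sample to (r, g, b, alpha)
    Returns an RGB image, one ray per (x, y) voxel column.
    """
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny, 3))
    for i in range(nx):
        for j in range(ny):
            color = np.zeros(3)
            alpha = 0.0
            z = 0.0
            while z < nz - 1 and alpha < 0.99:    # early ray termination
                z0 = int(z)
                t = z - z0
                # interpolate along z only (nearest in x, y for brevity)
                s = (1 - t) * volume[i, j, z0] + t * volume[i, j, z0 + 1]
                r, g, b, a = transfer(s)
                a *= step                          # opacity correction for step size
                # front-to-back "over" compositing
                color += (1 - alpha) * a * np.array([r, g, b])
                alpha += (1 - alpha) * a
                z += step
            image[i, j] = color
    return image

# An invented transfer function: density maps to a warm opacity ramp.
tf = lambda s: (s, s * 0.5, 0.2, s)

vol = np.zeros((8, 8, 16))
vol[2:6, 2:6, 4:12] = 0.8      # an opaque block in the middle of the volume
img = raycast(vol, tf)
```

Even this toy version shows why the pure-CPU path needs so much bandwidth: every ray touches a long run of voxels, which is exactly what the memory-layout choices in the talk are about.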

Speaker: Kevin Taylor (Purdue University at Kokomo), Emília Mironovová (Slovak University of Technology)

As we continue to merge global markets, it is inevitable that many of today's graduates will participate in international activities when they enter the workforce. It is imperative that we prepare our students for this global work environment. Described here is a project between students in the United States and the Slovak Republic aimed at improving both technical communication and cultural understanding between the two groups. The students in the United States were seniors in a two-semester capstone design sequence in Electrical Engineering Technology (EET) at Purdue University. The Slovak students were Ph.D. candidates from the Faculty of Materials Science at the Slovak University of Technology (SUT) studying Material Science, Plant Management, Automation and Control, and Machine Technologies. The SUT students were enrolled in a course entitled "English for Specific Purposes", allowing all communication to be in English. The students were paired and exchanged biographies, resumes (CVs), and technical works such as design proposals and research abstracts. Internet cameras purchased using grant funds facilitated on-line meetings throughout the year-long project. Since the two groups were from different disciplines, clear English communication was a necessity. By reviewing the material written by the SUT students, the EET students became sensitized to the problems caused by their own use of idiomatic phrases and incomplete descriptions. The SUT students benefited by practicing reading, writing, and speaking English through their correspondence and online meetings with the EET students. Both groups reviewed technical English written by peers, including its flaws and idiomatic expressions. The primary advantage of this collaboration is that it is not constrained by curricular discipline, making it easily adaptable by other disciplines. A secondary advantage is that the students gain international experience while avoiding travel expenses.

Details

Category

Duration

45 min
Host: KB

Speaker: Damian Green (Multimedia Research Group, Brunel University)

During an archaeological dig, a great deal of data relating to stratigraphic positioning (SP) is recorded. This data is captured in a variety of formats: individual excavation logbooks, stratigraphy forms, and theodolites. The widely used archaeological practice for the analysis and representation of SP is the Harris Matrix approach [Harri89]. This is a valuable technique for analysing and comparing 2D SP data; now, with the advent of cheap and powerful 3D computing, there is a growing need for the archaeologist on site to test hypotheses and obtain immediate results. The 3D representation and analysis of this SP data, with the ability to test hypotheses in real time without prolonged sifting through hard copies of excavation logbooks, presents a real innovation for future archaeological interpretation. The ability to replay the excavation in temporal order, stratum by stratum, after it has been excavated gives both the casual user and the specialist archaeologist insight previously not possible.
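A Harris Matrix records which stratum lies above which, i.e. a directed acyclic graph of deposition order, and replaying the excavation stratum by stratum amounts to a topological sort of that graph. A small illustrative sketch follows; the site and unit names are invented and not from the talk.

```python
from collections import defaultdict, deque

def deposition_order(below):
    """Topologically sort a Harris-matrix-style DAG.

    `below` maps each stratum to the strata directly beneath it
    (i.e. deposited earlier).  Returns the strata oldest-first --
    the order needed to replay the excavation stratum by stratum.
    """
    units = set(below)
    for earliers in below.values():
        units.update(earliers)
    indegree = {u: 0 for u in units}       # earlier strata still pending
    later_of = defaultdict(list)           # earlier stratum -> strata above it
    for later, earliers in below.items():
        for e in earliers:
            later_of[e].append(later)
            indegree[later] += 1
    queue = deque(u for u in units if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in later_of[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(units):
        raise ValueError("cyclic stratigraphy records -- check the logbooks")
    return order

# Hypothetical site: topsoil over a pit fill, pit cut into natural.
strata = {"topsoil": ["pit fill"], "pit fill": ["natural"], "natural": []}
print(deposition_order(strata))    # -> ['natural', 'pit fill', 'topsoil']
```

The cycle check matters in practice: contradictory logbook entries ("A above B" and "B above A") are exactly the kind of inconsistency such a tool should surface to the archaeologist.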

Details

Category

Duration

45 + 15

Speaker: Francisco Rodriguez (Matic Research Laboratory, Department of Computing Science, University of Glasgow)

Details

Category

Duration

45 + 15

Speaker: Stefan Schlechtweg (Otto-von-Guericke-University Magdeburg)

In recent years, non-photorealistic rendering (NPR) has developed into an interesting new branch of computer graphics. The goal of NPR methods is to use computer graphics techniques to generate a variety of rendering styles that depart from conventional photorealistic imagery. The application areas of non-photorealistic images are very broad, ranging from illustrations in manuals, through aesthetically pleasing graphics and animations, to real-time graphics in games. The talk gives an overview of the most important methods for generating NPR graphics and attempts to point out possible applications.
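One of the simplest NPR styles such overviews typically cover is cel (toon) shading, which quantizes continuous shading into a few flat bands. The sketch below shows only that one technique and is not taken from the talk; all names and band counts are illustrative.

```python
import numpy as np

def toon_shade(normal, light_dir, bands=3):
    """Quantize Lambertian shading into discrete flat bands --
    the classic cel/toon look, a minimal NPR style."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    diffuse = max(np.dot(n, l), 0.0)       # standard Lambert term in [0, 1]
    # snap the continuous intensity to one of `bands` flat levels
    return np.floor(diffuse * bands) / bands

# Surface facing the light lands in the brightest band; facing away is black.
print(toon_shade([0, 0, 1], [0, 0, 1]))    # -> 1.0
print(toon_shade([0, 0, 1], [0, 0, -1]))   # -> 0.0
```

Replacing the `floor` quantization with a small lookup table of hand-painted tones gives the stylistic control that distinguishes NPR from plain photorealistic shading.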

Speaker: Daniel Thalmann (Swiss Federal Institute of Technology, Lausanne, Switzerland)

Simulation, VR, and entertainment applications (games, films) need Virtual Humans able to move in a flexible and elegant way. They should increasingly react to the other characters and to the user. Moreover, animating crowds is challenging both in character animation and in virtual city modeling. The problem is basically to be able to generate variety among a finite set of motion requests and then to apply it to either an individual or a member of a crowd. A single autonomous agent and a member of a crowd present the same kind of 'individuality'; the only difference lies at the level of the modules that control the main set of actions.

Biography

Daniel Thalmann is a pioneer in research on Virtual Humans. His current research interests include real-time Virtual Humans in Virtual Reality, networked virtual environments, artificial life, and multimedia. He is coeditor-in-chief of the Journal of Visualization and Computer Animation and a member of the editorial boards of the Visual Computer, the CADDM Journal (China Engineering Society), and Computer Graphics (Russia). He is cochair of the EUROGRAPHICS Working Group on Computer Simulation and Animation and a member of the Executive Board of the Computer Graphics Society. Daniel Thalmann has been a member of numerous program committees, program chair of several conferences, and chair of the Computer Graphics International '93, Pacific Graphics '95, ACM VRST '97, and MMM '98 conferences. He is program cochair of IEEE VR 2000. He has also organized four courses at SIGGRAPH on human animation. He has published more than 250 papers on graphics, animation, and Virtual Reality. He is coeditor of 25 books and coauthor of several books, including the recent "Avatars in Networked Virtual Environments", published by John Wiley and Sons. He was also codirector of several computer-generated films featuring synthetic actors, including a synthetic Marilyn shown on numerous TV channels all over the world.

Details

Category

Duration

45 + 15

Speaker: Bernd Froehlich (Bauhaus Univ. Weimar)

In this talk we describe tools and techniques for the exploration of geo-scientific data from the oil and gas domain in stereoscopic virtual environments. The two main sources of data in the exploration task are seismic volumes and multivariate well logs of physical properties down a borehole. We have developed a props-based interaction device called the Cubic Mouse to allow more direct and intuitive interaction with a cubic seismic volume. The device consists of a cube-shaped box with three perpendicular rods passing through the center and buttons on the top for additional control. The rods represent the X, Y, and Z axes of a given coordinate system: pushing and pulling the rods specifies constrained motion along the corresponding axes, and twisting the rods typically produces rotations around them. Embedded within the device is a six-degree-of-freedom tracking sensor, which allows the rods to be continually aligned with a coordinate system located in a virtual world. This device effectively places the seismic cube in the user's hand. We have also integrated the device with other visualization systems for crash engineers and flow simulations. In these systems the Cubic Mouse controls the position and orientation of a virtual model, and the rods move three orthogonal cutting or slicing planes through the model. We have also developed a 3D-texture-based multi-resolution approach for handling massive volumetric data sets, which are common in the oil and gas industry and the medical domain, and which result from computer simulations. Due to the restricted texture memory available and the limited bandwidth into the texture memory, these data sets cannot be rendered at full resolution. Our approach uses a two-level hierarchical paging technique to guarantee a given frame rate: it displays lower resolutions of the data when a slice or volume rendering is moved quickly through the data set, and fills in the high resolution when the user slows down or stops. This behaviour correlates well with motion blur.
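The resolution-selection policy can be pictured as a simple mapping from interaction speed to detail level. The sketch below is a toy stand-in with invented thresholds, not the actual two-level paging scheme from the talk.

```python
def select_lod(speed, levels=3, refine_below=0.05, coarse_above=0.5):
    """Pick a resolution level from the current interaction speed.

    Fast motion -> coarsest level (cheap, and perceptually masked,
    much like motion blur); near-still -> finest level.
    All thresholds here are illustrative, not from the system.
    """
    if speed >= coarse_above:
        return levels - 1               # coarsest level
    if speed <= refine_below:
        return 0                        # full resolution
    # interpolate between finest and coarsest for intermediate speeds
    t = (speed - refine_below) / (coarse_above - refine_below)
    return int(round(t * (levels - 1)))

print(select_lod(1.0))    # fast drag  -> coarsest level
print(select_lod(0.0))    # at rest    -> full resolution
```

The real system adds the second level of the hierarchy on top of this: pages of texture data are streamed in asynchronously so that the chosen level is actually resident when it is needed.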

Details

Category

Duration

60 min

Speaker: Torsten Möller (Simon Fraser University, Vancouver, Canada)

Volume rendering is a subfield of graphics that deals with the exploration, communication, and presentation of medical or scientific data. The presentation on a computer screen reduces the 3D nature of the data by one dimension. The 3D understanding of these data sets can be enhanced using so-called motion parallax, i.e., real-time interaction with the 2D display. Hence real-time rendering algorithms are crucial for the visualization of complex volumetric data.

In this talk I will survey typical volume rendering techniques and the current status of such algorithms. I will include the premise of high-quality visualization, since for many applications, especially medical ones, the reliability of the representation plays an important role. I will survey current software and hardware developments. In particular, I will discuss several results on the improvement of splatting, one specific volume rendering method. I will argue that splatting is one of the most promising volume rendering algorithms, since it can achieve high frame rates as well as high-quality images. I hope that some of the conclusions I offer will stimulate debate.
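Splatting projects each volume sample onto the image plane as a reconstruction-kernel footprint and accumulates the contributions. The sketch below is deliberately minimal (orthographic projection, additive accumulation, a Gaussian evaluated directly rather than the precomputed footprint tables and sheet-based compositing real splatters use) and is not the speaker's implementation.

```python
import numpy as np

def splat(points, values, size=32, sigma=1.0):
    """Throw a Gaussian footprint onto the image plane for each
    sample and accumulate the contributions additively."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for (x, y), v in zip(points, values):
        # the "splat": a 2D Gaussian centered on the projected sample
        footprint = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        img += v * footprint
    return img

# Two invented samples with different densities.
img = splat([(8, 8), (20, 20)], [1.0, 0.5])
```

The speed/quality argument for splatting is visible even here: work is proportional to the number of contributing samples rather than to rays marched through empty space, and kernel quality directly controls image quality.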

Details

Category

Duration

45 + 15

Speaker: Klaus Pirklbauer (Software Competence Center Hagenberg), Werner Winiwarter (Software Competence Center Hagenberg)

This talk offers an overview of the research activities at the Software Competence Center Hagenberg (SCCH). After a general presentation of the SCCH, we provide a summary of its main research topics. In the second part of the talk we focus on current projects in the knowledge-based area of the SCCH. Finally, we describe multilevel data mining methods in detail and present results of their application to image segmentation.

Details

Category

Duration

60 min
Host: VRVis

Speaker: Joaquim Jorge (Instituto Superior Técnico, Lisboa, Portugal)

Details

Category

Duration

60 min
Host: Meister

Speaker: Attila Neumann (Budapest, Hungary)

Details

Category

Duration

90 min
Host: WP

Speaker: Stefan Krass (MeVis, University Bremen, Germany)

Bronchial carcinoma is the cancer with the highest mortality rate. Diagnosis and planning of surgical therapy require exact localization of the tumor and as accurate a prognosis of postoperative lung function as possible. Preoperative determination and visualization of the lung-lobe segments would make an important contribution to meeting these morphological and functional requirements. The talk describes a method for segment determination based on computed tomography (CT) data.

The bronchial tree is segmented by a special region-growing procedure. After skeletonizing the segmentation result and converting the skeleton into a graph representation, the subtrees of the lung lobes and lobe segments can be identified. An algorithm based on growth models approximates the boundaries of the lobe segments from the identified subtrees and from the parenchyma boundary, which is also segmented.

In a feasibility study, the method was applied to clinical single-slice and multi-slice spiral CTs. Validation was performed in vitro on preparations of the human lung.

In clinical CT data, reliable segmentation of the bronchial tree was possible up to the 3rd order (single-slice CT) and up to the 5th order (multi-slice CT). Validation showed an accuracy of the segment approximation of 70 % (single-slice CT) and 80 % (multi-slice CT).

The presented method improves the assessment of the spatial relationship between tumors and segments. Furthermore, an improved estimation of postoperative lung function can be expected.
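Region growing, the core of the bronchial-tree segmentation step, starts from a seed voxel and repeatedly absorbs neighbours whose values stay close to the seed's. A 2D toy version is sketched below; the image values and threshold are invented, and the clinical procedure is of course a specialized 3D variant.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed`, accepting 4-neighbours whose
    value stays within `threshold` of the seed value."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy slice: dark air-filled airway lumen inside brighter tissue.
img = [[100, 100, 100, 100],
       [100,  10,  12, 100],
       [100,  11, 100, 100],
       [100, 100, 100, 100]]
airway = region_grow(img, (1, 1), threshold=5)
```

The branching region that results is what gets skeletonized and turned into the graph from which the lobe and segment subtrees are identified.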

Details

Category

Duration

60 min
Host: MEG

Speaker: Keith Andrews (IICM, Graz University of Technology, Austria)

Information visualisation seeks to take advantage of the human visual perception system's ability to rapidly process graphical displays, making the presented information and its associated structure both rapidly understood and easily explored. This talk will look both at general principles for information visualisation and at specific examples of techniques under development at the IICM.

Biography

Keith Andrews is an assistant professor at the Institute for Information Processing and Computer Supported New Media (IICM) at Graz University of Technology, in Austria. His research interests include hypermedia, human-computer interaction, computer graphics, and the web. He holds a B.Sc.(Hons) in Mathematics and Computer Science from the University of York, England, and an M.Sc. and Ph.D. in Technical Mathematics/Computer Science from Graz University of Technology. Having led the Harmony (Unix/X11 browser for Hyperwave) and VRwave VRML browser projects for several years, he is currently pursuing research in the emerging field of information visualisation. He teaches a graduate-level course on Human-Computer Interaction.

Details

Category

Duration

45 min + questions
Host: MEG

Speaker: Prof. Dr. Christian Breiteneder (Interactive and Multimedia Systems Group, Institute of Software Engineering, Vienna University of Technology)

Details

Category

Duration

30 min
Host: DS

Speaker: Jiri Bittner (Computer Graphics Group, Czech Technical University of Prague)

Details

Category

Duration

30 min
Host: WP

Speaker: Wolfgang Birkfellner (Department of Biomedical Engineering and Physics, AKH Vienna, Austria)

Details

Category

Duration

45 + 15

Speaker: Fredo Durand (LCS Graphics Group, MIT)

Visibility problems are central to many computer graphics applications. The most common examples include hidden-part removal for view computation, shadow boundaries, mutual visibility of pairs of points, etc. In this document, we first present a theoretical study of 3D visibility properties in the space of light rays. We group rays that see the same object; this defines the 3D visibility complex. The boundaries of these groups of rays correspond to the visual events of the scene (limits of shadows, disappearance of an object when the viewpoint is moved, etc.). We simplify this structure into a graph in line-space which we call the visibility skeleton. Visual events are the arcs of this graph, and our construction algorithm avoids the intricate treatment of the corresponding 1D sets of lines. We simply compute the extremities (lines with 0 degrees of freedom) of these sets, and we topologically deduce the visual events using a catalogue of adjacencies. Our implementation shows that the skeleton is more general, more efficient and more robust than previous techniques. Applied to lighting simulation, the visibility skeleton permits more accurate and more rapid simulations. We have also developed an occlusion culling preprocess for the display of very complex scenes. We compute the set of potentially visible objects with respect to a volumetric region. In this context, our method is the first which handles the cumulative occlusion due to multiple blockers. Our occlusion tests are performed in planes using extended projections, which makes them simple, efficient and robust. In the second part of the document, we present a vast survey of work related to visibility in various domains.

Details

Category

Duration

60 min
Host: MEG

Speaker: Gernot Schaufler (LCS Graphics Group, MIT)

Visibility determination is a key requirement in a wide range of graphics applications. This work introduces a new approach to the computation of volumetric visibility, the detection of occluded portions of space as seen from a given region. The method is conservative and classifies regions as occluded only when they are guaranteed to be invisible. It operates on a discrete representation of space and uses the opaque interior of objects as occluders. This choice of occluders facilitates their extension into opaque regions of space, in essence maximizing their size and impact. Our method efficiently detects and represents the regions of space hidden by such occluders and is the first to use the property that occluders can also be extended into empty space, provided that space is itself occluded as seen from the viewing volume. This proves extremely effective for computing the occlusion by a set of occluders, effectively realizing occluder fusion. An auxiliary data structure represents occlusion in the scene, which can then be queried to answer volume visibility questions. We demonstrate the applicability to visibility preprocessing for real-time walkthroughs and to shadow-ray acceleration for extended light sources in ray tracing, with significant speed-ups in all cases.
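The flavour of classifying cells of a discrete spatial representation against opaque occluders can be shown in miniature. The sketch below marches a single sample line per cell, which is a simplification for illustration only; it is not conservative in the paper's sense and has none of the occluder-extension or fusion machinery.

```python
def visible(grid, view, cell, samples=64):
    """Report whether `cell` can see `view` on a 2D occupancy grid
    (1 = opaque, 0 = empty) by marching one line of sample points
    between the two cell centers."""
    (vr, vc), (r, c) = view, cell
    for i in range(1, samples):
        t = i / samples
        sr, sc = round(vr + t * (r - vr)), round(vc + t * (c - vc))
        if (sr, sc) not in (view, cell) and grid[sr][sc]:
            return False                 # an opaque cell blocks the line
    return True

grid = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],    # one opaque occluder cell at (1, 2)
        [0, 0, 0, 0, 0]]
print(visible(grid, (1, 0), (1, 4)))    # blocked by the occluder
print(visible(grid, (0, 0), (2, 0)))    # clear column, visible
```

A conservative scheme must instead guarantee that *every* line between the two regions is blocked, which is exactly why the paper works with extended volumetric occluders rather than sampled rays.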

Details

Category

Duration

60 min
Host: MEG

Speaker: Nassir Navab (Siemens Corporate Research)

This talk presents the research and development on Augmented Reality at Siemens Corporate Research. Due to lack of time, I will present only one application, the Camera Augmented Mobile C-arm (CAMC), in detail. The rest of the presentation provides an overview of our other research activities. I will also present a series of live demos.

The Camera Augmented Mobile C-arm (CAMC) consists of an optical camera attached to a mobile X-ray C-arm. This was originally introduced for dynamic calibration of the X-ray C-arm for 3D tomographic reconstruction (MICCAI'99). We compare the CAMC reconstruction results with those obtained using an external tracking system (Polaris from Northern Digital) for dynamic calibration (CVPR'00-1). We then add a double-mirror system in order to create similar geometry for both the X-ray and optical imaging systems. This results in the first real-time integration of X-ray and optical images. Finally, we run our visual-servoing-based precise needle placement (CVPR'00-2) under X-ray augmented video control. This introduces a new visualization tool and reduces the X-ray exposure of both patient and physician.

A series of demonstrations presents other areas of research and development in our augmented reality group (WACV'98, IWAR'99, CVPR'00, ICME'00). In particular, we present a software package called CyliCon for 3D reconstruction and AR applications in industrial environments.

Related Publications:

  • MICCAI'99: N. Navab and M. Mitschke and O. Schuetz, "Camera augmented Mobile C-arm (CAMC) Application: 3D reconstruction using a low-cost mobile C-arm", Proceeding of the Second International Conference on Medical Image Computing and Computer-Assisted Intervention, Cambridge, England, September 1999.
  • CVPR'00-1: M. Mitschke and N. Navab, "Recovering projection geometry: how a cheap camera can outperform an expensive stereo system", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-2: N. Navab and B. Bascle and M. H. Loser and B. Geiger and R. H. Taylor, "Visual servoing for Automatic and uncalibrated needle placement for percutaneous procedures", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-3: N. Navab, Y. Genc, and M. Appel. "Lines in one orthographic and two perspective views", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-4: B. Thirion, B. Bascle, V. Ramesh, and N. Navab, "Fusion of Color, Shading and Boundary Information For Factory Pipe Segmentation", CVPR, Hilton Head Island, SC, USA, June 2000.
  • ICME'00: X. Zhang, N. Navab, S. Liou, "E-Commerce Direct Marketing using Augmented Reality", IEEE International Conference on Multimedia and Expo, Jul. 30 - Aug. 2, 2000, New York City.
  • IWAR'99: N. Navab, B. Bascle, M. Appel, and E. Cubillo. "Scene augmentation via the fusion of industrial drawings and uncalibrated images with a view to marker-less calibration". In Proc. IEEE International Workshop on Augmented Reality, San Francisco, CA, USA, October 1999.

Details

Category

Duration

45 + 15

Speaker: Jiri Sochor (Faculty of Informatics, Masaryk University Brno)

Haptic visualization refers to the perception of information through the haptic sense. Haptic devices capable of teleoperation often use a force-feedback (FFB) control scheme that plays an important role in human-computer interaction. The talk will describe several projects currently investigated at our HCI Laboratory using the PHANTOM device. These include haptic visualization, FFB-enhanced manipulation, and applications in computational chemistry. Open problems such as haptic tracking, FFB stability, haptic hints, and haptic textures will also be mentioned.
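A standard building block behind force-feedback rendering is the penalty-based spring-damper: when the stylus penetrates a virtual surface, a restoring force proportional to the penetration depth (damped by the penetration velocity) pushes it back out. The gains below are illustrative, not values from the lab's projects.

```python
def contact_force(depth, velocity, k=800.0, b=2.5):
    """Penalty-based haptic contact force (1D, surface-normal direction).

    depth    : penetration depth into the surface (m); <= 0 means no contact
    velocity : penetration velocity (m/s), positive going deeper
    k, b     : illustrative spring/damper gains -- real devices like the
               PHANTOM run this loop at ~1 kHz and must keep the stiffness
               within the hardware's stable range (the FFB stability
               problem mentioned above)
    """
    if depth <= 0.0:
        return 0.0                  # no contact, no force
    force = k * depth - b * velocity
    return max(force, 0.0)          # never pull the stylus into the surface

print(contact_force(0.002, 0.0))    # ~1.6 N for 2 mm of penetration
```

The clamp on the last line is a typical practical detail: letting the damper term reverse the force direction would feel like the surface sucking the stylus in.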

Details

Category

Duration

60 min
Host:  

Speaker: Pavel Slavik (Computer Graphics Group, Czech Technical University of Prague)

Scientific visualization is penetrating new application areas in order to give the user better means of interpreting application-specific data. Our research has concentrated on technological processes in power plants. Simulation and visualization of some processes could be covered by existing software, but some specific processes are not covered at all. The software used is mostly based on complex mathematical theories, which results in computationally demanding calculations. The goal of our research was to create simulation and visualization tools that could be used in education. The algorithms developed are generally less accurate than the complex algorithms currently used, but they provide results very quickly. This allows students to get a feeling for the behavior of specific processes in a short time. The algorithms developed during our research are mostly based on particle systems and include the simulation and visualization of the following processes:

  • air pollution
  • combustion processes
  • coal transport
  • hot fluid gas filtering
  • coal drying
  • etc.

The algorithms developed are subject to ongoing improvement and verification based on real data obtained from measurements in actual power plants.
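A particle system for a process such as air pollution can be very small: emit particles at the source, advect them with the wind, and jitter them with random turbulence. All parameters below are invented for illustration and have no connection to the plant data mentioned above.

```python
import random

def advect_particles(n=500, steps=50, wind=(0.2, 0.0), spread=0.05, seed=1):
    """Toy smoke-stack plume: n particles start at the origin and are
    advected by a constant wind with Gaussian turbulent jitter."""
    rng = random.Random(seed)
    particles = [[0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        for p in particles:
            # constant advection plus per-step turbulence
            p[0] += wind[0] + rng.gauss(0.0, spread)
            p[1] += wind[1] + rng.gauss(0.0, spread)
    return particles

cloud = advect_particles()
mean_x = sum(p[0] for p in cloud) / len(cloud)    # plume drifts downwind
```

This captures why particle methods suit the educational goal: they trade the accuracy of a full transport model for an immediately visible, cheaply computed approximation of the plume's behavior.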

Details

Category

Duration

60 min
Host: MEG