Current Schedule

In the summer term of 2000, the following talks will be organized by our Institute. The talks are partially financed by the "Arbeitskreis Graphische Datenverarbeitung" of the OCG (Austrian Computer Society).

Date | Speaker | Title | Time | Location
19.05.2000 | Pavel Slavik (Computer Graphics Group, Czech Technical University of Prague) | Visualization of Technological Processes | 10.00-11.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock
19.05.2000 | Jiri Zara (Computer Graphics Group, Czech Technical University of Prague) | Distributed Learning Environment Without Avatars | 11.00-12.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock
09.06.2000 | Jiri Sochor (Faculty of Informatics, Masaryk University Brno) | Human Computer Interaction with Force-feedback | 10.00-11.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock
21.06.2000 | Nassir Navab (Siemens Corporate Research) | Medical and Industrial Augmented Reality Research at Siemens Corporate Research | 10.00-11.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock
30.06.2000 | Fredo Durand (LCS Graphics Group, MIT) | 3D Visibility: Analytical Study and Applications | 10.00-11.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock
30.06.2000 | Gernot Schaufler (LCS Graphics Group, MIT) | Conservative Volumetric Visibility with Occluder Fusion | 11.00-12.00 s.t. | Seminarraum 186, Favoritenstraße 9, 5. Stock

Abstracts

Visualization of Technological Processes

Pavel Slavik, Computer Graphics Group, Czech Technical University of Prague

Scientific visualization is penetrating new application areas in order to give users a better means of interpreting application-specific data. Our research has concentrated on technological processes in power plants. Simulation and visualization of some processes can be covered by existing software, but certain specific processes are not covered at all. The software used is mostly based on complex mathematical theories, which results in computationally demanding calculations. The goal of our research was to create simulation and visualization tools that can be used in education. The algorithms developed are generally less accurate than the complex algorithms currently in use, but they provide results very quickly. This allows students to get a feel for the behavior of specific processes in a short time. The algorithms developed during our research are mostly based on particle systems and include the simulation and visualization of the following processes:

  • air pollution
  • combustion processes
  • coal transport
  • hot fluid gas filtering
  • coal drying
  • etc.

The algorithms developed are subject to ongoing improvement and verification based on real data obtained from measurements in actual power plants.
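As a rough illustration of the particle-system approach mentioned above (this is a minimal sketch with made-up parameters, not the actual simulation code), a pollution plume can be modeled as particles emitted from a source and advected by wind plus random turbulence, trading physical accuracy for speed:

```python
import random

class Particle:
    """One plume particle: position, velocity, and age (in steps)."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 1.0   # constant upward drift (buoyancy)
        self.age = 0

def emit(particles, n=10):
    """Emit n new particles at the stack outlet (here: the origin)."""
    for _ in range(n):
        particles.append(Particle(0.0, 0.0))

def step(particles, wind=0.5, turbulence=0.2, dt=0.1):
    """Advance all particles one time step with simple Euler integration."""
    for p in particles:
        p.vx += wind * dt + random.uniform(-turbulence, turbulence)
        p.x += p.vx * dt
        p.y += p.vy * dt
        p.age += 1
    # drop old particles -- a cheap stand-in for dissipation
    particles[:] = [p for p in particles if p.age < 200]

particles = []
for frame in range(50):
    emit(particles)
    step(particles)
```

Rendering the particles as translucent points or sprites each frame yields an interactive, approximate visualization of the plume, which matches the stated goal of fast, education-oriented tools rather than high-accuracy simulation.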

Distributed Learning Environment Without Avatars

Jiri Zara, Computer Graphics Group, Czech Technical University of Prague

The experimental system DILEWA, which uses virtual reality for educational purposes, will be described. The main difference from commonly used systems for distributed VR (such as Blaxxun) is that DILEWA users can act in three different roles: tutor, dependent participant, and independent participant. Within one shared virtual world, a single user acts as the tutor, and the others can either watch the world through his or her eyes or work independently. Such a system does not need visible avatars, only the distribution of all of the tutor's activities to the audience. The pilot version of DILEWA has been implemented using VRML, the EAI, and Java.
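The role model can be sketched as follows (an illustrative toy, not the actual DILEWA protocol or its class names): the shared world forwards the tutor's camera state to every dependent participant, while independent participants keep their own viewpoint, so no avatar geometry is ever needed:

```python
class Participant:
    """A user in the shared world, acting in exactly one of three roles."""
    def __init__(self, name, role):
        assert role in ("tutor", "dependent", "independent")
        self.name, self.role = name, role
        self.viewpoint = (0.0, 0.0, 0.0)   # camera position in the world

class SharedWorld:
    def __init__(self):
        self.participants = []

    def join(self, participant):
        self.participants.append(participant)

    def tutor_moved(self, new_viewpoint):
        """Distribute the tutor's camera to everyone watching through
        the tutor's eyes; independents are deliberately left alone."""
        for p in self.participants:
            if p.role in ("tutor", "dependent"):
                p.viewpoint = new_viewpoint

world = SharedWorld()
world.join(Participant("T", "tutor"))
world.join(Participant("D", "dependent"))
world.join(Participant("I", "independent"))
world.tutor_moved((1.0, 2.0, 3.0))
```

In the real system this distribution would travel over the network and drive each client's VRML Viewpoint node via the EAI, but the essential point is the same: only the tutor's activity stream is shared, not avatar representations.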

Human Computer Interaction with Force-feedback

Jiri Sochor, Faculty of Informatics, Masaryk University Brno

Haptic visualization refers to the perception of information through the haptic sense. Haptic devices capable of teleoperation often use a force-feedback control scheme, which plays an important role in human-computer interaction. The talk will describe several projects currently under investigation at our HCI Laboratory using the PHANTOM device. These include haptic visualization, force-feedback-enhanced manipulation, and applications in computational chemistry. Open problems such as haptic tracking, force-feedback stability, haptic hints, and haptic textures will also be discussed.
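The core of force-feedback rendering on impedance-type devices such as the PHANTOM can be sketched with a virtual-wall spring model (a standard textbook simplification, with illustrative names and values, not code from the talk): when the device tip penetrates a virtual surface, a restoring force proportional to the penetration depth is sent to the motors:

```python
def contact_force(tip_z, surface_z=0.0, stiffness=800.0):
    """Feedback force (N) along z for a horizontal virtual wall.

    tip_z: device tip position in meters; the wall interior is z < surface_z.
    stiffness: virtual wall stiffness in N/m (Hooke's law).  Pushing this
    value too high is one source of the force-feedback instability listed
    among the open problems: the discrete control loop starts to oscillate.
    """
    penetration = surface_z - tip_z
    if penetration <= 0.0:
        return 0.0                       # no contact, no force
    return stiffness * penetration       # push the tip back out of the wall

# e.g. 1 mm of penetration into an 800 N/m wall yields a 0.8 N restoring force
force = contact_force(-0.001)
```

A real haptic loop evaluates such a law at around 1 kHz, which is why the force computation must stay this cheap even when the underlying scene (a molecule, a data set) is complex.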

Medical and Industrial Augmented Reality Research at Siemens Corporate Research

Nassir Navab, Siemens Corporate Research

This talk presents research and development on Augmented Reality at Siemens Corporate Research. Due to time constraints, only one application, the Camera Augmented Mobile C-arm (CAMC), is presented in detail; the rest of the presentation provides an overview of our other research activities, accompanied by a series of live demos.

The Camera Augmented Mobile C-arm (CAMC) consists of an optical camera attached to a mobile X-ray C-arm. It was originally introduced for dynamic calibration of the X-ray C-arm for 3D tomographic reconstruction (MICCAI'99). We compare the CAMC reconstruction results with those obtained using an external tracking system (Polaris from Northern Digital) for dynamic calibration (CVPR'00-1). We then add a double-mirror system in order to create the same geometry for both the X-ray and optical imaging systems, resulting in the first real-time integration of X-ray and optical images. Finally, we run our visual-servoing-based precise needle placement (CVPR'00-2) under X-ray augmented video control. This introduces a new visualization tool and reduces the X-ray exposure of both patient and physician.
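The visual-servoing idea can be illustrated in its simplest form (a generic proportional control sketch, far simpler than the CVPR'00-2 system, with hypothetical names): each iteration measures the instrument's position in the image and commands a motion that removes a fixed fraction of the remaining image-space error:

```python
def servo_step(tip, target, gain=0.5):
    """One proportional control step: move a fraction of the image error."""
    return tuple(c + gain * (t - c) for c, t in zip(tip, target))

def servo(tip, target, tolerance=0.5, max_iters=100):
    """Iterate until the image-space error drops below tolerance (pixels)."""
    for _ in range(max_iters):
        error = max(abs(t - c) for c, t in zip(tip, target))
        if error < tolerance:
            break
        tip = servo_step(tip, target)
    return tip

# drive a simulated tip from (0, 0) toward an image target at (100, 40)
final = servo((0.0, 0.0), (100.0, 40.0))
```

Because the loop is closed on image measurements rather than on a calibrated 3D model, such schemes can work uncalibrated, which is the property the needle-placement work exploits; running the loop under augmented video rather than continuous fluoroscopy is what cuts the X-ray exposure.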

A series of demonstrations presents other areas of research and development in our augmented reality group (WACV'98, IWAR'99, CVPR'00, ICME'00). In particular, we present CyliCon, a software package for 3D reconstruction and AR applications in industrial environments.

Related Publications:

  • MICCAI'99: N. Navab, M. Mitschke, and O. Schuetz, "Camera Augmented Mobile C-arm (CAMC) application: 3D reconstruction using a low-cost mobile C-arm", Proceedings of the Second International Conference on Medical Image Computing and Computer-Assisted Intervention, Cambridge, England, September 1999.
  • CVPR'00-1: M. Mitschke and N. Navab, "Recovering projection geometry: how a cheap camera can outperform an expensive stereo system", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-2: N. Navab, B. Bascle, M. H. Loser, B. Geiger, and R. H. Taylor, "Visual servoing for automatic and uncalibrated needle placement for percutaneous procedures", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-3: N. Navab, Y. Genc, and M. Appel, "Lines in one orthographic and two perspective views", CVPR, Hilton Head Island, SC, USA, June 2000.
  • CVPR'00-4: B. Thirion, B. Bascle, V. Ramesh, and N. Navab, "Fusion of color, shading and boundary information for factory pipe segmentation", CVPR, Hilton Head Island, SC, USA, June 2000.
  • ICME'00: X. Zhang, N. Navab, and S. Liou, "E-Commerce direct marketing using Augmented Reality", IEEE International Conference on Multimedia and Expo, New York City, July 30 - August 2, 2000.
  • IWAR'99: N. Navab, B. Bascle, M. Appel, and E. Cubillo, "Scene augmentation via the fusion of industrial drawings and uncalibrated images with a view to marker-less calibration", Proceedings of the IEEE International Workshop on Augmented Reality, San Francisco, CA, USA, October 1999.

3D Visibility: Analytical Study and Applications

Fredo Durand, LCS Graphics Group, MIT

Visibility problems are central to many computer graphics applications; common examples include hidden-part removal for view computation, shadow boundaries, and the mutual visibility of pairs of points. In this talk, we first present a theoretical study of 3D visibility properties in the space of light rays. We group rays that see the same object; this defines the 3D visibility complex. The boundaries of these groups of rays correspond to the visual events of the scene (limits of shadows, disappearance of an object when the viewpoint is moved, etc.).

We simplify this structure into a graph in line space which we call the visibility skeleton. Visual events are the arcs of this graph, and our construction algorithm avoids the intricate treatment of the corresponding 1D sets of lines: we simply compute the extremities (lines with 0 degrees of freedom) of these sets and topologically deduce the visual events using a catalogue of adjacencies. Our implementation shows that the skeleton is more general, more efficient, and more robust than previous techniques. Applied to lighting simulation, the visibility skeleton permits more accurate and faster simulations.

We have also developed an occlusion-culling preprocess for the display of very complex scenes. We compute the set of potentially visible objects with respect to a volumetric region; in this context, our method is the first to handle the cumulative occlusion due to multiple blockers. Our occlusion tests are performed in planes using extended projections, which makes them simple, efficient, and robust. In the second part of the talk, we present a broad survey of work related to visibility in various domains.

Conservative Volumetric Visibility with Occluder Fusion

Gernot Schaufler, LCS Graphics Group, MIT

Visibility determination is a key requirement in a wide range of graphics applications. This work introduces a new approach to the computation of volumetric visibility: the detection of occluded portions of space as seen from a given region. The method is conservative and classifies regions as occluded only when they are guaranteed to be invisible. It operates on a discrete representation of space and uses the opaque interior of objects as occluders. This choice of occluders facilitates their extension into opaque regions of space, in essence maximizing their size and impact. Our method efficiently detects and represents the regions of space hidden by such occluders, and it is the first to exploit the property that occluders can also be extended into empty space, provided that space is itself occluded as seen from the viewing volume. This proves extremely effective for computing the occlusion due to a set of occluders, effectively realizing occluder fusion. An auxiliary data structure represents occlusion in the scene and can then be queried to answer volume-visibility questions. We demonstrate the applicability to visibility preprocessing for real-time walkthroughs and to shadow-ray acceleration for extended light sources in ray tracing, with significant speed-ups in all cases.
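The key property, that occluded empty space can itself be used as an occluder, is easiest to see in a deliberately degenerate toy version (this sketch assumes horizontal sight lines from the left, i.e. an orthographic simplification per grid row, and is far simpler than the paper's method): a cell is hidden if the cell in front of it is opaque or already hidden, so hidden empty cells keep blocking the cells behind them:

```python
OPAQUE, EMPTY = 1, 0

def hidden_from_left(grid):
    """Conservatively mark cells hidden under horizontal sight lines
    arriving from the left edge of the grid.

    grid: list of rows of 0/1 values (1 = opaque interior of an object).
    Returns a same-shaped grid of booleans.  A cell is hidden if its
    left neighbor is opaque *or already hidden* -- the second case is
    the extension of occluders into occluded empty space described in
    the abstract, which lets blockers and the space behind them fuse.
    """
    hidden = []
    for row in grid:
        h, blocked = [], False
        for cell in row:
            h.append(blocked)            # hidden if anything blocked earlier
            blocked = blocked or cell == OPAQUE
        hidden.append(h)
    return hidden

# One opaque cell hides every cell behind it in its row, including
# empty cells, which then count as occluders themselves:
grid = [[EMPTY, OPAQUE, EMPTY, EMPTY],
        [EMPTY, EMPTY,  EMPTY, EMPTY]]
vis = hidden_from_left(grid)
```

The actual method generalizes this idea to sight lines from an entire viewing volume in 3D, where the fusion of separate occluders through the occluded space between them is what makes the conservative classification powerful.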

TU Wien
Institute of Visual Computing & Human-Centered Technology
Favoritenstr. 9-11 / E193-02
A-1040 Vienna
Austria - Europe

Tel. +43-1-58801-193201
