Find the upcoming dates on this page.

Previous Talks

Speaker: Oliver Bimber (Bauhaus University, Weimar)

 The Virtual Showcase is a new projection-based Augmented Reality display that offers an imaginative and innovative way of accessing, presenting, and interacting with scientific and cultural content. Almost three years after the development of the first proof-of-concept prototype, the Virtual Showcase is turning into an efficient multi-user display that effectively addresses several shortcomings of today's Augmented Reality displays. I will give an overview of the different Virtual Showcase prototypes built so far, their technical components, and their recent application to the field of digital story-telling for education and scientific visualization.

References
http://www.nsf.gov/od/lpa/news/02/tip021022.htm
http://www.wissenschaft-online.de/artikel/605814&template=d_bnt_n_inhalt&_stempel_datum_bis=1034719199
http://cgw.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTICLE_ID=125448
http://www.vrnews.com/invited/fa20011207.html

Curriculum Vitae
Oliver Bimber is currently a scientist at the Bauhaus University Weimar, Germany. He received a Ph.D. in Engineering at the Technical University of Darmstadt, Germany under the supervision of Prof. Dr. Encarnação (TU Darmstadt) and Prof. Dr. Fuchs (UNC at Chapel Hill). From 2001 to 2002 Bimber worked as a senior researcher at the Fraunhofer Center for Research in Computer Graphics in Providence, RI/USA, and from 1998 to 2001 he was a scientist at the Fraunhofer Institute for Computer Graphics in Rostock, Germany. He initiated the Virtual Showcase project in Europe and the Augmented Paleontology project in the U.S.A. He received the degree of Dipl. Inform. (FH) in Scientific Computing from the University of Applied Science Giessen and a B.Sc. degree in Commercial Computing from the Dundalk Institute of Technology. In his career, Bimber has received several scientific achievement awards and is the author of more than thirty technical papers and journal articles. He was guest editor of the Computers & Graphics special issue on "Mixed Realities - Beyond Conventions", and has served as session chair and review committee member for several international conferences. Bimber has also given a number of guest lectures at recognized institutions, among them Brown University, Princeton University, the IBM T.J. Watson Research Center, the DaimlerChrysler Virtual Reality Competence Center, and the Mitsubishi Electric Research Lab (MERL). His research interests include display technologies, rendering, and human-computer interaction for Mixed Realities. Bimber is a member of IEEE, ACM, and ACM SIGGRAPH.

Details

Category

Duration

60 min
Host: DS

Speaker: Martin Wagner (Lehrstuhl für angewandte Softwaretechnik, Institut für Informatik, TU München, Germany)

Details

Category

Duration

45 + 15

Speaker: Anna Vilanova Bartroli (TU Eindhoven, Holland)

The group Biomedical Image Analysis (BMIA) is part of the Master program Biomedical Imaging and Informatics (BMI2), one of the four Master programs in the Department of Biomedical Engineering at Eindhoven University of Technology. In this talk I will present the structure and the ongoing research projects of BMIA.

Details

Category

Duration

45 + 15

Speaker: Jiri Bittner (Prague, Czech Republic)

Details

Category

Duration

30 min+ disc.
Supervisor:  

Speaker: Klaus Dorfmueller-Ulhaas (Germany)

Tracking user movements is one of the major low-level tasks which every Virtual Reality (VR) system needs to fulfill. There are different methods by which this tracking may be performed. Common tracking systems use magnetic or ultrasonic trackers in different variations, as well as mechanical devices. All of these systems have drawbacks which are caused by their principles of operation. Typically, the user has to be linked to a measurement instrument, either by cable or, even more restraining for the user, by a mechanical linkage. Furthermore, while mechanical tracking systems are extremely precise, magnetic and acoustic tracking systems suffer from different sources of distortion. For this reason, an optical tracking system has been developed which overcomes many of the drawbacks of conventional tracking systems. This work is focused on stereoscopic tracking, which provides an effective way to enhance the accuracy of optical trackers. Vision-based trackers in general facilitate wireless interaction with 3D worlds for the users of a virtual reality system. Additionally, the proposed tracker is very economical through the use of standard sensor technology, which further reduces cost. The proposed tracker provides accuracy in the sub-millimeter range and thus meets the requirements of most virtual reality applications. The presented optical tracker works with low-frequency light: it is based on retro-reflective, sphere-shaped markers illuminated with infrared light, so as not to interfere with the user's perception of a virtual scene on projection-based display systems in dimly lit environments. In contrast to commercial optical tracking systems, the outcome of this work operates in real time. Furthermore, the presented system can make use of very small cameras, making it applicable to inside-out tracking. This work presents novel approaches to calibrating a stereoscopic camera setup.
It utilizes the standard equipment used for commercial optical trackers in computer animation but, in contrast to calibration methods available today, calibrates internal and external camera parameters simultaneously, including lens distortion parameters. The calibration is very easy to use, fast, and precise. To provide the robustness required by most virtual reality applications, human motion needs to be tracked over time. This has often been done with a Kalman filter, which facilitates a prediction of motion that may not only increase the update rate of the tracking system but may also compensate for display lags of complex virtual scenes or for acquisition and communication delays. A new filter formulation is presented that may also be used with non-optical trackers providing the pose of an object with six degrees of freedom. Finally, some extensions to natural landmark tracking are presented using a contour-tracking approach. First experimental results of an early implementation are shown, detecting a human pointing gesture in environments with different lighting conditions and backgrounds. Perspectives are given on how this method could be extended to 3D model-based hand tracking using stereoscopic vision.
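The Kalman-filter-based motion prediction described above can be illustrated with a generic constant-velocity predict/update cycle for a single tracked coordinate. This is a minimal textbook sketch, not the speaker's six-degree-of-freedom formulation; the state model and the noise parameters `dt`, `q`, and `r` are invented tuning values:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.01, q=1e-3, r=1e-2):
    """One predict/update cycle for state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
    H = np.array([[1.0, 0.0]])                  # we only measure position
    Q = q * np.eye(2)                           # process noise
    R = np.array([[r]])                         # measurement noise
    # Predict: extrapolate state and covariance forward by dt.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the new position measurement z.
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The predicted position `(F @ x)[0]` between measurements is what compensates for display and communication latency: the renderer can be fed an extrapolated pose rather than the last measured one.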

Details

Category

Duration

20+10 min
Host: DS

Speaker: Martin Martinov ("St. Kliment Ohridski" University, Sofia, Bulgaria)

The possibilities of Java 2D, Java 3D, Java Swing, and Advanced Java Imaging for doing computer graphics will be presented and demonstrated in a custom-developed program implementing the following algorithms:

  • Line-drawing: Simple method, Bresenham method, Portion method
  • Line-smoothing: area based, distance-based
  • Circle-drawing: Simple method, Bresenham method, by second-order differences
  • Ellipse-drawing: Bresenham method
  • Line-clipping: Cohen-Sutherland method, Liang-Barsky method 
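As a reference point for the line-drawing algorithms listed above, a minimal integer-only Bresenham rasterizer might look as follows. This is a generic textbook sketch for illustration, not the speaker's Java implementation:

```python
def bresenham_line(x0, y0, x1, y1):
    """Return the integer pixel coordinates of a line from (x0,y0) to (x1,y1),
    using only integer additions and comparisons (no floating point)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1           # step direction in x
    sy = 1 if y0 < y1 else -1           # step direction in y
    err = dx + dy                       # accumulated error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                    # horizontal step
            err += dy
            x0 += sx
        if e2 <= dx:                    # vertical step
            err += dx
            y0 += sy
    return points
```

For example, `bresenham_line(0, 0, 5, 2)` yields six pixels stepping from (0, 0) to (5, 2), with the error term deciding at each step whether to advance diagonally.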

Details

Category

Duration

30 min
Host: Meister

Speaker: Ronan Boulic (EPFL Lausanne)

In this talk, I will present an IK architecture allowing the enforcement of multiple constraints distributed over an arbitrary number of priority levels. The cost of the IK algorithm building the projection operators is linear in the number of priority levels, thus allowing interactive postural control of human characters. The talk will end by presenting our ongoing work evaluating how to apply this technique to motion editing.
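The priority mechanism can be illustrated with the classic two-level task-priority scheme: the secondary task is projected into the nullspace of the primary one, so it can never disturb it. This is a generic sketch of that standard technique, not the speaker's linear-cost multi-level formulation; the damping value is an assumed parameter:

```python
import numpy as np

def two_priority_ik(J1, e1, J2, e2, damping=1e-4):
    """Joint velocity dq satisfying task (J1, e1) with priority over (J2, e2)."""
    def dls_pinv(J, lam):
        # Damped least-squares pseudoinverse, robust near singularities.
        return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(J.shape[0]))

    J1p = dls_pinv(J1, damping)
    dq = J1p @ e1                                  # primary task velocity
    N1 = np.eye(J1.shape[1]) - J1p @ J1            # nullspace projector of task 1
    J2n = J2 @ N1                                  # secondary Jacobian, projected
    dq += dls_pinv(J2n, damping) @ (e2 - J2 @ dq)  # secondary task in nullspace
    return dq
```

With orthogonal tasks (e.g. J1 controlling the first joint coordinate and J2 the second), both are satisfied; when they conflict, the projection guarantees the high-priority constraint wins.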

Details

Category

Duration

45 min
Host: WP

Speaker: Raphael Grasset (iMAGIS/GRAVIR, Grenoble, France)

Details

Category

Duration

30 + 10 min
Host: DS

Speaker: Michael Haller (FHS Hagenberg, Austria)

Media Technology and Design (MTD) is a 4-year engineering program focused on technical and creative aspects of digital media. MTD is one of several IT-related programs offered by the College of Engineering at Hagenberg as part of the Upper Austrian University of Applied Sciences (Europe).

The engineering part of the MTD curriculum includes audio/video technology, computing, networking and multi-media programming as core subjects, with a careful balance of basic and selected in-depth material.

The design part of the program starts out with elementary drafting techniques, study of shape and color, art and media history, typography, sound, and music; this is followed by advanced subjects such as multi-media design, audio and video design, 3D modeling and animation, media production and experimental media.

The Media Technology and Design (MTD) curriculum focuses on three broad areas:

  • Multimedia
  • 3D computer graphics and animation
  • Internet/WWW

The talk describes the activities of MTD concerning computer graphics, Virtual Reality, and Augmented Reality. We will give a short overview of the courses and show some student projects. Finally, we will describe the European-funded project AMIRE (Authoring Mixed Reality).

Related links:

Details

Category

Duration

30 + 15 min
Host: DS

Speaker: MARCO ANTONIO GÓMEZ-MARTÍN (Universidad Complutense de Madrid, Spain), PEDRO PABLO GÓMEZ-MARTÍN (Universidad Complutense de Madrid, Spain)

The increase in the number of tourists who visit fragile monuments endangers their preservation. In other cases, an interested person cannot afford the long trip required to get to the place she wants to see. Virtual archeology seems to be the solution to both problems, because the user can see the monument on the computer and walk through it without any kind of contact. Additionally, these virtual reconstructions allow the user to ask for information about the different elements she finds there. We will show three examples of this kind of application, such as the virtual visit to Nefertari's tomb, which we rebuilt using plans and pictures. We will describe some of the computer graphics techniques we have used to develop them. Finally, we will talk about our current research area: the addition of avatars or agents to these environments to guide the visitor.

Speaker: Soeren Grimm (Germany)

Today's real-world visualization systems for medical data are much more than just the visualization. Such systems have a back-end that stores the medical data and reports, and a front-end that assists the user in analyzing and examining the data. The front-end provides means to segment manually, segment automatically, carve, measure, annotate, etc., and to view the data in 2D or 3D. The visualization is a small but very crucial part of such a system, since it is the visual feedback the user gets after performing any operation, and it therefore has to be interactive and of high quality. The 3D view is essentially volume rendering and needs enormous computational power and memory bandwidth to achieve high quality and interactivity. There are several ways to do volume rendering; however, it is still not clear which is the best way to do it in a real-world visualization system. This talk presents four different approaches to volume rendering - based on SIMD, VolumePro, texture mapping, and finally the pure CPU - their underlying volume memory layouts, and their usability in real-world visualization systems.

Keywords: Volume rendering, Ray casting, Texture Mapping, Multithreading, Hyperthreading, OpenGL, VolumePro, Parallel processing.
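To illustrate the "pure CPU" path among the four approaches mentioned, here is a minimal sketch of front-to-back ray casting through a scalar volume. The nearest-neighbour sampling and the toy transfer function (density mapped directly to opacity and emission) are invented for illustration and are not from the presented system:

```python
import numpy as np

def raycast(volume, origin, direction, step=0.5, n_steps=200):
    """Front-to-back alpha compositing along one ray through a scalar volume."""
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                        # unit-length ray direction
    for _ in range(n_steps):
        i, j, k = (int(round(c)) for c in pos)    # nearest-neighbour sample
        if all(0 <= c < s for c, s in zip((i, j, k), volume.shape)):
            density = float(volume[i, j, k])
            a = min(1.0, density * step)          # toy transfer function
            color += (1.0 - alpha) * a * density  # toy emission = density
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                      # early ray termination
                break
        pos += step * d
    return color, alpha
```

The early-ray-termination test is one of the classic CPU-side optimizations: once a ray is nearly opaque, further samples cannot contribute, which is exactly the kind of algorithmic shortcut that hardware approaches like VolumePro implement in fixed function.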

Speaker: Kevin Taylor (Purdue University at Kokomo), Emília Mironovová (Slovak University of Technology)

As we continue to merge global markets, it is inevitable that many of today's graduates will participate in international activities when they enter the workforce. It is imperative that we prepare our students for this global work environment. Described is a project between students in the United States and the Slovak Republic aimed at improving both technical communications and cultural understanding between the two groups. The students in the United States were seniors in a two-semester capstone design sequence in Electrical Engineering Technology (EET) at Purdue University. The Slovak students were Ph.D. candidates from the Faculty of Materials Science at the Slovak University of Technology (SUT) studying Material Science, Plant Management, Automation and Control, and Machine Technologies. The SUT students were enrolled in a course entitled "English for Specific Purposes", allowing all communications to be in English. The students were paired and exchanged biographies, resumes (CVs), and technical works such as design proposals and research abstracts. Internet cameras purchased using funds from a grant facilitated on-line meetings throughout the year-long project. Since the two groups were from different disciplines, clear English communication was a necessity. By reviewing the material written by the SUT students, the EET students became sensitized to the problems caused by their own use of idiomatic phrases and incomplete descriptions. The SUT students benefited by practicing reading, writing, and speaking in English through their correspondence and online meetings with the EET students. Both groups reviewed technical English written by peers, including flaws and idiomatic expressions. The primary advantage of this collaboration is that it is not constrained by curricular discipline, making it easily adaptable by other disciplines. A secondary advantage is that the students gain international experience while avoiding the travel expense.

Details

Category

Duration

45 min
Host: KB

Speaker: Damian Green (Multimedia Research Group, Brunel University)

During an archaeological dig, a great deal of data relating to stratigraphic positioning (SP) is recorded. This data is recorded in a variety of different formats: individual excavation logbooks, stratigraphy forms, and theodolites. The widely used archaeological practice for the analysis and representation of SP is the Harris Matrix approach [Harri89]. This is a valuable technique for analysing and comparing 2D SP data; now, with the advent of cheap and powerful 3D computing, there is a growing need for the archaeologist on site to test hypotheses and gain immediate results. The 3D representation and analysis of this SP data, with the ability to test hypotheses in real time without prolonged sifting through hard copies of excavation logbooks, presents a real innovation for future archaeological interpretation. The ability to replay the excavation in temporal order, stratum by stratum, after it has been completed allows both the casual user and the specialist archaeologist insight previously not possible.

Details

Category

Duration

45 + 15

Speaker: Francisco Rodriguez (Matic Research Laboratory, Department of Computing Science, University of Glasgow)

Details

Category

Duration

45 + 15

Speaker: Stefan Schlechtweg (Otto-von-Guericke-University Magdeburg)

In recent years, non-photorealistic rendering (NPR) has developed into an interesting new branch of computer graphics. The goal of NPR methods is to use computer-graphics techniques to generate a variety of rendering styles that depart from conventional photorealistic imagery. The application areas of non-photorealistic images are very broad, ranging from illustrations in manuals, to aesthetically pleasing graphics and animations, to real-time graphics in games. The talk gives an overview of the most important methods for generating NPR graphics and attempts to point out possible applications.

Speaker: Daniel Thalmann (Swiss Federal Institute of Technology, Lausanne, Switzerland)

Simulation, VR, and entertainment applications (games, films) need Virtual Humans able to move in a flexible and elegant way. They should increasingly react to the other characters and to the user. Moreover, animating crowds is challenging in both character animation and virtual city modeling. The problem is basically to be able to generate variety among a finite set of motion requests and then to apply it to either an individual or a member of a crowd. A single autonomous agent and a member of a crowd present the same kind of 'individuality'. The only difference is at the level of the modules that control the main set of actions.

Biography

Daniel Thalmann is a pioneer in research on Virtual Humans. His current research interests include real-time Virtual Humans in Virtual Reality, Networked Virtual Environments, Artificial Life, and Multimedia. He is coeditor-in-chief of the Journal of Visualization and Computer Animation, and a member of the editorial boards of the Visual Computer, the CADDM Journal (China Engineering Society), and Computer Graphics (Russia). He is cochair of the EUROGRAPHICS Working Group on Computer Simulation and Animation and a member of the Executive Board of the Computer Graphics Society. Daniel Thalmann has been a member of numerous Program Committees, Program Chair of several conferences, and chair of the Computer Graphics International '93, Pacific Graphics '95, ACM VRST '97, and MMM '98 conferences. He is Program Cochair of IEEE VR 2000. He has also organized four courses at SIGGRAPH on human animation. He has published more than 250 papers in graphics, animation, and Virtual Reality. He is coeditor of 25 books, and coauthor of several books including the recent "Avatars in Networked Virtual Environments", published by John Wiley and Sons. He was also codirector of several computer-generated films with synthetic actors, including a synthetic Marilyn shown on numerous TV channels all over the world.

Details

Category

Duration

45 + 15

Speaker: Bernd Froehlich (Bauhaus Univ. Weimar)

In this talk we describe tools and techniques for the exploration of geo-scientific data from the oil and gas domain in stereoscopic virtual environments. The two main sources of data in the exploration task are seismic volumes and multivariate well logs of physical properties down a bore hole. We have developed a props-based interaction device called the Cubic Mouse to allow more direct and intuitive interaction with a cubic seismic volume. The device consists of a cube-shaped box with three perpendicular rods passing through the center and buttons on the top for additional control. The rods represent the X, Y, and Z axes of a given coordinate system. Pushing and pulling the rods specifies constrained motion along the corresponding axes; twisting the rods typically specifies rotations around them. Embedded within the device is a six-degree-of-freedom tracking sensor, which allows the rods to be continually aligned with a coordinate system located in a virtual world. This device effectively places the seismic cube in the user's hand. We have also integrated the device with other visualization systems for crash engineers and flow simulations. In these systems the Cubic Mouse controls the position and orientation of a virtual model, and the rods move three orthogonal cutting or slicing planes through the model. We have also developed a 3D-texture-based multi-resolution approach for handling the massive volumetric data sets common in the oil and gas industry and the medical domain, and which also result from computer simulations. Due to the restricted texture memory available and the limited bandwidth into the texture memory, these data sets cannot be rendered at full resolution. Our approach uses a two-level hierarchical paging technique to guarantee a given frame rate. This technique displays lower resolutions of the data when a slice or volume rendering is moved quickly through the data set, and fills in the high resolution when the user slows down or stops.
This behaviour corresponds well to motion blur.

Details

Category

Duration

60 min

Speaker: Torsten Möller (Simon Fraser University, Vancouver, Canada)

Volume rendering is a subfield of graphics that deals with the exploration, communication, and presentation of medical or scientific data. The presentation on a computer screen reduces the 3D nature of the data by one dimension. The 3D understanding of these data sets can be enhanced using so-called motion parallax, i.e. real-time interaction with the 2D display. Hence real-time rendering algorithms are crucial for the visualization of complex volumetric data.

In this talk I will survey typical volume rendering techniques and the current status of such algorithms. I will include the premise of high-quality visualization, since for many applications, especially medical ones, the reliability of the representation plays an important role. I will survey current software and hardware developments. In particular, I will talk about several results for the improvement of splatting - one specific volume rendering method. I will argue that splatting is one of the most promising volume rendering algorithms, since it can achieve high frame rates as well as high-quality images. I hope that some of the conclusions I have to offer will stimulate debate.

Details

Category

Duration

45 + 15

Speaker: Klaus Pirklbauer (Software Competence Center Hagenberg), Werner Winiwarter (Software Competence Center Hagenberg)

This talk offers an overview of the research activities at the Software Competence Center Hagenberg (SCCH). After a general presentation of the SCCH, we provide a summary of its main research topics. In the second part of the talk we focus on current projects in the knowledge-based area of the SCCH. Finally, we describe multilevel data mining methods in detail and present results for their application to image segmentation.

Details

Category

Duration

60 min
Host: VRVis

Speaker: Joaquim Jorge (Instituto Superior Técnico, Lisboa, Portugal)

Details

Category

Duration

60 min
Host: Meister

Speaker: Attila Neumann (Budapest, Hungary)

Details

Category

Duration

90 min
Host: WP

Speaker: Stefan Krass (MeVis, University Bremen, Germany)

Bronchial carcinoma is the cancer with the highest mortality rate. Diagnosis and planning of the surgical therapy require an exact localization of the tumor and a prognosis of the postoperative lung function that is as accurate as possible. The preoperative determination and visualization of the lung lobe segments would make an important contribution to meeting these morphological and functional requirements. The talk describes a method for segment determination based on computed tomography (CT) data.

The bronchial tree is segmented by a special region-growing procedure. After skeletonizing the segmentation result and converting the skeleton into a graph representation, the subtrees of the lung lobes and the lobe segments can be identified. An algorithm based on growth models approximates the boundaries of the lobe segments from the identified subtrees and the likewise segmented parenchyma boundary.

In a feasibility study, the method was applied to clinical single-slice and multi-slice spiral CTs. Validation was performed in vitro on preparations of the human lung.

In clinical CT data, a reliable segmentation of the bronchial tree was possible up to the 3rd order (single-slice CT) and up to the 5th order (multi-slice CT). The validation yielded an accuracy of the segment approximation of 70% (single-slice CT) and 80% (multi-slice CT).

The presented method improves the assessment of the spatial relation between tumors and segments. Furthermore, an improved estimation of the postoperative lung function can be expected.

Details

Category

Duration

60 min
Host: MEG

Speaker: Keith Andrews (IICM, Graz University of Technology, Austria)

Information visualisation seeks to take advantage of the human visual perception system's ability to rapidly process graphical displays, making the presented information and its associated structure both rapidly understood and easily explored. This talk will look both at general principles for information visualisation and at specific examples of techniques under development at the IICM.

Biography

Keith Andrews is an assistant professor at the Institute for Information Processing and Computer Supported New Media (IICM) at Graz University of Technology, in Austria. His research interests include hypermedia, human-computer interaction, computer graphics, and the web. He holds a B.Sc.(Hons) in Mathematics and Computer Science from the University of York, England, and an M.Sc. and Ph.D. in Technical Mathematics/Computer Science from Graz University of Technology. Having led the Harmony (Unix/X11 browser for Hyperwave) and VRwave VRML browser projects for several years, he is currently pursuing research in the emerging field of information visualisation. He teaches a graduate-level course on Human-Computer Interaction.

Details

Category

Duration

45 min + questions
Host: MEG

Speaker: Prof. Dr. Christian Breiteneder (Interactive and Multimedia Systems Group, Institute of Software Engineering, Vienna University of Technology)

Details

Category

Duration

30 min
Host: DS