
Networked Volume Visualization

Figure 2.3: Client/server visualization pipeline with network

After the appearance of the first easy-to-use web browsers, the importance of networks in general, and of the Internet in particular, for providing visualization to remote users was soon realized. Considering the visualization pipeline [21] (figure 2.3) as a general model for visualization, three major approaches to including the network can be distinguished [25,61]:

2D visualization publishing: the whole pipeline, including rendering, runs on the server; only the resulting images are transmitted to the client.

3D visualization publishing: filtering and mapping run on the server; the resulting geometry is transmitted to and rendered on the client.

Visualization software publishing: the raw data is transmitted and the whole pipeline runs on the client.

The 2D and 3D visualization publishing schemes are also called thin client solutions, while visualization software publishing is also known as a fat client solution, reflecting the demands put on the user's workstation [25].
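The difference between the three schemes is where the pipeline is cut and, consequently, what has to cross the network. A minimal sketch of this partitioning (the stage names follow the usual filtering/mapping/rendering pipeline; the scheme labels are those used above, and the helper function is purely illustrative):

```python
# Sketch: which pipeline stages run where, per publishing scheme,
# and what consequently has to be transmitted over the network.
PIPELINE = ["filtering", "mapping", "rendering"]

SCHEMES = {
    # thin client: the server does everything, only images travel
    "2D publishing": {"server": ["filtering", "mapping", "rendering"], "client": []},
    # thin client: geometry travels, the client renders it
    "3D publishing": {"server": ["filtering", "mapping"], "client": ["rendering"]},
    # fat client: the raw data travels, the client runs the whole pipeline
    "software publishing": {"server": [], "client": PIPELINE},
}

def transmitted(scheme: str) -> str:
    """What crosses the network for a given scheme."""
    server_stages = SCHEMES[scheme]["server"]
    if server_stages == PIPELINE:
        return "images"
    if server_stages == ["filtering", "mapping"]:
        return "geometry"
    return "raw data"

for name in SCHEMES:
    print(name, "->", transmitted(name))
```

This also makes the bandwidth/workstation trade-off explicit: the further to the right the pipeline is cut, the less work the client does, but the less the client can do without another round trip to the server.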

A problem common to all three approaches is the difficulty of letting the user explore the data interactively without requiring high-performance clients and networks. As interactivity is a major factor in the efficiency of data exploration [16], ways have to be found to facilitate it in Internet-based visualization tools.

As volume visualization is a memory- and computationally intensive task, fat client approaches were not considered in the first attempts at volume visualization over large-scale networks [1]. An additional problem is the heterogeneous nature of clients, which makes deploying portable client software difficult. After the appearance of Java and its integration into web browsers, it was soon shown that the new technology could provide fat client solutions even for volume visualization [37].

If the visualization is restricted to (iso-)surfaces, which can be represented by a polygonal model, the surface extraction can be performed on the server. Rendering, and thus also the response to changes in viewing parameters, is performed on the client. Only after changes to parameters that influence the geometry of the surface does a new polygonal model have to be retransmitted from the server. Again, the major problem of this approach is the size of the polygonal model that has to be transmitted and rendered - typically several hundred thousand triangles. To overcome this problem, Engel and others [15] place the data set on a server and use progressive transmission and progressive refinement to allow interactive surface extraction and viewing. They also presented an approach for providing direct volume rendering (DVR) on low-end clients [14]. First, a small, subsampled version of the data set is transmitted to the client. During interactions which influence the rendered image, the local copy of the data is rendered using the texture-mapping capabilities of consumer 3D hardware. Once the interaction is finished, a high-quality rendering of the full-resolution data set is computed on the server and transmitted to the client. Although these approaches work well for a limited number of users sharing the same server, they cannot be applied if an interactive visualization is published to a large group of viewers, for example over the Internet.
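The two-level preview scheme described above can be sketched as follows. This is a minimal illustration, not the original implementation: the class and method names, the subsampling factor, and the strided subsampling are all assumptions made for the sketch.

```python
import numpy as np

def subsample(volume: np.ndarray, factor: int = 4) -> np.ndarray:
    """Crude subsampling by striding; a real system would low-pass filter first."""
    return volume[::factor, ::factor, ::factor]

class PreviewClient:
    """Illustrative two-level scheme: render a small local copy of the data
    while the user interacts; request a full-resolution server rendering
    once the interaction is finished."""

    def __init__(self, full_volume: np.ndarray):
        self.preview = subsample(full_volume)  # small copy, transmitted once
        self.interacting = False

    def render(self, server_render) -> str:
        if self.interacting:
            # during interaction: e.g. texture-mapped rendering of the
            # local low-resolution copy on consumer 3D hardware
            return f"client preview {self.preview.shape}"
        # interaction finished: high-quality image computed on the server
        return server_render()

vol = np.zeros((64, 64, 64), dtype=np.uint8)
client = PreviewClient(vol)
client.interacting = True
print(client.render(lambda: "server full-res image"))  # client preview (16, 16, 16)
client.interacting = False
print(client.render(lambda: "server full-res image"))  # server full-res image
```

The sketch makes the cost structure visible: the subsampled copy crosses the network once, while every finished interaction still costs one server round trip for the full-quality image - which is why the scheme scales poorly to many viewers sharing one server.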

An approach better suited for ``public'' distribution of visualization results has been presented by Höhne and others [54]. A multi-dimensional array of images is rendered and stored using an extended QuickTime VR format. The viewer can browse through different views of the data, imitating, for example, an interactive rotation, dissection, or segmentation. Additional object label data allows the selection of objects and can be used to retrieve further information on the selected object. While this approach provides high-quality images on low-end hardware, user interaction is restricted to browsing between pre-computed views. Furthermore, the size of even small-scale movies already becomes a limiting factor for viewing over low-bandwidth networks.
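The view-array idea can be sketched as a simple lookup table indexed by discretized interaction parameters. The sampling of the parameter space, the parameter names, and the stand-in strings for images are assumptions of this sketch, not details of the cited work:

```python
# Sketch: browsing a 2-D array of pre-rendered views (rotation x dissection
# depth), imitating interaction as in the QuickTime VR-style approach above.
# View "images" are stand-in strings; a real viewer would store image data.
N_ROT, N_CUT = 36, 10  # assumed sampling of the parameter space

views = {(r, c): f"view(rot={r * 10}deg, cut={c})"
         for r in range(N_ROT) for c in range(N_CUT)}

def browse(rot_step: int, cut_step: int) -> str:
    """Look up the pre-computed view nearest to the requested parameters;
    interaction is limited to these sampled positions."""
    return views[(rot_step % N_ROT, max(0, min(cut_step, N_CUT - 1)))]

print(browse(3, 2))  # view(rot=30deg, cut=2)
```

The sketch also shows why the data size grows quickly: the number of stored images is the product of the sampling rates of all interaction dimensions, which is what makes even small-scale movies heavy over low-bandwidth networks.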

The approaches for volume rendering and transmission presented in this work can be used to implement a scenario located between the two methods discussed above. The amount of data that actually has to be transmitted to the client for visualization is very low - about the size of several images - especially in comparison to the QuickTime VR approach. Using the presented rendering algorithms, the viewer is not restricted to pre-computed views and has full control over the visualization parameters. The only restriction is that just those parts of the volume which have been classified as relevant and pre-selected for presentation and transmission can be rendered. When used in a distributed client-server scenario, the software-only rendering approach provides much more flexibility in terms of rendering parameters than volume previewing using texture-mapping hardware, at comparable or even lower bandwidth costs.


Lukas Mroz, May 2001,
mailto:mroz@cg.tuwien.ac.at.