After the appearance of the first easy-to-use web browsers, the importance of networks in general, and of the Internet in particular, for providing visualization to remote users was soon realized. Considering the visualization pipeline [21] (figure 2.3) as a general model for visualization, three major approaches for incorporating the network can be distinguished [25,61]:
In this case, all computations are performed at a visualization server. Visualization parameters are set by the user within an HTML page or in an applet and sent to the server, which recomputes the visualization and sends the resulting image to the client for display. This implies that even the smallest change of the parameters, such as a change of the viewpoint, which might affect only a few stages of the visualization pipeline, requires computational resources at the server and leads to the retransmission of an entire visualization image over the network. For this reason, the approach is in most cases not suited for interactive data exploration, except when special rendering hardware at the server is to be exploited [65] and sufficiently fast network connections are available.
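The round-trip cost of this scheme can be illustrated with a minimal sketch. All names (`server_render`, `client_request`, the parameter dictionary) are purely illustrative; a real system would use HTTP and an actual rendering back end:

```python
def server_render(params):
    """Stand-in for the full server-side pipeline (filtering,
    mapping, rendering); returns a complete image for the given
    parameters, one byte per pixel."""
    width, height = params["resolution"]
    pixel = (params["viewpoint"][0] * 31) % 256  # dummy rendering
    return bytes([pixel]) * (width * height)

def client_request(params):
    """Every parameter change, however small, triggers a full
    server-side re-render and retransmission of the whole image;
    the client only displays, it never computes."""
    return server_render(params)

img_a = client_request({"resolution": (64, 64), "viewpoint": (0, 0, 1)})
img_b = client_request({"resolution": (64, 64), "viewpoint": (1, 0, 1)})
# A mere viewpoint change retransmits all 64*64 pixels again:
assert len(img_a) == len(img_b) == 64 * 64
```

The sketch makes the bandwidth behaviour explicit: the traffic per interaction is proportional to the image size, independent of how small the parameter change was.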
As in the previous approach, the largest part of the visualization is performed on a server [31,56,61]. Instead of sending images to the client, an intermediate representation of the visualization is created and transmitted, usually a polygonal representation of objects within the visualized data. As rendering of the model is performed at the client, a more efficient and interactive response to changes of viewing parameters can be achieved. In the case of volume visualization, this approach can be applied to visualize iso-surfaces generated using the marching cubes algorithm or a similar approach. Rendering geometry at the client raises two main problems: First, the memory and computational resources available at the client limit the complexity of the visualization and strongly influence the ability to work interactively. Second, interactions which lead to changes at deeper stages of the visualization pipeline, which are executed at the server, impose delays and require sufficient computational resources at the server.
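The division of labour in this scheme can be sketched as follows. The classes and the dummy "one triangle per voxel" extraction are assumptions made for illustration only; a real server would run marching cubes on the volume:

```python
class Server:
    """Holds the volume and extracts polygonal iso-surfaces."""
    def __init__(self, volume):
        self.volume = volume
        self.extractions = 0  # counts server-side work / transmissions

    def extract_isosurface(self, iso_value):
        self.extractions += 1
        # Stand-in for marching cubes: one dummy triangle per
        # voxel whose value exceeds the iso-value.
        return [(v,) * 3 for v in self.volume if v > iso_value]

class Client:
    """Renders the received mesh locally."""
    def __init__(self, server):
        self.server = server
        self.mesh = None

    def set_iso_value(self, iso_value):
        # Deep-pipeline change: needs the server plus a retransmission.
        self.mesh = self.server.extract_isosurface(iso_value)

    def rotate(self, angle):
        # Viewing change: handled entirely by local rendering.
        return len(self.mesh)  # stand-in for drawing the triangles

server = Server(volume=[10, 120, 200, 250])
client = Client(server)
client.set_iso_value(100)    # one extraction, one transmission
for angle in range(10):
    client.rotate(angle)     # ten redraws, zero server requests
assert server.extractions == 1
```

Viewpoint changes cost no network traffic at all, while a change of the iso-value, a stage deeper in the pipeline, forces a new extraction and a new transmission of the whole mesh.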
The entire software needed to create and display the visualization is transmitted to the client in advance and runs, e.g., as a Java applet within a web browser. Publishing visualization software greatly reduces the demands put on the server and has the potential for more interactive data exploration by tightening the loop between the user and the visualization system [37,62,63,64]. This approach also allows the creation of easy-to-use visualization software which can reach large user communities and run on various platforms without the need for local installation. The main drawback of this solution with respect to volume visualization is the huge amount of ``raw'' volume data which has to be transmitted to and processed at the client.
The 2D and 3D visualization publishing schemes are also called thin client solutions, while visualization software publishing is also known as a fat client solution, reflecting the demands put on the user's workstation [25].
A problem common to all three approaches above is the difficulty of enabling the user to explore the data interactively without requiring high-performance clients and networks. As interactivity is a major factor in the efficiency of data exploration [16], ways have to be found to facilitate it in Internet-based visualization tools.
As volume visualization is considered a memory- and computationally intensive task, fat client solutions were not considered in the first approaches to volume visualization over large-scale networks [1]. An additional problem is the heterogeneous nature of clients, which makes the deployment of portable client software difficult. After the appearance of Java and its integration into web browsers, it was soon shown that the new technology could be used to provide fat client solutions even for volume visualization [37].
If the visualization is restricted to (iso-)surfaces, which can be represented using a polygonal model, the surface extraction can be performed at the server. Rendering, and thus also the response to changes in viewing parameters, is performed at the client. Only after changes of parameters which influence the geometry of the surface does a new polygonal model have to be retransmitted from the server. Again, the major problem of this approach is the size of the polygonal model which has to be transmitted and rendered, typically several hundred thousand triangles. To overcome this problem, Engel and others [15] place the data set on a server and use progressive transmission and progressive refinement to allow interactive surface extraction and viewing. They also presented an approach for providing direct volume rendering (DVR) at low-end clients [14]. First, a small, subsampled version of the data set is transmitted to the client. During interactions which influence the rendered image, the local copy of the data is rendered using the texture-mapping capabilities of consumer 3D hardware. After the interaction is finished, a high-quality rendering of the full-resolution data set is computed on the server and transmitted to the client. Although these approaches work well for a limited number of users who share the same server, they cannot be applied if an interactive visualization is published to a large group of viewers, for example, over the Internet.
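The subsampling step underlying such a preview scheme can be sketched in a few lines. The function name and the use of plain nested lists are assumptions for illustration; the cited work uses real volume data and hardware texture mapping:

```python
def subsample(volume, step):
    """Keep every step-th voxel along each axis of a nested-list
    volume, producing the small preview copy that is sent to the
    client for interactive (but low-quality) local rendering."""
    return [[[volume[z][y][x]
              for x in range(0, len(volume[0][0]), step)]
             for y in range(0, len(volume[0]), step)]
            for z in range(0, len(volume), step)]

# An 8x8x8 toy volume shrinks to 4x4x4, i.e. to 1/8 of the voxels;
# the full-resolution data stays on the server and is rendered there
# only once the interaction has finished.
vol = [[[x + y + z for x in range(8)] for y in range(8)]
       for z in range(8)]
preview = subsample(vol, 2)
assert len(preview) == len(preview[0]) == len(preview[0][0]) == 4
```

With a subsampling step of 2 per axis, both the transmission cost and the client-side memory footprint drop by a factor of 8, which is what makes interactive previewing on low-end clients feasible.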
An approach which is better suited for ``public'' distribution of visualization results has been presented by Höhne and others [54]. A multi-dimensional array of images is rendered and stored using an extended Quicktime VR format. The viewer can browse through different views of the data, imitating, for example, an interactive rotation, dissection, or segmentation. Additional object label data allows the selection of objects and can be used to retrieve further information on the selected object. While this approach provides high-quality images on low-end hardware, user interaction is restricted to browsing between pre-computed views, a mechanism that remains ``hidden'' from the user. Furthermore, the size of even small-scale movies already becomes a limiting factor for viewing over low-bandwidth networks.
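The trade-off of this scheme, cheap lookup but multiplicative storage, can be made concrete with a small sketch. The sampling steps and the dictionary representation are assumptions for illustration, not the actual Quicktime VR layout:

```python
# Pre-render one image per sampled (yaw, pitch) combination,
# e.g. 10-degree steps; placeholder strings stand in for images.
views = {(yaw, pitch): f"img_{yaw}_{pitch}"
         for yaw in range(0, 360, 10)      # 36 yaw samples
         for pitch in range(-90, 91, 10)}  # 19 pitch samples

# "Interaction" is a mere table lookup, fast on any client:
assert views[(30, -20)] == "img_30_-20"

# But the stored movie grows with every sampled parameter dimension
# (here already 36 * 19 = 684 images for rotation alone):
assert len(views) == 36 * 19
```

Each additional interaction dimension (dissection depth, segmentation state, and so on) multiplies the number of stored images, which explains why even small-scale movies quickly become too large for low-bandwidth networks.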
The approaches for volume rendering and transmission presented in this work can be used to implement a scenario located between the two methods discussed above. The amount of data which actually has to be transmitted to the client for visualization is very low (about the size of several images), especially in comparison to the Quicktime VR approach. Using the presented rendering algorithms, the viewer is not restricted to pre-computed views and has full control over the visualization parameters. The only restriction is that only those parts of the volume which have been classified as relevant and pre-selected for presentation and transmission can be rendered. When used in a distributed client-server scenario, the software-only rendering approach provides much more flexibility in terms of rendering parameters than volume previewing using texture-mapping hardware, at comparable or even lower costs in terms of bandwidth requirements.