One major challenge of visualization in general is dealing with large amounts of data. In volume visualization especially, common data sets range from several hundred kilobytes up to gigabytes of uncompressed data. In medical visualization, for example, volumetric data sets of 32 MBytes in total are quite usual. If standard compression such as gzip [18] is applied, data sets usually shrink to about 30-60 percent of the original size, which is still in the range of megabytes.
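The sizes quoted above can be checked with simple arithmetic. The sketch below uses a hypothetical 256^3 volume at 16 bit per voxel as an example; the concrete resolution and bit depth are illustrative assumptions, not taken from the text.

```python
def volume_size_bytes(dim_x, dim_y, dim_z, bits_per_voxel):
    """Uncompressed size of a regular volumetric data set in bytes."""
    return dim_x * dim_y * dim_z * bits_per_voxel // 8

# Hypothetical example: a 256^3 volume at 16 bit per voxel.
raw = volume_size_bytes(256, 256, 256, 16)
print(raw / 2**20, "MB uncompressed")  # -> 32.0 MB uncompressed

# Lossless compression (gzip-like) typically leaves 30-60 % of the size:
for fraction in (0.3, 0.6):
    print(f"{raw * fraction / 2**20:.1f} MB after lossless compression")
```

Even at the optimistic 30 % figure, roughly 10 MB would still have to be stored or transmitted per data set.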
Processing huge data sets itself places high performance requirements on the visualization software, but storage and transmission of volumetric data sets also quickly lead to bandwidth problems, especially if multiple data sets are to be handled. From medical applications, for example, we know that archiving the 3D data sets that accompany diagnosis data significantly stresses the storage devices currently available in common clinical setups.
Visualization over the Internet is even more critical with respect to both the size of volumetric data sets and the resulting storage problems. Web applications like remote diagnosis suffer from low transmission rates, even over local area networks (LANs).
The applicability of the more flexible fat-client solution to volume visualization strongly depends on the effectiveness of the compression techniques used for transmission of the data set. Lossless compression techniques - for general data [18] as well as specifically for volumetric data [17] - usually achieve rather low compression ratios (around 2), which is not sufficient to significantly widen the bandwidth bottleneck. Using lossy compression [7,32,46], reduction ratios in the range of 5:1 to 50:1 can be achieved while maintaining acceptable quality of the visualization results. On the other hand, medical applications typically prohibit changes to the accuracy of the data, as induced by lossy compression methods. Hierarchical methods like wavelet compression [32] combine advantages of lossy and lossless compression. By transmitting and considering just a small fraction of the coefficients (around 5%), images of acceptable quality can be generated. At the same time, the data values of the original volume can be reconstructed exactly if all coefficients are considered. A useful property of wavelet compression and many lossy compression techniques is the ability to render compressed data directly, without prior expansion and decompression.
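The hierarchical idea can be illustrated with a simple 1D Haar wavelet transform. This is only a minimal stand-in for the actual 3D wavelet codec of [32], which is not specified here; the function names and the test signal are illustrative. Keeping only the largest 5% of the coefficients yields a lossy approximation, while keeping all coefficients reconstructs the original exactly.

```python
import numpy as np

def haar_forward(signal):
    """Full multi-level in-place Haar transform of a length-2^k signal."""
    coeffs = signal.astype(float)
    n = len(coeffs)
    while n > 1:
        half = n // 2
        avg  = (coeffs[0:n:2] + coeffs[1:n:2]) / 2.0   # low-pass part
        diff = (coeffs[0:n:2] - coeffs[1:n:2]) / 2.0   # detail part
        coeffs[:half], coeffs[half:n] = avg, diff
        n = half
    return coeffs

def haar_inverse(coeffs):
    """Invert haar_forward, level by level."""
    out = coeffs.copy()
    n, total = 1, len(out)
    while n < total:
        avg, diff = out[:n].copy(), out[n:2*n].copy()
        out[0:2*n:2] = avg + diff
        out[1:2*n:2] = avg - diff
        n *= 2
    return out

data = np.arange(256, dtype=float)   # stand-in for one scanline of a volume
c = haar_forward(data)

# Lossy preview: zero all but the 5 % largest-magnitude coefficients.
k = max(1, int(0.05 * len(c)))
c_lossy = c.copy()
c_lossy[np.argsort(np.abs(c))[:-k]] = 0.0
approx = haar_inverse(c_lossy)       # coarse but recognizable approximation

# Lossless case: all coefficients reconstruct the original exactly.
exact = haar_inverse(c)
assert np.allclose(exact, data)
```

The same split into a small set of significant coefficients plus refinements is what allows progressive transmission: a client can render a preview from the first few percent and refine it as more coefficients arrive.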
Polygonal representations of structures within the volume (e.g., of iso-surfaces) can be used to realize solutions that are a compromise between a pure thin- and fat-client approach. The volume is kept at the server, and only the polygonal surface model is transmitted and rendered at the client. Changes of viewing parameters require local rendering only; only changes affecting the shape of the model require a recomputation at the server and transmission of surface data over the network. To reduce the bandwidth required to transmit the model and to improve the interactivity of rendering at low-end clients, progressive refinement as well as focus-and-context techniques can be used [15], trading quality of representation (in less relevant regions of the volume) for speed.
Pure thin-client solutions, on the other hand, allow visualization to be performed on low-end clients while making shared use of special-purpose hardware at the server (multiple CPUs and/or a VolumePro board [48], for example).
One approach to assessing the effectiveness of compression techniques for volumetric data sets, and their suitability for Internet-based visualization, is to compare the size of compressed volumes with the size of images rendered from the same data. This comparison is useful because it directly corresponds to the trade-off between thin- and fat-client solutions. If the sizes of compressed volume data sets are in the same range as the sizes of images rendered from them, and provided the client offers sufficient computational performance to carry out most of the visualization steps itself, then fat-client solutions become feasible even via the Internet.
The proposed technique [40] achieves compression rates such that, for a given data set and rendered images (24 bits per pixel) in compressed GIF format, about 2-5 images already exceed the size of the compressed volume data set.
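This break-even point can be sketched with a small calculation. The sizes below are purely illustrative assumptions; the concrete data-set and image sizes measured in [40] are not reproduced here.

```python
def break_even_images(compressed_volume_bytes, image_bytes):
    """Smallest number of images whose combined size exceeds the volume,
    i.e. the ceiling of compressed_volume_bytes / image_bytes."""
    return -(-compressed_volume_bytes // image_bytes)

# Hypothetical example: a 1 MB compressed volume vs. 300 KB GIF images.
n = break_even_images(1_000_000, 300_000)
print(n)  # -> 4: after four images, sending the volume once is cheaper
```

Once a session is expected to produce more than this many distinct views, transmitting the compressed volume once (fat client) beats streaming individual rendered images (thin client) in total bandwidth.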