
The Basic Idea

The effectiveness of the presented approach is based on the observation that for the vast majority of applications, especially in medical visualization, volumetric data is rendered by displaying either iso-surfaces [33] or surface-like structures defined by areas of high gradient magnitude [29]. In both cases, the result of the visualization is determined by the contributions of just a small fraction of all data samples. By encoding only those voxels of an object which actually contribute to its visual appearance, i.e., the voxels stored within the RenderList, the size of the data set is greatly reduced. Thereby, a small-scale boundary representation of volumetric objects is generated (figure 5.1, Sect. 5.2). Compression of the RenderList representation, which exploits spatial coherence among neighboring voxels, produces a very compact object representation (Sect. 5.3) which is well suited for network transmission (Sect. 5.4). The information contained within this representation allows interactive rendering at a client without any dependency on hardware support, and with more flexibility regarding visualization parameters than polygonal surface representations (a demonstration applet is available from
http://bandviz.cg.tuwien.ac.at/basinviz/compression/).

The first step towards an efficient representation of bounded objects within a volumetric data set is the identification and extraction of those voxels which contribute to the object's visual representation, i.e., the boundary of the object. This is performed in a preprocessing step using one of the techniques described in chapter 4. The best compression results are obtained if object surfaces (iso-surfaces in general) are extracted and stored into RenderLists. Usually just 5-10% of all voxels belong to the boundary representation.
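As an illustration of this extraction step, the following sketch collects iso-surface boundary voxels into per-slice lists. It assumes a plain scalar volume and treats a voxel as a boundary voxel if it reaches the iso-value while at least one of its 6-neighbors does not; the names (Volume, Voxel, RenderList, extractBoundary) are illustrative assumptions, not the implementation described in chapter 4.

\begin{verbatim}
// Sketch: collect iso-surface boundary voxels into per-slice lists.
// A voxel is kept if it lies on the iso-value side of the threshold
// while at least one of its 6-neighbours does not.
#include <cstdint>
#include <vector>

struct Voxel { uint16_t x, y; };            // position within a slice
using Slice      = std::vector<Voxel>;
using RenderList = std::vector<Slice>;      // one entry per z slice

struct Volume {
    int nx, ny, nz;
    std::vector<float> v;                   // nx*ny*nz samples
    float at(int x, int y, int z) const { return v[(z * ny + y) * nx + x]; }
};

RenderList extractBoundary(const Volume& vol, float iso) {
    RenderList rl(vol.nz);
    for (int z = 1; z + 1 < vol.nz; ++z)
        for (int y = 1; y + 1 < vol.ny; ++y)
            for (int x = 1; x + 1 < vol.nx; ++x) {
                if (vol.at(x, y, z) < iso) continue;        // outside the object
                // keep the voxel only if the iso-surface passes nearby,
                // i.e. some 6-neighbour lies on the other side of iso
                bool onBoundary =
                    vol.at(x - 1, y, z) < iso || vol.at(x + 1, y, z) < iso ||
                    vol.at(x, y - 1, z) < iso || vol.at(x, y + 1, z) < iso ||
                    vol.at(x, y, z - 1) < iso || vol.at(x, y, z + 1) < iso;
                if (onBoundary)
                    rl[z].push_back({uint16_t(x), uint16_t(y)});
            }
    return rl;
}
\end{verbatim}

For typical data sets such a test keeps only the small fraction of voxels mentioned above, and the per-slice grouping directly matches the slice-ordered RenderList layout used in the next step.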

Within the RenderList, voxels are grouped into slices sharing the same $z$ coordinate (see figure 5.1). Within a slice, the boundary voxels form contours of the object - a set of connected sequences of voxels. Exploiting spatial coherence of the contour, the positions of voxels within the slice are efficiently encoded into a compressed data stream. Voxel gradients are compressed in the same order as the corresponding positions, using a special compression scheme. Additional streams of voxel attributes (= data channels), like data value, gradient magnitude, etc., can be optionally encoded in a similar way. The output of the compression step is a boundary representation of volumetric objects, typically compressed by a factor of 10-100 compared to the original volume.
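The following sketch illustrates the kind of spatial coherence being exploited, using a simplified delta scheme rather than the contour-based encoding described in Sect. 5.3: within a slice, each voxel position is stored as a small offset to its predecessor, so coherent runs of neighboring boundary voxels cost roughly one byte per voxel. All names are illustrative assumptions.

\begin{verbatim}
// Simplified illustration of position compression within one slice:
// small (dx, dy) deltas to the previous voxel fit into a single byte,
// with an escape code for the occasional large jump between contours.
#include <cstdint>
#include <vector>

struct Voxel { uint16_t x, y; };

std::vector<uint8_t> encodeSlicePositions(const std::vector<Voxel>& slice) {
    std::vector<uint8_t> out;
    int px = 0, py = 0;
    for (const Voxel& v : slice) {
        int dx = v.x - px, dy = v.y - py;
        bool small = dx >= -8 && dx <= 7 && dy >= -8 && dy <= 7;
        if (small && !(dx == 7 && dy == 7)) {
            // short form: two 4-bit deltas (biased by +8) packed into one byte
            out.push_back(uint8_t(((dx + 8) << 4) | (dy + 8)));
        } else {
            // long form: escape byte 0xFF followed by absolute 16-bit x and y
            out.push_back(0xFF);
            out.push_back(uint8_t(v.x >> 8)); out.push_back(uint8_t(v.x));
            out.push_back(uint8_t(v.y >> 8)); out.push_back(uint8_t(v.y));
        }
        px = v.x; py = v.y;
    }
    return out;
}
\end{verbatim}

A decoder simply reverses the process: it reads one byte, and if it equals the escape marker it reads four more bytes as an absolute position, otherwise it unpacks the two deltas. Gradients and further data channels can be appended in the same voxel order, as described above.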

By transmitting the data channels in a specific order, for example, position data first and gradients last, a preview of the objects with full spatial accuracy can be displayed (figure 5.4) after transmitting just a few kilobytes of data (using estimated gradients for shading).
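One possible way to realize such an ordering is sketched below under the assumption of a simple tagged-block stream layout, which is not the actual format used in Sect. 5.4: each compressed channel is written as a tag, a length, and a payload, with positions first, so the client can display a preview as soon as the first block has arrived.

\begin{verbatim}
// Sketch of a hypothetical channel-ordered stream layout: each compressed
// channel is written as a tagged block, positions first.
#include <cstdint>
#include <ostream>
#include <vector>

enum ChannelTag : uint8_t { POSITIONS = 0, VALUES = 1, GRADIENTS = 2 };

void writeChannel(std::ostream& out, ChannelTag tag,
                  const std::vector<uint8_t>& compressed) {
    uint32_t size = uint32_t(compressed.size());
    out.put(char(tag));
    // payload length (host byte order; sufficient for a sketch)
    out.write(reinterpret_cast<const char*>(&size), sizeof(size));
    out.write(reinterpret_cast<const char*>(compressed.data()), size);
}

// The transmission order determines what the client can show first:
//   writeChannel(net, POSITIONS, posStream);   // enough for a spatial preview
//   writeChannel(net, VALUES,    valStream);   // optional data channels next
//   writeChannel(net, GRADIENTS, gradStream);  // full-quality shading last
\end{verbatim}

Since each block announces its own length, the client knows exactly when the position channel is complete and can start rendering the preview while the remaining channels are still in transit.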

The decompressed boundary representation can be directly converted to a RenderList and rendered. Compared with a polygonal representation of the boundary surfaces, this approach preserves the full accuracy of the data set at much lower memory cost, allows interactive rendering on low-end hardware and provides more flexibility with respect to rendering parameters. Transparency, non-photorealistic shading, and the fusion with truly volumetric objects are easily possible without performance degradation (see figure 5.2 for examples).
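The per-voxel flexibility mentioned above can be illustrated by a few client-side shading helpers. The sketch below assumes unit-length gradients and an auxiliary data channel normalized to [0,1]; the names are hypothetical and not taken from the actual renderer.

\begin{verbatim}
// Sketch: per-voxel shading choices made entirely at the client, using the
// decompressed position, gradient, and optional data channels.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Phong-style diffuse term from the decompressed voxel gradient (= normal);
// 'n' and 'light' are assumed to be unit vectors.
inline float diffuse(Vec3 n, Vec3 light) {
    return std::max(0.0f, dot(n, light));
}

// Simple non-photorealistic alternative: darken silhouettes, i.e. voxels
// whose normal is nearly perpendicular to the viewing direction.
inline float silhouette(Vec3 n, Vec3 view) {
    return std::fabs(dot(n, view));     // ~0 at silhouettes, 1 when facing
}

// Opacity modulated by an auxiliary data channel (e.g. distance information,
// as in figure 5.2b); 'channelValue' is assumed to lie in [0,1].
inline float modulatedOpacity(float baseAlpha, float channelValue) {
    return baseAlpha * (1.0f - channelValue);
}
\end{verbatim}

Switching between the diffuse and silhouette terms, or changing which data channel modulates opacity, requires no retransmission, since all attributes are already present in the decompressed RenderList.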

Figure 5.2: a) By adjusting the visualization mappings at the client, the skin surface has been rendered with a non-photorealistic technique on top of the Phong-shaded skull. b) A data channel containing distance information has been used to modulate the opacity of the basin surface, emphasizing areas of almost-contact between the surface and the attractor contained within.
[Image panels: (a) Figures/nprvr-c.ps, (b) Figures/contact.ps]

