
Volume Rendering

In the following, single data samples within the volume, given at well-defined positions in three-space, will be referred to as voxels. Each voxel has a position ($x, y, z$) and one or more scalar or vector attributes, such as density, pressure, or gradient. A scalar attribute will be referred to as a data value. A cell is built up from a set of neighboring voxels which are located at the cell's vertices. When a volume is considered as being built up from cells, attribute values within a cell are obtained by interpolating the attribute values at the cell's vertices. For volumes defined on a Cartesian grid, cells are regular hexahedra. The following descriptions will focus on the rendering of rectilinear grids.
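Within a cell of a Cartesian grid, this interpolation is typically trilinear. The following is a minimal sketch for illustration; the function name and the layout of the eight vertex values are assumptions made here, not part of the renderer described in this work.

// Trilinear interpolation of a data value inside a hexahedral cell.
// c[i][j][k] holds the data value at the cell vertex with local
// coordinates (i, j, k), i, j, k in {0, 1}; (fx, fy, fz) are the
// fractional coordinates of the sample point within the cell, in [0, 1].
float trilinear(const float c[2][2][2], float fx, float fy, float fz)
{
    // Interpolate along x on the four cell edges parallel to the x axis.
    float c00 = c[0][0][0] * (1 - fx) + c[1][0][0] * fx;
    float c10 = c[0][1][0] * (1 - fx) + c[1][1][0] * fx;
    float c01 = c[0][0][1] * (1 - fx) + c[1][0][1] * fx;
    float c11 = c[0][1][1] * (1 - fx) + c[1][1][1] * fx;
    // Interpolate along y, then along z.
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}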

Since the first approaches for direct rendering of volumetric data in the early 1980s, four major groups of techniques have emerged: ray casting [26,29], splatting [60], shear/warp projection [28], and hardware-assisted rendering based on texture mapping [59]. As a special case, the rendering of surfaces from volume data can be performed by constructing a polygonal representation of the surface first and rendering it using polygon rendering hardware [33].

Ray casting is a straightforward, image-order algorithm. A ray is shot from the eye through each pixel of the image into the volume. Along the ray's intersection with the volume, several operations can be performed to obtain the color of the pixel. The operation may be a simple summation of data values along the ray to obtain X-ray-like images (figure 2.1a), or the selection of the maximum value along each ray (maximum intensity projection, MIP, figure 2.1b). The most commonly used operation is the integration (or weighted summation in the case of sampled volumes) of color contributions along each ray [35] (figure 2.1c, d). Each data sample within the volume is assigned a set of optical properties (color, opacity, emission and reflection coefficients, etc.) by means of so-called transfer functions, which determine the contribution of the data samples to the pixel values.

Figure 2.1: Some of the most important image compositing methods for volume rendering: a) summation, b) maximum intensity selection, c) opacity weighted blending (without shading), d) opacity weighted blending (shaded, surfaces emphasized).
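The three compositing operations described above can be sketched in a few lines. This is a minimal illustration, assuming the data values have already been resampled at discrete positions along the ray and classified by a transfer function; the structure and function names are chosen here for illustration.

#include <algorithm>
#include <vector>

// One resampled position along a ray: raw data value plus the color and
// opacity assigned to it by the transfer function.
struct RaySample { float value; float r, g, b, a; };

// a) Summation of data values -> X-ray-like image.
float compositeSum(const std::vector<RaySample>& s)
{
    float sum = 0.0f;
    for (const RaySample& x : s) sum += x.value;
    return sum;
}

// b) Maximum intensity projection (MIP).
float compositeMIP(const std::vector<RaySample>& s)
{
    float m = 0.0f;
    for (const RaySample& x : s) m = std::max(m, x.value);
    return m;
}

// c) Opacity weighted blending ("over" operator), compositing the samples
//    back to front; the samples are assumed to be ordered front to back.
void compositeBlend(const std::vector<RaySample>& s, float& r, float& g, float& b)
{
    r = g = b = 0.0f;
    for (auto it = s.rbegin(); it != s.rend(); ++it) {
        r = it->a * it->r + (1.0f - it->a) * r;
        g = it->a * it->g + (1.0f - it->a) * g;
        b = it->a * it->b + (1.0f - it->a) * b;
    }
}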

Ray casting is a rather time-consuming method of volume rendering. The performance of ray casting algorithms can be improved significantly if regions which do not contribute to the image are excluded from rendering. Such regions are parts of the volume which contain only entirely transparent voxels, or the inner parts of objects with a high opacity. Transparent parts of a volume can be skipped, for example, by encoding at each voxel of the volume the distance to the closest non-transparent voxel [8,55,66]. This information can be used to efficiently skip empty regions. Data within opaque regions can easily be omitted if the ray is tracked from the eye towards more distant regions. By keeping track of the opacity of the data encountered so far, the ray can be stopped as soon as the accumulated opacity is close to total, since further samples would not be visible (early ray termination [29]).
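A front-to-back formulation combines both accelerations naturally, as the following sketch shows. The Volume interface is a hypothetical stand-in assumed here: sample() returns the transfer-function-classified color and opacity at a position, and emptyDistance() returns the precomputed distance to the closest non-transparent voxel mentioned above; the opacity threshold is likewise illustrative.

struct RGBA { float r, g, b, a; };

struct Volume {
    RGBA  sample(float x, float y, float z) const;         // classified sample
    float emptyDistance(float x, float y, float z) const;  // space-leaping distance
};

// Cast one ray from (px,py,pz) along direction (dx,dy,dz) with step size dt.
RGBA castRay(const Volume& vol, float px, float py, float pz,
             float dx, float dy, float dz, float tMax, float dt)
{
    RGBA acc = {0, 0, 0, 0};
    float t = 0.0f;
    while (t < tMax) {
        float x = px + t * dx, y = py + t * dy, z = pz + t * dz;

        // Empty-space skipping: leap over fully transparent regions.
        float skip = vol.emptyDistance(x, y, z);
        if (skip > dt) { t += skip; continue; }

        // Front-to-back "over" compositing.
        RGBA s = vol.sample(x, y, z);
        float w = (1.0f - acc.a) * s.a;
        acc.r += w * s.r;
        acc.g += w * s.g;
        acc.b += w * s.b;
        acc.a += w;

        // Early ray termination: stop once the ray is nearly opaque.
        if (acc.a > 0.99f) break;
        t += dt;
    }
    return acc;
}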

In contrast to ray casting, which computes one pixel of the image at a time, splatting is an object-order algorithm: the contributions of a voxel to all affected pixels of the image are computed at once. The area affected by the projection of a voxel (its footprint) is usually a circle (for parallel projection) or an ellipse (for perspective projection). Within the affected area, the voxel contributes to the color of the pixels according to a Gaussian (or similar) distribution around the center of the footprint. Empty (transparent) regions of a volume can easily be skipped during splatting. Skipping opaque, invisible regions (the interior parts of opaque objects) is more difficult, as a voxel may not contribute to some pixels of its footprint, but may contribute to others.
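The accumulation of a single voxel's footprint might be sketched as follows, assuming the voxels are splatted in back-to-front order. The footprint radius, the Gaussian weighting, and the image and voxel structures are simplifying assumptions made for illustration; practical splatting implementations typically use precomputed footprint tables.

#include <algorithm>
#include <cmath>
#include <vector>

struct Pixel { float r, g, b, a; };
struct Image {
    int width, height;
    std::vector<Pixel> data;
    Pixel& at(int x, int y) { return data[y * width + x]; }
};

// Splat one classified voxel (color cr,cg,cb, opacity ca) whose projection
// is centered at image position (cx, cy); 'radius' is the footprint radius
// in pixels and 'sigma' the width of the Gaussian reconstruction kernel.
void splatVoxel(Image& img, float cx, float cy, float radius, float sigma,
                float cr, float cg, float cb, float ca)
{
    int x0 = std::max(0, (int)std::floor(cx - radius));
    int x1 = std::min(img.width - 1, (int)std::ceil(cx + radius));
    int y0 = std::max(0, (int)std::floor(cy - radius));
    int y1 = std::min(img.height - 1, (int)std::ceil(cy + radius));

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            float dx = x - cx, dy = y - cy;
            float d2 = dx * dx + dy * dy;
            if (d2 > radius * radius) continue;                 // outside footprint
            float w = std::exp(-d2 / (2.0f * sigma * sigma));   // Gaussian weight
            float a = ca * w;
            Pixel& p = img.at(x, y);                            // back-to-front "over"
            p.r = a * cr + (1.0f - a) * p.r;
            p.g = a * cg + (1.0f - a) * p.g;
            p.b = a * cb + (1.0f - a) * p.b;
            p.a = a + (1.0f - a) * p.a;
        }
}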

Both ray casting and splatting are considered high-quality methods, capable of generating images for arbitrary view parameters, image sizes, and quality settings. The rendering times of splatting and ray casting are comparable, with splatting performing better for volumes containing large amounts of transparent data. On the other hand, ray casting is perfectly suited for parallel implementation [47], as pixel values are computed independently of each other.

Approaches which utilize shear/warp-based projection (like the one presented in this work) are the fastest software-based methods for volume rendering. The usually quite costly transformation of data from the volume coordinate system into image coordinates for projection is split into two shears along axes of the volume (plus a scaling operation if perspective projection is performed) and a 2D warp operation. The data is sheared and projected onto one of the faces of the volume (the base plane, a plane which is normal to an axis of the volume coordinate system). The cheap shear-based projection is performed for all voxels of the volume, creating a distorted version of the rendered image. The warp (which can, for example, be done efficiently by texture mapping hardware) transforms the base plane image into the final image (see figure 2.2).

Figure 2.2: Parallel projection using a shear/warp-factorization of the viewing transformation
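For parallel projection, the factorization of [28] can be summarized as follows (a standard formulation, included here only for illustration): after permuting the volume axes so that the principal viewing axis becomes the third coordinate, the viewing transformation is decomposed into

\begin{displaymath} M_{view} = M_{warp} \cdot M_{shear}, \end{displaymath}

where $M_{shear}$ translates slice $k$ of the volume parallel to the base plane by $(k \cdot s_x, k \cdot s_y)$ with $s_x = -v_x / v_z$ and $s_y = -v_y / v_z$ for a viewing direction $(v_x, v_y, v_z)$, and $M_{warp}$ is a purely two-dimensional transformation mapping the base plane image to the final image.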

As the decomposition of the projection into two separate steps requires resampling to be performed twice - first as voxels are projected onto the base plane, and a second time during the warp step - images produced using this technique are more blurred than the results of ray casting or splatting. Usually, no scaling is performed during the projection onto the base plane; each voxel is projected onto an area of approximately one pixel. Thus, zooming into the volume is performed by zooming into the base plane image during the warp step, which leads to stronger blurring as the zoom factor increases. Common approaches to accelerating shear/warp-based volume rendering use run-length encoding for sequences of voxels with similar optical properties, for example for transparent regions. All pixels covered by the projection of a run can be treated equally, thus accelerating the rendering.
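Run-length based skipping during the shear/projection step can be sketched roughly as follows. The run encoding and voxel representation are illustrative assumptions, not the data structures used in [28] or in this work, and only transparent voxel runs are skipped here.

#include <vector>

// A run of consecutive voxels along a volume scanline that are either all
// transparent or all non-transparent.
struct Run { int length; bool transparent; };

// Classified voxel, stored only for the non-transparent runs.
struct Voxel { float r, g, b, a; };

struct Pixel { float r, g, b, a; };

// Composite one sheared volume scanline onto the base plane image.
// 'baseRow' points to the base plane pixels the scanline maps to after
// applying the shear offset; transparent runs are skipped without
// touching the image at all.
void compositeScanline(const std::vector<Run>& runs,
                       const std::vector<Voxel>& voxels,  // non-transparent voxels only
                       Pixel* baseRow)
{
    size_t v = 0;   // index into 'voxels'
    int    x = 0;   // current position along the scanline / base plane row
    for (const Run& run : runs) {
        if (run.transparent) {
            x += run.length;                  // skip the whole run at once
            continue;
        }
        for (int i = 0; i < run.length; ++i, ++x, ++v) {
            const Voxel& s = voxels[v];
            Pixel& p = baseRow[x];
            float w = (1.0f - p.a) * s.a;     // front-to-back compositing
            p.r += w * s.r;
            p.g += w * s.g;
            p.b += w * s.b;
            p.a += w;
        }
    }
}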

The texture mapping capabilities of recent polygon-rendering hardware can be exploited to render volumetric data sets. Two different approaches can be distinguished here; their applicability depends on the capabilities of the rendering hardware used. If the application of 3D textures to polygons is supported, a set of polygons perpendicular to the viewing direction can be placed within the volume and textured using color and opacity information from the volume [59]. By blending the textured polygons in back-to-front order, the volume is rendered. The quality of the image depends on the number of slices rendered and is in general lower than the output of software-based rendering. If no 3D texture support is available, single slices of the volume can be mapped as 2D textures onto polygons. Three perpendicular sets of polygons and textures are required to avoid viewing the polygons edge-on [50]. The set of polygons which is most perpendicular to the viewing direction is rendered. For small volumes (up to $256^3$), hardware-based approaches can achieve frame rates of up to 30 Hz even on consumer 3D hardware. Hardware-based approaches usually provide just a subset of the capabilities of software-based renderers; for example, the user may have to choose between color rendering and shading of the volume, but cannot apply both simultaneously.
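A minimal sketch of the 3D-texture based approach using fixed-function OpenGL is given below. It assumes that the classified volume has already been uploaded as an RGBA texture with glTexImage3D and, for brevity, uses axis-aligned rather than truly view-aligned slices; the slice count and blending setup are illustrative.

#include <GL/gl.h>

// Render 'numSlices' slices through the unit cube [0,1]^3, textured with a
// 3D RGBA texture that encodes the transfer-function-classified volume.
// The slices are stacked along z and drawn in order of increasing z, which
// is back-to-front for a viewer located on the positive z side.
void renderSlices(int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // back-to-front "over"

    for (int i = 0; i < numSlices; ++i) {
        float z = (i + 0.5f) / numSlices;
        glBegin(GL_QUADS);
        glTexCoord3f(0, 0, z); glVertex3f(0, 0, z);
        glTexCoord3f(1, 0, z); glVertex3f(1, 0, z);
        glTexCoord3f(1, 1, z); glVertex3f(1, 1, z);
        glTexCoord3f(0, 1, z); glVertex3f(0, 1, z);
        glEnd();
    }

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}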

A more detailed comparison of the four classes of volume rendering techniques discussed above has been published by Meißner et al. [36].

