
Two-Level Volume Rendering

Good visualization strongly depends on the data to be visualized, on the structure of this data, and on the visualization goals of the user. Depending on these prerequisites several useful approaches exist, and the rendering method has to be chosen individually for each specific application. Different parts of a volume (objects) may require different rendering methods to best depict their structure. If segmentation information is available, the approach presented here provides this functionality at interactive frame rates.

The basic idea of two-level volume rendering [22] is to investigate, for every pixel, a viewing ray into the data set and detect which objects it intersects. For every intersected object, a meaningful and representative contribution is computed using an object-specific compositing method (for example, MIP or DVR). These object representatives are then combined into the final pixel value using a global compositing method (usually DVR compositing).
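As a concrete illustration, this two-level scheme can be sketched in Python for a single pixel. All names, the sample data, and the choice of giving a MIP representative an opacity equal to its intensity are illustrative assumptions, not part of the original method:

```python
# Sketch: per-object (local) compositing, then global compositing of the
# object representatives, for one viewing ray.

def dvr_composite(samples):
    """Front-to-back DVR over (color, opacity) samples."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return (color, alpha)

def mip_composite(samples):
    """MIP: keep the maximum intensity along the segment."""
    m = max(c for c, _ in samples)
    return (m, m)  # opacity tied to intensity (one possible choice)

# object-specific local compositing modes (hypothetical object names)
LOCAL_MODE = {"skin": mip_composite, "vessel": dvr_composite, "bone": dvr_composite}

def render_pixel(segments):
    """segments: [(object_name, [(color, opacity), ...]), ...] in ray order."""
    representatives = [LOCAL_MODE[obj](s) for obj, s in segments]
    return dvr_composite(representatives)   # global compositing: DVR
```

For instance, a ray crossing skin (two low-opacity samples) and then bone yields one MIP representative and one DVR representative, which the global DVR pass then blends front to back.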

Figure 3.4: object segmentation implicitly partitions viewing rays into segments (one per object intersection).
\includegraphics[width=.9\linewidth]{Figures/ray.eps}

The principles of two-level volume rendering can be easily explained using a ray-casting-based approach: a 3D segmentation mask specifies which regions of the data set belong to which objects. This subdivision of the data set into objects also partitions each viewing ray into a set of distinct segments (figure 3.4).
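This partitioning can be demonstrated with a short Python sketch (the IDs and sample values are made up, and `ray_segments` is a hypothetical helper, not part of the original implementation):

```python
from itertools import groupby

def ray_segments(object_ids, values):
    """Group consecutive samples with equal object IDs into ray segments."""
    segments, i = [], 0
    for obj_id, run in groupby(object_ids):
        n = len(list(run))                 # length of this run of equal IDs
        segments.append((obj_id, values[i:i + n]))
        i += n
    return segments

ids  = [0, 0, 1, 1, 1, 0, 2, 2]            # object ID at each ray sample
vals = [3, 5, 9, 7, 8, 2, 6, 4]            # data value at each ray sample
print(ray_segments(ids, vals))
# → [(0, [3, 5]), (1, [9, 7, 8]), (0, [2]), (2, [6, 4])]
```

Note that the ray re-enters object 0 and thus produces two separate segments for it: each object intersection yields its own segment, as in figure 3.4.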

During ray traversal, two tracks of rendering are processed simultaneously. For every segment of the ray, local rendering is performed using the object's compositing strategy to compute an object representative associated with the segment (rendering at the object level). At the scene level, a global track of rendering combines the object representatives into the final image value. Whenever the ray leaves one object and enters a new one, the local value of the old object is merged into the global rendering track using the global compositing method.
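The two simultaneous tracks can be sketched as a streaming loop in Python (a sketch under assumed names; the per-step operators and the sample data are illustrative):

```python
def dvr_step(state, sample):
    """Front-to-back DVR blend of one (color, opacity) sample into state."""
    c, a = state
    sc, sa = sample
    return (c + (1.0 - a) * sa * sc, a + (1.0 - a) * sa)

def mip_step(state, sample):
    """Maximum-intensity selection."""
    return (max(state[0], sample[0]), max(state[1], sample[1]))

def traverse(samples, local_mode, global_merge):
    """samples: (object_id, color, opacity) in front-to-back ray order."""
    global_state = (0.0, 0.0)
    local_state, current = None, None
    for obj_id, color, opacity in samples:
        if obj_id != current:                  # ray entered a new object:
            if local_state is not None:        # merge old local value globally
                global_state = global_merge(global_state, local_state)
            local_state, current = (0.0, 0.0), obj_id
        local_state = local_mode[obj_id](local_state, (color, opacity))
    if local_state is not None:                # merge the final segment
        global_state = global_merge(global_state, local_state)
    return global_state
```

For example, `traverse(samples, {0: mip_step, 1: dvr_step}, dvr_step)` renders object 0 with MIP, object 1 with DVR, and combines the representatives with DVR at the global level.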

Figure 3.5: joining MIP and DVR - a simple example (bones and vessels: DVR; skin: MIP).
\includegraphics[width=.57\linewidth]{Figures/hand-miphaut.ps}

For an example see figure 3.5. In this case, DVR gives good results for ray segments within vessels and bones, while MIP works best for ray segments in soft-tissue regions. This is mainly because MIP yields roughly the same transparency regardless of object thickness.

Usually, DVR compositing is the appropriate choice at the global level. The only exception, where MIP is more useful instead, arises when all objects in the data set are themselves rendered using MIP. In contrast to standard MIP, this ``MIP of MIP'' approach makes it easy to distinguish between different objects within the scene, as different transfer functions, and thus colors, can be assigned to different objects.
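One plausible reading of this in code (a sketch only: the per-object transfer functions are reduced to scalar scale factors standing in for colors, and all names and data are illustrative):

```python
def mip_of_mip(samples, transfer):
    """samples: (object_id, intensity) pairs; transfer: per-object scale."""
    local_max = {}
    for obj_id, v in samples:                        # local MIP per object
        local_max[obj_id] = max(local_max.get(obj_id, 0.0), v)
    colored = {o: transfer[o] * v for o, v in local_max.items()}
    # global MIP over the color-mapped object representatives
    return max(colored.items(), key=lambda kv: kv[1])
```

With `transfer = {0: 1.0, 1: 0.5}`, object 0 can win the global maximum even when object 1 contains the brighter raw value, and the returned ID determines which object's color the pixel receives — which is what lets the objects be told apart.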

To implement the two-level rendering approach on top of the RenderList structure, two sets of buffers are used for the base-plane image. An object buffer performs rendering within an object, while a global buffer performs inter-object rendering. In addition to the intermediate pixel value, each pixel of the object buffer stores a unique ID for the currently front-most object. When a voxel is projected onto the intermediate image, its ID is compared with the ID stored in the object buffer. If both IDs match, the value in the object buffer is updated using an operation that corresponds to the object's local rendering mode (maximum selection or blending of the voxel value with the buffer content). If the voxel's ID differs from the ID of the pixel in the buffer, the viewing ray through this pixel must have entered a new object. The content of the object-buffer pixel is then combined with the corresponding global-buffer pixel using an operation that depends on the global rendering strategy (MIP or DVR), and the object-buffer pixel is re-initialized with the voxel of the new object and the new local rendering mode.

After all voxels have been projected, the contribution of the front-most segment at each pixel still has to be included: an additional scan of the buffers merges the segment values remaining in the object buffer into the global buffer. See http://bandviz.cg.tuwien.ac.at/basinviz/two-level/ for sample images and animations produced with this technique.
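The buffer logic described above can be sketched as follows. This is a sketch, not the original implementation: the names are invented, voxels are assumed to arrive in back-to-front order per pixel (so that the front-most segment is the one left over for the final scan), and colors are premultiplied by opacity:

```python
NO_OBJECT = -1                       # sentinel ID for an untouched pixel

def over(front, back):
    """Back-to-front 'over' blend on premultiplied (color, alpha) pairs."""
    fc, fa = front
    bc, ba = back
    return (fc + (1.0 - fa) * bc, fa + (1.0 - fa) * ba)

def dvr_local(state, voxel):
    c, a = voxel                     # the new voxel lies in front of state
    return over((a * c, a), state)

def mip_local(state, voxel):
    return (max(state[0], voxel[0]), max(state[1], voxel[1]))

def project_voxel(pixel, voxel_id, voxel, obj_buf, glob_buf, local_op):
    c, a, stored_id = obj_buf[pixel]
    if voxel_id == stored_id:        # same object: update with local mode
        obj_buf[pixel] = local_op[voxel_id]((c, a), voxel) + (stored_id,)
    else:                            # ray entered a new object
        if stored_id != NO_OBJECT:   # merge old segment into the global buffer
            glob_buf[pixel] = over((c, a), glob_buf[pixel])
        obj_buf[pixel] = local_op[voxel_id]((0.0, 0.0), voxel) + (voxel_id,)

def flush(obj_buf, glob_buf):
    """Final scan: merge the front-most segments left in the object buffer."""
    for p, (c, a, stored_id) in enumerate(obj_buf):
        if stored_id != NO_OBJECT:
            glob_buf[p] = over((c, a), glob_buf[p])
```

Here the global compositing is fixed to the DVR-style `over` operator for brevity; per the text it could equally be a maximum selection when all objects use MIP.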


Lukas Mroz, May 2001,
mailto:mroz@cg.tuwien.ac.at.