The basic object and rendering primitive of RTVR is the voxel, i.e., a single data sample within the volume. During a segmentation and data extraction step (figure 6.2), the voxels that are actually relevant for the user-defined visualization are extracted and stored, object by object, within a RenderList data structure. This extraction step usually leads to a significant data reduction. First, only a portion of the original volume belongs to objects of interest. Second, depending on the desired visual representation of an object, only a subset of its voxels has to be considered for rendering. If surface rendering (using a fixed iso-value) is performed, for example, a thin layer of voxels is sufficient to represent the object. When an object is rendered using opacity transfer functions which depend on gradient magnitude [29], voxels with a low gradient magnitude do not noticeably contribute to the image and can therefore be omitted.
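To illustrate this step, the following is a minimal sketch (not the actual RTVR implementation) of extracting the relevant voxels of one object into a per-object RenderList. The relevance test shown here is a gradient-magnitude threshold, as used for gradient-dependent opacity; all class and method names apart from RenderList are illustrative assumptions.

```java
// Hypothetical sketch of per-object voxel extraction into a RenderList.
final class Voxel {
    final short x, y, z;      // position within the volume
    final short value;        // data value
    final byte gradMag;       // quantized gradient magnitude

    Voxel(short x, short y, short z, short value, byte gradMag) {
        this.x = x; this.y = y; this.z = z;
        this.value = value; this.gradMag = gradMag;
    }
}

final class RenderList {
    private final java.util.List<Voxel> voxels = new java.util.ArrayList<>();
    void add(Voxel v) { voxels.add(v); }
    int size() { return voxels.size(); }
}

final class ObjectExtractor {
    /** Extracts the voxels of one segmented object from the raw volume (x-fastest layout). */
    static RenderList extract(short[] volume, byte[] gradMag, byte[] objectMask,
                              int dimX, int dimY, int dimZ,
                              int objectId, int minGradMag) {
        RenderList list = new RenderList();
        for (int z = 0, i = 0; z < dimZ; z++)
            for (int y = 0; y < dimY; y++)
                for (int x = 0; x < dimX; x++, i++) {
                    // keep only voxels of this object which can contribute visibly
                    if (objectMask[i] == objectId && (gradMag[i] & 0xFF) >= minGradMag)
                        list.add(new Voxel((short) x, (short) y, (short) z,
                                           volume[i], gradMag[i]));
                }
        return list;
    }
}
```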
The voxel sets which result from object extraction are the basic data structures of RTVR. For visualization, this data is inserted into a scene graph and rendered. An intermediate representation, which can be used to store visualization results for later interactive viewing, is produced by transforming the extracted voxel data into a space-efficient compressed format (see chapter 5) [40] and storing it on disk together with the currently used visualization and rendering parameters. For rendering, RTVR uses fast shear/warp projection as described in chapter 4, which requires the data to be given as isotropically spaced voxels. This does not significantly restrict the use of RTVR for visualization: data which is given on non-Cartesian grids can be resampled on-the-fly during the extraction of object voxels.
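As a hedged sketch of such on-the-fly resampling, the code below handles the simplest case of a regular grid with unequal spacing along the axes, sampling it at isotropic positions with trilinear interpolation during extraction. Truly non-Cartesian grids would additionally require a cell-location step; all names and the memory layout are assumptions.

```java
// Illustrative resampling onto an isotropic grid (not RTVR code).
final class IsotropicResampler {

    static int index(int x, int y, int z, int dimX, int dimY) {
        return (z * dimY + y) * dimX + x;
    }

    /** Trilinear interpolation at a (fractional) source-grid position. */
    static double sample(short[] vol, int dimX, int dimY, int dimZ,
                         double x, double y, double z) {
        int x0 = Math.min((int) x, dimX - 1);
        int y0 = Math.min((int) y, dimY - 1);
        int z0 = Math.min((int) z, dimZ - 1);
        int x1 = Math.min(x0 + 1, dimX - 1);
        int y1 = Math.min(y0 + 1, dimY - 1);
        int z1 = Math.min(z0 + 1, dimZ - 1);
        double fx = x - x0, fy = y - y0, fz = z - z0;

        double c00 = vol[index(x0, y0, z0, dimX, dimY)] * (1 - fx) + vol[index(x1, y0, z0, dimX, dimY)] * fx;
        double c10 = vol[index(x0, y1, z0, dimX, dimY)] * (1 - fx) + vol[index(x1, y1, z0, dimX, dimY)] * fx;
        double c01 = vol[index(x0, y0, z1, dimX, dimY)] * (1 - fx) + vol[index(x1, y0, z1, dimX, dimY)] * fx;
        double c11 = vol[index(x0, y1, z1, dimX, dimY)] * (1 - fx) + vol[index(x1, y1, z1, dimX, dimY)] * fx;

        double c0 = c00 * (1 - fy) + c10 * fy;
        double c1 = c01 * (1 - fy) + c11 * fy;
        return c0 * (1 - fz) + c1 * fz;
    }

    /** Value of the isotropic target voxel (tx, ty, tz), given the source spacings. */
    static double targetSample(short[] vol, int dimX, int dimY, int dimZ,
                               double spacingX, double spacingY, double spacingZ,
                               double isoSpacing, int tx, int ty, int tz) {
        double sx = tx * isoSpacing / spacingX;
        double sy = ty * isoSpacing / spacingY;
        double sz = tz * isoSpacing / spacingZ;
        return sample(vol, dimX, dimY, dimZ, sx, sy, sz);
    }
}
```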
After extraction, the next step in the visualization process is the mapping of voxel attributes to optical properties (transfer-function mapping). One or two voxel attributes can be selected to influence a voxel's contribution to the visualization result. These attributes usually are the data value, the gradient direction, and/or the gradient magnitude. The restriction to two arguments per transfer function is imposed for performance reasons: for rendering, both values have to fit into a 16-bit field, which is typically subdivided into a 12-bit main channel for the more significant data value and a 4-bit channel for a second, additional value.
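A minimal sketch of this 16-bit packing is given below, assuming the 12-bit main channel occupies the more significant bits and the 4-bit channel the least significant ones; the exact bit layout and the method names are assumptions.

```java
// Hypothetical packing of two voxel attributes into one 16-bit word.
final class VoxelAttributes {
    /** primary12: e.g. the data value; secondary4: e.g. quantized gradient magnitude. */
    static short pack(int primary12, int secondary4) {
        return (short) (((primary12 & 0x0FFF) << 4) | (secondary4 & 0x000F));
    }
    static int primary(short packed)   { return (packed >> 4) & 0x0FFF; }
    static int secondary(short packed) { return packed & 0x000F; }
}
```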
The data values are used to index look-up tables which yield and modulate color and opacity values as defined by the selected rendering mode. These look-up tables provide a very efficient way to implement different transfer functions and shading models.
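The following sketch illustrates the principle of such LUT-based classification: the packed 16-bit attribute word indexes a precomputed table of RGBA entries, so changing a transfer function only requires refilling the table rather than touching the voxel data. The table layout, the opacity modulation by the 4-bit channel, and all names are assumptions rather than the actual RTVR tables.

```java
// Illustrative look-up-table classification for packed 16-bit attributes.
final class TransferFunctionLUT {
    // one packed RGBA entry per possible 16-bit attribute word
    private final int[] rgba = new int[1 << 16];

    /** Refills the table from a transfer function on the 12-bit primary value,
     *  modulating opacity by the 4-bit secondary value (e.g. gradient magnitude). */
    void rebuild(java.util.function.IntUnaryOperator primaryToRgba) {
        for (int packed = 0; packed < rgba.length; packed++) {
            int primary = (packed >> 4) & 0x0FFF;
            int secondary = packed & 0x000F;
            int base = primaryToRgba.applyAsInt(primary);
            int alpha = (base >>> 24) * secondary / 15;      // scale opacity by the second channel
            rgba[packed] = (alpha << 24) | (base & 0x00FFFFFF);
        }
    }

    /** One table access per voxel during rendering. */
    int lookup(short packedAttributes) {
        return rgba[packedAttributes & 0xFFFF];
    }
}
```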
Depending on the visualization parameters in use, some object voxels may not contribute to the visualization at all, because they are, for example, completely transparent after application of the transfer function. A background thread identifies such voxels during idle time and rearranges the data so that no effort is spent on skipping them during subsequent rendering passes. This is especially useful for accelerating the rendering of ``fuzzy'' objects, for which no exact information about the object shape is available at the time of extraction.
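One way to realize such idle-time rearrangement is sketched below, under the assumption that a low-priority thread partitions an object's attribute array so that all voxels mapping to zero opacity end up behind a visible-count marker, which the renderer then uses to traverse only the contributing voxels. This is not the actual RTVR implementation; in particular, the per-voxel positions would have to be reordered together with the attribute words.

```java
// Hypothetical idle-time compaction of non-contributing voxels.
final class IdleCompactor implements Runnable {
    private final short[] packedAttributes;                       // one 16-bit attribute word per voxel
    private final java.util.function.IntPredicate isTransparent;  // true if a word maps to zero opacity
    private volatile int visibleCount;

    IdleCompactor(short[] packedAttributes, java.util.function.IntPredicate isTransparent) {
        this.packedAttributes = packedAttributes;
        this.isTransparent = isTransparent;
        this.visibleCount = packedAttributes.length;
    }

    @Override public void run() {
        // Partition: visible voxels to the front, transparent ones to the back.
        // (Positions and other per-voxel data would move along with the words;
        // omitted here for brevity.)
        int front = 0, back = packedAttributes.length - 1;
        while (front <= back) {
            if (isTransparent.test(packedAttributes[front] & 0xFFFF)) {
                short tmp = packedAttributes[front];
                packedAttributes[front] = packedAttributes[back];
                packedAttributes[back] = tmp;
                back--;
            } else {
                front++;
            }
        }
        visibleCount = front;   // the renderer only traverses the first visibleCount entries
    }

    int visibleCount() { return visibleCount; }

    /** Runs the compaction on a low-priority daemon thread so it only consumes idle time. */
    static Thread startInBackground(IdleCompactor compactor) {
        Thread t = new Thread(compactor, "idle-compaction");
        t.setPriority(Thread.MIN_PRIORITY);
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```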
The object data contained within a single scene graph can be displayed simultaneously in several views, for example in a 3D view and in several sections through the volume. Parameter changes which influence the visualization result can be carried out either through GUI components or by directly interacting with objects within the rendered view. GUI components for parameter adjustment are automatically derived from the visualization pipeline and grouped into a ``control panel''. Within a (3D) view, objects can be selected by clicking on them; parameters like the camera position, zoom factor, light source position, or object opacity and transfer function can be manipulated by dragging the mouse.
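To give an impression of this kind of in-view interaction, the following is a small sketch (not the actual RTVR GUI code) of translating mouse drags on a rendered view into parameter changes, rotating the camera on a plain drag and adjusting the selected object's opacity while a modifier key is held. Apart from the standard AWT classes, all names and the chosen gestures are assumptions.

```java
// Illustrative mapping from mouse drags to visualization parameters.
import java.awt.Component;
import java.awt.event.InputEvent;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

final class ViewInteraction extends MouseAdapter {
    /** Hypothetical interface to the parameters of one view. */
    interface ViewParameters {
        void rotateCamera(double dAzimuth, double dElevation);
        void changeSelectedObjectOpacity(double delta);
        void requestRedraw();
    }

    private final ViewParameters view;
    private int lastX, lastY;

    ViewInteraction(ViewParameters view) { this.view = view; }

    @Override public void mousePressed(MouseEvent e) {
        lastX = e.getX();
        lastY = e.getY();
    }

    @Override public void mouseDragged(MouseEvent e) {
        int dx = e.getX() - lastX;
        int dy = e.getY() - lastY;
        lastX = e.getX();
        lastY = e.getY();
        if ((e.getModifiersEx() & InputEvent.SHIFT_DOWN_MASK) != 0) {
            // shift-drag: adjust the opacity of the currently selected object
            view.changeSelectedObjectOpacity(-dy * 0.005);
        } else {
            // plain drag: rotate the camera around the scene
            view.rotateCamera(dx * 0.5, dy * 0.5);
        }
        view.requestRedraw();
    }

    static void attachTo(Component renderView, ViewParameters view) {
        ViewInteraction interaction = new ViewInteraction(view);
        renderView.addMouseListener(interaction);
        renderView.addMouseMotionListener(interaction);
    }
}
```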