To achieve interactive rendering rates even on standard desktop hardware,
fast shear/warp-based parallel projection is used. Rendering to the
base-plane is performed using a back-to-front compositing of voxels
by the use of nearest-neighbor interpolation. In comparison to
previously presented versions of this fast algorithm [44], RTVR
includes an extended version, which provides more flexibility for
mapping voxel attributes to
color and opacity. Three look-up tables
are available at each RenderListEntry for implementing shading
and transfer functions. A set of combination patterns for the voxel
attributes and look-up tables is provided by RTVR (see
figure 6.5) and selected by choosing an appropriate
rendering mode for an object. This scheme of combining LUTs allows
efficient processing while still enabling various ways of selectively
applying visualization techniques to objects within the data.
The RenderListEntry can also be
extended to provide user-defined rendering functionality for its voxels, which
makes it possible to implement any desired operation on voxel attributes and
look-up tables.
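To make the LUT-combination scheme concrete, the following Python sketch maps a voxel's attribute channels through small tables to a color and an opacity. All names here (`shade_voxel`, `make_ramp_lut`, the table layout) are invented for illustration and are not RTVR's actual API:

```python
# Hypothetical sketch of per-object LUT mapping (names invented, not RTVR's API).
# Each voxel carries attribute channels; small look-up tables map them to
# color and opacity, so changing a visualization only means rewriting a LUT.

def make_ramp_lut(size):
    """Linear opacity ramp: index 0 -> 0.0, index size-1 -> 1.0."""
    return [i / (size - 1) for i in range(size)]

def shade_voxel(value, gradient_idx, color_lut, opacity_lut, shading_lut):
    """Combine voxel attributes through look-up tables (cf. LUT1-LUT3)."""
    intensity = shading_lut[gradient_idx]          # e.g. precomputed shading term
    color = tuple(c * intensity for c in color_lut[value])
    opacity = opacity_lut[value]
    return color, opacity
```

Because the tables are small, selectively applying a different visualization technique to one object amounts to rewriting or re-selecting that object's LUTs.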
Shading operations are performed using a look-up table based approach, with a 12-bit representation of the gradient vector as an index. Using this approach, various shading models can be implemented efficiently, with acceptable quality, and applied even on a per-object basis. Two shading models are provided by RTVR: a Phong shading table (figure 6.6a) and a non-photorealistic shading table (figure 6.6b) which enhances the contour of an object [11]. The shading tables have to be re-computed after every change of viewer or light-source position, which is not time critical due to their small size (4096 entries). For rendering, the shading table is placed into LUT2 (figure 6.5) and indexed by the 12-bit data channel which contains the gradient vector. The output of the look-up is not an RGB color but an intensity value, which is then used to access the color transfer function in LUT3. Although it would be possible to combine lighting and transfer-function mapping within a single look-up into LUT2 (as described in section 4.4), splitting it into two stages allows the same shading table to be reused for objects with different color transfer functions.
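The table-based shading stage can be sketched as follows. The 12-bit index is assumed here to encode the gradient direction as two 6-bit spherical angles; this encoding is an assumption for illustration only, and RTVR's actual gradient representation may differ:

```python
import math

# Sketch of a LUT shading stage. ASSUMPTION: the 12-bit gradient index is
# split into two 6-bit spherical angles; RTVR's real encoding may differ.

def build_phong_table(light, bits=12, ka=0.2, kd=0.8):
    """Precompute an intensity (not RGB) per quantized normal direction."""
    half = bits // 2
    n_theta = n_phi = 1 << half                    # 64 x 64 = 4096 entries
    table = []
    for i in range(n_theta * n_phi):
        theta = math.pi * (i >> half) / (n_theta - 1)
        phi = 2.0 * math.pi * (i & (n_phi - 1)) / n_phi
        n = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        diffuse = max(0.0, sum(a * b for a, b in zip(n, light)))
        table.append(ka + kd * diffuse)            # ambient + diffuse intensity
    return table
```

Rebuilding the table after a light or viewer change touches only 4096 entries, which is why the update is not time critical; the resulting intensity would then index the color transfer function in a second table.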
The opacity of a pixel is influenced by several sources. An all-object opacity value is always included in the computation and can be used to tune the overall opacity of entire objects, independently of individual per-voxel opacity calculations. The individual opacity of each voxel can be derived from various combinations of data channels and look-up operations. In the following, a few sample color and opacity calculation setups are discussed, which implement different volume rendering approaches.
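One such setup might combine a value-based opacity transfer function with gradient-magnitude modulation and the all-object opacity. This particular combination, and all names in it, are hypothetical illustrations rather than a fixed RTVR configuration:

```python
def voxel_opacity(value, grad_mag, opacity_tf, grad_lut, object_alpha):
    # Hypothetical opacity setup: a value-based opacity transfer function,
    # modulated by a gradient-magnitude look-up, scaled by the per-object
    # (all-object) opacity that tunes an entire object at once.
    return object_alpha * opacity_tf[value] * grad_lut[grad_mag]
```

Since the all-object factor is applied outside the per-voxel look-ups, an entire object can be faded without touching its tables.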
In addition to color and opacity values, the compositing mode can also be defined individually for each object - for example, maximum intensity projection (MIP) or the usual opacity-weighted blending (DVR). The compositing operation performed between objects can be defined independently of the per-object compositing modes (two-level volume rendering [22], figure 6.7a). Object-aware compositing requires the use of two separate pixel buffers, one for compositing within an object and one for compositing the global image (figure 6.5).
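A minimal sketch of such two-level compositing for a single pixel, assuming grayscale samples and invented function names, might look like this:

```python
def composite_dvr(dest, color, alpha):
    """Back-to-front opacity-weighted blending (DVR)."""
    c, a = dest
    return (alpha * color + (1.0 - alpha) * c, alpha + (1.0 - alpha) * a)

def composite_mip(dest, color, alpha):
    """Maximum intensity projection: keep the brightest sample."""
    c, a = dest
    return (max(c, color), max(a, alpha))

def two_level_pixel(objects, inter_mode):
    # objects: list of (mode, samples); samples are (gray, alpha) pairs in
    # back-to-front order. Each object is first composited into its own
    # local buffer, whose result is then merged into the global buffer with
    # a separately chosen inter-object mode (cf. two-level volume rendering).
    glob = (0.0, 0.0)
    for mode, samples in objects:
        local = (0.0, 0.0)                 # per-object pixel buffer
        for color, alpha in samples:
            local = mode(local, color, alpha)
        glob = inter_mode(glob, *local)    # global pixel buffer
    return glob
```

The two accumulators `local` and `glob` correspond directly to the two separate pixel buffers required by object-aware compositing.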
Clipping of objects is handled in a way which differs from the usual approach. Instead of simply not displaying parts of objects which have been clipped, clipped data is rendered using a different set of attributes. Separate values can be set for clipped-object opacity, rendering mode (LUT configuration) and look-up table content. The compositing mode has to remain the same for clipped and non-clipped parts of an object. By setting clipped object opacity to zero, the usual effect of removing clipped data is obtained (figure 6.7b). By using, for example, Phong shading for non-clipped voxels and a contour-only rendering for clipped parts, insight into an object can be given, while still providing a sketch of the most significant features of the clipped part as a context (figure 6.1).
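This attribute-switching view of clipping can be sketched as a per-voxel selection against a clip plane. The model below is simplified and its names invented; RTVR configures the two attribute sets per object rather than per call:

```python
def render_attributes(voxel_pos, clip_plane, normal_attrs, clipped_attrs):
    # Instead of discarding clipped voxels, select an alternative attribute
    # set (opacity, rendering mode, LUT contents) for them. Setting the
    # clipped opacity to zero reproduces conventional clipping; choosing,
    # e.g., a contour-only set keeps the clipped part visible as context.
    x, y, z = voxel_pos
    a, b, c, d = clip_plane                # plane: a*x + b*y + c*z + d = 0
    clipped = a * x + b * y + c * z + d < 0.0
    return clipped_attrs if clipped else normal_attrs
```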
To obtain high frame rates despite the flexibility of
color and opacity calculation and compositing-mode selection,
optimized routines are implemented for frequently used rendering-mode
and compositing-mode combinations. Scenes which require only MIP or
DVR (within and between objects) can be rendered with the
usual approach and do
not require two pixel buffers. If pure MIP is used, voxels can be
sorted and grouped into RenderListEntries by value instead of by
spatial coordinate [42]. In this case, projecting the sorted voxels from
the lowest-valued to the highest-valued ones eliminates the need for a
maximum search. However, if MIP is combined with other compositing
techniques within the scene, back-to-front rendering, and thus sorting
by spatial coordinate, is required also for objects composited by MIP,
as they may interleave with other objects rendered with different techniques.
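The value-sorting MIP optimization can be illustrated as follows; the pixel indices and sample values are invented for the example:

```python
def mip_by_value_sort(voxels, image_size):
    # Pure-MIP optimization: project voxels sorted by value, lowest first.
    # Each write unconditionally overwrites earlier (dimmer) contributions,
    # so the final pixel holds the maximum without an explicit per-sample
    # comparison - the maximum search is absorbed into the sort.
    image = [0.0] * image_size
    for value, pixel in sorted(voxels):    # ascending by value
        image[pixel] = value               # unconditional overwrite
    return image
```

Note that this only works when the whole scene is MIP-composited; once voxel order must encode depth for blending with other objects, value-sorting is no longer applicable.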