As mentioned before, various shading models can be applied at interactive frame rates to render the RenderList-based volume representation if a quantized representation of the gradient is stored as an attribute with every voxel. Gradients, which are usually given as three float coordinates (96 bit) or at least as byte coordinates (24 bit), are first quantized to 12-16 bit to obtain a compact representation. As the number of distinct gradient directions is rather low after quantization, a good distribution of the quantized directions over the unit sphere is necessary.
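As an illustration only, the following sketch quantizes a gradient to a 14-bit index on an equal-angle spherical grid; the function name and the bit layout are hypothetical, and such a grid does not distribute the directions evenly over the unit sphere, which is exactly why better distributions are preferred in practice:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Hypothetical quantization sketch: 7 bits for the polar angle,
    // 7 bits for the azimuth, i.e. a 14-bit direction index.
    // An equal-angle grid clusters directions near the poles, so it only
    // illustrates the principle, not the distribution actually used.
    std::uint16_t quantizeGradient(float gx, float gy, float gz)
    {
        const float pi = 3.14159265358979f;
        float len = std::sqrt(gx * gx + gy * gy + gz * gz);
        if (len == 0.0f)
            return 0;                                    // degenerate gradient, map to index 0
        float theta = std::acos(gz / len);               // polar angle in [0, pi]
        float phi   = std::atan2(gy, gx) + pi;           // azimuth in (0, 2*pi]
        int it = std::min(127, int(theta / pi * 128.0f));
        int ip = std::min(127, int(phi / (2.0f * pi) * 128.0f));
        return std::uint16_t((it << 7) | ip);
    }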
If parallel projection is performed and only directional light sources (located at infinity) are used, the evaluation of common lighting models like Phong shading does not depend on the position of the voxel, but only on the gradient vector, the viewing direction, and the light direction. For shading models where these assumptions hold, the quantized gradients can be used as indices into an array of precomputed shading values during rendering (figure 4.20). The table only has to be recomputed if one of the influencing factors (light or viewing direction) changes. A gradient dictionary table stores a non-quantized representation of each possible quantized gradient, which is required for the computation of the table content. To update the table, the specific lighting equation is evaluated for each gradient vector in the gradient dictionary and the result is stored in the corresponding entry of the look-up table. The value stored in the look-up table is the shaded voxel color. A more flexible approach, which stores an intensity value instead that can be used to modulate the voxel color, is described in chapter 6.
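A minimal sketch of such a table update is given below; the names (gradientDictionary, shadePhong, updateShadingTable) are purely hypothetical and the lighting term is a generic Phong intensity, not the exact equation of the actual implementation. The key point is that the number of lighting evaluations equals the number of dictionary entries and is independent of the volume size:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Generic Phong-style intensity for a directional light and parallel
    // projection: ambient + diffuse + specular (all vectors normalized).
    static float shadePhong(const Vec3& n, const Vec3& toLight, const Vec3& toViewer)
    {
        const float ka = 0.1f, kd = 0.6f, ks = 0.3f, shininess = 20.0f;
        float nDotL = dot(n, toLight);
        float diffuse = std::max(0.0f, nDotL);
        // Reflect the light direction about the gradient (normal) direction.
        Vec3 r{ 2.0f * nDotL * n.x - toLight.x,
                2.0f * nDotL * n.y - toLight.y,
                2.0f * nDotL * n.z - toLight.z };
        float specular = std::pow(std::max(0.0f, dot(r, toViewer)), shininess);
        return ka + kd * diffuse + ks * specular;
    }

    // Re-evaluate the lighting equation once per dictionary entry whenever
    // the light or viewing direction changes; rendering then only indexes
    // shadingTable with the quantized gradient stored per voxel.
    void updateShadingTable(const std::vector<Vec3>& gradientDictionary,
                            const Vec3& toLight, const Vec3& toViewer,
                            std::vector<float>& shadingTable)
    {
        shadingTable.resize(gradientDictionary.size());
        for (std::size_t i = 0; i < gradientDictionary.size(); ++i)
            shadingTable[i] = shadePhong(gradientDictionary[i], toLight, toViewer);
    }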
The limited number of evaluations required to update the look-up table makes it possible to apply even complex shading models without affecting interactivity. The simplest model used is the Phong shading model, with the intensity depending on the gradient direction, the viewing direction, and the light source position.
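In its standard form (given here for reference; it may differ in detail from the exact equation used in the implementation), the Phong intensity for a directional light is
\[
I = k_a + k_d\,(\vec{N}\cdot\vec{L}) + k_s\,(\vec{R}\cdot\vec{V})^{n},
\]
where \(\vec{N}\) is the normalized gradient, \(\vec{L}\) the light direction, \(\vec{V}\) the viewing direction, \(\vec{R}\) the reflection of \(\vec{L}\) about \(\vec{N}\), and \(k_a\), \(k_d\), \(k_s\), and \(n\) the material parameters.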
The look-up table based shading also makes it possible to implement various non-photorealistic shading methods, for example, contour enhancement [11,10]. This model assigns high intensity (and opacity) values to voxels whose gradients are nearly perpendicular to the viewing direction, while lower values are assigned to voxels with gradient vectors facing towards or away from the viewer.
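A commonly used formulation of this idea (stated here as a reference form rather than as the exact equation of the implementation) is
\[
I_{contour} = \left(1 - \left|\vec{N}\cdot\vec{V}\right|\right)^{k},
\]
where \(\vec{N}\) is the normalized gradient, \(\vec{V}\) the viewing direction, and the exponent \(k\) controls how sharply the intensity falls off away from the contours.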
As the pure contour enhancement method provides only a sketch-like representation of the objects, without much information on shape details, both approaches, Phong shading and contour enhancement, can be combined to obtain a color which depends on both models.
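One straightforward way to combine the two terms (again a generic form, not necessarily the exact weighting used here) is a weighted sum of the two intensities,
\[
I = \alpha\,I_{Phong} + (1-\alpha)\,I_{contour},
\]
where \(\alpha \in [0,1]\) controls the balance between the shaded appearance and the contour sketch. Since both terms depend only on the gradient, viewing, and light directions, the combined value can still be precomputed per dictionary entry and stored in the look-up table.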