Implementation
My task was to implement the Shear-Warp Algorithm as described in [1] and [2] as an extension for vuVolume (a tool for volume rendering). The task can be subdivided into the following steps:
  • Run-length encoding of the data slices
  • Fast classification of volume data using an octree
  • The perspective Shear-Warp Algorithm
  • Implementation of the above items on BCC grids
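The run-length encoding of the data slices can be sketched as follows: each scanline is stored as alternating counts of transparent and non-transparent voxels, so that runs of transparent voxels can be skipped during compositing. The function name, container types, and opacity threshold below are illustrative assumptions, not vuVolume's actual data structures.

```cpp
#include <cstdint>
#include <vector>

// Sketch of per-scanline run-length encoding for shear-warp:
// alternating counts of transparent and non-transparent voxels.
// Runs always start with a transparent count; a count saturating
// at 255 is followed by an empty run of the opposite type.
std::vector<uint8_t> encodeRuns(const std::vector<float>& opacity,
                                float threshold = 0.0f)
{
    std::vector<uint8_t> runs;
    bool transparent = true;   // first emitted count is a transparent run
    uint8_t count = 0;
    for (float a : opacity) {
        bool t = (a <= threshold);
        if (t != transparent || count == 255) {   // current run ends
            runs.push_back(count);
            if (t == transparent)                 // saturated: empty opposite run
                runs.push_back(0);
            transparent = t;
            count = 0;
        }
        ++count;
    }
    runs.push_back(count);
    return runs;
}
```

During rendering, a transparent count lets the compositing loop jump over that many voxels at once, which is where the speed-up of the shear-warp approach comes from.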
The User Interface
In the general settings, specular lighting, fast classification (using an octree), and hardware-supported (OpenGL) warping can be enabled. The user can also choose between orthogonal and perspective projection mode. In the perspective case, the distance between the eye and the image plane can be adjusted with a scrollbar. The volume can be rotated interactively by dragging the mouse over the graphical output, and exact values for the view and right vectors can be entered in the text fields. Last but not least, the GUI includes a button that opens a dialog for editing the transfer function (the transfer-function editor); it can also be invoked by double-clicking the graphical output.


User interface of the implementation

 
Using OpenGL for Warping
To speed up warping, the intermediate image can be used as an OpenGL texture. The warping is then done indirectly by OpenGL when the texture is mapped onto a plane, which is transformed into a position such that the intermediate image on it appears warped. This method speeds up the algorithm considerably because the warp is performed directly by the graphics hardware. The drawback is that at large zoom factors the produced image starts to vibrate; for this reason, a software warp is implemented as well.
Problems with BCC Grids
As mentioned before, the adaptation of the Shear-Warp Algorithm to BCC grids is straightforward. However, some details have to be taken into consideration. For a regular cartesian grid, the x-, y-, and z-coordinates of a voxel with index i can be obtained with a few divisions and modulo operations. With BCC grids this is similar, but a little more tricky. To understand how the correct positions of voxels in a BCC grid are determined, it is first necessary to know how BCC data sets are stored. The following figure shows how this is done: following the red lines, voxels are stored one by one in memory. The figure also shows seven voxels per row (i.e., per slice) and seven slices in total; the width of the volume is therefore twice as large as its depth. To obtain a data set with equal width, height, and depth, the data set must have twice as many slices as there are voxels along each "red line". Indeed, converting the regular fuel data set (size 64x64x64) yields a BCC data set with 90 slices of size 45x45 each.
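The index-to-coordinate computations for both grid types might look like this. The BCC storage scheme follows the description above, with odd slices shifted by half the in-slice spacing; the exact scaling relative to vuVolume is an assumption.

```cpp
struct Voxel { int x, y, z; };

// Regular cartesian grid: coordinates from a linear index via
// divisions and modulo operations.
Voxel regularCoords(int i, int w, int h)
{
    return { i % w, (i / w) % h, i / (w * h) };
}

// BCC grid stored slice by slice (e.g. 90 slices of 45x45 voxels for
// the converted 64^3 fuel data set): the same divisions yield the
// in-slice coordinates, but odd slices are offset by half a step.
// One in-slice step is taken as 2 units here, so the half-step
// offset is 1 unit (scaling is an illustrative assumption).
Voxel bccCoords(int i, int w, int h)
{
    int z   = i / (w * h);
    int off = z & 1;               // odd slices shifted by half a step
    return { 2 * (i % w) + off, 2 * ((i / w) % h) + off, z };
}
```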

When the data is viewed with a main viewing direction not orthogonal to the 90 slices, the situation is a little tricky. Especially with an odd number of slices, the following problem arises: if the data set is not viewed from the front, the slices (illustrated by blue lines) alternate in size. This problem can be removed by simply adding an empty slice to make the odd number even.
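The fix can be sketched as follows; the container type and function name are illustrative assumptions:

```cpp
#include <vector>

// With an odd number of BCC slices the scanlines alternate in size when
// the volume is viewed off-axis, so one fully transparent slice is
// appended to make the slice count even.
void padToEvenSliceCount(std::vector<std::vector<float>>& slices,
                         std::size_t sliceSize)
{
    if (slices.size() % 2 != 0)
        slices.emplace_back(sliceSize, 0.0f);   // empty (transparent) slice
}
```

Because the added slice is entirely transparent, it is skipped by the run-length encoding and has no measurable cost during rendering.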


A comparison of rendered images from BCC data sets with images from regular cartesian data sets showed that the regular cartesian data sets yield better quality than the BCC grids. However, all BCC data sets used were interpolations of the regular cartesian data sets, so the lower quality of the BCC data sets might be a result of this additional interpolation. For a fair comparison, data sets actually sampled on a BCC grid would be necessary.
