The following API documentation and the IDVR application it describes were developed as part of a bachelor's thesis (Bakkalaureatsarbeit) in the Medieninformatik bachelor's programme at the Vienna University of Technology between August 2004 and March 2005. All components of the application and the accompanying documentation were created solely by the following two students under the supervision of the Institute of Computer Graphics:
This API documentation covers all relevant classes, together with their member variables and methods, that make up the program structure of the IDVR application. Our application embeds Importance-Driven Volume Rendering (IDVR) into a standard ray casting process, which forms the core of our render structure. Importance-Driven Volume Rendering is the capability to visualize specific volume objects regardless of any occlusion along the current view direction. An application based on IDVR can therefore guarantee full visibility of selected volume objects and display graphical information that would be at least partly hidden by standard volume rendering methods. An importance hierarchy over all contained volume objects makes it possible to favor specific volume objects during the rendering process; a weighted composition then ensures that the most important volume objects are visualized in any case. IDVR is an enhancement of standard volume rendering that integrates easily into the standard rendering pipeline first introduced by Marc Levoy.

Additionally, Two-Level Volume Rendering (2lVR), an essential rendering method for IDVR, has been integrated into this application. Instead of treating all sample points in the same manner during ray casting, 2lVR distinguishes between the corresponding volume objects. Every voxel of the volume data set is classified into its volume object by an explicit identity number. Each volume object can therefore carry separate rendering properties, i.e. shading model and composition method, which have to be considered during the rendering process. Because of 2lVR we implemented two separate composition passes in the standard rendering pipeline, namely the local and the global composition.
Local composition depends only on the composition method of the corresponding volume object (so several separate local composition methods exist), whereas the global composition is valid for the entire data set and is applied after the local pass. The following figure illustrates our rendering pipeline with all essential rendering steps needed to calculate a correct graphical representation of the current volume data set.
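The interplay of the two passes can be sketched roughly as follows. The sample record, the choice of MIP as every object's local method, and the per-segment opacity policy are all illustrative assumptions to keep the sketch short; they are not the application's actual types or defaults.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sample record: one value per resample location along a ray.
struct Sample {
    float value;    // shaded intensity in [0, 1]
    float opacity;  // opacity from the transfer function
    int   objectId; // volume object this sample belongs to
};

// Local pass: composite one contiguous run of samples sharing an object id.
// Here every object uses MIP locally, purely to keep the sketch short.
static float compositeLocalMIP(const std::vector<Sample>& s,
                               std::size_t begin, std::size_t end) {
    float m = 0.0f;
    for (std::size_t i = begin; i < end; ++i)
        m = std::max(m, s[i].value);
    return m;
}

// Global pass: front-to-back alpha blending of the per-segment results.
float compositeTwoLevel(const std::vector<Sample>& samples) {
    float color = 0.0f, alpha = 0.0f;
    std::size_t i = 0;
    while (i < samples.size() && alpha < 0.99f) {
        std::size_t j = i;
        while (j < samples.size() && samples[j].objectId == samples[i].objectId)
            ++j;                                  // end of this object's segment
        float local = compositeLocalMIP(samples, i, j);
        float segAlpha = samples[i].opacity;      // crude: opacity of first sample
        color += (1.0f - alpha) * segAlpha * local;
        alpha += (1.0f - alpha) * segAlpha;
        i = j;
    }
    return color;
}
```

The key point is that samples are first reduced per object segment (local pass) and only the segment results are blended along the whole ray (global pass).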
IDVR rendering pipeline
The classification step assigns a valid opacity value and interpolates importance, gradient, and object identity number at the corresponding sample point. The sample point is represented by the class IDVR.VolumeRenderer.RenderPrimitives.RaySample, which can be linked to an instance of the class IDVR.VolumeRenderer.RenderPrimitives.Ray. That class holds all data of a specific ray during ray casting (see class IDVR.VolumeRenderer.SWVolumeRenderer.RenderMachineSW). The opacity assignment is based on a one-dimensional transfer function (linear or user-defined) and is implemented in the classes
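A linear opacity transfer function of the kind mentioned above can be sketched as a simple windowed ramp over the 12-bit density range; the function name and the window parameters are illustrative, not the application's actual API.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of a linear opacity transfer function over the 12-bit density
// range 0..4095. Densities below `lo` map to fully transparent, densities
// above `hi` to fully opaque, with a linear ramp in between.
float linearOpacity(std::uint16_t density, std::uint16_t lo, std::uint16_t hi) {
    if (density <= lo) return 0.0f;   // fully transparent below the window
    if (density >= hi) return 1.0f;   // fully opaque above it
    return float(density - lo) / float(hi - lo);  // linear ramp
}
```

A user-defined transfer function would replace the ramp with a lookup table indexed by density.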
The shading step calculates the RGB color values for the sample point; four different shading models are selectable by the user. These shading models are LMIP, Phong Illumination, Contour, and Tone Shading, and they are integrated into the corresponding classes
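As an illustration of one of the named models, the classic Phong illumination term at a sample point can be sketched as follows; the vector type, the material constants, and the single-channel return value are simplifying assumptions (the application computes RGB colors).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Phong illumination: ambient + diffuse + specular.
// All direction vectors are assumed normalized and pointing away from the
// surface point (toward the light / toward the viewer).
float phong(Vec3 n, Vec3 l, Vec3 v,
            float ka, float kd, float ks, float shininess) {
    float ndotl = dot(n, l);
    float diff = std::max(0.0f, ndotl);
    // Reflect the light direction about the normal: r = 2(n.l)n - l
    Vec3 r = { 2 * ndotl * n.x - l.x,
               2 * ndotl * n.y - l.y,
               2 * ndotl * n.z - l.z };
    float spec = (diff > 0.0f)
        ? std::pow(std::max(0.0f, dot(r, v)), shininess)
        : 0.0f;  // no specular highlight on back-facing samples
    return ka + kd * diff + ks * spec;
}
```

In a volume renderer the normal is taken from the interpolated density gradient at the sample point.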
The two selectable composition methods, Direct Volume Rendering (DVR) and LMIP Composition, can each be used as the local and/or the global composition pass. These two composition methods are implemented in the classes
We have not implemented an explicit class representation of 2lVR; we simply use separate instances of these two composition classes to realize either the local pass (a separate call of the corresponding composition method) or the global pass of 2lVR. Finally, the IDVR enhancements are represented by the classes
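The LMIP composition method named above can be sketched by its common definition: along the ray, return the first local maximum that exceeds a user threshold, falling back to the global maximum if none qualifies. The exact threshold semantics in the application's classes may differ from this sketch.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of LMIP (Local Maximum Intensity Projection) composition:
// return the first local maximum along the ray that exceeds `threshold`;
// if no sample qualifies, fall back to the global maximum (plain MIP).
float compositeLMIP(const std::vector<float>& ray, float threshold) {
    float globalMax = 0.0f;
    for (std::size_t i = 0; i < ray.size(); ++i) {
        if (ray[i] > globalMax) globalMax = ray[i];
        bool falling = (i + 1 < ray.size()) && ray[i + 1] < ray[i];
        if (ray[i] >= threshold && falling)
            return ray[i];   // first local maximum above the threshold
    }
    return globalMax;        // no qualifying local maximum found
}
```

Compared to plain MIP, LMIP preserves depth cues because nearer structures can win over brighter but more distant ones.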
The class IDVRColAndOpModulatorSW calculates the footprints of the included volume objects and stores this information in instances of the class IDVR.VolumeRenderer.RenderPrimitives.Footprint. The actual opacity modulation based on Maximum Importance Projection (MImP) is fully implemented in the class IDVRMImPCompositingModelSW.
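A very crude sketch of the cut-away idea behind MImP: for rays inside the footprint of the most important object, samples lying in front of that object are suppressed so it stays fully visible. The sample layout and the "zero the opacity" policy are assumptions for illustration; the application modulates opacity per footprint rather than simply zeroing it.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical per-sample record along one ray inside an object footprint.
struct MImPSample { float opacity; int importance; };

// Suppress all samples in front of the first sample that belongs to the
// most important object, so that object is guaranteed to be visible.
void applyCutAway(std::vector<MImPSample>& ray, int maxImportance) {
    std::size_t first = ray.size();
    for (std::size_t i = 0; i < ray.size(); ++i)
        if (ray[i].importance == maxImportance) { first = i; break; }
    for (std::size_t i = 0; i < first; ++i)
        ray[i].opacity = 0.0f;   // cut away occluding, less important samples
}
```

Samples behind the important object are left untouched, so the surrounding context remains visible there.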
We used the Microsoft Visual Studio .NET 2003 development environment with the .NET Framework 1.1, which is absolutely necessary to correctly compile and execute this application. All classes are programmed in Microsoft Visual C++ with Managed Extensions for C++, and the classes are therefore defined as either __gc or __value class types. The main advantage of Managed Extensions for C++ that we actually use is the garbage collector inherited from the .NET Framework, which automatically releases the application's storage of unneeded class instances and other data. Additionally, we used the SandBar toolkit, which adds advanced GUI components to the standard .NET GUI component library System.Windows.Forms. The CsGL library is also integrated into our application to provide simple access to OpenGL 1.0 for output of the rendered image data. In particular, we use OpenGL texture objects for efficient output to the screen and fast trilinear sub-sampling in OpenGL's rasterization step. Size and resolution changes of already rendered images can thus be performed very efficiently.
Two file formats provide the data of the actual volume data set and the object identity classification to the application. The actual voxel data has to be loaded from .dat files, and the object membership classification is represented by .mask files. After the loading process has finished successfully, the read data is stored in an instance of the class IDVR.Volume.DataStructures.VolumeData, which contains all attributes and methods needed to access the volume data easily. For further details of the loading process and its corresponding class structure, please see the class IDVR.Volume.DataLoader. The following description gives an overview of the file structure of these two main data files:
Contains a file header with general information about the volume dimensions, followed directly by the actual voxel data (voxel densities) as one contiguous sequence.
The header consists of 3 x 16 bits representing the voxel count of each dimension x, y, and z. The actual voxel data follows directly after the header, each voxel also represented by 16 bits. Note that only 12 bits (from the LSB up to bit 11) are actually used to store the voxel density, so the density range is 0-4095. The remaining 4 bits (from bit 12 to the MSB) are reserved for the optional identity number of the corresponding voxel. Normally these bits are unused (filled with zeros), because .mask files are used explicitly to define the identity numbers of volume objects.
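The bit layout described above can be unpacked with two masks; this sketch assumes the 16-bit word has already been read with the correct byte order, which the file format description does not specify.

```cpp
#include <cassert>
#include <cstdint>

// Unpack one 16-bit voxel word: bits 0-11 hold the density (0-4095),
// bits 12-15 the optional object identity number.
std::uint16_t voxelDensity(std::uint16_t word)  { return word & 0x0FFF; }
std::uint8_t  voxelObjectId(std::uint16_t word) { return (word >> 12) & 0x0F; }
```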
Contains a single bit stream whose length equals the voxel count of the corresponding data set; each .mask file describes one specific volume object. Each bit refers to the corresponding voxel of the data set and has either the value 1 (the voxel is part of the volume object) or 0 (it is not).
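Testing a single voxel's membership then reduces to one bit lookup. The LSB-first bit order within each byte is an assumption of this sketch; the actual reader in IDVR.Volume.DataLoader defines the real order.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Test whether voxel `voxelIndex` belongs to the object described by a
// .mask bit stream loaded into `mask` (one byte = eight voxels,
// LSB-first within each byte by assumption).
bool voxelInObject(const std::vector<std::uint8_t>& mask,
                   std::size_t voxelIndex) {
    return (mask[voxelIndex / 8] >> (voxelIndex % 8)) & 1;
}
```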
Please note the following reading order for both data file formats during the reading process (see also the class IDVR.Volume.DataLoader):
for (currentZ = 0; currentZ < dimensionZ; currentZ++)
    for (currentY = 0; currentY < dimensionY; currentY++)
        for (currentX = 0; currentX < dimensionX; currentX++)
            loadVoxelData(currentX, currentY, currentZ);
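This reading order means x varies fastest and z slowest, so the linear position of a voxel within the data stream follows directly; the function name is illustrative.

```cpp
#include <cassert>
#include <cstddef>

// Linear offset of voxel (x, y, z) in the file's voxel sequence, given
// the x-fastest reading order shown above (x innermost, then y, then z).
std::size_t voxelOffset(std::size_t x, std::size_t y, std::size_t z,
                        std::size_t dimX, std::size_t dimY) {
    return x + dimX * (y + dimY * z);
}
```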
Finally, we want to note that this API documentation was generated with Doxygen v1.4.2 and should give you detailed information on all included classes, their member variables and methods, and the overall structure of the application in relation to the IDVR-enhanced rendering pipeline (see figure above).