VolRendYo!

Description

VolRendYo! is a program for visualizing volume data. It adds a depth-of-field effect to a slice-based direct volume renderer, following the approach proposed by Mathias Schott, Pascal Grosset, Tobias Martin, Charles Hansen, and Vincent Pegoraro. The program is meant to demonstrate that a depth-of-field effect helps users better perceive what is in front of what in volume renderings.

Downloads

Executable

Source (Visual C++ 2015)

Doxygen Documentation

Website

How to start

Edit "start.bat" to set your resolution, monitor refresh rate, full-screen state, and dataset index, then double-click it. Alternatively, start volrendyo.exe directly to use the default settings. At a resolution of 1024x768, an NVIDIA GeForce GTX 780 reaches approx. 20 fps.
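A minimal "start.bat" could look like the following sketch. Note that the parameter names and their order here are purely illustrative assumptions; check the batch file shipped with the download for the options volrendyo.exe actually accepts.

```bat
rem Hypothetical example -- the real parameter order may differ.
rem width, height, refresh rate, full-screen (0/1), dataset index
volrendyo.exe 1024 768 60 0 2
```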

If the program reports missing DLLs, install the Visual C++ 2015 Redistributable Packages and/or the DirectX End-User Runtimes.

How to build

  1. Open solution with Visual Studio 2015.
  2. Select the normal Release configuration. (Debug takes a while to preprocess the normals; other configurations are not set up.)
  3. Build the solution.
  4. Run it from Visual Studio or start it with the batch file in the release folder.
  5. Optionally, choose a volume data set via command-line parameter.

Controls

Steer the camera with WASD, Space, Ctrl, and the mouse; hold Shift for a speed boost.

Implementation details

Libraries

Volume rendering

We reuse the slice-based volume renderer from the previous project. It is based on GPU Gems Chapter 39, including the shadowing technique. Instead of sheep wool or clouds, it now renders downloaded volume data from industrial or medical CT scans.

Important code parts

The newly implemented parts for Visualization 2, as well as related but pre-existing classes, are listed in the Doxygen documentation.

Preprocessing

The program can load a number of pre-defined volume data sets. Only pre-defined ones, because some data sets have a header containing the resolution and bit depth while others lack any header. The program loads the binary data from the file into main memory and converts it into a format usable by the graphics card, i.e., it bit-shifts the usually 14 used bits of the 16-bit file samples down to 8 bits. It then calculates the gradient (normals) for each voxel and filters them with a simple box filter. The density and gradient values are uploaded to graphics memory as a 3D texture with four components. Finally, the program creates a scene graph consisting of a node for the volumetric object, a circling light source, and a movable camera.
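The two core preprocessing steps can be sketched as follows. This is a simplified illustration, not the actual source code: the function and type names are our own, and the gradient here uses plain central differences (the box filtering pass is omitted).

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Shift a raw 16-bit CT sample, of which typically only 14 bits are used,
// down to 8 bits for the graphics card.
inline uint8_t to8Bit(uint16_t raw14) {
    return static_cast<uint8_t>(raw14 >> 6);
}

struct Vec3 { float x, y, z; };

// Central-difference gradient at voxel (x, y, z); serves as the normal
// for shading. Coordinates are clamped at the volume borders.
Vec3 gradientAt(const std::vector<uint8_t>& vol,
                int dimX, int dimY, int dimZ,
                int x, int y, int z) {
    auto at = [&](int i, int j, int k) -> float {
        i = std::max(0, std::min(dimX - 1, i));
        j = std::max(0, std::min(dimY - 1, j));
        k = std::max(0, std::min(dimZ - 1, k));
        return static_cast<float>(
            vol[(static_cast<size_t>(k) * dimY + j) * dimX + i]);
    };
    return { (at(x + 1, y, z) - at(x - 1, y, z)) * 0.5f,
             (at(x, y + 1, z) - at(x, y - 1, z)) * 0.5f,
             (at(x, y, z + 1) - at(x, y, z - 1)) * 0.5f };
}
```

Density plus the three gradient components give the four channels of the 3D texture.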

Rendering

Our renderer implements the process described in the paper by Schott et al. For the slice-based rendering, we use quads as proxy geometry, rotated to face the camera. For each fragment of a slice, we sample the 3D texture to get the density and normal of the corresponding voxel. This information is the input to the transfer function, a 2D texture, whose result is the fragment's color. We maintain two stacks of slices: one rendered back-to-front and one rendered front-to-back. The first uses the over operator to blend its fragments with the colors behind; the second uses the under operator. The results of both stacks are blended and displayed.
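The two compositing operators mentioned above can be written down as follows, assuming premultiplied colors. This is a CPU-side sketch for illustration; in the renderer the same arithmetic runs as GPU blend states.

```cpp
struct RGBA { float r, g, b, a; };

// Over operator for the back-to-front stack: src is the new (nearer) slice
// fragment, dst the color accumulated behind it.
RGBA over(const RGBA& src, const RGBA& dst) {
    float t = 1.0f - src.a;
    return { src.r + t * dst.r, src.g + t * dst.g,
             src.b + t * dst.b, src.a + t * dst.a };
}

// Under operator for the front-to-back stack: acc is the color accumulated
// in front, src the new (farther) slice fragment.
RGBA under(const RGBA& acc, const RGBA& src) {
    float t = 1.0f - acc.a;
    return { acc.r + t * src.r, acc.g + t * src.g,
             acc.b + t * src.b, acc.a + t * src.a };
}
```

An opaque fragment composited over anything yields itself, while a fully transparent accumulator lets an under-composited fragment through unchanged.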

To achieve the depth-of-field effect, we don't just blend a slice's fragments with the fragments of the previous slice. Instead, we sample the previous slice multiple times within a circle of confusion. The radius of this circle is determined by the slice's distance to the focus slice and by the strength of the depth-of-field effect.
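A minimal sketch of this idea, assuming the simplest plausible model (radius growing linearly with the distance to the focus slice, and sample positions evenly spaced on the circle); the actual shader may choose the radius and sample pattern differently:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Circle-of-confusion radius for a slice: proportional to the distance from
// the focus slice, scaled by the depth-of-field strength (our assumption).
float cocRadius(float sliceDepth, float focusDepth, float dofStrength) {
    return std::fabs(sliceDepth - focusDepth) * dofStrength;
}

// n texture-space offsets evenly spaced on the circle of confusion; the
// previous slice is sampled at these offsets and the results averaged.
std::vector<std::pair<float, float>> cocSamples(float radius, int n) {
    std::vector<std::pair<float, float>> s;
    s.reserve(n);
    for (int i = 0; i < n; ++i) {
        float phi = 2.0f * 3.14159265f * static_cast<float>(i) / n;
        s.push_back({ radius * std::cos(phi), radius * std::sin(phi) });
    }
    return s;
}
```

At the focus slice the radius is zero, so those fragments stay sharp, and the blur increases for slices farther away.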

About the project

We created this demo during the Visualization 2 lecture of our studies at TU Wien. We reused parts of our engines from the real-time graphics exercise. The depth-of-field effect is based on the paper by Schott et al.

Credits