Compression algorithms for volumetric data can be classified according to various criteria. Compression may be lossy or lossless; the method may use a general-purpose compression algorithm or an approach that exploits specific characteristics of volumes. Compression may be based on hierarchical approaches and thus be suitable for progressive rendering, where an approximation can already be displayed when only part of the data is available. Some techniques allow direct rendering of the compressed data; others require decompression first.
The efficiency of compression, of course, depends on the characteristics of the data. In the following, compression rates are given as examples that are typical for data sets from medicine. Lossless compression provides significantly lower compression rates (around 2:1) than lossy compression, which usually allows the user to choose the desired trade-off between compression ratio and quality degradation. With lossy compression, acceptable rendering quality can be obtained at factors of 5:1 to 50:1. Despite the superior compression rates of lossy compression, many applications, such as the diagnosis of medical data, prohibit changes to the volumetric data and thus require lossless methods.
Compressing volume data with general-purpose tools, like zip [18], exploits coherence in only one of the three dimensions of the data. Fowler and Yagel [17] presented a technique that predicts each data value from neighboring voxels in all three dimensions and stores just a Huffman-encoded prediction error. Although coherence is exploited better than with zip, the compression ratios are only marginally better.
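To illustrate the principle, the following sketch computes such prediction residuals with a simple causal averaging predictor; the predictor and all names are illustrative assumptions, not Fowler and Yagel's exact scheme. The residual field, being strongly peaked around zero, is what would be handed to the Huffman coder.

```python
import numpy as np

def prediction_residuals(volume):
    # Predict each voxel as the average of its causal neighbours along
    # x, y and z; keep only the prediction error. Because the scan order
    # is fixed, a decoder can invert this exactly (lossless).
    v = volume.astype(np.int32)
    total = np.zeros_like(v)
    count = np.zeros_like(v)
    total[1:, :, :] += v[:-1, :, :]; count[1:, :, :] += 1
    total[:, 1:, :] += v[:, :-1, :]; count[:, 1:, :] += 1
    total[:, :, 1:] += v[:, :, :-1]; count[:, :, 1:] += 1
    prediction = total // np.maximum(count, 1)
    return v - prediction  # peaked around zero; entropy-code this

volume = np.random.default_rng(0).integers(0, 256, size=(16, 16, 16))
residuals = prediction_residuals(volume)
```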
Ning and Hesselink [46] presented a lossy compression method based on vector quantization. They group neighboring voxels into so-called bricks. The attributes of the voxels within a brick (data value, gradient, ...) form a vector, which is then quantized by mapping it to the closest representative in a codebook. The compression rates depend on the size of the codebook and are typically around 5:1 if acceptable quality is to be preserved. Rendering can be performed without decompression by projecting precomputed templates of the codebook entries.
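A minimal sketch of this kind of brick-based vector quantization is given below, assuming a plain k-means codebook trained on the data values alone; gradients and the projection templates of [46] are omitted, and all parameters are illustrative.

```python
import numpy as np

def vq_compress(volume, b=2, k=64, iters=10):
    # Cut the volume into b^3-voxel bricks, flatten each brick into a
    # vector, train a codebook with plain k-means, and store one
    # codebook index per brick.
    x, y, z = (s - s % b for s in volume.shape)
    vecs = (volume[:x, :y, :z]
            .reshape(x // b, b, y // b, b, z // b, b)
            .transpose(0, 2, 4, 1, 3, 5)
            .reshape(-1, b ** 3)
            .astype(np.float64))
    rng = np.random.default_rng(0)
    codebook = vecs[rng.choice(len(vecs), size=k, replace=False)]
    for _ in range(iters):
        dist = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = dist.argmin(axis=1)
        for j in range(k):
            members = vecs[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return idx.astype(np.uint16), codebook  # indices + codebook

volume = np.random.default_rng(1).integers(0, 256, size=(32, 32, 32))
indices, codebook = vq_compress(volume)
```

The compressed representation consists only of one small index per brick plus the codebook, which is why the rate is governed by the codebook size.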
An approach similar to the methods used in JPEG compression [57] has been presented by Chiueh and others [7]. The volume is subdivided into bricks, which are then transformed into the frequency domain using a discrete Fourier transform (JPEG, in contrast, uses the discrete cosine transform). The coefficients in the frequency domain are quantized, and the results are entropy encoded. For compression factors close to 30, the rendering results still exhibit acceptable image quality. Rendering can be performed without prior transformation back into the spatial domain: within the bricks, frequency domain rendering [30] is performed. If summation rendering (X-ray-like attenuation) is used to combine the bricks, the difference from rendering the uncompressed volume is hardly noticeable; other compositing techniques between the bricks lead to visible artefacts.
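The sketch below imitates the transform and quantization stages for a single brick, assuming a simple magnitude threshold in place of the actual quantizer and entropy coder of [7]; frequency domain rendering itself is not shown.

```python
import numpy as np

def compress_brick(brick, keep=0.05):
    # Transform one brick to the frequency domain and zero all but the
    # strongest coefficients -- a crude stand-in for the quantization
    # and entropy-coding stages; the sparse result compresses well.
    coeffs = np.fft.fftn(brick)
    mags = np.abs(coeffs).ravel()
    threshold = np.sort(mags)[int((1 - keep) * mags.size)]
    coeffs[np.abs(coeffs) < threshold] = 0
    return coeffs

def decompress_brick(coeffs):
    # Inverse DFT recovers an approximation of the original brick.
    return np.fft.ifftn(coeffs).real

brick = np.random.default_rng(2).random((8, 8, 8))
approx = decompress_brick(compress_brick(brick))
```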
Lippert and others [32] presented a volume compression technique based on wavelet decomposition, which is well suited for progressive rendering and for transmission over networks. The coefficients of the wavelet representation are quantized and stored in a way that allows a preview of the volume to be reconstructed and rendered even if only part of the data is available. In general, hierarchical (for example, wavelet-based) approaches are the most flexible form of volume compression. On the one hand, sufficiently accurate coefficient information can be stored to allow lossless reconstruction of the volume; the compression rates achieved in this case are comparable to those of other lossless techniques. On the other hand, a proper arrangement of the coefficient information in the compressed data file or stream allows a volume to be rendered approximately from just a small fraction of the whole data.
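As an illustration, the following sketch computes a multi-level 3D Haar decomposition, the simplest wavelet choice; the coefficient layout and parameters are assumptions for demonstration and do not reproduce the scheme of [32].

```python
import numpy as np

def haar3d(volume, levels=3):
    # Multi-level separable Haar transform: at every level, each axis of
    # the current low-pass block is split into averages and differences.
    # The coarse average ends up in the corner block; detail coefficients
    # can be quantized or transmitted later, enabling progressive
    # reconstruction from a prefix of the data stream.
    v = volume.astype(np.float64)
    n = list(v.shape)  # dimensions must be divisible by 2**levels
    for _ in range(levels):
        sub = v[:n[0], :n[1], :n[2]].copy()
        for axis in range(3):
            s = np.moveaxis(sub, axis, 0)
            avg = (s[0::2] + s[1::2]) / 2
            det = (s[0::2] - s[1::2]) / 2
            sub = np.moveaxis(np.concatenate([avg, det]), 0, axis)
        v[:n[0], :n[1], :n[2]] = sub
        n = [d // 2 for d in n]
    return v  # a coarse preview needs only the corner block

coeffs = haar3d(np.random.default_rng(3).random((32, 32, 32)))
```

Ordering the stream so that the coarse corner block comes first is what makes a usable preview possible from a small fraction of the data.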