Introduction

Computer graphics is one of the newest visual media. It has become established as an important tool in design, entertainment, advertising, fine arts, and many other applications where images are needed. One of the many fields of computer graphics is image synthesis, often called rendering. Photorealistic rendering turns the rules of geometry and physics into pictures that can hardly be distinguished from photographs. Local illumination methods render images while ignoring the influence of objects on the illumination of other objects in the scene; consequently, shadows, penumbras, specular reflections and refractions, diffuse interreflections, etc., cannot be taken into account. Global illumination methods, on the other hand, of which ray tracing and radiosity are the two most popular, try to model the propagation of light through an environment. Such methods take much more time (local illumination methods, in contrast, are implemented in modern graphics hardware), but, as stated before, the results can hardly be distinguished from photographs.

Every rendering process consists of two steps. The first is computing the luminance values, and the second is mapping the computed values to values suitable for common display devices. A great deal of research deals with the first step, while the second is surprisingly often neglected, although it is far from trivial. In fact, only a few authors deal with this problem, in contrast to the hundreds of researchers improving the first rendering step. Our work is primarily concerned with this final step of the rendering process. It is assumed that the image has been rendered and that the floating point values of the pixels' color components are known. We will not deal with the methods used to compute these values. The floating point image will be called the "raw image".

In the ideal case the raw image would be mapped to the display device so that the displayed image creates the same sensation in the viewer as would have been experienced by seeing the real environment. Unfortunately, there are many obstacles to realizing this ideal: display device nonlinearities, limited color gamut, limited display contrast, changing lighting conditions in the viewing environment, the rules of human vision, the basic limitation of representing 3D reality as a 2D projection, etc. Some of these obstacles will be explained later.

Various mapping methods are described in this work. Some take the above mentioned problems, or at least some of them, into account, while other, simpler methods do not. Some familiarity with color science, radiometry and photometry is necessary to understand this work; chapter 2 therefore covers color science basics, radiometry and photometry, as well as some aspects of human vision.

Chapter 3 describes various display devices: the CRT, as the display device most used in computer graphics, slides, and some printers. Data measured by the authors are also given in this chapter.

In chapter 4 linear scale factor methods are introduced. Probably the most widely used mapping applies a single scale factor which maps the average luminance to an input value of 0.5, assuming that the display device's input range is [0,1] and that the device has a linear response. Unfortunately, such a scale factor cannot reproduce the original atmosphere of the scene. Because of the linearity of the integral operator in the rendering equation [ArKi90], it displays a scene lit by a very weak light source and the same scene lit by a very strong light source as the same image.
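
As an illustration, a minimal sketch of such an average value mapping could look as follows (the NumPy formulation and the Rec. 709 luminance weights are our choices for the sketch, not part of any cited method):

    import numpy as np

    def average_value_mapping(raw, target=0.5):
        """Scale a floating point raw image so that its average luminance
        maps to `target` on a linear display with input range [0, 1]."""
        # Approximate luminance from linear RGB (Rec. 709 weights).
        luminance = (0.2126 * raw[..., 0] + 0.7152 * raw[..., 1]
                     + 0.0722 * raw[..., 2])
        scale = target / luminance.mean()
        # Values pushed above 1 are simply clipped by the display.
        return np.clip(raw * scale, 0.0, 1.0)

Note that multiplying every raw value by a constant leaves the output of this mapping unchanged, which is precisely the weakness described above.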

An interactive mapping technique introduced by Matković and Neumann in [MaNe96] makes it possible to display images with the proper atmosphere if this is known. The method uses two parameters, called contrast and aperture, and maps the raw image according to subjective user settings. This interactive calibration mapping method is one of the contributions of this thesis. At the end of the fourth chapter a contrast based scale factor proposed by Greg Ward [Ward94] is described. Ward's mapping makes differences that are just visible in the real world just visible in the displayed image as well. If visibility analysis is crucial (e.g. in the design of emergency lighting), this can be the right mapping method. Improvements of this method, introduced by Ferwerda et al. [FPSG96] and Ward et al. [LaRP97], are described in the next chapter.
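
For reference, a sketch of Ward's scale factor as given in [Ward94] (world luminances in cd/m^2; the 100 cd/m^2 display maximum is an assumed typical CRT value, not part of the formula):

    def ward_scale_factor(l_wa, ld_max=100.0):
        """Contrast based scale factor after [Ward94].
        l_wa   -- adaptation luminance of the scene, in cd/m^2
        ld_max -- maximum luminance of the display, in cd/m^2
        """
        return ((1.219 + (ld_max / 2.0) ** 0.4)
                / (1.219 + l_wa ** 0.4)) ** 2.5

    # A world luminance l_w is displayed as l_d = m * l_w; dividing l_d
    # by ld_max gives the frame buffer value in [0, 1].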

In chapter 5 non-linear scale factors are introduced. A mapping technique proposed by Schlick [Schl94] is described first. Schlick's method is essentially a computational improvement of the logarithmic mapping based on Weber's law. It is an automatic method that yields good results if the overall raw image contrast is not too high. Next, a non-linear mapping technique suggested by Ferschin et al. in [FeTP94] is described; it suppresses the influence of a few very bright pixels that would otherwise distort the average. If the luminances in the raw image are computed in absolute units, the appropriate atmosphere can be reproduced by preserving the original brightness with Tumblin and Rushmeier's mapping technique. This method, introduced in [TuRu93] and [TuRu91], is still one of the most comprehensive solutions to the raw image mapping problem, although unfortunately it is formulated only for gray scale pictures. The method is described in section 5.3. Section 5.4 describes a model of visual adaptation introduced by Ferwerda et al. [FPSG96]. The model is based on Ward's model and takes the rules of human adaptation into account; even temporal effects well known from real life (e.g. the inability to see when entering a cinema until the eyes have adapted) can be simulated in computer graphics using this mapping method. Chapter 5 finishes with an overview of the visibility matching tone operator [LaRP97], a further improvement of Ward's original operator.
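
Of the operators mentioned in this chapter, Schlick's rational mapping is the simplest to sketch (p >= 1 is a brightness parameter; the value 50 below is an arbitrary illustration, and `raw` is assumed to be a NumPy array of luminances or color components):

    def schlick_mapping(raw, p=50.0):
        """First degree rational mapping after [Schl94]:
        F(L) = p * L / (p * L - L + L_max)."""
        l_max = raw.max()
        return p * raw / (p * raw - raw + l_max)

The function maps 0 to 0 and the maximum raw value to 1, and for large p it boosts dark regions much as a logarithmic mapping does, while avoiding the logarithm itself.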

Chapters 6 and 7 describe the main contribution of this thesis. The methods described in these chapters were introduced together with László Neumann [NeMP96], [NMNP97]. The family of methods called minimum information loss mapping is described in chapter 6. The main idea is to find the clipping interval such that a minimum amount of information is lost, thereby preserving the original contrast of all correctly displayed pixels. Two variants are described: in the first, a color component is treated as the essential information, and in the second, the pixel is. The second variant is called minimum area loss. The method works especially well for backlit scenes, which are often displayed too dark when average value mapping is used. The methods do not require knowledge of absolute units. Another possibility is to limit the allowed information loss and find the smallest contrast interval which still satisfies the limited error condition; in this case, of course, the original contrast is not always preserved.
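
A deliberately naive sketch of the second (minimum area loss) variant, counting whole pixels as the unit of information, may clarify the idea; the exhaustive search over candidate interval bounds is our simplification, and the thesis discusses the variants and their efficient computation:

    import numpy as np

    def minimum_area_loss_interval(luminance, display_contrast=100.0):
        """Find the clipping interval [low, low * display_contrast] that
        loses the fewest pixels; pixels inside the interval keep their
        original contrast, the rest are clipped to black or white."""
        best_low, best_loss = None, luminance.size + 1
        for low in np.unique(luminance):
            high = low * display_contrast
            loss = np.count_nonzero((luminance < low) | (luminance > high))
            if loss < best_loss:
                best_low, best_loss = low, loss
        return best_low, best_low * display_contrast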

Chapter 7 describes incident light metering in computer graphics. Incident light metering is a well known method in professional photography and the movie industry; in fact, it was used by portrait photographers at the beginning of the photographic era. Although it is not practical for amateur photographers (the light must be measured at the subject's position, not at the camera), it can readily be implemented in computer graphics. It overcomes the problem of average mapping, where a very bright scene (e.g. a snow covered mountain) and a very dark one (e.g. a heap of coal) are both displayed as medium gray (or close to it), making the bright scene too dark and the dark scene too bright. With incident light metering, raw images are mapped correctly, and absolute units need not be known. We recommend this method when absolute units are not known (which is most often the case, due to the difficulty of obtaining appropriate data for light sources and materials) and the scene setting is unusual (e.g. very bright or very dark scenes, scenes with back light, etc.). Note that a bright or dark scene here does not mean a well or poorly lit scene, but rather a scene with high or low object reflectances. This is in fact the only method that reproduces the selected colors even for scenes with very low or very high average reflectance. The tone mapping part of this thesis ends with chapter 7.
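
The principle is easy to sketch: the scale factor is derived from the light falling on the subject, not from the reflected raw values. In the following sketch, the 18% reference gray and the 0.5 mid level are our assumptions; chapter 7 gives the actual procedure:

    import math

    def incident_light_scale(incident_illuminance, key_reflectance=0.18):
        """Derive the mapping scale from the measured incident light.
        A Lambertian reference gray of the given reflectance under
        illuminance E has luminance L = reflectance * E / pi; it is
        mapped to the middle of the [0, 1] display range."""
        key_luminance = key_reflectance * incident_illuminance / math.pi
        return 0.5 / key_luminance

Because the scale depends only on the incident light, a snow covered mountain stays bright and a heap of coal stays dark, instead of both being pulled toward medium gray.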

The next chapter, on perception based color image difference, presents a new algorithm, developed together with László Neumann, for computing the difference between two images. A good image metric is often needed in computer graphics: all progressive rendering methods must somehow check convergence, lossy compression algorithms must be evaluated, the resulting images of various rendering or tone mapping techniques are compared, etc. The metric most often used in computer graphics is the mean squared error. Unfortunately it does not correspond to human perception: under this metric, images that look similar can have a larger difference than obviously different images.
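
For reference, the mean squared error is simply the mean of the squared per component differences, treating all pixels alike regardless of their perceptual importance (a minimal NumPy sketch):

    import numpy as np

    def mean_squared_error(img_a, img_b):
        """The conventional, perceptually naive image metric."""
        diff = np.asarray(img_a, dtype=float) - np.asarray(img_b, dtype=float)
        return float(np.mean(diff ** 2))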

Two recent papers, by Rushmeier et al. [RWPSR95] and Gaddipati et al. [GaMY97], deal with perception based image metrics. They compute the image distance in Fourier or wavelet space, which makes them computationally expensive and unintuitive. Furthermore, color is not handled entirely correctly in either approach.

We introduce a new method that operates in the original image space and handles color more accurately.

This thesis ends with results and conclusion chapters.

