C. Brandorff, P. Rautek (1), W. Weninger (2), E. Gröller (1)
"ONTOVIS" – Illumination Correction of Histological Cut Imagery for Ontology Data
The Center for Anatomy and Cell Biology (2), a department of the Medical University of Vienna, developed a technique to generate series of exactly aligned images of biomedical specimens. This technique is used to study the impact of different genes during morphogenesis. However, the technique also introduces artefacts which make it hard to use automatic or semi-automatic segmentation algorithms. The topic of this internship was to reduce these artefacts in the images. In this work we concentrated on the artefact resulting from uneven illumination.

Figure 1: A specimen embedded in a polymer block.
The specimens are subjected to immunohistochemistry or whole-mount in situ hybridization in order to specifically stain proteins or sites of gene expression. Afterwards the tissue is embedded in a polymer-based medium to generate a small, hard block for slicing. An example of a block containing a specimen can be seen in Figure 1. The block surface is then photographed from the front with a digital camera using two separate filters: one to capture the morphology (morpho image), the other to capture the signal generated by the marker (signal image). After these images are taken, a thin slice is cut from the front. The process of taking photographs and slicing is repeated until it is stopped manually or until the object of interest is completely cut. Since the thickness of the slices is on the order of micrometers, a block containing a chicken embryo typically yields about 2000 slices, and the image acquisition takes several hours. The data generated with this technique is on the order of several gigabytes: two images per slice, each 2560 × 1920 pixels stored with 8-bit precision, amount to approximately 18 GiB for 2000 slices.
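As a back-of-the-envelope check, the raw storage requirement follows directly from these acquisition parameters (assuming uncompressed 8-bit storage):

\[
2000~\text{slices} \times 2~\text{images} \times 2560 \times 1920~\text{bytes} \approx 1.97 \times 10^{10}~\text{bytes} \approx 18~\text{GiB}.
\]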

Figure 2: A morpho image of a chicken embryo.
Figure 2 shows a morpho image and Figure 3 the corresponding signal image of a chicken embryo. As stated above, there are several artefacts in these images. In the morpho image of Figure 2 the uneven illumination across the image can be seen. Figure 4 shows the slicing artefacts generated by the blade. Since the images are always taken from the front, the structures below the photographed slice shine through. This effect results in the shadow artefacts shown in Figure 5. Figure 6 shows remaining blood that could not be removed before the tissue was embedded into the block. All these artefacts make the segmentation of the data very hard, and standard automatic segmentation techniques fail completely. Therefore most of the segmentation tasks are currently carried out manually, which is very time-consuming.

Figure 3: A signal image of a chicken embryo.

Figure 4: Slicing artefacts caused by the blade.

Figure 5: Shadow artefacts caused by structures below the block surface shining through.

Figure 6: Remaining blood which could not be removed before the embedding.
In this work we concentrated on the illumination correction of the images. An easy way to correct the illumination is to capture a blankfield (an image of a block without an embedded object) and to use this blankfield to correct the illumination. However, it is not possible to take a blankfield for every block. Therefore we first tried several standard image-processing techniques such as the top-hat filter. Unfortunately the results were not satisfactory, and an alternative approach had to be implemented. The novel approach is to interpret the image pixels as points in three-dimensional space, using the brightness as z-value.
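As an illustration of the blankfield idea, the following is a minimal sketch of a division-based flat-field correction, assuming 8-bit grayscale buffers; the function name and the normalization by the mean blankfield brightness are our assumptions, not details of the tool:

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// Flat-field correction with a blankfield: each pixel is divided by the
// corresponding blankfield pixel and rescaled by the mean blankfield
// brightness, so that evenly lit regions keep their original gray level.
std::vector<std::uint8_t> correctWithBlankfield(
    const std::vector<std::uint8_t>& image,
    const std::vector<std::uint8_t>& blankfield)
{
    const double mean =
        std::accumulate(blankfield.begin(), blankfield.end(), 0.0) /
        blankfield.size();

    std::vector<std::uint8_t> corrected(image.size());
    for (std::size_t i = 0; i < image.size(); ++i) {
        const double b = std::max<double>(blankfield[i], 1.0); // avoid /0
        const double v = image[i] / b * mean;
        corrected[i] = static_cast<std::uint8_t>(std::clamp(v, 0.0, 255.0));
    }
    return corrected;
}
```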

Figure 7: The pixels of the image in Figure 8 plotted as points in 3D space.

Figure 8: Morpho image of a slice of a chicken heart.
Figure 7 shows the pixels of the image in Figure 8 plotted as points in 3D (red). The background pixels approximately form a parabolic surface (green). This led us to the hypothesis that the illumination artefact can be described by a parabolic function, so we fit a quadric surface to the background pixels. As fitting algorithm we used a simple least-squares technique, without weights or iterations. We ran this algorithm on a blankfield to explore which pixels are the most important for the fit: we masked out several regions of the original image and measured the root-mean-square (RMS) error. It turned out that the border pixels are the most important ones. This can be seen in Figures 9 and 10: in Figure 9 we started masking out the center pixels and let the mask grow towards the border, while in Figure 10 we started with the border pixels and let the mask grow towards the center. The error is considerably higher if only the center pixels are used for the fit. When applying this technique to real images (images with embedded objects), we therefore have to mask out as many object pixels as possible while preserving as many background pixels, and especially the background border pixels, as possible. Making use of these findings, we implemented a tool to apply our algorithm to a large number of images.
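A minimal sketch of such an unweighted, non-iterative least-squares fit, assuming the quadric model z = ax² + by² + cxy + dx + ey + f and solving the 6×6 normal equations by Gaussian elimination (the tool itself performs the matrix computations with the Newmat library (4)):

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

struct Point3 { double x, y, z; };           // pixel position and brightness

// Fits z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to the given points by
// unweighted least squares: builds the 6x6 normal equations A^T A p = A^T z
// and solves them with Gaussian elimination (partial pivoting). Assumes the
// points are non-degenerate, i.e. the normal equations have a solution.
std::array<double, 6> fitQuadric(const std::vector<Point3>& pts)
{
    double N[6][7] = {};                     // augmented normal-equation matrix
    for (const Point3& p : pts) {
        const double row[6] = { p.x * p.x, p.y * p.y, p.x * p.y,
                                p.x, p.y, 1.0 };
        for (int i = 0; i < 6; ++i) {
            for (int j = 0; j < 6; ++j) N[i][j] += row[i] * row[j];
            N[i][6] += row[i] * p.z;
        }
    }
    for (int col = 0; col < 6; ++col) {      // forward elimination
        int pivot = col;
        for (int r = col + 1; r < 6; ++r)
            if (std::fabs(N[r][col]) > std::fabs(N[pivot][col])) pivot = r;
        for (int c = 0; c < 7; ++c) std::swap(N[col][c], N[pivot][c]);
        for (int r = col + 1; r < 6; ++r) {
            const double f = N[r][col] / N[col][col];
            for (int c = col; c < 7; ++c) N[r][c] -= f * N[col][c];
        }
    }
    std::array<double, 6> p{};               // back substitution
    for (int i = 5; i >= 0; --i) {
        double s = N[i][6];
        for (int j = i + 1; j < 6; ++j) s -= N[i][j] * p[j];
        p[i] = s / N[i][i];
    }
    return p;
}
```

Evaluating the fitted surface at every pixel position then yields the lightmap used for the correction.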

Figure 9: A square mask growing from the center was used to delete data points. The plot shows the RMS error of the fitted surface compared to the original blankfield image (1 = 100%).

Figure 10: The same computation as in Figure 9 with the inverted mask (1 = 100%).


Figure 11: The graphical user interface of the illumination correction tool.
Description of the usage:
Figure 11 shows the graphical user interface of the tool. Datasets containing objects that consist of several parts distributed over different locations across several slices can be handled using our maximum intensity projection (MIP) technique. For each pixel position, this technique stores the maximum gray value over all slices, resulting in an image in which object pixels (usually low values) are replaced by background pixels (usually high values) whenever the object parts lie at different positions on different slices; a sketch of this step is given after this paragraph. To use the MIP, the Open button is pressed first. After selecting the files (JPEG files are currently not supported), the MIP is computed by pressing the MIP button. Alternatively, a single image can be loaded (visible as the first item in the combobox) and used as the source for the lightmap computation. To set a marker, the Set Marker button is pressed; a rectangle is drawn onto the image, and the sides of the rectangle can be grabbed and dragged to adjust the marker. The Compute Lightmap button starts the lightmap computation. After the lightmap is computed, the item "Test" is available in the combobox to get a preview, which is computed from the source image used for the lightmap computation. The lightmap is then used to perform the illumination correction on an image stack; the images to be corrected are chosen using the Apply Lightmap button.
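A minimal sketch of the MIP step, assuming the slices are already loaded as equally sized 8-bit grayscale buffers (the tool itself loads the images through Qt (3)); the function name is our own:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Maximum intensity projection: for every pixel position the maximum gray
// value over all slices is kept. Dark object pixels are replaced by bright
// background pixels as long as the object parts lie at different positions
// on different slices. Assumes a non-empty stack of equally sized slices.
std::vector<std::uint8_t> maximumIntensityProjection(
    const std::vector<std::vector<std::uint8_t>>& slices)
{
    std::vector<std::uint8_t> mip(slices.front().size(), 0);
    for (const auto& slice : slices)
        for (std::size_t i = 0; i < mip.size(); ++i)
            mip[i] = std::max(mip[i], slice[i]);
    return mip;
}
```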
All intermediate results (MIP: max.png, marker: marker.png, lightmap: lm.png, and preview: test.png) are saved in the same directory as the source files and can be viewed with ordinary image viewers. It is also possible to use a predefined image in each step via the corresponding load buttons; note that the files have to be named as above and must reside in the same directory as the source files. If, for example, a rectangular marker is not sufficient, a marker can be generated with conventional image-processing software and used in the illumination correction tool. Such a marker has to be an image of the same size as the source images, named "marker.png", placed in the same directory as the source files, with white for the selected region and black for the masked-out regions.
Our illumination correction tool is implemented with Qt (3), which we used for the user interface and for loading the images; for the matrix computations we used the Newmat library (4).
Figure 12 shows an exemplary source image (in this case a blankfield), and Figure 13 shows the lightmap computed from it. Figure 14 shows the illumination-corrected image. The error of our method is shown in Figure 15: we subtracted the computed lightmap from the source image and visualized the absolute per-pixel difference in grayscale. This difference image shows that the remaining errors are mostly on the order of the slicing artefacts, which our method is not intended to correct.
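For completeness, a minimal sketch of this difference visualization, under the same 8-bit buffer assumption as above:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Absolute per-pixel difference between source image and lightmap; bright
// pixels in the result mark locations where the fitted surface deviates
// from the measured illumination.
std::vector<std::uint8_t> differenceImage(
    const std::vector<std::uint8_t>& source,
    const std::vector<std::uint8_t>& lightmap)
{
    std::vector<std::uint8_t> diff(source.size());
    for (std::size_t i = 0; i < source.size(); ++i)
        diff[i] = static_cast<std::uint8_t>(
            std::abs(static_cast<int>(source[i]) - lightmap[i]));
    return diff;
}
```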

Figure 12: The blankfield used to test the tool.

Figure 13: The lightmap computed from the blankfield in Figure 12.

Figure 14: Illumination-corrected version of the blankfield in Figure 12, using the lightmap of Figure 13.

Figure 15: Absolute difference of the blankfield (Figure 12) and the lightmap (Figure 13).
To correct the uneven illumination in the source images we implemented a tool that interprets the image pixels as points in three-dimensional space and fits a quadric surface through them. If the dynamic range of the images fits the 8 bits of the grayscale image (depending on the exposure of the camera while taking the images), and enough background, especially at the border, is present in the images, our technique corrects the uneven illumination well. In the future we plan to correct the other artefacts as well and to make use of efficient automatic or semi-automatic segmentation techniques to reduce the manual intervention of the experts.
1. The Institute of Computer Graphics and Algorithms / Computer Graphics Group, Vienna, Austria. http://www.cg.tuwien.ac.at. Accessed 08/06/2008.
2. Center for Anatomy and Cell Biology, Medical University of Vienna. http://www.meduniwien.ac.at/centeracb/. Accessed 08/06/2008.
3. Qt by Trolltech. http://trolltech.com/products/qt/. Accessed 08/06/2008.
4. Newmat by Robert Davies. http://www.robertnz.net/nm_intro.htm. Accessed 08/06/2008.