The rendering of 3D models and scenes has long been a major challenge in computer graphics and can be divided into two major subfields. Real-time rendering concerns the synthesis of images at interactive rates on commodity graphics hardware, enabling interaction in visualizations and computer games. Physically based rendering focuses on algorithms that accurately simulate the physics of light, for use in the movie industry, architectural visualization, and similar applications. Compared to real-time rendering methods, these algorithms generally require considerable time (hours to days) to produce acceptable results. In the context of real-time rendering, our work revolves around the development of algorithms that achieve higher visual quality. We also work on techniques to accelerate physically based rendering algorithms and make them viable for interactive rendering. A further aspect of our research concerns tools that facilitate the work of content creators.