Screen-Space Material Based Variable Rate Shading
User expectations regarding visual fidelity in real-time rendering continue to increase with each hardware generation. In particular, resolution requirements are expected to become an important demand in the near future, which will place greater load on the GPU. This is due, in part, to the surging availability of 2K and 4K monitors, but especially to the high resolutions that VR requires to achieve good visual fidelity. As such, VR headset manufacturers continue to produce models with ever-increasing display resolutions.
The extremely high resolutions of VR systems are a major hurdle to performance, and they are not expected to stop increasing.
Balancing all these factors becomes extremely challenging if everything is rendered at native resolution. Higher resolutions combined with increased visual fidelity mean systems struggle to maintain 90 FPS, which unfortunately forces developers to sacrifice visual effects just to meet the frame-time budget. However, there is a drawback of the traditional rendering approach that we can exploit: it always wastes precious pixel-shading cycles on inherent “overshading”. In other words, there are always regions of the screen that benefit little or not at all from the increased resolution.
Hence, GPUs have started to introduce a feature called Variable Rate Shading (VRS), which allows programs to specify different shading rates for different regions of the screen. The most obvious use is to account for VR headset lens distortion, or to track the viewer's eyes and shade at a higher rate where they are looking.
A simple use case of VRS, given eye tracking hardware is available.
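The eye-tracked use case above can be sketched with a small NumPy example that builds a per-tile shading-rate map around a gaze point. The 16-pixel tile size is a common VRS tile granularity, but the radii and rate thresholds here are illustrative values, not from any real headset:

```python
import numpy as np

def foveated_rate_map(width, height, gaze_x, gaze_y, tile=16):
    """Build a per-tile shading-rate map: full rate (1x1) near the
    gaze point, coarser rates (2x2, then 4x4) further away.
    The distance thresholds are illustrative, not tuned values."""
    tiles_x, tiles_y = width // tile, height // tile
    ys, xs = np.mgrid[0:tiles_y, 0:tiles_x]
    # Distance of each tile from the gaze point, in tile units.
    dist = np.hypot(xs - gaze_x / tile, ys - gaze_y / tile)
    rates = np.full((tiles_y, tiles_x), 1, dtype=np.uint8)  # 1x1: full rate
    rates[dist > 10] = 2   # 2x2: one shading sample per 4 pixels
    rates[dist > 25] = 4   # 4x4: one shading sample per 16 pixels
    return rates

rate_map = foveated_rate_map(1920, 1080, gaze_x=960, gaze_y=540)
print(rate_map.shape)  # one entry per 16x16 tile: (67, 120)
```

On real hardware such a map would be uploaded as a shading-rate image that the rasterizer consults per tile; here it is just a NumPy array to make the idea concrete.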
However, the case we are more interested in is Adaptive Variable Rate Shading, a technique that estimates the ideal shading rate for a frame before it is drawn, based on the result of the previous frame. Because it can be applied to any scenario, including both VR and traditional gaming, it is already being used in some games, including Wolfenstein II: The New Colossus. It is not without limitations, though: it tends to underestimate the required resolution on shiny materials and must be disabled completely for them; it relies on very few samples of neighboring color data; and those samples come from the previous frame's color, which must be reprojected and was itself already rendered with adaptive VRS. That is, error can build up.
Visualization of the adaptive shading rate in Wolfenstein II: The New Colossus
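The core idea of estimating a shading rate from the previous frame's color can be sketched as follows: where local luminance varies little, coarser shading is unlikely to be noticed. This is a simplified stand-in for the published adaptive-shading heuristic, not its actual error metric, and the just-noticeable-difference threshold is an illustrative value:

```python
import numpy as np

def adaptive_rate_map(prev_frame, tile=16, threshold=0.05):
    """Estimate a per-tile shading rate from the previous frame's
    color (float RGB in [0, 1]). Tiles with low luminance variation
    get coarser rates. Simplified sketch; threshold is illustrative."""
    # Rec. 709 luma.
    luma = prev_frame @ np.array([0.2126, 0.7152, 0.0722])
    h, w = luma.shape
    ty, tx = h // tile, w // tile
    luma = luma[:ty * tile, :tx * tile]
    # Per-tile standard deviation of luminance as a variation measure.
    blocks = luma.reshape(ty, tile, tx, tile)
    variation = blocks.std(axis=(1, 3))
    rates = np.where(variation < threshold / 2, 4,        # 4x4 where flat
             np.where(variation < threshold, 2, 1))       # 2x2 / 1x1
    return rates.astype(np.uint8)
```

Note how this sketch already exhibits the limitations listed above: it sees only aggregated color statistics, so a glossy highlight that moved since the previous frame would be shaded too coarsely.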
We are in direct contact with the authors of adaptive VRS at NVIDIA and are working on an alternative method or potential improvement: we are exploring the use of deep learning to estimate the ideal variable shading rate before rendering, using screen-space material properties instead of, or in addition to, color.
Multiple possible topics are open for applicants, and can be chosen after discussion with the supervisor according to the applicant's qualifications and interests. Any topic will fit in one of these categories:
- Rendering data generation: the student will improve upon our training data generator, which samples scenes for pairs of screen-space material properties and final renderings at multiple resolutions. For example, the student could work on computing screen-space motion vectors.
- Convolutional network development: the student will explore designing and training different deep-learning models for estimating shading rate given material data.
- Real-time 3D scene deployment: the student will make a standalone implementation capable of using an existing trained neural network to predict appropriate shading rates in real-time.
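To make the convolutional-network topic concrete, here is a toy PyTorch sketch of a model mapping screen-space material channels to one shading-rate class per 16×16 tile. The 8-channel G-buffer layout (albedo, normal, roughness, metalness), the three rate classes, and the architecture are all illustrative assumptions, not the project's actual design:

```python
import torch
import torch.nn as nn

class RateNet(nn.Module):
    """Toy sketch: map a hypothetical 8-channel material G-buffer
    (e.g. albedo, normal, roughness, metalness) to per-tile logits
    over shading-rate classes (e.g. 1x1, 2x2, 4x4).
    Strides multiply to 16, matching a 16x16 VRS tile."""
    def __init__(self, in_channels=8, num_rates=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1),  # H/2
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),           # H/4
            nn.ReLU(),
            nn.Conv2d(32, num_rates, 3, stride=4, padding=1),    # H/16
        )

    def forward(self, x):
        # Returns logits of shape (N, num_rates, H/16, W/16).
        return self.net(x)

model = RateNet()
gbuffer = torch.randn(1, 8, 256, 256)   # dummy material input
logits = model(gbuffer)
print(logits.shape)  # torch.Size([1, 3, 16, 16])
```

Such a network could be trained with a cross-entropy loss against per-tile rate labels derived from the data generator's multi-resolution renderings; the real-time deployment topic would then run an exported version of a model like this in C++/CUDA.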
Requirements:
- Fluency in English, spoken and written (the supervisor speaks English, and code and reports must be written in English).
- In case of convolutional network development, PyTorch and Python.
- In case of real-time 3D scene deployment, C++ and CUDA.