Joao Afonso Cardoso, Bernhard Kerbl, Lei Yang, Yury Uralsky, Michael Wimmer
Training and Predicting Visual Error for Real-Time Applications
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 5(1):1–17, May 2022.

Information

  • Publication Type: Journal Paper with Conference Talk
  • Date: May 2022
  • Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
  • Volume: 5
  • Open Access: yes
  • Number: 1
  • Location: online
  • Lecturer: Joao Afonso Cardoso
  • ISSN: 2577-6193
  • Event: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games
  • DOI: 10.1145/3522625
  • Publisher: Association for Computing Machinery
  • Conference date: 3 May 2022 – 5 May 2022
  • Pages: 1 – 17
  • Keywords: perceptual error, variable rate shading, real-time

Abstract

Visual error metrics play a fundamental role in the quantification of perceived image similarity. Recently, use cases for them have emerged in real-time applications, such as content-adaptive shading and shading reuse to increase performance and improve efficiency. A wide range of metrics has been established, with the most sophisticated being capable of capturing the perceptual characteristics of the human visual system. However, their complexity, computational expense, and reliance on reference images to compare against prevent their generalized use in real time, restricting such applications to the simplest available metrics.

In this work, we explore the ability of convolutional neural networks to predict a variety of visual metrics without requiring either reference or rendered images. Specifically, we train and deploy a neural network to estimate the visual error resulting from reusing shading or from reduced shading rates. The resulting models account for 70–90% of the variance while achieving computation times up to an order of magnitude faster. Our solution combines image-space information that is readily available in most state-of-the-art deferred shading pipelines with reprojection from previous frames to enable an adequate estimate of visual errors, even in previously unseen regions. We describe a suitable convolutional network architecture and considerations for data preparation for training. We demonstrate the capability of our network to predict complex error metrics at interactive rates in a real-time application that implements content-adaptive shading in a deferred pipeline. Depending on the portion of unseen image regions, our approach can achieve up to 2× the performance of state-of-the-art methods.
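To make the idea concrete, the sketch below illustrates how a small convolutional network could map stacked G-buffer channels plus a reprojected previous-frame color to one predicted error value per screen tile, which a variable-rate-shading pass could then threshold into shading rates. This is not the authors' network: every channel count, layer size, tile size, and threshold here is an assumption made purely for illustration.

```python
# Minimal sketch (PyTorch), NOT the published architecture: all sizes,
# channel layouts, and thresholds below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ErrorPredictor(nn.Module):
    """Toy CNN: per-pixel inputs -> one predicted error value per tile."""
    def __init__(self, in_channels=12, tile=16):
        # Assumed input stack: normals (3) + albedo (3) + depth (1) +
        # motion vectors (2) + reprojected previous-frame color (3).
        super().__init__()
        self.tile = tile
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        h = self.features(x)                    # (B, 64, H/4, W/4)
        th, tw = x.shape[-2] // self.tile, x.shape[-1] // self.tile
        h = F.adaptive_avg_pool2d(h, (th, tw))  # one feature vector per tile
        return self.head(h)                     # (B, 1, H/tile, W/tile)

# Usage: map predicted error to a coarse shading rate per 16x16 tile.
gbuffer = torch.randn(1, 12, 256, 256)      # stand-in for real frame data
error = ErrorPredictor()(gbuffer)           # (1, 1, 16, 16)
rate = torch.full_like(error, 4.0)          # low error: shade once per 4x4
rate[error > 0.02] = 2.0                    # moderate error: 2x2 shading
rate[error > 0.10] = 1.0                    # high error: full-rate shading
```

In a real system, the network would be trained against reference error metrics computed offline, and the shading-rate thresholds calibrated to the desired quality target.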

BibTeX

@article{cardoso-2022-rtpercept,
  title =      "Training and Predicting Visual Error for Real-Time
               Applications",
  author =     "Joao Afonso Cardoso and Bernhard Kerbl and Lei Yang and Yury
               Uralsky and Michael Wimmer",
  year =       "2022",
  abstract =   "Visual error metrics play a fundamental role in the
               quantification of perceived image similarity. Most recently,
               use cases for them in real-time applications have emerged,
               such as content-adaptive shading and shading reuse to
               increase performance and improve efficiency. A wide range of
               different metrics has been established, with the most
               sophisticated being capable of capturing the perceptual
               characteristics of the human visual system. However, their
               complexity, computational expense, and reliance on reference
               images to compare against prevent their generalized use in
               real-time, restricting such applications to using only the
               simplest available metrics.  In this work, we explore the
               abilities of convolutional neural networks to predict a
               variety of visual metrics without requiring either reference
               or rendered images. Specifically, we train and deploy a
               neural network to estimate the visual error resulting from
               reusing shading or using reduced shading rates. The
               resulting models account for 70%--90% of the variance while
               achieving up to an order of magnitude faster computation
               times. Our solution combines image-space information that is
               readily available in most state-of-the-art deferred shading
               pipelines with reprojection from previous frames to enable
               an adequate estimate of visual errors, even in previously
               unseen regions. We describe a suitable convolutional network
               architecture and considerations for data preparation for
               training. We demonstrate the capability of our network to
               predict complex error metrics at interactive rates in a
               real-time application that implements content-adaptive
               shading in a deferred pipeline. Depending on the portion of
               unseen image regions, our approach can achieve up to 2x
               performance compared to state-of-the-art methods.",
  month =      may,
  journal =    "Proceedings of the ACM on Computer Graphics and Interactive
               Techniques",
  volume =     "5",
  number =     "1",
  issn =       "2577-6193",
  doi =        "10.1145/3522625",
  pages =      "17",
  publisher =  "Association for Computing Machinery",
  pages =      "1--17",
  keywords =   "perceptual error, variable rate shading, real-time",
  URL =        "https://www.cg.tuwien.ac.at/research/publications/2022/cardoso-2022-rtpercept/",
}