Luminance-Contrast-Aware Foveated Rendering

Okan Tarhan Tursun*, Elena Arabadzhiyska-Koleva, Marek Wernikowski, Radosław Mantiuk, Hans-Peter Seidel, Karol Myszkowski, Piotr Didyk

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

35 Citations (Scopus)

Abstract

Current rendering techniques struggle to fulfill the quality and power-efficiency requirements imposed by new display devices such as virtual reality headsets. A promising solution to these problems is foveated rendering, which exploits gaze information to reduce rendering quality in the peripheral vision, where the requirements of the human visual system are significantly lower. Most current solutions model visual sensitivity solely as a function of eccentricity, neglecting the fact that it is also strongly influenced by the displayed content. In this work, we propose a new luminance-contrast-aware foveated rendering technique which demonstrates that the computational savings of foveated rendering can be significantly improved if the local luminance contrast of the image is analyzed. To this end, we first study the resolution requirements at different eccentricities as a function of luminance patterns. We then use this information to derive a low-cost predictor of the foveated rendering parameters. Its main feature is the ability to predict the parameters using only a low-resolution version of the current frame, even though the prediction holds for high-resolution rendering. This property is essential for estimating the required quality before the full-resolution image is rendered. We demonstrate that our predictor can efficiently drive the foveated rendering technique and analyze its benefits in a series of user experiments.
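
The core idea, estimating content-dependent resolution requirements from a low-resolution proxy of the frame before the full-resolution image is rendered, can be illustrated with a minimal Python sketch. The band-limited contrast measure, the resolution_factor mapping, and all numerical constants below are illustrative assumptions, not the calibrated predictor from the paper, which is derived from the psychophysical study described in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def local_contrast(luminance, sigma=2.0):
    """Band-limited local RMS contrast of a low-resolution luminance image.

    Illustrative measure only; the paper derives its own content-aware predictor.
    """
    mean = gaussian_filter(luminance, sigma)
    var = gaussian_filter((luminance - mean) ** 2, sigma)
    return np.sqrt(np.maximum(var, 0.0)) / np.maximum(mean, 1e-4)


def resolution_factor(contrast, eccentricity_deg, falloff=0.35, contrast_gain=2.0):
    """Hypothetical mapping: fraction of full resolution needed per pixel/tile.

    Requirements drop with eccentricity and rise with local contrast; the
    falloff and gain constants are placeholders, not values from the paper.
    """
    base = 1.0 / (1.0 + falloff * eccentricity_deg)      # eccentricity falloff
    boost = np.clip(contrast_gain * contrast, 0.0, 1.0)  # content-dependent term
    return np.clip(base * (0.25 + 0.75 * boost), 0.05, 1.0)


# Usage: predict per-pixel resolution factors from a low-resolution frame
# before the full-resolution image is rendered.
h, w = 90, 160                              # low-resolution proxy of the frame
rng = np.random.default_rng(0)
low_res_luminance = rng.random((h, w))      # placeholder luminance in [0, 1]
gaze_y, gaze_x = h / 2, w / 2               # gaze position in pixels
deg_per_pixel = 0.5                         # assumed angular pixel size

ys, xs = np.mgrid[0:h, 0:w]
eccentricity = np.hypot(ys - gaze_y, xs - gaze_x) * deg_per_pixel
contrast = local_contrast(low_res_luminance)
factors = resolution_factor(contrast, eccentricity)
print(f"predicted resolution factors: min={factors.min():.2f}, max={factors.max():.2f}")
```

In a real renderer, the predicted factors would be pooled per tile and fed to a variable-rate shading or adaptive-resolution pipeline; that integration is outside the scope of this sketch.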

Original language: English
Article number: 98
Number of pages: 14
Journal: ACM Transactions on Graphics
Volume: 38
Issue number: 4
DOIs
Publication status: Published - Jul 2019
Externally published: Yes

Keywords

  • foveated rendering
  • perception
  • image
  • sensitivity
  • masking
