Noise-based Enhancement for Foveated Rendering

Taimoor Tariq, Cara Tursun, Piotr Didyk

Research output: Academic, peer-reviewed



Human visual sensitivity to spatial details declines towards the periphery.
Novel image synthesis techniques, so-called foveated rendering, exploit this
observation and reduce the spatial resolution of synthesized images for the
periphery, avoiding the synthesis of high-spatial-frequency details that are
costly to generate but not perceived by a viewer. However, contemporary
techniques do not make a clear distinction between the range of spatial
frequencies that must be reproduced and those that can be omitted. For a
given eccentricity, there is a range of frequencies that are detectable but
not resolvable. While the accurate reproduction of these frequencies is not
required, an observer can detect their absence if completely omitted. We use
this observation to improve the performance of existing foveated rendering
techniques. We demonstrate that this specific range of frequencies can be
efficiently replaced with procedural noise whose parameters are carefully
tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique
for deriving the parameters of the noise required for the enhancement and
its calibration. The method operates on rendering output and runs at rates
exceeding 200 FPS at 4K resolution, making it suitable for integration with
real-time foveated rendering systems for VR and AR devices. We validate our
results and compare them to an existing contrast enhancement technique
in user experiments.
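The core idea — replacing an omitted spatial-frequency band with calibrated noise in a cheap post-process — can be illustrated with a minimal sketch. This is not the authors' method: the band limits, the constant noise amplitude, and the function names below are illustrative assumptions (in the paper, the noise parameters are tuned to local image content, perception, and eccentricity).

```python
import numpy as np

def bandpass_noise(shape, low, high, rng):
    """White Gaussian noise restricted to a [low, high) spatial-frequency
    band (in cycles per pixel) via an FFT-domain annular mask."""
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    mask = (radius >= low) & (radius < high)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * mask).real
    # Normalize to unit standard deviation so the caller sets the amplitude.
    return filtered / (filtered.std() + 1e-8)

def enhance(foveated, amplitude, low=0.125, high=0.25, seed=0):
    """Add band-limited noise to a foveated (low-resolution) image as a
    stand-in for detectable-but-not-resolvable frequencies the renderer
    omitted. `amplitude`, `low`, `high` are illustrative, not calibrated."""
    rng = np.random.default_rng(seed)
    noise = bandpass_noise(foveated.shape, low, high, rng)
    return np.clip(foveated + amplitude * noise, 0.0, 1.0)
```

A per-pixel amplitude map (driven by eccentricity and local contrast, as the abstract describes) would replace the constant `amplitude` in a real system; the FFT filter here could likewise be swapped for a cheaper spatial-domain kernel in a real-time pipeline.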
Original language: English
Number of pages: 14
Journal: ACM Transactions on Graphics
Issue number: 4
Status: Published - July 2022
