Abstract
Human visual sensitivity to spatial details declines towards the periphery.
Novel image synthesis techniques, known as foveated rendering, exploit this
observation and reduce the spatial resolution of synthesized images for the
periphery, avoiding the synthesis of high-spatial-frequency details that are
costly to generate but not perceived by a viewer. However, contemporary
techniques do not make a clear distinction between the range of spatial
frequencies that must be reproduced and those that can be omitted. For a
given eccentricity, there is a range of frequencies that are detectable but
not resolvable. While the accurate reproduction of these frequencies is not
required, an observer can detect their absence if completely omitted. We use
this observation to improve the performance of existing foveated rendering
techniques. We demonstrate that this specific range of frequencies can be
efficiently replaced with procedural noise whose parameters are carefully
tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique
for deriving the parameters of the noise required for the enhancement and
its calibration. The method operates on rendering output and runs at rates
exceeding 200 FPS at 4K resolution, making it suitable for integration with
real-time foveated rendering systems for VR and AR devices. We validate our
results and compare them with an existing contrast enhancement technique
in user experiments.
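To make the idea concrete, the sketch below adds band-limited procedural noise to the periphery of an already-foveated frame as a post-process. This is a minimal illustration only, not the paper's method: the foveal radius, the noise band in cycles per degree, and the amplitude are hypothetical values, and the perceptual, content-aware calibration described in the abstract is not reproduced.

```python
import numpy as np

def band_limited_noise(shape, low_cpd, high_cpd, ppd, rng):
    """White noise filtered to keep only frequencies in [low_cpd, high_cpd] cycles/degree."""
    h, w = shape
    spectrum = np.fft.fft2(rng.standard_normal((h, w)))
    fy = np.fft.fftfreq(h) * ppd               # cycles/degree along y
    fx = np.fft.fftfreq(w) * ppd               # cycles/degree along x
    radial = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    band = (radial >= low_cpd) & (radial <= high_cpd)
    noise = np.fft.ifft2(spectrum * band).real
    return noise / (noise.std() + 1e-8)        # normalize to unit variance

def add_peripheral_noise(image, gaze_xy, ppd, fovea_deg=5.0,
                         low_cpd=4.0, high_cpd=12.0, amplitude=0.04, seed=0):
    """Blend band-limited noise into the periphery of a foveated frame.

    `image` is a float array in [0, 1], `gaze_xy` the gaze point in pixels,
    `ppd` the display's pixels per visual degree. All numeric defaults are
    illustrative assumptions, not calibrated values.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / ppd
    # Weight is 0 inside the fovea and ramps to 1 in the far periphery.
    weight = np.clip((ecc_deg - fovea_deg) / fovea_deg, 0.0, 1.0)
    grain = amplitude * weight * band_limited_noise((h, w), low_cpd, high_cpd, ppd, rng)
    if image.ndim == 3:                        # broadcast over color channels
        grain = grain[..., None]
    return np.clip(image + grain, 0.0, 1.0)
```

NumPy is used here only for readability; a real-time system would implement the same post-process as a fragment or compute shader on the rendered frame, e.g. `add_peripheral_noise(frame, gaze_xy=(1920, 1080), ppd=40.0)` per frame with the tracked gaze position.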
Original language | English
---|---
Article number | 143
Number of pages | 14
Journal | ACM Transactions on Graphics
Volume | 41
Journal issue | 4
DOIs | 
Status | Published - Jul 2022