What is foveated rendering?

  • by Doug Eggert
  • 5 min


Foveated rendering may sound like a term intended to describe a complex technology. But the truth is, the underlying concept of foveation is straightforward. Using information about where a person looks on a screen, you can reduce the processing needed to generate a scene by rendering the small area where the user is looking in high resolution and the rest of the scene, in the user's periphery, at lower resolution and with fewer details. The primary application of foveated rendering is in display technologies, like VR headsets and AR glasses, where resource optimization is essential.

In this post, I will answer the question: what is foveated rendering? I will talk about dynamic foveated rendering, static (or fixed) foveated rendering, and how these technologies can lower the compute load on the GPU. I will also cover how you can reduce network bandwidth requirements using a sister concept, dynamic foveated transport. Enjoy!

What is foveated rendering?

Foveated rendering is a device-performance optimization technique that concentrates rendering resources on the area of the display where the user looks. The content in the area immediately surrounding the user's gaze point is rendered in high resolution. The rest of the image, the part in the user's peripheral vision, is rendered at lower resolutions, reducing the resources needed to render a scene without any perceived degradation in user experience.
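
To make the zoning idea concrete, here is a minimal Python sketch of how a renderer might pick a rendering scale for each screen tile based on its distance from the gaze point. This is an illustration only, not any vendor's implementation; the zone radii and scale factors are assumed values.

```python
# Minimal sketch (illustrative, not any vendor's implementation):
# pick a rendering scale for a screen tile based on how far it is
# from the user's gaze point. Radii and scales are assumed values.

def foveation_scale(tile_center, gaze_point,
                    inner_radius=0.10, outer_radius=0.30):
    """Return the fraction of full pixel density to render a tile at.

    All coordinates are normalized screen coordinates in [0, 1].
    """
    dx = tile_center[0] - gaze_point[0]
    dy = tile_center[1] - gaze_point[1]
    distance = (dx * dx + dy * dy) ** 0.5

    if distance <= inner_radius:   # foveal zone: full detail
        return 1.0
    if distance <= outer_radius:   # mid periphery: half density
        return 0.5
    return 0.25                    # far periphery: quarter density

# Gaze at the screen center: a corner tile renders at quarter density.
print(foveation_scale((0.95, 0.95), (0.5, 0.5)))  # -> 0.25
```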

The apple illustration shows how our eyes render images, and the car shows how our brains render content

Foveated rendering works because it mimics human vision and how our perception degrades across the field of view. Our brains render what we see by blending what we focus on in high resolution (the apple in the illustration) with the rest of what we see in medium and low resolutions.

Fixed foveated rendering in a VR headset

What is static (or fixed) foveated rendering?

The static approach to foveated rendering, or what some call fixed foveated rendering, assumes that the user focuses on the center of the screen (which is true some of the time). The illustration shows how fixed foveated rendering works in a VR headset by partitioning the screen into hardcoded zones. The assumed region of user attention, indicated by the white part in the center of the screen, is rendered at 100%. The gray sections are rendered at medium resolution, and the light blue areas at low resolution, reducing the resources needed to render the full scene.

You can implement static foveated rendering on just about any device, and you will likely see some degree of resource optimization, but it does not always yield an optimal user experience. The peripheral distortion introduced by early generations of lenses provided an opportunity to reduce resolution in areas where the image would be blurred anyway, but as lens quality has risen, so has the need to render the entire field of view in high resolution.
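
Under the same assumptions as the earlier sketch, the fixed approach amounts to running that zoning once with the gaze point hardcoded to the screen center, so the map can be precomputed. A sketch, reusing the hypothetical foveation_scale helper from above:

```python
# Sketch: with the gaze point hardcoded to the screen center, the
# per-tile scale map can be precomputed once at startup and reused
# for every frame. Reuses the foveation_scale helper from above.

SCREEN_CENTER = (0.5, 0.5)

def fixed_foveation_map(tiles_x, tiles_y):
    """Precompute a per-tile resolution scale, assuming central gaze."""
    scale_map = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            center = ((tx + 0.5) / tiles_x, (ty + 0.5) / tiles_y)
            row.append(foveation_scale(center, SCREEN_CENTER))
        scale_map.append(row)
    return scale_map  # never changes, even when the user looks away
```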

What is dynamic foveated rendering?

Dynamic foveated rendering in a VR headset

Dynamic foveated rendering leverages the actual region of the user's attention to fully render a small portion of the image (illustrated by the white area), stepping down to medium (gray) and low resolution (light blue) with no degradation of quality or user experience. To implement dynamic foveated rendering, you need accurate, low-latency eye tracking that can repeatedly deliver the exact gaze point of the user in real time.

Some of the benchmark tests we've done on dynamic foveated rendering have yielded phenomenal results. In one of the tests we ran on a Pico headset with the Unity engine, GPU shading load dropped by up to 72%, with an average of about 60%. Our tests also revealed a drastic improvement in the stability of frame rates, which didn't drop below 90 frames per second with dynamic foveated rendering enabled. And that's great for user experience. If you want to dive deeper into the results, I suggest you look at our e-book, Eye tracking and dynamic foveated rendering.
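
Continuing the earlier sketch, the dynamic variant rebuilds the zone map every frame around the live gaze point. Here, get_gaze_point() and render_tile() are hypothetical stand-ins for a real eye-tracking API and a renderer hook:

```python
# Sketch: the dynamic variant recomputes the zone map every frame
# around the live gaze point. get_gaze_point() and render_tile() are
# hypothetical stand-ins for an eye-tracking API and a renderer hook.

def render_frame(tiles_x, tiles_y):
    gaze = get_gaze_point()  # hypothetical: latest (x, y) gaze in [0, 1]
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            center = ((tx + 0.5) / tiles_x, (ty + 0.5) / tiles_y)
            scale = foveation_scale(center, gaze)
            render_tile(tx, ty, scale)  # hypothetical: shade tile at scale
```

Note that if the gaze sample is stale, the high-resolution zone lags behind the eye as it moves, which is why accurate, low-latency tracking is a hard requirement.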

Benefits of dynamic foveated rendering

Because it lowers the processing load, dynamic foveated rendering can allow the GPU to run at lower temperatures and draw less power, which reduces the need for cooling, lowers ventilation-related noise, and prolongs battery life, all of which promote comfort.

Limiting the region of full-resolution rendering cuts the load on complex shaders, shortening the time it takes to render a scene. Freed-up resources can be used to deliver more realistic shading and higher levels of scene complexity.
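
As a back-of-the-envelope illustration (with made-up zone sizes, not our benchmark data), you can estimate how much pixel-shading work foveation leaves behind:

```python
# Back-of-the-envelope estimate with made-up zone sizes (these are
# not our benchmark numbers): how much pixel-shading work remains
# after foveation, relative to shading every pixel at full density.

foveal_area = 0.05   # assumed: 5% of the screen shaded at 100%
mid_area = 0.25      # assumed: 25% of the screen shaded at 50%
far_area = 0.70      # remaining 70% of the screen shaded at 25%

relative_work = foveal_area * 1.0 + mid_area * 0.5 + far_area * 0.25
print(f"Remaining shading work: {relative_work:.0%}")  # -> 35%
```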

Most importantly, dynamic foveated rendering (DFR) is an optimization technique that improves the performance of a given hardware architecture. In practice, DFR extends the life of a resource-constrained GPU running on a standalone headset, allowing it to support emerging content and display technologies and deliver realistic, immersive user experiences at lower price points.

What is dynamic foveated transport?

Dynamic foveated transport is a fundamental enabler in the adoption of untethered lightweight wearables. As devices become lighter and on-device resources scarcer, many applications come to depend on dedicated low-latency networking and off-device processing. One way to reduce the amount of data traveling between devices and cloud or edge processors is to leverage dynamic foveated transport.

Dynamic foveated transport leverages eye tracking to capture the user's gaze, instructing the remote processor which parts of a scene to render at high, medium, or low resolution based on where the user looks (in much the same way as on-device dynamic foveated rendering works), reducing the amount of data transported over the network for each scene.
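
Here is a minimal sketch of the idea, reusing the same zoning logic on the transport side; encode_tile() and send() are hypothetical stand-ins for a real video codec and network layer, and tile.center is an assumed attribute:

```python
# Sketch: the same zoning applied to the video stream instead of the
# renderer. encode_tile() and send() are hypothetical stand-ins for a
# real codec and network layer; tile.center is an assumed attribute.

QUALITY_BY_SCALE = {1.0: "high", 0.5: "medium", 0.25: "low"}

def stream_frame(frame_tiles, gaze):
    for tile in frame_tiles:
        scale = foveation_scale(tile.center, gaze)
        payload = encode_tile(tile, quality=QUALITY_BY_SCALE[scale])
        send(payload)  # peripheral tiles compress to far fewer bytes
```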

Why implement foveated rendering?

Foveated rendering is a crucial technology for XR. To implement dynamic foveated rendering, a device needs eye tracking components: cameras, illuminators, and algorithms that can leverage ocular physiology. That technology needs to deliver accurate gaze points in real time, and it needs to work for the global population that will use the device. For commercial devices, the solution design needs to enable high-level applications to leverage the benefits of dynamic foveated rendering without extensive re-programming. If you want to know more about how we have achieved this in scalable commercial solutions, please reach out to us.


Written by

    Doug Eggert

    VP of XR, Tobii

    In my role, I get to work directly with headset manufacturers, helping them integrate eye tracking into their hardware. My focus is the introduction of eye tracking for effortless interaction and immersion in virtual and mixed reality, as well as enabling more capable devices with solutions such as foveated rendering and analytics. I am excited about the future of spatial computing, and I am passionate about working closely with our customers and engineering team to drive the widespread adoption of eye tracking in XR.
