
Beginner’s guide to eye tracking: Expert tips you need to know

  • Blog
  • by Dr. Tim Holmes
  • 6 min

Eye tracking 101 series - #2

Eye tracking has changed dramatically since I first started working with it. We’ve gone from fixed desktop systems in tightly controlled labs to lightweight wearables, AI‑assisted analytics, and the ability to capture attention in real‑world environments like shops, cars, hospitals, and workplaces. 

What hasn’t changed quite as much as you might expect are the mistakes people make. 

After years of working with eye tracking across academic and commercial settings, I can say with confidence that most problems don’t come from the technology — they come from how it’s used. The good news is that the same few checks and habits, applied consistently, will dramatically improve the quality of your data. 

Whether you’re new to eye tracking or have been running studies for years, these tips are the ones I come back to again and again. 

Tip #1: Replay your data before you analyze it

This tip exists because I once saw an entire study unravel due to something that should have been obvious, but wasn’t. 

The gaze data looked reasonable numerically, but when we replayed it over the stimulus, it simply didn’t make sense. Fixations were consistently offset from the objects participants were clearly interacting with. The issue turned out to be a mismatch between the resolution of the stimulus as presented and the resolution used during analysis. 

This still happens, even today. 

No matter how advanced your software is, it can’t correct for:

  • Stimuli shown at different pixel dimensions than your analysis files.

  • Aspect ratio mismatches between recording and playback.

  • Scaling differences between wearable camera footage and AOI overlays.

If the gaze replay doesn’t visually align with the task objects, your data is compromised — even if the numbers look convincing. 

A quick “sense check” will usually catch these issues immediately.

I always recommend: 

  • Include a calibration or alignment stimulus within the actual experiment.

  • Replay multiple samples of raw gaze over the real stimuli.

  • Confirm that expected eye movements (like reading patterns or logo fixations) appear where they should.

If your gaze clusters aren’t where the meaningful content is, investigate the mapping before you touch the analysis.
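A resolution mismatch like the one described above is easy to check for programmatically. The sketch below is a hypothetical illustration, not part of any Tobii API: the resolutions, sample points, and function name are made up, and the idea is simply that a point recorded at the presented center should land at the analysis center after rescaling.

```python
# Hypothetical sketch: rescaling gaze coordinates when the stimulus was
# presented at one resolution but the analysis overlay uses another.
# Resolutions and sample points are illustrative, not from a real study.

def rescale_gaze(points, presented, analysis):
    """Map (x, y) gaze points from the presented resolution
    to the analysis resolution."""
    sx = analysis[0] / presented[0]
    sy = analysis[1] / presented[1]
    return [(x * sx, y * sy) for x, y in points]

presented = (1920, 1080)   # resolution during recording
analysis = (1280, 720)     # resolution of the analysis overlay

gaze = [(960, 540), (1920, 1080), (480, 270)]
print(rescale_gaze(gaze, presented, analysis))
# A point at the presented center (960, 540) should land at or very
# near the analysis center (640, 360); if it doesn't, the mapping
# between recording and analysis resolutions is wrong.
```

A useful habit is to run a check like this on a known landmark (a calibration cross, a logo) before trusting any AOI statistics.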

Tobii Pro Spectrum used in behavioral studies.

Tip #2: Pilot your study — properly 

I could probably write an entire article on this one alone. 

As eye tracking has moved out of the lab and into real environments, the number of variables you don’t control has increased dramatically. Lighting changes. Weather happens. Shoppers wander into shot. Glasses reflect. Schedules slip. Someone moves the shelf you were relying on. 

If you haven’t piloted your study, you’ll discover all of this after you’ve spent your budget. 

And when I say “pilot,” I don’t mean simply running one participant through the task. I mean: 

  • Walking the route at different times of day for shopper studies 

  • Calibrating multiple participants in the actual test environment 

  • Running your analytics on pilot data to ensure the outputs are usable 

  • Checking performance with participants who wear glasses, makeup, hats, or head coverings 

My rule is simple:

If you haven’t run at least one full calibration → recording → analysis cycle before the real sessions, you’re not ready. 

Pilots save studies. Every time. 

Calibrate the glasses, test them on the participant and check their performance.

Tip #3: Heatmaps are useful but can also mislead

Heatmaps are popular for a reason. They’re intuitive, visually compelling, and great for telling a story. But they come with an assumption that’s often overlooked: that everyone in your sample behaved in roughly the same way, over roughly the same period of time. 

If that assumption isn’t true, your heatmap may be hiding more than it reveals. 

Heatmaps work best when: 

  • Participants view stimuli for comparable durations 

  • Behavior is broadly consistent 

  • Outliers have been identified and reviewed 

  • AOIs are defined carefully 

Without that, a heatmap can hide: 

  • Drifting calibrations 

  • People looking at completely different objects 

  • Delayed attention shifts 

  • False “hotspots” caused by a single participant with unusual gaze patterns 

If you must use heatmaps, pair them with: 

  • Individual gaze replays 

  • Scatter plots 

  • AOI‑based metrics 

And please, never rely on heatmaps alone for insights.
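One of the failure modes listed above, a false hotspot driven by a single participant, can be caught with a simple sanity check before a heatmap is ever generated. The sketch below is illustrative: participant IDs, coordinates, and the grid size are invented, and the only assumption is that gaze samples carry a participant identifier.

```python
from collections import defaultdict

# Hypothetical sketch: flagging heatmap cells whose "heat" comes from
# a single participant. Samples are (participant_id, x, y) tuples;
# the 100 px grid and the data are illustrative.

def hotspot_owners(samples, cell=100):
    """Map each grid cell to the set of participants who looked there."""
    owners = defaultdict(set)
    for pid, x, y in samples:
        owners[(int(x // cell), int(y // cell))].add(pid)
    return owners

samples = [
    ("p1", 640, 360), ("p2", 650, 350), ("p3", 630, 370),   # shared hotspot
    ("p4", 1520, 130), ("p4", 1540, 150), ("p4", 1510, 120),  # one person
]
owners = hotspot_owners(samples)
suspect = [c for c, pids in owners.items() if len(pids) == 1]
print(suspect)  # cells driven by a single participant deserve a replay
```

Any cell flagged this way is worth reviewing with an individual gaze replay before it appears in a client-facing heatmap.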

Tip #4: Central bias is real and it matters

People tend to look at the center of an image almost immediately after it appears. This is known as central fixation bias, and it’s a fundamental characteristic of human vision, not a flaw in your study. 

This means: 

📌 Center placement ≠ organic attention
📌 Early fixations often reflect bias, not interest
📌 Anything placed dead‑center is artificially privileged

To reduce the impact of central bias: 

  • Analyze fixations starting 0.5–1 second after onset

  • Place key elements away from the dead center

  • Avoid testing competing designs where one is centered and another isn’t
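The first mitigation above, discarding the earliest fixations, is straightforward to apply in analysis code. This is a hypothetical sketch, with made-up fixation data; the 0.5-second cutoff follows the guideline in the list, and `after_onset` is an illustrative helper, not a library function.

```python
# Hypothetical sketch: discarding early fixations to reduce central
# fixation bias. Fixations are (onset_seconds, x, y) tuples; the data
# below is invented for illustration.

def after_onset(fixations, cutoff=0.5):
    """Keep only fixations starting at or after `cutoff` seconds."""
    return [f for f in fixations if f[0] >= cutoff]

fixations = [
    (0.05, 960, 540),   # immediate central fixation -> likely bias
    (0.40, 950, 530),   # still within the bias window
    (0.80, 300, 200),   # later fixation -> more likely stimulus-driven
    (1.20, 1500, 700),
]
print(after_onset(fixations))  # keeps only the two later fixations
```

Whether 0.5 or 1 second is the right cutoff depends on the stimulus; the point is to make the choice explicit rather than letting the bias window contaminate your metrics.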

If a stakeholder proudly shows you a heatmap with a huge red blob in the middle and asks you to draw conclusions from it, proceed with caution.

Heat mapping used in packaging research studies.
Here's the really important thing to remember: If you want to test whether a brand or product or claim will attract and capture attention, don't place it in the center of the scene.
Dr. Tim Holmes Independent Neuroscientist, Researcher and Educator

Tip #5a: AOIs should never touch

Even with today’s eye tracking technology, accuracy still has limits. Tobii's remote systems typically operate between 0.2 and 0.5 degrees of visual angle, with wearable solutions starting at around 0.6 degrees.

If your AOIs touch, you introduce ambiguity. 

When fixations land on borders, it becomes unclear which AOI they belong to, and noise can easily be mistaken for meaningful data. 

Well‑defined AOIs should:  

  • Be slightly larger than the object (add ~0.5° padding) 

  • Be separated from other AOIs with clear white space 

  • Reflect the real visual grouping your research question requires 

Clean AOIs produce clean metrics. Poorly defined AOIs produce fiction. 
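The ~0.5° padding above has to be translated into pixels for your particular setup, which depends on viewing distance and screen geometry. The sketch below is a standard visual-angle calculation with assumed numbers (a 65 cm viewing distance and a 53 cm wide, 1920 px display); substitute your own measurements.

```python
import math

# Hypothetical sketch: converting the ~0.5 degree AOI padding into
# pixels. The viewing distance and screen geometry below are assumed
# example values, not a recommendation.

def degrees_to_pixels(deg, distance_cm, screen_width_cm, screen_width_px):
    """Size in pixels subtended by `deg` degrees of visual angle."""
    size_cm = 2 * distance_cm * math.tan(math.radians(deg) / 2)
    return size_cm * screen_width_px / screen_width_cm

# Assumed setup: 65 cm viewing distance, 53 cm wide display at 1920 px.
pad = degrees_to_pixels(0.5, 65, 53, 1920)
print(round(pad, 1))  # roughly 20 px of padding per AOI edge
```

On this assumed setup, half a degree works out to roughly 20 pixels, a reminder that "a few pixels of padding" is rarely enough at typical viewing distances.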

An example of good AOIs (Areas of Interest) product placement.

Tip #5b: Overlapping AOIs cause more problems than you think

If touching AOIs are risky, overlapping AOIs are worse, particularly in wearable eye tracking, where the scene is constantly moving.

Overlapping AOIs can cause: 

  • One fixation being counted twice 

  • Misassigned fixations due to layer priority 

  • Lost fixations if occlusion isn't handled 

  • Errors introduced by interpolation when objects resize or move in depth 

When working with dynamic AOIs: 

  • Use keyframes generously 

  • Review interpolated frames, not just endpoints 

  • Explicitly manage occlusions 

If AOIs overlap, you’re no longer analyzing attention; you’re analyzing how the software resolves conflicts.
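Because overlaps (and edges that merely touch) are easy to introduce by accident, it can pay to validate AOI definitions with a script before analysis. This is a hypothetical sketch for static rectangular AOIs only; the names and coordinates are invented, and dynamic AOIs in wearable footage would need a per-frame version of the same check.

```python
# Hypothetical sketch: a pre-analysis check that rectangular AOIs
# neither touch nor overlap. AOIs are (name, left, top, right, bottom)
# tuples in pixels; names and coordinates are illustrative.

def overlapping_pairs(aois):
    """Return pairs of AOIs whose rectangles touch or overlap."""
    bad = []
    for i in range(len(aois)):
        for j in range(i + 1, len(aois)):
            n1, l1, t1, r1, b1 = aois[i]
            n2, l2, t2, r2, b2 = aois[j]
            # Rectangles intersect (or share an edge) when they
            # overlap on both the x and y axes.
            if l1 <= r2 and l2 <= r1 and t1 <= b2 and t2 <= b1:
                bad.append((n1, n2))
    return bad

aois = [
    ("logo", 100, 100, 300, 200),
    ("claim", 250, 150, 500, 250),   # overlaps "logo"
    ("price", 600, 100, 800, 200),   # cleanly separated
]
print(overlapping_pairs(aois))  # [('logo', 'claim')]
```

Running a check like this after every AOI edit catches conflicts before they silently become double-counted fixations.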

If a real store isn’t practical, Tobii Sticky lets you set up realistic digital shelves for online packaging studies. By arranging pack shots as they would appear in store, you can control layout, spacing, and AOI boundaries far more precisely. Done well, this makes it easier to study visibility and findability while keeping AOIs clean, interpretable, and fit for reliable analysis.

If you want to go deeper here, our AOI Area Calculator can help validate AOI size and placement beyond what’s available in Tobii Pro Lab.

Why every step matters

Over the years, I’ve seen plenty of studies where one or two things were done well and the rest was quietly assumed, and that’s usually where problems start.

Reliable insight comes from treating the process as a whole. Replaying your data without piloting the study won’t save you if the setup was flawed. Piloting without respecting the limits of the methodology won’t help if your analysis overreaches.

Each step supports the others, and it’s only when they’re all in place that the data starts to earn your trust.

Remember these three things: 

  1. Replay your data 

  2. Pilot your study 

  3. Respect the limits of the methodology 

Get those right, and eye tracking will reward you with insights that few other methods can offer.

Article updated for 2026

Written by

  • Dr. Tim Holmes


    Independent Neuroscientist, Researcher and Educator

    Dr. Tim Holmes is a visual neuroscientist who researches the role that environment and design play in decision making and behavior. He is recognized as a leading authority on eye tracking and visual attention and has worked with top brands, retailers, architects, content creators, and sports teams to educate on, and develop, behavioral interventions. Tim also works with many academic institutions and is an award-winning educator and public speaker on the application of neuroscience to behavioral influence.

Expand your knowledge of eye tracking studies

Sign up for our newsletter


Register for our newsletter to get Tobii’s latest blog posts and insights delivered to your inbox. Explore our articles focusing on real‑world behavior, attention, and the innovations shaping tomorrow’s technology.