
Online shoppers don’t read, they glance

  • Blog
  • by Oliver Bradley
  • 6 min

Image courtesy of Neem

Most online shoppers are not “shopping” in the romantic sense. They are fast-scrolling a mobile screen with their thumbs, half-focused, multitasking, trying to get a job done in under 10 minutes. The brutal truth, easily observed with eye tracking, is simple: they barely read anything.

They glance at images, skim just enough to feel confident, and move on. If your brand story, size, or variant only exists as tiny, low‑contrast text on a beautifully crafted pack, it may as well not exist on a mobile screen. 

This blog post brings together three strands of my work:  

The findings of a large-scale eye tracking study of UK online grocery shoppers, run with the Tobii Insight team; the Cambridge- and GS1-backed Mobile Ready Hero Image standard; and Rhino, Neem's tool that automates visual clarity and contrast testing at scale.

Together they tell a single story: If you want to win on the digital shelf, you must design for how people actually see and decide online, not how you wish they did.

Image courtesy of Neem

How eye tracking led Unilever to create a new standard for primary eCommerce images   

When we ran a deep online grocery eye tracking study with Tobii across the “big four” UK grocers, the scale alone was serious: 58 heavy online shoppers, 28 hours of real shopping, 4,000 products bought and 4,600 pages captured, plus full video and interviews. The brief was to understand what people genuinely look at, ignore, and act on when they shop in their natural habitat, not in a focus-group fantasy. We covered both mobile and desktop, asked participants simply to do their weekly shop online as they usually would, and recorded what happened next.

The data blew up a lot of comfortable assumptions. There was no single “normal” way to shop, but there were clear behavioral styles: Search‑first “Searchaholics”, Favourite list‑driven “Smart‑listers”, and “Menu‑hangers” who browse departments. 

Despite these differences, one pattern was universal: Huge areas of every page were functionally invisible. Banners, promo blocks and text‑heavy elements suffered from near‑total banner blindness; one tranche of work logged purchase rates of fewer than five items from roughly 5,000 banner impressions. 
Oliver Bradley, Chief Digital Officer, Neem

The one element that consistently cut through? Primary pack thumbnails. Heatmaps, gaze replays and fixation plots showed that large parts of the page were simply ignored, but shoppers almost always fixated on the primary product image tile in lists and search results. Seeing this clearly, time and time again, in the eye tracking results brought home to stakeholders that the existing images were not capturing shoppers’ attention.

People scanned vertically down the column of thumbnails, using images as their primary selection tool, and only dipped into text when something was unclear or missing. If the image did not communicate brand, format, variant and size at a glance, errors and frustration spiked. 

The 4Ws: What shoppers must see instantly 

The insight from our work with Tobii laid the foundation for Cambridge University’s Inclusive Design team and GS1 to define exactly what a thumbnail must communicate to be truly “shopper‑first”. The conclusion was boiled down into four simple questions — the 4Ws — that shoppers should be able to answer from the image alone, without reading the product title:

  • What brand? A clear, recognisable brand name or device.   

  • What format? The product type or category keyword (e.g. shampoo, ice cream).

  • What variant? The key differentiator such as flavour, fragrance or functional variant.   

  • What size / count? The quantity — millilitres, washes, tablet count, diaper size, and so on. 

Conventional front‑of‑pack photography often fails glance readability on at least one of these:

Size and count are the most common failure: On the digital shelf, relative size cues from physical packaging disappear, and tiny pack copy becomes an unreadable blur at thumbnail scale. Shoppers dislike reading long product titles; they want to confirm what they’re buying from the visual alone, and “wrong size” is one of the most frequent online shopping mistakes.

Online shopping eye tracking study - Tesco, Neem and Tobii

Where brands adopted true Mobile Ready Hero Images, the payoff was measurable.

A/B tests at Unilever showed hero images driving uplift in click‑through and conversion of around 19% in deodorants and skin cleansing, 24% on Magnum ice cream and 4% on laundry when wash count was made obvious.
Oliver Bradley, Chief Digital Officer, Neem

That is not “coloring in” — it is material revenue movement from removing friction in one small but critical tile. Recent work by Neem with SodaStream has confirmed this again, with double-digit uplift for the brand online since adding mobile-ready visuals.

Why mobile makes all of this non‑negotiable 

If this were just about desktop, brands might still get away with tiny typography. But mobile is now the primary front door to retail. Traffic has tilted decisively: around three-quarters of all retail traffic is mobile, and shoppers scroll continuously, often spending as little as 1.7 seconds per product in a fast swipe through listings. The average smartphone screen is just 6.1 inches, with usable image areas often around 90 pixels square for a product tile, and most shoppers are multitasking, distracted and impatient.
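To make the scale problem concrete, here is a quick back-of-envelope sketch in Python (the 2,000 px master canvas and text heights are illustrative assumptions, not figures from the study):

```python
def rendered_px(master_canvas_px: float, master_text_px: float,
                tile_px: float = 90) -> float:
    """Pixel height of a text element once a square master pack shot is
    scaled down to a square retailer listing tile (default 90 px)."""
    return master_text_px * tile_px / master_canvas_px

# Small pack copy, 40 px tall on a hypothetical 2,000 px master asset:
print(rendered_px(2000, 40))    # 1.8 px on the tile -- an unreadable smear

# Text sized at 8% of the canvas (160 px on the same master):
print(rendered_px(2000, 160))   # 7.2 px -- at least resolvable at a glance
```

Whatever the exact numbers, the division makes the point: anything that is not large on the master asset is effectively invisible at tile size.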

Authors: Pelli, D. G., & Robson, J. G. (1988). Image courtesy of Neem.

At the same time, eyesight is not improving. Demographic and clinical data show that the median age will keep rising through 2050 and beyond, and ocular accommodation — the eye’s ability to focus on small detail — declines steadily from around age 40 onwards. Many shoppers over 40 struggle with small, low-contrast text even in ideal conditions; add motion, glare, sub-optimal eyesight and a cluttered screen, and legibility falls off a cliff.

DCG analysis of US retailers across 5,000 Product Display Pages (PDPs) on Target, Amazon and Walmart revealed that more than 90% of product images are not truly mobile-ready: text is too small, contrast too low, or key claims are buried in decorative clutter.
Oliver Bradley, Chief Digital Officer, Neem

Brands are, in effect, funding beautifully crafted assets that a significant chunk of their audience literally cannot read at the moment of choice. Mobile conversion still lags behind desktop; the gap is narrowing, and failing to act on it is self-sabotage.

From insight to standard: Mobile Ready Hero Images 

To turn these insights into something scalable, Cambridge University’s Inclusive Design team, GS1 and leading CPGs created formal Mobile Ready Hero Image guidelines, which were endorsed globally in 2022.

The GS1 standards do three important things:

First, they define the role of a Mobile Ready Hero image 
A simplified, decluttered, screen‑optimized representation of the pack that prioritizes the 4Ws at a small size, rather than a literal photograph of the full pack. This often means zooming in on the brand device, adding a clear lozenge with size or count for “tall, thin” packs, and stripping away non‑essential claims and noise. 

Second, they set stringent rules to preserve consistency and avoid "Visual Armageddon"

Size call‑outs must sit in a standardized bottom‑right lozenge, using Open Sans Bold in black for legibility, on a clean white background that matches GS1’s general image specifications. The lozenge can only contain size or count — not marketing slogans — and no floating elements are allowed elsewhere on the canvas. 

Third, they tie design decisions to objective tests 

The guidelines link directly into Cambridge’s visual clarity test, which assesses whether an image is clear enough to be recognized at mobile thumbnail size; if an image fails this test, it is by definition not mobile‑ready. Open‑source Photoshop templates and layout examples are made available for suppliers and retailers so that compliance is not guesswork. 

The net result is a shared language between brands, agencies and retailers: Instead of debating taste, everyone can talk about whether an image delivers the 4Ws under controlled clarity and contrast thresholds on a typical phone screen. 

Online shopping eye tracking study - Tesco, Neem and Tobii

The gap: You can’t fix what you don’t test 

Even with a solid global standard, most organizations hit the same wall: scale.

A typical multi-brand portfolio can easily span tens of thousands of SKUs across multiple retailers and markets. Manually checking every image for minimum text size, stroke width, colour contrast, and clarity for multiple screen sizes is impossible in practice. 

Many teams fall back on “eyeballing”: If a designer or marketer can read it on a high‑resolution monitor in a well‑lit office, they assume it’s fine. Others try to run manual WCAG contrast checks colour by colour, or trust agencies to do the right thing without explicit contracts or automated checks.

It takes at least 150 seconds per manual test, so validating 50,000 product images would take around 500 person‑days and cost in the region of six figures.
Oliver Bradley, Chief Digital Officer, Neem

This is where well‑meaning accessibility aspirations usually die. The intention is there, but the operational load is too heavy, so teams do the odd spot check and hope for the best. At a time when regulators (and activist groups) are starting to scrutinize digital accessibility in private‑sector ecommerce, that “hope” is an expensive risk posture. 

Automating clarity and contrast at scale 

Rhino was built precisely to close this gap. It bulk‑tests digital images at speed using two critical components: The APCA contrast algorithm (a core part of emerging WCAG 3 thinking) and the Cambridge visual clarity model that underpins the Mobile Ready Hero standard. In other words, it encodes the science and the standards into a repeatable, automated pipeline. 

The workflow is simple: Upload single images or entire batches or point Rhino at a URL; the system uses Azure AI Computer Vision and OCR to detect all text in the image — horizontal, vertical, angled or curved — then isolates text from background using a proprietary three‑phase algorithm. Each text‑background pair is then tested with APCA to assess whether it meets defined AA and AAA contrast thresholds, and the Cambridge algorithm measures text height, stroke width and clarity for specific target sizes like mobile hero, web banner or secondary images. 
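To illustrate what an automated contrast check computes, here is a minimal Python sketch of an APCA-style lightness-contrast (Lc) calculation. The constants approximate the public APCA-W3 0.0.98G reference; this is a simplified illustration, not Rhino’s actual implementation:

```python
def srgb_to_y(rgb):
    """Estimate screen luminance Y from an sRGB triple (0-255 per channel)."""
    r, g, b = ((c / 255.0) ** 2.4 for c in rgb)
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    # Soft clamp very dark colours to model black-level flare
    if y < 0.022:
        y += (0.022 - y) ** 1.414
    return y

def apca_lc(text_rgb, bg_rgb):
    """Approximate APCA lightness contrast Lc between text and background.
    Positive values: dark text on light background; negative: the reverse."""
    y_txt, y_bg = srgb_to_y(text_rgb), srgb_to_y(bg_rgb)
    if y_bg >= y_txt:  # dark text on light background
        sapc = (y_bg ** 0.56 - y_txt ** 0.57) * 1.14
        lc = 0.0 if sapc < 0.1 else sapc - 0.027
    else:              # light text on dark background
        sapc = (y_bg ** 0.65 - y_txt ** 0.62) * 1.14
        lc = 0.0 if sapc > -0.1 else sapc + 0.027
    return lc * 100
```

Black text on white scores roughly Lc 106; commonly cited working thresholds are around Lc 75 for fluent body text and Lc 60 for larger text. An automated pipeline simply runs this calculation over every OCR-detected text-background pair.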

Results are presented as clear pass/fail and advisory flags for each text element, with detailed overlays and metrics available for designers and legal reviewers. Teams can tag image sets, export reports, search history, and integrate Rhino via API into DAMs, CMSs or packaging workflows, so checks become part of the everyday production pipeline rather than a one‑off audit. Critically, Rhino supports realistic exclusions for decorative text or brand phrases, keeping noise low and focusing effort where it actually affects shoppers. 


From “nice to have” to non-negotiable 

Putting all this together, the direction of travel is clear. Eye tracking shows that shoppers systematically ignore most of the written text on Product Display Pages, but almost always use images as their primary decision anchor.

Cambridge University’s research, alongside my work with GS1, shows exactly what those images must convey and how to design them for small screens. Demographic and physiological reality shows that an ageing audience needs larger, higher-contrast text to participate fully. The performance of these images on retailer and commerce platforms can then be tested using tools from the Tobii portfolio, such as Sticky by Tobii, or a Tobii Pro Spark eye tracker combined with Tobii Pro Lab software, on mobile or desktop devices.

For CPGs and retailers, this is no longer a marginal design topic.

It sits at the intersection of growth, inclusion and risk: 

a) Growth: Because removing friction at the point of choice lifts conversion and share of search in the channel where most traffic now lives. 

b) Inclusion: Because accessible, legible visuals let older and low‑vision shoppers participate independently rather than being excluded by tiny, low‑contrast text. 

c) Risk: Because, under the new European Accessibility Act, regulators and activist groups are increasingly scrutinising digital experiences that quietly discriminate against large parts of the population, and French retailers are already being targeted by activists.

Making each page or screen self-evident is like having good lighting in a store: it just makes everything seem better.
Steve Krug, Don’t Make Me Think, A Common Sense Approach to Web Usability

Here’s how to win the glance in the scroll… 

The brands that win the next phase of digital commerce will be those that respect how people truly see and decide online. That means accepting the uncomfortable truth from the eye tracking data: shoppers don’t read, they glance. Designing every pack, hero image and banner therefore means making the text on images “glance readable”.

To pass glance readability: 

  1. Declutter and remove all small text; more than four messages is too many.

  2. Choose high-contrast colours for text versus background.

  3. Avoid gradients behind text — use flat design. 

  4. As a rule of thumb, text should be at least 8% of the canvas size.

  5. Avoid TITLE CASE; it is harder to read.

  6. Run both tests, Contrast and Clarity, as you design. 

  7. Use Tobii eye tracking to prove people glance / fixate on your new hero designs. 

  8. Use Rhino from Neem to prove your final designs are glance readable. 
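The rules of thumb above can be sketched as a simple automated pre-flight in Python (the function name and flags below are hypothetical illustrations of this checklist, not the formal GS1 or Cambridge tests, which also assess contrast and stroke width):

```python
def glance_check(canvas_px, text_heights_px, message_count,
                 gradient_behind_text=False, title_case=False):
    """Flag violations of the glance-readability rules of thumb above."""
    issues = []
    if message_count > 4:
        issues.append("more than 4 messages")
    if gradient_behind_text:
        issues.append("gradient behind text (use flat design)")
    if title_case:
        issues.append("TITLE CASE text")
    for h in text_heights_px:
        if h / canvas_px < 0.08:  # rule of thumb: text >= 8% of canvas
            issues.append(f"text {h}px is under 8% of the {canvas_px}px canvas")
    return issues  # empty list means the design passes these rules of thumb
```

For example, a 1,000 px canvas with 100 px text and three messages passes, while 50 px text with five messages and a gradient background fails on several counts.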

 

Need help with your online shopping study?

Our specialists combine eye tracking and behavioral research to show you how shoppers really navigate your Product Display Pages and what drives conversion.

Written by

  • Oliver Bradley

    Oliver Bradley

    Chief Digital Officer, Neem

    Oli Bradley is a digital commerce UX leader focused on making brand content measurably superior for shoppers. With 18 years of experience, he started his dCom journey at Unilever in the UK before moving into a global role, where he designed shopper-first frameworks and scaled winning content across 14 markets and 77 brands, partnering closely with Neem Consulting to deliver at pace and scale. A self-confessed tech geek with a passion for shopper insight, Oli has led extensive eye tracking research with Tobii to understand how people really shop online, particularly on mobile. He became a vocal champion for AI and automation inside Unilever, helping transform how digital content was created, optimised, and deployed. That work contributed to Unilever winning multiple Digital Commerce Global awards, including Best CPG at the Digital Shelf, and to Unilever reaching No. 1 in the DCI Index in October 2024. Through his collaboration with Neem, Oli helped cut content delivery costs by 50% using AI and automation, while also improving consistency, brand quality, and mobile performance. He is known for pushing UX and accessibility onto the CPG agenda, making the case to brand and retailer teams that mobile-first, inclusive design is now a commercial necessity, not a “nice to have”.
