Eye tracking insights in human factors Q2 2024

In Q2 2024, eye tracking technology has significantly advanced human factors research, providing crucial insights across diverse fields. Studies have leveraged eye tracking to explore visual attention in sports, retail, and autonomous driving, enhancing understanding of human-robot interactions and situation awareness. Eye tracking also facilitated breakthroughs in medical training, construction safety, and emotional recognition, offering objective data that enriches traditional assessment methods and supports the development of innovative solutions.

Exploring product style perception: A comparative eye-tracking analysis of users across varying levels of self-monitoring

Yao Wang, Yang Lu, Cheng-Yi Shen, Shi-Jian Luo & Long-Yu Zhang

Digital shopping applications and platforms offer consumers a vast array of products with diverse styles and style attributes. Existing literature suggests that style preferences are determined by consumers’ genders, ages, education levels, and nationalities. In this study, we argue for the feasibility and necessity of self-monitoring as an additional consumer variable impacting product style perception and preference, using eye-tracking technology. Three eye-movement experiments were conducted on forty-two participants (twenty males ...

Investigating the eye movement characteristics of basketball players executing 3-point shots at varied intensities and their correlation with shot accuracy

Xuetong Zhao, Chunzhou Zhao, Na Liu & Sunnan Li

The 3-point shot plays a pivotal role in the historical context of basketball competitions. Visual attention exerts a crucial influence on the shooting performance of basketball players. This study aims to investigate the eye movement characteristics exhibited by high-level basketball players while executing 3-point shots at varying exercise intensities, as well as to explore the correlation between these eye movement characteristics and 3-point field goal percentage.

Assessing Human Visual Attention in Retail Human-Robot Interaction: A YOLOv8-Nano and Eye-Tracking Approach

Kamlesh Kumar, Yuhao Chen, Boyi Hu & Yue Luo

Objectives: This research delves into the dynamics of human-robot interaction (HRI) in retail environments, with a focus on robot detection from videos captured via an eye-tracking system. Methods: The study employs the YOLOv8-nano model for real-time robot detection during grocery shopping tasks. All videos were processed using the YOLOv8 model to test inference speed while performing eye-tracking data analysis as a case study. Results: The YOLOv8 model demonstrated high precision in robot detection, with a mean average precision (mAP) of approximately 97.3...
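
A minimal sketch of how fused gaze-and-detection data of this kind can be analyzed: once a detector (such as YOLOv8-nano) returns bounding boxes for the robot class, each gaze sample from the eye tracker can be tested against those boxes. Function and variable names here are illustrative, not taken from the paper.

```python
def gaze_on_robot(gaze_xy, boxes):
    """Return True if a gaze point falls inside any detected robot box.

    gaze_xy: (x, y) in pixel coordinates of the scene-camera frame.
    boxes:   list of (x1, y1, x2, y2) detector outputs for the robot class.
    """
    gx, gy = gaze_xy
    return any(x1 <= gx <= x2 and y1 <= gy <= y2 for x1, y1, x2, y2 in boxes)


def robot_attention_ratio(gaze_samples, boxes_per_frame):
    """Fraction of gaze samples that land on a detected robot."""
    hits = sum(gaze_on_robot(g, b) for g, b in zip(gaze_samples, boxes_per_frame))
    return hits / len(gaze_samples)
```

Such a per-frame test is the simplest way to turn detector output and gaze coordinates into an attention measure; real pipelines would also handle detector confidence and gaze-data validity flags.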

Hospital and Home Environments Automation for Amyotrophic Lateral Sclerosis Patients: Building Information Modeling and the Internet of Things in Digital Environments

Francesco Alotto, Matteo Del Giudice, Roberta Surian, Nicola Rimella, Andrea Acquaviva, Edoardo Patti & Anna Osello

In this work, we present a novel distributed software platform for patients with neurodegenerative diseases that affect motor neurons. The linking point between this wide range of diseases is the strong social impact they have, degrading the freedom of action of the patient; the loss of functionality of motor neurons, caused by progressive degradation, makes the body unable to move. Thus, our solution aims at making patients more autonomous in their daily activities and at improving remote health-care monitoring for medical staff by combining virtual rea...

    Prediction of Robotic Anastomosis Competency Evaluation (RACE) metrics during vesico-urethral anastomosis using electroencephalography, eye-tracking, and machine learning

    Somayeh B. Shafiei, Saeed Shadpour, James L. Mohler, Parisa Rashidi, Mehdi Seilanian Toussi, Qian Liu, Ambreen Shafqat & Camille Gutierrez (PLOS One)

    Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth ‘anastomosis’) on plastic models and animal tissue using the da...

    VALIO: Visual attention-based linear temporal logic method for explainable out-of-the-loop identification

    Mengtao Lyu, Fan Li, Ching-Hung Lee & Chun-Hsien Chen

    The phenomenon of being Out-Of-The-Loop (OOTL) can significantly undermine pilots’ performance and pose a threat to aviation safety. Previous attempts to identify OOTL status have primarily utilized “black-box” machine learning techniques, which fail to provide explainable insights into their findings. To address this gap, our study introduces a novel application of Linear Temporal Logic (LTL) methods within a framework named Visual Attention for Identifying OOTL (VALIO), leveraging eye-tracking technology to non-intrusively capture the pilots’ attenti...

    Personality traits affecting construction workers' near-miss recognition performance: Analysis based on eye tracking

    Shashank Muley, Chao Wang & Fereydoun Aghazadeh

    The construction industry is widely acknowledged as hazardous in nature, requiring proactive measures to mitigate accidents and minimize fatalities. While hazard recognition is recognized as a key preventive measure, research gaps persist regarding the impact of workers' personalities on near-miss identification. This study aimed to investigate the influence of the big five personality traits on construction workers' recognition of the Fatal-four near-miss incidents. Using an eye-tracking experiment conducted in a controlled environment, 35 participants ...

    From Distraction to Action: Elevating Situation Awareness with Visual Assistance in Level 3 Autonomous Driving

    Yancong Zhu, Chengyu Li, Zixuan Qiao, Rong Qu, Yu Wang, Jiaqing Xiong & Wei Liu

    This study examines the impact of visual assistance and cognitive load on situation awareness (SA) during takeover events in Level 3 (L3) autonomous driving, where drivers are permitted to engage in Non-Driving Related Tasks (NDRTs). Utilizing a driving simulator, the research explores how different NDRTs, occupying various sensory channels, influence drivers’ SA under two visual assistance conditions: full marking and key marking. Results from the Situation Awareness Global Assessment Technique (SAGAT) and NASA Task Load Index (NASA-TLX) scales, along w...

    Exploring the Impact of Facade Color Elements on Visual Comfort in Old Residential Buildings in Shanghai: Insights from Eye-Tracking Technology

    Zhanzhu Wang, Maoting Shen & Yongming Huang

    Building façade color plays a key role in shaping urban image, enhancing urban vitality, and optimizing citizens’ living experience. Moreover, colors can influence people’s perception of space, but the multiple interrelationships between color elements and users’ color evaluation and visual perception have not yet been thoroughly studied. In order to explore the relationships between color elements and visual perception and subjective comfort, this study discusses the matching relationship between color and the comfort of a residential building façade fr...

    The Effect of Danmaku Font Size on Online Learning Outcomes for Learners with Different Cognitive Styles: Evidence from Eye Movements

    Fengqiang Gao, Chunze Xu, Qing Lv, Zhong Liu & Lei Han

    Danmaku is increasingly used during online interactions, such as learner communication in online learning. As an emerging online interaction method, Danmaku not only enhances the viewer interaction experience but also allows learners to achieve better learning outcomes. Therefore, this study used a 2 (Danmaku font size: large, small) × 2 (cognitive style: field-independent, field-dependent) between-participants design to investigate the effect of Danmaku font size on online learning through eye-tracking studies and to reveal the mechanism of the effect...

    Intelligent emotion recognition in product design using multimodal physiological signals and machine learning

    Lekai Zhang, Fo Hu, Xingyu Liu, Yingfan Wang, Hailong Zhang, Zheng Liu & Chunyang Yu

    Identifying emotional responses in products is essential for product design and user research. Traditional methods, such as interviews and surveys, for gathering product experience data are time-consuming and resource-intensive, and often fail to capture users’ genuine emotional intentions. This article introduces an intelligent method for accurately identifying user-product emotions using multimodal physiological signals and machine learning techniques. The study involves designing experiments with 63 representative product images, and collecting variou...

    Virtual reality as an engaging and enjoyable method for delivering emergency clinical simulation training: a prospective, interventional study of medical undergraduates

    Risheka Walls, Priyanka Nageswaran, Adrian Cowell, Tunav Sehgal, Thomas White, James McVeigh, Stefan Staykov, Paul Basett, Daniel Mitelpunkt & Amir H. Sam

    It is a requirement that medical students are educated in emergencies and feel well prepared for practice as a doctor, yet national surveys show that many students feel underprepared. Virtual reality (VR), combined with 360-degree filming, provides an immersive, realistic, and interactive simulation experience. Unlike conventional in-person simulation, it is scalable with reduced workforce demands. We sought to compare students’ engagement and enjoyment of VR simulation to desktop computer-based simulation.

    • Tobii VR

    A Multimodal Assistive-Robotic-Arm Control System to Increase Independence After Tetraplegia

    Taylor C. Hansen, Troy N. Tully, V. John Mathews & David J. Warren

    Following tetraplegia, independence for completing essential daily tasks, such as opening doors and eating, significantly declines. Assistive robotic manipulators (ARMs) could restore independence, but typically input devices for these manipulators require functional use of the hands. We created and validated a hands-free multimodal input system for controlling an ARM in virtual reality using combinations of a gyroscope, eye-tracking, and heterologous surface electromyography (sEMG). These input modalities are mapped to ARM functions based on the user’s ...

    • Tobii VR

    Real-World Scanpaths Exhibit Long-Term Temporal Dependencies: Considerations for Contextual AI for AR Applications

    Charlie S Burlingham, Naveen Sendhilnathan, Xiuyun Wu, T. Scott Murdison & Michael J Proulx

    All-day augmented reality (AR) requires contextually-aware artificial intelligence (AI) models that excel across diverse daily contexts. Eye tracking could be a key source of information about user context and intention. However, such models using gaze sometimes struggle to outperform egocentric video-based baseline models. We propose that learning representations of scanpath history in a perceptually-relevant state space may solve this problem. However, scanpaths are often assumed to obey a Markovian assumption, i.e., only the current and previous fixat...
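
The Markovian assumption the authors question can be made concrete: a first-order model summarizes a scanpath purely by its fixation-to-fixation transition probabilities, discarding any longer history. A minimal sketch (AOI labels and the function name are illustrative):

```python
from collections import Counter

def transition_probs(scanpath):
    """First-order (Markovian) transition probabilities between AOIs.

    scanpath: sequence of AOI labels, one per fixation, in temporal order.
    Returns {(a, b): P(next == b | current == a)}.
    """
    pair_counts = Counter(zip(scanpath, scanpath[1:]))  # count adjacent pairs
    from_counts = Counter(scanpath[:-1])                # count origins
    return {(a, b): n / from_counts[a] for (a, b), n in pair_counts.items()}
```

The paper's point is that such a matrix is an incomplete description: if real-world scanpaths exhibit long-term temporal dependencies, models conditioning only on the current and previous fixation will miss predictive structure.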

    Analyzing and Interpreting Eye Movements in C++: Using Holistic Models of Image Perception

    Florian Hauser, Lisa Grabinger, Timur Ezer, Jürgen Horst Mottok & Hans Gruber

    This study uses holistic models of image perception originating from radiology and psychology to analyze and interpret eye movements during code reviews in the C++ programming language. The study design is based on former experiments, but is supplemented by approaches from expertise research. The study utilizes a sample of 34 subjects whose eye movements are recorded by a Tobii Pro Spectrum 600 Hz. The results show that the holistic models of image perception are suitable for application to source code. In addition, it can be observed that the code revie...

    Emotion Prediction in Real-Life Scenarios: On the Way to the BIRAFFE3 Dataset

    Krzysztof Kutt & Grzegorz J. Nalepa

    Despite over 20 years of research in affective computing, emotion prediction models that would be useful in real-life out-of-the-lab scenarios such as health care or intelligent assistants have still not been developed. The identification of the fundamental problems behind this concern led to the initiation of the BIRAFFE series of experiments, whose main goal is to develop a set of techniques, tools and good practices to introduce personalized context-based emotion processing modules in intelligent systems/assistants. The aim of this work is to present ...

    Assessment of cognitive workload based on information theory enabled eye metrics

    Souvik Das & J. Maiti

    Despite the deployment of sophisticated automation and control mechanisms in modern industries, accidents continue to occur. According to the literature, human error is the predominant contributor to these accidents. This depicts the deteriorating cognitive functions of human operators in managing abnormal situations. To proactively prevent human error, this study proposes information theory-enabled eye metrics to quantify the cognitive workload of the operator. The proposed metrics are based on gaze entropy and are evaluated using experimental studies. ...
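
Gaze-entropy metrics of the kind proposed here typically start from Shannon entropy over the distribution of fixations across areas of interest (AOIs). A minimal sketch of that baseline computation, not the authors' exact formulation:

```python
import math

def stationary_gaze_entropy(fixation_counts):
    """Shannon entropy (bits) of the fixation distribution over AOIs.

    fixation_counts: {aoi_name: number_of_fixations}.
    Higher entropy = more dispersed scanning; lower entropy = focused
    viewing. Shifts in this value are one common proxy for changes in
    cognitive workload.
    """
    total = sum(fixation_counts.values())
    probs = [c / total for c in fixation_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

With four equally visited AOIs the entropy is 2 bits; with all fixations on one AOI it is 0, giving an interpretable scale for comparing operators or task conditions.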

    Automated Assessment of Eye-Hand Coordination Skill Using a Vertical Tracing Task on a Gaze-Sensitive Human-Computer Interaction Platform for Children with Autism

    Dharma Rane, Madhu Singh & Uttama Lahiri

    Children with Autism often demonstrate atypical gaze patterns and eye-hand coordination skill deficits, marked by difficulties in reaching out for an object, tracing on a vertically mounted canvas, etc. Existing conventional methods can assess one's coordination skill during hand movement in 3D-space, but such methods can be subjective and devoid of gaze tracking. Investigation of the coordination skill and gaze tracking of this target group in tasks set in 3D-space has been largely unexplored. To quantitatively assess one's eye-hand coordination ski...

      Evaluation of Biomechanical and Mental Workload During Human–Robot Collaborative Pollination Task

      Mustafa Ozkan Yerebakan, Yu Gu, Jason Gross & Boyi Hu

      The purpose of this study is to identify the potential biomechanical and cognitive workload effects induced by a human-robot collaborative pollination task, how additional cues and the reliability of the robot influence these effects, and whether interacting with the robot influences the participant’s anxiety and attitude towards robots.

      Uniss-FGD: A Novel Dataset of Human Gazes Over Images of Faces

      Pietro Ruiu, Mauro Fadda, Andrea Lagorio, Seth Nixon, Matteo Anedda, Enrico Grosso & Marinella Iole Cadoni (Lecture Notes in Information Systems and Organisation: Technology Driven Transformation)

      Face detection and recognition play pivotal roles across various domains, spanning from personal authentication to forensic investigations, surveillance, entertainment, and social media. In our interconnected world, pinpointing an individual’s identity amidst millions remains a formidable challenge. While contemporary face recognition techniques now rival or even surpass human accuracy in critical scenarios like border identity control, they do so at the expense of poor explainability, leaving the underlying causes of errors largely unresolved. Moreover,...

      Real-time driving risk prediction using a self-attention-based bidirectional long short-term memory network based on multi-source data

      Zhuopeng Xie, Yongfeng Ma, Ziyu Zhang & Shuyan Chen

      Early warning of driving risks can effectively prevent collisions. However, numerous studies that predicted driving risks have suffered from the use of single data sources, insufficiently advanced models, and lack of time window analysis. To address these issues, this paper proposes a self-attention-based bidirectional long short-term memory (Att-Bi-LSTM) network model to predict driving risk based on multi-source data. First, driving simulation tests are conducted. Driver demographic, operation, visual, and physiological data as well as kinematic data a...
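
Before a recurrent model such as the proposed Att-Bi-LSTM can be trained, fused multi-source streams (demographic, operation, visual, physiological, kinematic) are typically segmented into fixed-length time windows. A minimal windowing sketch; parameter names are illustrative, and the Att-Bi-LSTM itself is not reproduced here:

```python
def sliding_windows(series, window, step):
    """Segment a time series into overlapping fixed-length windows.

    series: list of per-timestep feature vectors (or scalars).
    window: number of timesteps per segment fed to the sequence model.
    step:   stride between consecutive windows (overlap = window - step).
    """
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]
```

The choice of `window` and `step` is exactly the "time window analysis" the abstract says earlier studies lacked: too short a window starves the model of context, too long a one delays the risk warning.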

      Eye tracking and audio sensors to evaluate surgeons’ non-technical skills: An empirical study

      Shraddhaa Narasimha, Marian Obuseh, Nicholas Eric Anton, Haozhi Chen, Raunak Chakrabarty, Dimitrios Stefanidis & Denny Yu (International Journal of Human–Computer Interaction)

      Non-Technical Skills (NTS) of medical teams are currently measured using subjective and resource-intensive ratings given by experts. This study explores whether objective NTS assessment approaches with eye-tracking and audio sensors can measure teamwork and communication skills in surgery. Eight surgeons participated in a simulated two-phase surgical scenario developed to assess their NTS. Sensor-based audio, eye-tracking, and video data were collected and analyzed along with ratings from the NOTSS scale. Different levels of communication were detected by the s...

      A Study on the Development of Driver Behavior Simulation Dummy for the Performance Evaluation of Driver Monitoring System

      Jin Hae Yae, Young Dal Oh, Moon Sik Kim & Sun Hong Park

      The driver monitoring system (DMS) was mainly developed to prevent accident risks by analyzing, in real time through cameras, facial movements related to drowsiness and carelessness, such as the driver’s gaze, blinks, and head angle, and by warning the driver. Recently, its scope has been expanded to monitor passengers, and it has been linked to safety functions such as detecting children left unattended, detecting empty seats, or controlling airbags on seats occupied by people under a safe weight. However, evaluation research for algorithm advancement and performance optimization is relatively insuff...

      Reducing Driving Risk Factors in Adolescents with Attention Deficit Hyperactivity Disorder (ADHD): Insights from EEG and Eye-Tracking Analysis

      Anat Keren, Orit Fisher, Anwar Hamde, Shlomit Tsafrir & Navah Z. Ratzon (Advances in Autism)

      Adolescents with attention deficit hyperactivity disorder (ADHD) face significant driving challenges due to deficits in attention and executive functioning, elevating their road risks. Previous interventions targeting driving safety among this cohort have typically addressed isolated aspects (e.g., cognitive or behavioral factors) or relied on uniform solutions. However, these approaches often overlook this population’s diverse needs. This study introduces the “Drive-Fun” innovative intervention (DFI), aimed at enhancing driving skills among this vulnera...

      Deep learning–based eye tracking system to detect distracted driving

      Song Xin, Shuo Zhang, WanRong Xu, Yuxiang Yang & Xiao Zhang

      Drivers obtain road traffic information from various locations while driving, resulting in distracted driving and eventually traffic accidents. Thus, the position and duration of the driver’s gaze while driving must be studied. This study primarily aims to detect the driver’s gaze dispersion using eye-tracking technology and the YOLOv5 algorithm. The eye-tracking technology uses the traditional method of extracting the area of interest (AOI) to analyze the changes in the driver’s pupil diameter and gaze position and duration in each area during driving. The impr...
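
The AOI analysis described above can be sketched as accumulating dwell time wherever a gaze sample falls inside a rectangular region. A minimal illustration, assuming uniformly sampled gaze data; names and region layout are hypothetical:

```python
def aoi_dwell_times(samples, aois):
    """Total dwell time per area of interest (AOI) from gaze samples.

    samples: list of (t_seconds, x, y), assumed uniformly sampled.
    aois:    {name: (x1, y1, x2, y2)} rectangular regions, e.g. mirrors,
             dashboard, windshield.
    """
    if len(samples) < 2:
        return {name: 0.0 for name in aois}
    dt = samples[1][0] - samples[0][0]  # sample period from timestamps
    dwell = {name: 0.0 for name in aois}
    for _, x, y in samples:
        for name, (x1, y1, x2, y2) in aois.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                dwell[name] += dt
    return dwell
```

Dwell times per AOI are the "gaze position and duration in each area" that such studies compare across driving conditions.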

      Augmented reality in team-based search and rescue: Exploring spatial perspectives for enhanced navigation and collaboration

      Fang Xu, Tianyu Zhou, Tri Nguyen & Jing Du

      Indoor search and rescue (SAR) missions frequently encounter challenges due to complex spatial layouts and conditions of high stress. Augmented Reality (AR) has emerged as a promising tool to aid SAR efforts by offering spatial information through exocentric and egocentric perspectives. Specifically, AR systems introduce an egocentric perspective, overlaying detailed spatial information onto physical environments, which holds promise for navigating the complexities of indoor SAR operations. However, the effectiveness of these AR perspectives in a team-ba...

      • Tobii VR

      Using Multimodal Methods and Machine Learning to Recognize Mental Workload: Distinguishing Between Underload, Moderate Load, and Overload

      Zebin Jiang, Xinyan Li, Liezhong Ge, Jie Xu, Yandi Lu, Yijing Zhang & Ming Mao

      Mental workload recognition is of great significance in preventing human errors and accidents. This study constructed a multimodal recognition scheme to recognize three mental workload states: underload, moderate load, and overload. Based on driving scenarios, these three states were induced in this study by changing the driving modes and situations. Multimodal recognition of underload, moderate load, and overload was performed using electroencephalography (EEG), electrocardiography (ECG), and pupillometry. In addition, various machine learning methods w...

      Visual effects of a forward-curled 3D map of the Forbidden City with eye-tracking

      Shen Ying, Junru Su, Yuan Zhuang & Lina Huang

      In urban environment visualization, including both traditional two-dimensional (2D) and three-dimensional (3D) visualization, the height of ground objects results in visual occlusions in ordinary 3D maps, which leads to challenges in displaying spatial relationships. We empirically studied the visual effects of a curled deformation method and assessed whether curled deformation visualization could help participants complete wayfinding tasks. The results revealed that a forward-curled map can include both ego-view and bird-view perspectives, ensure contin...

      Assembly complexity and physiological response in human-robot collaboration: Insights from a preliminary experimental analysis

      Matteo Capponi, Riccardo Gervasi, Luca Mastrogiacomo & Fiorenzo Franceschini

      The Industry 5.0 paradigm has renewed interest in the human sphere, emphasizing the importance of workers’ well-being in manufacturing activities. In this context, collaborative robotics originated as a technology to support humans in tiring and repetitive tasks. This study investigates the effects of assembly complexity in Human-Robot collaboration using physiological indicators of cognitive effort. In a series of experiments, participants performed assembly processes of different products with varying complexity, in two modalities: manually and with cobot ...

      Post-Takeover Proficiency in Conditionally Automated Driving: Understanding Stabilization Time with Driving and Physiological Signals

      Timotej Gruden, Sašo Tomažič & Grega Jakus (Advances in Autism)

      In the realm of conditionally automated driving, understanding the crucial transition phase after a takeover is paramount. This study delves into the concept of post-takeover stabilization by analyzing data recorded in two driving simulator experiments. By analyzing both driving and physiological signals, we investigate the time required for the driver to regain full control and adapt to the dynamic driving task following automation. Our findings show that the stabilization time varies between measured parameters. While the drivers achieved driving-relat...

      Fatigue driving state detection for tanker truck drivers based on multi-feature fusion

      Ning Zhang, Ziyi Zhang, Chuanyi Ma, Ziliang Yang, Shengtao Zhang & Jianqing Wu

      As the significance of truck transportation in the modern economy continues to grow, the issue of fatigue driving in tanker trucks has garnered significant attention. Therefore, this study proposes a multimodal fatigue driving detection method. It involves conducting driving experiments with tanker trucks, collecting driving operation data, electrocardiogram data, and eye-tracking data. After data preprocessing, a multimodal driving dataset is generated. Data mining techniques are used to extract 42 driving feature values, and then, through correlation a...
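
Correlation-based feature screening of the kind mentioned (42 extracted feature values, followed by correlation analysis) can be illustrated with a plain Pearson coefficient. This is a generic sketch, not the authors' pipeline:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. a candidate driving/ECG/eye-tracking feature vs. a fatigue label,
    used to drop weakly correlated features before multimodal fusion."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Features with |r| near zero against the fatigue label carry little linear signal and are natural candidates for removal before training a fused detector.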

      A proposed framework for data-driven human factors evaluation

      Isabelle Ormerod, Henrikke Dybvik, Mike Fraser & Chris Snider

      Human-centred approaches within the design cycle are crucial to enhance the usability and inclusivity of products. However, the qualitative nature of traditional human factors evaluation can create bottlenecks, prompting the need for more data-driven methods. A framework for data-driven human factors is presented, looking to integrate mixed-method approaches. Case studies illustrate its usage in real-world scenarios, and challenges are summarised, calling for robust data collection methods, balancing of mixed methods, a need for explainable systems, and inte...

      Evaluation of apparent effectiveness of safety sign group in underground cavern construction

      Qin Zeng, Yanhua Chen & Donghui Li

      Effective sign layouts are essential for guiding driving in underground construction caverns and improving transportation safety. While previous studies concentrated on evaluating drivers' gaze behavior in tunnels, the absence of a theoretical framework for visual perception of sign groups impedes comprehensive perception measurement and layout optimization. This paper aims to bridge this gap by exploring drivers' visual cognition through the analysis of eye movement and EEG indicators in sign group recognition tasks. It establishes an intuitive evaluation...

      Assessing the Legibility of Arabic Road Signage Using Eye Gazing and Cognitive Loading Metrics

      Mohammad Lataifeh, Naveed Ahmed, Shaima Elbardawil & Somayeh Gordani

      This research study aimed to evaluate the legibility of Arabic road signage using an eye-tracking approach within a virtual reality (VR) environment. The study was conducted in a controlled setting involving 20 participants who watched two videos using the HP Omnicept Reverb G2. The VR device recorded eye gazing details in addition to other physiological data of the participants, providing an overlay of heart rate, eye movement, and cognitive load, which in combination were used to determine the participants’ focus during the experiment. The data were pr...

      • Tobii VR

      Theoretical Framework for Utilizing Eye-Tracking Data to Understand the Cognitive Mechanism of Situational Awareness in Construction Hazard Recognition

      Yanfang Luo, Qiang Yang, JoonOh Seo & Seungjun Ahn

      Comprehending the cognitive processes underlying hazard identification is crucial for enhancing worker safety behavior in construction. Recent studies have explored eye-tracking technology’s potential in understanding human cognition across contexts. However, limited research delves into the intricate cognitive processes linking eye movements and hazard recognition, particularly in the context of situational awareness (SA). Thus, this study investigates the relationship between eye movement data and SA’s cognitive processes in hazard recognition virtual ...

      • Tobii VR

      Eye-tracking-based analysis of pharmacists’ thought processes in the dispensing work: research related to the efficiency in dispensing based on right-brain thinking

      Toshikazu Tsuji, Kenichiro Nagata, Masayuki Tanaka, Shigeru Hasebe, Takashi Yukita, Mayako Uchida, Kimitaka Suetsugu, Takeshi Hirota & Ichiro Ieiri

      Pharmacists should be aware of their thought processes in dispensing work, including differences in the dispensing complexities owing to different drug positions in the left, center, and right areas. Dispensing errors associated with “same-name drugs (a pair of drugs with the same name but a different ingredient quantity)” are prevalent and often negatively affect patients. In this study, using five pairs of comparative models, the gaze movements of pharmacists in dispensing work were analyzed using an eye-tracking method to elucidate their thought proce...

      Evidence of elevated situational awareness for active duty soldiers during navigation of a virtual environment

      Leah R. Enders, Stephen M. Gordon, Heather Roy, Thomas Rohaly, Bianca Dalangin, Angela Jeter, Jessica Villarreal, Gary L. Boykin & Jonathan Touryan (Software and Systems Modeling)

      U.S. service members maintain constant situational awareness (SA) due to training and experience operating in dynamic and complex environments. Work examining how military experience impacts SA during visual search of a complex naturalistic environment is limited. Here, we compare Active Duty service members’ and Civilians’ physiological behavior during a navigational visual search task in an open-world virtual environment (VE) while cognitive load was manipulated. We measured eye-tracking and electroencephalogram (EEG) outcomes from Active Duty (N = 21)...

      Greening Indoor Workplace in High-Density Cities

      Qinghua Lei, Chao Yuan & Stephen Siu Yu Lau

      It is well known that greenery biophilic design can improve health and productivity, but studies are still needed to quantify greenery dose and the corresponding well-being benefits to support design practice. In this study, we investigated the impacts of various greenery doses on workplace well-being from the perspectives of physiological, psychological, and productivity performance. An experiment was conducted in which green coverage ratios of 0%, 0.2%, 5%, 12%, and 20% were tested, and both the health and productivity performance of 15 participants were measured by...

      The influence of uncertainty visualization on cognitive load in a safety- and time-critical decision-making task

      Suvodip Chakraborty, Peter Kiefer & Martin Raubal

      Decisions with spatial visualizations are often made under uncertainty and high time pressure. However, missing or improper representation of uncertainty can hamper the decision-making process. This paper investigates the impact of uncertainty visualization on cognitive load in the context of a safety-critical, time-sensitive decision-making task with a transportation system map. In a controlled experiment (n = 40) with a dual-task paradigm, we compared three different uncertainty visualization techniques and a baseline for different levels of time press...

      Early Eye Disengagement Is Regulated by Task Complexity and Task Repetition in Visual Tracking Task

      Yun Wu, Zhongshi Zhang, Farzad Aghazadeh & Bin Zheng (Advances in Autism)

      Understanding human actions often requires in-depth detection and interpretation of bio-signals. Early eye disengagement from the target (EEDT) represents a significant eye behavior that involves the proactive disengagement of the gazes from the target to gather information on the anticipated pathway, thereby enabling rapid reactions to the environment. It remains unknown how task difficulty and task repetition affect EEDT. We aim to provide direct evidence of how these factors influence EEDT. We developed a visual tracking task in which participants vie...

      Integration of Eye-Tracking and Object Detection in a Deep Learning System for Quality Inspection Analysis

      Seung-Wan Cho, Yeong-Hyun Lim, Kyung-Min Seo & Jungin Kim

      During quality inspection in manufacturing, the gaze of a worker provides pivotal information for identifying surface defects of a product. However, it is challenging to digitize the gaze information of workers in a dynamic environment where the positions and postures of the products and workers are not fixed. A robust, deep learning-based system, ISGOD (Integrated System with worker’s Gaze and Object Detection), is proposed, which analyzes data to determine which part of the object is observed by integrating object detection and eye-tracking information...

      An Overview of Approaches and Methods for the Cognitive Workload Estimation in Human–Machine Interaction Scenarios through Wearables Sensors

      Sabrina Iarlori, David Perpetuini, Michele Tritto, Daniela Cardone, Alessandro Tiberio, Manish Chinthakindi, Chiara Filippini, Luca Cavanini, Alessandro Freddi, Francesco Ferracuti, Arcangelo Merla & Andrea Monteriù

      Background: Human-Machine Interaction (HMI) has been an important field of research in recent years, since machines will continue to be embedded in many human activities in several contexts, such as industry and healthcare. Monitoring in an ecological manner the cognitive workload (CW) of users, who interact with machines, is crucial to assess their level of engagement in activities and the required effort, with the goal of preventing stressful circumstances. This study provides a comprehensive analysis of the assessment of CW using wearable sensors in HMI...

      Hazard warning modalities and timing thresholds for older drivers with impaired vision

      Jing Xu & Alex R. Bowers

      Purpose: We examined collision warning systems with different modalities and timing thresholds, assessing their impact on responses to pedestrian hazards by drivers with impaired contrast sensitivity (ICS). Methods: Seventeen ICS (70–84 y, median CS 1.35 log units) and 17 normal vision (NV: 68–73 y, median CS 1.95) participants completed 6 city drives in a simulator with 3 bimodal warnings: visual-auditory, visual-directional-tactile, and visual-non-directional-tactile. Each modality had one drive with early and one with late warnings, triggered at 3.5 s...


      The effects of interaction with audiovisual elements on perceived restoration in urban parks in freezing weather

      Ruining Zhang, Ling Zhu, Xinhao Yang, Rumei Han, Yuan Zhang & Jian Kang

      Urban green spaces, crucial for urban residents’ wellbeing, offer restorative benefits mainly through natural elements, as established in existing literature, though most of those studies were conducted in warm weather. Yet cold weather modifies how both natural and anthropogenic elements appear and function, so their impacts on restorativeness may differ. In cold weather, whether an urban green space would be overall restorative, and which type of elements would be more beneficial, remains poorly understood. Here we present the results of a walk experience ex...

      Understanding Relations Between Product Icon Type, Feature Type, and Abstraction: Evidence From ERPs and Eye-Tracking Studies

      Jinchun Wu, Yixuan Liu, Lulu Gan, Mu Tong & Chengqi Xue

      The representation and recognition of icons play a crucial role in interface interaction efficiency and user experience within human–computer interaction. However, the intricate relationship between product icon types, feature types, and abstraction in cognitive contexts has yet to be clarified. This study aimed to delve into the cognitive mechanisms concerning practical and hedonic product icons across varying abstraction levels using EEG analysis. Moreover, it investigated how the explicitness and implicitness of these icons and their abstraction level...

      The impact of varied correlated color temperatures on visual comfort in museum exhibitions: integrating physiological and subjective assessments

      Liang Qian, Xiwen Zeng, Xiaorong Liu & Li Peng

      Correlated Color Temperature (CCT) significantly influences mood, comfort, and potentially overall health. However, its impact on visitors’ visual experience in museum design remains insufficiently explored. This study aims to investigate the effects of different CCT settings (3000 K, 4500 K, 6000 K) on visual comfort within a simulated museum space. Two hundred participants assessed visual comfort in a 3D-modeled environment while physiological recordings were collected. Findings consistently show that a CCT of 4500 K provides the highest comfort level, aligning with the observed trend...

      Evaluation of mental load using EEG and eye movement characteristics

      Xin Zheng, Huiyu Wang, Tengteng Hao, Shoukun Chen, Kaili Xu & Yicheng Wang

      Mental load is a major cause of human-induced accidents. In this study, an explosive impact sensitivity experiment was used to induce mental load. A combination of subjective questionnaires and objective prospective time-distance tests was used to judge whether subjects experienced mental load. Four indicators, namely β, γ, mean pupil diameter, and fixation time, were selected by statistical analysis and PCA for the construction of a mental load assessment model. The study found that the occipital lobe was the most sensitive to mental load, especially β...
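As a rough illustration of the kind of indicator fusion the abstract describes, the sketch below standardizes four toy indicator columns and projects them onto the first principal component, found by power iteration on the sample covariance matrix. This is an assumption-laden sketch, not the paper's actual model; all data and function names are hypothetical.

```python
# Illustrative sketch (NOT the paper's model): combine standardized
# indicators (beta power, gamma power, mean pupil diameter, fixation
# time) into a single composite load score via the first principal
# component. All names and data here are hypothetical.

def zscore(col):
    """Standardize a column to zero mean, unit (population) variance."""
    n = len(col)
    m = sum(col) / n
    sd = (sum((v - m) ** 2 for v in col) / n) ** 0.5
    return [(v - m) / sd for v in col]

def covariance(cols):
    """Sample covariance matrix of already-centered columns."""
    n = len(cols[0])
    return [[sum(a * b for a, b in zip(ci, cj)) / n for cj in cols]
            for ci in cols]

def first_pc(cov, iters=200):
    """Power iteration: dominant eigenvector of a symmetric matrix."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in cov]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data: rows are trials, columns are the four indicators.
beta = [1.0, 2.0, 3.0, 4.0, 5.0]
gamma = [1.1, 1.9, 3.2, 3.8, 5.1]
pupil = [3.0, 3.1, 3.4, 3.6, 3.9]
fixation = [200, 220, 260, 270, 310]

cols = [zscore(c) for c in (beta, gamma, pupil, fixation)]
pc1 = first_pc(covariance(cols))
# Composite load score per trial: projection onto the first PC.
scores = [sum(w * cols[j][i] for j, w in enumerate(pc1))
          for i in range(len(beta))]
```

Because the toy indicators are strongly positively correlated, the first component loads them with a common sign and the composite score rises with the simulated load.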

      Investigating emotional design of the intelligent cockpit based on visual sequence data and improved LSTM

      Nanyi Wang, Di Shi, Zengrui Li, Pingting Chen & Xipei Ren

      To enhance affective experience and customer satisfaction in the intelligent cockpit of new energy vehicle (NEV-IC), this article proposes a novel method that combines the visual sequence data of eye movements with the sentiment prediction using improved Long Short-Term Memory (LSTM). Specifically, we used eye-tracking technology to capture users' visual sequence of design morphology for NEV-IC. We then adopted entropy-TOPSIS to compute the ranking of morphological components based on experts’ opinions, establishing the coupling between users' visual per...

      Pairing in-vehicle intelligent agents with different levels of automation: implications from driver attitudes, cognition, and behaviors in automated vehicles

      Manhua Wang, Seul Chan Lee & Myounghoon Jeon

      In-vehicle intelligent agents (IVIAs) have been developed to improve user experience in autonomous vehicles. Yet, the impact of the automation system on driver behavior and perception toward IVIAs is unclear. In this study, we conducted three experiments with 73 participants in a driving simulator to examine how automation system parameters (the level of automation system and IVIA features) influence driver attitudes, cognition, and behaviors when driving or riding in a simulated vehicle. We focused on subjective evaluations of driver-agent interaction a...

      An Early Warning Approach for Pilots’ Cognitive Tipping Points Based on Multi-Modal Signals

      Si Wang, Yadong Liu & Dewen Hu

      When executing complex missions in emergency scenarios, pilots’ cognitive state may deteriorate, posing significant challenges to flight safety and mission execution. This paper proposes a cross-subject and cross-session early warning approach based on small-sample and multi-modal signals for predicting cognitive collapse state. We design an experimental paradigm that has been demonstrated to induce cognitive collapse in 87% of trials by analyzing questionnaire scores, physiological signals, and task performance. The extracted multi-modal features are ...

      Visual Perception of Obstacles: Do Humans and Machines Focus on the Same Image Features?

      Constantinos Kyriakides, Marios Thoma, Zenonas Theodosiou, Harris Partaourides, Loizos Michael & Andreas Lanitis

      Contemporary cities are fractured by a growing number of barriers, such as on-going construction and infrastructure damages, which endanger pedestrian safety. Automated detection and recognition of such barriers from visual data has been of particular concern to the research community in recent years. Deep Learning (DL) algorithms are now the dominant approach in visual data analysis, achieving excellent results in a wide range of applications, including obstacle detection. However, explaining the underlying operations of DL models remains a key challeng...

      Pedestrians’ responses to scalable automated vehicles with different external human-machine interfaces: Evidence from a video-based eye-tracking experiment

      Wei Lyu, Wen-gang Zhang, Xueshuang Wang, Yi Ding & Xinyue Yang

      To enhance the efficiency and safety of interactions with pedestrians, numerous external Human-Machine Interface (eHMI) concepts for automated vehicles (AVs) have been proposed and evaluated, predominantly based on singular pedestrian-AV interaction scenarios. This leaves a gap in comprehending the efficiency and robustness of eHMIs during interactions with scalable AVs. To bridge the gap, this study pioneers an exploration of pedestrians' road-crossing decisions, perceived clarity, and gaze behaviour during synchronous interactions with multiple AVs e...

      Using mobile eye tracking to measure cognitive load through gaze behavior during walking in lower limb prosthesis users: A preliminary assessment

      Sabina Manz, Thomas Schmalz, Michael Ernst, Thomas Maximilian Köhler, Jose Gonzalez-Vargas & Strahinja Dosen

      Background: Lower limb amputation affects not only physical and psychological functioning; the use of a prosthetic device can also lead to increased cognitive demands. Measuring cognitive load objectively is challenging, and therefore, most studies use questionnaires that are easy to apply but can suffer from subjective bias. Motivated by this, the present study investigated whether a mobile eye tracker can be used to objectively measure cognitive load by monitoring gaze behavior during a set of motor tasks. Methods: Five prosthetic users and eigh...

      Single-pilot operations in commercial flight: Effects on neural activity and visual behaviour under abnormalities and emergencies

      Qinbiao LI, Chun-Hsien CHEN, Kam K.H. NG, Xin YUAN & Cho Yin YIU

      With cutting-edge technologies and the potential for airlines to save human resources, a single pilot in commercial jets could be technically feasible. Investigating changes in captains’ natural behaviours is initially required to comprehend the specific safe human performance envelope for safeguarding single-pilot flight, particularly in high-risk situations. This paper investigates how captains’ performance transforms when handling emergencies while moving from Dual-Pilot Operations (DPO) to Single-Pilot Operations (SPO), through a physiological-based approach. ...

      A multimodal physiological dataset for driving behaviour analysis

      Xiaoming Tao, Dingcheng Gao, Wenqi Zhang, Tianqi Liu, Bing Du, Shanghang Zhang & Yanjun Qin

      Physiological signal monitoring and driver behavior analysis have gained increasing attention in both fundamental and applied research. This study involved the analysis of driving behavior using multimodal physiological data collected from 35 participants. The data included 59-channel EEG, single-channel ECG, 4-channel EMG, single-channel GSR, and eye movement data obtained via a six-degree-of-freedom driving simulator. We categorized driving behavior into five groups: smooth driving, acceleration, deceleration, lane changing, and turning. Throu...

      The Impact of Transparency on Driver Trust and Reliance in Highly Automated Driving: Presenting Appropriate Transparency in Automotive HMI

      Jue Li, Jiawen Liu, Xiaoshan Wang & Long Liu

      Automation transparency offers a promising way for users to understand the uncertainty of automated driving systems (ADS) and to calibrate their trust in them. However, not all levels of information may be necessary to achieve transparency. In this study, we conceptualized the transparency of the automotive human–machine interfaces (HMIs) in three levels, using driving scenarios comprised of two degrees of urgency to evaluate drivers’ trust and reliance on a highly automated driving system. The dependent measures included non-driving related task (NDRT) ...

      Context‐aware hand gesture interaction for human–robot collaboration in construction

      Xin Wang, Dharmaraj Veeramani, Fei Dai & Zhenhua Zhu

      Construction robots play a pivotal role in enabling intelligent processes within the construction industry. User-friendly interfaces that facilitate efficient human–robot collaboration are essential for promoting robot adoption. However, most of the existing interfaces do not consider contextual information in the collaborative environment. The situation where humans and robots work together in the same jobsite creates a unique environmental context. Overlooking contextual information would limit the potential to optimize interaction efficiency. This pap...

      Modelling attention allocation and takeover performance in two-stage takeover system via a cognitive computational model: considering the role of multiple monitoring requests

      Lie Guo, Xu Wang, Linli Xu & Longxin Guan

      Studies have demonstrated two-stage takeover systems’ feasibility and advantages. However, existing cognitive models mainly focus on simulating drivers’ performance in single-stage takeover systems, with limited insights into cognitive modelling of effects of monitoring requests (MRs) within two-stage takeover systems. This study constructed a cognitive computational model for two-stage takeover systems based on queueing network-adaptive control of thought rational (QN-ACTR) architecture. Our model aims to capture variations in drivers’ attention allocat...

      Predictive modeling of gaze patterns in drivers: a machine learning approach with tobii glass 2

      Daniela Daniel Ndunguru, Liu Zhanwen, Chrispus Zacharia Oroni, Seth Mabyo kabamba, Arsenyan Ani, Moussa Sali, Gadi Gilleard Lyatuu & Aletas Athanas Haule

      Understanding and predicting drivers' gaze patterns is essential for improving road safety and optimizing in-vehicle displays. This study delves into the nuanced dynamics of drivers’ visual attention across varied road segments, employing both statistical analyses and machine learning models. Ten participants, spanning diverse demographics, participated in a real driving experiment, navigating curves and straight stretches while their eye movements were tracked using Tobii Pro Glasses 2. Statistical analysis unveiled significant variations in gaze behavi...

      A method to enhance drivers’ hazard perception at night based on knowledge-attitude-practice theory

      Bin Zhou, Zhongxiang Feng, Jing Liu, Zhipeng Huang & Ya Gao

      During nighttime driving, the inherent challenges of low-illuminance conditions often lead to an increased crash rate and higher fatalities by impairing drivers' ability to recognize imminent hazards. While the severity of this issue is widely recognized, a significant research void exists with regard to strategies to enhance hazard perception under such circumstances. To address this lacuna, our study examined the potential of an intervention grounded in the knowledge-attitude-practice (KAP) framework to bolster nighttime hazard detection among drivers....

      Eye movement analysis for real-world settings using segmented linear regression

      Kritika Johari, Rishabh Bhardwaj, Jung-Jae Kim, Wei Quin Yow & U-Xuan Tan

      Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, under uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy and make it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes an approach, Improved Naive Segmented linear regression (INSLR), which approximates the gaze coordinates with a piecewise linear fu...
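As a loose illustration of the general idea behind piecewise linear approximation of noisy gaze coordinates (not the authors' INSLR algorithm), the sketch below scans candidate breakpoints for a two-segment least-squares fit and keeps the split with the lowest combined squared error. All names and data are hypothetical.

```python
# Illustrative sketch (NOT the authors' INSLR): fit a two-segment
# piecewise linear model to noisy 1-D gaze coordinates by exhaustively
# scanning candidate breakpoints. Names and data are hypothetical.

def _fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b, sse)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def two_segment_fit(xs, ys, min_pts=3):
    """Return (breakpoint index, total SSE) minimizing combined error."""
    best = None
    for k in range(min_pts, len(xs) - min_pts + 1):
        _, _, e1 = _fit_line(xs[:k], ys[:k])
        _, _, e2 = _fit_line(xs[k:], ys[k:])
        if best is None or e1 + e2 < best[1]:
            best = (k, e1 + e2)
    return best

# Toy example: a gaze x-coordinate drifting slowly, then jumping to a
# new trend (e.g. the eyes disengaging to a new target).
t = list(range(10))
gaze_x = [0.0, 1.0, 2.1, 2.9, 4.0, 20.0, 22.1, 23.9, 26.0, 28.1]
k, err = two_segment_fit(t, gaze_x)  # breakpoint lands at the jump
```

Scanning every breakpoint is quadratic in the number of samples; a production approach would add noise-robust segment merging and handle more than two segments, which is where methods like INSLR improve on this naive baseline.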
