pith. machine review for the scientific record.

arxiv: 2605.06810 · v1 · submitted 2026-05-07 · 💻 cs.HC · cs.CV

Recognition: no theorem link

Enhancing Eye Movement Biometrics for User Authentication via Continuous Gaze Offset Score Fusion

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 01:28 UTC · model grok-4.3

classification 💻 cs.HC cs.CV
keywords eye movement biometrics · gaze offset · score fusion · user authentication · nonlinear fusion · virtual reality · eye tracking · biometric enhancement

The pith

Fusing continuous gaze offset scores with other eye movement features improves user authentication accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Eye movement biometrics authenticate people through their distinctive gaze dynamics, yet most current systems ignore continuous gaze offset even though prior work shows it carries user-specific signals. This paper tests whether combining that offset with existing features via linear and nonlinear score fusion can raise performance. Experiments on two datasets, one from lab eye trackers and one from VR headsets, across varied tasks and durations, find that fusion helps, with nonlinear methods giving the clearest gains and multi-task fusion adding more. A sympathetic reader would care because this points to a low-cost way to make eye-based login more reliable when tracking quality drops, without new hardware.

Core claim

The paper establishes that continuous gaze offset supplies complementary user-discriminative information that, when fused with standard eye movement biometric features through score-level combination, raises authentication performance on both lab-grade and virtual-reality datasets. Nonlinear fusion outperforms linear fusion, and pooling information across multiple tasks yields further gains. The results back the view that gaze offset functions as useful auxiliary data especially when eye tracking is degraded or noisy.

What carries the argument

Score fusion, linear and nonlinear, of continuous gaze offset with existing biometric feature scores.
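
To make the machinery concrete, here is a minimal sketch of the two fusion families on synthetic verification scores. The score distributions, the weight value, and the interaction feature are illustrative assumptions, not the paper's implementation; only the general technique (a fixed convex score combination versus a learned nonlinear combiner) is taken from the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic verification trials: columns are [base EMB match score,
# continuous gaze offset score]; genuine pairs score higher on average.
genuine = rng.normal(loc=[0.8, 0.6], scale=0.15, size=(500, 2))
impostor = rng.normal(loc=[0.2, 0.3], scale=0.15, size=(500, 2))
scores = np.vstack([genuine, impostor])
labels = np.r_[np.ones(500), np.zeros(500)]

# Linear fusion: a fixed convex combination of the two channels.
w = 0.7  # illustrative weight; in practice tuned on a development set
linear_fused = w * scores[:, 0] + (1 - w) * scores[:, 1]

# Nonlinear fusion: a learned combiner over the score vector. Logistic
# regression on raw scores has a linear decision surface, so an
# interaction term is added to make the fused score nonlinear in them.
X = np.column_stack([scores, scores[:, 0] * scores[:, 1]])
clf = LogisticRegression().fit(X, labels)  # fit on dev data in practice
nonlinear_fused = clf.predict_proba(X)[:, 1]
```

The design point carried by the paper's claim is only that the combiner's shape matters: any monotone nonlinear combiner (kernel SVM, small MLP, interaction features as above) plays the same role in this sketch.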

If this is right

  • Nonlinear fusion methods produce larger accuracy gains than linear fusion on both datasets tested.
  • Combining biometric scores across multiple tasks and observation lengths further raises authentication performance.
  • Gaze offset serves as practical auxiliary information when eye tracking quality is reduced or noisy.
  • The fusion approach works across different hardware, from laboratory eye trackers to virtual reality headsets.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the independence holds, fusion could be added to consumer eye-tracking devices to strengthen biometric security without extra sensors.
  • Real-world systems might adapt the fusion rule according to the current task to keep error rates low during varied user activities; a minimal sketch of such a task-conditioned rule follows this list.
  • The same auxiliary-signal idea could be tried on other gaze-related or motion biometrics to handle noisy data.
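
As flagged in the second item above, a task-conditioned fusion rule could be as simple as a per-task weight table. The task names and weight values below are invented for illustration; real values would be selected on held-out data.

```python
# Hypothetical per-task fusion weights (assumed, not from the paper).
TASK_WEIGHTS = {"reading": 0.8, "video_viewing": 0.6, "random_saccades": 0.5}

def fuse_by_task(base_score: float, offset_score: float, task: str) -> float:
    """Convex score combination whose weight depends on the current task."""
    w = TASK_WEIGHTS.get(task, 0.7)  # fallback weight for unseen tasks
    return w * base_score + (1 - w) * offset_score
```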

Load-bearing premise

Continuous gaze offset carries information about users that is independent enough from the main eye movement features for fusion to add value instead of mere redundancy.
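
One way to probe this premise, sketched under the assumption that per-trial scores from both channels are available, is to report the Pearson correlation and an estimate of mutual information between the two score streams; low values would support complementarity rather than redundancy. The function and variable names are ours, not the paper's.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

def complementarity_report(base_scores, offset_scores):
    """Dependence diagnostics between the two score channels."""
    base = np.asarray(base_scores, dtype=float)
    offset = np.asarray(offset_scores, dtype=float)
    r, p = pearsonr(base, offset)
    # mutual_info_regression expects a 2D feature matrix.
    mi = mutual_info_regression(offset.reshape(-1, 1), base)[0]
    return {"pearson_r": r, "p_value": p, "mutual_info_nats": mi}
```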

What would settle it

A new dataset collected with comparable eye trackers in which neither linear nor nonlinear fusion of gaze offset reduces the equal error rate below that of the baseline feature set alone would falsify the claimed performance benefit.
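
To make that test operational, here is a standard equal-error-rate computation from labeled verification scores; the falsification condition above amounts to EER(fused) failing to drop below EER(baseline). This is a conventional ROC-based estimator, not code from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the point where false accept and false reject rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))  # closest crossing on the ROC grid
    return (fpr[i] + fnr[i]) / 2

# Falsification condition on a new dataset:
# equal_error_rate(y, fused) >= equal_error_rate(y, baseline)
# for both the linear and the nonlinear fusion rule.
```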

Figures

Figures reproduced from arXiv: 2605.06810 by Hashim Aziz, Mehedi Hasan Raju, Oleg V. Komogortsev.

Figure 1: Overview of the proposed eye movement biometric fusion pipeline, consisting of EKYT feature extraction …
Original abstract

Eye movement biometrics (EMB) use subject-specific gaze dynamics for user authentication and identification. Recent deep learning-based EMB systems achieve strong performance by modeling temporal eye movement behavior. However, these systems typically overlook continuous gaze offset, despite prior evidence that it contains user-discriminative information. This work examines whether continuous gaze offset can improve biometric performance when combined with existing biometric features. We evaluate linear and nonlinear fusion methods on two publicly available datasets, collected via a lab-grade eye tracker and a virtual reality headset across multiple tasks and observation durations. Results indicate that fusion offers performance benefits on both datasets, particularly when using nonlinear fusion. Additionally, fusing biometric information across multiple tasks further improves authentication performance. These findings support the hypothesis that continuous gaze offset may serve as useful auxiliary information under conditions of degraded or noisy eye tracking.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that continuous gaze offset contains user-discriminative information that can be fused with existing temporal eye-movement features to improve authentication performance in eye movement biometrics. It evaluates linear and nonlinear fusion on two public datasets (lab-grade eye tracker and VR headset) across multiple tasks and durations, reporting that nonlinear fusion yields benefits and that multi-task fusion further improves results, supporting use of gaze offset as auxiliary information under noisy tracking conditions.

Significance. If the fusion gains prove robust and the offset feature supplies genuinely complementary information, the approach could provide a simple, hardware-agnostic way to strengthen EMB systems in practical settings such as VR or lower-quality trackers. The reliance on public datasets is a positive for reproducibility, yet the absence of detailed metrics and independence diagnostics limits the strength of the current evidence.

major comments (2)
  1. Abstract: reports positive fusion results but provides no quantitative metrics, error bars, statistical tests, or details on feature extraction and fusion implementation, making it impossible to verify whether the claimed benefits are robust or affected by post-hoc choices.
  2. Methods: no explicit checks (pairwise correlations, mutual information, or ablation removing the offset component) are described to confirm that continuous gaze offset supplies user-discriminative information sufficiently independent from the base temporal EMB features. This is load-bearing for the central claim, as any fusion gain could arise from redundant information rather than new variance.
minor comments (1)
  1. Clarify the exact linear and nonlinear fusion implementations (e.g., via equations or pseudocode) and report per-dataset, per-task performance numbers with confidence intervals to support the multi-task claim.
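
The confidence intervals requested in the minor comment could come from a nonparametric bootstrap over verification trials, applied per dataset and per task. The resampling scheme below is our assumption (trial-level resampling, which ignores subject-level dependence); a subject-level bootstrap would be stricter.

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[i] + fnr[i]) / 2

def bootstrap_eer_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the EER, resampling whole trials."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    rng = np.random.default_rng(seed)
    n, stats = len(labels), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        if labels[idx].min() == labels[idx].max():
            continue  # skip degenerate resamples containing one class
        stats.append(eer(labels[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return eer(labels, scores), (lo, hi)
```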

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which identify key opportunities to strengthen the clarity and evidentiary support in our manuscript. We address each major comment below and will revise the paper accordingly to incorporate additional details and analyses.

Point-by-point responses
  1. Referee: Abstract: reports positive fusion results but provides no quantitative metrics, error bars, statistical tests, or details on feature extraction and fusion implementation, making it impossible to verify whether the claimed benefits are robust or affected by post-hoc choices.

    Authors: We agree that the abstract is currently high-level and would benefit from quantitative support. In the revised manuscript, we will update the abstract to include specific performance metrics (such as EER reductions achieved via linear and nonlinear fusion), any available error bars or statistical test results, and concise details on feature extraction and fusion methods. This will improve verifiability while preserving the abstract's brevity. revision: yes

  2. Referee: Methods: no explicit checks (pairwise correlations, mutual information, or ablation removing the offset component) are described to confirm that continuous gaze offset supplies user-discriminative information sufficiently independent from the base temporal EMB features. This is load-bearing for the central claim, as any fusion gain could arise from redundant information rather than new variance.

    Authors: The referee is correct that the current version lacks explicit independence diagnostics. To address this, we will add to the Methods section an ablation analysis (comparing performance with and without the gaze offset component) along with pairwise correlation or mutual information metrics between the continuous gaze offset scores and the base temporal features. These additions will directly demonstrate complementarity and rule out redundancy as the source of observed gains. revision: yes
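
The ablation the authors commit to could take the shape below: hold the pipeline fixed and compare equal error rates with and without the offset channel. The callables are placeholders for the authors' actual fusion rule and metric, not their code.

```python
def ablation_report(labels, base_scores, offset_scores, fuse, eer):
    """fuse: maps (base, offset) score arrays to fused scores.
    eer: maps (labels, scores) to an equal error rate."""
    eer_base = eer(labels, base_scores)
    eer_fused = eer(labels, fuse(base_scores, offset_scores))
    # delta <= 0 would indicate the offset channel adds only redundancy.
    return {"base": eer_base, "fused": eer_fused,
            "delta": eer_base - eer_fused}
```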

Circularity Check

0 steps flagged

No significant circularity; empirical evaluation is self-contained

Full rationale

The paper contains no equations, derivations, or parameter-fitting steps that could reduce claims to inputs by construction. Central results are obtained by applying linear and nonlinear fusion to existing biometric features plus continuous gaze offset on two independent public datasets, with performance measured via standard authentication metrics. No self-definitional loops, fitted-input predictions, or load-bearing self-citations appear; the hypothesis that gaze offset supplies auxiliary information is tested directly rather than assumed via prior author work. This is the expected outcome for an empirical fusion study without theoretical modeling.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the work implicitly assumes that gaze offset is an independent biometric channel and that standard fusion techniques transfer without domain-specific tuning.

pith-pipeline@v0.9.0 · 5443 in / 1054 out tokens · 29107 ms · 2026-05-11T01:28:23.464342+00:00 · methodology

