Enhancing Eye Movement Biometrics for User Authentication via Continuous Gaze Offset Score Fusion
Pith reviewed 2026-05-11 01:28 UTC · model grok-4.3
The pith
Fusing continuous gaze offset scores with other eye movement features improves user authentication accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that continuous gaze offset supplies complementary user-discriminative information that, when fused with standard eye movement biometric features through score-level combination, raises authentication performance on both lab-grade and virtual-reality datasets. Nonlinear fusion outperforms linear fusion, and pooling information across multiple tasks yields further gains. The results back the view that gaze offset functions as useful auxiliary data, especially when eye tracking is degraded or noisy.
What carries the argument
Score fusion, linear and nonlinear, of continuous gaze offset with existing biometric feature scores.
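The summary does not spell out the fusion rules themselves. A minimal sketch of one linear rule (weighted sum) and one simple nonlinear rule (product of min-max normalised scores) might look like this, assuming higher scores mean stronger matches; the weight and the product rule are illustrative, not the paper's:

```python
def linear_fusion(base, offset, w=0.7):
    """Weighted-sum (linear) score fusion; w is a hypothetical weight
    balancing base biometric scores against gaze-offset scores."""
    return [w * b + (1 - w) * o for b, o in zip(base, offset)]

def nonlinear_fusion(base, offset):
    """One simple nonlinear rule: the product of min-max normalised
    scores. The paper's actual nonlinear fuser is not reproduced here."""
    def norm(scores):
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]
    return [b * o for b, o in zip(norm(base), norm(offset))]
```

In practice a trained fuser (for example a small neural network over score pairs) would replace the fixed product rule.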
If this is right
- Nonlinear fusion methods produce larger accuracy gains than linear fusion on both datasets tested.
- Combining biometric scores across multiple tasks and observation lengths further raises authentication performance.
- Gaze offset serves as practical auxiliary information when eye tracking quality is reduced or noisy.
- The fusion approach works across different hardware, from laboratory eye trackers to virtual reality headsets.
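The multi-task pooling these points describe could be sketched as follows; the rule names and dictionary layout are illustrative assumptions, not the paper's implementation:

```python
def pool_across_tasks(task_scores, rule="mean"):
    """Fuse per-task similarity scores into one decision score.
    task_scores: mapping like {"reading": 0.82, "video": 0.64}."""
    vals = list(task_scores.values())
    if rule == "mean":
        return sum(vals) / len(vals)   # average evidence across tasks
    if rule == "max":
        return max(vals)               # keep the strongest single task
    raise ValueError(f"unknown pooling rule: {rule!r}")
```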
Where Pith is reading between the lines
- If the independence holds, fusion could be added to consumer eye-tracking devices to strengthen biometric security without extra sensors.
- Real-world systems might adapt the fusion rule according to the current task to keep error rates low during varied user activities.
- The same auxiliary-signal idea could be tried on other gaze-related or motion biometrics to handle noisy data.
Load-bearing premise
Continuous gaze offset carries information about users that is independent enough from the main eye movement features for fusion to add value instead of mere redundancy.
What would settle it
A replication on a new dataset, collected with comparable eye trackers, in which neither linear nor nonlinear fusion of gaze offset reduces the equal error rate below that of the baseline feature set alone would falsify the claimed performance benefit.
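Equal error rate, the metric this falsification test turns on, can be approximated from samples of genuine and impostor similarity scores. A sketch, assuming higher scores indicate a stronger match:

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores and return
    the operating point where false-accept and false-reject rates are
    closest (their average approximates the EER)."""
    best_gap, eer = None, None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```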
Original abstract
Eye movement biometrics (EMB) use subject-specific gaze dynamics for user authentication and identification. Recent deep learning-based EMB systems achieve strong performance by modeling temporal eye movement behavior. However, these systems typically overlook continuous gaze offset, despite prior evidence that it contains user-discriminative information. This work examines whether continuous gaze offset can improve biometric performance when combined with existing biometric features. We evaluate linear and nonlinear fusion methods on two publicly available datasets, collected with a lab-grade eye tracker and a virtual reality headset across multiple tasks and observation durations. Results indicate that fusion offers performance benefits on both datasets, particularly when using nonlinear fusion. Additionally, fusing biometric information across multiple tasks further improves authentication performance. These findings support the hypothesis that continuous gaze offset may serve as useful auxiliary information under conditions of degraded or noisy eye tracking.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that continuous gaze offset contains user-discriminative information that can be fused with existing temporal eye-movement features to improve authentication performance in eye movement biometrics. It evaluates linear and nonlinear fusion on two public datasets (lab-grade eye tracker and VR headset) across multiple tasks and durations, reporting that nonlinear fusion yields benefits and that multi-task fusion further improves results, supporting use of gaze offset as auxiliary information under noisy tracking conditions.
Significance. If the fusion gains prove robust and the offset feature supplies genuinely complementary information, the approach could provide a simple, hardware-agnostic way to strengthen EMB systems in practical settings such as VR or lower-quality trackers. The reliance on public datasets is a positive for reproducibility, yet the absence of detailed metrics and independence diagnostics limits the strength of the current evidence.
Major comments (2)
- Abstract: reports positive fusion results but provides no quantitative metrics, error bars, statistical tests, or details on feature extraction and fusion implementation, making it impossible to verify whether the claimed benefits are robust or affected by post-hoc choices.
- Methods: no explicit checks (pairwise correlations, mutual information, or ablation removing the offset component) are described to confirm that continuous gaze offset supplies user-discriminative information sufficiently independent from the base temporal EMB features. This is load-bearing for the central claim, as any fusion gain could arise from redundant information rather than new variance.
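One independence diagnostic of the kind asked for here is a Pearson correlation between the base-feature and gaze-offset score streams. The paper describes no such check; this sketch is purely illustrative:

```python
from math import sqrt

def pearson_r(base_scores, offset_scores):
    """Pearson correlation between two score streams; magnitudes near 0
    would suggest the offset scores carry variance the base features
    lack, supporting the complementarity premise."""
    n = len(base_scores)
    mx = sum(base_scores) / n
    my = sum(offset_scores) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(base_scores, offset_scores))
    sx = sqrt(sum((a - mx) ** 2 for a in base_scores))
    sy = sqrt(sum((b - my) ** 2 for b in offset_scores))
    return cov / (sx * sy)
```

A mutual-information estimate or an ablation (refitting the fuser without the offset stream) would probe the same question nonlinearly.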
Minor comments (1)
- Clarify the exact linear and nonlinear fusion implementations (e.g., via equations or pseudocode) and report per-dataset, per-task performance numbers with confidence intervals to support the multi-task claim.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which identify key opportunities to strengthen the clarity and evidentiary support in our manuscript. We address each major comment below and will revise the paper accordingly to incorporate additional details and analyses.
Point-by-point responses
Referee: Abstract: reports positive fusion results but provides no quantitative metrics, error bars, statistical tests, or details on feature extraction and fusion implementation, making it impossible to verify whether the claimed benefits are robust or affected by post-hoc choices.
Authors: We agree that the abstract is currently high-level and would benefit from quantitative support. In the revised manuscript, we will update the abstract to include specific performance metrics (such as EER reductions achieved via linear and nonlinear fusion), any available error bars or statistical test results, and concise details on feature extraction and fusion methods. This will improve verifiability while preserving the abstract's brevity. revision: yes
Referee: Methods: no explicit checks (pairwise correlations, mutual information, or ablation removing the offset component) are described to confirm that continuous gaze offset supplies user-discriminative information sufficiently independent from the base temporal EMB features. This is load-bearing for the central claim, as any fusion gain could arise from redundant information rather than new variance.
Authors: The referee is correct that the current version lacks explicit independence diagnostics. To address this, we will add to the Methods section an ablation analysis (comparing performance with and without the gaze offset component) along with pairwise correlation or mutual information metrics between the continuous gaze offset scores and the base temporal features. These additions will directly demonstrate complementarity and rule out redundancy as the source of observed gains. revision: yes
Circularity Check
No significant circularity; empirical evaluation is self-contained
Full rationale
The paper contains no equations, derivations, or parameter-fitting steps that could reduce claims to inputs by construction. Central results are obtained by applying linear and nonlinear fusion to existing biometric features plus continuous gaze offset on two independent public datasets, with performance measured via standard authentication metrics. No self-definitional loops, fitted-input predictions, or load-bearing self-citations appear; the hypothesis that gaze offset supplies auxiliary information is tested directly rather than assumed via prior author work. This is the expected outcome for an empirical fusion study without theoretical modeling.