pith. machine review for the scientific record.

arxiv: 2605.13396 · v1 · submitted 2026-05-13 · 💻 cs.CV

Recognition: no theorem link

PreFIQs: Face Image Quality Is What Survives Pruning

Andrea Atzori, Fadi Boutros, Guray Ozgur, Jan Niklas Kolf, Naser Damer, Vitomir Štruc, Žiga Babnik

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 20:46 UTC · model grok-4.3

classification 💻 cs.CV
keywords face image quality assessment · FIQA · model pruning · embedding drift · unsupervised quality · face recognition · parameter sensitivity · sparsification

The pith

Face image quality is measured by the embedding shift that occurs when a face recognition model is pruned: the smaller the shift, the higher the quality.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

PreFIQs offers an unsupervised way to score how useful a face image is for recognition systems. It calculates the Euclidean distance between the embeddings produced by a complete pre-trained model and a pruned version of the same model. The core hypothesis is that images with low utility depend on parameters that are fragile and get removed during pruning, causing bigger changes in their representation. This distance serves as a practical approximation of how sensitive the image is within the model's embedding space. The method requires no training and performs at or above the level of existing techniques on standard benchmarks.
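The scoring recipe is simple enough to sketch end to end. Below is a minimal numpy illustration, not the authors' implementation: the single-layer "model", the dimensions, and the random input are stand-ins, and only the pruning setup (unstructured L1 magnitude pruning with ratio 0.4) follows the paper's figure captions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face recognition (FR) backbone: a single linear map
# to an embedding. W is a hypothetical "pre-trained" weight matrix.
W = rng.normal(size=(128, 512))  # 512-dim input -> 128-dim embedding

def embed(x, weights):
    """L2-normalized embedding of input x under the given weights."""
    z = weights @ x
    return z / np.linalg.norm(z)

def prune_l1(weights, ratio):
    """Unstructured L1 magnitude pruning: zero the smallest-|w| fraction."""
    threshold = np.quantile(np.abs(weights).ravel(), ratio)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def prefiqs_score(x, weights, ratio=0.4):
    """Pruning-induced embedding drift: smaller drift means more stable
    identity encoding, hence higher image utility."""
    pruned = prune_l1(weights, ratio)
    return np.linalg.norm(embed(x, weights) - embed(x, pruned))

x = rng.normal(size=512)  # stand-in for one face image's features
drift = prefiqs_score(x, W)
print(f"pruning-induced drift: {drift:.4f}")
```

With a real FR backbone the same three steps apply: prune a copy of the network, embed the image with both copies, and take the distance between the L2-normalized embeddings.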

Core claim

PreFIQs quantifies image utility as the Euclidean distance between L2-normalized embeddings extracted from a pre-trained FR model and its pruned counterpart. A first-order theoretical justification shows that this drift approximates the geometric sensitivity of the latent embedding manifold through Jacobian-vector product analysis. Across eight benchmarks and four FR models, the approach achieves competitive or superior performance to state-of-the-art FIQA methods without any training or supervision, validating parameter sparsification as a signal for face image utility.
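The first-order link between pruning drift and embedding-manifold sensitivity can be written out explicitly. The sketch below uses our own symbols rather than quoting the paper's Eqs. (3)–(9), treating pruning as a parameter perturbation:

```latex
% Pruning as a parameter perturbation: \hat\theta = \theta + \Delta\theta.
% First-order Taylor expansion of the embedding around \theta:
f_{\hat\theta}(x) \;\approx\; f_{\theta}(x) + J_{\theta}(x)\,\Delta\theta,
\qquad J_{\theta}(x) = \frac{\partial f_{\theta}(x)}{\partial \theta},
% so the empirical drift used as the PreFIQs score is, to first order,
% the norm of a Jacobian-vector product along the pruning direction:
\left\lVert f_{\hat\theta}(x) - f_{\theta}(x) \right\rVert_2
\;\approx\; \left\lVert J_{\theta}(x)\,\Delta\theta \right\rVert_2 .
```

The referee's point about finite versus infinitesimal perturbations enters here: the expansion is exact only as \(\Delta\theta \to 0\), while pruning produces a finite \(\Delta\theta\).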

What carries the argument

The Pruning Identified Exemplar (PIE) hypothesis, which holds that low-utility face images rely disproportionately on fragile network parameters and therefore exhibit larger embedding displacements under sparsification.

If this is right

  • PreFIQs achieves competitive or superior performance compared to state-of-the-art FIQA methods across multiple benchmarks.
  • New state-of-the-art results are established on several face image quality assessment datasets.
  • The framework requires no training data or supervision of any kind.
  • Parameter sparsification acts as a computationally efficient proxy for determining image utility in face recognition.
  • Face image quality can be understood as the component of the image representation that remains stable under model pruning.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Pruning-based sensitivity measures could extend to evaluating data quality for other recognition tasks such as object or speech recognition.
  • Lightweight pruning operations might enable on-the-fly quality filtering in deployed face recognition systems.
  • This view suggests that model compression can serve as a diagnostic tool for identifying problematic inputs.
  • Exploring alternative pruning methods could yield refined signals for different aspects of image utility.

Load-bearing premise

Low-utility face images depend disproportionately on the network parameters that are removed during pruning, which produces larger shifts in their embeddings than for high-utility images.
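The premise can be made concrete in a toy setting. In the sketch below, which is our own illustration rather than an experiment from the paper, a "fragile" input routes through small-magnitude weights that magnitude pruning removes, while a "robust" input routes through large weights; the fragile input's normalized embedding drifts far more.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear embedder: the first 50 input dimensions flow through large
# ("sturdy") weights, the last 50 through small ("fragile") weights that
# magnitude pruning removes first.
W = np.concatenate([rng.normal(0.0, 1.0, size=(16, 50)),
                    rng.normal(0.0, 0.05, size=(16, 50))], axis=1)

def embed(x, weights):
    z = weights @ x
    return z / np.linalg.norm(z)

def prune_l1(weights, ratio):
    thr = np.quantile(np.abs(weights).ravel(), ratio)
    return np.where(np.abs(weights) >= thr, weights, 0.0)

W_hat = prune_l1(W, 0.4)  # zeroes mostly the fragile half of the weights

x_robust = np.concatenate([rng.normal(size=50), np.zeros(50)])   # uses sturdy weights
x_fragile = np.concatenate([np.zeros(50), rng.normal(size=50)])  # uses fragile weights

drift_robust = np.linalg.norm(embed(x_robust, W) - embed(x_robust, W_hat))
drift_fragile = np.linalg.norm(embed(x_fragile, W) - embed(x_fragile, W_hat))
print(f"robust drift:  {drift_robust:.4f}")
print(f"fragile drift: {drift_fragile:.4f}")  # markedly larger
```

Real face images obviously do not partition their reliance this cleanly; the referee's second major comment asks for exactly the ablations that would separate this mechanism from generic sensitivity.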

What would settle it

Finding a dataset where the embedding distance after pruning shows no correlation with actual recognition accuracy or with established quality labels would disprove the utility of the measure.

Figures

Figures reproduced from arXiv: 2605.13396 by Andrea Atzori, Fadi Boutros, Guray Ozgur, Jan Niklas Kolf, Naser Damer, Vitomir Štruc, Žiga Babnik.

Figure 1
Figure 1: Given face images, e.g., x_i, x_j, and x_k, we extract their L2-normalized embeddings using a pre-trained FR and its sparsified counterpart. FIQ is quantified, for each image, as the Euclidean distance between its corresponding embeddings, measuring the pruning-induced representation drift. Smaller drift indicates stable identity encoding and thus higher image utility, while larger drift reflects structural … view at source ↗
Figure 2
Figure 2: Density maps using the SynFIQA [38] dataset (550k images), and their proxy labels (x-axis, higher value indicates higher utility) versus various FIQA predictions (y-axis). Figs. 2a and 2b validate our approximation (Eq. 9), showing consistent distributions between Jacobian-based drift (Eq. 9) and empirical pruned-model distance (Eq. 4, lower indicates higher utility). Figs. 2c–2e compare our normalized, unsupe… view at source ↗
Figure 4
Figure 4: Comparison of EDC curves (FNMR at FMR = 1e−3) of PreFIQs against recent FIQA approaches. The results are shown for four FR models on eight benchmarks. Unsupervised approaches are visualized using dotted lines. Supervised methods are visualized with dashed lines. PreFIQs is visualized using a continuous line with shaded AUC. For PreFIQs, unstructured L1 magnitude pruning with ρ = 0.4 is used. view at source ↗
Figure 5
Figure 5: Comparison of EDC curves (FNMR at FMR = 1e−4) of PreFIQs against recent FIQA approaches. The results are shown for four FR models on eight benchmarks. Unsupervised approaches are visualized using dotted lines. Supervised methods are visualized with dashed lines. PreFIQs is visualized using a continuous line with shaded AUC. For PreFIQs, unstructured L1 magnitude pruning with ρ = 0.4 is used. view at source ↗
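For readers unfamiliar with EDC (error-versus-discard characteristic) curves: at a fixed decision threshold, the curve plots FNMR among the genuine comparison pairs that remain after discarding the lowest-quality fraction. A minimal sketch with synthetic data (the scores, quality values, and threshold below are all invented for illustration; in practice the threshold is set on impostor scores to hit the target FMR):

```python
import numpy as np

def edc_curve(genuine_scores, pair_quality, threshold, discard_fracs):
    """FNMR among the genuine pairs surviving each discard fraction,
    at a fixed similarity threshold."""
    order = np.argsort(pair_quality)          # ascending: lowest quality first
    g = np.asarray(genuine_scores)[order]
    n = len(g)
    fnmr = []
    for frac in discard_fracs:
        kept = g[int(frac * n):]              # drop the lowest-quality fraction
        fnmr.append(float(np.mean(kept < threshold)) if len(kept) else 0.0)
    return np.array(fnmr)

rng = np.random.default_rng(1)
quality = rng.uniform(0.0, 1.0, 2000)
# Hypothetical genuine similarity scores that improve with image quality.
scores = 0.4 + 0.4 * quality + rng.normal(0.0, 0.1, 2000)
curve = edc_curve(scores, quality, threshold=0.6,
                  discard_fracs=np.linspace(0.0, 0.5, 6))
print(curve)  # FNMR falls as more low-quality pairs are discarded
```

A good quality measure yields a steeply falling curve (small shaded AUC in the figures above), since it discards exactly the pairs that would have been falsely non-matched.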
read the original abstract

Face Image Quality Assessment (FIQA) evaluates the utility of a face image for automated face recognition (FR) systems. In this work, we propose PreFIQs, an unsupervised and training-free FIQA framework grounded in the Pruning Identified Exemplar (PIE) hypothesis. We hypothesize that low-utility face images rely disproportionately on fragile network parameters, resulting in larger geometric displacement of their embeddings under model sparsification. Accordingly, PreFIQs quantifies image utility as the Euclidean distance between L2-normalized embeddings extracted from a pre-trained FR model and its pruned counterpart. We provide a first-order theoretical justification via a Jacobian-vector product analysis, demonstrating that this empirical drift serves as a computationally efficient approximation of the exact geometric sensitivity of the latent embedding manifold. Extensive experiments across eight benchmarks and four FR models demonstrate that PreFIQs achieves competitive or superior performance compared to state-of-the-art FIQA methods, including establishing new state-of-the-art results on several benchmarks, without any training or supervision. These results validate parameter sparsification as a principled and practically efficient signal for face image utility, and demonstrate that quality is, in essence, what survives pruning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes PreFIQs, a training-free and unsupervised FIQA method grounded in the Pruning Identified Exemplar (PIE) hypothesis. It quantifies face image utility as the Euclidean distance between L2-normalized embeddings from a pre-trained FR model and its pruned counterpart, justified via a first-order Jacobian-vector product analysis approximating geometric sensitivity. Experiments on eight benchmarks and four FR models show competitive or superior performance to SOTA FIQA methods, with new SOTA results on several, validating sparsification as a signal for image utility.

Significance. If the central claim holds, the work is significant for introducing a parameter-sensitivity view of image quality that requires no training or supervision, achieving strong benchmark results through a simple post-pruning embedding drift metric. The Jacobian justification and cross-model validation provide a mechanistic angle that could generalize beyond FIQA to other embedding-based tasks, though the finite-pruning regime needs tighter linkage to the infinitesimal analysis.

major comments (3)
  1. [§3.2] §3.2 (PIE hypothesis and pruning definition): The claim that low-utility images rely on 'fragile network parameters' leading to larger embedding drift is load-bearing, yet the manuscript does not specify the pruning criterion (e.g., magnitude-based, gradient-based), sparsity ratio, or whether pruning is unstructured/structured. Without these, the Euclidean distance cannot be reproduced exactly and the finite-pruning results may not follow from the first-order Jacobian approximation, which holds only for infinitesimal perturbations.
  2. [§4.3] §4.3 (experimental validation): The reported SOTA gains on several benchmarks are presented without ablations isolating whether drift magnitude tracks per-image parameter importance versus global factors such as embedding norm or input noise level. This leaves open whether the method reduces to generic sensitivity rather than confirming the PIE hypothesis.
  3. [Eq. (3)] Eq. (3) (Jacobian-vector product): The first-order approximation is derived, but the transition to finite pruning (whose magnitude is not quantified) is not bounded; a concrete error term or empirical check showing that higher-order terms remain negligible for the chosen sparsity levels is needed to support the 'computationally efficient approximation' claim.
minor comments (2)
  1. [§3.1] Notation for the pruned model (e.g., f_θ' vs. f_θ̂) is introduced inconsistently across sections; standardize and define once in §3.1.
  2. [Figure 2] Figure 2 (embedding drift visualization): Axis scales and normalization details are unclear; add explicit L2-norm confirmation and units for the distance metric.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the thoughtful and constructive comments, which have helped us identify areas where the manuscript can be strengthened. We address each major comment point by point below, indicating the revisions we will incorporate.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (PIE hypothesis and pruning definition): The claim that low-utility images rely on 'fragile network parameters' leading to larger embedding drift is load-bearing, yet the manuscript does not specify the pruning criterion (e.g., magnitude-based, gradient-based), sparsity ratio, or whether pruning is unstructured/structured. Without these, the Euclidean distance cannot be reproduced exactly and the finite-pruning results may not follow from the first-order Jacobian approximation, which holds only for infinitesimal perturbations.

    Authors: We agree that explicit specification of the pruning procedure is essential for reproducibility and for clarifying the connection to the Jacobian analysis. In our experiments we used magnitude-based unstructured pruning, removing the 40% of weights with the smallest absolute values independently per layer. We will add a new paragraph in §3.2 that states the criterion, the fixed sparsity ratio of ρ = 0.4, and the unstructured nature of the pruning. We will also include a short discussion explaining why the finite-pruning drift remains a useful proxy for the first-order sensitivity even though the Jacobian derivation is infinitesimal; specifically, we will note that the ranking of images by drift is preserved across a range of moderate sparsity levels. revision: yes

  2. Referee: [§4.3] §4.3 (experimental validation): The reported SOTA gains on several benchmarks are presented without ablations isolating whether drift magnitude tracks per-image parameter importance versus global factors such as embedding norm or input noise level. This leaves open whether the method reduces to generic sensitivity rather than confirming the PIE hypothesis.

    Authors: We acknowledge that additional controls are needed to isolate the contribution of the PIE hypothesis. We will expand §4.3 with three new ablation studies: (1) comparison against raw embedding norm, (2) drift under random (non-magnitude) pruning at the same sparsity, and (3) drift under additive Gaussian noise of matched magnitude. These results will be reported with statistical significance tests across the same eight benchmarks. Preliminary internal checks suggest that PreFIQs outperforms these baselines, but the full tables will be added to demonstrate that the observed utility signal is tied to parameter fragility rather than generic sensitivity measures. revision: yes

  3. Referee: [Eq. (3)] Eq. (3) (Jacobian-vector product): The first-order approximation is derived, but the transition to finite pruning (whose magnitude is not quantified) is not bounded; a concrete error term or empirical check showing that higher-order terms remain negligible for the chosen sparsity levels is needed to support the 'computationally efficient approximation' claim.

    Authors: We agree that an empirical validation of the approximation quality is warranted. We will augment the discussion surrounding Eq. (3) with a new figure and accompanying text that plots the absolute difference between the first-order Jacobian-vector product and the actual finite-pruning embedding drift for sparsity ratios ranging from 10% to 70%. This will show that, for the ρ = 0.4 sparsity used in the main experiments, the higher-order contributions are small relative to the first-order term and do not alter the relative ordering of image utilities. If space allows, we will also provide a brief Lipschitz-based error bound to complement the empirical evidence. revision: yes
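The check the rebuttal describes is straightforward to prototype. The sketch below is our own construction, with a toy tanh embedder standing in for an FR model: it compares the finite pruning drift against a first-order estimate obtained via a finite-difference Jacobian-vector product along the pruning direction, across a range of sparsity ratios.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 256)) * 0.1  # toy weights standing in for an FR model

def embed(x, weights):
    """L2-normalized embedding from a one-layer tanh network."""
    z = np.tanh(weights @ x)
    return z / np.linalg.norm(z)

def prune_l1(weights, ratio):
    """Unstructured L1 magnitude pruning at the given sparsity ratio."""
    thr = np.quantile(np.abs(weights).ravel(), ratio)
    return np.where(np.abs(weights) >= thr, weights, 0.0)

x = rng.normal(size=256)

for ratio in (0.1, 0.3, 0.5, 0.7):
    W_hat = prune_l1(W, ratio)
    delta = W_hat - W                     # pruning-induced parameter change
    # Finite-pruning drift (the analogue of the paper's empirical distance):
    drift = np.linalg.norm(embed(x, W_hat) - embed(x, W))
    # First-order estimate: JVP along delta via a small finite difference.
    eps = 1e-4
    jvp = (embed(x, W + eps * delta) - embed(x, W)) / eps
    first_order = np.linalg.norm(jvp)
    print(f"rho={ratio:.1f}  drift={drift:.4f}  first-order={first_order:.4f}")
```

At low sparsity the two quantities should agree closely; the interesting question, which the proposed figure would answer for real FR models, is how fast they separate as ρ grows.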

Circularity Check

0 steps flagged

No circularity detected in derivation chain

full rationale

The paper explicitly defines PreFIQs as the Euclidean distance between L2-normalized embeddings from a pre-trained FR model and its pruned version, then supports this choice via an independently derived first-order Jacobian-vector product approximation that links the distance to local sensitivity of the embedding manifold. The PIE hypothesis is presented as an assumption to motivate the definition rather than as a result derived from self-citation or prior fitted parameters. No equations reduce the claimed utility metric to its inputs by construction, no parameters are fitted to benchmark outcomes, and validation relies on external benchmark comparisons rather than tautological renaming or self-referential justification. The derivation is self-contained, and its validation rests on external data.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claim rests on the PIE hypothesis as a domain assumption and the first-order Jacobian approximation as justification; no free parameters or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption PIE hypothesis: low-utility face images rely disproportionately on fragile network parameters
    Explicitly stated as the grounding hypothesis for the method.

pith-pipeline@v0.9.0 · 5532 in / 1172 out tokens · 36922 ms · 2026-05-14T20:46:31.142595+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

56 extracted references · 5 canonical work pages · 2 internal anchors

  1. [1]

    Deep network pruning: A comparative study on cnns in face recognition

    Fernando Alonso-Fernandez, Kevin Hernandez-Diaz, Jose Maria Buades Rubio, Prayag Tiwari, and Josef Bigun. Deep network pruning: A comparative study on cnns in face recognition. Pattern Recognition Letters, 189:221–228, 2025

  2. [2]

    ViT-FIQA: Assessing face image quality using vision transformers

    Andrea Atzori, Fadi Boutros, and Naser Damer. ViT-FIQA: Assessing face image quality using vision transformers. In 2025 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2025

  3. [3]

    FaceQAN: Face image quality assessment through adversarial noise exploration

    Žiga Babnik, Peter Peer, and Vitomir Štruc. FaceQAN: Face image quality assessment through adversarial noise exploration. In 2022 26th International Conference on Pattern Recognition (ICPR), pages 748–754, 2022

  4. [4]

    DifFIQA: Face image quality assessment using denoising diffusion probabilistic models

    Žiga Babnik, Peter Peer, and Vitomir Štruc. DifFIQA: Face image quality assessment using denoising diffusion probabilistic models. In 2023 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10, 2023

  5. [5]

    eDifFIQA: Towards Efficient Face Image Quality Assessment based on Denoising Diffusion Probabilistic Models

    Žiga Babnik, Peter Peer, and Vitomir Štruc. eDifFIQA: Towards Efficient Face Image Quality Assessment based on Denoising Diffusion Probabilistic Models. IEEE Transactions on Biometrics, Behavior, and Identity Science (TBIOM), 2024

  6. [6]

    FROQ: Observing Face Recognition Models for Efficient Quality Assessment

    Žiga Babnik, Deepak Kumar Jain, Peter Peer, and Vitomir Štruc. FROQ: Observing Face Recognition Models for Efficient Quality Assessment. 2025

  7. [7]

    Elasticface: Elastic margin loss for deep face recognition

    Fadi Boutros, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper. Elasticface: Elastic margin loss for deep face recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022, pages 1577–1586. IEEE, 2022

  8. [8]

    CR-FIQA: face image quality assessment by learning sample relative classifiability

    Fadi Boutros, Meiling Fang, Marcel Klemt, Biying Fu, and Naser Damer. CR-FIQA: face image quality assessment by learning sample relative classifiability. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 5836–5845. IEEE, 2023

  9. [9]

    Face image quality assessment based on learning to rank

    Jiansheng Chen, Yu Deng, Gaocheng Bai, and Guangda Su. Face image quality assessment based on learning to rank. IEEE Signal Process. Lett., 22(1):90–94, 2015

  10. [10]

    DSL-FIQA: Assessing facial image quality via dual-set degradation learning and landmark-guided transformer

    Wei-Ting Chen, Gurunandan Krishnan, Qiang Gao, Sy-Yen Kuo, Sizhuo Ma, and Jian Wang. DSL-FIQA: Assessing facial image quality via dual-set degradation learning and landmark-guided transformer. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2931–2941, 2024

  11. [11]

    Arcface: Additive angular margin loss for deep face recognition

    Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 4690–

  12. [12]

    Computer Vision Foundation / IEEE, 2019

  13. [13]

    Age and gender estimation of unfiltered faces

    Eran Eidinger, Roee Enbar, and Tal Hassner. Age and gender estimation of unfiltered faces. IEEE Trans. Inf. Forensics Secur., 9(12):2170–2179, 2014

  14. [14]

    Depgraph: Towards any structural pruning

    Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, and Xinchao Wang. Depgraph: Towards any structural pruning. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16091–16101, 2023

  15. [15]

    Pruning neural networks at initialization: Why are we missing the mark?

    Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, and Michael Carbin. Pruning neural networks at initialization: Why are we missing the mark? In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021

  16. [16]

    A deep insight into measuring face image utility with general and face-specific image quality metrics

    Biying Fu, Cong Chen, Olaf Henniger, and Naser Damer. A deep insight into measuring face image utility with general and face-specific image quality metrics. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pages 1121–1130. IEEE, 2022

  17. [17]

    Performance of biometric quality measures

    P. Grother and E. Tabassi. Performance of biometric quality measures. IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(4):531–543, 2007

  18. [18]

    Ongoing face recognition vendor test (FRVT) part 5: Face image quality assessment (4th draft)

    P. Grother, M. Ngan, A. Hom, and K. Hanaoka. Ongoing face recognition vendor test (FRVT) part 5: Face image quality assessment (4th draft). In National Institute of Standards and Technology. Tech. Rep., Sep. 2021

  19. [19]

    Ms-celeb-1m: A dataset and bench- mark for large-scale face recognition

    Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III, pages 87–102. Springer, 2016

  20. [20]

    Deep residual learning for image recognition

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016

  21. [21]

    FaceQnet: Quality assessment for face recognition based on deep learning

    Javier Hernandez-Ortega, Javier Galbally, Julian Fiérrez, Rudolf Haraksim, and Laurent Beslay. FaceQnet: Quality assessment for face recognition based on deep learning. In 2019 International Conference on Biometrics, ICB 2019, Crete, Greece, June 4-7, 2019, pages 1–8. IEEE, 2019

  22. [22]

    Biometric quality: Review and application to face recognition with FaceQnet

    Javier Hernandez-Ortega, Javier Galbally, Julian Fiérrez, and Laurent Beslay. Biometric quality: Review and application to face recognition with FaceQnet. CoRR, abs/2006.03298, 2020

  23. [23]

    Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks

    Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res., 22(1), 2021

  24. [24]

    What do compressed deep neural networks forget?

    Sara Hooker, Aaron C. Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. What do compressed deep neural networks forget? arXiv: Learning, 2019

  25. [25]

    Labeled faces in the wild: A database for studying face recognition in unconstrained environments

    Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, 2007

  26. [26]

    Curricularface: Adaptive curriculum learning loss for deep face recognition

    Yuge Huang, Yuhan Wang, Ying Tai, Xiaoming Liu, Pengcheng Shen, Shaoxin Li, Jilin Li, and Feiyue Huang. Curricularface: Adaptive curriculum learning loss for deep face recognition. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 5900–5909. Computer Vision Foundation / IEEE, 2020

  27. [27]

    ISO/IEC TR 29794-5:2010 Information technology - Biometric sample quality - Part 5: Face image data

    ISO/IEC JTC1 SC37 Biometrics. ISO/IEC TR 29794-5:2010 Information technology - Biometric sample quality - Part 5: Face image data. International Organization for Standardization, 2010

  28. [28]

    ISO/IEC 19795-1:2021 Information technology — Biometric performance testing and reporting — Part 1: Principles and framework

    ISO/IEC JTC1 SC37 Biometrics. ISO/IEC 19795-1:2021 Information technology — Biometric performance testing and reporting — Part 1: Principles and framework. International Organization for Standardization, 2021

  29. [29]

    Self-damaging contrastive learning

    Zhiyu Jiang, Zhe Liu, Chen Sun, Yantao Shen, Xiaohua Xue, Hongyuan Zha, and Zhiwu Huang. Self-damaging contrastive learning. In International Conference on Machine Learning (ICML), 2021

  30. [30]

    AdaFace: Quality adaptive margin for face recognition

    Minchul Kim, Anil K. Jain, and Xiaoming Liu. AdaFace: Quality adaptive margin for face recognition. In CVPR, pages 18729–18738. IEEE, 2022

  31. [31]

    Cross-quality LFW: A database for analyzing cross-resolution image face recognition in unconstrained environments

    Martin Knoche, Stefan Hörmann, and Gerhard Rigoll. Cross-quality LFW: A database for analyzing cross-resolution image face recognition in unconstrained environments. In 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Jodhpur, India, December 15-18, 2021, pages 1–5. IEEE, 2021

  32. [32]

    GraFIQs: Face image quality assessment using gradient magnitudes

    Jan Niklas Kolf, Naser Damer, and Fadi Boutros. GraFIQs: Face image quality assessment using gradient magnitudes. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1490–1499, 2024

  33. [33]

    Pruning and quantization for deep neural network acceleration: A survey

    Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing, 461:370–403, 2021

  34. [34]

    SphereFace: Deep hypersphere embedding for face recognition

    Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. SphereFace: Deep hypersphere embedding for face recognition. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, pages 212–220, 2017

  35. [35]

    IARPA janus benchmark - C: face dataset and protocol

    Brianna Maze, Jocelyn C. Adams, James A. Duncan, Nathan D. Kalka, Tim Miller, Charles Otto, Anil K. Jain, W. Tyler Niggel, Janet Anderson, Jordan Cheney, and Patrick Grother. IARPA janus benchmark - C: face dataset and protocol. In 2018 International Conference on Biometrics, ICB 2018, Gold Coast, Australia, February 20-23, 2018, pages 158–165. IEEE, 2018

  36. [36]

    Magface: A universal representation for face recognition and quality assessment

    Qiang Meng, Shichao Zhao, Zhida Huang, and Feng Zhou. Magface: A universal representation for face recognition and quality assessment. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 14225–14234. Computer Vision Foundation / IEEE, 2021

  37. [37]

    Agedb: The first manually collected, in-the-wild age database

    Stylianos Moschoglou, Athanasios Papaioannou, Christos Sagonas, Jiankang Deng, Irene Kotsia, and Stefanos Zafeiriou. Agedb: The first manually collected, in-the-wild age database. In 2017 IEEE CVPRW, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1997–2005. IEEE Computer Society, 2017

  38. [38]

    SDD-FIQA: unsupervised face image quality assessment with similarity distribution distance

    Fu-Zhao Ou, Xingyu Chen, Ruixin Zhang, Yuge Huang, Shaoxin Li, Jilin Li, Yong Li, Liujuan Cao, and Yuan-Gen Wang. SDD-FIQA: unsupervised face image quality assessment with similarity distribution distance. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 7670–7679. Computer Vision Foundation / IEEE, 2021

  39. [39]

    MR-FIQA: face image quality assessment with multi-reference representations from synthetic data generation

    Fu-Zhao Ou, Chongyi Li, Shiqi Wang, and Sam Kwong. MR-FIQA: face image quality assessment with multi-reference representations from synthetic data generation. In IEEE/CVF International Conference on Computer Vision, ICCV 2025, Honolulu, Hawaii, USA, October 19-23, 2025, pages 12915–12925. Computer Vision Foundation / IEEE, 2025

  40. [40]

    Clib-fiqa: Face image quality assessment with confidence calibration

    Fu-Zhao Ou, Chongyi Li, Shiqi Wang, and Sam Kwong. Clib-fiqa: Face image quality assessment with confidence calibration. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1694–1704, 2024

  41. [41]

    Clib-fiqa: Face image quality assessment with confidence calibration

    Fu-Zhao Ou, Chongyi Li, Shiqi Wang, and Sam Kwong. Clib-fiqa: Face image quality assessment with confidence calibration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1694–1704, 2024

  42. [42]

    Vitnt-fiqa: Training-free face image quality assessment with vision transformers, 2026

    Guray Ozgur, Eduarda Caldeira, Tahar Chettaoui, Jan Niklas Kolf, Marco Huber, Naser Damer, and Fadi Boutros. Vitnt-fiqa: Training-free face image quality assessment with vision transformers, 2026

  43. [43]

    Vitnt-fiqa: Training-free face image quality assessment with vision transformers

    Guray Ozgur, Eduarda Caldeira, Tahar Chettaoui, Jan Niklas Kolf, Marco Huber, Naser Damer, and Fadi Boutros. Vitnt-fiqa: Training-free face image quality assessment with vision transformers. CoRR, abs/2601.05741, 2026

  44. [44]

    Pytorch: An imperative style, high-performance deep learning library

    Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high...

  45. [45]

    Considerations on the evaluation of biometric quality assessment algorithms

    Torsten Schlett, Christian Rathgeb, Juan E. Tapia, and Christoph Busch. Considerations on the evaluation of biometric quality assessment algorithms. IEEE Trans. Biom. Behav. Identity Sci., 6(1):54–67, 2024

  46. [46]

    Frontal to profile face verification in the wild

    Soumyadip Sengupta, Jun-Cheng Chen, Carlos Domingo Castillo, Vishal M. Patel, Rama Chellappa, and David W. Jacobs. Frontal to profile face verification in the wild. In 2016 IEEE Winter Conference on Applications of Computer Vision, WACV 2016, Lake Placid, NY, USA, March 7-10, 2016, pages 1–9. IEEE Computer Society, 2016

  47. [47]

    Yichun Shi and Anil K. Jain. Probabilistic face embeddings. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 6901–6910. IEEE, 2019

  48. [48]

    Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. In Proceedings of the 34th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2020. Curran Associates Inc

  49. [49]

    SER-FIQ: unsupervised estimation of face image quality based on stochastic embedding robustness

    Philipp Terhörst, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper. SER-FIQ: unsupervised estimation of face image quality based on stochastic embedding robustness. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 5650–5659. Computer Vision Foundation...

  50. [50]

    Cosface: Large margin cosine loss for deep face recognition

    Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, pages 5265–5274. Computer Vision Foundation / IEEE Computer Society, 2018

  51. [51]

    Inducing predictive uncertainty estimation for face verification

    Weidi Xie, Jeffrey Byrne, and Andrew Zisserman. Inducing predictive uncertainty estimation for face verification. In 31st British Machine Vision Conference 2020, BMVC 2020, Virtual Event, UK, September 7-10, 2020. BMVA Press, 2020

  52. [52]

    Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z. Li. Learning face representation from scratch. CoRR, abs/1411.7923, 2014

  53. [53]

    Towards pose invariant face recognition in the wild

    Jie Zhao, Yuxiang Xiong, Jian Cheng, Jianshu Li, Yao Zhao, Jian Xing, Shuicheng Yan, and Jiashi Feng. Towards pose invariant face recognition in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018

  54. [54]

    Cross-pose LFW: A database for studying cross-pose face recognition in unconstrained environments

    T. Zheng and W. Deng. Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments. Technical Report 18-01, Beijing University of Posts and Telecommunications, 2018

  55. [55]

    Cross-Age LFW: A Database for Studying Cross-Age Face Recognition in Unconstrained Environments

    Tianyue Zheng, Weihong Deng, and Jiani Hu. Cross-age LFW: A database for studying cross-age face recognition in unconstrained environments. CoRR, abs/1708.08197, 2017
