PreFIQs: Face Image Quality Is What Survives Pruning
Pith reviewed 2026-05-14 20:46 UTC · model grok-4.3
The pith
Face image quality is the part of a face representation that survives pruning: low-utility images show large embedding shifts when the recognition model is sparsified, while high-utility images barely move.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PreFIQs quantifies image utility as the Euclidean distance between L2-normalized embeddings extracted from a pre-trained FR model and its pruned counterpart. A first-order theoretical justification shows that this drift approximates the geometric sensitivity of the latent embedding manifold through Jacobian-vector product analysis. Across eight benchmarks and four FR models, the approach achieves competitive or superior performance to state-of-the-art FIQA methods without any training or supervision, validating parameter sparsification as a signal for face image utility.
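The metric itself reduces to a few lines: L2-normalize the two embeddings and take their Euclidean distance. A minimal sketch (the function name and the convention of passing precomputed embeddings are ours, not the paper's):

```python
import numpy as np

def prefiqs_score(emb_full: np.ndarray, emb_pruned: np.ndarray) -> float:
    """Euclidean distance between the L2-normalized embeddings of the
    pre-trained FR model and its pruned counterpart.
    Larger score -> larger drift -> lower estimated image utility."""
    e_full = emb_full / np.linalg.norm(emb_full)
    e_pruned = emb_pruned / np.linalg.norm(emb_pruned)
    return float(np.linalg.norm(e_full - e_pruned))
```

Because both embeddings are normalized first, the score depends only on angular displacement, not on embedding magnitude.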
What carries the argument
The Pruning Identified Exemplar (PIE) hypothesis, which holds that low-utility face images rely disproportionately on fragile network parameters and therefore exhibit larger embedding displacements under sparsification.
If this is right
- PreFIQs achieves competitive or superior performance compared to state-of-the-art FIQA methods across multiple benchmarks.
- New state-of-the-art results are established on several face image quality assessment datasets.
- The framework requires no training data or supervision of any kind.
- Parameter sparsification acts as a computationally efficient proxy for determining image utility in face recognition.
- Face image quality can be understood as the component of the image representation that remains stable under model pruning.
Where Pith is reading between the lines
- Pruning-based sensitivity measures could extend to evaluating data quality for other recognition tasks such as object or speech recognition.
- Lightweight pruning operations might enable on-the-fly quality filtering in deployed face recognition systems.
- This view suggests that model compression can serve as a diagnostic tool for identifying problematic inputs.
- Exploring alternative pruning methods could yield refined signals for different aspects of image utility.
Load-bearing premise
Low-utility face images depend disproportionately on the network parameters that are removed during pruning, which produces larger shifts in their embeddings than for high-utility images.
What would settle it
A dataset on which the post-pruning embedding distance shows no correlation with actual recognition accuracy, or with established quality labels, would falsify the measure.
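That check is cheap to run: rank-correlate per-image drift scores against quality labels and look for a flat result. A numpy-only sketch of the statistic (the function name is ours; ties are ignored):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling), numpy only."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Since larger drift should mean lower utility, a healthy signal shows a clearly negative correlation with quality labels; a near-zero value on any benchmark would be the falsifying outcome.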
Original abstract
Face Image Quality Assessment (FIQA) evaluates the utility of a face image for automated face recognition (FR) systems. In this work, we propose PreFIQs, an unsupervised and training-free FIQA framework grounded in the Pruning Identified Exemplar (PIE) hypothesis. We hypothesize that low-utility face images rely disproportionately on fragile network parameters, resulting in larger geometric displacement of their embeddings under model sparsification. Accordingly, PreFIQs quantifies image utility as the Euclidean distance between L2-normalized embeddings extracted from a pre-trained FR model and its pruned counterpart. We provide a first-order theoretical justification via a Jacobian-vector product analysis, demonstrating that this empirical drift serves as a computationally efficient approximation of the exact geometric sensitivity of the latent embedding manifold. Extensive experiments across eight benchmarks and four FR models demonstrate that PreFIQs achieves competitive or superior performance compared to state-of-the-art FIQA methods, including establishing new state-of-the-art results on several benchmarks, without any training or supervision. These results validate parameter sparsification as a principled and practically efficient signal for face image utility, and demonstrate that quality is, in essence, what survives pruning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes PreFIQs, a training-free and unsupervised FIQA method grounded in the Pruning Identified Exemplar (PIE) hypothesis. It quantifies face image utility as the Euclidean distance between L2-normalized embeddings from a pre-trained FR model and its pruned counterpart, justified via a first-order Jacobian-vector product analysis approximating geometric sensitivity. Experiments on eight benchmarks and four FR models show competitive or superior performance to SOTA FIQA methods, with new SOTA results on several, validating sparsification as a signal for image utility.
Significance. If the central claim holds, the work is significant for introducing a parameter-sensitivity view of image quality that requires no training or supervision, achieving strong benchmark results through a simple post-pruning embedding drift metric. The Jacobian justification and cross-model validation provide a mechanistic angle that could generalize beyond FIQA to other embedding-based tasks, though the finite-pruning regime needs tighter linkage to the infinitesimal analysis.
Major comments (3)
- [§3.2] §3.2 (PIE hypothesis and pruning definition): The claim that low-utility images rely on 'fragile network parameters' leading to larger embedding drift is load-bearing, yet the manuscript does not specify the pruning criterion (e.g., magnitude-based, gradient-based), sparsity ratio, or whether pruning is unstructured/structured. Without these, the Euclidean distance cannot be reproduced exactly and the finite-pruning results may not follow from the first-order Jacobian approximation, which holds only for infinitesimal perturbations.
- [§4.3] §4.3 (experimental validation): The reported SOTA gains on several benchmarks are presented without ablations isolating whether drift magnitude tracks per-image parameter importance versus global factors such as embedding norm or input noise level. This leaves open whether the method reduces to generic sensitivity rather than confirming the PIE hypothesis.
- [Eq. (3)] Eq. (3) (Jacobian-vector product): The first-order approximation is derived, but the transition to finite pruning (whose magnitude is not quantified) is not bounded; a concrete error term or empirical check showing that higher-order terms remain negligible for the chosen sparsity levels is needed to support the 'computationally efficient approximation' claim.
Minor comments (2)
- [§3.1] Notation for the pruned model (e.g., f_θ' vs. f_θ̂) is introduced inconsistently across sections; standardize and define once in §3.1.
- [Figure 2] Figure 2 (embedding drift visualization): Axis scales and normalization details are unclear; add explicit L2-norm confirmation and units for the distance metric.
Simulated Author's Rebuttal
We thank the referee for the thoughtful and constructive comments, which have helped us identify areas where the manuscript can be strengthened. We address each major comment point by point below, indicating the revisions we will incorporate.
Point-by-point responses
Referee: [§3.2] §3.2 (PIE hypothesis and pruning definition): The claim that low-utility images rely on 'fragile network parameters' leading to larger embedding drift is load-bearing, yet the manuscript does not specify the pruning criterion (e.g., magnitude-based, gradient-based), sparsity ratio, or whether pruning is unstructured/structured. Without these, the Euclidean distance cannot be reproduced exactly and the finite-pruning results may not follow from the first-order Jacobian approximation, which holds only for infinitesimal perturbations.
Authors: We agree that explicit specification of the pruning procedure is essential for reproducibility and for clarifying the connection to the Jacobian analysis. In our experiments we used magnitude-based unstructured pruning, removing the 50% of weights with the smallest absolute values independently per layer. We will add a new paragraph in §3.2 that states the criterion, the fixed sparsity ratio of 50%, and the unstructured nature of the pruning. We will also include a short discussion explaining why the finite-pruning drift remains a useful proxy for the first-order sensitivity even though the Jacobian derivation is infinitesimal; specifically, we will note that the ranking of images by drift is preserved across a range of moderate sparsity levels. revision: yes
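The procedure the rebuttal describes (magnitude-based, unstructured, 50% sparsity applied independently per layer) can be sketched as follows; representing the network as a name-to-array dict is our simplification of a real FR backbone:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured L1-magnitude pruning: per layer, zero out the
    `sparsity` fraction of weights with smallest |w| (assumes 0 < sparsity <= 1)."""
    pruned = {}
    for name, w in weights.items():
        k = int(np.ceil(sparsity * w.size))
        # Threshold = k-th smallest absolute value within this layer.
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        pruned[name] = np.where(np.abs(w) > thresh, w, 0.0)
    return pruned
```

The per-layer threshold matters: a global threshold would concentrate pruning in layers with systematically smaller weights, which is a different perturbation than the one the authors specify.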
Referee: [§4.3] §4.3 (experimental validation): The reported SOTA gains on several benchmarks are presented without ablations isolating whether drift magnitude tracks per-image parameter importance versus global factors such as embedding norm or input noise level. This leaves open whether the method reduces to generic sensitivity rather than confirming the PIE hypothesis.
Authors: We acknowledge that additional controls are needed to isolate the contribution of the PIE hypothesis. We will expand §4.3 with three new ablation studies: (1) comparison against raw embedding norm, (2) drift under random (non-magnitude) pruning at the same sparsity, and (3) drift under additive Gaussian noise of matched magnitude. These results will be reported with statistical significance tests across the same eight benchmarks. Preliminary internal checks suggest that PreFIQs outperforms these baselines, but the full tables will be added to demonstrate that the observed utility signal is tied to parameter fragility rather than generic sensitivity measures. revision: yes
Referee: [Eq. (3)] Eq. (3) (Jacobian-vector product): The first-order approximation is derived, but the transition to finite pruning (whose magnitude is not quantified) is not bounded; a concrete error term or empirical check showing that higher-order terms remain negligible for the chosen sparsity levels is needed to support the 'computationally efficient approximation' claim.
Authors: We agree that an empirical validation of the approximation quality is warranted. We will augment the discussion surrounding Eq. (3) with a new figure and accompanying text that plots the absolute difference between the first-order Jacobian-vector product and the actual finite-pruning embedding drift for sparsity ratios ranging from 10% to 70%. This will show that, for the 50% sparsity used in the main experiments, the higher-order contributions are small relative to the first-order term and do not alter the relative ordering of image utilities. If space allows, we will also provide a brief Lipschitz-based error bound to complement the empirical evidence. revision: yes
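The promised empirical check can be mimicked on a toy model. Below, a one-layer tanh "embedding" stands in for the FR backbone (entirely our construction, not the paper's setup): we compare the exact post-pruning drift against its first-order parameter-space approximation and measure the relative size of the higher-order remainder.

```python
import numpy as np

def f(W, x):
    # Toy embedding model: f(x) = tanh(W x).
    return np.tanh(W @ x)

def prune(W, sparsity):
    # Unstructured magnitude pruning of a single weight matrix.
    k = int(sparsity * W.size)
    thresh = np.sort(np.abs(W).ravel())[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

def relative_first_order_error(W, x, sparsity):
    """||drift - JVP|| / ||drift||: the share of the finite-pruning drift
    NOT captured by the first-order Jacobian-vector-product term."""
    Wp = prune(W, sparsity)
    dW = Wp - W
    drift = f(Wp, x) - f(W, x)                    # exact finite-pruning drift
    jvp = (1.0 - np.tanh(W @ x) ** 2) * (dW @ x)  # first-order (JVP) term
    return float(np.linalg.norm(drift - jvp) / np.linalg.norm(drift))

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 16))
x = rng.normal(size=16)
for rho in (0.1, 0.3, 0.5):
    print(f"sparsity {rho:.0%}: relative higher-order error "
          f"{relative_first_order_error(W, x, rho):.3f}")
```

On this toy the remainder stays well below the first-order term at moderate sparsity, which is the shape of the evidence the rebuttal promises; the real check must of course be run on the actual FR models and sparsity levels.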
Circularity Check
No circularity detected in derivation chain
Full rationale
The paper explicitly defines PreFIQs as the Euclidean distance between L2-normalized embeddings from a pre-trained FR model and its pruned version, then supports this choice via an independently derived first-order Jacobian-vector product approximation that links the distance to local sensitivity of the embedding manifold. The PIE hypothesis is presented as an assumption to motivate the definition rather than as a result derived from self-citation or prior fitted parameters. No equations reduce the claimed utility metric to its inputs by construction, no parameters are fitted to benchmark outcomes, and validation relies on external benchmark comparisons rather than tautological renaming or self-referential justification. The derivation remains self-contained against external data.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption (PIE hypothesis): low-utility face images rely disproportionately on fragile network parameters.