Recognition: no theorem link
AuraMask: An Extensible Pipeline for Developing Aesthetic Anti-Facial Recognition Image Filters
Pith reviewed 2026-05-14 20:08 UTC · model grok-4.3
The pith
The AuraMask pipeline produces aesthetic filters that block facial recognition while matching popular Instagram styles.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
AuraMask is an extensible pipeline for creating anti-facial recognition image filters that emulate popular one-click Instagram styles. The authors use it to generate 40 aesthetic filters that meet or exceed the adversarial effectiveness of prior methods against open-source facial recognition models. In a controlled online study with 630 participants, the same filters obtain significantly higher user acceptance than earlier anti-facial recognition techniques.
What carries the argument
AuraMask, an extensible pipeline that generates anti-facial recognition filters by emulating popular Instagram image processing styles.
If this is right
- Forty aesthetic filters are produced that equal or surpass prior methods against open-source facial recognition models.
- The filters receive significantly higher user acceptance than prior methods in a study of 630 participants.
- Releasing the pipeline enables faster community research on effective and acceptable protections.
- Filters integrate as simple one-click edits similar to existing social media tools.
Where Pith is reading between the lines
- Widespread use could reduce the reliability of facial recognition in public photo sharing platforms.
- Integration into standard photo apps might make privacy-preserving edits a default option for users.
- Further tests on proprietary systems and diverse real-world images would clarify practical limits.
Load-bearing premise
That results shown on open-source facial recognition models and in a controlled online user study will hold for proprietary real-world systems and everyday photo use.
What would settle it
A direct test in which AuraMask filters fail to lower recognition accuracy on a major commercial facial recognition API would disprove the effectiveness claim.
Original abstract
Anti-facial recognition (AFR) image filters alter images in ways that are subtle to people but blinding to computer vision. Yet, despite widespread interest in these technologies to subvert surveillance, users rarely use them in practice -- because the "subtle" alterations are visible enough to conflict with users' self-presentation goals. To address this challenge, we propose AuraMask: a novel approach to creating AFR filters that are both adversarially effective and aesthetically acceptable. Using AuraMask, we produce 40 "aesthetic" filters that emulate popular "one-click" Instagram image filters. We show that AuraMask filters meet or exceed the adversarial effectiveness of prior methods against open-source facial recognition models. Moreover, in a controlled online user study (N = 630) we confirm these filters achieve significantly higher user acceptance than prior methods. Lastly, we provide our AFR pipeline to the community for accelerated research in adversarially effective and aesthetically acceptable protections.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces AuraMask, an extensible pipeline for generating aesthetic anti-facial recognition (AFR) image filters that emulate popular one-click Instagram filters. The authors produce 40 such filters and report that they meet or exceed the adversarial effectiveness of prior methods when tested against open-source facial recognition models. A controlled online user study with N=630 participants is presented to show significantly higher user acceptance compared to previous AFR approaches. The pipeline is released publicly to support further community research.
Significance. If the empirical results on open-source models and the user acceptance findings hold under broader conditions, this work could meaningfully advance practical AFR tools by reducing the aesthetic barrier that currently limits adoption. The public release of the extensible pipeline is a clear strength, as it directly supports reproducibility and allows other researchers to extend or adapt the approach.
major comments (2)
- [Abstract and §4] Abstract and §4 (Evaluation): The central claim of practical AFR protection is load-bearing on generalization beyond the tested open-source models, yet all reported adversarial results are confined to open-source facial recognition models with no transferability experiments, black-box evaluations, or tests against proprietary systems that may use different backbones, ensembles, or adversarial training.
- [§5] §5 (User Study): The claim of significantly higher user acceptance rests on the N=630 study, but the manuscript provides insufficient detail on the precise statistical tests, effect sizes, confidence intervals, or controls for image content and presentation order, which are necessary to assess whether the acceptance advantage is robust.
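The transferability concern in the first major comment can be made concrete. Below is a minimal sketch of the kind of cross-model protection-rate evaluation the referee asks for, assuming precomputed face embeddings from an open-source recognition model; the threshold, embedding size, and data here are illustrative placeholders, not values from the paper:

```python
import numpy as np

def protection_rate(orig_emb, filt_emb, threshold=0.4):
    """Fraction of faces whose filtered embedding no longer verifies
    against the original under a cosine-similarity threshold."""
    o = orig_emb / np.linalg.norm(orig_emb, axis=1, keepdims=True)
    f = filt_emb / np.linalg.norm(filt_emb, axis=1, keepdims=True)
    cos_sim = np.sum(o * f, axis=1)             # per-face similarity
    return float(np.mean(cos_sim < threshold))  # below threshold = protected

# Illustrative run: random vectors stand in for real model embeddings.
rng = np.random.default_rng(0)
orig = rng.normal(size=(100, 512))                      # 100 faces, 512-d
filt = orig + rng.normal(scale=3.0, size=(100, 512))    # strong perturbation
rate = protection_rate(orig, filt, threshold=0.4)
```

Running the same harness with embeddings from several different open-source backbones would give exactly the transferability table the referee requests; proprietary systems would still need black-box query access.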
minor comments (2)
- [§3] §3 (Pipeline): The description of how the 40 filters were selected from the extensible pipeline would benefit from an explicit enumeration or selection criterion to aid reproducibility.
- [Figure 2] Figure 2: Side-by-side visual comparisons of original images, prior AFR methods, and AuraMask outputs would improve clarity when illustrating aesthetic differences.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address each major comment below and describe the planned revisions to strengthen the manuscript while maintaining its core contributions on the extensible pipeline and empirical results for open-source models.
Point-by-point responses
-
Referee: [Abstract and §4] Abstract and §4 (Evaluation): The central claim of practical AFR protection is load-bearing on generalization beyond the tested open-source models, yet all reported adversarial results are confined to open-source facial recognition models with no transferability experiments, black-box evaluations, or tests against proprietary systems that may use different backbones, ensembles, or adversarial training.
Authors: We acknowledge that our adversarial evaluations are restricted to open-source models, which aligns with standard practice in the field for ensuring reproducibility. To address this, we will add transferability experiments across multiple open-source facial recognition architectures in the revised §4 and include an expanded limitations discussion on the difficulties of black-box and proprietary evaluations. The public release of the AuraMask pipeline is explicitly intended to enable the community to perform such extensions. We cannot, however, directly test proprietary systems as they are not accessible. revision: partial
-
Referee: [§5] §5 (User Study): The claim of significantly higher user acceptance rests on the N=630 study, but the manuscript provides insufficient detail on the precise statistical tests, effect sizes, confidence intervals, or controls for image content and presentation order, which are necessary to assess whether the acceptance advantage is robust.
Authors: We agree that additional statistical details are required for full transparency. In the revised manuscript, we will specify the exact tests (e.g., paired t-tests or mixed-effects models), report effect sizes and confidence intervals, and detail the controls including use of identical base images across filter conditions and randomized presentation order. These additions will appear in §5 with supporting tables in the supplementary material. revision: yes
- Out of scope: direct evaluation against proprietary facial recognition systems, as these are not publicly available to researchers.
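The statistical reporting promised in the second response can be sketched. The following is a minimal paired analysis of per-participant acceptance scores, reporting the mean difference, a paired Cohen's d, and a percentile-bootstrap 95% confidence interval; the rating scale and simulated data are placeholders, not the study's:

```python
import numpy as np

def paired_summary(a, b, n_boot=2000, seed=0):
    """Mean difference, paired Cohen's d, and bootstrap 95% CI for
    per-participant scores a (new method) vs b (prior method)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_diff = d.mean()
    cohens_d = mean_diff / d.std(ddof=1)            # paired effect size
    rng = np.random.default_rng(seed)
    boots = rng.choice(d, size=(n_boot, d.size), replace=True).mean(axis=1)
    lo, hi = np.percentile(boots, [2.5, 97.5])      # percentile bootstrap
    return mean_diff, cohens_d, (lo, hi)

# Simulated 7-point acceptance ratings for N = 630 participants.
rng = np.random.default_rng(1)
aura = np.clip(rng.normal(5.2, 1.0, 630), 1, 7)     # hypothetical new-filter scores
prior = np.clip(rng.normal(4.1, 1.2, 630), 1, 7)    # hypothetical prior-method scores
diff, eff, (ci_lo, ci_hi) = paired_summary(aura, prior)
```

A confidence interval excluding zero, plus the effect size, is the robustness evidence the referee asks for; a mixed-effects model would additionally absorb per-image and order effects.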
Circularity Check
No circularity: purely empirical pipeline with external validation
Full rationale
The paper describes an extensible pipeline (AuraMask) for generating aesthetic anti-facial-recognition filters by emulating Instagram styles, then evaluates them via direct experiments on open-source facial recognition models and a separate controlled user study (N=630). No equations, derivations, fitted parameters renamed as predictions, or self-citation chains appear in the provided abstract or reader summary; the central claims rest on measured adversarial success rates and acceptance scores rather than any self-referential construction. The work is therefore self-contained against its own benchmarks and receives the default non-circularity finding.