pith. machine review for the scientific record. sign in

arxiv: 2604.13474 · v1 · submitted 2026-04-15 · 💻 cs.CR · cs.AI· cs.DC

Recognition: unknown

Secure and Privacy-Preserving Vertical Federated Learning

Anderson C.A. Nascimento, Sai Rahul Rachuri, Shan Jin, Yiwei Cai, Yizhen Wang

Authors on Pith no claims yet

Pith reviewed 2026-05-10 13:41 UTC · model grok-4.3

classification 💻 cs.CR cs.AIcs.DC
keywords vertical federated learningsecure multiparty computationdifferential privacyprivacy-preserving machine learningvertical data partitioningfederated learning securitymodel aggregation
0
0 comments X

The pith

Distributing the aggregator role across servers that run multiparty computation enables efficient privacy-preserving vertical federated learning for split features and non-shared labels.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops an end-to-end framework that protects both the private inputs held by different parties and the final model output in vertical federated learning. It splits the usual central aggregator into multiple servers that jointly execute secure multiparty computation to aggregate features and model updates without any server seeing the full data. Differential privacy noise is added only to the released model. The design avoids running every training step inside multiparty computation by supporting both fully global updates and hybrid global-local updates, which cuts the total computation and communication load. Experiments confirm that the protocols remain practical while meeting the stated privacy goals.

Core claim

The central claim is that three optimized protocols, built on secure multiparty computation performed by distributed servers for feature and model aggregation plus differential privacy applied to the final model, deliver both input privacy and output privacy for vertically partitioned federated learning while supporting purely global updates and global-local updates with far lower computation and communication than a naive full-MPC delegation of the entire training process.

What carries the argument

The distribution of the federated-learning aggregator into multiple servers that jointly run multiparty computation protocols for aggregation, combined with differential privacy on the released model, instantiated as three protocols tailored to different deployment scenarios.

If this is right

  • The framework supports both purely global model updates and hybrid global-local updates without sacrificing the privacy guarantees.
  • Input privacy is maintained for each party's features and labels while output privacy protects the final model.
  • The amount of computation and communication performed inside multiparty computation is reduced compared with running the entire training process in secure computation.
  • The approach applies to the common case where features are vertically split across clients and not every client holds labels.
  • Experimental measurements confirm that the protocols remain effective for training under the stated privacy constraints.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Organizations holding complementary features could train joint models without ever pooling raw data in one place.
  • The same server-splitting pattern might be reused for other privacy-sensitive collaborative tasks such as secure aggregation in distributed statistics.
  • If the efficiency gains hold under larger party counts, the method could scale vertical federated learning to settings that previously looked too expensive.

Load-bearing premise

The chosen multiparty computation building blocks and differential privacy noise levels can be tuned to deliver the stated efficiency gains while still preventing the considered adversaries from recovering private inputs or outputs.

What would settle it

Implement the three protocols and a baseline that delegates all training to multiparty computation on the same vertical dataset; measure total communication volume and wall-clock time, then test whether standard privacy attacks can extract any party's features or labels from the released model.

Figures

Figures reproduced from arXiv: 2604.13474 by Anderson C.A. Nascimento, Sai Rahul Rachuri, Shan Jin, Yiwei Cai, Yizhen Wang.

Figure 1
Figure 1. Figure 1: Training a Global model with Frozen Local models [PITH_FULL_IMAGE:figures/full_fig_p006_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Training a Global model with Frozen Local models [PITH_FULL_IMAGE:figures/full_fig_p007_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Training a Global model and Local models under [PITH_FULL_IMAGE:figures/full_fig_p009_3.png] view at source ↗
read the original abstract

We propose a novel end-to-end privacy-preserving framework, instantiated by three efficient protocols for different deployment scenarios, covering both input and output privacy, for the vertically split scenario in federated learning (FL), where features are split across clients and labels are not shared by all parties. We do so by distributing the role of the aggregator in FL into multiple servers and having them run secure multiparty computation (MPC) protocols to perform model and feature aggregation and apply differential privacy (DP) to the final released model. While a naive solution would have the clients delegating the entirety of training to run in MPC between the servers, our optimized solution, which supports purely global and also global-local models updates with privacy-preserving, drastically reduces the amount of computation and communication performed using multiparty computation. The experimental results also show the effectiveness of our protocols.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The manuscript proposes an end-to-end privacy-preserving framework for vertical federated learning with vertically partitioned features and non-shared labels. It distributes the aggregator across multiple servers that execute MPC protocols for model and feature aggregation, applies DP to the released model, and instantiates three protocols for different deployment scenarios. The central technical contribution is an optimized protocol supporting purely global and global-local model updates that avoids full delegation of training to MPC, thereby claiming to drastically reduce computation and communication overhead while preserving input and output privacy; experimental results are asserted to demonstrate effectiveness.

Significance. If the efficiency reductions and privacy guarantees can be rigorously established, the hybrid global-local update design could offer a practical middle ground between fully MPC-based VFL and local training, potentially enabling more scalable secure vertical FL deployments. The work correctly identifies the tension between naive full-MPC delegation and performance, and the use of distributed servers plus DP is a standard direction in the field.

major comments (3)
  1. Abstract: the claim that the optimized solution 'drastically reduces the amount of computation and communication performed using multiparty computation' is presented without any complexity analysis, operation counts, or comparison table against the naive full-delegation baseline, leaving the central efficiency assertion unverified.
  2. Abstract: no threat model, security definitions (e.g., simulation-based or game-based privacy), or formal proofs are supplied for the input/output privacy guarantees under the considered adversaries, despite the reliance on MPC primitives and DP noise levels; this is load-bearing for the privacy-preserving claim.
  3. Abstract: the statement that 'experimental results also show the effectiveness of our protocols' is unsupported by any quantitative metrics, runtime figures, communication volumes, accuracy results, or ablation studies in the visible text, undermining the soundness assessment of both efficiency and privacy claims.
minor comments (1)
  1. Abstract: the phrasing 'with privacy-preserving' is grammatically incomplete and should be clarified to specify which privacy properties are preserved in the global-local updates.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thorough review and valuable comments on our manuscript. We address each major comment below and propose revisions to improve the clarity of our claims in the abstract while ensuring they accurately reflect the content of the full paper.

read point-by-point responses
  1. Referee: Abstract: the claim that the optimized solution 'drastically reduces the amount of computation and communication performed using multiparty computation' is presented without any complexity analysis, operation counts, or comparison table against the naive full-delegation baseline, leaving the central efficiency assertion unverified.

    Authors: We agree that the abstract would benefit from more explicit support for this claim. The main text in Section 4 contains the full complexity analysis with operation counts and comparisons to the naive approach. We will revise the abstract to include a reference to this analysis in Section 4, making the efficiency assertion more verifiable directly from the abstract. revision: yes

  2. Referee: Abstract: no threat model, security definitions (e.g., simulation-based or game-based privacy), or formal proofs are supplied for the input/output privacy guarantees under the considered adversaries, despite the reliance on MPC primitives and DP noise levels; this is load-bearing for the privacy-preserving claim.

    Authors: The full manuscript provides a threat model and security definitions in Section 3, along with formal proofs in the appendix based on the simulation paradigm for the MPC components and DP for output privacy. The abstract omits these details due to space limitations. We will revise the abstract to briefly mention the security model and refer to Section 3 and the appendix for the definitions and proofs. revision: yes

  3. Referee: Abstract: the statement that 'experimental results also show the effectiveness of our protocols' is unsupported by any quantitative metrics, runtime figures, communication volumes, accuracy results, or ablation studies in the visible text, undermining the soundness assessment of both efficiency and privacy claims.

    Authors: Detailed quantitative results, including runtime figures, communication volumes, accuracy metrics, and ablation studies, are presented in Section 5 of the manuscript. We will revise the abstract to incorporate key quantitative highlights from these experiments to better substantiate the effectiveness claim. revision: yes

Circularity Check

0 steps flagged

No circularity: protocol construction with independent design claims

full rationale

The paper presents novel protocol constructions for privacy-preserving vertical federated learning, distributing the aggregator role across servers that run MPC for aggregation and apply DP to the final model. The central claim of drastic reductions in MPC computation/communication via support for purely global and global-local updates is a design choice justified by protocol description and experiments, not by any equation, fitted parameter, or self-citation that reduces the result to its own inputs by construction. No self-definitional steps, renamed known results, or load-bearing self-citations appear in the provided abstract or described structure; the work is self-contained as an engineering contribution rather than a derivation chain.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The framework relies on standard cryptographic assumptions for MPC security and differential privacy; no new entities or fitted parameters are introduced in the abstract.

axioms (2)
  • domain assumption Underlying MPC protocols provide the stated security guarantees against the considered adversaries
    Invoked when claiming input and output privacy.
  • domain assumption Differential privacy noise addition provides the claimed output privacy
    Used for the final released model.

pith-pipeline@v0.9.0 · 5449 in / 1090 out tokens · 28708 ms · 2026-05-10T13:41:12.569808+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

98 extracted references · 10 canonical work pages

  1. [1]

    Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang

    Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16). 308–318

  2. [2]

    Abdelrahaman Aly and Nigel P. Smart. 2019. Benchmarking Privacy Preserving Scientific Operations. InACNS 19International Conference on Applied Cryptogra- phy and Network Security (LNCS)

  3. [3]

    Toshinori Araki, Assi Barak, Jun Furukawa, Marcel Keller, Yehuda Lindell, Kazuma Ohara, and Hikaru Tsuchida. 2018. Generalizing the SPDZ Compiler For Other Protocols. InACM CCS 2018. ACM Press

  4. [4]

    Toshinori Araki, Jun Furukawa, Yehuda Lindell, Ariel Nof, and Kazuma Ohara

  5. [5]

    InACM CCS 2016

    High-Throughput Semi-Honest Secure Three-Party Computation with an Honest Majority. InACM CCS 2016. ACM Press

  6. [6]

    Gilad Asharov, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Ariel Nof, Benny Pinkas, Katsumi Takahashi, and Junichi Tomida. 2022. Efficient Secure Three-Party Sorting with Applications to Data Analysis and Heavy Hitters. InACM CCS 2022. ACM Press

  7. [7]

    Wenxuan Bao, Shan Jin, Hadi Abdullah, Anderson C. A. Nascimento, Vin- cent Bindschaedler, and Yiwei Cai. 2025. Deep Learning with Plausible De- niability. InThe 39th Annual Conference on Neural Information Processing Sys- tems (NeurIPS’25)

  8. [8]

    Donald Beaver. 1992. Efficient Multiparty Protocols Using Circuit Randomization. InCRYPTO’91 (LNCS)

  9. [9]

    Bonawitz, Adrià Gascón, Tancrède Lepoint, and Mar- iana Raykova

    James Henry Bell, Kallista A. Bonawitz, Adrià Gascón, Tancrède Lepoint, and Mar- iana Raykova. 2020. Secure Single-Server Aggregation with (Poly)Logarithmic Overhead. InACM CCS 2020. ACM Press

  10. [10]

    Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, and Avishay Yanai. 2024. ScionFL: Efficient and Robust Secure Quantized Aggrega- tion. InIEEE Conference on Secure and Trustworthy Machine Learning (SaTML’24). 490–511

  11. [11]

    Jonas Böhler and Florian Kerschbaum. 2021. Secure Multi-party Computation of Differentially Private Heavy Hitters. InACM CCS 2021. ACM Press

  12. [12]

    Bren- dan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth

    Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Bren- dan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical Secure Aggregation for Privacy-Preserving Machine Learning. InPro- ceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS’17). ACM, 1175–1191

  13. [13]

    Ran Canetti. 2001. Universally Composable Security: A New Paradigm for Cryp- tographic Protocols. In42nd FOCS. IEEE Computer Society Press

  14. [14]

    Ran Canetti. 2020. Universally Composable Security.J. ACM67, 5, Article 28 (Sept. 2020), 94 pages

  15. [15]

    Octavian Catrina and Amitabh Saxena. 2010. Secure Computation with Fixed- Point Numbers. InFC 2010 (LNCS)

  16. [16]

    Jeffrey Champion, abhi shelat, and Jonathan Ullman. 2019. Securely Sampling Biased Coins with Applications to Differential Privacy. InACM CCS 2019. ACM Press

  17. [17]

    Choquette-Choo, Arun Ganesh, Ryan McKenna, H

    Christopher A. Choquette-Choo, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta, and Zheng Xu. 2023. (Amplified) banded matrix factorization: a unified approach to private training. InProceed- ings of the 37th International Conference on Neural Information Processing Sys- tems ((NeurIPS’23). 74856–74889

  18. [18]

    Choquette-Choo, H

    Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, and Abhradeep Thakurta. 2023. Multi-epoch matrix factorization mechanisms for private machine learning. InProceedings of the 40th International Conference on Machine Learning (ICML’23). 5924–5963

  19. [19]

    Amrita Roy Chowdhury, Chuan Guo, Somesh Jha, and Laurens van der Maaten

  20. [20]

    InACM CCS 2022

    EIFFeL: Ensuring Integrity for Federated Learning. InACM CCS 2022. ACM Press

  21. [21]

    Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. 2017. EMNIST: Extending MNIST to handwritten letters.2017 International Joint Conference on Neural Networks (IJCNN’17)(2017). https://doi.org/10.1109/ijcnn. 2017.7966217

  22. [22]

    Ronald Cramer, Ivan Damgård, Daniel Escudero, Peter Scholl, and Chaoping Xing

  23. [23]

    InCRYPTO 2018, Part II (LNCS)

    SPD Z2𝑘 : Efficient MPC mod2 𝑘 for Dishonest Majority. InCRYPTO 2018, Part II (LNCS)

  24. [24]

    2015.Secure multiparty computation

    Ronald Cramer, Ivan Bjerre Damgård, et al. 2015.Secure multiparty computation. Cambridge University Press

  25. [25]

    Anders P. K. Dalskov, Daniel Escudero, and Marcel Keller. 2021. Fantastic Four: Honest-Majority Four-Party Secure Computation With Malicious Security. In USENIX Security 2021. USENIX Association

  26. [26]

    Sankha Das, Sayak Ray Chowdhury, Nishanth Chandran, Divya Gupta, Satya Lokam, and Rahul Sharma. 2025. Communication Efficient Secure and Pri- vate Multi-Party Deep Learning.Proceedings on Privacy Enhancing Technolo- gies (PETS’25)(2025)

  27. [27]

    Daniel Demmler, Thomas Schneider, and Michael Zohner. 2015. ABY – A Frame- work for Efficient Mixed-Protocol Secure Two-Party Computation. InThe Network and Distributed System Security (NDSS’15)

  28. [28]

    Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Im- ageNet: A large-scale hierarchical image database. In2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’09). 248–255

  29. [29]

    Brendan McMahan, Keith Rush, Adam Smith, and Abhradeep Thakurta

    Sergey Denisov, H. Brendan McMahan, Keith Rush, Adam Smith, and Abhradeep Thakurta. 2022. Improved differential privacy for SGD via optimal private linear operators on adaptive streams. InProceedings of the 36th International Conference on Neural Information Processing Systems ((NeurIPS’22). 5910–5924

  30. [30]

    Cynthia Dwork. 2006. Differential Privacy. InAutomata, Languages and Program- ming. 1–12

  31. [31]

    Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Cali- brating Noise to Sensitivity in Private Data Analysis. InTheory of Cryptography Conference (TCC’06). Springer, 265–284

  32. [32]

    Cynthia Dwork, Aaron Roth, et al. 2014. The algorithmic foundations of differ- ential privacy.Foundations and Trends®in Theoretical Computer Science9, 3–4 (2014), 211–407

  33. [33]

    Reo Eriguchi, Atsunori Ichikawa, Noboru Kunihiro, and Koji Nuida. 2021. Efficient Noise Generation to Achieve Differential Privacy with Applications to Secure Multiparty Computation. InFC 2021, Part I (LNCS)

  34. [34]

    Daniel Escudero, Satrajit Ghosh, Marcel Keller, Rahul Rachuri, and Peter Scholl

  35. [35]

    In CRYPTO 2020, Part II (LNCS)

    Improved Primitives for MPC over Mixed Arithmetic-Binary Circuits. In CRYPTO 2020, Part II (LNCS)

  36. [36]

    Daniel Escudero, Vipul Goyal, Antigoni Polychroniadou, and Yifan Song. 2022. TurboPack: Honest Majority MPC with Constant Online Communication. In ACM CCS 2022. ACM Press

  37. [37]

    FederatedAI. 2024. FATE. https://github.com/FederatedAI/FATE?tab=readme- ov-file

  38. [38]

    Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, He- len Möllering, Thien Duc Nguyen, Phillip Rieger, Ahmad-Reza Sadeghi, Thomas Schneider, Hossein Yalame, and Shaza Zeitouni. 2021. SAFELearn: Secure Ag- gregation for private FEderated Learning. In2021 IEEE Security and Privacy Workshops (SPW’21). 56–62

  39. [39]

    Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-Aware Minimization for Efficiently Improving Generalization. arXiv:2010.01412

  40. [40]

    Troncoso-Pastoriza, Bonnie Berger, and Jean-Pierre Hubaux

    David Froelicher, Hyunghoon Cho, Manaswitha Edupalli, Joao Sa Sousa, Jean- Philippe Bossuat, Apostolos Pyrgelis, Juan R. Troncoso-Pastoriza, Bonnie Berger, and Jean-Pierre Hubaux. 2023. Scalable and Privacy-Preserving Federated Princi- pal Component Analysis . In2023 IEEE Symposium on Security and Privacy (SP’23). 1908–1925

  41. [41]

    Liu, and Ting Wang

    Chong Fu, Xuhong Zhang, Shouling Ji, Jinyin Chen, Jingzheng Wu, Shanqing Guo, Jun Zhou, Alex X. Liu, and Ting Wang. 2022. Label Inference Attacks Against Vertical Federated Learning. In31st USENIX Security Symposium (Security’22). 1397–1414

  42. [42]

    Fangcheng Fu, Huanran Xue, Yong Cheng, Yangyu Tao, and Bin Cui. 2022. BlindFL: Vertical Federated Machine Learning without Peeking into Your Data. InProceedings of the 2022 International Conference on Management of Data (SIG- MOD’22). 1316–1330

  43. [43]

    Keke Gai, Mohan Wang, Jing Yu, Lei Xu, Peng Jiang, Liehuang Zhu, and Bin Xiao. 2025. Differentially Private Vertical Federated Learning With Adaptive Constraints and Dynamic Noise.IEEE Transactions on Information Forensics and Security (TIFS)20 (2025), 11150–11164

  44. [44]

    Google. 2025. Differential Privacy. https://github.com/google/differential- privacy

  45. [45]

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 770–778

  46. [46]

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. InProceedings of the International Conference on Learning Representations (ICLR’22)

  47. [47]

    Yimin Huang, Wanwan Wang, Xingying Zhao, Yukun Wang, Xinyu Feng, Hao He, and Ming Yao. 2023. EFMVFL: An Efficient and Flexible Multi-party Vertical Federated Learning without a Third Party.ACM Transactions on Knowledge Discovery from Data (TKDD)18, 3 (2023), 20 pages

  48. [48]

    Matthew Jagielski, Rahul Rachuri, Daniel Escudero, and Peter Scholl. 2025. Covert Attacks on Machine Learning Training in Passively Secure MPC. https://eprint. Secure and Privacy-Preserving Vertical Federated Learning , iacr.org/2025/906

  49. [49]

    Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, and Tianyi Chen. 2021. Cafe: Catastrophic data leakage in vertical federated learning.Proceedings of the 35th International Conference on Neural Information Processing Systems ((NeurIPS’21) 34 (2021), 994–1006

  50. [50]

    Peter Kairouz, Ziyu Liu, and Thomas Steinke. 2022. The Distributed Dis- crete Gaussian Mechanism for Federated Learning with Secure Aggregation. arXiv:2102.06387 [cs.LG]

  51. [51]

    Hannah Keller, Helen Möllering, Thomas Schneider, Oleksandr Tkachenko, and Liang Zhao. 2024. Secure noise sampling for DP in MPC with finite precision. In Proceedings of the 19th International Conference on A vailability, Reliability and Security. 1–12

  52. [52]

    Marcel Keller. 2020. MP-SPDZ: A Versatile Framework for Multi-Party Computa- tion. InACM CCS 2020. ACM Press

  53. [53]

    Marcel Keller and Ke Sun. 2022. Secure Quantized Training for Deep Learning. Cryptology ePrint Archive, Report 2022/933. https://eprint.iacr.org/2022/933

  54. [54]

    Afsana Khan, Marijn ten Thij, and Anna Wilbik. 2025. Vertical Federated Learning: A Structured Literature Review.Knowledge and Information Systems(2025)

  55. [55]

    Vladimir Kolesnikov, Naor Matania, Benny Pinkas, Mike Rosulek, and Ni Trieu

  56. [56]

    InACM CCS 2017

    Practical Multi-party Private Set Intersection from Symmetric-Key Tech- niques. InACM CCS 2017. ACM Press

  57. [57]

    Antti Koskela, Joonas Jälkö, and Antti Honkela. 2020. Computing Tight Dif- ferential Privacy Guarantees Using FFT. InProceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS’20) (Proceedings of Machine Learning Research, Vol. 108). PMLR, 2560–2569

  58. [58]

    Nishat Koti, Arpita Patra, Rahul Rachuri, and Ajith Suresh. 2022. Tetrad: Actively Secure 4PC for Secure Training and Inference. InProceedings of the 29th Annual Network and Distributed System Security Symposium NDSS’22

  59. [59]

    Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. (2009)

  60. [60]

    Sven Laur, Jan Willemson, and Bingsheng Zhang. 2011. Round-Efficient Oblivious Database Manipulation. Cryptology ePrint Archive, Report 2011/429. https: //eprint.iacr.org/2011/429

  61. [61]

    Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. 2017. Feature Pyramid Networks for Object Detection. InProceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 2117–2125

  62. [62]

    Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, and Qiang Yang. 2024. Vertical Federated Learning: Concepts, Advances, and Challenges.IEEE Transactions on Knowledge and Data Engineering (TKDE)(2024), 1–20

  63. [63]

    Yang Liu, Xinwei Zhang, Yan Kang, Liping Li, Tianjian Chen, Mingyi Hong, and Qiang Yang. 2022. FedBCD: A Communication-Efficient Collaborative Learning Framework for Distributed Features.IEEE Transactions on Signal Processing (TSP) 70 (2022), 4277–4290

  64. [64]

    Hidde Lycklama, Lukas Burkhalter, Alexander Viand, Nicolas Küchler, and Anwar Hithnawi. 2023. RoFL: Robustness of Secure Federated Learning. In2023 IEEE Symposium on Security and Privacy. IEEE Computer Society Press

  65. [65]

    Yiping Ma, Jess Woods, Sebastian Angel, Antigoni Polychroniadou, and Tal Rabin. 2023. Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning. In2023 IEEE Symposium on Security and Privacy. IEEE Computer Society Press

  66. [66]

    Sindhuja Madabushi, Ahmad Faraz Khan, Haider Ali, and Jin-Hee Cho. 2025. OPUS-VFL: Incentivizing Optimal Privacy-Utility Tradeoffs in Vertical Federated Learning. arXiv:2504.15995 [cs.LG] https://arxiv.org/abs/2504.15995

  67. [67]

    Saber Malekmohammadi, Yaoliang Yu, and Yang Cao. 2024. Noise-aware algo- rithm for heterogeneous differentially private federated learning. InProceedings of the 41st International Conference on Machine Learning (ICML’24). 34461 – 34498

  68. [68]

    Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. InProceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS’17), Vol. 54. 1273–1282

  69. [69]

    Ilya Mironov. 2017. Rényi Differential Privacy. In2017 IEEE 30th Computer Security Foundations Symposium (CSF’17). IEEE

  70. [70]

    Payman Mohassel and Peter Rindal. 2018. ABY3: A Mixed Protocol Framework for Machine Learning. InProceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS’18). 35–52

  71. [71]

    Payman Mohassel and Yupeng Zhang. 2017. SecureML: A System for Scalable Privacy-Preserving Machine Learning . In2017 IEEE Symposium on Security and Privacy (SP’17). 19–38

  72. [72]

    Vaikkunth Mugunthan, Pawan Goyal, and Lalana Kagal. 2021. Multi-VFL: A Vertical Federated Learning System for Multiple Data and Label Owners. https: //arxiv.org/abs/2106.05468

  73. [73]

    Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning. In2019 IEEE Symposium on Security and Privacy (SP’19). 739–753

  74. [74]

    Kalinin Nikita and Christoph H Lampert. 2024. Banded Square Root Matrix Factorization for Differentially Private Model Training. InProceedings of the 38th International Conference on Neural Information Processing Systems (NeurIPS’24), Vol. 37. 17602–17655

  75. [75]

    Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson Nascimento, and Martine De Cock. 2022. Training Differentially Private Models with Secure Multiparty Computation. arXiv:2202.02625 [cs.CR]

  76. [76]

    Benny Pinkas, Thomas Schneider, and Michael Zohner. 2018. Scalable Private Set Intersection Based on OT Extension.ACM Transactions on Privacy and Security (TOPS)21, 2, Article 7 (2018), 35 pages

  77. [77]

    Data Privacy and Trustworthy Machine Learning Research Lab. 2025. Privacy Meter. https://github.com/privacytrustlab

  78. [78]

    Pytorch. 2025. ResNet18. https://pytorch.org/vision/stable/models.html

  79. [79]

    Xinchi Qiu, Heng Pan, Wanru Zhao, Yan Gao, Pedro P. B. Gusmao, William F. Shen, Chenyang Ma, and Nicholas D. Lane. 2024. Secure Vertical Federated Learning Under Unreliable Connectivity. arXiv:2305.16794 [cs.CR]

  80. [80]

    Xinchi Qiu, Heng Pan, Wanru Zhao, Chenyang Ma, Yan Gao, Pedro Porto Buar- que de Gusmao, and Nicholas Donald Lane. 2024. vFedSec: Efficient Secure Aggregation for Vertical Federated Learning via Secure Layer

Showing first 80 references.