Pith · machine review for the scientific record

arxiv: 2604.07125 · v1 · submitted 2026-04-08 · 💻 cs.CR · cs.LG

Recognition: 2 theorem links

· Lean Theorem

DDP-SA: Scalable Privacy-Preserving Federated Learning via Distributed Differential Privacy and Secure Aggregation


Pith reviewed 2026-05-10 17:50 UTC · model grok-4.3

classification 💻 cs.CR cs.LG
keywords: federated learning · differential privacy · secure aggregation · additive secret sharing · privacy-preserving machine learning · distributed systems

The pith

DDP-SA adds calibrated Laplace noise to client gradients, then splits them via additive secret sharing, so that no single server sees any individual update.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DDP-SA, which combines local differential privacy with full-threshold additive secret sharing in federated learning. Clients first add Laplace noise to their gradients on device, then divide the noisy values into shares distributed across multiple intermediate servers. The central parameter server receives only the reconstructed sum of these noisy shares, never any client's raw contribution. This setup aims to give stronger end-to-end privacy than pure secure aggregation while preserving more model accuracy than noise applied alone. Experiments are presented to show an accuracy gain over standalone local differential privacy and a privacy advantage over MPC-only methods, with linear scaling in the number of clients.
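As a reading aid, the two-stage mechanism can be sketched in a few lines of Python. This is an editorial illustration with assumed parameters (noise scale 0.1, three intermediate servers) and toy real-valued sharing, not the paper's implementation; production ASS operates over a finite ring.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_protect(grad, scale, n_servers):
    """Stage 1: perturb with Laplace noise; stage 2: split into additive shares."""
    noisy = grad + rng.laplace(0.0, scale, size=grad.shape)
    # n_servers - 1 random masks; the final share makes everything sum to `noisy`.
    shares = [rng.normal(size=grad.shape) for _ in range(n_servers - 1)]
    shares.append(noisy - sum(shares))
    return shares  # one share per intermediate server

# Three clients protect their gradients.
grads = [np.array([0.5, -1.0]), np.array([0.2, 0.3]), np.array([-0.7, 0.1])]
all_shares = [client_protect(g, scale=0.1, n_servers=3) for g in grads]

# Each intermediate server sums the shares it holds; the parameter server
# adds the partial sums and sees only the aggregate of noisy gradients.
partials = [sum(client[s] for client in all_shares) for s in range(3)]
aggregate = sum(partials)
```

Only the full set of shares reconstructs a client's noisy gradient; in this toy real-valued version the Gaussian masks merely obscure the update, whereas exact share-indistinguishability requires uniform masks over a finite ring.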

Core claim

By applying client-side Laplace perturbation followed by full-threshold additive secret sharing, DDP-SA ensures that reconstruction of any client's contribution requires all shares and that the final aggregator obtains only the sum of already-noisy gradients, delivering joint privacy stronger than either technique used separately.

What carries the argument

Two-stage client protection: local Laplace noise addition followed by decomposition of the noisy gradient into additive secret shares sent to distinct intermediate servers.

If this is right

  • The central server never reconstructs any client-specific update, only the aggregate of noisy gradients.
  • Privacy is preserved against any proper subset of the intermediate servers being compromised.
  • Communication and computation costs increase linearly with the number of participating clients.
  • Model accuracy stays higher than with local differential privacy applied in isolation.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same two-stage pattern could be tested on non-convex models or with other noise distributions to check whether the accuracy gain generalizes.
  • Reducing the number of intermediate servers while keeping the full-threshold requirement would be a direct next measurement of the privacy-communication tradeoff.
  • The design leaves open whether the same privacy-utility balance holds when clients have heterogeneous data distributions.

Load-bearing premise

The threat model holds that the intermediate servers do not all collude, so that at least one share of each update stays hidden, and that the Laplace noise scale can be set to meet the stated privacy level without destroying utility.

What would settle it

A reconstruction of any individual client's original gradient from the shares held by the intermediate servers, or an accuracy measurement showing DDP-SA falls below standalone LDP performance at the same privacy budget.

Figures

Figures reproduced from arXiv: 2604.07125 by Alla Jammine, Farid Nait-Abdesselam, Wenjing Wei.

Figure 1. DDP-SA framework diagram. Our system model consists of three types of entities: clients, intermediate servers, and a parameter server. Compared to the conventional two-layer FL architecture with only clients and a parameter server, our design introduces a layer of intermediate servers that securely aggregate model updates using ASS. This layer ensures that the parameter server reconstructs …
Figure 2. DDP-SA workflow, general scalable framework.
Figure 3. Number of communication rounds for different defensive mechanisms.
Figure 6. Average training time per round for different defensive mechanisms.
Figure 5. Total time to convergence for different defensive mechanisms.
Figure 7. Accuracy for different defensive mechanisms. (a) Test loss. (b) Test R
Figure 8. Accuracy as a function of privacy budget.
Figure 9. Accuracy as a function of the number of clients.
Figure 10. Training loss as a function of the number of communication rounds.
Original abstract

This article presents DDP-SA, a scalable privacy-preserving federated learning framework that jointly leverages client-side local differential privacy (LDP) and full-threshold additive secret sharing (ASS) for secure aggregation. Unlike existing methods that rely solely on differential privacy or on secure multi-party computation (MPC), DDP-SA integrates both techniques to deliver stronger end-to-end privacy guarantees while remaining computationally practical. The framework introduces a two-stage protection mechanism: clients first perturb their local gradients with calibrated Laplace noise, then decompose the noisy gradients into additive secret shares that are distributed across multiple intermediate servers. This design ensures that (i) no single compromised server or communication channel can reveal any information about individual client updates, and (ii) the parameter server reconstructs only the aggregated noisy gradient, never any client-specific contribution. Extensive experiments show that DDP-SA achieves substantially higher model accuracy than standalone LDP while providing stronger privacy protection than MPC-only approaches. The proposed framework scales linearly with the number of participants and offers a practical, privacy-preserving solution for federated learning applications with controllable computational and communication overhead.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents DDP-SA, a federated learning framework combining client-side Laplace noise (LDP) with full-threshold additive secret sharing (ASS) for secure aggregation across intermediate servers. Clients perturb local gradients with calibrated noise then distribute shares; the parameter server reconstructs only the aggregate noisy gradient. The paper claims this yields stronger end-to-end privacy than LDP or MPC alone, substantially higher model accuracy than standalone LDP, linear scaling with participants, and practical overhead.

Significance. If the accuracy and privacy claims are substantiated by properly controlled experiments that isolate the effect of the ASS layer, the work would supply a concrete, scalable design point between pure LDP and full MPC. The linear scaling and explicit two-stage mechanism are positive features that could be useful for deployments where a single server must not see individual updates.

major comments (2)
  1. [Abstract / mechanism description] The claim of 'substantially higher model accuracy than standalone LDP' is not supported by the stated construction. Clients add Laplace noise and then secret-share the noisy values; the parameter server recovers exactly the sum of those noisy gradients. This yields stochastic gradients identical to those of a baseline in which clients transmit noisy gradients directly to a single server. Any accuracy difference must therefore arise from unstated differences in noise scale, clipping threshold, effective privacy budget, or baseline implementation rather than from the DDP-SA architecture itself.
  2. [Threat model / privacy analysis] The paper asserts 'stronger privacy protection than MPC-only approaches' yet provides no formal composition theorem or comparison of the resulting (ε, δ) bounds when LDP noise is combined with ASS. It is unclear whether the end-to-end guarantee is strictly stronger than LDP alone or merely shifts the trust assumptions to the intermediate servers.
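The first objection is easy to verify numerically: reconstructing the sum of secret-shared noisy gradients yields the same aggregate, up to floating-point rounding, as a baseline where clients send the noisy gradients to one server directly. A minimal check with made-up values:

```python
import numpy as np

rng = np.random.default_rng(7)

def share(v, k, rng):
    """Toy additive secret sharing over the reals (real ASS uses a finite ring)."""
    parts = [rng.normal(size=v.shape) for _ in range(k - 1)]
    parts.append(v - sum(parts))
    return parts

# Stand-ins for already Laplace-noised client updates (5 clients, 4 dims).
noisy_grads = [rng.laplace(0.0, 0.5, size=4) for _ in range(5)]

# DDP-SA path: share, aggregate per server, then sum the partial aggregates.
shares = [share(g, 3, rng) for g in noisy_grads]
ddp_sa_aggregate = sum(sum(client[s] for client in shares) for s in range(3))

# Baseline path: clients send noisy gradients straight to a single server.
baseline_aggregate = sum(noisy_grads)

print(np.allclose(ddp_sa_aggregate, baseline_aggregate))  # True
```

The ASS layer changes who can see what, not the statistics of the gradient the model trains on.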
minor comments (2)
  1. [Abstract / Experiments] The abstract and introduction omit concrete experimental details (datasets, model architectures, number of clients, privacy budgets ε, baseline implementations, and statistical significance tests). These must be supplied with tables or figures to allow evaluation of the accuracy and scalability claims.
  2. [Preliminaries / Framework] Notation for the Laplace scale and the secret-sharing threshold is introduced without an explicit equation linking them to the final privacy budget; a short derivation or reference to the composition would improve clarity.
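For reference, the missing link between Laplace scale and budget is the standard calibration b = Δ₁f / ε (Dwork and Roth). A small sketch, with an assumed clipping norm of 1 standing in for the sensitivity:

```python
import math

def laplace_scale(sensitivity, epsilon):
    """Standard Laplace-mechanism calibration: b = Δ1 f / ε."""
    return sensitivity / epsilon

def laplace_pdf(z, b):
    """Density of the Laplace(0, b) distribution at z."""
    return math.exp(-abs(z) / b) / (2 * b)

# Assumed values: gradients clipped to L1 norm 1, per-round budget ε = 0.5.
b = laplace_scale(1.0, 0.5)  # b = 2.0

# ε-DP means densities at neighboring true values (differing by at most Δ1)
# never differ by more than a factor of exp(ε), at any output z.
z, delta1 = 0.3, 1.0
ratio = laplace_pdf(z, b) / laplace_pdf(z - delta1, b)
print(ratio <= math.exp(0.5))  # True
```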

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments. We address each major point below and indicate the revisions that will be incorporated into the next version of the manuscript.

Point-by-point responses
  1. Referee: [Abstract / mechanism description] The claim of 'substantially higher model accuracy than standalone LDP' is not supported by the stated construction. Clients add Laplace noise and then secret-share the noisy values; the parameter server recovers exactly the sum of those noisy gradients. This yields stochastic gradients identical to those of a baseline in which clients transmit noisy gradients directly to a single server. Any accuracy difference must therefore arise from unstated differences in noise scale, clipping threshold, effective privacy budget, or baseline implementation rather than from the DDP-SA architecture itself.

    Authors: We acknowledge that this observation is correct. Because clients apply Laplace noise before secret-sharing, the parameter server reconstructs precisely the same sum of noisy gradients that would be obtained by transmitting the noisy values directly. Consequently, any accuracy advantage observed in our experiments cannot be attributed to the DDP-SA mechanism itself and must result from uncontrolled differences in noise calibration, clipping, privacy budget allocation, or baseline implementation details. We will revise the abstract, introduction, and experimental discussion to remove the claim of 'substantially higher model accuracy' as an architectural benefit. The revised text will state that accuracy is comparable to a correctly implemented LDP baseline while highlighting the additional privacy and trust-distribution advantages of the two-stage construction. revision: yes

  2. Referee: [Threat model / privacy analysis] The paper asserts 'stronger privacy protection than MPC-only approaches' yet provides no formal composition theorem or comparison of the resulting (ε, δ) bounds when LDP noise is combined with ASS. It is unclear whether the end-to-end guarantee is strictly stronger than LDP alone or merely shifts the trust assumptions to the intermediate servers.

    Authors: We agree that a formal privacy analysis is required. The current manuscript only provides an informal argument. DDP-SA composes local Laplace noise with full-threshold additive secret sharing so that (i) each client’s update satisfies LDP against reconstruction by any proper subset of intermediate servers and (ii) the parameter server receives only the noisy aggregate. This yields a strictly stronger guarantee than MPC-only (which reveals an exact aggregate) and, from the parameter server’s viewpoint, stronger than direct LDP (which exposes individual noisy gradients). We will add a dedicated privacy-analysis section that states the trust assumptions (non-collusion of the intermediate servers), derives the end-to-end (ε, δ) bounds via sequential composition, and supplies explicit comparisons with both LDP and MPC baselines. revision: yes
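The sequential composition the rebuttal promises is, in its basic pure-DP form, additive across rounds; the advanced composition theorem of Dwork, Rothblum, and Vadhan trades a small δ for roughly √T growth. A sketch with illustrative numbers not taken from the paper:

```python
import math

def basic_composition(eps_round, rounds):
    """Pure ε-DP sequential composition: per-round budgets simply add."""
    return eps_round * rounds

def advanced_composition(eps_round, rounds, delta):
    """Advanced composition (Dwork-Rothblum-Vadhan): roughly sqrt(T) growth."""
    return (math.sqrt(2 * rounds * math.log(1 / delta)) * eps_round
            + rounds * eps_round * (math.exp(eps_round) - 1))

# Assumed values: ε0 = 0.1 per round, T = 100 rounds, δ = 1e-5.
print(basic_composition(0.1, 100))           # 10.0
print(advanced_composition(0.1, 100, 1e-5))  # ≈ 5.85, tighter than 10.0
```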

Circularity Check

0 steps flagged

No significant circularity in framework description or claims

full rationale

The paper proposes a hybrid privacy framework that combines client-side Laplace noise addition with additive secret sharing for secure aggregation, with the parameter server reconstructing only the sum of noisy gradients. This construction is presented as a design choice rather than a mathematical derivation. The accuracy claim is explicitly tied to experimental results, not to any first-principles prediction or fitted parameter that reduces to its own inputs by construction. No self-definitional steps, load-bearing self-citations, or ansatz smuggling are present in the abstract or described mechanism. The architecture is self-contained against the stated threat model and does not rely on renaming known results or importing uniqueness theorems.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The design rests on standard assumptions from differential privacy and secure multi-party computation; no new entities are introduced.

free parameters (1)
  • Laplace noise scale
    Must be calibrated to a chosen privacy budget; exact mapping from epsilon to scale is not specified in the abstract.
axioms (2)
  • domain assumption Additive secret sharing is secure when at most one server is compromised
    Invoked to guarantee that no single server learns individual client contributions.
  • standard math Laplace mechanism satisfies epsilon-differential privacy for the chosen scale
    Standard property of the Laplace distribution used for local perturbation.

pith-pipeline@v0.9.0 · 5499 in / 1264 out tokens · 35543 ms · 2026-05-10T17:50:57.196831+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: the paper's claim is directly supported by a theorem in the formal canon.
supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: the paper appears to rely on the theorem as machinery.
contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

110 extracted references · 15 canonical work pages

  1. [1]

    Federated machine learning: Concept and applications,

    Q. Yang, Y . Liu, T. Chen, and Y . Tong, “Federated machine learning: Concept and applications,”ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019

  2. [2]

    Communication-efficient learning of deep networks from decentral- ized data,

    B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentral- ized data,” inArtificial intelligence and statistics. PMLR, 2017, pp. 1273–1282

  3. [3]

    Privacy- preserving deep learning: Revisited and enhanced,

    L. T. Phong, Y . Aono, T. Hayashi, L. Wang, and S. Moriai, “Privacy- preserving deep learning: Revisited and enhanced,” inApplications and Techniques in Information Security: 8th International Conference, ATIS 2017, Auckland, New Zealand, July 6–7, 2017, Proceedings. Springer, 2017, pp. 100–110

  4. [4]

    Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,

    M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in2019 IEEE symposium on security and privacy (SP). IEEE, 2019, pp. 739–753

  5. [5]

    Advances and open problems in federated learning,

    P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummingset al., “Advances and open problems in federated learning,”Foundations and Trends® in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021

  6. [6]

    On defensive neural networks against inference attack in federated learning,

    H. Lee, J. Kim, R. Hussain, S. Cho, and J. Son, “On defensive neural networks against inference attack in federated learning,” inICC 2021- IEEE International Conference on Communications. IEEE, 2021, pp. 1–6

  7. [7]

    The algorithmic foundations of differential privacy,

    C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,”Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014

  8. [8]

    Practical secure aggre- gation for privacy-preserving machine learning,

    K. Bonawitz, V . Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, “Practical secure aggre- gation for privacy-preserving machine learning,” inproceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 1175–1191

  9. [9]

    Verifynet: Secure and verifiable federated learning,

    G. Xu, H. Li, S. Liu, K. Yang, and X. Lin, “Verifynet: Secure and verifiable federated learning,”IEEE Transactions on Information Forensics and Security, vol. 15, pp. 911–926, 2020

  10. [10]

    Privacy-preserving deep learning via additively homomorphic encryption,

    Y . Aono, T. Hayashi, L. Wang, S. Moriaiet al., “Privacy-preserving deep learning via additively homomorphic encryption,”IEEE transac- tions on information forensics and security, vol. 13, no. 5, pp. 1333– 1345, 2017. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING 15

  11. [11]

    Towards efficient and privacy-preserving federated deep learning,

    M. Hao, H. Li, G. Xu, S. Liu, and H. Yang, “Towards efficient and privacy-preserving federated deep learning,” inICC 2019-2019 IEEE international conference on communications (ICC). IEEE, 2019, pp. 1–6

  12. [12]

    Efficient and privacy-enhanced federated learning for industrial artificial intelli- gence,

    M. Hao, H. Li, X. Luo, G. Xu, H. Yang, and S. Liu, “Efficient and privacy-enhanced federated learning for industrial artificial intelli- gence,”IEEE Transactions on Industrial Informatics, vol. 16, no. 10, pp. 6532–6542, 2020

  13. [13]

    Communication-efficient learning of deep networks from decentralized data,

    H. B. McMahan, E. Moore, D. Ramage, and B. A. y Arcas, “Federated learning of deep networks using model averaging,” ArXiv, vol. abs/1602.05629, 2016. [Online]. Available: https://api. semanticscholar.org/CorpusID:16861557

  14. [14]

    Exploiting unintended feature leakage in collaborative learning,

    L. Melis, C. Song, E. De Cristofaro, and V . Shmatikov, “Exploiting unintended feature leakage in collaborative learning,” in2019 IEEE symposium on security and privacy (SP). IEEE, 2019, pp. 691–706

  15. [15]

    Model inversion attacks that exploit confidence information and basic countermeasures,

    M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, 2015, pp. 1322–1333

  16. [16]

    Deep leakage from gradients,

    L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” inPro- ceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 14 774–14 784

  17. [17]

    idlg: Improved deep leakage from gradients.arXiv preprint arXiv:2001.02610, 2020

    B. Zhao, K. R. Mopuri, and H. Bilen, “idlg: Improved deep leakage from gradients,”arXiv preprint arXiv:2001.02610, 2020

  18. [18]

    Deep models under the gan: information leakage from collaborative deep learning,

    B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the gan: information leakage from collaborative deep learning,” inProceedings of the 2017 ACM SIGSAC conference on computer and communications security, 2017, pp. 603–618

  19. [19]

    Inverting gradients-how easy is it to break privacy in federated learning?

    J. Geiping, H. Bauermeister, H. Dr ¨oge, and M. Moeller, “Inverting gradients-how easy is it to break privacy in federated learning?”Ad- vances in neural information processing systems, vol. 33, pp. 16 937– 16 947, 2020

  20. [20]

    Personalized federated learning with differential privacy,

    R. Hu, Y . Guo, H. Li, Q. Pei, and Y . Gong, “Personalized federated learning with differential privacy,”IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9530–9539, 2020

  21. [21]

    Differentially private federated learning: A client level perspective.arXiv preprint arXiv:1712.07557, 2017

    R. C. Geyer, T. Klein, and M. Nabi, “Differentially private federated learning: A client level perspective,”arXiv preprint arXiv:1712.07557, 2017

  22. [22]

    Federated learning with differential privacy: Algorithms and performance analysis,

    K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. Quek, and H. V . Poor, “Federated learning with differential privacy: Algorithms and performance analysis,”IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454–3469, 2020

  23. [23]

    Fedsel: Federated sgd under local differential privacy with top-k dimension selection,

    R. Liu, Y . Cao, M. Yoshikawa, and H. Chen, “Fedsel: Federated sgd under local differential privacy with top-k dimension selection,” inDatabase Systems for Advanced Applications: 25th International Conference, DASFAA 2020, Jeju, South Korea, September 24–27, 2020, Proceedings, Part I 25. Springer, 2020, pp. 485–501

  24. [24]

    Local differential privacy-based federated learning for internet of things,

    Y . Zhao, J. Zhao, M. Yang, T. Wang, N. Wang, L. Lyu, D. Niyato, and K.-Y . Lam, “Local differential privacy-based federated learning for internet of things,”IEEE Internet of Things Journal, vol. 8, no. 11, pp. 8836–8853, 2021

  25. [25]

    Federated f-differential privacy,

    Q. Zheng, S. Chen, Q. Long, and W. Su, “Federated f-differential privacy,” inInternational Conference on Artificial Intelligence and Statistics. PMLR, 2021, pp. 2251–2259

  26. [26]

    Wireless federated learning with local differential privacy,

    M. Seif, R. Tandon, and M. Li, “Wireless federated learning with local differential privacy,” in2020 IEEE International Symposium on Information Theory (ISIT). IEEE, 2020, pp. 2604–2609

  27. [27]

    Differentially private federated learning on non-iid data: Convergence analysis and adaptive optimization,

    L. Chen, X. Ding, Z. Bao, P. Zhou, and H. Jin, “Differentially private federated learning on non-iid data: Convergence analysis and adaptive optimization,”IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 9, pp. 4567–4581, 2024

  28. [28]

    Noise-aware algorithm for heterogeneous differentially private federated learning,

    S. Malekmohammadi, Y . Yu, and Y . Cao, “Noise-aware algorithm for heterogeneous differentially private federated learning,” inProceedings of the 41st International Conference on Machine Learning, 2024, pp. 34 461–34 498

  29. [29]

    Cross-silo feder- ated learning with record-level personalized differential privacy,

    J. Liu, J. Lou, L. Xiong, J. Liu, and X. Meng, “Cross-silo feder- ated learning with record-level personalized differential privacy,” in Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2024, pp. 303–317

  30. [30]

    Ali-dpfl: Differentially private federated learning with adaptive local iterations,

    X. Ling, J. Fu, K. Wang, H. Liu, and Z. Chen, “Ali-dpfl: Differentially private federated learning with adaptive local iterations,” in2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM). IEEE, 2024, pp. 349–358

  31. [31]

    Private, efficient, and accurate: Protecting models trained by multi-party learn- ing with differential privacy,

    W. Ruan, M. Xu, W. Fang, L. Wang, L. Wang, and W. Han, “Private, efficient, and accurate: Protecting models trained by multi-party learn- ing with differential privacy,” in2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023, pp. 1926–1943

  32. [32]

    Differentially private feder- ated learning on heterogeneous data,

    M. Noble, A. Bellet, and A. Dieuleveut, “Differentially private feder- ated learning on heterogeneous data,” inInternational Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 10 110–10 145

  33. [33]

    Differentially private federated learning: A systematic review,

    J. Fu, Y . Hong, X. Ling, L. Wang, X. Ran, Z. Sun, W. H. Wang, Z. Chen, and Y . Cao, “Differentially private federated learning: A systematic review,”arXiv preprint arXiv:2405.08299, 2024

  34. [34]

    Practical differentially private and byzantine-resilient federated learning,

    Z. Xiang, T. Wang, W. Lin, and D. Wang, “Practical differentially private and byzantine-resilient federated learning,”Proceedings of the ACM on Management of Data, vol. 1, no. 2, pp. 1–26, 2023

  35. [35]

    Adap dp-fl: Differentially private federated learning with adaptive noise,

    J. Fu, Z. Chen, and X. Han, “Adap dp-fl: Differentially private federated learning with adaptive noise,” in2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). IEEE, 2022, pp. 656–663

  36. [36]

    Soteriafl: A unified framework for private federated learning with communication compression,

    Z. Li, H. Zhao, B. Li, and Y . Chi, “Soteriafl: A unified framework for private federated learning with communication compression,”Advances in Neural Information Processing Systems, vol. 35, pp. 4285–4300, 2022

  37. [37]

    Differentially private federated learning via inexact admm with multiple local updates,

    M. Ryu and K. Kim, “Differentially private federated learning via inexact admm with multiple local updates,”arXiv preprint arXiv:2202.09409, 2022

  38. [38]

    User- level privacy-preserving federated learning: Analysis and performance optimization,

    K. Wei, J. Li, M. Ding, C. Ma, H. Su, B. Zhang, and H. V . Poor, “User- level privacy-preserving federated learning: Analysis and performance optimization,”IEEE Transactions on Mobile Computing, vol. 21, no. 9, pp. 3388–3401, 2021

  39. [39]

    Projected federated averaging with heterogeneous differential privacy,

    J. Liu, J. Lou, L. Xiong, J. Liu, and X. Meng, “Projected federated averaging with heterogeneous differential privacy,”Proceedings of the VLDB Endowment, vol. 15, no. 4, pp. 828–840, 2021

  40. [40]

    Dp- fl: a novel differentially private federated learning framework for the unbalanced data,

    X. Huang, Y . Ding, Z. L. Jiang, S. Qi, X. Wang, and Q. Liao, “Dp- fl: a novel differentially private federated learning framework for the unbalanced data,”World Wide Web, vol. 23, pp. 2529–2545, 2020

  41. [41]

    Dp-admm: Admm-based distributed learning with differential privacy,

    Z. Huang, R. Hu, Y . Guo, E. Chan-Tin, and Y . Gong, “Dp-admm: Admm-based distributed learning with differential privacy,”IEEE Transactions on Information Forensics and Security, vol. 15, pp. 1002– 1012, 2019

  42. [42]

    Dynamic personalized federated learning with adaptive differential privacy,

    X. Yang, W. Huang, and M. Ye, “Dynamic personalized federated learning with adaptive differential privacy,”Advances in Neural In- formation Processing Systems, vol. 36, pp. 72 181–72 192, 2023

  43. [43]

    Learning to generate image em- beddings with user-level differential privacy,

    Z. Xu, M. Collins, Y . Wang, L. Panait, S. Oh, S. Augenstein, T. Liu, F. Schroff, and H. B. McMahan, “Learning to generate image em- beddings with user-level differential privacy,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7969–7980

  44. [44]

    Make landscape flatter in differentially private federated learning,

    Y . Shi, Y . Liu, K. Wei, L. Shen, X. Wang, and D. Tao, “Make landscape flatter in differentially private federated learning,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 24 552–24 562

  45. [45]

    Understanding clipping for federated learning: Convergence and client-level differen- tial privacy,

    X. Zhang, X. Chen, M. Hong, Z. S. Wu, and J. Yi, “Understanding clipping for federated learning: Convergence and client-level differen- tial privacy,” inInternational Conference on Machine Learning, ICML

  46. [46]

    26 048–26 067

    PMLR, 2022, pp. 26 048–26 067

  47. [47]

    Differentially private federated learning with local regularization and sparsification,

    A. Cheng, P. Wang, X. S. Zhang, and J. Cheng, “Differentially private federated learning with local regularization and sparsification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10 122–10 131

  48. [48]

    Personal- ization improves privacy-accuracy tradeoffs in federated learning,

    A. Bietti, C.-Y . Wei, M. Dudik, J. Langford, and S. Wu, “Personal- ization improves privacy-accuracy tradeoffs in federated learning,” in International Conference on Machine Learning. PMLR, 2022, pp. 1945–1962

  49. [49]

    Differ- entially private learning with adaptive clipping,

    G. Andrew, O. Thakkar, B. McMahan, and S. Ramaswamy, “Differ- entially private learning with adaptive clipping,”Advances in Neural Information Processing Systems, vol. 34, pp. 17 455–17 466, 2021

  50. [50]

    Learning differentially private recurrent language models,

    H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang, “Learning differentially private recurrent language models,” inInternational Con- ference on Learning Representations, 2018, pp. 1–14

  51. [51]

    The fun- damental price of secure aggregation in differentially private federated learning,

    W.-N. Chen, C. A. C. Choo, P. Kairouz, and A. T. Suresh, “The fun- damental price of secure aggregation in differentially private federated learning,” inInternational Conference on Machine Learning. PMLR, 2022, pp. 3056–3089

  52. [52]

    The poisson binomial mech- anism for unbiased federated learning with secure aggregation,

    W.-N. Chen, A. Ozgur, and P. Kairouz, “The poisson binomial mech- anism for unbiased federated learning with secure aggregation,” in International Conference on Machine Learning. PMLR, 2022, pp. 3490–3506

  53. [53]

    D2p-fed: Differentially private federated learning with efficient communication,

    L. Wang, R. Jia, and D. Song, “D2p-fed: Differentially private federated learning with efficient communication,”arXiv preprint arXiv:2006.13039, 2020

  54. [54]

    T. Stevens, C. Skalka, C. Vincent, J. Ring, S. Clark, and J. Near, “Efficient differentially private secure aggregation for federated learning via hardness of learning with errors,” in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 1379–1395.

  55. [55]

    P. Kairouz, Z. Liu, and T. Steinke, “The distributed discrete Gaussian mechanism for federated learning with secure aggregation,” in International Conference on Machine Learning. PMLR, 2021, pp. 5201–5212.

  56. [56]

    N. Agarwal, P. Kairouz, and Z. Liu, “The Skellam mechanism for differentially private federated learning,” Advances in Neural Information Processing Systems, vol. 34, pp. 5052–5064, 2021.

  57. [57]

    R. Kerkouche, G. Ács, C. Castelluccia, and P. Genevès, “Compression boosts differentially private federated learning,” in 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2021, pp. 304–318.

  58. [58]

    N. Agarwal, A. T. Suresh, F. X. X. Yu, S. Kumar, and B. McMahan, “cpSGD: Communication-efficient and differentially-private distributed SGD,” Advances in Neural Information Processing Systems, vol. 31, 2018.

  59. [59]

    M. Naseri, J. Hayes, and E. De Cristofaro, “Local and central differential privacy for robustness and privacy in federated learning,” in Proceedings of the 29th Network and Distributed System Security Symposium (NDSS), 2022.

  60. [60]

    Y. Yang, B. Hui, H. Yuan, N. Gong, and Y. Cao, “PrivateFL: Accurate, differentially private federated learning via personalized data transformation,” in 32nd USENIX Security Symposium (USENIX Security 23), 2023, pp. 1595–1612.

  61. [61]

    A. Triastcyn and B. Faltings, “Federated learning with Bayesian differential privacy,” in 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019, pp. 2587–2596.

  62. [62]

    J. Zhang, D. Fay, and M. Johansson, “Dynamic privacy allocation for locally differentially private federated learning with composite objectives,” in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 9461–9465.

  63. [63]

    M. Varun, S. Feng, H. Wang, S. Sural, and Y. Hong, “Towards accurate and stronger local differential privacy for federated learning with staircase randomized response,” in 14th ACM Conference on Data and Application Security and Privacy. ACM, 2024.

  64. [64]

    S. Zhang, J. Zhang, G. Zhu, S. Long, and L. Zhetao, “Personalized federated learning method based on Bregman divergence and differential privacy (in Chinese),” Journal of Software, vol. 35, no. 11, pp. 5249–5262, 2023.

  65. [65]

    B. Wang, Y. Chen, H. Jiang, and Z. Zhao, “PPeFL: Privacy-preserving edge federated learning with local differential privacy,” IEEE Internet of Things Journal, vol. 10, no. 17, pp. 15488–15500, 2023.

  66. [66]

    X. Jiang, X. Zhou, and J. Grossklags, “SignDS-FL: Local differentially private federated learning with sign-based dimension selection,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 13, no. 5, pp. 1–22, 2022.

  67. [67]

    Y. Li, G. Wang, T. Peng, and G. Feng, “FedTA: Locally-differential federated learning with top-k mechanism and Adam optimization,” in Ubiquitous Security, G. Wang, K.-K. R. Choo, J. Wu, and E. Damiani, Eds. Singapore: Springer Nature Singapore, 2023, pp. 380–391.

  68. [68]

    Z. Lian, Q. Yang, Q. Zeng, and C. Su, “WebFed: Cross-platform federated learning framework based on web browser with local differential privacy,” in ICC 2022 - IEEE International Conference on Communications. IEEE, 2022, pp. 2071–2076.

  69. [69]

    P. C. Mahawaga Arachchige, D. Liu, S. Camtepe, S. Nepal, M. Grobler, P. Bertok, and I. Khalil, “Local differential privacy for federated learning,” in European Symposium on Research in Computer Security. Springer, 2022, pp. 195–216.

  70. [70]

    C. Wang, X. Wu, G. Liu, T. Deng, K. Peng, and S. Wan, “Safeguarding cross-silo federated learning with local differential privacy,” Digital Communications and Networks, vol. 8, no. 4, pp. 446–454, 2022.

  71. [71]

    J. Zhao, M. Yang, R. Zhang, W. Song, J. Zheng, J. Feng, and S. Matwin, “Privacy-enhanced federated learning: A restrictively self-sampled and data-perturbed local differential privacy method,” Electronics, vol. 11, no. 23, p. 4007, 2022.

  72. [72]

    L. Sun, J. Qian, and X. Chen, “LDP-FL: Practical private aggregation in federated learning with local differential privacy,” in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2021.

  73. [73]

    G. Yang, S. Wang, and H. Wang, “Federated learning with personalized local differential privacy,” in 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS). IEEE, 2021, pp. 484–489.

  74. [74]

    Y. Wang, Y. Tong, and D. Shi, “Federated latent Dirichlet allocation: A local differential privacy based framework,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, 2020, pp. 6283–6290.

  75. [75]

    N. Wang, X. Xiao, Y. Yang, J. Zhao, S. C. Hui, H. Shin, J. Shin, and G. Yu, “Collecting and analyzing multidimensional data with local differential privacy,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019, pp. 638–649.

  76. [76]

    S. Truex, L. Liu, K.-H. Chow, M. E. Gursoy, and W. Wei, “LDP-Fed: Federated learning with local differential privacy,” in Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, 2020, pp. 61–66.

  77. [77]

    Y. Liu, S. Zhao, L. Xiong, Y. Liu, and H. Chen, “Echo of neighbors: Privacy amplification for personalized private federated learning with shuffle model,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2023.

  78. [78]

    S. P. Liew, S. Hasegawa, and T. Takahashi, “Shuffled check-in: Privacy amplification towards practical distributed learning,” in Computer Security Symposium 2023 (CSS 2023). Information Processing Society of Japan, 2023.

  79. [79]

    R. Liu, Y. Cao, H. Chen, R. Guo, and M. Yoshikawa, “FLAME: Differentially private federated learning in the shuffle model,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, 2021, pp. 8688–8696.

  80. [80]

    E. Chen, Y. Cao, and Y. Ge, “A generalized shuffle framework for privacy amplification: Strengthening privacy guarantees and enhancing utility,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 10, 2024, pp. 11267–11275.
