pith. machine review for the scientific record.

arxiv: 2605.13708 · v1 · submitted 2026-05-13 · 💻 cs.CR · cs.DC · cs.LG

Recognition: unknown

DisAgg: Distributed Aggregators for Efficient Secure Aggregation in Federated Learning

Authors on Pith · no claims yet

Pith reviewed 2026-05-14 17:46 UTC · model grok-4.3

classification 💻 cs.CR · cs.DC · cs.LG
keywords federated learning · secure aggregation · secret sharing · privacy preservation · distributed computation · client committee

The pith

DisAgg distributes aggregation to a small client committee via secret sharing to cut secure FL computation while preserving privacy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DisAgg, a protocol where clients secret-share their model updates to a small committee of peer clients called Aggregators. These Aggregators compute partial sums locally and return only the combined shares to the central server for final reconstruction. This removes the need for per-client masking or heavy homomorphic encryption used in prior one-shot protocols. The design targets large-scale settings such as 100,000 clients each sending 100,000-dimensional vectors and reports a 4.6x speedup over the previous best method while keeping updates private from a curious server and a limited number of colluding clients.

Core claim

DisAgg has clients secret-share their update vectors to a committee of Aggregators, which locally compute partial sums on the shares and return only the aggregated shares to the server for reconstruction. This removes local masking and public-key operations and delivers a 4.6x speedup over OPA for 100k clients with 100k-dimensional vectors.
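The claimed flow can be sketched with plain additive secret sharing over a prime field. This is a minimal illustration, not the paper's construction (which may use Shamir or packed sharing and different committee parameters): each client splits its update into one share per Aggregator, each Aggregator sums the shares it holds, and the server only ever adds the aggregated shares.

```python
import secrets

P = 2**61 - 1  # prime modulus; all shares and sums live in Z_P

def share(update, a):
    """Split a client's update vector into `a` additive shares mod P.
    Any a-1 of the shares are jointly uniform, so they reveal nothing."""
    rand = [[secrets.randbelow(P) for _ in update] for _ in range(a - 1)]
    last = [(x - sum(r[j] for r in rand)) % P for j, x in enumerate(update)]
    return rand + [last]

def run_round(client_updates, a):
    """One aggregation round: every client sends share k to Aggregator k,
    each Aggregator sums what it received, and the server adds the `a`
    aggregated shares -- it never sees an individual update."""
    dim = len(client_updates[0])
    partial = [[0] * dim for _ in range(a)]          # Aggregators' running sums
    for upd in client_updates:
        for k, sh in enumerate(share(upd, a)):
            partial[k] = [(p + s) % P for p, s in zip(partial[k], sh)]
    return [sum(col) % P for col in zip(*partial)]   # server-side reconstruction

updates = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
print(run_round(updates, a=4))  # [111, 222, 333]
```

Note that nothing here is per-client masking or public-key work: client cost is random sampling plus modular subtraction, which is the efficiency argument the core claim rests on.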

What carries the argument

A small committee of clients acting as Aggregators that receive secret shares, perform local partial summation, and forward only the aggregated shares to the server.

If this is right

  • Supports scaling federated learning to higher client counts and larger model dimensions without proportional growth in client-side computation.
  • Lowers total latency by replacing per-client cryptographic masking with lighter secret-sharing operations.
  • Keeps the single server interaction per round while adding resilience through the aggregator committee structure.
  • Reduces the computational burden on resource-constrained devices such as mobile clients on 5G networks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The committee approach may extend to other distributed aggregation tasks where a central party needs sums without seeing individuals.
  • Dynamic selection of the aggregator committee based on client uptime or compute capacity could further improve reliability in practice.
  • The reported trade-off between communication and computation could be tested by varying committee size in large-scale simulations.

Load-bearing premise

The protocol assumes secret sharing to the aggregator committee delivers privacy without extra overhead and that the fraction of colluding clients stays below the threshold that would allow reconstruction of individual updates.
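The collusion threshold is exactly the t-out-of-n guarantee of Shamir-style sharing: any t shares reconstruct the secret, fewer than t are information-theoretically useless, and shares add coordinate-wise, which is what lets Aggregators sum locally. A small sketch (our illustration; the paper's exact scheme and parameters may differ):

```python
import secrets

P = 2**31 - 1  # prime field for the demo

def shamir_share(secret, t, n):
    """Shares (x, f(x)) of a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over Z_P."""
    total = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

s1 = shamir_share(10, t=3, n=5)
s2 = shamir_share(32, t=3, n=5)
# Aggregators can sum shares pointwise without learning either input...
summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(s1, s2)]
# ...and any 3 of the 5 summed shares reconstruct only the total, 42.
print(reconstruct(summed[:3]))  # 42
```

If t or more committee members collude, the same interpolation recovers an individual client's shares, which is why the premise that collusion stays below the threshold is load-bearing.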

What would settle it

Measure whether the full protocol with 100k clients each sending 100k-dimensional vectors runs 4.6x faster than OPA, while verifying that no client update can be recovered by the server or by any tolerated number of colluding clients.

Figures

Figures reproduced from arXiv: 2605.13708 by Anastasios Drosou, Dimitrios Alexopoulos, Giorgos Tatsis, Haaris Mehmood, Jie Xu, Karthikeyan Saravanan, Mete Ozay.

Figure 1. Overview of the SECAGG protocol compared with the proposed DISAGG. SECAGG uses masks for the models to hide individual inputs and the server aggregates the masked models, whereas DISAGG secret-shares part of the model parameters to the Aggregators for them to perform the partial aggregation instead of the server.
Figure 2. High-level overview of the proposed secure aggregation protocol DISAGG. U and A denote the set of clients and Aggregators respectively, x̂_i denotes the secret (model update) of the i-th client, and PKE stands for the public-key-encryption protocol.
Figure 3. Contour plot showing the optimal number of Aggregators A for DISAGG and OPA as well as the expected speedup over OPA across different (M, N) settings under a 5G client connectivity assumption (2 MBps upload / 20 MBps download), with k = 0.3 and kcomp = 0.66.
Figure 4. Speedup of DISAGG over OPA as a function of the combined dropout and corruption factor k = γ + δ. Results are shown for different (M, N) pairs. DISAGG consistently outperforms OPA, achieving over 4× improvement for practical levels of k up to 0.3.
Figure 5. Aggregator downstream volume (per iteration) versus committee size, with a target 3× speedup over OPA under the 5G setting of the main paper. Increasing A reduces Q = N/A and thus Aggregator download, trading off against speedup.
Figure 6. Overall combined computation and communication timings per FL iteration for SECAGG+, LIGHTSECAGG (LIGHTSA), OPA and DISAGG. DISAGG is 10% faster for M = 1k (left) and 3.2× faster for M = 10k (right) over OPA.
Figure 7. Speedup of DISAGG over OPA for one FL iteration with M = 10k, N = 10k, γ = 0.1, δ = 0.2, and varying committee size A. DISAGG achieves 3× speedup including the setup phase (top) and 3.1× without it (bottom).
Figure 8. Speedup of DISAGG over OPA for one FL iteration with k = γ + δ given γ, δ ∈ {0.01, 0.05, 0.10, 0.15}, M = 10k, N = 10k. The top graph depicts timings including setup; the bottom graph excludes the setup phase.
Figure 9. Empirical verification of the convergence analyses of DISAGG compared to plaintext FL under tolerance to dropouts via the δ parameter.
Figure 11. Contour plot showing the minimum number of Aggregators A (calculated for minimum packing factor ρ = 1) for DISAGG and OPA and the resulting speedup of DISAGG over OPA across (M, N) under a 5G client connectivity assumption (2 MBps upload / 20 MBps download), with k = 0.3 and kcomp = 0.66.
Figure 12. Contour plot showing the minimum number of Aggregators A for DISAGG and OPA and the resulting speedup of DISAGG over OPA across (M, N) under a 5G client connectivity assumption (2 MBps upload / 20 MBps download), with k = 0.3 and kcomp = 0.66. The number of Aggregators is computed for packing factor ρ = 16, matching OPA's configuration.
Figure 13. Contour plot showing the optimal number of Aggregators A for DISAGG and OPA and the resulting speedup of DISAGG over OPA across (M, N) under a 4G client connectivity assumption (200 kBps upload / 2 MBps download), with k = 0.3 and kcomp = 0.66.
Figure 14. Contour plot showing the optimal number of Aggregators A for DISAGG and OPA and the resulting speedup of DISAGG over OPA across (M, N) under a 3G client connectivity assumption (50 kBps upload / 500 kBps download), with k = 0.3 and kcomp = 0.66.
Figure 15. Contour plot showing the optimal number of Aggregators A for DISAGG and OPA and the speedup of DISAGG over OPA across (M, N) under the BFT constraint (γ + δ ≤ 1/3) and a 5G client connectivity assumption, with γ = δ = 0.1 and kcomp = 0.66.
Figure 16. Training accuracy with different datasets and models using plaintext FL. Q denotes the use of quantization required for cryptographic primitives in secure aggregation; F denotes floating-point precision as used in standard FL.
Figure 18. Time comparison for one FL iteration of DISAGG and OPA with (60% 5G, 40% 4G) clients on the left, (60% 5G, 40% 3G) on the right, and a shared y-axis for each row. The top row includes setup times; the bottom row excludes them. DisAgg is within ±30% of OPA timings under such conditions.
Figure 19. Combined computation and communication time per FL iteration for SECAGG+ and LIGHTSECAGG (LIGHTSA) for M = 1k (left) and M = 10k (right). Solid lines include mask computation time for SECAGG+ and LIGHTSECAGG while dashed lines exclude it.
Original abstract

Federated learning enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure-aggregation schemes protect privacy against an honest-but-curious server, but existing approaches often suffer from many communication rounds, heavy public-key operations, or difficulty handling client dropouts. Recent methods like One-Shot Private Aggregation (OPA) cut rounds to a single server interaction per FL iteration, yet they impose substantial cryptographic and computational overhead on both server and clients. We propose a new protocol called DisAgg that leverages a small committee of clients called Aggregators to perform the aggregation itself: each client secret-shares its update vector to Aggregators, which locally compute partial sums and return only aggregated shares for server-side reconstruction. This design eliminates local masking and expensive homomorphic encryption, reducing endpoint computation while preserving privacy against a curious server and a limited fraction of colluding clients. By leveraging optimal trade-offs between communication and computation costs, DisAgg processes 100k-dimensional update vectors from 100k 5G clients with a 4.6x speedup compared to OPA, the previous best protocol.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents DisAgg, a secure aggregation protocol for federated learning in which clients secret-share their update vectors to a small committee of aggregator clients. The aggregators compute local partial sums and return only aggregated shares to the server for reconstruction. This eliminates per-client masking and homomorphic encryption, yielding a claimed 4.6x speedup over OPA for 100k-dimensional vectors from 100k clients while preserving privacy against a curious server and a bounded fraction of colluding clients.

Significance. If the speedup and privacy properties are rigorously validated, the committee-based secret-sharing design would constitute a meaningful efficiency advance for large-scale secure FL. It trades modest additional communication for substantially lower per-client and server computation, which could improve practicality in settings such as 5G networks where prior one-shot protocols incur prohibitive cryptographic costs.

major comments (2)
  1. [Security Analysis] Security Analysis section: the privacy claim against a curious server and limited colluding clients is asserted via standard secret-sharing properties but lacks a formal game-based proof or reduction; this is load-bearing for the central privacy guarantee.
  2. [Performance Evaluation] Performance Evaluation section (results for 100k clients / 100k dimensions): the 4.6x speedup versus OPA is stated without detailed experimental methodology, baseline implementations, communication-volume measurements, or statistical analysis, preventing verification of the performance claim.
minor comments (2)
  1. [Abstract] Abstract: the phrase 'optimal trade-offs' is used without specifying the concrete committee size, threshold, or cost model that realizes the claimed speedup.
  2. [Notation] Notation and figures: the notation for secret shares and partial sums would benefit from an explicit small-scale example or diagram to aid readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. The comments highlight areas where additional rigor and detail will strengthen the manuscript. We address each major comment below and plan to incorporate the suggested improvements in the revised version.

Point-by-point responses
  1. Referee: [Security Analysis] Security Analysis section: the privacy claim against a curious server and limited colluding clients is asserted via standard secret-sharing properties but lacks a formal game-based proof or reduction; this is load-bearing for the central privacy guarantee.

    Authors: We agree that a formal game-based proof would make the privacy argument more rigorous. The current analysis relies on the standard threshold security of Shamir secret sharing against a bounded number of colluding clients and an honest-but-curious server. In the revision we will add an explicit security game in the Security Analysis section together with a reduction showing that any successful attack on DisAgg implies an attack on the underlying secret-sharing scheme. We will also state the precise collusion threshold and corruption model more formally. revision: yes

  2. Referee: [Performance Evaluation] Performance Evaluation section (results for 100k clients / 100k dimensions): the 4.6x speedup versus OPA is stated without detailed experimental methodology, baseline implementations, communication-volume measurements, or statistical analysis, preventing verification of the performance claim.

    Authors: We acknowledge that the experimental section currently lacks sufficient detail for independent verification. In the revised manuscript we will expand the Performance Evaluation section to include: (i) a complete description of the simulation environment and hardware, (ii) implementation details and source references for both DisAgg and the OPA baseline, (iii) measured communication volumes broken down by client-to-aggregator, aggregator-to-server, and reconstruction phases, and (iv) statistical results from multiple runs with means, standard deviations, and confidence intervals. revision: yes

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The protocol builds on standard secret-sharing and committee aggregation primitives described in the abstract. No equations, fitted parameters, or self-citations are shown that reduce the speedup or privacy claims to definitions or inputs by construction. The 4.6x speedup is presented as an empirical outcome from concrete choices (committee size, threshold, dimension) rather than a tautological renaming or self-referential prediction. The derivation remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on standard cryptographic assumptions for secret sharing and the introduction of a new aggregator role; no free parameters are fitted to data in the abstract.

axioms (1)
  • domain assumption Security model assumes an honest-but-curious server and that only a limited fraction of clients collude.
    Explicitly stated in the abstract as the privacy threat model the protocol preserves against.
invented entities (1)
  • Aggregators committee (no independent evidence)
    purpose: To receive secret shares from clients, locally compute partial sums, and return aggregated shares to the server.
    New protocol component introduced to distribute the aggregation task away from the server and clients.

pith-pipeline@v0.9.0 · 5524 in / 1337 out tokens · 46497 ms · 2026-05-14T17:46:57.049467+00:00 · methodology


Reference graph

Works this paper leans on

49 extracted references · 4 canonical work pages
