pith. machine review for the scientific record.

arxiv: 2604.16574 · v1 · submitted 2026-04-17 · 💻 cs.LG · cs.AI

Recognition: unknown

FedOBP: Federated Optimal Brain Personalization through Cloud-Edge Element-wise Decoupling

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 08:29 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords federated · personalized · parameters · personalization · brain · decoupling · fedobp · global
0 comments

The pith

FedOBP introduces a quantile-thresholded importance score based on a federated first-order Taylor approximation to select a small set of parameters for personalization, claiming better performance than prior PFL methods.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Federated learning lets many devices train a shared model without sending raw data to a central server. But when each device holds different data, the shared model often performs poorly on any single device. Personalized federated learning (PFL) tries to fix this by keeping some parts of the model shared and adapting others locally; the challenge is knowing exactly which parts to adapt. FedOBP borrows an idea from neural network pruning called Optimal Brain Damage, where researchers estimate how much removing a parameter would hurt the loss using a Taylor expansion. Here, the authors build an element-wise importance score that approximates the first-order term of that expansion in a federated way. The score is computed on the server rather than on the client devices, to save edge resources. A quantile threshold then picks only the most important parameters for personalization. Experiments on several datasets with different levels of data heterogeneity show the method beats existing approaches while personalizing very few parameters.
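To fix ideas, here is a minimal sketch of one cloud-edge round as described above. The paper's exact score formula is not reproduced on this page, so the sketch assumes a common first-order Taylor proxy (importance ≈ |gradient × parameter|), with the server-side model delta standing in for the local gradient; the function names are illustrative, not the authors' API.

```python
import numpy as np

def federated_first_order_importance(theta_local, theta_global):
    """Hypothetical stand-in for FedOBP's score I_O: a first-order
    Taylor importance |g * theta|, with the server-side model delta
    used as a surrogate for the local gradient g."""
    surrogate_grad = theta_global - theta_local
    return np.abs(surrogate_grad * theta_local)

def quantile_split(theta_local, theta_global, q=0.95):
    """Quantile-based thresholding: the top (1 - q) fraction of
    parameters by importance is personalized (u), the rest shared (v)."""
    score = federated_first_order_importance(theta_local, theta_global)
    tau = np.quantile(score, q)
    personalized = score >= tau   # u_i^t: kept local on client i
    shared = ~personalized        # v_i^t: overwritten by the global model
    return personalized, shared

def client_merge(theta_local, theta_global, shared):
    """Client-side merge: take global values on shared coordinates,
    keep local values on personalized coordinates."""
    merged = theta_local.copy()
    merged[shared] = theta_global[shared]
    return merged

# Toy round: 1,000 parameters, roughly the top 5% personalized
rng = np.random.default_rng(0)
theta_g = rng.normal(size=1000)
theta_i = theta_g + rng.normal(scale=0.1, size=1000)
u, v = quantile_split(theta_i, theta_g, q=0.95)
theta_i = client_merge(theta_i, theta_g, v)
print(f"personalized: {u.sum()} / {u.size} parameters")
```

The quantile q plays the role of FedOBP's thresholding hyperparameter: q = 0.95 personalizes roughly the top 5% of parameters by score.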

Core claim

Extensive experiments demonstrate that FedOBP outperforms state-of-the-art methods across diverse datasets and heterogeneity scenarios, while requiring personalization of only a very small number of personalized parameters.

Load-bearing premise

The federated approximation of the first-order derivative in the Taylor expansion accurately ranks parameters by their sensitivity to local loss landscapes, and the quantile threshold reliably separates globally useful from locally useful parameters without post-hoc tuning.

Figures

Figures reproduced from arXiv: 2604.16574 by Changqiao Xu, Enmao Diao, Fuzhen Zhuang, Gabriel-Miro Muntean, Lujie Zhong, Tian Du, Xingyan Chen.

Figure 1
Figure 1: FedOBP follows the general framework of standard PFL, incorporating the Federated OBP parameter importance score function I_O(·). To accommodate the limited computational resources of edge devices, the importance metric is computed on the server side rather than on the clients. The pseudocode of the main steps is provided in Algorithm 1. 1) Importance Evaluation: Based on the previously uploaded local mod… view at source ↗
Figure 1
Figure 1: Overview of FedOBP. The server computes the Federated OBP parameter importance I_O(θ_i^{t−1}; D) for each selected client based on the uploaded local model and the aggregated global model θ_g^t, and determines the personalized parameter subset u_i^t and the globally shared parameter subset v_i^t using a quantile-based thresholding mechanism. The server then sends v_i^t to client i. Client i merges the global… view at source ↗
Figure 2
Figure 2: Convergence comparison of FedOBP and eleven baseline methods with α = 0.1 on the 4-layer CNN model across eight datasets. [Panels: accuracy (%) vs. epoch for CIFAR10, CIFAR100, EMNIST, …] view at source ↗
Figure 3
Figure 3: Convergence comparison of FedOBP and eleven baseline methods with α = 0.5 on the 4-layer CNN model across eight datasets. view at source ↗
Figure 4
Figure 4: Comparison of three scores (Gradient I_G(·), Fisher I_F(·), and OBP I_O(·)) across six datasets under different quantile settings. view at source ↗
Figure 5
Figure 5: Ablation study of normalization strategies across eight datasets. We compare NoNorm, LayerNorm, and GlobalNorm, where GlobalNorm includes… view at source ↗
Figure 6
Figure 6: Distribution of personalized parameters across layers over FL epochs using the 4-layer CNN model on eight datasets. view at source ↗
Figure 7
Figure 7: Accuracy versus Downlink Overhead Ratio on different datasets. view at source ↗
read the original abstract

Federated Learning (FL) faces challenges from client data heterogeneity and resource-constrained mobile devices, which can degrade model accuracy. Personalized Federated Learning (PFL) addresses this issue by adapting shared global knowledge to local data distributions. A promising approach in PFL is model decoupling, which separates the model into global and personalized parameters, raising the key question of which parameters should be personalized to balance global knowledge sharing and local adaptation. In this paper, we propose a Federated Optimal Brain Personalization (FedOBP) algorithm with a quantile-based thresholding mechanism and introduce an element-wise importance score. This score extends Optimal Brain Damage (OBD) pruning theory by incorporating a federated approximation of the first-order derivative in the Taylor expansion to evaluate the importance of each parameter for personalization. Moreover, we move the metric computation originally performed on clients to the server side, to alleviate the burden on resource-constrained mobile devices. To the best of our knowledge, this is the first work to bridge classical saliency-based pruning theory with federated parameter decoupling, providing a rigorous theoretical justification for selecting personalized parameters based on their sensitivity to local loss landscapes. Extensive experiments demonstrate that FedOBP outperforms state-of-the-art methods across diverse datasets and heterogeneity scenarios, while requiring personalization of only a very small number of personalized parameters.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes FedOBP, a personalized federated learning algorithm that decouples model parameters into global and personalized sets using a quantile-based threshold on an element-wise importance score. This score extends Optimal Brain Damage by replacing the second-order term with a server-side federated approximation of the first-order derivative from the loss Taylor expansion, moving metric computation off clients to reduce edge-device burden. The central claim is that this provides a theoretically justified way to personalize only a very small number of parameters while outperforming state-of-the-art PFL methods across datasets and heterogeneity levels.

Significance. If the first-order approximation reliably ranks parameters by local loss sensitivity, the work would offer a novel and efficient bridge between classical saliency-based pruning and federated parameter decoupling, with practical benefits for resource-constrained clients. The server-side computation shift is a clear engineering strength that could generalize to other PFL methods.

major comments (2)
  1. [importance score definition and Taylor expansion] The element-wise importance score (defined via the federated first-order Taylor approximation) is load-bearing for the decoupling decision and the claim of outperformance with few personalized parameters. Classical OBD saliency uses the second-order Hessian term; the paper's substitution of only the first-order gradient term risks ranking by gradient magnitude rather than by curvature or true local sensitivity. No derivation or external validation is provided showing that this approximation preserves ranking quality under heterogeneity, which directly undermines the 'rigorous theoretical justification' asserted in the abstract; the standard contrast between the two saliencies is sketched just after these comments.
  2. [quantile-based thresholding mechanism] The quantile threshold is a free tunable parameter whose selection determines which parameters are personalized. The reported gains with 'only a very small number' of personalized parameters may depend on post-hoc choice of this threshold; without sensitivity analysis or cross-validation across heterogeneity scenarios, the experimental outperformance claims rest on potentially circular tuning.
minor comments (1)
  1. [abstract and experimental claims] The abstract states 'extensive experiments' but provides no details on baselines, number of runs, error bars, or ablation controls; the full experimental section should include these to allow verification of robustness.
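To anchor the first major comment, here is the textbook contrast between the two saliencies, written from the classical OBD derivation rather than from the paper's own federated variant (which this page does not reproduce):

```latex
\begin{align*}
% Second-order Taylor expansion of the local loss around weights \theta:
\delta L &\approx g^{\top}\delta\theta + \tfrac{1}{2}\,\delta\theta^{\top} H\,\delta\theta,
  \qquad g = \nabla_{\theta}L,\quad H = \nabla_{\theta}^{2}L \\
% Classical OBD: assume convergence (g \approx 0) and diagonal H;
% deleting parameter j (\delta\theta_j = -\theta_j) then costs
s_j^{\mathrm{OBD}} &= \tfrac{1}{2}\,H_{jj}\,\theta_j^{2} \\
% A first-order score keeps only the gradient term instead:
s_j^{\mathrm{1st}} &= \bigl|\,g_j\,\theta_j\,\bigr|
\end{align*}
```

Away from a local minimum, where g is not small (the typical federated situation, since the aggregated global model sits at no single client's optimum), the two scores can rank parameters differently; quantifying that divergence is exactly what the referee asks for.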

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive feedback on the theoretical grounding of the importance score and the experimental robustness of the quantile threshold in FedOBP. We address each major comment point by point below, indicating planned revisions where appropriate.

read point-by-point responses
  1. Referee: [importance score definition and Taylor expansion] The element-wise importance score (defined via the federated first-order Taylor approximation) is load-bearing for the decoupling decision and the claim of outperformance with few personalized parameters. Classical OBD saliency uses the second-order Hessian term; the paper's substitution of only the first-order gradient term risks ranking by gradient magnitude rather than curvature or true local sensitivity. No derivation or external validation is provided showing that this approximation preserves ranking quality under heterogeneity, which directly undermines the 'rigorous theoretical justification' asserted in the abstract.

    Authors: We acknowledge that classical OBD employs the second-order Hessian diagonal for saliency, whereas FedOBP uses a server-side federated first-order approximation derived from the Taylor expansion of the local loss. This substitution is explicitly motivated by the need to shift computation away from resource-constrained clients, as full Hessian evaluation is prohibitive in federated settings. The first-order term still captures directional sensitivity to local loss changes, and the federated aggregation provides a stable estimate across clients. While the current manuscript does not contain an exhaustive derivation proving that the ranking is identical to full OBD under arbitrary heterogeneity, the approximation is justified when higher-order terms are negligible near local minima. We will revise the theoretical section to include a clearer derivation of the approximation error bounds and add new experiments that (i) compare parameter rankings produced by our score against those from a centralized OBD baseline on non-federated proxies and (ii) ablate ranking stability across varying Dirichlet heterogeneity parameters. These additions will strengthen the justification without changing the core algorithm. revision: partial

  2. Referee: [quantile-based thresholding mechanism] The quantile threshold is a free tunable parameter whose selection determines which parameters are personalized. The reported gains with 'only a very small number' of personalized parameters may depend on post-hoc choice of this threshold; without sensitivity analysis or cross-validation across heterogeneity scenarios, the experimental outperformance claims rest on potentially circular tuning.

    Authors: The quantile threshold is indeed a hyperparameter that controls the fraction of parameters marked for personalization. In the reported experiments we selected quantiles yielding a small personalization ratio (typically 1-5% of parameters) while ensuring competitive accuracy; these choices were fixed prior to final testing and applied uniformly across datasets. To directly address the concern of post-hoc tuning, we will add a comprehensive sensitivity study in the revised manuscript. This will include performance curves for quantile values ranging from 0.90 to 0.99 on all evaluated datasets, together with results under multiple heterogeneity regimes (Dirichlet concentration parameters 0.1, 0.5, and 1.0). The new analysis will demonstrate that FedOBP remains superior to baselines over a broad interval of thresholds, thereby removing any appearance of circular selection. revision: yes
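Both responses lean on Dirichlet-controlled label skew. For readers unfamiliar with that knob, below is a minimal, self-contained sketch of the standard Dirichlet partitioner used throughout the PFL literature (a generic recipe; the authors' exact splitter may differ). Smaller α concentrates each class on fewer clients, i.e. more heterogeneity.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Standard Dirichlet label-skew split: for each class, draw per-client
    shares from Dirichlet(alpha) and hand out that class's samples
    accordingly. Smaller alpha => more heterogeneous clients."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, chunk in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(chunk.tolist())
    return [np.array(ix) for ix in client_indices]

# The three regimes named in the response, on a toy 10-class label vector:
labels = np.repeat(np.arange(10), 100)
for alpha in (0.1, 0.5, 1.0):
    sizes = [len(p) for p in dirichlet_partition(labels, 10, alpha)]
    print(f"alpha={alpha}: client sizes {sizes}")
```

Running the loop makes the regimes concrete: at α = 1.0 the client sizes are fairly even, while at α = 0.1 a few clients absorb most of each class.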

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The central claim rests on the validity of extending OBD's Taylor approximation to the federated setting and on the assumption that server-side importance scores transfer to client personalization without loss of fidelity.

free parameters (1)
  • quantile threshold
    Used to select which parameters are personalized; its value determines the small number of personalized parameters and is not derived from first principles.
axioms (1)
  • domain assumption: The first-order Taylor expansion term provides a reliable importance ranking for parameters under federated data heterogeneity.
    Invoked when defining the element-wise importance score from the OBD extension.
invented entities (1)
  • element-wise importance score — no independent evidence
    purpose: To quantify each parameter's sensitivity to local loss for deciding personalization.
    New metric introduced by combining federated derivative approximation with OBD; no independent falsifiable handle provided in abstract.

pith-pipeline@v0.9.0 · 5556 in / 1335 out tokens · 21893 ms · 2026-05-10T08:29:59.778429+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

41 extracted references · 4 canonical work pages · 1 internal anchor

  1. [1]

    Communication-efficient learning of deep networks from decentralized data,

    B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.

  2. [2]

    Scaffold: Stochastic controlled averaging for federated learning,

    S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, "Scaffold: Stochastic controlled averaging for federated learning," in International Conference on Machine Learning. PMLR, 2020, pp. 5132–5143.

  3. [3]

    Feddc: Federated learning with non-iid data via local drift decoupling and correction,

    L. Gao, H. Fu, L. Li, Y. Chen, M. Xu, and C.-Z. Xu, "Feddc: Federated learning with non-iid data via local drift decoupling and correction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 10112–10121.

  4. [4]

    Federated learning method based on contrastive representation knowledge distillation,

    G. Zhou and J. Zheng, "Federated learning method based on contrastive representation knowledge distillation," in 2025 5th International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), 2025, pp. 102–105.

  5. [5]

    Heterogeneous federated learning driven by multi-knowledge distillation,

    B. Xu, L. Cheng, Q. Wen, Z. Zou, X. Hu, Z. Dong, and J. Qi, "Heterogeneous federated learning driven by multi-knowledge distillation," IEEE Transactions on Mobile Computing, vol. 24, no. 12, pp. 13048–13061, 2025.

  6. [6]

    Metafed: Federated learning among federations with cyclic knowledge distillation for personalized healthcare,

    Y. Chen, W. Lu, X. Qin, J. Wang, and X. Xie, "Metafed: Federated learning among federations with cyclic knowledge distillation for personalized healthcare," IEEE Transactions on Neural Networks and Learning Systems, 2023.

  7. [7]

    Fedgkd: Towards heterogeneous federated learning via global knowledge distillation,

    D. Yao, W. Pan, Y. Dai, Y. Wan, X. Ding, C. Yu, H. Jin, Z. Xu, and L. Sun, "Fedgkd: Towards heterogeneous federated learning via global knowledge distillation," IEEE Transactions on Computers, 2023.

  8. [8]

    Fedcache: A knowledge cache-driven federated learning architecture for personalized edge intelligence,

    Z. Wu, S. Sun, Y. Wang, M. Liu, K. Xu, W. Wang, X. Jiang, B. Gao, and J. Lu, "Fedcache: A knowledge cache-driven federated learning architecture for personalized edge intelligence," IEEE Transactions on Mobile Computing, vol. 23, no. 10, pp. 9368–9382, 2024.

  9. [9]

    Tackling spatial-temporal heterogeneous federated learning with orthogonal regularization,

    C. Wu, H. Wang, X. Zhang, H. Chen, J. Bu, and J. Liu, "Tackling spatial-temporal heterogeneous federated learning with orthogonal regularization," IEEE Transactions on Mobile Computing, pp. 1–17, 2026.

  10. [10]

    Personalized federated learning via gradient-fusion and gradient-decoupling for heterogeneous data,

    Z. He, Y. Li, and Z. Cai, "Personalized federated learning via gradient-fusion and gradient-decoupling for heterogeneous data," IEEE Transactions on Mobile Computing, vol. 25, no. 3, pp. 2956–2972, 2026.

  11. [11]

    Ditto: Fair and robust federated learning through personalization,

    T. Li, S. Hu, A. Beirami, and V. Smith, "Ditto: Fair and robust federated learning through personalization," in International Conference on Machine Learning. PMLR, 2021, pp. 6357–6368.

  12. [12]

    Personalized federated learning with first order model optimization,

    M. Zhang, K. Sapra, S. Fidler, S. Yeung, and J. M. Alvarez, "Personalized federated learning with first order model optimization," in International Conference on Learning Representations, 2021.

  13. [13]

    On bridging generic and personalized federated learning for image classification,

    H.-Y. Chen and W.-L. Chao, "On bridging generic and personalized federated learning for image classification," in International Conference on Learning Representations, 2022.

  14. [14]

    Reads: A personalized federated learning framework with fine-grained layer aggregation and decentralized clustering,

    H. Fu, F. Tian, G. Deng, L. Liang, and X. Zhang, "Reads: A personalized federated learning framework with fine-grained layer aggregation and decentralized clustering," IEEE Transactions on Mobile Computing, vol. 24, no. 8, pp. 7709–7725, 2025.

  15. [15]

    Classter: Mobile shift-robust personalized federated learning via class-wise clustering,

    X. Li, S. Liu, Z. Zhou, Y. Xu, B. Guo, and Z. Yu, "Classter: Mobile shift-robust personalized federated learning via class-wise clustering," IEEE Transactions on Mobile Computing, vol. 24, no. 3, pp. 2014–2028, 2025.

  16. [16]

    Personalized federated learning on non-iid data via group-based meta-learning,

    L. Yang, J. Huang, W. Lin, and J. Cao, "Personalized federated learning on non-iid data via group-based meta-learning," ACM Transactions on Knowledge Discovery from Data, vol. 17, no. 4, pp. 1–20, 2023.

  17. [17]

    Cpper-fl: Clustered parallel training for efficient personalized federated learning,

    R. Zhang, F. Liu, J. Liu, M. Chen, Q. Tang, T. Huang, and F. R. Yu, "Cpper-fl: Clustered parallel training for efficient personalized federated learning," IEEE Transactions on Mobile Computing, 2024.

  18. [18]

    Federated learning with personalization layers,

    M. G. Arivazhagan, V. Aggarwal, A. K. Singh, and S. Choudhary, "Federated learning with personalization layers," arXiv preprint arXiv:1912.00818, 2019.

  19. [19]

    Adaptive personalized federated learning,

    Y. Deng, M. M. Kamani, and M. Mahdavi, "Adaptive personalized federated learning," arXiv preprint arXiv:2003.13461, 2020.

  20. [20]

    Think locally, act globally: Federated learning with local and global representations,

    P. P. Liang, T. Liu, L. Ziyin, N. B. Allen, R. P. Auerbach, D. Brent, R. Salakhutdinov, and L.-P. Morency, "Think locally, act globally: Federated learning with local and global representations," arXiv preprint arXiv:2001.01523, 2020.

  21. [21]

    Exploiting shared representations for personalized federated learning,

    L. Collins, H. Hassani, A. Mokhtari, and S. Shakkottai, "Exploiting shared representations for personalized federated learning," in International Conference on Machine Learning. PMLR, 2021, pp. 2089–2099.

  22. [22]

    Fedbabu: Toward enhanced representation for federated image classification,

    J. Oh, S. Kim, and S.-Y. Yun, "Fedbabu: Toward enhanced representation for federated image classification," in International Conference on Learning Representations, 2022.

  23. [23]

    Personalized federated learning with feature alignment and classifier collaboration,

    J. Xu, X. Tong, and S.-L. Huang, "Personalized federated learning with feature alignment and classifier collaboration," in The Eleventh International Conference on Learning Representations, 2023.

  24. [24]

    pFedSim: Similarity-aware model aggregation towards personalized federated learning,

    J. Tan, Y. Zhou, G. Liu, J. H. Wang, and S. Yu, "pFedSim: Similarity-aware model aggregation towards personalized federated learning," arXiv preprint arXiv:2305.15706, 2023.

  25. [25]

    Towards optimal customized architecture for heterogeneous federated learning with contrastive cloud-edge model decoupling,

    C. Xingyan, D. Tian, W. Mu, G. Tiancheng, Z. Yu, K. Gang, X. Changqiao, and W. D. Oliver, "Towards optimal customized architecture for heterogeneous federated learning with contrastive cloud-edge model decoupling," IEEE Transactions on Computers, 2024.

  26. [26]

    Personalized federated learning via feature distribution adaptation,

    C. Mclaughlin and L. Su, "Personalized federated learning via feature distribution adaptation," in The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

  27. [27]

    Dynamic personalized federated learning with adaptive differential privacy,

    X. Yang, W. Huang, and M. Ye, "Dynamic personalized federated learning with adaptive differential privacy," Advances in Neural Information Processing Systems, vol. 36, pp. 72181–72192, 2023.

  28. [28]

    Accelerating federated learning via parameter selection and pre-synchronization in mobile edge-cloud networks,

    H. Zhou, M. Li, P. Sun, B. Guo, and Z. Yu, "Accelerating federated learning via parameter selection and pre-synchronization in mobile edge-cloud networks," IEEE Transactions on Mobile Computing, vol. 23, no. 11, pp. 10313–10328, 2024.

  29. [29]

    Fedselect: Personalized federated learning with customized selection of parameters for fine-tuning,

    R. Tamirisa, C. Xie, W. Bao, A. Zhou, R. Arel, and A. Shamsian, "Fedselect: Personalized federated learning with customized selection of parameters for fine-tuning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 23985–23994.

  30. [30]

    Optimal brain damage,

    Y. LeCun, J. Denker, and S. Solla, "Optimal brain damage," Advances in Neural Information Processing Systems, vol. 2, 1989.

  31. [31]

    Federated learning over connected modes,

    D. Grinwald, P. Wiesner, and S. Nakajima, "Federated learning over connected modes," in The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

  32. [32]

    Federated representation learning in the under-parameterized regime,

    R. Liu, C. Shen, and J. Yang, "Federated representation learning in the under-parameterized regime," in Forty-first International Conference on Machine Learning, 2024.

  33. [33]

    Seqfededt: Accelerating sequential federated learning on non-iid data via element-wise decoupled training,

    T. Du, X. Chen, M. Wang, Y. Liu, S. Yao, G. Kou, F. Zhuang, C. Xu, and G.-M. Muntean, "Seqfededt: Accelerating sequential federated learning on non-iid data via element-wise decoupled training," IEEE Transactions on Mobile Computing, pp. 1–16, 2025.

  34. [34]

    Second order derivatives for network pruning: Optimal brain surgeon,

    B. Hassibi and D. Stork, "Second order derivatives for network pruning: Optimal brain surgeon," Advances in Neural Information Processing Systems, vol. 5, 1992.

  35. [35]

    Llm-pruner: On the structural pruning of large language models,

    X. Ma, G. Fang, and X. Wang, "Llm-pruner: On the structural pruning of large language models," Advances in Neural Information Processing Systems, vol. 36, pp. 21702–21720, 2023.

  36. [36]

    Loraprune: Pruning meets low-rank parameter-efficient fine-tuning,

    M. Zhang, H. Chen, C. Shen, Z. Yang, L. Ou, X. Yu, and B. Zhuang, "Loraprune: Pruning meets low-rank parameter-efficient fine-tuning," 2023.

  37. [37]

    Distilling the knowledge in a neural network,

    G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," STAT, vol. 1050, p. 9, 2015.

  38. [38]

    Importance estimation for neural network pruning,

    P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and J. Kautz, "Importance estimation for neural network pruning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11264–11272.

  39. [39]

    Adaptive federated optimization,

    S. J. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konečný, S. Kumar, and H. B. McMahan, "Adaptive federated optimization," in International Conference on Learning Representations, 2021.

  40. [40]

    Fedala: Adaptive local aggregation for personalized federated learning,

    J. Zhang, Y. Hua, H. Wang, T. Song, Z. Xue, R. Ma, and H. Guan, "Fedala: Adaptive local aggregation for personalized federated learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 9, 2023, pp. 11237–11244.

    He is currently an Associate Professor with the School of Intelligent Engineering and Automation, BUPT. He has published in journals and confer- ences including IEEE TRANSACTIONS ONMOBILE COMPUTINGand IEEE INFOCOM. His research interests include federated learning, multi-agent re- inforcement learning, and stochastic optimization. Tian Duis currently purs...