pith. machine review for the scientific record.

arxiv: 1901.11173 · v1 · submitted 2019-01-31 · 💻 cs.LG · stat.ML

Recognition: unknown

Peer-to-peer Federated Learning on Graphs

Authors on Pith: no claims yet
classification 💻 cs.LG stat.ML
keywords: learning, model, nodes, algorithm, network, training, belief, learn
original abstract

We consider the problem of training a machine learning model over a network of nodes in a fully decentralized framework. The nodes take a Bayesian-like approach via the introduction of a belief over the model parameter space. We propose a distributed learning algorithm in which nodes update their beliefs by aggregating information from their one-hop neighbors to learn a model that best fits the observations over the entire network. In addition, we obtain sufficient conditions to ensure that the probability of error is small for every node in the network. We discuss the approximations required to apply this algorithm to the training of Deep Neural Networks (DNNs). Experiments on training a linear regression model and on training a DNN show that the proposed learning algorithm provides a significant improvement in accuracy compared to the case where nodes learn without cooperation.
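The belief-aggregation scheme the abstract describes can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's: a finite grid of candidate parameters, Gaussian observation noise, a ring graph, and a log-linear (geometric) averaging of one-hop neighbor beliefs after each local Bayesian update. The paper's exact rule, graph, and likelihood model may differ.

```python
import numpy as np

# Illustrative sketch of decentralized belief aggregation on a graph.
# Each node keeps a belief (probability vector) over a finite grid of
# candidate parameters, updates it with the likelihood of its local
# observation, then log-linearly averages its one-hop neighbors' beliefs.
# (Assumed setup; not the paper's exact rule.)

rng = np.random.default_rng(0)

theta_grid = np.linspace(-2.0, 2.0, 41)   # candidate parameter values
true_theta = 1.0
n_nodes = 4
# ring graph: each node's one-hop neighborhood (including itself)
neighbors = [[i, (i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)]
weights = np.full(3, 1.0 / 3.0)           # uniform aggregation weights

beliefs = np.full((n_nodes, theta_grid.size), 1.0 / theta_grid.size)

def likelihood(obs, grid, sigma=0.5):
    """Gaussian likelihood of one observation under each candidate theta."""
    return np.exp(-0.5 * ((obs - grid) / sigma) ** 2)

for step in range(200):
    # 1) local Bayesian update at every node
    obs = true_theta + 0.5 * rng.standard_normal(n_nodes)
    updated = beliefs * likelihood(obs[:, None], theta_grid[None, :])
    updated /= updated.sum(axis=1, keepdims=True)
    # 2) log-linear (geometric) aggregation over one-hop neighbors
    log_b = np.log(updated + 1e-300)
    new_log = np.stack([weights @ log_b[nbrs] for nbrs in neighbors])
    beliefs = np.exp(new_log - new_log.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

# every node's belief should concentrate near the true parameter
estimates = theta_grid[beliefs.argmax(axis=1)]
print(estimates)
```

The geometric averaging step is the standard choice in non-Bayesian social learning because it keeps the aggregated belief a proper (log-concave-preserving) probability vector and admits the kind of per-node error-probability bounds the abstract mentions.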

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Function-Space ADMM for Decentralized Federated Learning: A Control Theoretic Perspective

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    FedF-ADMM uses function-space ADMM updates projected via knowledge distillation plus a PI-like stabilization term to deliver faster, more stable convergence and higher accuracy than prior decentralized FL methods unde...