pith. machine review for the scientific record.


Federated Learning: Strategies for Improving Communication Efficiency

27 Pith papers cite this work. Polarity classification is still indexing.

abstract

Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
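As a concrete illustration of the two update types described in the abstract, here is a minimal NumPy sketch. The matrix shapes, rank, sampling rates, and the 1-bit stochastic quantizer are illustrative assumptions, not the paper's exact configuration, and the random-rotation preprocessing of sketched updates is omitted for brevity.

```python
# Minimal sketch of structured vs. sketched updates (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 64, 256                        # weight matrix shape (hypothetical)
true_update = rng.standard_normal((d_out, d_in)) * 0.01

# --- Structured update, low-rank variant --------------------------------
# The client learns H = A @ B directly, with A (d_out x k) and B (k x d_in),
# and uploads only A and B instead of the full d_out x d_in matrix.
k = 4
A = rng.standard_normal((d_out, k)) * 0.1    # would be optimized locally
B = rng.standard_normal((k, d_in)) * 0.1
low_rank_update = A @ B
uplink_low_rank = A.size + B.size

# --- Structured update, random-mask variant -----------------------------
# The client learns only the entries selected by a shared random mask and
# uploads those values; a shared seed lets the server rebuild the mask.
mask = rng.random((d_out, d_in)) < 0.05      # keep ~5% of entries
masked_values = true_update[mask]

# --- Sketched update: subsample, then 1-bit stochastic quantization -----
# Compress a fully learned update: drop most coordinates, then round each
# survivor to {min, max} with probabilities that keep the estimate unbiased.
flat = true_update.ravel()
keep = rng.random(flat.size) < 0.25          # subsample ~25% of coordinates
sub = flat[keep]
lo, hi = sub.min(), sub.max()
p = (sub - lo) / (hi - lo)                   # probability of rounding up
bits = rng.random(sub.size) < p              # one bit per surviving entry
dequant = np.where(bits, hi, lo)             # E[dequant] == sub
estimate = np.zeros_like(flat)
estimate[keep] = dequant / 0.25              # rescale so the mean is unbiased

print(f"full update floats:      {flat.size}")
print(f"low-rank uplink floats:  {uplink_low_rank}")
print(f"sketched uplink:         {sub.size} bits + 2 floats (min/max)")
```

The final prints compare uplink sizes: both variants send a small fraction of the d_out * d_in floats a full update would require, which is the source of the communication savings the abstract reports.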

hub tools

years

2026 (26) · 2019 (1)

verdicts

unverdicted (27)


representative citing papers

Scaling Federated Linear Contextual Bandits via Sketching

cs.LG · 2026-05-01 · unverdicted · novelty 7.0

FSCLB scales federated linear contextual bandits with sketching to achieve over 90% lower computation and communication costs while preserving a near-optimal regret bound of O(sqrt(l d T)).
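A generic illustration of why sketching cuts communication in federated linear bandits (this is not FSCLB's actual algorithm): with a shared random projection S, clients can upload ridge-regression statistics of size m^2 + m instead of d^2 + d. All dimensions below are illustrative assumptions.

```python
# Sketched ridge-regression statistics for a linear bandit (generic sketch).
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 200, 20, 500                         # ambient dim, sketch dim, samples
S = rng.standard_normal((m, d)) / np.sqrt(m)   # sketching matrix shared by all

theta_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))                # observed contexts
r = X @ theta_true + 0.1 * rng.standard_normal(n)   # noisy rewards

# Client side: accumulate the statistics in the sketched space.
Z = X @ S.T                                    # sketched contexts, n x m
V = Z.T @ Z + np.eye(m)                        # m x m uploaded instead of d x d
b = Z.T @ r                                    # m floats instead of d

# Server side: solve in the sketched space; score new contexts via S.
theta_s = np.linalg.solve(V, b)
x_new = rng.standard_normal(d)
estimate = (S @ x_new) @ theta_s               # predicted reward
print("uplink floats:", V.size + b.size, "vs full:", d * d + d)
```

With these (assumed) dimensions the uplink shrinks from 40,200 floats to 420, consistent in spirit with the >90% savings the summary claims.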

Scalar Federated Learning for Linear Quadratic Regulator

eess.SY · 2026-04-06 · unverdicted · novelty 7.0

A scalar-projection federated zeroth-order method for model-free LQR policy learning that reduces per-agent communication from O(d) to O(1), with a convergence rate that improves with the number of agents.
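A generic illustration (not the cited paper's algorithm) of how zeroth-order methods make O(1) per-agent uplink possible: if the server and each agent share a random seed, the server can regenerate the perturbation direction locally, so the agent only uploads one scalar per query. The cost function and constants below are hypothetical.

```python
# O(1)-communication zeroth-order gradient estimation via shared seeds.
import numpy as np

d = 1000                       # parameter dimension (illustrative)
mu = 1e-3                      # smoothing radius (illustrative)

def local_cost(theta):         # hypothetical per-agent cost
    return 0.5 * np.sum(theta ** 2)

def agent_scalar(theta, seed):
    """Agent side: two-point finite difference; uploads ONE float."""
    u = np.random.default_rng(seed).standard_normal(d)
    return (local_cost(theta + mu * u) - local_cost(theta - mu * u)) / (2 * mu)

def server_gradient(scalars, seeds):
    """Server side: rebuild each direction from its seed, average estimates."""
    g = np.zeros(d)
    for s, seed in zip(scalars, seeds):
        u = np.random.default_rng(seed).standard_normal(d)
        g += s * u             # scalar * direction = one gradient estimate
    return g / len(scalars)

theta = np.ones(d)
seeds = [1, 2, 3, 4]                                 # one seed per agent
scalars = [agent_scalar(theta, s) for s in seeds]    # 4 floats on the uplink
theta -= 0.1 * server_gradient(scalars, seeds)       # gradient step at server
```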

Communication-Efficient Gluon in Federated Learning

cs.LG · 2026-04-12 · unverdicted · novelty 5.0

Compressed Gluon variants using unbiased/contraction compressors and SARAH-style variance reduction achieve convergence guarantees and lower communication costs in federated learning under layer-wise smoothness.
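A generic sketch of the two named ingredients, an unbiased compressor and SARAH-style variance reduction, applied to a toy quadratic; it is not the paper's Gluon algorithm, and all constants are illustrative assumptions.

```python
# Unbiased rand-k compression of SARAH-style gradient differences (generic).
import numpy as np

rng = np.random.default_rng(0)
d, p, lr = 50, 0.2, 0.05

A = rng.standard_normal((d, d)) / np.sqrt(d)
def grad(x):                       # gradient of f(x) = 0.5 * ||A x||^2
    return A.T @ (A @ x)

def rand_k(v, keep=p):
    """Unbiased sparsifier: keep each coordinate w.p. `keep`, rescale by 1/keep."""
    mask = rng.random(v.size) < keep
    out = np.zeros_like(v)
    out[mask] = v[mask] / keep
    return out

x = rng.standard_normal(d)
v = grad(x)                        # full gradient at the anchor point
for t in range(200):
    x_new = x - lr * v
    # SARAH-style recursion: only the (compressed) gradient DIFFERENCE
    # travels over the uplink; the server folds it into the running estimate.
    v = v + rand_k(grad(x_new) - grad(x))
    x = x_new
    if (t + 1) % 20 == 0:
        v = grad(x)                # periodic full refresh, as in SARAH epochs

print("final ||grad||:", np.linalg.norm(grad(x)))
```

Compressing the difference rather than the gradient itself is what keeps the uplink cheap while the recursion controls the variance the compressor injects.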
