pith. machine review for the scientific record.

arxiv: 1610.02527 · v1 · submitted 2016-10-08 · 💻 cs.LG

Recognition: unknown

Federated Optimization: Distributed Machine Learning for On-Device Intelligence

Authors on Pith: no claims yet
classification 💻 cs.LG
keywords: data, number, optimization, devices, federated, setting, distributed, users
0 comments
read the original abstract

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.
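The round structure the abstract describes (devices compute on local data, a server aggregates into a global model) can be sketched minimally. Everything below is an illustrative assumption, not the paper's algorithm: a scalar least-squares model, the function names `local_update` and `federated_round`, and FedAvg-style averaging stand in for the general sparse convex setting the paper actually studies.

```python
import random

def local_update(w, data, lr=0.1, epochs=1):
    """One device's local gradient steps on its own data.

    Illustrative scalar model w * x ~ y; the paper's setting is
    general convex problems, not this toy objective.
    """
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def federated_round(global_w, device_datasets, fraction=0.5):
    """One communication round: a subset of devices trains locally,
    the server averages the resulting models (FedAvg-style)."""
    k = max(1, int(fraction * len(device_datasets)))
    selected = random.sample(device_datasets, k)
    local_models = [local_update(global_w, d) for d in selected]
    return sum(local_models) / len(local_models)

# Tiny synthetic example: each "device" holds a few points of y = 3x,
# unevenly and non-identically distributed, as in the abstract.
random.seed(0)
devices = [[(x, 3.0 * x)] * n for x, n in [(0.5, 3), (1.0, 1), (2.0, 2)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
# w now approximates the shared slope 3.0
```

Note that each round costs one uplink per selected device, which is why the abstract makes minimizing communication rounds the principal goal.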

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 11 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Federated Learning: Strategies for Improving Communication Efficiency

    cs.LG 2016-10 conditional novelty 8.0

    Structured updates (low-rank or masked) and sketched updates (quantized, rotated, subsampled) reduce uplink communication in federated learning by up to two orders of magnitude on convolutional and recurrent networks.

  2. Byzantine-Robust Distributed SGD: A Unified Analysis and Tight Error Bounds

    math.OC 2026-04 unverdicted novelty 7.0

    Unified convergence rates and tight lower bounds for Byzantine-robust distributed SGD under stochasticity and general data heterogeneity, showing local momentum reduces stochastic error floors.

  3. XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers

    cs.CR 2026-04 unverdicted novelty 7.0

    XFED is the first aggregation-agnostic non-collusive model poisoning attack that bypasses eight state-of-the-art defenses on six benchmark datasets without attacker coordination.

  4. Rescaled Asynchronous SGD: Optimal Distributed Optimization under Data and System Heterogeneity

    cs.LG 2026-05 unverdicted novelty 6.0

    Rescaled ASGD recovers convergence to the true global objective by rescaling worker stepsizes proportional to computation times, matching the known time lower bound in the leading term under non-convex smoothness and ...

  5. Response Time Enhances Alignment with Heterogeneous Preferences

    cs.LG 2026-05 unverdicted novelty 6.0

    Response times modeled as drift-diffusion processes enable consistent estimation of population-average preferences from heterogeneous anonymous binary choices.

  6. Who Trains Matters: Federated Learning under Enrollment and Participation Selection Biases

    cs.LG 2026-04 unverdicted novelty 6.0

    A two-stage selection model for federated learning permits inverse probability weighting to recover the target-population mean update under ignorability and positivity.

  7. Rennala MVR: Improved Time Complexity for Parallel Stochastic Optimization via Momentum-Based Variance Reduction

    math.OC 2026-05 unverdicted novelty 5.0

    Rennala MVR improves time complexity over Rennala SGD for smooth nonconvex stochastic optimization in heterogeneous parallel systems under a mean-squared smoothness assumption.

  8. Evaluating Federated Learning approaches for mammography under breast density heterogeneity

    cs.LG 2026-05 unverdicted novelty 4.0

    FedAvg matches centralized training accuracy on mammography data split by breast density heterogeneity, showing standard FL can handle this clinical variation without special fixes.

  9. AICCE: AI Driven Compliance Checker Engine

    cs.CR 2026-04 unverdicted novelty 4.0

    AICCE combines RAG-based retrieval of protocol specs with dual LLM pipelines for debate-driven explanations or fast script execution, reporting up to 99% accuracy on IPv6 samples.

  10. Split and Aggregation Learning for Foundation Models Over Mobile Embodied AI Network (MEAN): A Comprehensive Survey

    cs.IT 2026-05 unverdicted novelty 3.0

    The paper surveys split and aggregation learning for foundation models in 6G networks to improve efficiency, resource use, and data privacy in distributed AI.

  11. A Survey on AI for 6G: Challenges and Opportunities

    cs.NI 2026-03 accept novelty 1.0

    AI techniques including deep learning, reinforcement learning, and federated learning are positioned to enable high data rates, low latency, and massive connectivity in 6G networks while addressing scalability, securi...
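The communication-efficiency theme running through the citations above (notably the structured and sketched updates of the first cited work) can be illustrated with a minimal subsample-and-quantize compressor. The function name, parameters, and the deterministic sign quantizer below are assumptions for illustration; the cited paper uses probabilistic quantization and random rotations, which are omitted here.

```python
import random

def sketch_update(update, sample_frac=0.25, seed=None):
    """Compress a model update for uplink: keep a random fraction of
    coordinates, quantize each kept value to its sign times the max
    magnitude among kept values, and rescale by 1/sample_frac so the
    subsampling is unbiased in expectation.

    Deterministic sign quantization is a simplification; unbiased
    schemes quantize probabilistically.
    """
    rng = random.Random(seed)
    n = len(update)
    kept = rng.sample(range(n), max(1, int(sample_frac * n)))
    scale = max(abs(update[i]) for i in kept)
    sketched = [0.0] * n
    for i in kept:
        sketched[i] = scale if update[i] >= 0 else -scale
    return [v / sample_frac for v in sketched]
```

Only the kept indices, their signs, and the single scale need to be transmitted, which is how such sketches can cut uplink traffic by orders of magnitude.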