Beyond Parameter Aggregation: Semantic Consensus for Federated Fine-Tuning of LLMs
Semantic consensus on model outputs for public prompts enables federated LLM fine-tuning that matches parameter-aggregation baselines with orders-of-magnitude lower communication.
International Conference on Learning Representations. 10 papers (2026) cite this work; representative citing papers are listed below, and polarity classification of the citations is still indexing.
citing papers explorer
-
Beyond Parameter Aggregation: Semantic Consensus for Federated Fine-Tuning of LLMs
Semantic consensus on model outputs for public prompts enables federated LLM fine-tuning that matches parameter-aggregation baselines with orders-of-magnitude lower communication.
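A minimal sketch of what one output-space consensus round could look like, assuming clients exchange logits on a shared public prompt set and distill toward their average; the client attributes (`model`, `optimizer`) and the KL-distillation objective are illustrative assumptions, not the paper's protocol.

```python
import torch
import torch.nn.functional as F

def consensus_round(clients, public_prompts, kd_weight=1.0):
    """One hypothetical round of output-space consensus (illustrative sketch)."""
    with torch.no_grad():
        # Each client's logits on the shared public prompts: (num_prompts, vocab)
        all_logits = torch.stack([c.model(public_prompts) for c in clients])
        consensus = all_logits.mean(dim=0)          # server-side consensus target

    for c in clients:
        c.optimizer.zero_grad()
        local_logits = c.model(public_prompts)
        # Distill each client toward the consensus distribution
        loss = kd_weight * F.kl_div(
            F.log_softmax(local_logits, dim=-1),
            F.softmax(consensus, dim=-1),
            reduction="batchmean",
        )
        loss.backward()
        c.optimizer.step()
```

Under this reading, only per-prompt logits on a small public set ever leave a client, which is where the claimed communication savings over parameter aggregation would come from.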
-
Rescaled Asynchronous SGD: Optimal Distributed Optimization under Data and System Heterogeneity
Rescaled ASGD recovers convergence to the true global objective by rescaling worker stepsizes proportional to computation times, matching the known time lower bound in the leading term under non-convex smoothness and bounded heterogeneity.
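Read literally, the fix is a stepsize proportional to each worker's computation time. The event-driven simulation below only illustrates that rescaling; the `Worker` interface (`tau`, `stochastic_grad`) and the loop itself are assumptions, not the paper's algorithm statement.

```python
import numpy as np

def rescaled_asgd(workers, x0, base_lr, T):
    """Illustrative simulation of Rescaled ASGD: fast workers apply more updates
    per unit time, so scaling each worker's stepsize by its computation time
    keeps every worker's expected contribution to the objective equal."""
    x = x0.copy()
    snapshots = [x.copy() for _ in workers]                 # iterate each worker started from
    next_done = np.array([w.tau for w in workers], dtype=float)
    t = 0.0
    while t < T:
        i = int(np.argmin(next_done))                       # worker finishing next
        t = next_done[i]
        g = workers[i].stochastic_grad(snapshots[i])        # stale gradient on local data
        x -= base_lr * workers[i].tau * g                   # stepsize rescaled by compute time
        snapshots[i] = x.copy()                             # worker restarts from current model
        next_done[i] = t + workers[i].tau
    return x
```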
-
FedVSSAM: Mitigating Flatness Incompatibility in Sharpness-Aware Federated Learning
FedVSSAM mitigates flatness incompatibility in SAM-based federated learning by consistently using a variance-suppressed adjusted direction for local perturbation, descent, and global updates, with non-convex convergence guarantees.
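A sketch of what "one adjusted direction for perturbation and descent" could mean in a local step. The variance suppression is modeled here as an exponential average of SAM gradients, which is an assumption about its form, not the paper's exact rule; the server would aggregate the same directions for the global update.

```python
import torch

@torch.no_grad()
def _add_along(params, direction, scale):
    # Move parameters along `direction`, normalized to unit norm, by `scale`.
    norm = float(torch.sqrt(sum((d ** 2).sum() for d in direction))) + 1e-12
    for p, d in zip(params, direction):
        p.add_(d, alpha=scale / norm)

def vssam_local_step(model, loss_fn, batch, adjusted_dir, rho=0.05, lr=0.01, beta=0.9):
    """Hypothetical FedVSSAM-style local step: perturb and descend along the
    same variance-suppressed direction instead of the noisy mini-batch gradient."""
    params = [p for p in model.parameters() if p.requires_grad]

    _add_along(params, adjusted_dir, rho)            # ascend along the shared direction
    model.zero_grad()
    loss_fn(model, batch).backward()                 # gradient at the perturbed point
    grads = [p.grad.detach().clone() for p in params]
    _add_along(params, adjusted_dir, -rho)           # undo the perturbation

    with torch.no_grad():
        for d, g in zip(adjusted_dir, grads):
            d.mul_(beta).add_(g, alpha=1 - beta)     # variance-suppressed direction update
        for p, d in zip(params, adjusted_dir):
            p.add_(d, alpha=-lr)                     # descend along the same direction
    return adjusted_dir
```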
-
Beyond End-to-End: Dynamic Chain Optimization for Private LLM Adaptation on the Edge
ChainFed achieves memory-efficient private LLM fine-tuning on edge devices through sequential layer-by-layer adapter training with dynamic co-tuning, perceptive optimization, and adaptive starting point selection, improving accuracy by up to 46.46%.
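The memory saving comes from never holding more than one layer's adapter in the optimizer at a time. The sketch below shows only that sequential loop, assuming `adapters[i]` is already wired into the model at layer i; the dynamic co-tuning, perceptive optimization, and adaptive starting-point components are reduced to a plain `start_layer` argument.

```python
import torch

def chain_finetune(model, adapters, dataloader, loss_fn,
                   start_layer=0, steps_per_layer=100, lr=1e-4):
    """Illustrative sequential layer-by-layer adapter training (not ChainFed's code)."""
    for p in model.parameters():
        p.requires_grad_(False)                     # frozen backbone throughout

    for layer_idx in range(start_layer, len(adapters)):
        adapter = adapters[layer_idx]
        for p in adapter.parameters():
            p.requires_grad_(True)
        opt = torch.optim.AdamW(adapter.parameters(), lr=lr)

        for step, (inputs, targets) in zip(range(steps_per_layer), dataloader):
            loss = loss_fn(model(inputs), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()

        for p in adapter.parameters():              # freeze again before moving on
            p.requires_grad_(False)
        del opt                                     # drop optimizer state to free memory
```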
-
FedGMI: Generative Model-Driven Federated Learning for Probabilistic Mixture Inference
FedGMI applies VAEs as density estimators in federated learning to infer mixture proportions of shared distributions for structured personalization under data heterogeneity.
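One way to read the mixture-inference step: with K shared distributions, each approximated by a federated VAE, a client can estimate its own mixture proportions from per-sample approximate log-densities (e.g., ELBOs) via EM. The sketch below is that generic EM step, not the paper's estimator.

```python
import numpy as np

def infer_mixture_proportions(component_log_densities, n_iters=100):
    """EM over mixture weights with the component densities held fixed.

    `component_log_densities` is an (N, K) array of approximate log-densities
    of a client's N samples under each of K shared distributions."""
    N, K = component_log_densities.shape
    log_pi = np.full(K, -np.log(K))                      # start from uniform proportions
    for _ in range(n_iters):
        # E-step: posterior responsibility of component k for each sample
        log_resp = component_log_densities + log_pi
        log_resp -= np.logaddexp.reduce(log_resp, axis=1, keepdims=True)
        # M-step: proportions are the average responsibilities
        pi = np.exp(log_resp).mean(axis=0)
        log_pi = np.log(pi + 1e-12)
    return np.exp(log_pi)
```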
-
FedPLT: Scalable, Resource-Efficient, and Heterogeneity-Aware Federated Learning via Partial Layer Training
FedPLT assigns client-specific model layers for training and matches or beats full-model federated learning accuracy with 71-82 percent fewer trainable parameters per client.
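Partial-layer training mostly changes the aggregation rule: each parameter tensor is averaged only over the clients that were assigned to train it. A minimal sketch of that rule, assuming clients upload plain state-dict fragments; the layer-assignment policy itself is not shown.

```python
import torch

def aggregate_partial_layers(global_state, client_updates):
    """Per-layer aggregation for partial-layer training (illustrative, not FedPLT's code).

    `client_updates` is a list of partial state dicts, each containing only
    the layers that client actually trained; untouched layers keep their
    current global values."""
    new_state = {name: value.clone() for name, value in global_state.items()}
    for name in global_state:
        contribs = [u[name] for u in client_updates if name in u]
        if contribs:
            new_state[name] = torch.stack(contribs).mean(dim=0)
    return new_state
```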
-
AdaBFL: Multi-Layer Defensive Adaptive Aggregation for Byzantine-Robust Federated Learning
AdaBFL introduces a novel three-layer adaptive aggregation mechanism for Byzantine-robust federated learning that counters complex attacks, provides non-convex non-iid convergence guarantees, and shows superior performance in experiments.
-
SplitFT: An Adaptive Federated Split Learning System For LLMs Fine-Tuning
SplitFT adapts cut-layer selection and reduces LoRA rank per client in federated split learning to improve efficiency and performance when fine-tuning LLMs on heterogeneous devices and data.
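A rough illustration of the per-client configuration decision, assuming the client-side cost of each layer and a device memory budget are known. The selection rules here (deepest cut that fits, rank scaled by local data size) are placeholder heuristics, not SplitFT's actual policy.

```python
def configure_client(layer_costs, device_memory, data_size,
                     base_rank=16, min_rank=2):
    """Illustrative per-client cut-layer and LoRA-rank selection (not the paper's policy)."""
    # Cut layer: deepest prefix of the model whose cumulative cost fits on-device.
    cut, used = 0, 0.0
    for i, cost in enumerate(layer_costs):
        if used + cost > device_memory:
            break
        used += cost
        cut = i + 1

    # LoRA rank: shrink for clients with little local data (purely illustrative rule).
    rank = max(min_rank, int(base_rank * min(1.0, data_size / 10_000)))
    return cut, rank
```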
-
ISAC for AI: A Trade-off Framework Across Data Acquisition and Transfer in Federated Learning
A closed-form FL convergence upper bound incorporating sensing SNR, dataset size, and transmission reliability enables joint optimization of sensing power, snapshots, and communication power in ISAC systems.
-
DP-FLogTinyLLM: Differentially private federated log anomaly detection using Tiny LLMs
DP-FLogTinyLLM combines federated learning, differential privacy, and LoRA-tuned tiny LLMs to match centralized log anomaly detection performance on Thunderbird and BGL datasets while preserving privacy.
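The privacy mechanism plausibly applies to the uploaded LoRA update rather than the full model. The sketch below shows a generic clip-and-noise (Gaussian mechanism) step of that kind; the noise scale and privacy accounting are illustrative, not the paper's exact recipe.

```python
import torch

def privatize_lora_update(lora_delta, clip_norm=1.0, noise_multiplier=1.0):
    """Clip a client's LoRA update to a fixed L2 norm and add calibrated Gaussian
    noise before upload, so the server only sees a differentially private,
    low-rank update (illustrative sketch)."""
    flat = torch.cat([d.flatten() for d in lora_delta.values()])
    scale = min(1.0, clip_norm / (float(flat.norm()) + 1e-12))
    private = {}
    for name, d in lora_delta.items():
        noise = torch.randn_like(d) * noise_multiplier * clip_norm
        private[name] = d * scale + noise
    return private
```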