RELIEF: Turning Missing Modalities into Training Acceleration for Federated Learning on Heterogeneous IoT Edge
Federated learning (FL) over heterogeneous IoT edge devices faces coupled system-modality-data heterogeneity: lower-cost devices carry both fewer sensors and less computational power, so the slowest device (the straggler) also produces the most incomplete gradient signals. Naively averaging device updates dilutes rare-modality information and wastes computation on parameters tied to absent sensors, whereas existing methods handle the triple heterogeneity (system, modality, data) in isolation, and none addresses their coupling. To resolve this, we propose RELIEF, a framework that partitions the fusion-layer Low-Rank Adaptation (LoRA) projection matrix into modality-aligned column blocks and uses this partition as a unified interface for aggregation, elastic training, and communication. Each block is aggregated only within the cohort of devices possessing that modality, which eliminates cross-modal gradient interference; the server then allocates personalized training budgets by prioritizing blocks with the highest cohort-internal divergence, so that resource-constrained devices train fewer but more impactful parameters. We prove that cohort-wise aggregation removes interference from the convergence bound and that the divergence-guided allocation achieves sublinear regret. Experiments on two IoT sensor datasets (PAMAP2, MHEALTH) under both full-parameter (CNN) and parameter-efficient (LoRA) training show that RELIEF achieves up to 9.41x speedup and 37% energy reduction over FedAvg, with up to 15.3 pp rare-modality F1 gains; real-device validation on a two-Jetson AGX Orin testbed confirms these results.
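The cohort-wise aggregation rule described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the modality names, block boundaries, and `cohort_aggregate` helper are all assumptions chosen for the example; the one idea taken from the abstract is that each modality-aligned column block of the fusion-layer LoRA matrix is averaged only over the devices that actually possess that modality.

```python
import numpy as np

# Hypothetical sketch of RELIEF-style cohort-wise aggregation.
# The fusion-layer LoRA projection matrix is partitioned into
# modality-aligned column blocks; each block is averaged only over
# the cohort of devices carrying that modality, so devices lacking
# a sensor never dilute its block. Modalities, column splits, and
# shapes below are illustrative, not from the paper.

MODALITIES = ["imu", "hr", "mag"]
BLOCK_COLS = {"imu": slice(0, 4), "hr": slice(4, 6), "mag": slice(6, 8)}

def cohort_aggregate(updates, present):
    """updates: {device_id: (d, 8) LoRA matrix update}
    present: {device_id: set of modalities that device carries}."""
    agg = np.zeros_like(next(iter(updates.values())))
    for m in MODALITIES:
        cohort = [dev for dev in updates if m in present[dev]]
        if not cohort:
            continue  # no participating device carries this modality
        cols = BLOCK_COLS[m]
        # average the modality-aligned column block within its cohort only
        agg[:, cols] = np.mean([updates[d][:, cols] for d in cohort], axis=0)
    return agg
```

For example, if device `a` carries `{imu, hr}` and device `b` only `{imu}`, the `imu` block is the mean of both updates, the `hr` block comes from `a` alone, and the `mag` block stays at its initialization since no cohort exists for it this round.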
Forward citations
Cited by 4 Pith papers
-
PRISM: Exposing and Resolving Spurious Isolation in Federated Multimodal Continual Learning
PRISM maintains per-expert gradient subspace bases preserved under FedAvg to resolve spurious isolation in federated multimodal continual learning, outperforming 16 baselines with larger gains on longer task sequences.
-
EASE: Federated Multimodal Unlearning via Entanglement-Aware Anchor Closure
EASE closes three residual anchors in federated multimodal unlearning using bilateral displacement, cosine-sine decomposition, and forget lock, achieving near-retrain performance on forget and retain data.
-
Application-Aware Twin-in-the-Loop Planning for Federated Split Learning over Wireless Edge Networks
TiLP integrates network, training, and task sub-twins into a digital twin and uses receding-horizon cross-entropy planning with actor-critic guidance to jointly optimize resource allocation in federated split learning...
-
Lifecycle-Aware Federated Continual Learning in Mobile Autonomous Systems
A new dual-timescale FCL framework with layer-selective rehearsal and knowledge recovery improves mIoU by up to 8.3% in federated settings for autonomous systems.