Split learning for health: Distributed deep learning without sharing raw patient data
abstract
Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN shares neither raw data nor model details with collaborating institutions. The proposed configurations of SplitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks, and iii) learning without sharing labels. We compare the performance and resource-efficiency trade-offs of SplitNN against other distributed deep learning methods such as federated learning and large-batch synchronous stochastic gradient descent, and show highly encouraging results for SplitNN.
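For intuition, here is a minimal sketch of one vanilla SplitNN training step (a PyTorch illustration under assumed layer sizes and optimizer settings, not the paper's exact configuration): raw data and client weights never leave the data holder; only cut-layer activations ("smashed data") and their gradients cross the client/server boundary.

```python
import torch
import torch.nn as nn

# Illustrative split at a cut layer: the data holder runs the lower layers,
# the server runs the upper layers. Neither side sees the other's weights.
client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # at the health entity
server_model = nn.Sequential(nn.Linear(64, 10))             # at the server

opt_client = torch.optim.SGD(client_model.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def splitnn_step(x, y):
    """One training step: only the smashed data and its gradient are exchanged."""
    opt_client.zero_grad()
    opt_server.zero_grad()

    # Client: forward pass up to the cut layer.
    smashed = client_model(x)
    # "Transmit" the smashed data; detach() stands in for the network hop.
    smashed_remote = smashed.detach().requires_grad_()

    # Server: finish the forward pass, compute the loss, backprop to the cut.
    loss = loss_fn(server_model(smashed_remote), y)
    loss.backward()
    opt_server.step()

    # Client: receive the cut-layer gradient and finish backpropagation.
    smashed.backward(smashed_remote.grad)
    opt_client.step()
    return loss.item()

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))  # toy batch
print(splitnn_step(x, y))
```

In the paper's U-shaped configuration for learning without label sharing, the final layers would also run on the client, so the loss (and therefore the labels) never leaves the data holder.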
citing papers
- WISV: Wireless-Informed Semantic Verification for Distributed Speculative Decoding in Device-Edge LLM Inference
  WISV uses a channel-aware semantic acceptance policy on hidden representations to boost accepted sequence length by up to 60.8% and cut interaction rounds by 37.3% in distributed speculative decoding, with under 1% accuracy loss.
- SplitFed-CL: A Split Federated Co-Learning Framework for Medical Image Segmentation with Inaccurate Labels
  SplitFed-CL improves segmentation performance in privacy-preserving federated settings by having a global teacher refine unreliable local labels via weighted student-teacher correction, consistency regularization, and adaptive loss weighting.
- HARMONY: Bridging the Personalization-Generalization Gap by Mitigating Representation Skew in Heterogeneous Split Federated Learning
  HARMONY mitigates representation skew in heterogeneous hybrid split federated learning via meta-learning to simulate diverse extractors and server-side contrastive learning to align features, delivering up to 43% accuracy gains.
- Networked Information Aggregation for Binary Classification
  Sequential prediction passing on DAGs for logistic regression yields O(M/sqrt(D)) excess loss when M-agent windows cover all features, with an Omega(k/D) lower bound identifying depth as the fundamental limit.
- Application-Aware Twin-in-the-Loop Planning for Federated Split Learning over Wireless Edge Networks
  TiLP integrates network, training, and task sub-twins into a digital twin and uses receding-horizon cross-entropy planning with actor-critic guidance to jointly optimize resource allocation in federated split learning, improving task success by 9.5 percentage points on robotic tasks.
- Efficient Federated RLHF via Zeroth-Order Policy Optimization
  Par-S²ZPO matches centralized RLHF sample complexity while converging faster in policy updates and outperforming FedAvg on MuJoCo tasks.
- LightSplit: Practical Privacy-Preserving Split Learning via Orthogonal Projections
  LightSplit uses non-invertible orthogonal projections as an information bottleneck in split learning to reduce transmitted dimensionality by 32x while retaining more than 95% accuracy and limiting reconstruction risk (a minimal sketch of such a projection follows this list).
- Modulated learning for private and distributed regression with just a single sample per client device
  Single-sample clients add one calibrated noisy perturbation to their data point and share only the transformed representation, allowing the server to recover unbiased gradients for private distributed regression (a worked sketch follows this list).
- SplitFT: An Adaptive Federated Split Learning System For LLMs Fine-Tuning
  SplitFT adapts cut-layer selection and reduces LoRA rank per client in federated split learning to improve efficiency and performance when fine-tuning LLMs on heterogeneous devices and data.
- A Survey on Split Learning for LLM Fine-Tuning: Models, Systems, and Privacy Optimizations
  A survey that introduces a unified training pipeline and taxonomizes split learning approaches for LLM fine-tuning across model, system, and privacy dimensions.
- FedProxy: Federated Fine-Tuning of LLMs via Proxy SLMs and Heterogeneity-Aware Fusion
  FedProxy replaces weak adapters with a proxy SLM for federated LLM fine-tuning, outperforming prior methods and approaching centralized performance via compression, heterogeneity-aware aggregation, and training-free fusion.
- Secure and Privacy-Preserving Vertical Federated Learning
  Three optimized MPC protocols for privacy-preserving vertical federated learning that support global and global-local updates while reducing computation versus naive full-MPC delegation.
- Split and Aggregation Learning for Foundation Models Over Mobile Embodied AI Network (MEAN): A Comprehensive Survey
  The paper surveys split and aggregation learning for foundation models in 6G networks to improve efficiency, resource use, and data privacy in distributed AI.
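As referenced in the LightSplit entry above, here is a minimal sketch of a dimension-reducing (and therefore non-invertible) orthogonal projection used as an information bottleneck on cut-layer activations. The 2048-to-64 sizes match the paper's headline 32x ratio, but the function name, seeding, and shapes are illustrative assumptions rather than LightSplit's actual implementation.

```python
import torch

def random_semi_orthogonal(d_in, d_out, seed=0):
    """Return a d_in x d_out matrix with orthonormal columns (d_out < d_in).

    Because the projection reduces dimension it has no inverse: many distinct
    activation vectors map to the same code, limiting reconstruction attacks.
    """
    g = torch.Generator().manual_seed(seed)
    q, _ = torch.linalg.qr(torch.randn(d_in, d_out, generator=g))
    return q

proj = random_semi_orthogonal(2048, 64)  # 32x bottleneck
smashed = torch.randn(8, 2048)           # cut-layer activations, batch of 8
transmitted = smashed @ proj             # only this low-dim code is sent
print(transmitted.shape)                 # torch.Size([8, 64])
```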
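The single-sample scheme in the modulated-learning entry can be made concrete for linear least squares. This is a hedged sketch under assumptions of my own (squared loss, isotropic Gaussian noise with a known scale sigma, illustrative function names); the paper's actual modulation and calibration may differ, but it shows how a server can debias gradients computed from noise-perturbed data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 5, 0.3
w = rng.normal(size=d)             # current model iterate (server side)
x, y = rng.normal(size=d), 1.0     # the client's single private sample

def client_share(x, sigma, rng):
    # The client adds one calibrated noise draw and shares only the result.
    return x + rng.normal(scale=sigma, size=x.shape)

def server_gradient(w, x_tilde, y, sigma):
    # For loss 0.5*(w.x - y)^2 and x_tilde = x + e with E[e] = 0 and
    # Cov[e] = sigma^2 I:  E[(w.x_tilde - y) x_tilde] = (w.x - y) x + sigma^2 w,
    # so subtracting sigma^2 * w yields an unbiased gradient estimate.
    return (w @ x_tilde - y) * x_tilde - sigma**2 * w

# Monte Carlo sanity check: the estimator averages to the true private gradient.
true_grad = (w @ x - y) * x
est = np.mean([server_gradient(w, client_share(x, sigma, rng), y, sigma)
               for _ in range(200_000)], axis=0)
print(np.allclose(est, true_grad, atol=0.05))  # True
```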