pith. machine review for the scientific record.

arXiv:1812.00564 · v1 · submitted 2018-12-03 · cs.LG · stat.ML


Split learning for health: Distributed deep learning without sharing raw patient data

keywords: learning, split, nn, data, deep, distributed, entities, health, sharing
Original abstract

Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks and iii) learning without sharing labels. We compare performance and resource efficiency trade-offs of splitNN and other distributed deep learning methods like federated learning, large batch synchronous stochastic gradient descent and show highly encouraging results for splitNN.
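The core exchange the abstract describes — the client keeps raw patient data and the layers up to a cut point, and only cut-layer activations and their gradients cross the network — can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the two-layer network, layer sizes, learning rate, and synthetic "patient" data are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "patient" data held only by the client; never sent to the server.
X = rng.normal(size=(32, 8))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(8, 4))   # client layers (up to the cut)
W2 = rng.normal(scale=0.1, size=(4, 1))   # server layers (after the cut)

def bce(p, y):
    # binary cross-entropy, clipped for numerical safety
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()

lr = 0.5
losses = []
for _ in range(200):
    # client: forward pass to the cut layer; only `h` crosses the network
    h = np.tanh(X @ W1)

    # server: finish the forward pass and compute the loss
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))
    losses.append(bce(p, y))
    grad_logits = (p - y) / len(X)          # dBCE/dlogits for a sigmoid output
    grad_W2 = h.T @ grad_logits
    grad_h = grad_logits @ W2.T             # only this goes back to the client
    W2 -= lr * grad_W2

    # client: finish backprop locally against the raw data
    grad_W1 = X.T @ (grad_h * (1.0 - h ** 2))   # tanh' = 1 - tanh^2
    W1 -= lr * grad_W1
```

Note that neither `X` nor `W1` ever leaves the client, and the server never sees model details below the cut — which is the property the paper's label-sharing and multi-modal configurations build on.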

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 14 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. WISV: Wireless-Informed Semantic Verification for Distributed Speculative Decoding in Device-Edge LLM Inference

    cs.IT 2026-04 unverdicted novelty 7.0

    WISV uses a channel-aware semantic acceptance policy on hidden representations to boost accepted sequence length by up to 60.8% and cut interaction rounds by 37.3% in distributed speculative decoding, with under 1% ac...

  2. Keyed Nonlinear Transform: Lightweight Privacy-Enhancing Feature Sharing for Medical Image Analysis

    eess.IV 2026-05 unverdicted novelty 6.0

    KNT applies key-conditioned nonlinear obfuscation to split-inference features, cutting re-identification AUC from 0.635 to 0.586 with 0.15 ms overhead and under 1 pp accuracy loss.

  3. SplitFed-CL: A Split Federated Co-Learning Framework for Medical Image Segmentation with Inaccurate Labels

    eess.IV 2026-05 unverdicted novelty 6.0

    SplitFed-CL improves segmentation performance in privacy-preserving federated settings by having a global teacher refine unreliable local labels via weighted student-teacher correction, consistency regularization, and...

  4. HARMONY: Bridging the Personalization-Generalization Gap by Mitigating Representation Skew in Heterogeneous Split Federated Learning

    cs.LG 2026-05 unverdicted novelty 6.0

    HARMONY mitigates representation skew in heterogeneous hybrid split federated learning via meta-learning to simulate diverse extractors and server-side contrastive learning to align features, delivering up to 43% accu...

  5. Networked Information Aggregation for Binary Classification

    cs.LG 2026-05 unverdicted novelty 6.0

Sequential prediction passing on DAGs for logistic regression yields O(M/√D) excess loss when M-agent windows cover all features, with an Ω(k/D) lower bound identifying depth as the fundamental limit.

  6. Application-Aware Twin-in-the-Loop Planning for Federated Split Learning over Wireless Edge Networks

    cs.NI 2026-04 unverdicted novelty 6.0

    TiLP integrates network, training, and task sub-twins into a digital twin and uses receding-horizon cross-entropy planning with actor-critic guidance to jointly optimize resource allocation in federated split learning...

  7. Efficient Federated RLHF via Zeroth-Order Policy Optimization

    cs.LG 2026-04 unverdicted novelty 6.0

    Par-S²ZPO matches centralized RLHF sample complexity while converging faster in policy updates and outperforming FedAvg on MuJoCo tasks.

  8. LightSplit: Practical Privacy-Preserving Split Learning via Orthogonal Projections

    cs.LG 2026-05 unverdicted novelty 5.0

    LightSplit uses non-invertible orthogonal projections as an information bottleneck in split learning to reduce transmitted dimensionality by 32x while retaining more than 95% accuracy and limiting reconstruction risk.

  9. Modulated learning for private and distributed regression with just a single sample per client device

    cs.LG 2026-05 unverdicted novelty 5.0

    Single-sample clients add one calibrated noisy perturbation to their data point and share transformed representations, allowing the server to recover unbiased gradients for private distributed regression.

  10. SplitFT: An Adaptive Federated Split Learning System For LLMs Fine-Tuning

    cs.DC 2026-04 unverdicted novelty 5.0

    SplitFT adapts cut-layer selection and reduces LoRA rank per client in federated split learning to improve efficiency and performance when fine-tuning LLMs on heterogeneous devices and data.

  11. A Survey on Split Learning for LLM Fine-Tuning: Models, Systems, and Privacy Optimizations

    cs.CR 2026-04 unverdicted novelty 5.0

    A survey that introduces a unified training pipeline and taxonomizes split learning approaches for LLM fine-tuning across model, system, and privacy dimensions.

  12. FedProxy: Federated Fine-Tuning of LLMs via Proxy SLMs and Heterogeneity-Aware Fusion

    cs.LG 2026-04 unverdicted novelty 5.0

    FedProxy replaces weak adapters with a proxy SLM for federated LLM fine-tuning, outperforming prior methods and approaching centralized performance via compression, heterogeneity-aware aggregation, and training-free fusion.

  13. Secure and Privacy-Preserving Vertical Federated Learning

    cs.CR 2026-04 unverdicted novelty 5.0

    Three optimized MPC protocols for privacy-preserving vertical federated learning that support global and global-local updates while reducing computation versus naive full-MPC delegation.

  14. Split and Aggregation Learning for Foundation Models Over Mobile Embodied AI Network (MEAN): A Comprehensive Survey

    cs.IT 2026-05 unverdicted novelty 3.0

    The paper surveys split and aggregation learning for foundation models in 6G networks to improve efficiency, resource use, and data privacy in distributed AI.