A Survey on Split Learning for LLM Fine-Tuning: Models, Systems, and Privacy Optimizations
Pith reviewed 2026-05-08 02:42 UTC · model grok-4.3
The pith
Split learning divides LLMs across clients and servers so resource-limited users can fine-tune models without sharing raw data.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper claims that split learning for LLM fine-tuning can be captured by a unified training pipeline whose key stages are a client-side forward pass through the initial layers, transmission of intermediate activations to the server, server-side continuation of the forward and backward passes, and return of gradients to the client. Existing work is then systematically classified under model-level optimizations that adjust layer splits and architectures, system-level techniques that reduce communication volume and latency, and privacy mechanisms that add noise or encryption to intermediate representations.
What carries the argument
The unified fine-grained training pipeline that decomposes split learning into sequential stages of model partitioning, activation exchange, and gradient propagation to enable systematic comparison across papers.
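The four pipeline stages can be sketched end to end with a toy two-layer model. This is a minimal illustration only: the layer shapes, learning rate, and squared loss are assumptions for the sketch, and real systems split a transformer stack rather than two dense layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy split: the client holds the first (bottom) layer,
# the server holds the rest of the model.
W_client = rng.normal(scale=0.1, size=(4, 8))   # client-side layer
W_server = rng.normal(scale=0.1, size=(8, 1))   # server-side head

def split_training_step(x, y, lr=0.1):
    """One round: client forward -> send activations -> server
    forward/backward -> return activation gradients -> client backward."""
    global W_client, W_server
    # Stage 1: client-side forward pass on the initial layers.
    h = np.tanh(x @ W_client)            # intermediate activations
    # Stage 2: transmit h to the server (the only data crossing the boundary).
    # Stage 3: server continues the forward pass and computes the loss.
    pred = h @ W_server
    err = pred - y                       # dL/dpred for mean squared error
    grad_W_server = h.T @ err / len(x)   # server-side backward pass
    grad_h = err @ W_server.T            # gradient w.r.t. the activations
    W_server -= lr * grad_W_server
    # Stage 4: server returns grad_h; client finishes backprop locally.
    grad_pre = grad_h * (1 - h**2)       # backprop through tanh
    W_client -= lr * (x.T @ grad_pre) / len(x)
    return float((err**2).mean())

x = rng.normal(size=(32, 4))
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)
losses = [split_training_step(x, y) for _ in range(200)]
```

Only `h` travels client-to-server and only `grad_h` travels back, which is the property the surveyed system- and privacy-level optimizations act on.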
If this is right
- Resource-constrained clients can perform useful LLM adaptation by running only the first few layers locally and sending activations onward.
- Privacy defenses can be inserted at the activation-exchange step to limit what an honest-but-curious server can infer.
- System optimizations such as layer-wise compression or asynchronous updates can lower the communication cost that currently limits deployment.
- Model-level choices about which layers to place on the client directly affect both accuracy and the amount of private information leaked.
- The taxonomy supplies a shared vocabulary that future papers can use to position their contributions relative to prior work.
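As a concrete instance of inserting a defense at the activation-exchange step, a client can clip and perturb activations before transmission. The clipping norm and noise scale below are illustrative assumptions, not calibrated to any formal privacy budget from the surveyed works.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize_activations(h, clip_norm=1.0, noise_scale=0.5):
    """Clip each activation vector to bound its norm (sensitivity), then
    add Gaussian noise before sending to the server. Parameters are
    illustrative; a DP deployment would calibrate them to an epsilon."""
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h_clipped = h / np.maximum(norms / clip_norm, 1.0)
    return h_clipped + rng.normal(scale=noise_scale, size=h.shape)

h = rng.normal(size=(8, 16)) * 5.0       # raw intermediate activations
h_priv = privatize_activations(h)         # what actually leaves the client
```

The server trains on `h_priv` instead of `h`, trading some utility for a bound on what an honest-but-curious server can reconstruct.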
Where Pith is reading between the lines
- The same pipeline structure could be reused to survey split-learning applications beyond LLMs, such as vision or multimodal models.
- Direct head-to-head benchmarks that implement several reviewed methods inside the unified pipeline would quickly reveal which optimizations actually move the needle on real hardware.
- Regulatory settings that require data localization may find split learning more attractive than full federated learning because only activations cross the boundary.
- If the weakest assumption holds, the survey's categories can serve as a checklist for new system designs rather than requiring each team to invent its own taxonomy.
Load-bearing premise
The existing published work on split learning for LLM fine-tuning is already broad and stable enough that a single pipeline can describe the essential variations without leaving out important practical differences.
What would settle it
Discovery of multiple high-impact split-learning methods whose operational flow cannot be mapped onto the proposed pipeline stages or whose privacy and efficiency trade-offs fall outside the three review dimensions.
Original abstract
Fine-tuning unlocks large language models (LLMs) for specialized applications, but its high computational cost often puts it out of reach for resource-constrained organizations. While cloud platforms could provide the needed resources, data privacy concerns make sharing sensitive information with third parties risky. A promising solution is split learning for LLM fine-tuning, which divides the model between clients and a server, allowing collaborative and secure training through exchanged intermediate data, thus enabling resource-constrained participants to adapt LLMs safely. % In light of this, a growing body of literature has emerged to advance this paradigm, introducing varied model methods, system optimizations, and privacy defense-attack techniques for split learning. To bring clarity and direction to the field, a comprehensive survey is needed to classify, compare, and critique these diverse approaches. This paper fills the gap by presenting the first extensive survey dedicated to split learning for LLM fine-tuning. We propose a unified, fine-grained training pipeline to pinpoint key operational components and conduct a systematic review of state-of-the-art work across three core dimensions: model-level optimization, system-level efficiency, and privacy preservation. Through this structured taxonomy, we establish a foundation for advancing scalable, robust, and secure collaborative LLM adaptation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript is the first dedicated survey on split learning for LLM fine-tuning. It introduces a unified fine-grained training pipeline to identify key operational components and performs a systematic review of state-of-the-art work across three dimensions: model-level optimization, system-level efficiency, and privacy preservation.
Significance. If the literature coverage is exhaustive and the proposed pipeline faithfully maps onto the reviewed methods without oversimplification, the survey would provide a useful organizing framework for an emerging area at the intersection of distributed ML and privacy. It could help standardize terminology and surface open challenges in balancing model utility, communication cost, and privacy guarantees for resource-constrained LLM adaptation.
major comments (2)
- [Abstract and §1] The claim that this is the 'first extensive survey' is load-bearing for the paper's positioning. The manuscript should explicitly describe the literature search protocol, databases queried, keywords, inclusion/exclusion criteria, and date range to allow readers to assess completeness and selection bias.
- [Unified pipeline section] The pipeline is presented as capturing 'key operational components,' yet the review must demonstrate that it accommodates the full range of variations found in the cited works (e.g., different split points, aggregation strategies, or privacy mechanisms) rather than forcing all approaches into a single template.
minor comments (2)
- [Abstract] The text contains a stray LaTeX comment marker ('% In light of this,') that should be removed.
- [Throughout] Ensure that all cited works are consistently referenced with full bibliographic details and that the taxonomy tables or figures clearly indicate which papers fall into each subcategory.
Simulated Author's Rebuttal
We thank the referee for the positive evaluation and recommendation for minor revision. The comments highlight important aspects for strengthening the survey's rigor and clarity, which we address below.
Point-by-point responses
- Referee: [Abstract and §1] The claim that this is the 'first extensive survey' is load-bearing for the paper's positioning. The manuscript should explicitly describe the literature search protocol, databases queried, keywords, inclusion/exclusion criteria, and date range to allow readers to assess completeness and selection bias.
Authors: We agree that a transparent description of the literature search methodology is necessary to substantiate the claim of providing the first extensive survey and to allow assessment of completeness. In the revised manuscript, we will insert a new subsection (e.g., §1.1) that details the search protocol. This will specify the databases queried (Google Scholar, arXiv, IEEE Xplore, ACM Digital Library), the keywords and search strings (e.g., “split learning” AND (“LLM fine-tuning” OR “large language model adaptation”)), the date range (2020–2024, with final search conducted in March 2024), inclusion criteria (peer-reviewed papers, preprints, and technical reports that explicitly address split learning for LLM fine-tuning), and exclusion criteria (works focused solely on inference, non-LLM models, or general federated learning without model splitting). This addition will be placed early in the introduction to support the positioning without changing the core taxonomy or contributions. revision: yes
- Referee: [Unified pipeline section] The pipeline is presented as capturing 'key operational components,' yet the review must demonstrate that it accommodates the full range of variations found in the cited works (e.g., different split points, aggregation strategies, or privacy mechanisms) rather than forcing all approaches into a single template.
Authors: We appreciate this point on ensuring the pipeline does not oversimplify. The pipeline was constructed as a flexible abstraction derived from common stages observed across the literature, with the subsequent taxonomy sections intended to capture variations. To make this explicit, we will revise the unified pipeline section to include a mapping discussion and a compact table. The table will list representative works and show how they instantiate each pipeline stage, explicitly noting differences such as split points (e.g., after the embedding layer versus after multiple transformer blocks), aggregation strategies (client-side versus server-side), and privacy mechanisms (differential privacy, secure multi-party computation, or homomorphic encryption). We will also add forward references from the pipeline to the model-, system-, and privacy-dimension sections to illustrate accommodation of diversity. This revision will be limited to the pipeline section and will not require restructuring the overall taxonomy. revision: yes
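The split-point variation the response describes can be made concrete with a small partitioning helper. The layer names and twelve-block depth below are hypothetical stand-ins for a real transformer, not drawn from the survey.

```python
def partition_layers(layers, split_point):
    """Return (client_layers, server_layers) for a given split point.
    split_point counts how many bottom layers stay on the client."""
    return layers[:split_point], layers[split_point:]

# Hypothetical 12-block LLM with an embedding layer and output head.
llm = ["embedding"] + [f"block_{i}" for i in range(12)] + ["lm_head"]

# Split after the embedding layer: minimal client compute, but the
# transmitted activations sit closest to the raw tokens.
client_side, server_side = partition_layers(llm, 1)

# Split after three transformer blocks: more client compute, activations
# that are harder to invert.
client_deep, server_deep = partition_layers(llm, 4)
```

Moving the split point deeper trades client-side compute and memory for reduced invertibility of the transmitted activations, which is exactly the model-level/privacy coupling the taxonomy tracks.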
Circularity Check
No significant circularity; survey with no derivations or self-referential claims
full rationale
This paper is a literature survey that proposes a unified training pipeline and taxonomy for split learning in LLM fine-tuning. It reviews external state-of-the-art work across model, system, and privacy dimensions without any original equations, fitted parameters, predictions, or theorems. All load-bearing elements are citations to prior independent literature and a descriptive classification; no step reduces by construction to the paper's own inputs, self-citations, or fitted quantities. The reader's assessment of score 0.0 is confirmed: the work is self-contained as a review and does not exhibit any of the enumerated circularity patterns.