Class-Aware Adaptive Differential Privacy in Deep Learning for Sensor-Based Fall Detection
Pith reviewed 2026-05-10 16:21 UTC · model grok-4.3
The pith
Adapting noise in differential privacy to each mini-batch's class mix improves fall-detection accuracy while retaining formal privacy guarantees.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By making the noise magnitude in differential privacy depend on the class composition of each mini-batch, the CA-ADP mechanism reduces the usual accuracy penalty of privacy protection in fall-detection models. When integrated with a hybrid 3D CNN-BiLSTM architecture and tested on three public sensor datasets, the framework delivers measurable F-score gains over uniform-noise baselines while satisfying the standard (ε,δ)-differential privacy definition and passing Wilcoxon signed-rank checks for consistent improvement.
What carries the argument
The Class-Aware Adaptive Differential Privacy (CA-ADP) mechanism, which scales gradient noise according to the class counts inside each mini-batch.
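The review does not reproduce the paper's exact scaling rule, but a minimal DP-SGD-style sketch conveys the idea. Everything below is illustrative: `ca_adp_step`, `base_sigma`, and the inverse-balance rule (more noise for more imbalanced batches) are assumed stand-ins, not the authors' definitions.

```python
import numpy as np

def ca_adp_step(per_sample_grads, labels, clip_norm=1.0, base_sigma=1.0):
    """One DP-SGD-style update with class-aware noise scaling (illustrative).

    per_sample_grads: (batch, dim) array of per-example gradients
    labels: (batch,) array of 0/1 labels (fall / no-fall)
    The scaling rule below is a hypothetical stand-in for the paper's
    unspecified CA-ADP schedule.
    """
    # Clip each example's gradient to bound per-sample sensitivity.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Class composition of this mini-batch.
    p_minority = min(labels.mean(), 1.0 - labels.mean())  # in [0, 0.5]

    # Assumed rule: balanced batches (p ~ 0.5) get the base noise level,
    # imbalanced batches get proportionally more.
    sigma = base_sigma * (1.0 + (0.5 - p_minority))

    summed = clipped.sum(axis=0)
    noise = np.random.normal(0.0, sigma * clip_norm, size=summed.shape)
    return (summed + noise) / len(labels)
```

Note that because `sigma` here is computed directly from the private labels, this naive version exhibits exactly the side-channel concern raised later in the report.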
If this is right
- The method yields F-score gains of 3.3%, 8.5%, and 7.5% on SisFall, UP-Fall, and MobiAct respectively while preserving (ε,δ)-differential privacy.
- Wilcoxon signed-rank tests show the adaptive approach outperforms conventional differential privacy on the same architecture and data.
- The framework provides formal privacy guarantees that many prior fall-detection studies omit.
- Performance remains competitive with non-private baselines in real-world healthcare sensor settings.
Where Pith is reading between the lines
- The same batch-wise noise scaling could be applied to other sensor classification tasks that suffer from class imbalance.
- Reduced noise on balanced batches may allow smaller training sets to reach usable accuracy under privacy constraints.
- Practical deployment would still need checks that the class-composition signal itself does not create a new side channel.
Load-bearing premise
Dynamically changing noise based on class counts in each batch still satisfies the claimed differential privacy bound and does not leak extra information through the balance signal itself.
What would settle it
A membership or attribute inference attack that succeeds at a higher rate on models trained with the class-aware noise schedule than on models trained with the matching uniform noise level.
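Such a check could start from a simple loss-threshold (Yeom-style) membership inference attack, run once against the class-aware model and once against the matched uniform-noise model. The function below is a generic sketch of that attack, not a procedure from the paper.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses):
    """Yeom-style membership inference: predict 'member' when an example's
    loss falls below the mean training loss, and report balanced attack
    accuracy over members and non-members (0.5 = no leakage signal)."""
    threshold = np.mean(member_losses)
    tpr = np.mean(member_losses < threshold)     # members flagged as members
    fpr = np.mean(nonmember_losses < threshold)  # non-members flagged
    return 0.5 * (tpr + (1.0 - fpr))
```

A materially higher attack accuracy on the class-aware model than on the uniform-noise model, at the same nominal (ε, δ), would indicate leakage through the balance signal.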
Original abstract
Fall detection is a critical task in healthcare, particularly for elderly people. Timely fall detection and treatment can prevent severe injuries. Sensor-based activity data can be used to detect falls. However, these data are highly sensitive and raise significant privacy concerns. Existing privacy approaches apply uniform noise across all training samples, which degrades prediction performance. To address this limitation, we propose a Class-Aware Adaptive Differential Privacy (CA-ADP) framework integrated with a hybrid 3D Convolutional Neural Network and Bidirectional Long Short-Term Memory (3D CNN-BiLSTM) architecture. The CA-ADP mechanism dynamically adjusts the magnitude of noise added to gradients based on the class composition of each mini-batch. This process ensures privacy while mitigating performance degradation. We formally analyze the $(\epsilon,\delta)$-Differential Privacy guarantee and provide a privacy-utility trade-off analysis. The proposed method is evaluated on three public benchmark datasets, namely SisFall, UP-Fall, and MobiAct. The experimental results show that the proposed privacy model achieves improvements of 3.3\%, 8.5\%, and 7.5\% over the conventional privacy-based model in terms of F-score for the SisFall, UP-Fall, and MobiAct datasets, respectively. Comparisons with prior studies show that the CA-ADP-based framework achieves competitive performance and provides formal privacy guarantees, which are largely overlooked in existing studies. Wilcoxon signed-rank tests confirm that the proposed mechanism consistently outperforms conventional differential privacy. These results establish the proposed CA-ADP framework as an effective approach to privacy-preserving fall detection in real-world healthcare settings.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a Class-Aware Adaptive Differential Privacy (CA-ADP) framework that integrates with a 3D CNN-BiLSTM architecture for sensor-based fall detection. Noise added to per-mini-batch gradients is scaled according to the observed class proportions within each batch; the authors claim this yields formal (ε, δ)-DP guarantees while delivering F-score gains of 3.3 %, 8.5 %, and 7.5 % over conventional DP baselines on the SisFall, UP-Fall, and MobiAct datasets, respectively, supported by Wilcoxon signed-rank tests and a privacy-utility trade-off analysis.
Significance. If the data-dependent adaptation can be shown to preserve the stated (ε, δ) bounds without introducing leakage through the class-balance signal, the method would provide a concrete route to reduce the utility penalty of DP-SGD on the imbalanced, privacy-sensitive classification tasks typical of wearable healthcare sensing. The use of public benchmarks, explicit privacy claims, and statistical testing are constructive elements.
major comments (2)
- [Abstract and privacy-analysis section] The formal (ε, δ)-DP analysis (referenced in the abstract and presumably detailed in the methods section) applies standard DP-SGD composition or moments-accountant bounds to a noise schedule whose magnitude is chosen deterministically from the private class counts inside each mini-batch. No explicit privacy loss term or worst-case bound is supplied for the selection function itself; an adversary observing the realized noise scale can therefore infer information about batch class balance, which is sensitive in an imbalanced fall-detection setting. This gap directly undermines the central claim that the deployed mechanism satisfies the reported privacy parameters.
- [Experimental results] Table or figure reporting the F-score deltas (abstract and experimental results) presents point estimates without error bars, standard deviations across random seeds, or an ablation that isolates the class-composition estimator from other modeling choices. Because the central empirical claim rests on these specific percentage improvements, the absence of these controls makes it impossible to judge whether the gains are robust or attributable to the adaptation rule.
minor comments (2)
- The precise functional form of the noise-scaling rule (how class proportions map to the noise multiplier) is described only at a high level; a mathematical definition or algorithm box would clarify the mechanism.
- The manuscript does not discuss whether the class-composition estimator itself is released or kept internal, nor how any auxiliary information about batch statistics is protected.
Simulated Author's Rebuttal
We thank the referee for the constructive and insightful comments on our manuscript. We address each major comment point by point below. We will revise the manuscript to incorporate the suggested improvements to the privacy analysis and experimental reporting.
Point-by-point responses
-
Referee: [Abstract and privacy-analysis section] The formal (ε, δ)-DP analysis (referenced in the abstract and presumably detailed in the methods section) applies standard DP-SGD composition or moments-accountant bounds to a noise schedule whose magnitude is chosen deterministically from the private class counts inside each mini-batch. No explicit privacy loss term or worst-case bound is supplied for the selection function itself; an adversary observing the realized noise scale can therefore infer information about batch class balance, which is sensitive in an imbalanced fall-detection setting. This gap directly undermines the central claim that the deployed mechanism satisfies the reported privacy parameters.
Authors: We acknowledge that the current privacy analysis does not explicitly bound the privacy loss arising from the data-dependent selection of the noise scale based on per-batch class counts. This is a valid concern, as the class composition is sensitive and could leak information if observed through the chosen noise magnitude. In the revised manuscript, we will extend the formal analysis to include a privacy loss term for the class-composition estimator. We will model the estimator's sensitivity, apply the moments accountant to compose its privacy cost with the standard DP-SGD bounds, and if needed, add calibrated noise to the class counts to ensure the overall mechanism satisfies the stated (ε, δ) guarantees without additional leakage. The updated analysis will be presented in the methods section with a clear worst-case bound. revision: yes
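One way the proposed fix could look in practice is a Laplace mechanism on the raw class count (add/remove sensitivity of 1 per example) before the count is used to pick the noise scale, so the selection depends only on a privatized statistic. The function name, the `eps_count` budget, and its value are illustrative; this budget would have to be composed with the DP-SGD budget in the accountant.

```python
import numpy as np

def private_minority_fraction(labels, eps_count=0.5, rng=None):
    """Release the minority-class fraction of a mini-batch under
    eps_count-DP by adding Laplace noise (scale 1/eps_count) to the raw
    positive count, whose add/remove sensitivity is 1. The noise-scale
    selection can then read this value without touching private data."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(labels)
    noisy_ones = np.sum(labels) + rng.laplace(0.0, 1.0 / eps_count)
    frac = np.clip(noisy_ones / n, 0.0, 1.0)
    return min(frac, 1.0 - frac)
```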
-
Referee: [Experimental results] Table or figure reporting the F-score deltas (abstract and experimental results) presents point estimates without error bars, standard deviations across random seeds, or an ablation that isolates the class-composition estimator from other modeling choices. Because the central empirical claim rests on these specific percentage improvements, the absence of these controls makes it impossible to judge whether the gains are robust or attributable to the adaptation rule.
Authors: We agree that additional statistical controls are necessary to substantiate the reported F-score improvements. In the revised version, we will rerun all experiments across at least five independent random seeds and report mean F-scores accompanied by standard deviations and error bars in the relevant tables and figures. We will also add an ablation study that isolates the class-composition adaptation by comparing the full CA-ADP mechanism against a non-adaptive variant (using fixed or averaged class proportions) while keeping all other components identical. This will clarify the contribution of the dynamic scaling rule. The Wilcoxon signed-rank test results will be reported with exact p-values for transparency. revision: yes
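The promised statistical reporting could be produced with SciPy's paired Wilcoxon test over per-seed scores; the F-score values below are placeholders, not results from the paper.

```python
from scipy.stats import wilcoxon

# Paired per-seed F-scores (illustrative placeholder numbers).
ca_adp  = [0.910, 0.890, 0.920, 0.900, 0.930]
uniform = [0.880, 0.865, 0.905, 0.858, 0.901]

# One-sided test: does the class-aware schedule score higher per seed?
stat, p = wilcoxon(ca_adp, uniform, alternative="greater")
```

With no ties among the paired differences and a small number of seeds, SciPy computes the exact p-value, which is what the rebuttal commits to reporting.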
Circularity Check
No circularity: central claims are empirical performance gains on public benchmarks plus a stated formal privacy analysis.
full rationale
The paper presents an empirical comparison of F-scores on three public datasets (SisFall, UP-Fall, MobiAct) and asserts a formal (ε,δ)-DP analysis for its class-aware noise scaling. No equations or derivations are shown that reduce the reported improvements or the privacy bound to fitted parameters, self-citations, or input data by construction. The improvement numbers are measured outcomes rather than predictions forced by the model definition, and the privacy claim is presented as an independent analysis rather than a renaming or self-referential definition. Self-citations, if present in the full text, are not load-bearing for the core result.
Axiom & Free-Parameter Ledger
free parameters (1)
- class-composition noise scaling factors
axioms (1)
- standard math: the standard (ε,δ)-differential privacy definition continues to hold after the adaptive noise addition
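For reference, the baseline this axiom leans on is the classical Gaussian-mechanism calibration (valid for ε < 1), which fixes the minimum per-release noise that any schedule, adaptive or not, must respect:

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Classical Gaussian-mechanism calibration for (epsilon, delta)-DP,
    valid for epsilon < 1:
        sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon
    """
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
```

Any class-aware scaling factor would multiply on top of a floor of this form (or its tighter moments-accountant analogue) rather than replace it.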