Fall Risk and Gait Analysis in Community-Dwelling Older Adults using World-Spaced 3D Human Mesh Recovery
Pith reviewed 2026-05-10 16:20 UTC · model grok-4.3
The pith
Video-based 3D mesh recovery extracts gait parameters linked to fall risk in older adults
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By applying a 3D Human Mesh Recovery model to videos of community-dwelling older adults completing the Timed Up and Go test, the authors extract spatiotemporal gait parameters including step time, step length, and sit-to-stand duration. Video-derived step time correlates significantly with IMU-based insole measurements. Linear mixed effects models show that higher self-rated fall risk and fear of falling predict shorter and more variable step lengths as well as longer sit-to-stand durations.
What carries the argument
The 3D Human Mesh Recovery model that reconstructs world-spaced 3D human meshes from 2D video frames to derive gait parameters without calibration.
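As a concrete illustration of what "deriving gait parameters" from world-spaced meshes can look like, here is a minimal sketch, not the authors' pipeline: heel strikes are approximated as low points of the ankle trajectory, step time as the interval between alternating strikes, and step length as the horizontal distance between them. The array shapes and the 0.12 m height threshold are hypothetical.

```python
import numpy as np

def step_params(ankle_l, ankle_r, fps=30.0):
    """Toy spatiotemporal-parameter extraction from world-space ankle
    trajectories (T x 3 arrays in metres, z = height). Heel strikes are
    approximated as frames where ankle height is a local minimum below
    a fixed threshold; real pipelines use more robust event detection."""
    events = []  # (frame, foot, horizontal position)
    for foot, traj in (("L", ankle_l), ("R", ankle_r)):
        z = traj[:, 2]
        for t in range(1, len(z) - 1):
            if z[t] < z[t - 1] and z[t] <= z[t + 1] and z[t] < 0.12:
                events.append((t, foot, traj[t, :2]))
    events.sort(key=lambda e: e[0])
    step_times, step_lengths = [], []
    for (t0, f0, p0), (t1, f1, p1) in zip(events, events[1:]):
        if f0 != f1:  # one foot-fall to the next foot-fall of the other foot
            step_times.append((t1 - t0) / fps)
            step_lengths.append(float(np.linalg.norm(p1 - p0)))
    return step_times, step_lengths
```

Because a world-grounded HMR model such as GVHMR outputs joint trajectories in metric units rather than pixels, quantities computed this way are in metres and seconds without per-setting calibration; the minima rule above merely stands in for the biomechanical event detection described in the paper's appendix.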
If this is right
- Gait assessment becomes possible in community centers using standard video equipment.
- Specific gait features such as step variability and sit-to-stand duration serve as indicators of fall risk.
- The method offers an ecologically valid alternative to clinical stopwatch timing for older adult mobility.
- Video pipelines enable scalable collection of gait data in natural environments.
Where Pith is reading between the lines
- This video approach might be adapted for smartphone-based monitoring to track fall risk changes over time at home.
- It could connect to broader efforts in using computer vision for health screening in aging populations.
- Validation across different video qualities and participant groups would strengthen the case for widespread use.
Load-bearing premise
The 3D Human Mesh Recovery model accurately reconstructs world-spaced gait parameters from unconstrained community videos of older adults without additional calibration or per-setting ground truth.
What would settle it
A new validation set of community videos where video-derived step times fail to correlate with simultaneous IMU insole readings and where gait features show no association with fall risk questionnaire scores.
Original abstract
Gait assessment is a key clinical indicator of fall risk and overall health in older adults. However, standard clinical practice is largely limited to stopwatch-measured gait speed. We present a pipeline that leverages a 3D Human Mesh Recovery (HMR) model to extract gait parameters from recordings of older adults completing the Timed Up and Go (TUG) test. From videos recorded across different community centers, we extract and analyze spatiotemporal gait parameters, including step time, sit-to-stand duration, and step length. We found that video-derived step time was significantly correlated with IMU-based insole measurements. Using linear mixed effects models, we confirmed that shorter, more variable step lengths and longer sit-to-stand durations were predicted by higher self-rated fall risk and fear of falling. These findings demonstrate that our pipeline can enable accessible and ecologically valid gait analysis in community settings.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents a pipeline that applies 3D Human Mesh Recovery (HMR) to monocular videos of community-dwelling older adults performing the Timed Up and Go (TUG) test. From the recovered meshes it extracts spatiotemporal parameters (step time, step length, sit-to-stand duration) and reports (i) a significant correlation between video-derived step time and IMU-insole ground truth, and (ii) linear mixed-effects models showing that higher self-rated fall risk and fear of falling predict shorter/more variable step lengths and longer sit-to-stand times.
Significance. If the metric accuracy of the HMR-derived parameters can be established beyond step time, the work would provide a practical route to ecologically valid, scalable gait assessment outside laboratory settings, potentially improving fall-risk screening in community centers.
Major comments (1)
- [Abstract] The central empirical claim, that shorter, more variable step lengths and longer sit-to-stand durations are associated with higher fall risk, rests on the assumption that the HMR pipeline recovers accurate world-space (metric) values for step length and sit-to-stand duration. Only step time is shown to correlate significantly with IMU measurements; no correlation coefficients, error metrics, or ground-truth comparisons are reported for the other two parameters. Monocular HMR is known to suffer from scale ambiguity and depth errors, which could systematically bias the linear mixed-effects (LME) predictors and therefore the reported associations.
Minor comments (1)
- [Abstract] Sample size, exact statistical values (correlation coefficients, p-values, effect sizes), and the random-effects structure of the linear mixed-effects models are not reported, making it difficult to judge the robustness and generalizability of the findings.
Simulated Author's Rebuttal
We thank the referee for the thoughtful review and for identifying the need to clarify the scope of our validation. We respond to the major comment below.
Point-by-point responses
-
Referee: [Abstract] The central empirical claim, that shorter, more variable step lengths and longer sit-to-stand durations are associated with higher fall risk, rests on the assumption that the HMR pipeline recovers accurate world-space (metric) values for step length and sit-to-stand duration. Only step time is shown to correlate significantly with IMU measurements; no correlation coefficients, error metrics, or ground-truth comparisons are described for the other two parameters. Monocular HMR is known to suffer from scale ambiguity and depth errors, which could systematically bias the LME predictors and therefore the reported associations.
Authors: We agree that only step time receives direct IMU validation (Pearson correlation reported in Results). Step length and sit-to-stand duration lack equivalent ground-truth metrics in the current dataset. While the world-space HMR approach used in the pipeline is designed to recover metric-scale values, we acknowledge that residual scale ambiguity or depth errors could in principle affect absolute values. However, the linear mixed-effects models examine within-subject associations with self-reported fall risk; any uniform scale bias would not alter the sign or statistical significance of the coefficients. We will revise the manuscript to (i) explicitly state the validation scope in the abstract and methods, (ii) add a dedicated limitations paragraph discussing potential metric errors for unvalidated parameters, and (iii) outline future work incorporating additional sensors for full parameter validation.
revision: partial
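The uniform-scale-bias argument in the rebuttal can be checked numerically: multiplying the outcome by a constant positive factor leaves the correlation untouched and rescales the regression slope without flipping its sign. All data below are synthetic stand-ins, and the bias factor `c` is hypothetical; note the argument holds only for a bias that is uniform across trials, since per-video scale errors would instead add noise or bias, which is closer to the referee's concern.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                          # stand-in for fall-risk score
y = -0.4 * x + rng.normal(scale=0.5, size=200)    # stand-in for true step length

def pearson(a, b):
    """Pearson correlation of two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

c = 0.8                                # hypothetical uniform HMR scale bias
r_true, r_biased = pearson(x, y), pearson(x, c * y)   # identical: r is scale-invariant
slope_true = r_true * y.std() / x.std()               # OLS slope from r and SDs
slope_biased = r_biased * (c * y).std() / x.std()     # = c * slope_true: same sign
```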
Circularity Check
No significant circularity; empirical correlations and statistical modeling are self-contained
full rationale
The paper's core pipeline applies a pre-trained 3D Human Mesh Recovery model to monocular videos to derive spatiotemporal parameters (step time, step length, sit-to-stand duration), validates one parameter (step time) via direct correlation against independent IMU insole data, and fits linear mixed effects models to test associations between those parameters and self-rated fall-risk scores. No equations or steps reduce by construction to their inputs: the LME outputs are statistical inferences from observed data rather than renamed fits, the HMR extraction is treated as an external tool without self-referential definitions, and no self-citation chains or uniqueness theorems are invoked to justify the method. The derivation remains externally benchmarked and falsifiable.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: The 3D Human Mesh Recovery model accurately reconstructs gait parameters from 2D videos in real-world settings.
- Domain assumption: Linear mixed-effects models appropriately capture the relationship between gait parameters and fall risk.
Reference graph
Works this paper leans on
- [1] S. Akter, T. M. Guess, S. Sarker, et al. Explainable machine learning for early detection of mild cognitive impairment, fall risk, and frailty using sensor-based motor function data. medRxiv, 2025.12.23.25342943, 2025.
- [2] C. Banarjee, P. M. Maldonado, M. S. B. Hossain, H. Choi, R. Xie, and L. Thiamwong. Associations of gait characteristics with fall risk and frailty in older women. Innovation in Aging, 9(Supplement 2):igaf122.3357, 2025.
- [3] D. Bates. lme4: Linear mixed-effects models using Eigen and S4. R Package Version, 1(1), 2016.
- [4] G. Bergen and I. Shakya. CDC STEADI: Evaluation guide for older adult clinical fall prevention programs. Technical report, Centers for Disease Control and Prevention, 2019.
- [5] J. D. Blasco-García, G. García-López, M. Jiménez-Muñoz, et al. A computer vision-based system to help health professionals to apply tests for fall risk assessment. Sensors, 24(6):2015, 2024.
- [6] A. Brachman, V. Hadyk, A. Kamieniarz-Olczak, and A. Nawrat-Szołtysik. Examining the role of fear of falling on gait parameters and short-term gait adaptation in older adults. Journal of Clinical Medicine, 14(23):8311, 2025.
- [7] A. S. Buchman, P. A. Boyle, R. S. Wilson, D. A. Fleischman, S. Leurgans, and D. A. Bennett. Association between late-life social activity and motor decline in older adults. Archives of Internal Medicine, 169(12):1139–1146, 2009.
- [8] I. Bytyci and M. Y. Henein. Stride length predicts adverse clinical events in older adults: a systematic review and meta-analysis. Journal of Clinical Medicine, 10(12):2670, 2021.
- [9] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(1):172–186, 2019.
- [10] J. Chang, J. Nathalie, M. Nguyenhuy, R. Xu, S. A. Virk, and A. Saxena. Slow gait speed is associated with worse postoperative outcomes in cardiac surgery: A systematic review and meta-analysis. Journal of Cardiac Surgery, 37(1):197–204, 2022.
- [11] T. Cuignet, C. Perchoux, G. Caruso, et al. Mobility among older adults: Deconstructing the effects of motility and movement on wellbeing. Urban Studies, 57(2):383–401.
- [12] M. Á. De la Cámara, S. Higueras-Fresnillo, K. P. Sadarangani, I. Esteban-Cornejo, D. Martinez-Gomez, and Ó. L. Veiga. Clinical and ambulatory gait speed in older adults: associations with several physical, mental, and cognitive health outcomes. Physical Therapy, 100(4):718–727.
- [13] K. Delbaere, J. C. Close, A. S. Mikolaizak, P. S. Sachdev, H. Brodaty, and S. R. Lord. The falls efficacy scale international (FES-I): A comprehensive longitudinal validation study. Age and Ageing, 39(2):210–216, 2010.
- [14] G. Delfi, M. Kamachi, and T. Dutta. Development of an automated minimum foot clearance measurement system: Proof of principle. Sensors, 21(3):976, 2021.
- [15] E. Elyan, P. Vuttipittayamongkol, P. Johnston, et al. Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. Artificial Intelligence Surgery, 2(1):24–45, 2022.
- [16] H.-S. Fang, J. Li, H. Tang, et al. AlphaPose: Whole-body regional multi-person pose estimation and tracking in real-time. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7157–7173, 2022.
- [17] A. Fogel. Movement and communication in human infancy: The social dynamics of development. Human Movement Science, 11(4):387–423, 1992.
- [18] M. U. Friedrich, S. Relton, D. Wong, and J. Alty. Computer vision in clinical neurology: a review. JAMA Neurology.
- [19] D. J. Goble. The BTrackS Balance Test is a valid predictor of older adult falling, 2018.
- [20] S. Goel, G. Pavlakos, J. Rajasegaran, A. Kanazawa, and J. Malik. Humans in 4D: Reconstructing and tracking humans with transformers. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14737–14748, 2023.
- [21] L. Hou, X. Liu, Y. Zhang, et al. Cohort profile: West China health and aging trend (WCHAT). The Journal of Nutrition, Health and Aging, 25(3):302–310, 2021.
- [22] K. Jakubowski, T. Eerola, P. Alborno, G. Volpe, A. Camurri, and M. Clayton. Extracting coarse body movements from video in music performance: A comparison of automated computer vision techniques with motion capture data. Frontiers in Digital Humanities, 4:9, 2017.
- [23] M. Javaid, A. Haleem, R. P. Singh, and M. Ahmed. Computer vision to enhance healthcare domain: An overview of features, implementation, and opportunities. Intelligent Pharmacy, 2(6):792–803, 2024.
- [24] G. Jocher, A. Chaurasia, and J. Qiu. Ultralytics YOLOv8, 2023.
- [25] G. I. Kempen, L. Yardley, J. C. Van Haastregt, et al. The short FES-I: a shortened version of the falls efficacy scale-international to assess fear of falling. Age and Ageing, 37(1):45–50, 2008.
- [26] A. Khang, V. Abdullayev, E. Litvinova, S. Chumachenko, A. V. Alyar, and P. Anh. Application of computer vision (CV) in the healthcare ecosystem. In Computer Vision and AI-Integrated IoT Technologies in the Medical Ecosystem, pages 1–16. CRC Press, 2024.
- [27] U. Kim, J. Lim, Y. Park, and Y. Bae. Predicting fall risk through step width variability at increased gait speed in community-dwelling older adults. Scientific Reports, 15(1):16915, 2025.
- [28] M. Kocabas, N. Athanasiou, and M. J. Black. VIBE: Video inference for human body pose and shape estimation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5252–5262, 2020.
- [29] B. Langhammer, B. Lindmark, and J. Stanghelle. The relation between gait velocity and static and dynamic balance in the early rehabilitation of patients with acute stroke. Advances in Physiotherapy, 8(2):60–65, 2006.
- [30] L. Lee, T. Patel, A. Costa, et al. Screening for frailty in primary care: accuracy of gait speed and hand-grip strength. Canadian Family Physician, 63(1):e51–e57, 2017.
- [31] S. S. Levy, K. J. Thralls, and S. A. Kviatkovsky. Validity and reliability of a portable balance tracking system, BTrackS, in older adults. Journal of Geriatric Physical Therapy, 41(2):102–107, 2018.
- [32] R. Li, R. J. St George, X. Wang, et al. Moving towards intelligent telemedicine: Computer vision measurement of human movement. Computers in Biology and Medicine, 147:105776, 2022.
- [33] Z. Li, J. Liu, Z. Zhang, S. Xu, and Y. Yan. CLIFF: Carrying location information in full frames into human pose and shape estimation. In European Conference on Computer Vision, 2022.
- [34] N. Löfgren, L. Berglund, V. Giedraitis, E. Rosendahl, and A. C. Åberg. Can turn duration and step parameters during the timed up and go test with and without a dual-task discriminate between individuals with different cognitive abilities? An explorative study. Assessment, page 10731911251410337, 2026.
- [35] C.-T. Lu, Y.-C. Liu, and Y.-C. Pan. An intelligent playback control system adapted by body movements and facial expressions recognized by OpenPose and CNN. Multimedia Tools and Applications, 83(10):31139–31160, 2024.
- [36] U. M. U. Luis, M. S. Rodrigo, S. S. Cristhian, and S. M. C. Mauricio. Beyond timing: A critical review of the iTUG test and its implementation challenges for fall risk assessment in community-dwelling older adults. Health Policy and Technology, page 101166, 2026.
- [37] Z. Luo, J.-T. Hsieh, N. Balachandar, et al. Computer vision-based descriptive analytics of seniors' daily activities for long-term health monitoring. Machine Learning for Healthcare (MLHC), 2(1), 2018.
- [38] M. Marano, G. Sergi, A. Magliozzi, et al. Fear of falling impairs spatiotemporal gait parameters, mobility, and quality of life in Parkinson's disease: a cross-sectional study. Neurological Sciences, 46(6):2655–2663, 2025.
- [39] D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. MIT Press, 2010.
- [40] Y. Mawarikado, Y. Uchihashi, Y. Inagaki, et al. Relationship between history of falls and foot pressure centre parameters during gait and stance in patients with lower-limb osteoarthritis. Scientific Reports, 15(1):26723, 2025.
- [41] S. Mroz, N. Baddour, C. McGuirk, et al. Comparing the quality of human pose estimation with BlazePose or OpenPose. In 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), 2021.
- [42] K.-D. Ng, S. Mehdizadeh, A. Iaboni, A. Mansfield, A. Flint, and B. Taati. Measuring gait variables using computer vision to assess mobility and fall risk in older adults with dementia. IEEE Journal of Translational Engineering in Health and Medicine, 8:1–9, 2020.
- [43] R. W. Nithman and J. L. Vincenzo. How steady is the STEADI? Inferential analysis of the CDC fall risk toolkit. Archives of Gerontology and Geriatrics, 83:185–194, 2019.
- [44] S. M. O'Connor, H. S. Baweja, and D. J. Goble. Validating the BTrackS balance plate as a low cost alternative for the measurement of sway-induced center of pressure. Journal of Biomechanics, 49(16):4142–4145, 2016.
- [45] P. Ortega-Bastidas, B. Gomez, P. Aqueveque, S. Luarte-Martinez, and R. Cano-de-la-Cuerda. Instrumented timed up and go test (iTUG)—more than assessing time to predict falls: a systematic review. Sensors, 23(7):3426, 2023.
- [46] D. Parker, J. Andrews, and C. Price. Validity and reliability of the XSENSOR in-shoe pressure measurement system. PLoS One, 18(1):e0277971, 2023.
- [47] G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. A. Osman, D. Tzionas, and M. J. Black. Expressive body capture: 3D hands, face, and body from a single image. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10967–10977, 2019.
- [48] R. Ranjan, D. Ahmedt-Aristizabal, M. A. Armin, and J. Kim. Computer vision for clinical gait analysis: A gait abnormality video dataset. IEEE Access, 13:45321–45339, 2025.
- [49] F. Ren, C. Ren, and T. Lyu. IoT-based 3D pose estimation and motion optimization for athletes: Application of C3D and OpenPose. Alexandria Engineering Journal, 115:210–221, 2025.
- [50] E. Salcedo. Computer vision-based gait recognition on the edge: A survey on feature representations, models, and architectures. Journal of Imaging, 10(12):326, 2024.
- [51] L. Seematter-Bagnoud and C. Büla. Brief assessments and screening for geriatric conditions in older primary care patients: a pragmatic approach. Public Health Reviews, 39(1):8, 2018.
- [52] C. Shen, S. Yu, J. Wang, G. Q. Huang, and L. Wang. A comprehensive survey on deep gait recognition: Algorithms, datasets, and challenges. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024.
- [53] Z. Shen, H. Pi, Y. Xia, Z. Cen, S. Peng, Z. Hu, H. Bao, R. Hu, and X. Zhou. World-grounded human motion recovery via gravity-view coordinates. SIGGRAPH Asia 2024 Conference Papers, 2024.
- [54] S. Shin, J. Kim, E. Halilaj, and M. J. Black. WHAM: Reconstructing world-grounded humans with accurate 3D motion. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2070–2080, 2024.
- [55] S. Studenski. Gait speed reveals clues to lifelong health. JAMA Network Open, 2(10):e1913112, 2019.
- [56] J. Su, Y. Lu, S. Pan, B. Wen, and Y. Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv, abs/2104.09864, 2021.
- [57] Z. Teed, L. Lipson, and J. Deng. Deep patch visual odometry. arXiv, abs/2208.04726, 2022.
- [58] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
- [59] A. Weiss, A. Mirelman, N. Giladi, L. L. Barnes, D. A. Bennett, A. S. Buchman, and J. M. Hausdorff. Transition between the timed up and go turn to sit subtasks: is timing everything? Journal of the American Medical Directors Association, 17(9):864.e9, 2016.
- [60] C. W. Won, S. Lee, J. Kim, et al. Korean frailty and aging cohort study (KFACS): cohort profile. BMJ Open, 10(4):e035573, 2020.
- [61] Y. Xu, J. Zhang, Q. Zhang, and D. Tao. ViTPose: Simple vision transformer baselines for human pose estimation. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [62] S. Yan, Z. Yu, C. Primiero, et al. A multimodal vision foundation model for clinical dermatology. Nature Medicine, pages 1–12, 2025.
- [63] L. Yardley, N. Beyer, K. Hauer, G. Kempen, C. Piot-Ziegler, and C. Todd. Development and initial validation of the falls efficacy scale-international (FES-I). Age and Ageing, 34(6):614–619, 2005.
- [64] V. Ye, G. Pavlakos, J. Malik, and A. Kanazawa. Decoupling human and camera motion from videos in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
- [65] C. Zheng, W. Wu, C. Chen, et al. Deep learning-based human pose estimation: A survey. ACM Computing Surveys, 56(1):1–37, 2023.
- [66] D. Zhu, J. Zhao, T. Wu, B. Zhu, M. Wang, and T. Han. Effects of a computer vision-based exercise application for people with knee osteoarthritis: Randomized controlled trial. JMIR mHealth and uHealth, 13(1):e63022, 2025.
Appendix excerpts
- Population Characteristics. The sample consisted of community-dwelling older adults. Participants were recruited within Orlando, Florida using various strategies, including flyers, word-of-mouth, and collaboration with community partners. The inclusion criteria were that participants must be aged ≥60 years, be able to walk (with or without assistive devic...
- Specifics on GVHMR: comparisons to previous methods. Skeleton-based pose estimation methods such as OpenPose [9] and AlphaPose extract two-dimensional pixel-level joint positions from RGB images, which are fundamentally ill-suited for gait analysis; depth is unrecoverable from a single 2D projection, and measures such as step length and step width are conflated with camera perspective, rendering them neither metrically accurate nor view-invariant. While 3D HMR methods can recon...
- Gait Event Detection: transition duration computation, Sit-to-Stand and Stand-to-Sit (STS). To extract STS and turning durations from monocular video, we first computed a set of biomechanically meaningful signals from GVHMR joint trajectories. Hip and shoulder midpoints were used to derive vertical and anterior-posterior motion, and a trunk a...
- LME Models. Linear mixed effects (LME) models were utilized in this study to account for the repeated-measures structure of the data: Y_ij = β_0 + x_ij·β + b_0i + ε_ij, where Y_ij is the outcome for participant i on trial j, β_0 is the fixed intercept, x_ij·β represents the fixed effects (fall risk factors and age), b_0i is the random intercept of participant i, capturing individual-level differences, and ε_ij represents the error residuals.
- Limitations. This pilot study is limited by a relatively small sample of 207 videos from 52 older adults from an ongoing study, which may constrain generalizability. Nevertheless, the dense within-participant data, spanning multiple trials and complementary measures including insole-derived step times, self-rated fall risk, fear of falling, and postural ...
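The random-intercept structure described in the LME excerpt can be reproduced in a few lines. Below is a sketch with synthetic data using `mixedlm` from statsmodels, a Python analogue of the lme4 package the paper cites; the sample sizes, effect sizes, and noise scales are invented for illustration, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 30, 4
subj = np.repeat(np.arange(n_subj), n_trials)        # participant id per trial
b0 = rng.normal(scale=0.04, size=n_subj)[subj]       # random intercepts b_0i
fall_risk = rng.integers(0, 5, size=n_subj)[subj].astype(float)
# Y_ij = beta_0 + x_ij * beta + b_0i + eps_ij  (invented coefficients)
step_len = 0.65 - 0.04 * fall_risk + b0 + rng.normal(scale=0.03, size=subj.size)
df = pd.DataFrame({"subj": subj, "fall_risk": fall_risk, "step_len": step_len})

# Random-intercept LME: step_len ~ fall_risk + (1 | subj)
m = smf.mixedlm("step_len ~ fall_risk", df, groups=df["subj"]).fit()
beta_fall_risk = float(m.params["fall_risk"])        # negative by construction
```

Grouping on participant id is what separates between-person variation (the random intercept, reported as "Group Var") from the within-person trial-to-trial noise the fixed effect is tested against.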