5 Pith papers cite this work.
5 representative citing papers (2026):
-
Physiology-Aware Masked Cross-Modal Reconstruction for Biosignal Representation Learning
xMAE pretrains biosignal representations via masked cross-modal reconstruction of temporally ordered signals like ECG and PPG, outperforming baselines on 15 of 19 downstream tasks including cardiovascular prediction and sleep staging.
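The masked cross-modal reconstruction objective described above can be illustrated with a toy sketch: patches of one modality (ECG) are hidden, and a model predicts them from the temporally aligned paired modality (PPG). The patch length, the linear "decoder" `W`, and the synthetic ECG/PPG pair below are illustrative stand-ins, not the paper's actual architecture.

```python
# Hypothetical sketch of masked cross-modal reconstruction pretraining
# (illustrative only; not xMAE's real encoder/decoder).
import numpy as np

rng = np.random.default_rng(0)

def patchify(signal, patch_len):
    """Split a 1-D signal into non-overlapping patches."""
    n = len(signal) // patch_len
    return signal[: n * patch_len].reshape(n, patch_len)

def masked_reconstruction_loss(ecg, ppg, patch_len=25, mask_ratio=0.5, W=None):
    """MSE on masked ECG patches predicted from the aligned PPG patches."""
    ecg_p, ppg_p = patchify(ecg, patch_len), patchify(ppg, patch_len)
    masked = rng.choice(len(ecg_p), size=int(mask_ratio * len(ecg_p)),
                        replace=False)
    if W is None:                 # stand-in for a learned decoder
        W = np.eye(patch_len)
    pred = ppg_p[masked] @ W      # predict hidden ECG patches from PPG
    return float(np.mean((pred - ecg_p[masked]) ** 2))

# Toy paired biosignals: PPG modeled as a phase-shifted ECG surrogate.
t = np.linspace(0, 10, 1000)
ecg = np.sin(2 * np.pi * 1.2 * t)
ppg = np.sin(2 * np.pi * 1.2 * (t - 0.2))
loss = masked_reconstruction_loss(ecg, ppg)
```

During pretraining, gradients on this loss would update the encoder/decoder so that one modality's representation carries enough information to reconstruct the other.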
-
Biosignal Fingerprinting: A Cross-Modal PPG-ECG Foundation Model
A cross-modal masked autoencoder creates reusable biosignal fingerprints that match or exceed specialist models on seven cardiovascular tasks using only single-modality input.
-
Membership Inference Attacks Expose Participation Privacy in ECG Foundation Encoders
Membership inference attacks can detect whether specific ECG data participated in pretraining self-supervised foundation encoders, with leakage strongest in small cohorts and contrastive models.
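A minimal loss-threshold attack, a common membership-inference baseline, sketches the idea: records the model reconstructs with unusually low error are flagged as likely pretraining members. The Gaussian toy data and the `infer_membership` helper are hypothetical, not the paper's actual attack.

```python
# Hypothetical loss-threshold membership-inference sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def reconstruction_error(model_mean, x):
    """Stand-in for an encoder's reconstruction error on one record."""
    return float(np.mean((x - model_mean) ** 2))

def infer_membership(errors, threshold):
    """Flag records whose error falls below a calibrated threshold."""
    return [e < threshold for e in errors]

# Toy setup: 'members' lie close to the model's fitted mean; non-members don't.
model_mean = np.zeros(50)
members = [rng.normal(0, 0.1, 50) for _ in range(20)]      # seen in training
non_members = [rng.normal(0, 1.0, 50) for _ in range(20)]  # unseen

errs = [reconstruction_error(model_mean, x) for x in members + non_members]
threshold = float(np.median(errs))
flags = infer_membership(errs, threshold)
tpr = sum(flags[:20]) / 20   # fraction of true members correctly flagged
```

The paper's finding that leakage is strongest in small cohorts is intuitive under this framing: with fewer pretraining records, each member's error signature is more distinguishable from the population.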
-
PRISM-CTG: A Foundation Model for Cardiotocography Analysis with Multi-View SSL
PRISM-CTG is the first large-scale foundation model for cardiotocography that uses multi-view self-supervised learning on unlabeled data to learn transferable representations, outperforming baselines on seven downstream tasks with external validation.
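The multi-view idea, that two augmentations of the same unlabeled trace should agree in embedding space, can be sketched as follows; the jitter/scaling augmentations and the linear encoder are illustrative assumptions, not PRISM-CTG's actual design.

```python
# Hypothetical multi-view self-supervised sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def augment(x):
    """Random amplitude scaling + jitter, two common time-series views."""
    return x * rng.uniform(0.8, 1.2) + rng.normal(0, 0.05, len(x))

def embed(x, W):
    """Toy linear encoder standing in for a learned network."""
    return W @ x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.sin(np.linspace(0, 6 * np.pi, 240))    # toy fetal-heart-rate trace
W = rng.normal(size=(16, 240)) / np.sqrt(240)
v1, v2 = embed(augment(x), W), embed(augment(x), W)
agreement = cosine(v1, v2)   # training would maximize this across view pairs
```

Maximizing agreement between views of the same trace (while keeping distinct traces apart) is what lets the learned representations transfer to downstream tasks without labels.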
-
Foundation Models Defining A New Era In Sensor-based Human Activity Recognition: A Survey And Outlook
The survey organizes foundation models for sensor-based HAR into a lifecycle taxonomy and identifies three trajectories: HAR-specific models from scratch, adaptation of general time-series models, and integration with large language models.