arXiv preprint arXiv:2504.19596
2 Pith papers cite this work. Polarity classification is still indexing.
Citing years: 2026. Verdicts: 2, both unverdicted.
Citing papers
- Compact Latent Manifold Translation: A Parameter-Efficient Foundation Model for Cross-Modal and Cross-Frequency Physiological Signal Synthesis
  A compact 0.09B model using hierarchical discrete tokenization and prompted latent translation outperforms larger baselines in cross-modal PPG-to-ECG synthesis and cross-frequency super-resolution.
- Sense Less, Infer More: Agentic Multimodal Transformers for Edge Medical Intelligence
  AMI reduces sensor usage by 48.8% and improves accuracy by 1.9% on average across three medical datasets by jointly learning when to sense and how to infer from multimodal physiological signals.