Cross-Modal Generation: From Commodity WiFi to High-Fidelity mmWave and RFID Sensing
Pith reviewed 2026-05-10 08:10 UTC · model grok-4.3
The pith
RF-CMG generates high-fidelity mmWave and RFID signals from abundant WiFi data by decoupling diffusion-based generation into high-frequency guidance, learned from scarce target-modality data, and low-frequency physical constraints, inherited from the source modality.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
RF-CMG is a diffusion-based cross-modal generative method that leverages data-rich WiFi signals to synthesize high-fidelity RF data for scarce modalities including mmWave and RFID. The key insight is to decouple cross-modal generation into high-frequency guidance and low-frequency constraint, which respectively learn the high-frequency distribution from limited target-modality data and preserve the underlying physical structure via low-frequency constraints during generation. It introduces a Modality-Guided Embedding module to steer the reverse diffusion trajectory toward the target high-frequency distribution, and a Low-Frequency Modality Consistency module to progressively enforce low-frequency constraints, suppressing the accumulation of source-modality structural biases during inference.
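The decoupling the claim describes rests on splitting a signal into complementary low- and high-frequency parts. A minimal numpy sketch of that split on a 2-D signal (e.g., a spectrogram); the radial mask and cutoff fraction are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Hypothetical sketch: split a 2-D signal (e.g., a WiFi CSI spectrogram)
# into low- and high-frequency components with a radial mask in Fourier
# space. The mask shape and cutoff are illustrative, not from the paper.
def split_frequencies(x: np.ndarray, cutoff: float = 0.1):
    """Return (low, high) components; cutoff is a fraction of Nyquist."""
    F = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    low_mask = radius <= cutoff
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * (~low_mask))))
    return low, high

x = np.random.default_rng(0).standard_normal((64, 64))
low, high = split_frequencies(x)
# The masks are complementary, so the parts reconstruct the original.
assert np.allclose(low + high, x)
```

Because the two masks partition the spectrum, nothing is lost by the split itself; the load-bearing question, as the review notes, is whether the *physics* partitions as cleanly.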
What carries the argument
RF-CMG diffusion framework with Modality-Guided Embedding (MGE) module to steer reverse diffusion toward target high-frequency distribution and Low-Frequency Modality Consistency (LFMC) module to enforce low-frequency constraints from the source modality.
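One reverse-diffusion step with the two hooks described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the denoiser, schedule, and embedding are stand-ins, and the LFMC step is modeled as a hard low-frequency projection onto the source signal:

```python
import numpy as np

# Illustrative sketch, not the authors' implementation. An MGE-style
# embedding would condition the denoiser on the target modality; an
# LFMC-style projection overwrites the sample's low frequencies with
# those of the WiFi-derived source after each reverse step.

def lowpass(x, keep=4):
    """Keep only the `keep` lowest-frequency FFT bins of a 1-D signal."""
    F = np.fft.fft(x)
    mask = np.zeros(len(F))
    mask[:keep] = 1
    mask[-(keep - 1):] = 1  # conjugate-symmetric partners of bins 1..keep-1
    return np.real(np.fft.ifft(F * mask))

def lfmc_project(x, source, keep=4):
    """Swap x's low-frequency content for the source's (toy LFMC step)."""
    return x - lowpass(x, keep) + lowpass(source, keep)

def denoiser(x, embedding):
    """Toy denoiser; a real one would condition on the MGE embedding."""
    return 0.1 * x

rng = np.random.default_rng(1)
source = np.sin(np.linspace(0, 2 * np.pi, 128))  # WiFi-like coarse structure
x = rng.standard_normal(128)                     # initial noisy sample
mge_embedding = rng.standard_normal(8)           # stand-in MGE output

for _ in range(10):
    x = x - denoiser(x, mge_embedding)  # toy reverse update
    x = lfmc_project(x, source)         # progressive low-frequency constraint

# After projection, the sample's low-frequency content matches the source's.
assert np.allclose(lowpass(x), lowpass(source))
```

The paper's "progressive" enforcement presumably blends rather than hard-projects at each step; the hard projection here just makes the constraint's effect easy to verify.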
If this is right
- RF-CMG outperforms several prevalent generative models in synthesizing RFID and mmWave signals.
- Data generated by RF-CMG improves performance in downstream gesture recognition tasks.
- The proportion of synthetic data mixed with real data affects overall downstream sensing performance.
- The decoupling approach enables high-quality target-modality generation from data-rich source modalities.
Where Pith is reading between the lines
- This cross-modal technique could lower deployment costs for mmWave and RFID systems by reducing the need for direct data collection.
- The frequency-decoupling strategy might generalize to other data-scarce wireless sensing domains such as terahertz or acoustic signals.
- Synthetic data from this method could support training of multi-modal fusion models that combine commodity WiFi with higher-resolution RF sensing.
Load-bearing premise
Separating high-frequency guidance learned from scarce target data and low-frequency constraints from the source modality will produce high-fidelity signals without accumulating structural biases or losing critical physical information during the diffusion process.
What would settle it
A direct measurement showing that RF-CMG-synthesized signals deviate from real mmWave or RFID high-frequency statistics or physical structure, or that models trained with the synthetic data fail to improve, or even degrade, gesture recognition accuracy relative to real-data baselines.
read the original abstract
AIGC has shown remarkable success in CV and NLP, and has recently demonstrated promising potential in the wireless domain. However, significant data imbalance exists across RF modalities, with abundant WiFi data but scarce mmWave and RFID data due to high acquisition cost. This makes it difficult to train high-quality generative models for these data-scarce modalities. In this work, we propose RF-CMG, a diffusion-based cross-modal generative method that leverages data-rich WiFi signals to synthesize high-fidelity RF data for scarce modalities including mmWave and RFID. The key insight of RF-CMG is to decouple cross-modal generation into high-frequency guidance and low-frequency constraint, which respectively learn high-frequency distribution from limited target modality data and preserve the underlying physical structure via low-frequency constraints during generation. On this basis, we introduce a Modality-Guided Embedding (MGE) module to steer the reverse diffusion trajectory toward the target high-frequency distribution, and a Low-Frequency Modality Consistency (LFMC) module to progressively enforce low-frequency constraints to suppress the accumulation of source-modality structural biases during inference, enabling high-quality target-modality generation. Performance comparison with several prevalent generative models demonstrates that RF-CMG achieves superior performance in synthesizing RFID and mmWave signals. We further showcase the effectiveness of the data generated by RF-CMG in gesture recognition tasks, and analyze the impact of the proportion of synthetic data on downstream performance.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes RF-CMG, a diffusion-based cross-modal generative framework that synthesizes high-fidelity mmWave and RFID signals from abundant WiFi data. It decouples generation into high-frequency guidance (learned from scarce target-modality samples via the Modality-Guided Embedding (MGE) module) and low-frequency physical-structure constraints (enforced progressively by the Low-Frequency Modality Consistency (LFMC) module). The paper reports that RF-CMG outperforms several prevalent generative models on RFID and mmWave synthesis and shows downstream utility when the synthetic data is used to augment gesture-recognition training.
Significance. If the high-fidelity claim holds under physical validation, the work would meaningfully address data scarcity in RF sensing modalities, enabling better-trained models for applications such as gesture recognition and wireless sensing. The explicit frequency-decoupling strategy within a diffusion backbone is a technically interesting contribution to cross-modal generative modeling in the wireless domain and could inspire analogous techniques for other multi-modal sensing problems.
major comments (2)
- [§4] §4 (MGE and LFMC modules): The central design choice of separating high-frequency guidance (from limited target samples) from low-frequency constraints (from WiFi) assumes that broadband RF effects (phase, multipath, Doppler) can be cleanly partitioned without loss or bias accumulation. Because these physical phenomena are inherently coupled across frequencies, the LFMC enforcement may not fully suppress diffusion artifacts when the MGE guidance is statistically under-constrained by scarce data. This assumption is load-bearing for the high-fidelity synthesis claim and requires explicit physical-consistency checks (e.g., Doppler spectrum or channel impulse response matching) beyond standard generative metrics.
- [§5] §5 (experimental evaluation): The reported superiority over baseline generative models is stated without accompanying quantitative tables, error bars, statistical tests, or ablation results on the contribution of MGE versus LFMC. In addition, no evaluation of physical fidelity (e.g., preservation of propagation characteristics) is described, which is necessary to substantiate that the generated signals are usable for downstream RF tasks.
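The physical-consistency check the first comment asks for could look like the following: compare short-time (Doppler) power spectra of real and generated signals. A pure-numpy sketch; the window and hop sizes and the symmetric-KL divergence are illustrative choices, not the paper's protocol:

```python
import numpy as np

# Sketch of a Doppler-spectrum consistency check: average short-time
# power spectra and compare them with a symmetric KL divergence.
# Window/hop sizes and the divergence measure are illustrative.

def doppler_spectrum(sig, win=64, hop=32):
    frames = [sig[i:i + win] * np.hanning(win)
              for i in range(0, len(sig) - win + 1, hop)]
    power = np.abs(np.fft.rfft(np.asarray(frames), axis=1)) ** 2
    spec = power.mean(axis=0)
    return spec / spec.sum()  # normalized average spectrum

def spectral_divergence(a, b, eps=1e-12):
    """Symmetric KL divergence between two normalized spectra."""
    a, b = a + eps, b + eps
    return 0.5 * (np.sum(a * np.log(a / b)) + np.sum(b * np.log(b / a)))

rng = np.random.default_rng(2)
t = np.arange(1024)
real = np.sin(0.2 * t) + 0.1 * rng.standard_normal(1024)
good = np.sin(0.2 * t) + 0.1 * rng.standard_normal(1024)  # matched Doppler
bad = np.sin(0.8 * t) + 0.1 * rng.standard_normal(1024)   # shifted Doppler

# A matched synthesis should sit far closer to the real spectrum.
assert spectral_divergence(doppler_spectrum(real), doppler_spectrum(good)) < \
       spectral_divergence(doppler_spectrum(real), doppler_spectrum(bad))
```

A channel-impulse-response check would follow the same pattern with an inverse FFT of the frequency response in place of the STFT.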
minor comments (3)
- [Abstract] The abstract asserts 'superior performance' and 'effectiveness' without any numeric results; adding at least the key quantitative metrics (e.g., FID, PSNR, or downstream accuracy deltas) would improve readability.
- [§3] Notation for the diffusion reverse process, MGE embedding, and LFMC loss terms should be introduced with explicit equations in §3 to aid reproducibility.
- [§6] The discussion of limitations (e.g., sensitivity to the amount of target data or domain shift between WiFi and mmWave hardware) is brief; expanding it would strengthen the manuscript.
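Of the metrics the first minor comment names, PSNR is the simplest to pin down numerically; a minimal sketch (the peak value of 1.0 assumes normalized spectrograms, which is an assumption, not stated in the paper):

```python
import numpy as np

# Standard PSNR between a real and a generated array (e.g., spectrograms
# normalized to [0, 1]; the peak value is an assumption).
def psnr(real: np.ndarray, generated: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((real - generated) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

real = np.linspace(0, 1, 100).reshape(10, 10)
gen = real + 0.01  # uniform 0.01 error -> MSE = 1e-4 -> 40 dB
assert abs(psnr(real, gen) - 40.0) < 1e-6
```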
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address each major comment below, providing our rationale and indicating the revisions we will incorporate to strengthen the manuscript.
read point-by-point responses
- Referee: [§4] §4 (MGE and LFMC modules): The central design choice of separating high-frequency guidance (from limited target samples) from low-frequency constraints (from WiFi) assumes that broadband RF effects (phase, multipath, Doppler) can be cleanly partitioned without loss or bias accumulation. Because these physical phenomena are inherently coupled across frequencies, the LFMC enforcement may not fully suppress diffusion artifacts when the MGE guidance is statistically under-constrained by scarce data. This assumption is load-bearing for the high-fidelity synthesis claim and requires explicit physical-consistency checks (e.g., Doppler spectrum or channel impulse response matching) beyond standard generative metrics.
Authors: We appreciate the referee's point on the inherent coupling of RF physical effects. Our decoupling strategy is grounded in the observation that low-frequency components primarily encode large-scale propagation structure (e.g., dominant paths and bulk Doppler) that can be robustly inherited from WiFi, while high-frequency components encode modality-specific fine details learned by MGE from scarce target samples. LFMC then applies progressive low-frequency alignment during the reverse diffusion process to reduce structural bias accumulation. While we believe this separation is effective in practice, we agree that explicit physical validation is important. In the revised manuscript we will add direct comparisons of Doppler spectra and channel impulse responses between generated and real mmWave/RFID signals to quantify any residual artifacts and further support the high-fidelity claim. revision: yes
- Referee: [§5] §5 (experimental evaluation): The reported superiority over baseline generative models is stated without accompanying quantitative tables, error bars, statistical tests, or ablation results on the contribution of MGE versus LFMC. In addition, no evaluation of physical fidelity (e.g., preservation of propagation characteristics) is described, which is necessary to substantiate that the generated signals are usable for downstream RF tasks.
Authors: We acknowledge that the current presentation relies primarily on figures for quantitative comparisons, which limits precise interpretation. We will revise the manuscript to include a new table reporting mean and standard-deviation values for all metrics (FID, PSNR, SSIM, etc.), with error bars added to the relevant figures and statistical significance tests (paired t-tests) to confirm differences from baselines. We will also add an ablation study isolating the contributions of MGE and LFMC. As noted in our response to §4, we will further incorporate physical-fidelity evaluations (Doppler spectrum and CIR matching) to demonstrate that the synthesized signals preserve propagation characteristics and are suitable for downstream tasks such as gesture recognition. revision: yes
Circularity Check
No circularity: the derivation introduces independent MGE and LFMC modules, and their performance claims rest on external comparisons rather than on self-definition or fitted inputs.
full rationale
The paper proposes RF-CMG as a new diffusion-based architecture that decouples high-frequency guidance (via MGE learned from scarce target data) from low-frequency constraints (via LFMC from WiFi). No equations or claims reduce the target high-fidelity synthesis to a renaming of inputs, a fitted parameter relabeled as prediction, or a self-citation chain. Performance superiority is asserted via comparison to external generative models on downstream tasks; the central insight and modules are presented as novel contributions without load-bearing reliance on prior self-authored uniqueness theorems or ansatzes. This is the common case of an independent methodological proposal.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: the diffusion reverse process can be steered by modality-specific embeddings without violating the underlying signal physics.