SMCNet: Supervised Surface Material Classification Using mmWave Radar IQ Signals and Complex-valued CNNs
Recognition: 2 theorem links
Pith reviewed 2026-05-10 18:12 UTC · model grok-4.3
The pith
A complex-valued CNN fed mmWave radar IQ signals classifies indoor surface materials at roughly 99 percent overall accuracy, and range FFT pre-processing lifts accuracy at distances unseen in training from about 25 to 59 percent without retraining.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
SMCNet processes complex-valued mmWave MIMO FMCW radar IQ signals through a complex-valued CNN to classify indoor surface materials. When trained on measurements from three sensing distances and evaluated on those distances plus two unseen ones, the network reaches 99.12 to 99.53 percent overall accuracy. Applying range FFT pre-processing before the network improves accuracy on the unseen distances from 25.25 percent to 58.81 percent without any retraining or distance normalization.
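The paper's complex layers are not spelled out in the material quoted here, but a standard construction (assumed, not confirmed by the paper) builds a complex convolution from four real convolutions via (a+ib)⊛(c+id) = (a⊛c − b⊛d) + i(a⊛d + b⊛c), and a common complex activation (CReLU) applies ReLU to real and imaginary parts independently. A minimal 1-D numpy sketch; function names and sizes are illustrative:

```python
import numpy as np

def complex_conv1d(x, w):
    """1-D complex convolution built from four real convolutions:
    (a+ib) * (c+id) = (ac - bd) + i(ad + bc)."""
    a, b = x.real, x.imag
    c, d = w.real, w.imag
    real = np.convolve(a, c, mode="valid") - np.convolve(b, d, mode="valid")
    imag = np.convolve(a, d, mode="valid") + np.convolve(b, c, mode="valid")
    return real + 1j * imag

def complex_relu(z):
    """CReLU: ReLU applied to real and imaginary parts independently,
    one common complex activation (the paper's exact choice is not stated)."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

x = np.array([1 + 1j, 2 - 1j, 0 + 2j])
w = np.array([1 - 1j])  # single-tap kernel: output equals x * (1 - 1j)
assert np.allclose(complex_conv1d(x, w), x * (1 - 1j))
assert complex_relu(np.array([-1 - 1j]))[0] == 0
```

The single-tap kernel makes the check easy to do by hand: convolving with one complex tap is elementwise complex multiplication.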
What carries the argument
The complex-valued CNN that ingests raw radar IQ signals directly, with range FFT pre-processing applied to mitigate distance-dependent effects in the reflected signals.
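The range-FFT step can be illustrated with a synthetic FMCW beat signal: a reflector at a given distance appears as a single fast-time frequency, and the FFT turns it into a peak at the corresponding range bin. Chirp length and bin index below are illustrative, not the paper's sensor parameters:

```python
import numpy as np

def range_fft(iq_chirp, window=True):
    """Turn one FMCW chirp of complex IQ samples into a range profile.
    After the FFT, a reflector's distance shows up as the position of a
    peak rather than as a phase ramp across raw samples, which is why
    this step can help a network generalize across sensing distances."""
    x = np.asarray(iq_chirp, dtype=complex)
    if window:
        x = x * np.hanning(len(x))  # taper to suppress range sidelobes
    return np.fft.fft(x)

# Synthetic beat signal: one reflector <-> one fast-time frequency.
n, beat_bin = 256, 40  # illustrative chirp length and range bin
t = np.arange(n)
iq = np.exp(2j * np.pi * beat_bin * t / n)
profile = np.abs(range_fft(iq, window=False))
assert int(np.argmax(profile)) == beat_bin
```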
If this is right
- Indoor robots can obtain material labels from a single mmWave sensor without separate vision or contact hardware.
- Digital twins of rooms can incorporate material properties measured at varying sensor heights or positions.
- Systems trained once can operate across a range of distances without repeated data collection or model updates.
- Pre-processing the range dimension of radar data becomes a practical step for any distance-varying radar classification task.
Where Pith is reading between the lines
- The same radar-plus-complex-network approach could be tested on outdoor surfaces where distance variation is larger.
- Combining the radar output with other cheap sensors might raise accuracy on the hardest unseen-distance cases above the reported 58 percent.
- If the material features prove stable across different radar hardware, the method could transfer to new devices without full retraining.
Load-bearing premise
The reflected mmWave IQ signals contain material-specific features that remain sufficiently consistent and distinguishable across different sensing distances, allowing the complex-valued CNN to generalize without retraining or explicit distance normalization.
What would settle it
Collect new radar measurements of the same materials at additional untrained distances; if overall accuracy falls below 50 percent even after range FFT pre-processing, the claim of robust generalization is falsified.
Original abstract
Understanding surface material properties is crucial for enhancing indoor robot perception and indoor digital twinning. However, not all sensor modalities typically employed for this task are capable of reliably capturing detailed surface material characteristics. By analyzing the reflected RF signal from a mmWave radar sensor, it is possible to extract information about the reflective material and its composition from a certain surface. We introduce a mmWave MIMO FMCW radar-based surface material classifier SMCNet, employing a complex-valued Convolutional Neural Network (CNN) and complex radar IQ signal input for classifying indoor surface materials. While current radar-based material estimation approaches rely on a fixed sensing distance and constrained setups, our approach incorporates a setup with multiple sensing distances. We trained SMCNet using data from three distinct distances and subsequently tested it on these distances, as well as on two more unseen distances. We reached an overall accuracy of 99.12-99.53 % on our test set. Notably, range FFT pre-processing improved accuracy on unknown distances from 25.25 % to 58.81 % without re-training.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces SMCNet, a complex-valued CNN classifier for indoor surface materials that takes mmWave MIMO FMCW radar IQ signals as input. Data are collected at five distances; the network is trained on three distances and evaluated on both the training distances and two held-out distances. The abstract reports overall test-set accuracies of 99.12–99.53 % and states that range-FFT preprocessing raises accuracy on the unseen distances from 25.25 % to 58.81 % without retraining.
Significance. If the empirical results are reproducible, the work offers a concrete demonstration that complex-valued networks can exploit radar IQ data for material discrimination and that a simple range-FFT step partially mitigates distance variation. The explicit multi-distance protocol and the numerical contrast between seen and unseen distances constitute a falsifiable contribution that could inform future radar-based perception pipelines for robotics.
major comments (2)
- [Abstract] The headline claim of 99.12–99.53 % overall accuracy is driven by performance on the three training distances; the 58.81 % figure on the two unseen distances remains well below the seen-distance level, indicating that residual distance-dependent effects (attenuation, phase progression, or multipath) are still present after range-FFT preprocessing and are being exploited by the network on seen data.
- [Abstract] No information is supplied on the number of material classes, total number of samples, train/test split protocol, or sensor configuration (carrier frequency, bandwidth, antenna array size, or surface-angle variation). These omissions make it impossible to judge whether the reported accuracies are statistically reliable or whether the unseen-distance test truly isolates material signatures.
minor comments (1)
- [Abstract] The abstract states that the network was “subsequently tested … on two more unseen distances” but does not clarify whether the two unseen distances were drawn from the same physical surfaces or whether new surfaces were introduced; this distinction affects the interpretation of generalization.
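The statistical-reliability objection is easy to make concrete: without a reported sample count, the uncertainty on 99.12 % cannot be assessed. A minimal sketch using the Wilson score interval, with a test-set size of 1,000 as a purely hypothetical assumption:

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """95 % Wilson score interval for a binomial proportion (accuracy)."""
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical test-set size; the material quoted here reports none.
lo, hi = wilson_interval(0.9912, 1000)
assert 0 < lo < 0.9912 < hi < 1
```

Even at n = 1,000 the interval spans roughly a percentage point, so with a much smaller test set the reported 99.12–99.53 % spread would be uninformative.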
Simulated Author's Rebuttal
We thank the referee for the constructive comments and the recommendation for major revision. We address each point below and have revised the manuscript to improve clarity and completeness.
Point-by-point responses
- Referee: [Abstract] The headline claim of 99.12–99.53 % overall accuracy is driven by performance on the three training distances; the 58.81 % figure on the two unseen distances remains well below the seen-distance level, indicating that residual distance-dependent effects (attenuation, phase progression, or multipath) are still present after range-FFT preprocessing and are being exploited by the network on seen data.
Authors: We agree that the overall accuracy is driven by strong performance on the three training distances. The manuscript's primary contribution is the explicit multi-distance protocol and the demonstration that range-FFT preprocessing yields a measurable improvement on unseen distances (from 25.25 % to 58.81 %). To make the distinction transparent, we have revised the abstract to report accuracies separately for seen and unseen distances rather than only the aggregate figure. This change acknowledges the residual distance-dependent effects while preserving the reported improvement. Revision: yes.
- Referee: [Abstract] No information is supplied on the number of material classes, total number of samples, train/test split protocol, or sensor configuration (carrier frequency, bandwidth, antenna array size, or surface-angle variation). These omissions make it impossible to judge whether the reported accuracies are statistically reliable or whether the unseen-distance test truly isolates material signatures.
Authors: The experimental setup and dataset sections of the manuscript contain the requested details on material classes, sample counts, train/test split, sensor parameters, and collection protocol (including fixed surface orientation). We acknowledge that these elements were not summarized in the abstract, which limits immediate assessment. We have therefore revised the abstract to include a concise statement of the number of classes, dataset scale, split protocol, and key sensor specifications. This ensures readers can evaluate statistical reliability and confirm that the unseen-distance test isolates material signatures under the stated conditions. Revision: yes.
Circularity Check
No circularity: standard supervised empirical ML evaluation on held-out radar data
Full rationale
The manuscript describes data collection at five distances, training a complex-valued CNN on IQ signals (or range-FFT preprocessed versions) from three distances, and reporting test accuracies on both seen and unseen distances. All performance numbers are direct empirical outcomes of train/test splits; no equations, predictions, or claims reduce by construction to fitted parameters, self-defined quantities, or self-citation chains. The central result (accuracy figures) is externally falsifiable via replication on the same sensor setup and is not derived from any internal ansatz or uniqueness theorem. This is a self-contained supervised classification study whose validity rests on experimental protocol rather than any circular derivation.
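The multi-distance protocol described in this rationale can be sketched in a few lines. The class count, distances, and sample counts below are placeholders, since none are given in the text above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: the paper's class count and distances are not
# reported in the material quoted here, so these are placeholders.
materials = range(4)                   # hypothetical class count
distances = (0.3, 0.5, 0.7, 0.9, 1.1)  # metres, hypothetical
train_distances = {0.3, 0.7, 1.1}      # three "seen" distances

samples = [(rng.standard_normal(8), m, d)
           for m in materials for d in distances for _ in range(10)]

# Split by distance, not randomly: the unseen-distance set never overlaps
# the training distances, which is what makes the 25 % -> 59 % comparison
# an out-of-distribution test rather than ordinary held-out accuracy.
train = [(x, m) for x, m, d in samples if d in train_distances]
unseen = [(x, m) for x, m, d in samples if d not in train_distances]

assert len(train) == 4 * 3 * 10 and len(unseen) == 4 * 2 * 10
```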
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · link status: unclear · matched text: "We introduce a mmWave MIMO FMCW radar-based surface material classifier SMCNet, employing a complex-valued Convolutional Neural Network (CNN) and complex radar IQ signal input... range FFT pre-processing improved accuracy on unknown distances from 25.25 % to 58.81 %"
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · reality_from_one_distinction · link status: unclear · matched text: "SMCNet structure for processing complex radar signals... complex convolutional layers... complex ReLU"