Recognition: unknown
Light-ResKAN: A Parameter-Sharing Lightweight KAN with Gram Polynomials for Efficient SAR Image Recognition
Pith reviewed 2026-05-13 22:17 UTC · model grok-4.3
The pith
Light-ResKAN reaches 99.09% accuracy on SAR image datasets while cutting FLOPs by 82.9 times and parameters by 163.8 times compared to VGG16.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Light-ResKAN modifies the ResNet backbone by substituting standard convolutions with KAN convolutions that use Gram polynomials as activations and applies a per-channel parameter-sharing scheme inside each kernel. On the MSTAR, FUSAR-Ship, and SAR-ACD benchmarks the resulting model records 99.09 percent, 93.01 percent, and 97.26 percent accuracy respectively. When tested on 1024-by-1024 MSTAR images the same architecture reduces floating-point operations by a factor of 82.90 and trainable parameters by a factor of 163.78 relative to VGG16, while still preserving sufficient feature diversity for the reported classification performance.
What carries the argument
KAN convolution layers that replace fixed activations with Gram polynomials and enforce per-channel parameter sharing inside each kernel, allowing adaptive non-linear feature extraction at greatly reduced parameter count.
If this is right
- SAR image classification at high accuracy becomes feasible on power- and memory-limited edge processors without cloud offloading.
- The same Gram-polynomial KAN blocks can be inserted into other residual architectures to shrink model size for any large-resolution imagery task.
- Per-channel sharing reduces redundancy without collapsing channel-specific information needed for distinguishing SAR targets.
- The approach scales to 1024-by-1024 inputs while still delivering the stated order-of-magnitude savings in compute and storage.
- Direct comparison on three public SAR benchmarks shows the method outperforms prior lightweight CNN baselines in the reported accuracy-efficiency trade-off.
Where Pith is reading between the lines
- The architecture could be tested on multi-temporal or multi-polarization SAR stacks to check whether the same efficiency gains hold when input dimensionality increases.
- Replacing Gram polynomials with other orthogonal polynomial bases inside the KAN layers offers a simple ablation that would isolate the contribution of the chosen activation family.
- Quantizing the already-reduced weights after training would be a natural next step to push the model further toward ultra-low-power microcontrollers.
- The parameter-sharing pattern may generalize to transformer-style attention blocks, potentially yielding lightweight hybrid models for video SAR streams.
Load-bearing premise
Gram-polynomial activations together with per-channel parameter sharing will keep enough feature diversity in the network to match or exceed the accuracy of full-parameter CNNs on SAR imagery.
What would settle it
Measuring accuracy on the same MSTAR, FUSAR-Ship, or SAR-ACD splits after replacing the Gram-polynomial KAN layers with ordinary convolutions of matched capacity and confirming that accuracy rises substantially while the claimed FLOPs and parameter reductions disappear.
Figures
read the original abstract
Synthetic Aperture Radar (SAR) image recognition is vital for disaster monitoring, military reconnaissance, and ocean observation. However, large SAR image sizes hinder deep learning deployment on resource-constrained edge devices, and existing lightweight models struggle to balance high-precision feature extraction with low computational requirements. The emerging Kolmogorov-Arnold Network (KAN) enhances fitting by replacing fixed activations with learnable ones, reducing parameters and computation. Inspired by KAN, we propose Light-ResKAN to achieve a better balance between precision and efficiency. First, Light-ResKAN modifies ResNet by replacing convolutions with KAN convolutions, enabling adaptive feature extraction for SAR images. Second, we use Gram Polynomials as activations, which are well-suited for SAR data to capture complex non-linear relationships. Third, we employ a parameter-sharing strategy: each kernel shares parameters per channel, preserving unique features while reducing parameters and FLOPs. Our model achieves 99.09%, 93.01%, and 97.26% accuracy on MSTAR, FUSAR-Ship, and SAR-ACD datasets, respectively. Experiments on MSTAR resized to $1024 \times 1024$ show that compared to VGG16, our model reduces FLOPs by $82.90 \times$ and parameters by $163.78 \times$. This work establishes an efficient solution for edge SAR image recognition.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces Light-ResKAN, a parameter-efficient modification of ResNet for SAR image recognition. It replaces convolutional layers with KAN convolutions that use Gram polynomials as activations and applies per-channel parameter sharing within kernels. The paper claims state-of-the-art accuracies of 99.09% on MSTAR, 93.01% on FUSAR-Ship, and 97.26% on SAR-ACD, while demonstrating 82.90× reduction in FLOPs and 163.78× in parameters compared to VGG16 on 1024×1024 images.
Significance. Should the empirical results prove robust, the approach represents a meaningful advance in lightweight deep learning for SAR imagery, potentially enabling real-time processing on edge hardware for applications like disaster monitoring and reconnaissance. The combination of KANs with Gram polynomials and sharing strategy offers a new direction for balancing expressivity and efficiency in convolutional networks.
major comments (3)
- Abstract: The central performance claims (99.09% accuracy on MSTAR, 82.90× FLOPs reduction vs. VGG16) rest on unreviewed empirical results with no training details, ablation studies, or error bars provided, undermining assessment of the accuracy-efficiency trade-off.
- Method (parameter-sharing description): The assertion that per-channel sharing 'preserves unique features while reducing parameters' is load-bearing for the efficiency claims, yet no analysis, visualization of learned functions, or ablation (shared vs. independent KAN activations per channel) is given to confirm feature diversity is maintained for SAR textures.
- Experiments: No ablation studies compare Gram polynomials to standard KAN activations or to baseline ResNet, and no details on how VGG16 was trained/adapted on the 1024×1024 resized MSTAR data are supplied, making the reported 163.78× parameter reduction difficult to interpret.
minor comments (1)
- Abstract: Dataset names (MSTAR, FUSAR-Ship, SAR-ACD) appear without citations or brief descriptions; adding these would improve clarity for readers outside the SAR community.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment point by point below, providing clarifications from the manuscript where applicable and committing to revisions that strengthen the presentation of our results and methods.
read point-by-point responses
-
Referee: Abstract: The central performance claims (99.09% accuracy on MSTAR, 82.90× FLOPs reduction vs. VGG16) rest on unreviewed empirical results with no training details, ablation studies, or error bars provided, undermining assessment of the accuracy-efficiency trade-off.
Authors: The full training details, hyperparameters, data preprocessing steps, and ablation studies are described in Section 4 (Experiments) and the supplementary material. To make these more immediately accessible, we will revise the abstract to include a concise reference to the experimental protocol and report error bars as standard deviations computed over five independent runs. This will improve transparency without altering the core claims. revision: yes
-
Referee: Method (parameter-sharing description): The assertion that per-channel sharing 'preserves unique features while reducing parameters' is load-bearing for the efficiency claims, yet no analysis, visualization of learned functions, or ablation (shared vs. independent KAN activations per channel) is given to confirm feature diversity is maintained for SAR textures.
Authors: We agree that direct empirical support for the parameter-sharing mechanism would strengthen the paper. In the revised manuscript we will add an ablation comparing per-channel shared KAN activations against fully independent activations per channel, together with visualizations of the learned Gram polynomial basis functions across channels. These additions will demonstrate that feature diversity for SAR textures is retained while parameters and FLOPs are reduced. revision: yes
-
Referee: Experiments: No ablation studies compare Gram polynomials to standard KAN activations or to baseline ResNet, and no details on how VGG16 was trained/adapted on the 1024×1024 resized MSTAR data are supplied, making the reported 163.78× parameter reduction difficult to interpret.
Authors: We will expand Section 4 to include two new ablation studies: (i) Gram polynomials versus standard KAN activations (B-splines) on MSTAR, and (ii) Light-ResKAN versus the unmodified ResNet baseline. We will also supply explicit implementation details for the VGG16 baseline, including the exact training schedule, adaptation to 1024×1024 inputs, and the precise method used to compute parameter and FLOP counts, ensuring the efficiency ratios are fully reproducible. revision: yes
Circularity Check
No circularity: empirical architecture validated by direct measurements
full rationale
The paper introduces Light-ResKAN as a ResNet variant that substitutes standard convolutions with KAN convolutions using Gram-polynomial activations and per-channel parameter sharing. All central claims (99.09% MSTAR accuracy, 82.90× FLOPs reduction vs. VGG16, etc.) are presented as measured experimental outcomes on fixed datasets rather than as predictions or theorems derived from the model equations. No equation is shown to equal its own fitted inputs by construction, no uniqueness theorem is invoked via self-citation, and the efficiency numbers are obtained by direct counting of parameters and operations on the implemented network. The derivation chain therefore remains self-contained and non-circular.
Axiom & Free-Parameter Ledger
free parameters (2)
- Gram polynomial degree
- Architecture depth and width
axioms (2)
- domain assumption KAN layers with learnable activations can achieve comparable or better function approximation than fixed-activation CNNs with fewer parameters
- ad hoc to paper Gram polynomials are well-suited to capture complex non-linear relationships in SAR imagery
Reference graph
Works this paper leans on
-
[1]
C. A. Wiley, “Synthetic aperture radars,”IEEE Transactions on Aerospace and Electronic Systems, vol. 3, no. 5, pp. 440–443, May 1985
work page 1985
-
[2]
Spaceborne synthetic aperture radar imaging algorithms: An overview,
G. C. Sun, Y . Liu, J. Xiang, and et al., “Spaceborne synthetic aperture radar imaging algorithms: An overview,”IEEE Geoscience and Remote Sensing Magazine, vol. 10, no. 1, pp. 161–184, 2021
work page 2021
-
[3]
Cloudseg: A multi- modal learning framework for robust land cover mapping under cloudy conditions,
F. Xu, Y . Shi, W. Yang, G.-S. Xia, and X. X. Zhu, “Cloudseg: A multi- modal learning framework for robust land cover mapping under cloudy conditions,”ISPRS Journal of Photogrammetry and Remote Sensing, vol. 214, pp. 21–32, 2024
work page 2024
-
[4]
High frame rate along-track swarm sar sub-aperture collaboration imaging for moving target,
N. Jiang, J. Chen, J. Zhu, B. Liang, D. Yang, X. Huang, and M. Xing, “High frame rate along-track swarm sar sub-aperture collaboration imaging for moving target,”IEEE Transactions on Geoscience and Remote Sensing, 2025
work page 2025
-
[5]
Benchmarking deep learning classifiers for SAR automatic target recognition,
J. Fein-Ashley, T. Ye, R. Kannan, V . Prasanna, and C. Busart, “Benchmarking deep learning classifiers for SAR automatic target recognition,”arXiv, 2023
work page 2023
-
[6]
W. Li, W. Yang, T. Liu, Y . Hou, Y . Li, Z. Liu, Y . Liu, and L. Liu, “Predicting gradient is better: Exploring self-supervised learning for SAR ATR with a joint-embedding predictive architecture,”ISPRS Journal of Photogrammetry and Remote Sensing, vol. 218, pp. 326– 338, 2024. 15
work page 2024
-
[7]
Saratr-x: Toward building a foundation model for SAR target recognition,
W. Li, W. Yang, Y . Hou, L. Liu, Y . Liu, and X. Li, “Saratr-x: Toward building a foundation model for SAR target recognition,”IEEE Transactions on Image Processing, vol. 34, pp. 869–884, 2025
work page 2025
-
[8]
M. Datcu, Z. Huang, A. Anghel, J. Zhao, and R. Cacoveanu, “Explainable, physics-aware, trustworthy artificial intelligence: A paradigm shift for synthetic aperture radar,”IEEE Geoscience and Remote Sensing Magazine, vol. 11, no. 1, pp. 8–25, 2023
work page 2023
-
[9]
Deep belief networks and deep learning,
Y . Hua, J. Guo, and H. Zhao, “Deep belief networks and deep learning,” inProceedings of 2015 international conference on intelligent computing and internet of things. IEEE, 2015, pp. 1–4
work page 2015
-
[10]
Gradient-based learning applied to document recognition,
Y . Lecun, L. Bottou, Y . Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,”Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998
work page 1998
-
[11]
Blind super-resolution via meta-learning and markov chain monte carlo simulation,
J. Xia, Z. Yang, S. Li, S. Zhang, Y . Fu, D. G ¨und¨uz, and X. Li, “Blind super-resolution via meta-learning and markov chain monte carlo simulation,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 12, pp. 8139–8156, 2024
work page 2024
-
[12]
Reducing the dimensionality of data with neural networks,
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,”science, vol. 313, no. 5786, pp. 504–507, 2006
work page 2006
-
[13]
Advancing segment anything model for efficient salient object detection in remote sensing images,
J. Zhang, L. Liu, Z. Su, T. Liu, Z. Liu, and M. Pietik ¨ainen, “Advancing segment anything model for efficient salient object detection in remote sensing images,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–16, 2025
work page 2025
-
[14]
Rapid salient object detection with difference convolutional neural networks,
Z. Su, L. Liu, M. M ¨uller, J. Zhang, D. Wofk, M.-M. Cheng, and M. Pietik ¨ainen, “Rapid salient object detection with difference convolutional neural networks,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 10, pp. 9061–9077, 2025
work page 2025
-
[15]
Gradient-based learning algorithms for recurrent networks and their computational complexity,
R. J. Williams and D. Zipser, “Gradient-based learning algorithms for recurrent networks and their computational complexity,” in Backpropagation, 2013, pp. 433–486
work page 2013
-
[16]
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y . Bengio, “Generative adversarial nets,” Advances in neural information processing systems, vol. 27, 2014
work page 2014
-
[17]
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,”Advances in neural information processing systems, vol. 30, 2017
work page 2017
-
[18]
Y . Liu, B. Peng, L. Liu, and X. Li, “S 4ST: A strong, self-transferable, fast, and simple scale transformation for transferable targeted attack,” arXiv preprint arXiv:2410.13891, 2024
work page internal anchor Pith review Pith/arXiv arXiv 2024
-
[19]
A causal adjustment module for debiasing scene graph generation,
L. Liu, S. Sun, S. Zhi, F. Shi, Z. Liu, J. Heikkil ¨a, and Y . Liu, “A causal adjustment module for debiasing scene graph generation,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 5, pp. 4024–4043, 2025
work page 2025
-
[20]
Advancements in image recognition: Comparing CNNs and vision transformers,
M. Yan, “Advancements in image recognition: Comparing CNNs and vision transformers,”Applied Computing and Engineering, vol. 104, no. 1, pp. 143–149, Nov 2024
work page 2024
-
[21]
Madinet: Mamba diffusion network for SAR target detection,
J. Zhou, Y . Liu, B. Peng, L. Liu, and X. Li, “Madinet: Mamba diffusion network for SAR target detection,”IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1, 2025
work page 2025
-
[22]
Y . Zhang, X. Gao, J. Xia, W. Li, S. Zhang, L. Liu, and X. Li, “Asc-sepnet: Enhancing robust sar ground target recognition via attribute scattering center and separability dual-driven learning,”IEEE Transactions on Aerospace and Electronic Systems, vol. 61, no. 5, pp. 11 308–11 324, 2025
work page 2025
-
[23]
Diffdet4sar: Diffusion-based aircraft target detection network for SAR images,
J. Zhou, C. Xiao, B. Peng, Z. Liu, L. Liu, Y . Liu, and X. Li, “Diffdet4sar: Diffusion-based aircraft target detection network for SAR images,”IEEE Geoscience and Remote Sensing Letters, 2024
work page 2024
-
[24]
Deep learning for target recognition from SAR images,
A. El Housseini, A. Toumi, and A. Khenchaf, “Deep learning for target recognition from SAR images,” in2017 Seminar on Detection Systems Architectures and Technologies (DAT). IEEE, 2017, pp. 1–5
work page 2017
-
[25]
Fifty years of SAR automatic target recognition: The road forward,
j. Zhou, Y . Liu, L. Liu, W. Li, B. Peng, Y . Song, G. Kuang, and X. Li, “Fifty years of SAR automatic target recognition: The road forward,” arXiv preprint arXiv:2501.22159, 2025
-
[26]
A survey on deep- learning-based real-time SAR ship detection,
J. Li, J. Chen, P. Cheng, Z. Yu, L. Yu, and C. Chi, “A survey on deep- learning-based real-time SAR ship detection,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 3218–3247, 2023
work page 2023
-
[27]
Z. Zhou, D. Xu, C. Liu, and P. Wang, “Research and application of lightweight technology of convolutional neural networks in deep learning,” in2025 IEEE 6th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Apr 2025, pp. 1197–1200
work page 2025
-
[28]
Lightweight pixel difference networks for efficient visual representation learning,
Z. Su, J. Zhang, L. Wang, H. Zhang, Z. Liu, M. Pietik ¨ainen, and L. Liu, “Lightweight pixel difference networks for efficient visual representation learning,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 14 956–14 974, 2023
work page 2023
-
[29]
A dynamic kernel prior model for unsupervised blind image super- resolution,
Z. Yang, J. Xia, S. Li, X. Huang, S. Zhang, Z. Liu, Y . Fu, and Y . Liu, “A dynamic kernel prior model for unsupervised blind image super- resolution,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2024, pp. 26 046–26 056
work page 2024
-
[30]
A comprehensive survey on model compression and acceleration,
T. Choudhary, V . Mishra, A. Goswami, and J. Sarangapani, “A comprehensive survey on model compression and acceleration,” Artificial Intelligence Review, vol. 53, no. 7, pp. 5113–5155, Feb 2020
work page 2020
-
[31]
Model compression for deep neural networks: A survey,
Z. Li, H. Li, and L. Meng, “Model compression for deep neural networks: A survey,”Computers, vol. 12, no. 3, p. 60, Mar 2023
work page 2023
-
[32]
A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y . Zhu, R. Pang, V . Vasudevanet al., “Searching for mobilenetv3,” inProceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 1314–1324
work page 2019
-
[33]
Shufflenet v2: Practical guidelines for efficient cnn architecture design,
N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” inProceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 116–131
work page 2018
-
[34]
Swin transformer: Hierarchical vision transformer using shifted windows,
Z. Liu, Y . Lin, Y . Cao, H. Hu, Y . Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” inProceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10 012–10 022
work page 2021
-
[35]
J. Shao, C. Qu, J. Liet al., “A lightweight convolutional neural network based on visual attention for SAR image target classification,”Sensors, vol. 18, no. 9, p. 3039, 2018
work page 2018
-
[36]
S. Liu, W. Kong, X. Chenet al., “Multi-scale ship detection algorithm based on a lightweight neural network for spaceborne SAR images,” Remote Sensing, vol. 14, no. 5, p. 1149, 2022
work page 2022
-
[37]
KAN: Kolmogorov–arnold networks,
Z. Liu, Y . Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljacic, T. Y . Hou, and M. Tegmark, “KAN: Kolmogorov–arnold networks,” inThe Thirteenth International Conference on Learning Representations, 2025
work page 2025
-
[38]
Kolmogorov-arnold network for satellite image classification in remote sensing,
M. Cheon, “Kolmogorov-arnold network for satellite image classification in remote sensing,”ArXiv, vol. abs/2406.00600, 2024
-
[39]
Spectralkan: Kolmogorov-arnold network for hyperspectral images change detection,
Y . Wang, X. Yu, Y . Gao, J. Sha, J. Wang, L. Gao, Y . Zhang, and X. Rong, “Spectralkan: Kolmogorov-arnold network for hyperspectral images change detection,”ArXiv, vol. abs/2407.00949, 2024
-
[40]
S. T. Seydi, “Unveiling the power of wavelets: A wavelet-based kolmogorov-arnold network for hyperspectral image classification,” ArXiv, vol. abs/2406.07869, 2024
-
[41]
Kan you see it? kans and sentinel for effective and explainable crop field segmentation,
D. Rege Cambrin, E. Poeta, E. Pastor, T. Cerquitelli, E. Baralis, and P. Garza, “Kan you see it? kans and sentinel for effective and explainable crop field segmentation,” inComputer Vision – ECCV 2024 Workshops. Springer Nature Switzerland, 2025, pp. 115–131
work page 2024
-
[42]
F. Granata, S. Zhu, and F. D. Nunno, “Advanced streamflow forecasting for central european rivers: The cutting-edge kolmogorov-arnold networks compared to transformers,”Journal of Hydrology, Oct 2024
work page 2024
-
[43]
An ensemble approach using self-attention based mobilenetv2 for SAR classification,
K. Anjali, R. P. Singh, M. K. Panda, and K. Palaniappan, “An ensemble approach using self-attention based mobilenetv2 for SAR classification,” Procedia Computer Science, vol. 235, pp. 3207–3216, 2024
work page 2024
-
[44]
Mobileshuffle: An efficient CNN architecture for spaceborne SAR scene classification,
T. Xu, P. Xiao, and H. Wang, “Mobileshuffle: An efficient CNN architecture for spaceborne SAR scene classification,”IEEE Geoscience and Remote Sensing Letters, 2024
work page 2024
-
[45]
A novel CNN-based detector for ship detection based on rotatable bounding box in SAR images,
R. Yang, Z. Pan, X. Jiaet al., “A novel CNN-based detector for ship detection based on rotatable bounding box in SAR images,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 1938–1958, 2021
work page 1938
-
[46]
W. Liu, Z. Lin, G. Gaoet al., “Unsupervised SAR image change type recognition using regionally restricted pca-kmean and lightweight mobilenet,”Remote Sensing, vol. 14, no. 24, p. 6362, 2022
work page 2022
-
[47]
G. Zhao, P. Li, Z. Zhang, F. Guo, X. Huang, W. Xu, J. Wang, and J. Chen, “Towards SAR automatic target recognition: Multi-category SAR image classification based on light weight vision transformer,” in 2024 21st Annual International Conference on Privacy, Security and Trust (PST). IEEE, 2024, pp. 1–6
work page 2024
-
[48]
Towards efficient SAR ship detection: Multi-level feature fusion and lightweight network design,
W. Xu, Z. Guo, P. Huang, W. Tan, and Z. Gao, “Towards efficient SAR ship detection: Multi-level feature fusion and lightweight network design,”Remote Sensing, vol. 17, no. 15, p. 2588, Jul 2025
work page 2025
-
[49]
V . K. Rakesh, S. Mazumdar, T. Samanta, S. Pal, and A. Das, “Impact of hyperparameter optimization on the accuracy of lightweight deep learning models for real-time image classification,”arXiv preprint, 2025
work page 2025
-
[50]
J. Lee, L. Mukhanov, A. S. Molahosseini, U. Minhas, Y . Hua, J. M. del Rincon, K. Dichev, C.-H. Hong, and H. Vandierendonck, “Resource- efficient convolutional networks: A survey on model-, arithmetic-, and implementation-level techniques,”ACM Computing Surveys, vol. 56, no. 3, pp. 1–38, 2023
work page 2023
-
[51]
Kolmogorov-arnold convolutions: Design principles and empirical studies,
I. Drokin, “Kolmogorov-arnold convolutions: Design principles and empirical studies,”arXiv preprint arXiv:2407.01092, 2024. 16
-
[52]
Ms-gan: Learn to memorize scene for unpaired sar-to-optical image translation,
Z. Guo, Z. Zhang, Q. Cai, J. Liu, Y . Fan, and S. Mei, “Ms-gan: Learn to memorize scene for unpaired sar-to-optical image translation,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 11 467–11 484, 2024
work page 2024
-
[53]
X. Luo, R. Bao, Z. Liu, S. Zhu, and Q. Liu, “Super-resolution of SAR images with speckle noise based on combination of cubature kalman filter and low-rank approximation,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–14, 2023
work page 2023
-
[54]
C. Zhang, Z. Zhang, Y . Deng, Y . Zhang, M. Chong, Y . Tan, and P. Liu, “Blind super-resolution for SAR images with speckle noise based on deep learning probabilistic degradation model and SAR priors,”Remote Sensing, vol. 15, no. 2, p. 330, Jan 2023
work page 2023
-
[55]
Median robust extended local binary pattern for texture classification,
L. Liu, S. Lao, P. W. Fieguth, Y . Guo, X. Wang, and M. Pietik ¨ainen, “Median robust extended local binary pattern for texture classification,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1368–1381, 2016
work page 2016
-
[56]
Texture classification from random features,
L. Liu and P. Fieguth, “Texture classification from random features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 574–586, 2012
work page 2012
-
[57]
Standard SAR ATR evaluation experiments using the mstar public release data set,
T. D. Ross, S. W. Worrell, V . J. Velten, J. C. Mossing, and M. L. Bryant, “Standard SAR ATR evaluation experiments using the mstar public release data set,”Proc. SPIE, vol. 3370, pp. 566–573, 1998
work page 1998
-
[58]
X. Hou, W. Ao, Q. Song, J. Lai, H. Wang, and F. Xu, “Fusar-ship: Building a high-resolution sar-ais matchup dataset of gaofen-3 for ship detection and recognition,”Sci. China (Inf. Sci.), vol. 63, no. 4, pp. 40–58, 2020
work page 2020
-
[59]
X. Sun, Y . Lv, Z. Wang, and K. Fu, “Scan: Scattering characteristics analysis network for few-shot aircraft classification in high-resolution SAR images,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–17, 2022
work page 2022
-
[60]
Imagenet classification with deep convolutional neural networks,
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,”Advances in neural information processing systems, vol. 25, 2012
work page 2012
-
[61]
Very deep convolutional networks for large-scale image recognition,
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” inProc. 3rd Int. Conf. Learn. Representations, 2015, pp. 1–14
work page 2015
-
[62]
Deep residual learning for image recognition,
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” inProc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778
work page 2016
-
[63]
Z. Liu, H. Mao, C.-Y . Wu, C. Feichtenhofer, T. Darrell, and S. Xie, “A convnet for the 2020s,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 11 976–11 986
work page 2022
-
[64]
A remote sensing method for crop mapping based on multiscale neighborhood feature extraction,
Y . Wu, Y . Wu, B. Wang, and H. Yang, “A remote sensing method for crop mapping based on multiscale neighborhood feature extraction,”Remote Sensing, vol. 15, no. 1, p. 47, 2022
work page 2022
-
[65]
S. K. Roy, A. Sukul, A. Jamali, J. M. Haut, and P. Ghamisi, “Cross hyperspectral and lidar attention transformer: An extended self-attention for land use and land cover classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024
work page 2024
-
[66]
Asanet: Asymmetric semantic aligning network for rgb and SAR image land cover classification,
P. Zhang, B. Peng, C. Lu, Q. Huang, and D. Liu, “Asanet: Asymmetric semantic aligning network for rgb and SAR image land cover classification,”ISPRS Journal of Photogrammetry and Remote Sensing, vol. 218, pp. 574–587, 2024
work page 2024
-
[67]
Very high resolution synthetic aperture radar systems and imaging: A review,
X. Chen, Z. Dong, Z. Zhang, C. Tu, T. Yi, and Z. He, “Very high resolution synthetic aperture radar systems and imaging: A review,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 7104–7123, 2024
work page 2024
-
[68]
S. A. Twinkle, S. Kamilya, and J. Mukherjee, “Edge preserving multiplicative noise removal of SAR images through convolutional neural network and anisotropic diffusion,” in2024 IEEE India Geoscience and Remote Sensing Symposium (InGARSS). IEEE, 2024, pp. 1–4
work page 2024
-
[69]
Renyi divergences learning for explainable classification of SAR image pairs,
M. Gallet, A. Mian, and A. Atto, “Renyi divergences learning for explainable classification of SAR image pairs,” inICASSP 2024- 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 7445–7449
work page 2024
-
[70]
A novel cfar-based ship detection method using range-compressed data for spaceborne SAR system,
C. Wang, B. Guo, J. Song, F. He, and C. Li, “A novel cfar-based ship detection method using range-compressed data for spaceborne SAR system,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024
work page 2024
-
[71]
SAR images for interference detection and suppression,
H. Yang, P. Lang, J. Yin, and X. Lu, “SAR images for interference detection and suppression,”Journal of Radars, 2025, [OL]
work page 2025
-
[72]
ATRNet-STAR: A large dataset and benchmark towards remote sensing object recognition in the wild,
Y . Liu, W. Li, L. Liu, J. Zhou, B. Peng, Y . Song, X. Xiong, W. Yang, T. Liu, Z. Liu, and X. Li, “ATRNet-STAR: A large dataset and benchmark towards remote sensing object recognition in the wild,”IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–18, 2026
work page 2026
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.