pith. machine review for the scientific record.

arxiv: 2604.13947 · v1 · submitted 2026-04-15 · 💻 cs.CV

Recognition: unknown

Heuristic Style Transfer for Real-Time, Efficient Weather Attribute Detection

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 13:34 UTC · model grok-4.3

classification 💻 cs.CV
keywords weather attribute detection · style transfer · multi-task learning · Gram matrices · PatchGAN · real-time inference · embedded vision · image classification

The pith

Weather attributes can be detected in real time by treating them as visual style variations in lightweight multi-task models.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper explores the idea that weather conditions primarily appear as changes in image style, which can be modeled using techniques from style transfer such as Gram matrices and PatchGAN features. It introduces two model families, RTM based on truncated ResNet and PMG combining PatchGAN with Gram matrices, integrated into a multi-task attention framework for predicting weather types and 11 attributes. These models are designed to be efficient, with the PMG using fewer than 5 million parameters while achieving F1 scores over 96% on an internal test set of weather images and over 78% in zero-shot tests on external datasets. The work also provides a large public dataset of over 500,000 annotated images. This matters because it offers a computationally light way to add weather detection to vision systems for applications like autonomous driving or traffic monitoring where speed and low resource use are critical.

Core claim

Framing weather attribute detection as a heuristic style transfer problem, the authors demonstrate that automated Gram matrices from lower and intermediate layers of a truncated ResNet-50, combined with PatchGAN-style local style capture in a multi-task attention model, enable accurate prediction of 53 weather classes with high efficiency and generalization.

What carries the argument

The PMG (PatchGAN-MultiTasks-Gram) architecture that integrates local style features from PatchGAN with global Gram matrix computations in a supervised multi-task learning setup with attention mechanisms.
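The global Gram statistic this architecture leans on can be sketched in a few lines. This is an illustrative reconstruction from the standard style-transfer definition, not the paper's "automated" implementation; shapes and normalization are assumptions.

```python
import numpy as np

def gram_matrix(feat):
    """Global Gram matrix of a CNN feature map.

    feat: array of shape (C, H, W), channel-first activations from a
    lower or intermediate backbone layer (e.g. truncated ResNet-50).
    Returns a (C, C) matrix of channel co-activations, normalized by
    the number of spatial positions (an illustrative convention).
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)      # flatten spatial dimensions
    return (f @ f.T) / (h * w)      # channel-by-channel correlations

# toy feature map: 8 channels on a 4x4 grid
g = gram_matrix(np.random.rand(8, 4, 4))
print(g.shape)  # (8, 8)
```

Because the Gram matrix discards spatial arrangement, it is symmetric and invariant to permuting pixel positions, which is precisely what makes it a "style" rather than "content" descriptor.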

If this is right

  • The modular architecture allows easy addition or removal of style-related or weather-related tasks.
  • The models exhibit strong zero-shot generalization to external datasets.
  • Real-time operation with small memory footprint makes them deployable on embedded systems.
  • The released dataset of 503,875 images supports further research in weather-aware computer vision.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This style-centric approach might be adapted to detect other environmental factors like road surface conditions or atmospheric effects.
  • Combining these models with object detection systems could enhance scene understanding in varying weather without increasing computational load significantly.
  • Future work could test if the same heuristics apply to video sequences for temporal consistency in weather attribute tracking.

Load-bearing premise

That the visual effects of weather conditions can be adequately represented and distinguished through automated computations of style features like Gram matrices and local PatchGAN patches.
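The "local Gram" half of this premise can be sketched by tiling the feature map into patches and computing one Gram matrix per patch, PatchGAN-style. The patch size and tiling below are illustrative assumptions, not the paper's stated configuration.

```python
import numpy as np

def local_grams(feat, patch=4):
    """Patch-wise Gram matrices for local style capture.

    feat: (C, H, W) feature map; H and W assumed divisible by `patch`
    (an illustrative choice, not the paper's exact tiling).
    Returns (H//patch, W//patch, C, C): one Gram per spatial patch,
    so coarse spatial layout survives, unlike a single global Gram.
    """
    c, h, w = feat.shape
    gh, gw = h // patch, w // patch
    out = np.empty((gh, gw, c, c))
    for i in range(gh):
        for j in range(gw):
            p = feat[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            f = p.reshape(c, -1)
            out[i, j] = (f @ f.T) / (patch * patch)
    return out

grid = local_grams(np.random.rand(8, 8, 8), patch=4)
print(grid.shape)  # (2, 2, 8, 8)
```

If the premise holds, the grid of local Grams should separate, say, a wet road in the lower patches from fog in the upper ones, which a single global statistic would blur together.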

What would settle it

If experiments on additional external datasets with different lighting, cameras, or geographies show F1 scores dropping substantially below the reported 78 percent, this would challenge the claim of effective style-based generalization.
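Such a test hinges on how F1 is aggregated across the 53 classes. A minimal macro-F1 sketch (one common convention; the paper's exact averaging is not specified here):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1 over `n_classes` labels.

    Unweighted mean of per-class F1, so rare weather attributes count
    as much as common ones. A class absent from both y_true and y_pred
    contributes 0 here (one convention among several).
    """
    f1s = []
    for k in range(n_classes):
        tp = np.sum((y_pred == k) & (y_true == k))
        fp = np.sum((y_pred == k) & (y_true != k))
        fn = np.sum((y_pred != k) & (y_true == k))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(round(macro_f1(y_true, y_pred, 3), 3))  # → 0.656
```

Under macro averaging, a model that collapses on one rare attribute in a shifted external dataset takes a visible hit, which is exactly the failure mode the zero-shot claim needs to survive.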

Original abstract

We present lightweight and efficient architectures to detect weather conditions from RGB images, predicting the weather type (sunny, rain, snow, fog) and 11 complementary attributes such as intensity, visibility, and ground condition, for a total of 53 classes across the tasks. This work examines to what extent weather conditions manifest as variations in visual style. We investigate style-inspired techniques, including Gram matrices, a truncated ResNet-50 targeting lower and intermediate layers, and PatchGAN-style architectures, within a multi-task framework with attention mechanisms. Two families are introduced: RTM (ResNet50-Truncated-MultiTasks) and PMG (PatchGAN-MultiTasks-Gram), together with their variants. Our contributions include automation of Gram-matrix computation, integration of PatchGAN into supervised multi-task learning, and local style capture through local Gram for improved spatial coherence. We also release a dataset of 503,875 images annotated with 12 weather attributes under a Creative Commons Attribution (CC-BY) license. The models achieve F1 scores above 96 percent on our internal test set and above 78 percent in zero-shot evaluation on several external datasets, confirming their generalization ability. The PMG architecture, with fewer than 5 million parameters, runs in real time with a small memory footprint, making it suitable for embedded systems. The modular design of the models also allows style-related or weather-related tasks to be added or removed as needed.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes two families of lightweight models (RTM and PMG) for real-time multi-task detection of weather types and 11 attributes (53 classes total) from RGB images. It frames weather attributes as visual style variations and integrates automated Gram matrices, truncated ResNet-50 layers, and PatchGAN-style local features within a multi-task attention framework. The authors release a CC-BY dataset of 503,875 images and report F1 scores above 96% on an internal test set and above 78% in zero-shot evaluation on external datasets, with the PMG variant using fewer than 5 million parameters for real-time inference on embedded systems.

Significance. If the performance numbers and the load-bearing role of the style-transfer components hold, the work would deliver practical, low-footprint models for embedded weather detection together with a large public dataset that could support further research. The modular architecture and explicit focus on real-time constraints are clear engineering strengths; the heuristic style-transfer framing, if substantiated, could generalize to other domains where low-level visual statistics correlate with semantic labels.

major comments (2)
  1. [Abstract and results section] The central claim that weather attributes (intensity, visibility, ground condition) are effectively captured by style statistics (Gram matrices, local PatchGAN features) rather than semantic content is load-bearing for the paper's framing, yet no ablation is described that isolates the contribution of the Gram-matrix and PatchGAN modules versus a plain multi-task attention baseline. Without such controls, it remains possible that dataset correlations or the attention mechanism alone drive the reported F1 scores.
  2. [Abstract] The zero-shot claim (>78% F1 on external datasets) is presented as evidence of style generalization, but the manuscript provides no analysis of distribution shift between internal and external sets (camera, geography, lighting) or error breakdown by attribute type. If low-level texture overlap rather than learned style transfer explains the numbers, the generalization argument is weakened.

minor comments (2)
  1. [Abstract] The phrase 'automation of Gram-matrix computation' is introduced without an equation, algorithm box, or reference to the precise implementation; adding this would improve reproducibility.
  2. The manuscript would benefit from explicit reporting of training/validation/test splits, class imbalance handling, and at least one standard supervised baseline (e.g., unmodified ResNet-50 multi-task) to allow readers to assess the incremental value of the style components.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments. We address each major point below and will incorporate the suggested analyses into a revised manuscript to strengthen the claims regarding the role of style statistics and the zero-shot generalization.

Point-by-point responses
  1. Referee: [Abstract and results section] The central claim that weather attributes (intensity, visibility, ground condition) are effectively captured by style statistics (Gram matrices, local PatchGAN features) rather than semantic content is load-bearing for the paper's framing, yet no ablation is described that isolates the contribution of the Gram-matrix and PatchGAN modules versus a plain multi-task attention baseline. Without such controls, it remains possible that dataset correlations or the attention mechanism alone drive the reported F1 scores.

    Authors: We agree that an explicit ablation isolating the style-transfer components is necessary to substantiate the framing. In the revised manuscript we will add a controlled ablation: a plain multi-task attention baseline using the same truncated ResNet-50 backbone and attention modules but without Gram-matrix or PatchGAN-style local feature branches. We will report F1 scores on the internal test set for this baseline versus the full RTM and PMG variants, thereby quantifying the incremental contribution of the automated Gram matrices and local style capture. This table will be placed in the results section alongside the existing performance numbers. revision: yes

  2. Referee: [Abstract] The zero-shot claim (>78% F1 on external datasets) is presented as evidence of style generalization, but the manuscript provides no analysis of distribution shift between internal and external sets (camera, geography, lighting) or error breakdown by attribute type. If low-level texture overlap rather than learned style transfer explains the numbers, the generalization argument is weakened.

    Authors: We acknowledge the need for greater transparency on the zero-shot evaluation. In the revision we will add (i) a quantitative characterization of distribution shift (differences in camera models, geographic regions, and illumination statistics between the 503k-image training set and each external dataset) and (ii) a per-attribute error breakdown (precision, recall, and F1) for the zero-shot results. These additions will appear in a new subsection of the experiments, allowing readers to assess whether the observed performance is driven by style statistics or by residual low-level texture correlations. revision: yes

Circularity Check

0 steps flagged

No significant circularity; empirical results from standard training and evaluation

full rationale

The paper reports F1 scores from training PMG and RTM architectures on a released dataset of 503k images, with evaluation on internal held-out test sets and zero-shot external datasets. No equations are presented that reduce the claimed metrics to inputs by construction. No self-citations are invoked as load-bearing for uniqueness theorems or ansatzes. The style-transfer framing is an empirical hypothesis tested via supervised multi-task learning, not a derivation that collapses to its own definitions or fits.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that weather attributes are adequately represented by style statistics; standard deep-learning training assumptions apply but no new entities or fitted constants beyond network weights are introduced.

axioms (1)
  • domain assumption: Weather conditions manifest as variations in visual style
    Explicitly stated as the core investigative premise in the abstract.

pith-pipeline@v0.9.0 · 5575 in / 1169 out tokens · 61838 ms · 2026-05-10T13:34:11.666289+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

61 extracted references · 22 canonical work pages · 3 internal anchors
