pith. machine review for the scientific record.

arxiv: 2604.02818 · v1 · submitted 2026-04-03 · ⚛️ physics.ao-ph

Recognition: 2 theorem links


MAG-Net: Physics-Aware Multi-Modal Fusion of Geostationary Satellite and Radar for Severe Convective Precipitation Nowcasting

Anyuan Xiong, Dandan Chen, Enda Zhu, Yaqiang Wang

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 18:38 UTC · model grok-4.3

classification ⚛️ physics.ao-ph
keywords precipitation nowcasting · multi-modal fusion · geostationary satellite · radar · convective storms · deep learning · model interpretability

The pith

Fusing radar with three geostationary satellite channels extends severe convective precipitation nowcasting skill past 30 minutes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces MAG-Net to merge radar echo dynamics with infrared and water-vapor satellite observations so that thermodynamic and microphysical signals can support longer-range forecasts of intense storms. Radar-only methods lose accuracy rapidly because they lack these precursors, while existing neural networks suffer from blurred outputs and unstable training. MAG-Net uses separate encoding streams for each data type, an uncertainty-weighted loss that balances reflectivity regression against event probability, and an inference-time fusion step that preserves fine texture. On a multi-year southeastern China dataset the model raises the critical success index at 40 dBZ from 0.172 to 0.255 relative to prior networks. Integrated-gradients attribution shows the network shifts reliance onto satellite inputs as lead time grows and convection strengthens.

Core claim

MAG-Net integrates radar dynamics with the IR 10.8, WV 7.1, and brightness-temperature-difference satellite channels inside a dual-stream encoder and symmetric dual-head decoder. An uncertainty-weighted multi-task objective trains reflectivity regression and probabilistic event detection together, while a gradient-preserving fusion step at inference retains high-frequency detail from the regression head. On the 2018-2023 southeastern China dataset the network improves CSI40 by 0.083 over CPrecNet and raises detection of intense echoes, with Integrated Gradients confirming that satellite dependence grows with forecast horizon and convective intensity.
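CSI40, the headline metric here, is a standard contingency score at a 40 dBZ reflectivity threshold. A minimal sketch (the toy arrays are illustrative, not the paper's verification setup):

```python
import numpy as np

def csi(forecast_dbz, observed_dbz, threshold=40.0):
    """Critical Success Index at a reflectivity threshold:
    hits / (hits + misses + false alarms)."""
    f = forecast_dbz >= threshold
    o = observed_dbz >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Tiny 2x2 grid with one hit, one miss, one false alarm.
obs = np.array([[45.0, 42.0], [10.0, 20.0]])
fcst = np.array([[46.0, 30.0], [41.0, 15.0]])
print(csi(fcst, obs))  # 1 hit / (1 + 1 + 1) ≈ 0.333
```

Because CSI penalizes both misses and false alarms, a gain of 0.083 at the 40 dBZ threshold is a substantial improvement on rare intense echoes.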

What carries the argument

The Dual-Stream Encoder that processes radar and satellite modalities separately before attention-guided fusion, paired with the Gradient-Preserving Fusion inference strategy that combines probabilistic constraints and regression outputs.
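As a loose illustration of the dual-stream pattern (not the paper's architecture: the shapes, weights, and the gating rule standing in for attention are all invented), each modality gets its own encoder before the streams are fused:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Stand-in for a modality-specific encoder: linear map + ReLU.
    return np.maximum(x @ w, 0.0)

# Hypothetical flattened 16x16 fields, 32-dim latent features.
radar = rng.normal(size=(256,))
satellite = rng.normal(size=(3 * 256,))  # IR 10.8, WV 7.1, BTD stacked

w_radar = rng.normal(size=(256, 32)) * 0.05
w_sat = rng.normal(size=(768, 32)) * 0.05

z_radar = encode(radar, w_radar)
z_sat = encode(satellite, w_sat)

# Toy attention-guided fusion: satellite features score the radar
# features, and a softmax over those scores reweights the radar stream.
scores = z_radar * z_sat
attn = np.exp(scores - scores.max())
attn /= attn.sum()
fused = attn * z_radar + z_sat
print(fused.shape)  # (32,)
```

The point of the separate encoders is that radar reflectivity and satellite brightness temperatures have different statistics and physics; forcing them through one shared stem tends to let the dominant modality drown out the other.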

If this is right

  • Nowcasting skill for intense convective events improves at lead times beyond 30 minutes where radar-only methods degrade.
  • The model can be interpreted post hoc to show when satellite data supplies critical precursors for severe weather.
  • Uncertainty-weighted multi-task training stabilizes learning across regression and probability outputs.
  • High-frequency echo texture is retained at inference without sacrificing probabilistic calibration.
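The uncertainty-weighted objective the paper builds on (Kendall et al., reference [51]) is usually implemented with a learned log-variance per task; a minimal sketch assuming that standard formulation rather than the paper's exact loss:

```python
import numpy as np

def uncertainty_weighted_loss(l_reg, l_cls, log_var_reg, log_var_cls):
    """Homoscedastic-uncertainty weighting (Kendall et al., 2018):
    each task loss is scaled by exp(-log sigma^2), and the additive
    log-variance terms keep the learned weights from collapsing."""
    return (np.exp(-log_var_reg) * l_reg + log_var_reg
            + np.exp(-log_var_cls) * l_cls + log_var_cls)

# With both log-variances at zero the weighting is neutral:
print(uncertainty_weighted_loss(0.8, 0.4, 0.0, 0.0))  # ≈ 1.2
```

In training the log-variances are free parameters optimized jointly with the network, which is exactly the "task uncertainty weights" entry in the ledger below: the balance between reflectivity regression and event probability is learned, not hand-tuned.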

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same fusion pattern could be tested on other satellite instruments or ground-based sensors where radar coverage is incomplete.
  • Extending the lead time further might reveal whether additional thermodynamic variables become necessary once satellite signals saturate.
  • Operational warning systems could use the per-pixel attribution maps to flag forecasts that depend heavily on satellite inputs.

Load-bearing premise

The three chosen satellite channels already contain enough thermodynamic and microphysical information to extend nowcasting skill without needing additional channels or independent validation.
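For reference, BTD is a per-pixel difference of brightness temperatures between two channels. The sketch below assumes the common water-vapor-minus-infrared convention as a deep-convection proxy; the paper itself only names the channel, so the sign convention and threshold here are assumptions:

```python
import numpy as np

# Hypothetical brightness temperatures (K) for a 2x2 scene.
ir_108 = np.array([[290.0, 230.0], [285.0, 210.0]])  # IR 10.8 um
wv_71 = np.array([[245.0, 228.0], [244.0, 209.0]])   # WV 7.1 um

# Assumed convention: BTD = T_WV - T_IR. Values approaching zero flag
# cloud tops reaching the upper troposphere, i.e. deep convection.
btd = wv_71 - ir_108
deep_convection = btd > -5.0
print(deep_convection)  # [[False  True] [False  True]]
```

The premise is that this channel triple carries enough instability and microphysics signal; whether it does is exactly what the ablation proposed below would test.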

What would settle it

Retraining the identical architecture on the same radar data but with the three satellite channels removed and checking whether CSI40 at 60-minute lead time falls back to the CPrecNet baseline value.

Figures

Figures reproduced from arXiv: 2604.02818 by Anyuan Xiong, Dandan Chen, Enda Zhu, Yaqiang Wang.

Figure 1. Schematic overview of the proposed MAG-Net (Multi-modal …)
Figure 3. Overall performance evaluation. (a) Mean Absolute Error (MAE) and …
Figure 5. Qualitative visualization of a representative convective initiation event …
Figure 8. Qualitative visualization of a convective initiation event on August …
Figure 9. Spatial attention heatmaps explaining the initiation event in Figure …
read the original abstract

Radar-based convective precipitation nowcasting suffers from rapid performance degradation beyond 30 minutes due to missing thermodynamic variables. Existing deep learning models also face blurring effects, training instability, and limited interpretability. To address this, we propose MAG-Net, a Physics-Aware Multi-modal Attention-guided Generator Network. It integrates radar dynamics with selected geostationary satellite channels (IR 10.8, WV 7.1, BTD) to incorporate thermodynamic and microphysical precursors. MAG-Net features a Dual-Stream Encoder for heterogeneous modalities and a Symmetric Dual-Head Decoder optimizing reflectivity regression and event probability via an uncertainty-weighted multi-task strategy. Furthermore, an inference-time Gradient-Preserving Fusion (GPF) strategy combines probabilistic constraints with regression details for better high-frequency texture retention. Experiments on a large-scale dataset (2018-2023) over southeastern China show MAG-Net outperforms deterministic (e.g., CPrecNet) and generative (e.g., DGMR) baselines. Specifically, it improves CSI40 by 0.083 (0.172 to 0.255) over CPrecNet, enhancing intense convective echo detection. Finally, Integrated Gradients (IG) analysis reveals the model's reliance on satellite inputs increases with forecast lead time and convective intensity, confirming that satellite data captures critical precursors for severe weather prediction.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 0 minor

Summary. The paper proposes MAG-Net, a physics-aware multi-modal attention-guided generator network for severe convective precipitation nowcasting. It fuses radar reflectivity with three selected geostationary satellite channels (IR 10.8, WV 7.1, BTD) via a dual-stream encoder and symmetric dual-head decoder that optimizes reflectivity regression and event probability under an uncertainty-weighted multi-task loss, plus an inference-time gradient-preserving fusion step. On a 2018-2023 southeastern China dataset the model is reported to raise CSI40 from 0.172 (CPrecNet) to 0.255 while Integrated Gradients analysis indicates growing satellite reliance with lead time and convective intensity.

Significance. If the numerical gains and IG-based attribution hold after proper verification, the work would provide concrete evidence that satellite thermodynamic and microphysical precursors can extend radar nowcasting skill beyond 30 min and would supply a reproducible multi-modal architecture with built-in interpretability for operational severe-weather applications.

major comments (3)
  1. [Abstract] Abstract and Experiments section: the headline CSI40 gain of 0.083 is presented without error bars, statistical significance tests, or explicit train/validation/test split details, so the central performance claim cannot be verified from the reported numbers alone.
  2. [Abstract] Abstract and Experiments section: the three satellite channels are described as 'selected' to supply thermodynamic and microphysical precursors, yet no channel-ablation results or comparison against other Himawari-8 bands are supplied; without these the attribution of the observed improvement to the physics-aware fusion remains untested.
  3. [Abstract] Integrated Gradients analysis: the claim that satellite reliance increases with lead time and intensity is stated qualitatively; quantitative IG attribution scores or statistical comparison against the radar-only baseline are not reported, weakening the interpretability contribution.
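The Integrated Gradients method at issue (reference [44]) is easy to reproduce on a toy model, which also shows the kind of quantitative check the referee is asking for. The quadratic model and analytic gradient below are invented for illustration:

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])

def model(x):
    # Toy differentiable model: F(x) = (w . x)^2.
    return float(np.dot(w, x) ** 2)

def grad(x):
    # Analytic gradient of (w . x)^2 is 2 (w . x) w.
    return 2.0 * np.dot(w, x) * w

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann sum approximating the path integral of the
    # gradient along the straight line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0, 0.5])
base = np.zeros(3)
attr = integrated_gradients(x, base)

# Completeness axiom: attributions sum to F(x) - F(baseline).
print(round(attr.sum(), 4), round(model(x) - model(base), 4))  # 0.25 0.25
```

Averaging such per-input attributions over satellite versus radar channels, stratified by lead time and intensity, is precisely the quantitative evidence the report says is missing.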

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which highlight important aspects for improving the verifiability and interpretability of our results. We address each major comment below and will make corresponding revisions to the manuscript.

read point-by-point responses
  1. Referee: [Abstract] Abstract and Experiments section: the headline CSI40 gain of 0.083 is presented without error bars, statistical significance tests, or explicit train/validation/test split details, so the central performance claim cannot be verified from the reported numbers alone.

    Authors: We agree that error bars, significance testing, and explicit split details are necessary to allow verification of the CSI40 improvement. In the revised manuscript we will report standard deviations across multiple random seeds for all key metrics, include paired statistical tests (e.g., Wilcoxon signed-rank) against CPrecNet and other baselines, and provide a clear description of the temporal train/validation/test partitioning (2018–2021 training, 2022 validation, 2023 testing) used on the southeastern China dataset. revision: yes

  2. Referee: [Abstract] Abstract and Experiments section: the three satellite channels are described as 'selected' to supply thermodynamic and microphysical precursors, yet no channel-ablation results or comparison against other Himawari-8 bands are supplied; without these the attribution of the observed improvement to the physics-aware fusion remains untested.

    Authors: The IR 10.8, WV 7.1, and BTD channels were chosen on the basis of established physical relationships to convective instability and microphysics. We acknowledge that empirical ablation evidence would strengthen this attribution. We will add channel-ablation experiments and limited comparisons against additional Himawari-8 bands in the revised Experiments section to quantify the contribution of the selected channels to the reported gains. revision: yes

  3. Referee: [Abstract] Integrated Gradients analysis: the claim that satellite reliance increases with lead time and intensity is stated qualitatively; quantitative IG attribution scores or statistical comparison against the radar-only baseline are not reported, weakening the interpretability contribution.

    Authors: We agree that quantitative IG results would make the interpretability claim more robust. In the revision we will report mean Integrated Gradients attribution scores (with standard deviations) for satellite versus radar inputs stratified by lead time and convective intensity, together with statistical comparisons against the radar-only baseline. revision: yes

Circularity Check

0 steps flagged

No circularity: performance metrics are empirical test-set results on held-out data

full rationale

The paper introduces MAG-Net as a new architecture with dual-stream encoder, symmetric decoder, uncertainty-weighted loss, and inference-time GPF. All reported gains (CSI40 +0.083 over CPrecNet) and IG attributions are obtained by training on 2018-2023 southeastern China data and evaluating on held-out test samples. No equation, parameter fit, or self-citation reduces the claimed improvement or the lead-time dependence of satellite reliance to a quantity defined by the inputs themselves. The channel selection (IR 10.8, WV 7.1, BTD) is presented as a modeling choice whose sufficiency is asserted but not derived from prior results by the same authors. The derivation chain is therefore self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that the three satellite channels encode usable thermodynamic precursors and on training-time fitted parameters that balance the multi-task loss; no new physical entities are postulated.

free parameters (1)
  • task uncertainty weights
    Parameters that balance reflectivity regression against event probability in the multi-task objective.
axioms (1)
  • domain assumption Selected satellite channels (IR 10.8, WV 7.1, BTD) capture thermodynamic and microphysical precursors relevant to convective precipitation
    Invoked to justify the multi-modal fusion design.

pith-pipeline@v0.9.0 · 5552 in / 1183 out tokens · 53611 ms · 2026-05-13T18:38:34.418337+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

60 extracted references · 60 canonical work pages

  1. J. W. Wilson, N. A. Crook, C. K. Mueller, J. Sun, and M. Dixon, “Nowcasting thunderstorms: A status report,” Bulletin of the American Meteorological Society, vol. 79, no. 10, pp. 2079–2100, 1998.
  2. J. Leinonen, U. Hamann, U. Germann, and J. R. Mecikalski, “Nowcasting thunderstorm hazards using machine learning: the impact of data sources on performance,” Natural Hazards and Earth System Sciences, vol. 22, no. 2, pp. 577–597, 2022.
  3. J. Leinonen, U. Hamann, I. V. Sideris, and U. Germann, “Thunderstorm nowcasting with deep learning: A multi-hazard data fusion model,” Geophysical Research Letters, vol. 50, no. 8, p. e2022GL101626, 2023.
  4. N. E. Bowler, C. E. Pierce, and A. Seed, “Development of a precipitation nowcasting algorithm based upon optical flow techniques,” Journal of Hydrology, vol. 288, no. 1-2, pp. 74–91, 2004.
  5. N. E. Bowler, C. E. Pierce, and A. W. Seed, “STEPS: A probabilistic precipitation forecasting scheme which merges an extrapolation nowcast with downscaled NWP,” Quarterly Journal of the Royal Meteorological Society, vol. 132, no. 620, pp. 2127–2155, 2006.
  6. S. Pulkkinen, D. Nerini, A. A. Pérez Hortal, C. Velasco-Forero, A. Seed, U. Germann, and L. Foresti, “Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0),” Geoscientific Model Development, vol. 12, no. 10, pp. 4185–4219, 2019.
  7. C. J. Short and J. Petch, “Reducing the spin-up of a regional NWP system without data assimilation,” Quarterly Journal of the Royal Meteorological Society, vol. 148, no. 745, pp. 1623–1643, 2022.
  8. P. Das, A. Posch, N. Barber, M. Hicks, K. Duffy, T. Vandal, D. Singh, K. v. Werkhoven, and A. R. Ganguly, “Hybrid physics-AI outperforms numerical weather prediction for extreme precipitation nowcasting,” npj Climate and Atmospheric Science, vol. 7, no. 1, p. 282, 2024.
  9. D. L. De Luca, F. Napolitano, D. Kim, C. Onof, D. Biondi, L.-P. Wang, F. Russo, E. Ridolfi, B. Moccia, and F. Marconi, “Rainfall nowcasting models: state of the art and possible future perspectives,” Hydrological Sciences Journal, pp. 1–20, 2025.
  10. X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” Advances in Neural Information Processing Systems, vol. 28, 2015.
  11. Y. Wang, M. Long, J. Wang, Z. Gao, and P. S. Yu, “PredRNN: Recurrent neural networks for predictive learning using spatiotemporal LSTMs,” Advances in Neural Information Processing Systems, vol. 30, 2017.
  12. Y. Wang, Z. Gao, M. Long, J. Wang, and P. S. Yu, “PredRNN++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning,” in International Conference on Machine Learning. PMLR, 2018, pp. 5123–5132.
  13. Z. Gao, C. Tan, L. Wu, and S. Z. Li, “SimVP: Simpler yet better video prediction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 3170–3180.
  14. F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565–571.
  15. X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794–7803.
  16. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
  17. Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong et al., “Swin Transformer V2: Scaling up capacity and resolution,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12009–12019.
  18. Z. Gao, X. Shi, H. Wang, Y. Zhu, Y. B. Wang, M. Li, and D.-Y. Yeung, “Earthformer: Exploring space-time transformers for earth system forecasting,” Advances in Neural Information Processing Systems, vol. 35, pp. 25390–25403, 2022.
  19. Z. Zhao, X. Dong, Y. Wang, and C. Hu, “Advancing realistic precipitation nowcasting with a spatiotemporal transformer-based denoising diffusion model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024.
  20. L. Chen, Y. Cao, L. Ma, and J. Zhang, “A deep learning-based methodology for precipitation nowcasting with radar,” Earth and Space Science, vol. 7, no. 2, p. e2019EA000812, 2020.
  21. S. Ravuri, K. Lenc, M. Willson, D. Kangin, R. Lam, P. Mirowski, M. Fitzsimons, M. Athanassiadou, S. Kashem, S. Madge et al., “Skilful precipitation nowcasting using deep generative models of radar,” Nature, vol. 597, no. 7878, pp. 672–677, 2021.
  22. Y. Zhang, M. Long, K. Chen, L. Xing, R. Jin, M. I. Jordan, and J. Wang, “Skilful nowcasting of extreme precipitation with NowcastNet,” Nature, vol. 619, no. 7970, pp. 526–532, 2023.
  23. J. R. Mecikalski and K. M. Bedka, “Forecasting convective initiation by monitoring the evolution of moving cumulus in daytime GOES imagery,” Monthly Weather Review, vol. 134, no. 1, pp. 49–78, 2006.
  24. R. D. Roberts and S. Rutledge, “Nowcasting storm initiation and growth using GOES-8 and WSR-88D data,” Weather and Forecasting, vol. 18, no. 4, pp. 562–584, 2003.
  25. Q. Jin, X. Zhang, X. Xiao, Y. Wang, G. Meng, S. Xiang, and C. Pan, “Spatiotemporal inference network for precipitation nowcasting with multimodal fusion,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 1299–1314, 2023.
  26. K. Zheng, L. He, H. Ruan, S. Yang, J. Zhang, C. Luo, S. Tang, J. Zhang, Y. Tian, and J. Cheng, “A cross-modal spatiotemporal joint predictive network for rainfall nowcasting,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–23, 2024.
  27. J. Tan, Q. Huang, and S. Chen, “Deep learning model based on multi-scale feature fusion for precipitation nowcasting,” Geoscientific Model Development, vol. 17, no. 1, pp. 53–69, 2024.
  28. W. Cui, J. Si, L. Zhang, L. Han, and Y. Chen, “Enhanced multimodal-fusion network for radar quantitative precipitation estimation incorporating relative humidity data,” IEEE Transactions on Geoscience and Remote Sensing, 2025.
  29. H. Wu, Q. Yang, J. Liu, and G. Wang, “A spatiotemporal deep fusion model for merging satellite and gauge precipitation in China,” Journal of Hydrology, vol. 584, p. 124664, 2020.
  30. D. Niu, Y. Li, H. Wang, Z. Zang, M. Jiang, X. Chen, and Q. Huang, “FSRGAN: A satellite and radar-based fusion prediction network for precipitation nowcasting,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 7002–7013, 2024.
  31. Z. Wang, B. He, C. Wang, B. Xu, and C. Bai, “Precipitation retrieval integrating multiple satellite observations: A dataset and a framework,” IEEE Transactions on Geoscience and Remote Sensing, 2025.
  32. Q. Liu, Y. Xiao, Y. Gui, G. Dai, H. Li, X. Zhou, A. Ren, G. Zhou, and J. Shen, “MMF-RNN: A multimodal fusion model for precipitation nowcasting using radar and ground station data,” IEEE Transactions on Geoscience and Remote Sensing, 2025.
  33. D. Han, J. Im, Y. Shin, and J. Lee, “Key factors for quantitative precipitation nowcasting using ground weather radar data based on deep learning,” Geoscientific Model Development, vol. 16, no. 20, pp. 5895–5914, 2023.
  34. M. Liu, W. Zhang, Y. Lou, X. Dong, Z. Zhang, and X. Zhang, “A deep learning-based precipitation nowcasting model fusing GNSS-PWV and radar echo observations,” IEEE Transactions on Geoscience and Remote Sensing, 2025.
  35. L. Han, H. Liang, H. Chen, W. Zhang, and Y. Ge, “Convective precipitation nowcasting using U-Net model,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–8, 2021.
  36. A. Mamalakis, I. Ebert-Uphoff, and E. A. Barnes, “Explainable artificial intelligence in meteorology and climate science: Model fine-tuning, calibrating trust and learning new science,” in International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers. Springer, 2020, pp. 315–339.
  37. C. Meng, S. Griesemer, D. Cao, S. Seo, and Y. Liu, “When physics meets machine learning: A survey of physics-informed machine learning,” Machine Learning for Computational Science and Engineering, vol. 1, no. 1, p. 20, 2025.
  38. G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang, “Physics-informed machine learning,” Nature Reviews Physics, vol. 3, no. 6, pp. 422–440, 2021.
  39. K. Kashinath, M. Mustafa, A. Albert, J. Wu, C. Jiang, S. Esmaeilzadeh, K. Azizzadenesheli, R. Wang, A. Chattopadhyay, A. Singh et al., “Physics-informed machine learning: case studies for weather and climate modelling,” Philosophical Transactions of the Royal Society A, vol. 379, no. 2194, p. 20200093, 2021.
  40. Z. Li and I. Demir, “Better localized predictions with out-of-scope information and explainable AI: One-shot SAR backscatter nowcast framework with data from neighboring region,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 207, pp. 92–103, 2024.
  41. T. Inoue, “A cloud type classification with NOAA 7 split-window measurements,” Journal of Geophysical Research: Atmospheres, vol. 92, no. D4, pp. 3991–4000, 1987.
  42. E. Ebert, L. Wilson, A. Weigel, M. Mittermaier, P. Nurmi, P. Gill, M. Göber, S. Joslyn, B. Brown, T. Fowler et al., “Progress and challenges in forecast verification,” Meteorological Applications, vol. 20, no. 2, pp. 130–139, 2013.
  43. J. Park and C. Lee, “CPrecNet: Enhanced nowcast of high-resolution short-term precipitation using deep learning,” Geophysical Research Letters, vol. 52, no. 13, p. e2024GL113907, 2025.
  44. M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in International Conference on Machine Learning. PMLR, 2017, pp. 3319–3328.
  45. L. Bai, G. Chen, and L. Huang, “Image processing of radar mosaics for the climatology of convection initiation in south China,” Journal of Applied Meteorology and Climatology, vol. 59, no. 1, pp. 65–81, 2020.
  46. J. Yang, Z. Zhang, C. Wei, F. Lu, and Q. Guo, “Introducing the new generation of Chinese geostationary weather satellites, Fengyun-4,” Bulletin of the American Meteorological Society, vol. 98, no. 8, pp. 1637–1658, 2017.
  47. X. Shi, Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo, “Deep learning for precipitation nowcasting: A benchmark and a new model,” Advances in Neural Information Processing Systems, vol. 30, 2017.
  48. V. L. Guen and N. Thome, “Disentangling physical dynamics from unknown factors for unsupervised video prediction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11474–11484.
  49. D. Chen, D. Yao, and Y. Wang, “SynQPF-Net: Short-term precipitation forecasts by integrating GraphCast predictions and high-resolution observational analyses,” Journal of Geophysical Research: Machine Learning and Computation, vol. 3, no. 1, p. e2025JH000907, 2026.
  50. Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin Transformer: Hierarchical vision transformer using shifted windows,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012–10022.
  51. A. Kendall, Y. Gal, and R. Cipolla, “Multi-task learning using uncertainty to weigh losses for scene geometry and semantics,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7482–7491.
  52. H. He and E. A. Garcia, “Learning from imbalanced data,” IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263–1284, 2009.
  53. Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6228–6237.
  54. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision. Springer, 2014, pp. 818–833.
  55. D. J. Gagne II, S. E. Haupt, D. W. Nychka, and G. Thompson, “Interpretable deep learning for spatial analysis of severe hailstorms,” Monthly Weather Review, vol. 147, no. 8, pp. 2827–2845, 2019.
  56. U. Germann and I. Zawadzki, “Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology,” Monthly Weather Review, vol. 130, no. 12, pp. 2859–2873, 2002.
  57. M. Veillette, S. Samsi, and C. Mattioli, “SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology,” Advances in Neural Information Processing Systems, vol. 33, pp. 22009–22019, 2020.
  58. J. Park and C. Lee, “The codes for CPrecNet,” distributed by Zenodo, 2024. Available: https://doi.org/10.5281/zenodo.13971354. Accessed: 2026-02-13.
  59. OpenClimateFix, “Skillful nowcasting: A PyTorch implementation of DGMR,” distributed by GitHub, 2023. Available: https://github.com/openclimatefix/skillful_nowcasting. Accessed: 2026-02-13.
  60. C. Tan, S. Li, Z. Gao, W. Guan, Z. Wang, Z. Liu, L. Wu, and S. Z. Li, “OpenSTL: A comprehensive benchmark of spatiotemporal predictive learning,” distributed by GitHub, 2023. Available: https://github.com/chengtan9907/OpenSTL. Accessed: 2026-02-13.