pith. machine review for the scientific record.

arxiv: 2605.00878 · v1 · submitted 2026-04-26 · 💻 cs.CV

Recognition: unknown

Single Image Defogging Using a Fourth-Order Telegraph PDE Guided by Physical Haze Modeling

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 20:33 UTC · model grok-4.3

classification 💻 cs.CV
keywords image defogging · PDE restoration · dark channel prior · telegraph equation · haze removal · fourth-order diffusion · image enhancement · physical modeling

The pith

A fourth-order telegraph PDE guided by dark channel prior estimates restores single foggy images while preserving structural details.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents a hybrid method for single-image defogging that combines physical haze modeling with a fourth-order nonlinear telegraph PDE. Dark channel prior estimates supply the atmospheric light, transmission map, and a guidance image, after which the PDE evolves the result using an edge-adaptive diffusion coefficient and a transmission-weighted fidelity term. The fourth-order diffusion is intended to suppress haze, while the hyperbolic telegraph form improves numerical stability and convergence, monitored by relative error norms. The approach is evaluated against dark channel prior variants and variational methods, using MSE and SSIM when ground truth exists and no-reference metrics otherwise. A reader would care because the method offers a way to enhance visibility from a single image under the unknown depth and scattering conditions typical of real outdoor scenes.
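The physical model the pipeline inverts is the standard haze formation equation, I(x) = J(x)·t(x) + A·(1 − t(x)); given DCP estimates of A and t, a guidance image follows by direct inversion. A minimal NumPy sketch of that inversion (the clipping floor `t_min=0.1` is a common default, an assumption rather than a value from the paper):

```python
import numpy as np

def invert_haze_model(I, A, t, t_min=0.1):
    """Recover a scene radiance estimate J from a foggy image I via
    the haze formation model I = J*t + A*(1 - t).

    I : (H, W, 3) foggy image in [0, 1]
    A : (3,) atmospheric light
    t : (H, W) transmission map
    """
    t = np.clip(t, t_min, 1.0)           # avoid division blow-up in dense fog
    return (I - A) / t[..., None] + A    # broadcast t over color channels
```

This is exact wherever the transmission estimate is exact; errors in t propagate directly into J, which is the sensitivity the referee report below presses on.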

Core claim

The authors claim that evolving a fourth-order telegraph PDE that incorporates an edge-adaptive diffusion coefficient and a fidelity term weighted by the transmission map, after initializing with dark channel prior estimates of atmospheric parameters, produces defogged images of comparable visual quality that maintain structural details better than pure dark channel prior or variational baselines.

What carries the argument

A fourth-order nonlinear telegraph PDE with edge-adaptive diffusion coefficient and transmission-map-weighted fidelity term, evolved from a dark channel prior guidance image.
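The paper's exact coefficients are not reproduced on this page, so the following is a sketch assuming the standard fourth-order telegraph structure used in related despeckling work; the symbols λ, μ, c(·), and the Gaussian smoother G_σ are assumed names, not necessarily the paper's notation:

```latex
\frac{\partial^2 u}{\partial \tau^2} + \lambda \frac{\partial u}{\partial \tau}
  = -\,\Delta\bigl( c\bigl(\lvert \Delta (G_\sigma * u) \rvert\bigr)\, \Delta u \bigr)
  \;-\; \mu\, t(x)\,(u - I)
```

with u the evolving image, I the DCP-derived guidance image, t(x) the estimated transmission weighting the fidelity term, c(·) the edge-adaptive diffusion coefficient, λ the telegraph damping, and μ the fidelity weight.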

If this is right

  • The method achieves comparable or better performance on both synthetic and real images when measured by MSE, SSIM, FADE, average gradient, and entropy.
  • Fourth-order diffusion suppresses haze while the hyperbolic formulation improves numerical stability and convergence monitored by relative error norms.
  • Structural details are preserved across comparisons with dark channel prior, modified dark channel prior, and variational defogging techniques.
  • The hybrid model works on single images without requiring multiple inputs or per-scene ground truth.
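Several of the success criteria above reduce to simple image statistics. Hedged NumPy sketches of three of the listed metrics, assuming grayscale input in [0, 1] (SSIM and FADE require substantially more machinery and are omitted):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def average_gradient(img):
    """Mean magnitude of finite-difference gradients; higher ~ sharper."""
    gx = np.diff(img, axis=1)[:-1, :]    # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]    # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def entropy(img, bins=256):
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))
```

Note that average gradient and entropy reward contrast and texture, not fidelity, which is why the no-reference bullet is weaker evidence than the MSE/SSIM one.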

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The transmission-weighted fidelity term suggests the PDE could be adapted to other restoration tasks where depth-dependent weighting is available.
  • Success of the method implies that physical priors can stabilize higher-order PDEs in inverse imaging problems beyond defogging.
  • The approach could be extended to video sequences by adding a temporal term to the telegraph PDE while retaining the same guidance strategy.

Load-bearing premise

The dark channel prior produces sufficiently accurate estimates of atmospheric light and transmission map for the PDE evolution to recover faithful details across varied real-world scenes without ground truth.
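This premise can be made concrete: the dark channel is a patch minimum over color channels, atmospheric light comes from the brightest dark-channel pixels, and transmission is one minus a scaled dark channel of the A-normalized image. A minimal sketch following He et al.; the patch size and ω = 0.95 are the usual defaults, assumed rather than taken from this paper:

```python
import numpy as np

def dark_channel(I, patch=15):
    """Per-pixel min over color channels, then min over a patch (patch odd)."""
    mins = I.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return win.min(axis=(-1, -2))

def estimate_A(I, dark, top=0.001):
    """Atmospheric light: mean color of the brightest 0.1% dark-channel pixels."""
    n = max(1, int(top * dark.size))
    idx = np.argsort(dark.ravel())[-n:]
    return I.reshape(-1, 3)[idx].mean(axis=0)

def estimate_t(I, A, omega=0.95, patch=15):
    """Transmission: 1 - omega * dark channel of the A-normalized image."""
    return 1.0 - omega * dark_channel(I / A, patch)
```

The failure modes the referee flags (sky, bright objects, dense fog) are exactly where the patch-minimum assumption behind `dark_channel` breaks down.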

What would settle it

A set of real-world foggy images on which the proposed method records worse no-reference scores, such as higher FADE or lower contrast restoration index, than the standard dark channel prior alone.

Figures

Figures reproduced from arXiv: 2605.00878 by Manish Kumar, Rajendra K. Ray.

Figure 1
Figure 1: Ground truth images. view at source ↗
Figure 2
Figure 2: Visual comparison of defogging for Img 1. The first column shows the input images. view at source ↗
Figure 3
Figure 3: Visual comparison of defogging for Img 2. The first column shows the input images. view at source ↗
Figure 4
Figure 4: Visual comparison of defogging for Img 3. The first column shows the input images. view at source ↗
Figure 5
Figure 5: Visual comparison of defogging for Img 4. The first column shows the input images. view at source ↗
Figure 6
Figure 6: Visual comparison of defogging for Img 5. The first column shows the input images. view at source ↗
Figure 7
Figure 7: Visual comparison of defogging for Img 6. The first column shows the input images. view at source ↗
Figure 8
Figure 8: Comparison of no-reference quality metrics. view at source ↗
Figure 9
Figure 9: Line graph comparison of MSE and SSIM of defogging results obtained using different methods. view at source ↗
Figure 10
Figure 10: The first column contains input foggy no-reference images. view at source ↗
read the original abstract

In real-world scenarios, image defogging is an inverse problem due to unknown scene depth, atmospheric scattering, and the common absence of ground truth . To resolve the issue, we propose a hybrid defogging model that integrates a fourth-order nonlinear PDE with a physical haze formation model. We used Dark Channel Prior to estimate atmospheric parameters and to generate a guidance image, while the final restoration is performed via a fourth-order PDE-based evolution. A fourth-order PDE of the type telegraph is then evolved, incorporating an edge-adaptive diffusion coefficient and a fidelity term weighted by the transmission map. Fourth-order diffusion effectively suppresses haze while preserving structural details, and the hyperbolic formulation improves numerical stability and convergence behavior. We use relative error norm criteria for the convergence of our PDE. The proposed method is compared with Dark Channel prior, modified Dark Channel prior, and variational-based single-image defogging techniques. When we have ground truth available, we use MSE and SSIM for quantitative evaluation, whereas no-reference metrics, including FADE, Contrast Restoration Index, Average Gradient, and Entropy, are applied to real-world foggy images. Experimental results demonstrate that the proposed hybrid PDE-based method provides comparable visual quality and maintains structural details.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The paper proposes a hybrid single-image defogging method that integrates the Dark Channel Prior (DCP) to estimate atmospheric light A and transmission map t(x) and generate a guidance image, followed by evolution of a fourth-order nonlinear telegraph PDE. The PDE uses an edge-adaptive diffusion coefficient derived from the guidance and a fidelity term weighted by t(x). Convergence is monitored via relative error norm. When ground truth is available, evaluation uses MSE and SSIM; on real images, no-reference metrics (FADE, Contrast Restoration Index, Average Gradient, Entropy) are reported. The central claim is that the method yields visual quality comparable to DCP, modified DCP, and variational baselines while better preserving structural details.

Significance. If validated, the hybrid physical-PDE formulation could offer a stable, detail-preserving alternative to purely variational or learning-based defogging by leveraging hyperbolic telegraph dynamics for improved convergence. The explicit use of DCP-derived guidance for both fidelity weighting and edge adaptation is a clear design choice that merits credit for attempting to ground the PDE in the haze formation model. However, the significance remains provisional given the absence of any isolation of the PDE contribution from DCP accuracy.

major comments (3)
  1. [Method section (PDE formulation)] The fidelity term ||u - I||^2 weighted by the DCP-derived transmission map t(x) and the edge-adaptive coefficient both presuppose that DCP estimates are sufficiently accurate; no sensitivity analysis, propagation bounds, or synthetic experiments with known ground-truth depth are provided to show that DCP errors do not anchor the solution to incorrect values in sky, bright-object, or dense-fog regions.
  2. [Experimental results] The reported MSE/SSIM and no-reference metric improvements are presented only for the full pipeline; no ablation removing the DCP-weighted fidelity term or the DCP-derived guidance image is shown, so it is impossible to determine whether observed quality stems from the fourth-order telegraph PDE or from DCP's known behavior.
  3. [Abstract and evaluation description] The claim that 'fourth-order diffusion effectively suppresses haze while preserving structural details' is not supported by any quantitative isolation of the PDE evolution independent of the DCP prior, undermining attribution of the 'comparable visual quality' result.
minor comments (1)
  1. [Abstract] Tense inconsistency ('We used Dark Channel Prior' versus present-tense description of the method).

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. The comments highlight important aspects of validating the PDE's contribution in our hybrid defogging method. We respond to each major comment and outline the revisions we will implement in the updated manuscript.

read point-by-point responses
  1. Referee: [Method section (PDE formulation)] The fidelity term ||u - I||^2 weighted by the DCP-derived transmission map t(x) and the edge-adaptive coefficient both presuppose that DCP estimates are sufficiently accurate; no sensitivity analysis, propagation bounds, or synthetic experiments with known ground-truth depth are provided to show that DCP errors do not anchor the solution to incorrect values in sky, bright-object, or dense-fog regions.

    Authors: We agree that demonstrating robustness to DCP estimation errors is important for the credibility of the hybrid model. Although the original manuscript relies on standard DCP usage without explicit sensitivity tests, we will revise the method and experimental sections to include synthetic experiments using known ground-truth depth maps to generate foggy images. These will quantify how DCP inaccuracies affect the PDE evolution in regions such as skies and dense fog, and we will discuss error propagation bounds based on the fidelity weighting. revision: yes

  2. Referee: [Experimental results] The reported MSE/SSIM and no-reference metric improvements are presented only for the full pipeline; no ablation removing the DCP-weighted fidelity term or the DCP-derived guidance image is shown, so it is impossible to determine whether observed quality stems from the fourth-order telegraph PDE or from DCP's known behavior.

    Authors: The referee correctly identifies that the current results do not isolate the PDE's contribution through ablations. To address this, the revised manuscript will include ablation studies: (1) replacing the t(x)-weighted fidelity with a uniform weight, and (2) using a guidance image without DCP derivation (e.g., the input image itself). These will be evaluated on the same datasets to show the specific benefits of the telegraph PDE dynamics and edge-adaptive terms. revision: yes

  3. Referee: [Abstract and evaluation description] The claim that 'fourth-order diffusion effectively suppresses haze while preserving structural details' is not supported by any quantitative isolation of the PDE evolution independent of the DCP prior, undermining attribution of the 'comparable visual quality' result.

    Authors: The abstract claim is derived from the comparative results against pure DCP methods, where the hybrid approach shows improved structural preservation in both visual inspection and metrics like SSIM. However, to strengthen the attribution, we will update the abstract and evaluation description to reference the new ablation experiments and clarify that the PDE evolution, guided by DCP, is responsible for the observed detail preservation beyond standard DCP restoration. revision: partial
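The synthetic experiments promised in the first response follow directly from the formation model: given a clean image J and a known depth map d, set t = exp(−β·d) and composite I = J·t + A·(1 − t), then compare DCP's recovered t against the true one. A minimal sketch; the A and β values are illustrative, not from the paper:

```python
import numpy as np

def synthesize_fog(J, depth, A=0.9, beta=1.0):
    """Render a foggy image from clean radiance J (H, W, 3) and a known
    depth map (H, W), so DCP's transmission estimate can be checked
    against the ground-truth t."""
    t = np.exp(-beta * depth)                        # Beer-Lambert attenuation
    I = J * t[..., None] + A * (1.0 - t[..., None])  # haze formation model
    return I, t
```

At depth 0 the output equals the clean image; at large depth it saturates to the atmospheric light A, which is the dense-fog regime where DCP errors are expected to be largest.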

Circularity Check

0 steps flagged

No circularity; hybrid method applies external DCP estimates to guide independent PDE evolution

full rationale

The paper's chain starts from the physical haze formation model, applies the established external Dark Channel Prior to compute atmospheric light A and transmission t(x), then evolves a fourth-order telegraph PDE whose fidelity term is weighted by t(x) and whose diffusion coefficient is edge-adaptive from the same estimates. No claimed result, prediction, or uniqueness statement reduces by construction to a fitted parameter, self-citation, or renamed input; the PDE evolution and convergence criterion are presented as standard numerical steps independent of the DCP outputs. Experimental metrics are applied post-restoration without evidence of tautological fitting.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The approach rests on the physical haze formation model and standard assumptions of PDE theory for existence and stability of solutions; no new entities are introduced.

axioms (2)
  • standard math Existence, uniqueness, and numerical stability of solutions to the fourth-order nonlinear telegraph PDE under the given boundary and fidelity conditions
    Invoked to justify convergence via relative error norm and the claim of improved stability.
  • domain assumption Dark Channel Prior holds sufficiently well to produce usable estimates of atmospheric light and transmission map in the target scenes
    Used to generate the guidance image that weights the fidelity term.
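The convergence criterion the first axiom underwrites is typically a relative error norm between successive iterates; a minimal sketch, with the tolerance value an assumption rather than the paper's:

```python
import numpy as np

def converged(u_new, u_old, tol=1e-4):
    """Stop the PDE evolution when successive iterates stop moving:
    ||u^{n+1} - u^n|| / ||u^n|| < tol (Frobenius norm)."""
    denom = max(np.linalg.norm(u_old), 1e-12)  # guard against a zero iterate
    return np.linalg.norm(u_new - u_old) / denom < tol
```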

pith-pipeline@v0.9.0 · 5514 in / 1347 out tokens · 32078 ms · 2026-05-09T20:33:38.008324+00:00 · methodology

discussion (0)

