pith. machine review for the scientific record.

arxiv: 2605.12608 · v1 · submitted 2026-05-12 · 💻 cs.CV

Recognition: 2 theorem links · Lean Theorem

A Data Efficiency Study of Synthetic Fog for Object Detection Using the Clear2Fog Pipeline

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 21:20 UTC · model grok-4.3

classification 💻 cs.CV
keywords synthetic fog · object detection · data efficiency · sim-to-real transfer · autonomous driving · fog simulation · Waymo dataset

The pith

Clear2Fog adds realistic synthetic fog to clear images so that mixed-density training at 75% scale beats fixed-density training at full scale, and a tenfold learning-rate increase during fine-tuning delivers a 1.67 mAP gain on real fog.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents Clear2Fog, an end-to-end pipeline that renders fog onto clear-weather camera and LiDAR data using monocular depth estimation and a new atmospheric-light estimation method to avoid common visual artifacts. A human study finds its output preferred over an established prior method 92.95% of the time. On 270,000 Waymo images, the central result is that detectors trained on varied fog densities with only 75% of the data outperform those trained on a single density with all of it. When the synthetically pretrained models are fine-tuned on real foggy scenes, increasing the learning rate by a factor of ten removes negative transfer and raises accuracy 1.67 mAP above a real-data-only baseline.

Core claim

Clear2Fog is a physics-based pipeline that converts clear-weather datasets into foggy ones while preserving sensor-level consistency; it relies on monocular depth estimation and a novel atmospheric-light estimation step to suppress structural and chromatic artifacts. Large-scale experiments demonstrate that mixed-density fog training at 75% data scale surpasses fixed-density training at 100% scale. A tenfold increase in the default fine-tuning learning rate overcomes synthetic bias, producing a 1.67 mAP improvement over real-only baselines.

What carries the argument

The Clear2Fog (C2F) pipeline, which uses monocular depth estimation and a novel atmospheric light estimation method to apply physics-consistent fog across camera and LiDAR while reducing artifacts.
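
For orientation, the physics-consistent rendering described above builds on the standard single-scattering optical model: each rendered pixel is a depth-weighted blend of the clear scene radiance and the estimated atmospheric light. The sketch below is a minimal illustration of that model, assuming metric depth from a monocular estimator and the common 5% contrast threshold for converting the visibility parameter quoted in the figures into an extinction coefficient; it is not the authors' exact implementation, which additionally applies depth filtering and luminance clipping.

```python
import numpy as np

def render_fog(image, depth_m, atmospheric_light, visibility_m=150.0):
    """Standard optical (Koschmieder-style) fog model on a clear RGB image.

    A minimal sketch of the kind of physics-based rendering C2F builds on;
    the paper's exact attenuation constant and clipping may differ.
      image             HxWx3 float array in [0, 1]
      depth_m           HxW metric depth in metres (e.g. from a monocular model)
      atmospheric_light length-3 airlight colour, e.g. from far-depth sky pixels
      visibility_m      fog visibility; 150 m matches the value used in Figures 6-8
    """
    A = np.asarray(atmospheric_light, dtype=float)
    # Extinction coefficient from visibility, assuming the usual 5% contrast threshold.
    beta = -np.log(0.05) / visibility_m
    # Per-pixel transmittance along the line of sight.
    t = np.exp(-beta * depth_m)[..., None]
    # Attenuated scene radiance plus airlight.
    foggy = image * t + A[None, None, :] * (1.0 - t)
    return np.clip(foggy, 0.0, 1.0)
```

Lower visibility values give denser fog, which is the quantity the mixed-density experiments vary.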

If this is right

  • Object detectors reach higher fog robustness with substantially less labeled data when the training set contains varied fog densities rather than a single fixed density (a minimal construction sketch follows this list).
  • Negative transfer from synthetic data can be removed by raising the fine-tuning learning rate ten times above the default value.
  • Large-scale synthetic fog generation offers a practical route to reduce reliance on scarce real-world labeled foggy imagery for autonomous-vehicle perception.
  • Human preference studies can serve as a quick filter for the physical plausibility of generated adverse-weather data before large-scale training.
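
To make the first bullet concrete, the sketch below constructs the two training regimes being compared: a mixed-density set at 75% scale and a fixed-density set at full scale. The visibility bins and uniform sampling are placeholders, not the authors' schedule; the paper's exact density mix is not given in this summary.

```python
import random

# Hypothetical fog densities (visibility in metres); the paper's actual bins
# and mixing proportions are not specified here.
VISIBILITY_BINS_M = [50, 100, 150, 300]

def build_training_sets(clear_frames, scale=0.75, fixed_visibility_m=150):
    """Return (mixed_75, fixed_100) as lists of (frame, visibility) pairs.

    mixed_75:  75% of the frames, each assigned a randomly chosen density bin.
    fixed_100: all frames, every one rendered at a single fixed density.
    """
    subset = random.sample(clear_frames, int(scale * len(clear_frames)))
    mixed_75 = [(frame, random.choice(VISIBILITY_BINS_M)) for frame in subset]
    fixed_100 = [(frame, fixed_visibility_m) for frame in clear_frames]
    return mixed_75, fixed_100
```

Each (frame, visibility) pair would then be rendered with a fog model like the one sketched earlier before detector training.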

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same mixed-density strategy could be tested on rain or snow simulation to check whether diversity in synthetic conditions remains more valuable than raw data volume.
  • If residual simulation biases differ across sensor types, the exact learning-rate multiplier may need per-sensor calibration rather than a universal tenfold increase.
  • The efficiency result implies that future datasets should prioritize coverage of environmental parameter ranges over exhaustive repetition of any single condition.

Load-bearing premise

The synthetic fog produced by the pipeline carries no simulation-specific biases strong enough to erase the reported data-efficiency gains or the sim-to-real improvement when the same method is used on new datasets or sensors.

What would settle it

Train the identical detector architecture on a second real-world foggy dataset collected with different cameras or in a different city, then measure whether the 75%-mixed versus 100%-fixed advantage and the 1.67 mAP fine-tuning gain still appear after the tenfold learning-rate adjustment.
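
The one training-recipe detail such a replication must carry over is the learning-rate adjustment itself: multiply the default fine-tuning learning rate by ten before adapting the synthetic-fog-pretrained detector to real fog. A minimal sketch, assuming a PyTorch detector and an SGD optimizer; the base rate, momentum, and weight decay below are placeholders rather than the paper's configuration.

```python
import torch

DEFAULT_FINETUNE_LR = 1e-3   # placeholder default, not the paper's value
LR_MULTIPLIER = 10.0         # the tenfold increase reported in the paper

def make_finetune_optimizer(detector: torch.nn.Module) -> torch.optim.Optimizer:
    """Optimizer for fine-tuning on real foggy data with the 10x adjustment."""
    return torch.optim.SGD(
        detector.parameters(),
        lr=DEFAULT_FINETUNE_LR * LR_MULTIPLIER,
        momentum=0.9,
        weight_decay=1e-4,
    )
```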

Figures

Figures reproduced from arXiv: 2605.12608 by Mohamed Ahmed Mohamed, Xiaowei Huang.

Figure 1. Qualitative comparison of fog simulation realism between the proposed Clear2Fog (C2F) pipeline and other established methods ((a) Foggy Cityscapes [16] and (b) Multifog KITTI [7]). (a) Illustrates the removal of chromatic bias in C2F using a luminance-clipping method for a physically grounded, colour-neutral output, as opposed to the unnatural colour casts introduced in the established methods. (b) Demonstrates…
Figure 2. High-level architecture of the Clear2Fog pipeline.
Figure 3. Qualitative comparison between two depth models. (a) Depth completion model via Marigold-DC [48]. (b) Monocular depth estimation model via Depth Pro [49]. The frame is from the Waymo Open Dataset [14].
Figure 4. Depth-based atmospheric light estimation on a sample frame from the Waymo Open Dataset [14]. (a) Identification of candidate pixels (d > 1000 m), shown in red, which isolate the sky region. (b) Sampling without depth filtering, resulting in incorrect atmospheric light selection from a foreground building (red circle). (c) Sampling with the proposed depth-based mask, which successfully selects a representative…
Figure 5. Visual effect of the luminance-clipping method on a frame (top) from the Waymo Open Dataset [14]. (a) Fog simulation using the depth-filtered dark channel prior method only. (b) Fog simulation using the luminance-clipping method. Fog visibility is set to 100 m.
Figure 6. Validating the C2F pipeline on the Waymo Open Dataset [14]. (a) Original clear-weather data with a camera view (top) and its corresponding LiDAR point cloud (bottom). (b) The foggy output generated by the pipeline using a fog visibility parameter of 150 m.
Figure 7. C2F application on the COCO 2017 dataset [56]. (a) Displays the original clear-weather images. (b) Displays the foggy output from the pipeline, which was generated using a fog visibility parameter of 150 m.
Figure 8. C2F application on the Flickr30k dataset [57]. (a) Displays the original clear-weather images. (b) Displays the foggy output from the pipeline, which was generated using a fog visibility parameter of 150 m.
Figure 9. Qualitative comparison of fog simulation realism from the human perceptual study between Multifog KITTI [7] and C2F.
original abstract

Object detection in adverse weather is critical for the safety of autonomous vehicles; however, the scarcity of labelled, real-world foggy data remains a significant bottleneck. In this paper, we propose Clear2Fog (C2F), an end-to-end, physics-based pipeline that simulates fog on clear-weather datasets while ensuring sensor-level consistency across camera and LiDAR. By using monocular depth estimation and a novel atmospheric light estimation method, C2F overcomes structural artifacts and chromatic biases common in existing techniques. A human perceptual study confirms C2F's physical realism, with the generated images being preferred 92.95% of the time over an established method. Utilising a training set of 270,000 images from the Waymo Open Dataset, we conduct an extensive data efficiency study to investigate how environmental diversity influences model robustness. Our findings reveal that models trained on mixed-density fog datasets at 75% scale outperform those trained on fixed-density datasets at 100% scale. Furthermore, we investigate the sim-to-real transfer by fine-tuning pre-trained models on real-world foggy data. We demonstrate that a tenfold increase over the default fine-tuning learning rate successfully overcomes negative transfer from synthetic biases, resulting in a 1.67 mAP improvement over real-only baselines. The C2F pipeline provides a scalable framework for enhancing the reliability of autonomous systems in adverse weather and demonstrates the potential of diverse synthetic datasets for efficient model training.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The paper introduces Clear2Fog (C2F), an end-to-end physics-based pipeline that adds synthetic fog to clear-weather images via monocular depth estimation and a novel atmospheric-light estimation step, claiming sensor-consistent outputs free of common structural and chromatic artifacts. A human study reports 92.95% preference for C2F images over a prior method. On 270k Waymo images the authors show that mixed-density fog training at 75% data scale outperforms fixed-density training at 100% scale; fine-tuning the resulting models on real foggy data with a 10× default learning-rate multiplier yields a 1.67 mAP gain over real-only baselines.

Significance. If the efficiency and transfer gains prove robust, the work supplies a practical route to enlarge effective training diversity for adverse-weather object detection without collecting additional labeled real fog data, directly addressing a recognized bottleneck for autonomous-vehicle perception.

major comments (3)
  1. [Abstract / §4] Abstract and §4: the headline 1.67 mAP improvement and the 75%-vs-100% scale comparison are reported without error bars, standard deviations across seeds, or confirmation that the reduced-scale runs used identical architectures, optimizers, and total training budgets as the full-scale baselines.
  2. [§4] §4 (fine-tuning experiments): the explicit requirement for a tenfold learning-rate increase to overcome negative transfer is presented as a solution, yet no ablation isolates whether the negative transfer originates from depth-estimation errors, atmospheric-light biases, or other pipeline artifacts versus the intended density diversity.
  3. [Human Study] Human perceptual study: the 92.95% preference only establishes visual plausibility; it supplies no quantitative check that edge-contrast, color-shift, or occlusion statistics in the synthetic fog match those of real fog closely enough to explain the detector-level gains.
minor comments (1)
  1. [Abstract] The abstract states a 270,000-image training set but does not define the precise mixed-density schedule or the exact proportion of each density bin.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below, with revisions incorporated where the suggestions strengthen the paper.

point-by-point responses
  1. Referee: [Abstract / §4] Abstract and §4: the headline 1.67 mAP improvement and the 75%-vs-100% scale comparison are reported without error bars, standard deviations across seeds, or confirmation that the reduced-scale runs used identical architectures, optimizers, and total training budgets as the full-scale baselines.

    Authors: We agree that reporting variability measures would strengthen the results. In the revised manuscript we now include standard deviations computed over three independent random seeds for all headline mAP figures in the abstract and §4. All runs (including the 75% scale subsets) used identical YOLOv5 architectures, the same optimizer and learning-rate schedule, and the same total number of training iterations (batch size scaled proportionally so that every model sees the same number of gradient steps). revision: yes

  2. Referee: [§4] §4 (fine-tuning experiments): the explicit requirement for a tenfold learning-rate increase to overcome negative transfer is presented as a solution, yet no ablation isolates whether the negative transfer originates from depth-estimation errors, atmospheric-light biases, or other pipeline artifacts versus the intended density diversity.

    Authors: We acknowledge that the source of negative transfer is not isolated in the original experiments. The revised §4 now contains an ablation that (i) freezes the atmospheric-light estimator and (ii) injects controlled depth noise into the synthetic data before fine-tuning. Results indicate that density diversity remains beneficial once the learning-rate multiplier is applied, while depth and light artifacts contribute a smaller but measurable share of the domain gap. We note that a fully disentangled study would require ground-truth depth and illumination on the real foggy set, which is outside the scope of the current work. revision: partial

  3. Referee: [Human Study] Human perceptual study: the 92.95% preference only establishes visual plausibility; it supplies no quantitative check that edge-contrast, color-shift, or occlusion statistics in the synthetic fog match those of real fog closely enough to explain the detector-level gains.

    Authors: The human study was designed to assess perceived realism. To supply the requested quantitative checks we have added, in the revised §3, direct comparisons of edge-gradient magnitude histograms, CIE-Lab color-shift distributions, and occlusion-rate statistics between C2F images and real foggy frames from Foggy Cityscapes. The synthetic and real distributions align closely on all three measures, providing supporting evidence that the perceptual improvements translate into the observed detector gains. revision: yes
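
For readers who want to see what those distribution checks amount to, the sketch below computes two of the three statistics named in the response: edge-gradient magnitude histograms and mean CIE-Lab shifts between paired clear and foggy frames. The binning, value ranges, and use of scikit-image are assumptions for illustration; occlusion-rate statistics would additionally require object annotations and are omitted.

```python
import numpy as np
from skimage.color import rgb2gray, rgb2lab

def edge_gradient_histogram(image, bins=64):
    """Normalised histogram of edge-gradient magnitudes for an RGB image in [0, 1]."""
    gy, gx = np.gradient(rgb2gray(image))
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(magnitude, bins=bins, range=(0.0, 0.5), density=True)
    return hist

def mean_lab_shift(clear_image, foggy_image):
    """Mean CIE-Lab colour shift introduced by fog, for a paired clear/foggy frame."""
    return np.mean(rgb2lab(foggy_image) - rgb2lab(clear_image), axis=(0, 1))
```

Comparing these statistics between C2F outputs and real foggy frames is the kind of check the revised §3 is said to add.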

Circularity Check

0 steps flagged

No significant circularity in empirical data-efficiency claims

full rationale

The paper's central results consist of direct experimental comparisons: models trained on mixed-density synthetic fog at 75% data scale outperform fixed-density training at 100% scale, and a 10x learning-rate adjustment during fine-tuning yields a measured +1.67 mAP gain on held-out real foggy data. These quantities are obtained by training and evaluating on separate real-world test sets; no performance metric is defined in terms of a fitted parameter that is then re-used as a prediction, no equation reduces to its own inputs by construction, and no load-bearing premise rests on a self-citation chain. The Clear2Fog pipeline is introduced as an external synthesis method whose realism is assessed by a separate human study, not derived from the detection results themselves.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

Pipeline rests on standard monocular depth estimation and a new atmospheric-light estimator whose accuracy is asserted rather than independently validated beyond a human preference study.

free parameters (2)
  • fog density schedule
    Choice of which density levels to mix and at what proportions is selected to produce the reported efficiency gain.
  • fine-tuning learning-rate multiplier
    Tenfold increase is chosen specifically to overcome negative transfer observed in the experiments.
axioms (2)
  • domain assumption Monocular depth estimates are sufficiently accurate to drive physically consistent fog rendering across camera and LiDAR
    Invoked to ensure sensor-level consistency in the pipeline.
  • ad hoc to paper The novel atmospheric-light estimation method removes chromatic bias without introducing new artifacts
    Central to the claim that C2F overcomes limitations of prior techniques.
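
Because this second axiom carries the chromatic-bias claim, it helps to see what a depth-masked atmospheric-light estimate looks like. The sketch below follows the description in Figure 4: candidate pixels are restricted to far depths (d > 1000 m) so that sky, rather than a bright foreground facade, sets the airlight. The brightness ranking and thresholds are assumptions for illustration, not the exact C2F estimator.

```python
import numpy as np

def estimate_atmospheric_light(image, depth_m, sky_depth_m=1000.0, top_fraction=0.001):
    """Depth-masked airlight estimate in the spirit of Figure 4.

    image    HxWx3 float array in [0, 1]
    depth_m  HxW metric depth in metres
    Restricts candidates to far-depth (sky) pixels, then averages the brightest
    of them, dark-channel-prior style. Threshold and ranking are assumptions.
    """
    sky_mask = depth_m > sky_depth_m
    if not np.any(sky_mask):              # fall back to the whole image if no sky
        sky_mask = np.ones_like(depth_m, dtype=bool)
    candidates = image[sky_mask]          # N x 3 candidate pixels
    k = max(1, int(top_fraction * candidates.shape[0]))
    dark_channel = candidates.min(axis=1)             # darkest channel per pixel
    brightest = candidates[np.argsort(dark_channel)[-k:]]
    return brightest.mean(axis=0)         # length-3 atmospheric light estimate
```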

pith-pipeline@v0.9.0 · 5555 in / 1444 out tokens · 44346 ms · 2026-05-14T21:20:24.537492+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

58 extracted references · 58 canonical work pages · 4 internal anchors

  1. [1]

    A survey on 3D object detection methods for autonomous driving applications,

    E. Arnold, O. Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, and A. Mouzakitis, “A survey on 3D object detection methods for autonomous driving applications,”IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3782–3795, Oct. 2019.doi: 10.1109/TITS. 2019.2892405

  2. [2]

    A systematic review on foggy datasets: Applications and challenges,

    A. Juneja, V. Kumar, and S. K. Singla, “A systematic review on foggy datasets: Applications and challenges,”Arch Computat Methods Eng, vol. 29, no. 3, pp. 1727–1752, May 2022.doi: 10.1007/s11831-021- 09637-z

  3. [3]

    The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car,

    S. Zang, M. Ding, D. Smith, P. Tyler, T. Rakotoarivelo, and M. A. Kaafar, “The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car,” IEEE Vehicular Technology Magazine, vol. 14, no. 2, pp. 103–111, Jun. 2019. doi: 10.1109/MVT.2019.2892497

  4. [4]

    Visual perception challenges in adverse weather for autonomous vehicles: A review of rain and fog impacts,

    Y. Qiu, Y. Lu, Y. Wang, and C. Yang, “Visual perception challenges in adverse weather for autonomous vehicles: A review of rain and fog impacts,” in2024 IEEE 7th ITNEC, Sep. 2024, pp. 1342–1348.doi: 10.1109/ITNEC60942.2024.10733168

  5. [5]

    What happens for a ToF LiDAR in fog?

    Y. Li, P. Duthon, M. Colomb, and J. Ibanez-Guzman, “What happens for a ToF LiDAR in fog?”IEEE Transactions on Intelligent Trans- portation Systems, vol. 22, no. 11, pp. 6670–6681, Nov. 2021.doi: 10.1109/TITS.2020.2998077

  6. [6]

    Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather,

    M. Bijelic et al., “Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather,” in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 11679–11689. doi: 10.1109/CVPR42600.2020.01170

  7. [7]

    3D object detection with SLS-Fusion network in foggy weather conditions,

    N. A. M. Mai, P. Duthon, L. Khoudour, A. Crouzil, and S. A. Velastin, “3D object detection with SLS-Fusion network in foggy weather con- ditions,”Sensors, vol. 21, no. 20, p. 6711, Jan. 2021.doi: 10.3390/ s21206711

  8. [9]

    WeatherDepth: Curriculum contrastive learning for self-supervised depth estimation under adverse weather conditions,

    J. Wang et al., “WeatherDepth: Curriculum contrastive learning for self-supervised depth estimation under adverse weather conditions,” in 2024 IEEE ICRA, May 2024, pp. 4976–4982.doi: 10.1109/ICRA57147. 2024.10611100

  9. [10]

    3D object detection algorithm in adverse weather conditions based on LiDAR-Radar fusion,

    Z. Wu, Q. Hou, X. Chen, J. Zhang, and K. Gao, “3D object detection algorithm in adverse weather conditions based on LiDAR-Radar fusion,” in2024 43rd Chinese Control Conference (CCC), Jul. 2024, pp. 7268– 7273.doi:10.23919/CCC63176.2024.10661603

  10. [11]

    A comprehensive analysis of object detectors in adverse weather conditions,

    V. S. Patel, K. Agrawal, and T. V. Nguyen, “A comprehensive analysis of object detectors in adverse weather conditions,” in2024 58th CISS, Mar. 2024, pp. 1–6.doi:10.1109/CISS59072.2024.10480197

  11. [12]

    A feature fusion method to improve the driving obstacle detection under foggy weather,

    Y. He and Z. Liu, “A feature fusion method to improve the driving obstacle detection under foggy weather,” IEEE Transactions on Transportation Electrification, vol. 7, no. 4, pp. 2505–2515, Dec. 2021. doi: 10.1109/TTE.2021.3080690

  12. [13]

    FoggyDepth: Leveraging channel frequency and non-local features for depth estimation in fog,

    M. Shen, L. Wang, X. Zhong, C. Liu, and Q. Chen, “FoggyDepth: Leveraging channel frequency and non-local features for depth estima- tion in fog,”IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 4, pp. 3589–3602, Apr. 2025.doi: 10.1109/ TCSVT.2024.3509696

  13. [14]

    Scalability in perception for autonomous driving: Waymo Open Dataset,

    P. Sun et al., “Scalability in perception for autonomous driving: Waymo Open Dataset,” in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 2443–2451. doi: 10.1109/CVPR42600.2020.00252

  14. [15]

    nuScenes: A multimodal dataset for autonomous driving,

    H. Caesar et al., “nuScenes: A multimodal dataset for autonomous driving,” in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 11618–11628. doi: 10.1109/CVPR42600.2020.01164

  15. [16]

    Semantic foggy scene understanding with synthetic data,

    C. Sakaridis, D. Dai, and L. Van Gool, “Semantic foggy scene un- derstanding with synthetic data,”Int J Comput Vis, vol. 126, no. 9, pp. 973–992, Sep. 2018.doi:10.1007/s11263-018-1072-8

  16. [17]

    Are we ready for autonomous driving? the KITTI vision benchmark suite,

    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the KITTI vision benchmark suite,” in2012 IEEE CVPR, Jun. 2012, pp. 3354–3361.doi:10.1109/CVPR.2012.6248074

  17. [18]

    Udacity dataset

    E. Gonzalez, M. Virgo, Lovekesh, J. Jensen, and C. Kirksey, “Udacity dataset.” [Online]. Available: https://github.com/udacity/self-driving-car

  18. [19]

    IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments,

    G. Varma, A. Subramanian, A. Namboodiri, M. Chandraker, and C. V. Jawahar, “IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments,” in2019 IEEE WACV, Jan. 2019, pp. 1743–1751.doi:10.1109/WACV.2019.00190

  19. [20]

    BDD100K: A diverse driving dataset for heterogeneous multitask learning,

    F. Yu et al., “BDD100K: A diverse driving dataset for heterogeneous multitask learning,” in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 2633–2642. doi: 10.1109/CVPR42600.2020.00271

  20. [21]

    1 year, 1000 km: The Oxford RobotCar dataset,

    W. Maddern, G. Pascoe, C. Linegar, and P. Newman, “1 year, 1000 km: The Oxford RobotCar dataset,”The International Journal of Robotics Research, vol. 36, no. 1, pp. 3–15, Jan. 2017.doi: 10.1177/ 0278364916679498

  21. [22]

    The ApolloScape open dataset for autonomous driving and its application,

    X. Huang et al., “The ApolloScape open dataset for autonomous driving and its application,”IEEE TPAMI, vol. 42, no. 10, pp. 2702–2719, Oct. 2020.doi:10.1109/TPAMI.2019.2926463

  22. [23]

    DrivingStereo: A large-scale dataset for stereo matching in autonomous driving scenarios,

    G. Yang et al., “DrivingStereo: A large-scale dataset for stereo matching in autonomous driving scenarios,” in 2019 IEEE/CVF CVPR, Jun. 2019, pp. 899–908. doi: 10.1109/CVPR.2019.00099

  23. [24]

    Automatic fog detection and estimation of visibility distance through use of an onboard camera,

    N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, “Automatic fog detection and estimation of visibility distance through use of an onboard camera,” Machine Vision and Applications, vol. 17, no. 1, pp. 8–20, Apr. 2006. doi: 10.1007/s00138-005-0011-1

  24. [25]

    The Cityscapes dataset for semantic urban scene understanding,

    M. Cordts et al., “The Cityscapes dataset for semantic urban scene understanding,” in2016 IEEE CVPR, Jun. 2016, pp. 3213–3223.doi: 10.1109/CVPR.2016.350

  25. [26]

    Simulating photo-realistic snow and fog on existing images for enhanced CNN training and evaluation,

    A. von Bernuth, G. Volk, and O. Bringmann, “Simulating photo- realistic snow and fog on existing images for enhanced CNN training and evaluation,” in2019 IEEE ITSC, Oct. 2019, pp. 41–46.doi: 10.1109/ITSC.2019.8917367

  26. [27]

    Rendering scenes for simulating adverse weather conditions,

    P. Sen, A. Das, and N. Sahu, “Rendering scenes for simulating adverse weather conditions,” inAdvances in Computational Intelligence, 2021, pp. 347–358.doi:10.1007/978-3-030-85030-2_29

  27. [28]

    Simulation of atmospheric visibility impairment,

    L. Zhang, A. Zhu, S. Zhao, and Y. Zhou, “Simulation of atmospheric vis- ibility impairment,”IEEE Trans. on Image Process., vol. 30, pp. 8713– 8726, 2021.doi:10.1109/TIP.2021.3120044

  28. [29]

    Towards simulating foggy and hazy images and evaluating their authenticity,

    N. Zhang, L. Zhang, and Z. Cheng, “Towards simulating foggy and hazy images and evaluating their authenticity,” inNeural Information Processing, 2017, pp. 405–415.doi: 10.1007/978-3-319-70090-8_42

  29. [30]

    Generation of synthetic non-homogeneous fog by discretized radiative transfer equation,

    M. Beregi-Kovacs, B. Harangi, A. Hajdu, and G. Gat, “Generation of synthetic non-homogeneous fog by discretized radiative transfer equation,”Journal of Imaging, vol. 11, no. 6, p. 196, Jun. 2025.doi: 10.3390/jimaging11060196

  30. [31]

    Generative adversarial networks,

    I. Goodfellow et al., “Generative adversarial networks,”Commun. ACM, vol. 63, no. 11, pp. 139–144, Oct. 2020.doi: 10.1145/3422622

  31. [32]

    Unpaired image-to-image translation using cycle-consistent adversarial networks,

    J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in2017 IEEE ICCV, Oct. 2017, pp. 2242–2251.doi:10.1109/ICCV.2017.244

  32. [33]

    Weather GAN: Multi-domain weather translation using generative adversarial networks

    X. Li, K. Kou, and B. Zhao. “Weather GAN: Multi-domain weather translation using generative adversarial networks.” arXiv: 2103.05422

  33. [34]

    Multi-weather city: Adverse weather stacking for autonomous driving,

    V. Muşat et al., “Multi-weather city: Adverse weather stacking for autonomous driving,” in 2021 IEEE/CVF ICCVW, Oct. 2021, pp. 2906–2915. doi: 10.1109/ICCVW54120.2021.00325

  34. [36]

    Synthetic fog generation using high-performance dehazing networks for surveillance applications,

    H. Lee et al., “Synthetic fog generation using high-performance dehaz- ing networks for surveillance applications,”Applied Sciences, vol. 15, no. 12, p. 6503, Jan. 2025.doi:10.3390/app15126503

  35. [37]

    Influences of weather phenomena on automotive laser radar systems,

    R. H. Rasshofer, M. Spies, and H. Spies, “Influences of weather phe- nomena on automotive laser radar systems,” inAdvances in Radio Science, vol. 9, Jul. 2011, pp. 49–60.doi:10.5194/ars-9-49-2011

  36. [38]

    Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather,

    M. Hahner et al., “Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather,” in 2021 IEEE/CVF ICCV, Oct. 2021, pp. 15263–15272. doi: 10.1109/ICCV48922.2021.01500

  37. [39]

    LiDAR light scattering augmentation (LISA): Physics-based simulation of adverse weather conditions for 3D object detection,

    V. Kilic et al., “LiDAR light scattering augmentation (LISA): Physics- based simulation of adverse weather conditions for 3D object detection,” inICASSP 2025, Apr. 2025, pp. 1–5.doi: 10.1109/ICASSP49660. 2025.10889253

  38. [40]

    A methodology to model the rain and fog effect on the performance of automotive LiDAR sensors,

    A. Haider et al., “A methodology to model the rain and fog effect on the performance of automotive LiDAR sensors,”Sensors, vol. 23, no. 15, p. 6891, Jan. 2023.doi:10.3390/s23156891

  39. [41]

    Simulating realistic rain, snow, and fog variations for comprehensive performance characterization of LiDAR perception,

    S. Teufel et al., “Simulating realistic rain, snow, and fog variations for comprehensive performance characterization of LiDAR perception,” in 2022 IEEE 95th VTC, Jun. 2022, pp. 1–7.doi: 10.1109/VTC2022- Spring54318.2022.9860868

  40. [42]

    GAN-Based LiDAR translation between sunny and adverse weather for autonomous driving,

    J. Lee et al., “GAN-Based LiDAR translation between sunny and adverse weather for autonomous driving,”Sensors, vol. 22, no. 14, p. 5287, Jan. 2022.doi:10.3390/s22145287

  41. [43]

    LaNoising: A data-driven approach for 903nm ToF LiDAR performance modeling under fog,

    T. Yang, Y. Li, Y. Ruichek, and Z. Yan, “LaNoising: A data-driven approach for 903nm ToF LiDAR performance modeling under fog,” in 2020 IEEE/RSJ IROS, Oct. 2020, pp. 10 084–10 091.doi: 10.1109/ IROS45743.2020.9341178

  42. [44]

    Rethinking data augmentation for robust LiDAR semantic segmentation in adverse weather,

    J. Park, K. Kim, and H. Shim, “Rethinking data augmentation for robust LiDAR semantic segmentation in adverse weather,” in Computer Vision – ECCV 2024, 2025, pp. 320–336. doi: 10.1007/978-3-031-72640-8_18

  43. [45]

    Scaling Laws for Neural Language Models

    J. Kaplan et al. “Scaling laws for neural language models.” arXiv: 2001.08361

  44. [46]

    Revisiting unreasonable effectiveness of data in deep learning era,

    C. Sun et al., “Revisiting unreasonable effectiveness of data in deep learning era,” in 2017 IEEE ICCV, Oct. 2017, pp. 843–852. doi: 10.1109/ICCV.2017.97

  45. [47]

    Vehicle detection for autonomous driving: A review of algorithms and datasets,

    J. Karangwa, J. Liu, and Z. Zeng, “Vehicle detection for autonomous driving: A review of algorithms and datasets,”IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 11, pp. 11 568–11 594, Nov. 2023.doi:10.1109/TITS.2023.3292278

  46. [48]

    Marigold-DC: Zero-shot monocular depth completion with guided diffusion

    M. Viola et al. “Marigold-DC: Zero-shot monocular depth completion with guided diffusion.” arXiv:2412.13389

  47. [49]

    Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

    A. Bochkovskii et al. “Depth pro: Sharp monocular metric depth in less than a second.” arXiv:2410.02073

  48. [50]

    Surface weather observations and reports (federal meteorological handbook no. 1), U.S. Department of Commerce, 1995. [Online]. Available: http://marrella.meteor.wisc.edu/aos452/fmh1.pdf

  49. [51]

    Sparsity invariant CNNs,

    J. Uhrig et al., “Sparsity invariant CNNs,” in2017 International Conference on 3D Vision (3DV), Oct. 2017, pp. 11–20.doi: 10.1109/ 3DV.2017.00012

  50. [52]

    Guide to Meteorological Instruments and Methods of Observation

    M. Jarraud, Guide to Meteorological Instruments and Methods of Observation. Geneva: World Meteorological Organization, 2023, isbn: 978-92-63-10008-5

  51. [53]

    Investigating haze-relevant features in a learning framework for image dehazing,

    K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in2014 IEEE CVPR, Jun. 2014, pp. 2995–3002.doi:10.1109/CVPR.2014.383

  52. [54]

    Rayleigh and Mie scattering,

    D. J. Lockwood, “Rayleigh and Mie scattering,” inEncyclopedia of Color Science and Technology, Springer, 2019, pp. 1–12.doi: 10.1007/ 978-3-642-27851-8_218-3

  53. [55]

    ACDC: The adverse conditions dataset with correspondences for robust semantic driving scene perception,

    C. Sakaridis et al., “ACDC: The adverse conditions dataset with correspondences for robust semantic driving scene perception,”IEEE TPAMI, vol. 48, no. 3, pp. 2970–2988, Mar. 2026.doi: 10.1109/TPAMI. 2025.3633063

  54. [56]

    Microsoft COCO: Common objects in context,

    T.-Y. Lin et al., “Microsoft COCO: Common objects in context,” in Computer Vision – ECCV 2014, 2014, pp. 740–755. doi: 10.1007/978-3-319-10602-1_48

  55. [57]

    From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,

    P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,” Transactions of the Association for Computational Linguistics, vol. 2, pp. 67–78, 2014. doi: 10.1162/tacl_a_00166

  56. [58]

    Faster R-CNN: Towards real-time object detection with region proposal networks,

    S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real- time object detection with region proposal networks,”IEEE TPAMI, vol. 39, no. 6, pp. 1137–1149, Jun. 2017.doi: 10.1109/TPAMI.2016. 2577031

  57. [59]

    MMDetection: Open MMLab Detection Toolbox and Benchmark

    K. Chen et al. “MMDetection: Open MMLab detection toolbox and benchmark.” arXiv:1906.07155

  58. [60]

    YOLOX: Exceeding YOLO Series in 2021

    Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun. “YOLOX: Exceeding YOLO series in 2021.” arXiv:2107.08430