pith. machine review for the scientific record.

arxiv: 2604.23814 · v1 · submitted 2026-04-26 · 💻 cs.CV · cs.AI

Recognition: unknown

Mapping License Plate Recoverability Under Extreme Viewing Angles for Opportunistic Urban Sensing

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 06:27 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords license plate recognition · recoverability maps · opportunistic sensing · image restoration · extreme viewing angles · degradation parameters · urban cameras · boundary analysis

The pith

Recoverability maps show that sensing geometry, rather than model architecture, sets the limit on license plate recovery: the best model recovers about 93 percent of the swept extreme-viewing parameter space.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces recoverability maps as a task-agnostic way to measure the boundary between recoverable and unrecoverable degradation in images for secondary tasks like license plate recognition. It does so by densely sampling synthetic combinations of viewing angle, resolution, noise, and other camera artifacts, then summarizing the viable region with boundary area-under-curve and a reliability score that tracks failure frequency and severity. When the maps are generated for four different restoration networks, the recoverable fraction of parameter space reaches roughly 93 percent for the strongest model, and the other models perform similarly. This pattern indicates that the physical constraints of extreme urban viewpoints set the practical limit more than the specific choice of AI architecture.

Core claim

Recoverability maps built from a dense synthetic sweep of degradation parameters and summarized by boundary area-under-curve plus reliability score demonstrate that the best restoration model recovers approximately 93 percent of the parameter space for license plate recognition under extreme angles and realistic camera artifacts, with comparable results across U-Net, Restormer, Pix2Pix, and SR3 models indicating that sensing geometry rather than architecture determines the recovery limit.

What carries the argument

Recoverability maps, which quantify the recoverable fraction of a synthetic degradation-parameter space by combining boundary area-under-curve estimates with a reliability score that captures failure frequency and severity.
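To make the two summary measures concrete, here is a minimal sketch, assuming boundary AUC is the recoverable fraction of the swept grid and reliability multiplies failure frequency by mean failure severity inside that region; the grid, the 0.9 OCR threshold, and the accuracy surface are invented for illustration and are not the paper's exact definitions.

```python
# Minimal sketch of the two summary measures, under assumed definitions:
# boundary AUC as the recoverable fraction of the swept grid, reliability as
# failure frequency times mean failure severity inside that region.
import numpy as np

def boundary_auc(region: np.ndarray) -> float:
    """Fraction of the degradation-parameter grid marked recoverable."""
    return float(region.mean())

def reliability_score(ocr_accuracy: np.ndarray, threshold: float = 0.9) -> float:
    """Failure frequency times mean failure severity (shortfall below threshold)."""
    failures = ocr_accuracy < threshold
    if not failures.any():
        return 0.0
    return float(failures.mean() * (threshold - ocr_accuracy[failures]).mean())

# Toy sweep: azimuth x elevation grid with accuracy decaying at oblique angles.
alpha, beta = np.meshgrid(np.linspace(0, 80, 81), np.linspace(0, 80, 81))
mean_acc = np.clip(1.6 - (alpha + beta) / 160.0, 0.0, 1.0)
region = mean_acc >= 0.9                       # boundary from mean behaviour

# Noisy per-trial outcomes at the points inside the recoverable region.
rng = np.random.default_rng(0)
trials = np.clip(mean_acc[region] + rng.normal(0, 0.05, (20, int(region.sum()))), 0, 1)

print(f"boundary AUC ~ {boundary_auc(region):.2f}")
print(f"reliability  ~ {reliability_score(trials):.4f}")
```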

If this is right

  • The maps supply a concrete criterion for deciding which existing urban cameras can be repurposed for license-plate tasks without additional hardware.
  • Because recovery rates remain similar across architectures, further gains from model improvements are expected to be marginal compared with changes in camera placement or resolution.
  • High-failure regions identified on the maps point to specific angle and resolution combinations where installing additional sensors would produce the largest increase in usable opportunistic data.
  • The same synthetic-sweep approach can be reused to evaluate recoverability for other secondary inference tasks performed on degraded urban imagery.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Urban infrastructure planners could consult these maps when siting new cameras to enlarge the fraction of viewpoints that support multiple secondary uses.
  • If the synthetic model aligns with reality, the remaining roughly 7 percent of unrecoverable parameter space implies that multi-view or higher-resolution complementary sensors will still be needed for full coverage.
  • Adding motion blur and temporal degradation factors to the parameter sweep would test whether the current maps underestimate real-world failure rates for moving vehicles (see the sketch after this list).
  • The observed dominance of geometry over architecture suggests that theoretical bounds derived from projective geometry alone could predict the recoverable fraction without training any networks.
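A hedged sketch of the motion-blur extension proposed in the third bullet, using an assumed horizontal box-blur kernel and invented grid values (the paper's sweep covers angles, resolution, and noise, not blur length):

```python
# Hypothetical extension of the degradation sweep with a motion-blur axis.
# The horizontal box-blur kernel and grid ranges are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def motion_blur(img: np.ndarray, length: int) -> np.ndarray:
    """Approximate linear motion during exposure with a horizontal box blur."""
    if length <= 1:
        return img
    kernel = np.full((1, length), 1.0 / length)
    return convolve(img, kernel, mode="nearest")

# New sweep axis alongside the existing (alpha, beta) angle grid.
blur_lengths = [1, 3, 5, 9, 15]                     # pixels travelled during exposure
plate = np.random.default_rng(1).random((64, 256))  # stand-in plate image

for length in blur_lengths:
    degraded = motion_blur(plate, length)
    # ...render at each (alpha, beta), restore, run OCR, record success...
    sharpness = np.abs(np.diff(degraded, axis=1)).mean()
    print(f"blur length {length:2d}px -> gradient energy {sharpness:.4f}")
```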

Load-bearing premise

The synthetic sweep of degradation parameters accurately models real-world extreme viewing angles and camera artifacts encountered in opportunistic urban sensing.

What would settle it

A large collection of real license-plate images captured from extreme angles by actual urban cameras, with measured recovery success rates compared against the synthetic maps' predicted 93-percent recoverable fraction, would falsify the claim if real-world performance deviates substantially downward.
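One hedged way to operationalize that test, with invented sample counts and a simple Wald binomial interval (the paper specifies no statistical procedure for the comparison):

```python
# Sketch of the falsification test: compare a measured real-world recovery
# rate against the synthetic map's ~93% prediction. Counts are hypothetical.
import math

def recovery_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided ~95% Wald interval for the true recovery rate."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

predicted = 0.93                     # synthetic-map recoverable fraction
successes, n = 841, 1000             # hypothetical real-camera outcomes
lo, hi = recovery_ci(successes, n)
print(f"real rate {successes / n:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("claim challenged" if predicted > hi else "consistent with prediction")
```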

Figures

Figures reproduced from arXiv: 2604.23814 by Alexander Apartsin, Igor Adamenko, Orpaz Ben Aharon, Yehudit Aperstein.

Figure 1. Opportunistic sensing in a smart city environment. Multiple imaging sensors deployed for unrelated purposes (ATM machine cameras, body-worn cameras, vehicle dashcams, pole-mounted CCTV, and handheld smartphones) incidentally capture a passing vehicle at diverse, uncontrolled viewing angles. This paper asks at which of these angles deep-learning image restoration can still recover a readable license plate … view at source ↗
Figure 3. Synthetic training-data construction. (a) Viewing geometry: α is the azimuthal (lateral) rotation about the vertical axis, β the elevational rotation about the horizontal axis. (b)–(e) Pipeline stages: a clean plate is subjected to a 3D rotation R(α, β) and perspective projection Π, then realistic camera artifacts (blur, colour jitter, JPEG compression), and finally dewarped with R⁻¹ and resized to 256 ×… view at source ↗
Figure 4. Test-split PSNR (dB, top row) and SSIM (bottom row) per model on the Standard and Extreme datasets. Lollipops show deviation from the U-Net baseline (dashed line). Restormer consistently leads; Diffusion-SR3 consistently lags. Absolute PSNR drops by 3–4 dB on the Extreme dataset, reflecting its heavier emphasis on oblique angles. view at source ↗
Figure 5. Dataset-shift sensitivity of boundary-AUC (left) and reliability score (right, log scale) for each model as the training distribution shifts from Standard to Extreme. Discriminative models (U-Net variants, Restormer) are essentially flat in AUC and tightly grouped in reliability, indicating stable performance. Diffusion-SR3 shows a dramatic increase (0.572 → 1.124), confirming hallucination sensitivity to angle-distributi… view at source ↗
Figure 6. Plate-level PSNR vs. plate-level OCR accuracy for Restormer (left, blue) and Diffusion-SR3 (right, red) on the Standard dataset. Solid lines show the exact linear fits (Eqs. 10–11). Binned means (circles) and one-standard-deviation error bars confirm a tight, low-variance relationship across the full PSNR range. Horizontal dashed line marks the OCR threshold. Each additional 1 dB of PSNR yields approximat… view at source ↗
Figure 8. Qualitative plate restoration examples at three representative angle regimes of increasing difficulty: good recovery, single-axis extreme; partial recovery, mixed extreme; and failure, double extreme. Rows show each model's output; the first two rows (distorted input and ground truth) are shared reference. Discriminative models (U-Net variants, Restormer) produce legible digit sequences in the good- and p… view at source ↗
original abstract

Urban environments contain many imaging sensors built for specific purposes, including ATM, body-worn, CCTV, and dashboard cameras. Under the opportunistic sensing paradigm, these sensors can be repurposed for secondary inference tasks such as license plate recognition. Yet objects of interest in such imagery are often noisy, low-resolution, and captured from extreme viewpoints. Recent advances in AI-based restoration can recover useful information even from severely degraded images. A central challenge is determining which distortion parameters allow reliable recovery and which lead to inference failure. This paper introduces recoverability maps, a task-agnostic method for quantifying this boundary. The method combines a dense synthetic sweep of degradation parameters with two summary measures: boundary area-under-curve, which estimates the recoverable fraction of the parameter space, and a reliability score, which captures the frequency and severity of failures within that region. We demonstrate the method on license plate recognition from highly angled views under realistic camera artifacts. Several restoration architectures are trained and evaluated, including U-Net, Restormer, Pix2Pix, and SR3 diffusion. The best model recovers about 93% of the parameter space. Similar results across models suggest that sensing geometry, rather than architecture, sets the limit of recovery.
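The abstract and the Figure 3 caption together outline the synthetic construction: rotate a clean plate by R(α, β), project it, apply camera artifacts, then dewarp with R⁻¹ and resize. Below is a minimal sketch under simplifying assumptions; the pure-rotation homography H = K·R·K⁻¹, the artifact parameters, and the 64-pixel output height (the caption truncates after "256 ×") are guesses rather than the paper's exact model.

```python
# A minimal sketch of the Figure 3 pipeline, assuming a pure-rotation
# homography H = K R K^-1 and generic artifact settings. The paper's exact
# projection model, artifact parameters, and output height are not given here.
import numpy as np
import cv2

def rotation(alpha_deg: float, beta_deg: float) -> np.ndarray:
    """R(alpha, beta): azimuth about the vertical axis, elevation about the horizontal."""
    a, b = np.radians([alpha_deg, beta_deg])
    ry = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
    rx = np.array([[1, 0, 0], [0, np.cos(b), -np.sin(b)], [0, np.sin(b), np.cos(b)]])
    return ry @ rx

def degrade_and_dewarp(plate: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    h, w = plate.shape[:2]
    K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=float)
    H = K @ rotation(alpha, beta) @ np.linalg.inv(K)        # rotate the plate plane
    warped = cv2.warpPerspective(plate, H, (w, h))          # perspective projection
    blurred = cv2.GaussianBlur(warped, (5, 5), 1.5)         # artifact: optical blur
    _, enc = cv2.imencode(".jpg", blurred, [cv2.IMWRITE_JPEG_QUALITY, 40])
    noisy = cv2.imdecode(enc, cv2.IMREAD_GRAYSCALE)         # artifact: JPEG compression
    back = cv2.warpPerspective(noisy, np.linalg.inv(H), (w, h))  # dewarp with R^-1
    return cv2.resize(back, (256, 64))                      # 256-wide training input

plate = np.full((64, 256), 255, np.uint8)                   # stand-in clean plate
cv2.putText(plate, "AB 123 CD", (10, 45), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 0, 3)
pair_input = degrade_and_dewarp(plate, alpha=40, beta=20)
print(pair_input.shape)  # (64, 256): degraded input aligned with the clean target
```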

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces recoverability maps as a task-agnostic method to quantify the boundary of reliable license plate recovery from extreme viewing angles and camera artifacts in opportunistic urban sensing. It performs a dense synthetic sweep over degradation parameters, trains and evaluates U-Net, Restormer, Pix2Pix, and SR3 restoration models, and reports summary statistics including boundary AUC (approximately 93% for the best model) and a reliability score. The central claim is that cross-model similarity indicates sensing geometry, rather than architecture, primarily limits recovery.

Significance. If the synthetic degradation model is representative, the recoverability-map framework offers a practical way to assess feasibility of secondary tasks on existing urban sensors and could inform camera deployment decisions. The multi-architecture evaluation and dense parameter sweep are strengths that provide evidence against architecture-specific bottlenecks within the tested regime.

major comments (3)
  1. [§4] §4 (Experiments): The headline 93% recovery figure and boundary AUC are presented without error bars, run-to-run variance, exact integration limits over the parameter space, or the precise definition of how AUC is computed from the success/failure surface; these omissions make the quantitative claim difficult to interpret or reproduce.
  2. [§3] §3 (Method) and §4: The inference that geometry rather than architecture sets the limit rests on the assumption that all four models were trained to equivalent convergence on the synthetic distribution; no training curves, epoch counts, validation losses, or capacity ablations are reported to exclude under-training as an alternative explanation for the observed similarity.
  3. [§5] §5 (Discussion): The generalizability claim for opportunistic urban sensing depends on the synthetic sweep faithfully reproducing real distributions of motion blur, JPEG artifacts, lens distortion, and lighting; no real-world validation set, cross-dataset comparison, or sensitivity analysis to omitted factors is provided, which is load-bearing for the geometry-limit conclusion.
minor comments (2)
  1. [Abstract] Abstract: The phrase 'boundary area-under-curve' is introduced without a forward reference to its exact definition or computation in the methods section.
  2. [Title] Title: The hyphen in 'Oppor-tunistic' appears to be an artifact of line breaking and should be removed for cleanliness.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thorough review and constructive feedback on our manuscript. We address each of the major comments below and outline the revisions we will make to strengthen the paper.

point-by-point responses
  1. Referee: [§4] §4 (Experiments): The headline 93% recovery figure and boundary AUC are presented without error bars, run-to-run variance, exact integration limits over the parameter space, or the precise definition of how AUC is computed from the success/failure surface; these omissions make the quantitative claim difficult to interpret or reproduce.

    Authors: We agree that additional details are necessary for reproducibility and interpretability. In the revised version, we will provide the precise mathematical definition of the boundary AUC, specify the exact integration limits used in the parameter space, and include error bars or variance estimates from multiple runs with different random seeds. This will clarify the quantitative claims (a sketch of one such variance estimate appears after these responses). revision: yes

  2. Referee: [§3] §3 (Method) and §4: The inference that geometry rather than architecture sets the limit rests on the assumption that all four models were trained to equivalent convergence on the synthetic distribution; no training curves, epoch counts, validation losses, or capacity ablations are reported to exclude under-training as an alternative explanation for the observed similarity.

    Authors: We acknowledge the need to demonstrate that the models reached comparable levels of training. We will include training and validation loss curves, report the number of epochs and convergence criteria for each model, and add a brief capacity analysis or parameter count comparison to support that the cross-model similarity is not attributable to under-training. revision: yes

  3. Referee: [§5] §5 (Discussion): The generalizability claim for opportunistic urban sensing depends on the synthetic sweep faithfully reproducing real distributions of motion blur, JPEG artifacts, lens distortion, and lighting; no real-world validation set, cross-dataset comparison, or sensitivity analysis to omitted factors is provided, which is load-bearing for the geometry-limit conclusion.

    Authors: We agree that validating the synthetic degradation model against real-world data would strengthen the generalizability claims. However, our current work focuses on the recoverability map framework using controlled synthetic sweeps, which allow dense sampling not feasible in real data. We will expand §5 to include a sensitivity analysis to key omitted factors (e.g., varying lighting models) and explicitly discuss the limitations of the synthetic approach for real opportunistic sensing. A full real-world validation would require a new dataset with ground-truth license plates under extreme angles, which we consider future work. revision: partial
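On the first response, a minimal sketch of one such variance estimate: bootstrapping per-cell success outcomes on an invented success surface. The authors' actual computation would resample seeds or held-out cells according to their exact AUC definition.

```python
# Hedged sketch of a bootstrap interval on the boundary-AUC estimate.
# The grid size and success probability are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)
success = rng.random((64, 64)) < 0.93          # stand-in success/failure surface
flat = success.ravel()

estimates = [
    rng.choice(flat, size=flat.size, replace=True).mean()
    for _ in range(1000)
]
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"boundary AUC {flat.mean():.3f}, bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")
```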

Circularity Check

0 steps flagged

No circularity; derivation is a self-contained simulation study

full rationale

The paper defines recoverability maps by generating a dense synthetic grid of degradation parameters (viewing angles, resolution, artifacts), training restoration models (U-Net, Restormer, Pix2Pix, SR3) on this data, and computing boundary AUC and reliability scores from performance on held-out points in the same synthetic distribution. This chain does not reduce to self-definition, fitted inputs renamed as predictions, or self-citation load-bearing steps; the 93% recovery figure and cross-model similarity are direct empirical outputs of the simulation rather than tautological re-statements of inputs. No uniqueness theorems or ansatzes are imported from prior author work, and the geometry-vs-architecture conclusion follows from comparative evaluation rather than renaming a known result.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the domain assumption that synthetic degradation parameters faithfully represent real extreme viewing conditions; no free parameters or invented entities are explicitly introduced in the abstract.

axioms (1)
  • domain assumption: Synthetic degradations model real extreme views
    The recoverability maps are constructed from a dense synthetic sweep; the mapping to real-world utility rests on this unverified equivalence.

pith-pipeline@v0.9.0 · 5535 in / 1166 out tokens · 24691 ms · 2026-05-08T06:27:52.409129+00:00 · methodology

discussion (0)

