pith. machine review for the scientific record.

arxiv: 2604.11993 · v1 · submitted 2026-04-13 · 💻 cs.CV · physics.optics

Recognition: unknown

Ultra-low-light computer vision using trained photon correlations

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 16:06 UTC · model grok-4.3

classification 💻 cs.CV · physics.optics
keywords correlated photon illumination · ultra-low-light imaging · object recognition · Transformer classifier · correlation-aware training · photon-limited vision · end-to-end optimization

The pith

End-to-end training of photon correlations with a Transformer classifier achieves up to 15 percentage points better accuracy in ultra-low-light object recognition.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that jointly optimizing the spatial pattern of photon correlations in the illumination source and the weights of a Transformer classifier lets the system learn to use correlated signal photons to separate object features from random detector noise. This differs from earlier correlated-photon work that aimed at image reconstruction; here the correlations are specialized for the inference task of identifying which object is present. A sympathetic reader would care because many practical sensing scenarios, from astronomy to microscopy, must operate with very few photons per frame, and the method extracts more reliable decisions without needing brighter light or longer exposures.

Core claim

The central claim is that correlation-aware training (CAT) of a trainable correlated-photon illumination source together with a Transformer backend enables the network to exploit learned spatial correlations, yielding classification accuracy gains of up to 15 percentage points over both uncorrelated illumination and untrained correlated sources when only 100 or fewer noisy shots are available.

What carries the argument

Correlation-aware training (CAT): an end-to-end optimization that simultaneously tunes the spatial correlation structure of the photon source and the parameters of the Transformer classifier for the object-recognition task.
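
To make the training loop concrete, the following is a minimal sketch of what correlation-aware training could look like in simulation: a trainable parameter stands in for the source's spatial correlation pattern, a crude differentiable surrogate produces a small batch of noisy shots, and a compact Transformer encoder classifies the object, with a single cross-entropy loss sending gradients to both. Every name, shape, and the Gaussian photon-noise surrogate is an illustrative assumption, not the paper's forward model.

```python
# Hypothetical sketch of correlation-aware training (CAT).
# Shapes, module names, and the Gaussian noise surrogate are illustrative
# assumptions, not the forward model used in the paper.
import torch
import torch.nn as nn

N_PIX, N_SHOTS, N_CLASSES, D_MODEL = 64, 100, 10, 32


class TrainableCorrelatedSource(nn.Module):
    """Learnable logits over pixel pairs stand in for the source's spatial
    correlation pattern; a softmax keeps the pattern normalized."""

    def __init__(self):
        super().__init__()
        self.pair_logits = nn.Parameter(torch.zeros(N_PIX, N_PIX))

    def forward(self, objects):
        # objects: (batch, N_PIX) transmission values in [0, 1]
        corr = torch.softmax(self.pair_logits.flatten(), 0).view(N_PIX, N_PIX)
        # Expected pairwise detection rate after the object, per pixel pair.
        rate = corr * objects.unsqueeze(1) * objects.unsqueeze(2)  # (B, N_PIX, N_PIX)
        shots = rate.unsqueeze(1).expand(-1, N_SHOTS, -1, -1)
        # Reparameterized Gaussian stands in for photon and detector noise so
        # gradients can flow back to pair_logits.
        return shots + 0.05 * torch.randn_like(shots)  # (B, N_SHOTS, N_PIX, N_PIX)


class ShotTransformer(nn.Module):
    """Treats each noisy shot as a token; attention pools evidence across shots."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(N_PIX * N_PIX, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, N_CLASSES)

    def forward(self, shots):
        tokens = self.embed(shots.flatten(2))           # (B, N_SHOTS, D_MODEL)
        return self.head(self.encoder(tokens).mean(1))  # pool over shots


source, classifier = TrainableCorrelatedSource(), ShotTransformer()
opt = torch.optim.Adam(
    list(source.parameters()) + list(classifier.parameters()), lr=1e-3
)


def cat_step(objects, labels):
    """One end-to-end step: the same loss updates the source and the classifier.
    labels are integer class indices of shape (batch,)."""
    logits = classifier(source(objects))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The point of the sketch is only the wiring: because the noise surrogate is reparameterized, the classification gradient reaches pair_logits, so the illumination's correlation pattern and the classifier weights are optimized against the same recognition objective, which is what the CAT description above asserts.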

If this is right

  • The hybrid optical-electronic pipeline outperforms standard computer-vision pipelines that treat each photon detection as independent.
  • Task-specific training of the correlation pattern produces larger gains than using a fixed or random correlated source.
  • The accuracy advantage is largest when photon counts are low and detector noise is high.
  • Only a modest number of camera frames suffices for the Transformer to learn to exploit the correlations.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same joint-training approach could be applied to other inference tasks such as object detection or semantic segmentation by changing only the loss function.
  • Practical deployment would require additional calibration steps to compensate for optical misalignments and detector non-uniformities not captured in simulation.
  • If fast-reconfigurable sources become available, the method could extend to video-rate recognition of moving objects.

Load-bearing premise

The spatial correlations generated by the photon source can be precisely controlled in a real optical system and remain stable enough for the Transformer to learn useful patterns from a small number of shots despite experimental noise and imperfections.
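
A toy numerical illustration of this premise (illustrative numbers, not the paper's model): if signal photons arrive at spatially correlated pixel pairs while dark counts land independently, a simple coincidence statistic accumulated over roughly 100 shots already separates the two sources, which is the kind of structure the Transformer is assumed to learn to exploit.

```python
# Toy check of the load-bearing premise: correlated signal pairs vs.
# uncorrelated dark counts. All rates, sizes, and the mirror-pairing rule are
# illustrative assumptions, not the paper's source model.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_shots, pairs_per_shot, dark_per_shot = 32, 100, 5, 50


def simulate(correlated):
    frames = np.zeros((n_shots, n_pix, n_pix), dtype=int)
    for t in range(n_shots):
        xs = rng.integers(0, n_pix, size=pairs_per_shot)
        ys = rng.integers(0, n_pix, size=pairs_per_shot)
        frames[t, xs, ys] += 1
        if correlated:
            # Partner photon lands at the mirrored pixel.
            frames[t, n_pix - 1 - xs, n_pix - 1 - ys] += 1
        else:
            # Same photon budget, but the second photon is uncorrelated.
            frames[t, rng.integers(0, n_pix, pairs_per_shot),
                      rng.integers(0, n_pix, pairs_per_shot)] += 1
        # Detector noise: dark counts at random, independent pixels.
        frames[t, rng.integers(0, n_pix, dark_per_shot),
                  rng.integers(0, n_pix, dark_per_shot)] += 1
    return frames


def mirror_coincidences(frames):
    """Joint detections at mirrored pixel pairs, summed over all shots."""
    return float((frames * frames[:, ::-1, ::-1]).sum())


print("correlated source  :", mirror_coincidences(simulate(True)))
print("uncorrelated source:", mirror_coincidences(simulate(False)))
```

Even in this crude setting, where dark counts outnumber signal photons several-fold per shot, the correlated run returns a mirror-coincidence total a few times larger than the uncorrelated run; whether a learned, object-dependent correlation pattern behaves as cleanly in hardware is exactly the premise at stake.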

What would settle it

A hardware experiment implementing the trained correlation pattern in which classification accuracy shows no meaningful improvement over an uncorrelated-illumination baseline under matched ultra-low-light conditions.

Figures

Figures reproduced from arXiv: 2604.11993 by Benjamin A. Ash, Jérémie Laydevant, Logan G. Wright, Mandar M. Sohoni, Mathieu Ouellet, Peter L. McMahon, Ryotatsu Yanagimoto, Shi-Yuan Ma, Tatsuhiro Onodera, Tianyu Wang.

Figure 1 [figures/full_fig_p002_1.png]
Figure 2 [figures/full_fig_p004_2.png]
Figure 3 [figures/full_fig_p006_3.png]
Figure 4 [figures/full_fig_p008_4.png]
Original abstract

Illumination using correlated photon sources has been established as an approach to allowing high-fidelity images to be reconstructed from noisy camera frames by taking advantage of the knowledge that signal photons are spatially correlated whereas detector clicks due to noise are uncorrelated. However, in computer-vision tasks, the goal is often not ultimately to reconstruct an image, but to make inferences about a scene -- such as what object is present. Here we show how correlated-photon illumination can be used to gain an advantage in a hybrid optical-electronic computer-vision pipeline for object recognition. We demonstrate correlation-aware training (CAT): end-to-end optimization of a trainable correlated-photon illumination source and a Transformer backend in a way that the Transformer can learn to benefit from the correlations, using a small number (<= 100) of shots. We show a classification accuracy enhancement of up to 15 percentage points over conventional, uncorrelated-illumination-based computer vision in ultra-low-light and noisy imaging conditions, as well as an improvement over using untrained correlated-photon illumination. Our work illustrates how specializing to a computer-vision task -- object recognition -- and training the pattern of photon correlations in conjunction with a digital backend allows us to push the limits of accuracy in highly photon-budget-constrained scenarios beyond existing methods focused on image reconstruction.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces correlation-aware training (CAT), an end-to-end optimization approach that jointly trains a spatially correlated photon illumination source and a Transformer classifier to perform object recognition directly from a small number (≤100) of ultra-low-light, noisy frames. It reports classification accuracy gains of up to 15 percentage points relative to conventional uncorrelated illumination and to untrained correlated illumination.

Significance. If the reported gains are reproducible under realistic optical conditions, the work would demonstrate a concrete advantage for task-specific training of photon correlations in photon-starved computer vision, shifting the paradigm from image reconstruction to direct inference. This hybrid optical-ML strategy could be relevant for applications with severe photon budgets, and the explicit comparison to both uncorrelated and untrained correlated baselines is a strength.

major comments (2)
  1. [§4.2] §4.2 and the associated experimental protocol: the central 15 pp accuracy claim depends on the learned correlation pattern remaining exploitable by the Transformer after propagation through a real optical system; the manuscript provides no quantitative tolerance analysis (e.g., to alignment drift, loss, or detector dark counts) showing that the advantage survives these effects for ≤100 shots.
  2. [Table 3] Table 3 (results across datasets): the reported gains are presented as point estimates without error bars, number of independent trials, or statistical significance tests; this makes it impossible to determine whether the 15 pp margin is robust to random seeds or post-hoc hyperparameter choices.
minor comments (2)
  1. [Abstract] The abstract states 'up to 15 percentage points' while the main text reports a maximum of 14.8 pp on one dataset; harmonize the headline figure with the precise maximum shown in the results.
  2. [§3.1] Notation for the correlation matrix C(·) is introduced without an explicit definition of its normalization or how it is constrained to be physically realizable during training.
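
On minor comment 2, one generic way to keep a learned correlation matrix well defined during gradient training is to parameterize it through a factor, so that it is symmetric, positive semidefinite, and trace-normalized by construction. The sketch below shows that kind of constraint; it is not the paper's parameterization (which is precisely what the comment asks the authors to spell out), and whether such a matrix is also physically realizable by the optical source is a separate question.

```python
# Hypothetical realizability-style constraint for a trainable correlation matrix:
# C = L L^T / tr(L L^T) is symmetric, PSD, and unit-trace by construction.
# This is a generic sketch, not the parameterization used in the paper.
import torch
import torch.nn as nn


class ConstrainedCorrelation(nn.Module):
    def __init__(self, n_modes: int):
        super().__init__()
        self.factor = nn.Parameter(0.1 * torch.randn(n_modes, n_modes))

    def forward(self) -> torch.Tensor:
        gram = self.factor @ self.factor.T       # symmetric and PSD
        return gram / gram.diagonal().sum()      # normalize to unit trace


corr = ConstrainedCorrelation(64)()
assert torch.allclose(corr, corr.T) and corr.trace().item() > 0
```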

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review. We address each major comment below and will revise the manuscript accordingly to improve clarity and rigor.

Point-by-point responses
  1. Referee: [§4.2] §4.2 and the associated experimental protocol: the central 15 pp accuracy claim depends on the learned correlation pattern remaining exploitable by the Transformer after propagation through a real optical system; the manuscript provides no quantitative tolerance analysis (e.g., to alignment drift, loss, or detector dark counts) showing that the advantage survives these effects for ≤100 shots.

    Authors: We agree that robustness to realistic optical imperfections is essential to substantiate the reported gains. Our current simulations incorporate photon noise and shot limits but assume ideal propagation. In the revised manuscript we will add a dedicated tolerance analysis subsection to §4.2. This will include Monte Carlo simulations of alignment drift (pixel shifts up to 5 %), optical losses (10–20 %), and elevated detector dark counts, demonstrating that the accuracy advantage over baselines persists for ≤100 shots. Sensitivity curves and discussion of practical implications will be provided. revision: yes (a hedged sketch of such a Monte Carlo sweep follows these responses)

  2. Referee: [Table 3] Table 3 (results across datasets): the reported gains are presented as point estimates without error bars, number of independent trials, or statistical significance tests; this makes it impossible to determine whether the 15 pp margin is robust to random seeds or post-hoc hyperparameter choices.

    Authors: We concur that statistical measures are necessary to establish robustness. We will revise Table 3 to report mean accuracy ± standard deviation computed over 10 independent training runs with distinct random seeds for each dataset. The number of trials will be stated explicitly, and we will add paired t-test p-values to confirm that the observed improvements are statistically significant. revision: yes
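
On point 1, the promised tolerance analysis can be read as a Monte Carlo sweep over optical imperfections. The sketch below is a hedged illustration of that recipe; the perturbation models, parameter ranges, and the evaluate_accuracy hook are placeholders, not the authors' protocol.

```python
# Sketch of a Monte Carlo tolerance sweep over optical imperfections.
# evaluate_accuracy(), the perturbation models, and the ranges are placeholders
# standing in for the analysis the authors say they will add.
import numpy as np

rng = np.random.default_rng(1)


def perturb(frames, shift_px, loss_frac, dark_rate):
    """Apply alignment drift, optical loss, and extra dark counts to integer count frames."""
    out = np.roll(frames, shift=(shift_px, shift_px), axis=(-2, -1))  # misalignment
    out = rng.binomial(out, 1.0 - loss_frac)                          # photon loss
    return out + rng.poisson(dark_rate, size=out.shape)               # dark counts


def tolerance_sweep(frames, labels, evaluate_accuracy, n_trials=100):
    accs = []
    for _ in range(n_trials):
        noisy = perturb(frames,
                        shift_px=int(rng.integers(0, 3)),
                        loss_frac=rng.uniform(0.10, 0.20),
                        dark_rate=rng.uniform(0.0, 0.05))
        accs.append(evaluate_accuracy(noisy, labels))
    return float(np.mean(accs)), float(np.std(accs))
```

On point 2, the promised statistics for Table 3 amount to mean ± standard deviation over matched seeds plus a paired test. A minimal sketch, assuming per-seed accuracy arrays (fractions in [0, 1]) for CAT and the baseline; scipy's paired t-test is a standard choice here, not necessarily the authors' exact procedure.

```python
# Minimal sketch of seed-wise statistics for a revised Table 3.
# acc_cat and acc_baseline are assumed arrays of accuracies from runs with
# matched random seeds; this is not the authors' analysis script.
import numpy as np
from scipy.stats import ttest_rel


def seedwise_report(acc_cat, acc_baseline):
    acc_cat, acc_baseline = np.asarray(acc_cat), np.asarray(acc_baseline)
    t_stat, p_value = ttest_rel(acc_cat, acc_baseline)  # paired across seeds
    return {
        "cat_mean": acc_cat.mean(), "cat_std": acc_cat.std(ddof=1),
        "baseline_mean": acc_baseline.mean(), "baseline_std": acc_baseline.std(ddof=1),
        "gain_pp": 100.0 * (acc_cat.mean() - acc_baseline.mean()),
        "paired_t": float(t_stat), "p_value": float(p_value),
    }
```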

Circularity Check

0 steps flagged

No circularity: empirical training result stands on measured accuracy gains

Full rationale

The paper reports an end-to-end trainable optical-electronic pipeline for object recognition under photon-starved conditions. The central claim is a measured classification accuracy improvement (up to 15 pp) obtained by jointly optimizing a correlated-photon source and a Transformer backend, compared against uncorrelated illumination and untrained correlations. No equations, uniqueness theorems, or ansatzes are invoked that reduce the reported gain to a fitted parameter or self-referential definition by construction. The result is presented as an experimental demonstration whose validity rests on physical realization and benchmarking, not on algebraic identity with its inputs. Self-citations, if present in the full text, are not load-bearing for the accuracy claim.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the controllability of photon correlations in an optical experiment and on the assumption that a neural network can learn to use those correlations from limited shots; these are standard domain assumptions in quantum optics and machine learning but are not independently verified in the provided abstract.

axioms (1)
  • domain assumption: Spatial correlations between signal photons can be generated and maintained through an optical system while noise remains uncorrelated.
    Invoked by the use of correlated-photon illumination as the starting point for training.

pith-pipeline@v0.9.0 · 5570 in / 1213 out tokens · 64489 ms · 2026-05-10T16:06:59.779306+00:00 · methodology

discussion (0)

