pith. machine review for the scientific record. sign in

arxiv: 2605.10008 · v1 · submitted 2026-05-11 · ⚛️ physics.optics · cs.CV· cs.ET

Recognition: 2 theorem links

· Lean Theorem

Measurement-Adapted Eigentask Representations for Photon-Limited Optical Readout

Authors on Pith no claims yet

Pith reviewed 2026-05-12 03:53 UTC · model grok-4.3

classification ⚛️ physics.optics cs.CVcs.ET
keywords eigentasksoptical readoutphoton-limitednoise adaptationfeature representationclassificationlow-light imagingsensor compression
0
0 comments X

The pith

Eigentasks order optical sensor features by their resolvability under noise to provide better low-dimensional representations for inference.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper demonstrates that eigentasks create a measurement-adapted representation by ordering features from optical sensors according to how well they can be resolved given noise sources like photon shot noise. This matters for low-light imaging because the way noisy data is compressed before classification directly affects performance when light is scarce. Experiments show these representations beat principal component analysis and other baselines, with gains up to 10 percentage points in challenging few-shot multi-class settings. This identifies a promising direction for handling constraints on photons, time, and task complexity in optical systems.

Core claim

Eigentasks provide a measurement-adapted representation for optical sensor outputs by ordering readout features according to their resolvability under noise. On experimental data from lens-based imaging and single-photon neural network reanalysis, they outperform PCA and filtering compression most in photon-limited, few-shot, and difficult classification tasks, improving sample-efficient learning.

What carries the argument

The eigentask representation, which orders high-dimensional readout features by resolvability under modeled noise to form an adapted basis for downstream tasks.

If this is right

  • Better accuracy in photon-limited regimes for optical classification.
  • Gains of about 10 percentage points in few-shot classification as class count rises.
  • Improved sample efficiency for learning from limited optical data.
  • A strategy for optical inference under tight photon budget and acquisition time constraints.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The noise-model ordering could extend to other physical sensing systems facing similar measurement limits.
  • Combining eigentasks with adaptive or learned noise models might address cases where the assumed noise does not match reality.
  • This method underscores the value of physics-informed feature selection over purely data-driven compression in sensor applications.

Load-bearing premise

That strictly ordering features by resolvability under the noise model produces the most useful representation for any downstream task without task-specific adjustments.

What would settle it

If eigentask representations do not outperform PCA in accuracy on a new photon-limited classification experiment using identical dimensions and classifiers, the central advantage would be disproven.

Figures

Figures reproduced from arXiv: 2605.10008 by Hakan E. T\"ureci, J\'er\'emie Laydevant, Mandar M. Sohoni, Peter L. McMahon, Saeed A. Khan, Shi-Yuan Ma, Tianyang Chen, Tianyu Wang.

Figure 1
Figure 1. Figure 1: FIG. 1. Image classification through eigentask-based noise mitigation. (a) A schematic of the optical classification pipeline. Inputs are [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: FIG. 2. Comparative evaluation of noise-mitigation techniques for MNIST classification under photon-limited conditions using a simple [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: FIG. 3. Comparative evaluation of noise-mitigation techniques for MPEG-7 classification under photon-limited conditions using a simple [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: FIG. 4. Performance of noise-mitigation techniques in MNIST classification using SPDNN experimental data (Ref. [ [PITH_FULL_IMAGE:figures/full_fig_p009_4.png] view at source ↗
read the original abstract

Optical readout in low-light imaging is fundamentally limited by measurement noise, including photon shot noise, detector noise, and quantization error. In this regime, downstream inference depends not only on the optical front end, but also on how noisy high-dimensional sensor measurements are represented before classification or decision-making. Here we show that eigentasks provide a measurement-adapted representation for optical sensor outputs by ordering readout features according to their resolvability under noise. Using experimental data from a lens-based optical imaging system and a reanalysis of published data from a single-photon-detection neural network, we find that eigentask representations frequently outperform standard baselines including principal component analysis and filtering-based compression. The advantage is most pronounced in photon-limited, few-shot, and higher-difficulty classification regimes. In few-shot MPEG-7 classification, for example, the advantage over other methods reaches about 10 percentage points as the number of classes increases. In these settings, eigentasks yield more informative low-dimensional features and improve sample-efficient downstream learning. These results identify measurement-adapted representation as a promising strategy for optical inference when photon budget, acquisition time, and task complexity are constrained.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces eigentasks as a measurement-adapted representation for photon-limited optical readouts. Features from sensor outputs are ordered by their individual resolvability under a composite noise model (shot noise, detector noise, quantization). Experiments on lens-based imaging data and reanalysis of single-photon neural network datasets (including MPEG-7) show that low-dimensional eigentask representations outperform PCA and filtering baselines in classification accuracy, with the largest gains in few-shot and high-class-count regimes (approximately 10 percentage points in the latter). The approach is presented as task-agnostic and label-free, improving sample efficiency for downstream inference.

Significance. If the central empirical claims hold under the stated noise model, the work supplies a concrete, measurement-driven alternative to generic dimensionality reduction for noisy optical data. The emphasis on experimental validation and reanalysis of published single-photon data, together with the absence of free parameters in the core ordering, strengthens the case for practical utility in photon-budget-constrained settings. The reported advantage in few-shot, high-difficulty regimes is a potentially useful finding for optical sensing applications.

major comments (2)
  1. [Experimental Results] Experimental Results section: the claim that eigentask ordering is useful for arbitrary downstream tasks rests on classification experiments only. No regression, detection, or reconstruction tasks are reported; if discriminative information lies primarily in lower-resolvability directions, the top-k selection could discard task-relevant signal, undermining the generality asserted in the abstract.
  2. [Methods] Methods (noise model and resolvability definition): the resolvability metric is central to the ordering. The manuscript should explicitly derive or state the precise functional form (e.g., whether it is an SNR ratio, eigenvalue of the inverse noise covariance, or mutual-information proxy) and confirm that no data-dependent parameters enter the ordering itself, as any implicit estimation from the collected measurements would introduce a form of task-dependent tuning.
minor comments (2)
  1. [Figures] Figure captions and axis labels should explicitly indicate the number of shots or photons per pixel for each curve to allow direct comparison of photon-limited performance.
  2. [Introduction] The introduction should include a brief comparison to prior noise-aware dimensionality-reduction techniques (e.g., weighted PCA or SNR-based feature selection) to clarify the incremental contribution of the resolvability ordering.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive feedback on our manuscript. We have carefully considered the major comments and provide point-by-point responses below. We believe these clarifications and planned revisions will strengthen the paper.

read point-by-point responses
  1. Referee: [Experimental Results] Experimental Results section: the claim that eigentask ordering is useful for arbitrary downstream tasks rests on classification experiments only. No regression, detection, or reconstruction tasks are reported; if discriminative information lies primarily in lower-resolvability directions, the top-k selection could discard task-relevant signal, undermining the generality asserted in the abstract.

    Authors: We agree that our experimental validation is limited to classification tasks, which are representative of many photon-limited optical sensing applications. The eigentask representation is constructed in a task-agnostic manner by ordering features based solely on their resolvability under the noise model, without using any labels or task-specific information. This design ensures that the top-k features retain the most distinguishable signal components under noise, which should benefit a wide range of downstream inference tasks. However, we acknowledge that without explicit experiments on regression or detection, the generality remains an assertion supported by the method's construction rather than comprehensive empirical evidence. In the revised manuscript, we will expand the discussion to address potential applications to other tasks and include a caveat regarding the scope of current experiments. We do not believe this invalidates the abstract's claims, as the method is presented as a general representation strategy, but we will tone down any overly broad assertions if needed. revision: partial

  2. Referee: [Methods] Methods (noise model and resolvability definition): the resolvability metric is central to the ordering. The manuscript should explicitly derive or state the precise functional form (e.g., whether it is an SNR ratio, eigenvalue of the inverse noise covariance, or mutual-information proxy) and confirm that no data-dependent parameters enter the ordering itself, as any implicit estimation from the collected measurements would introduce a form of task-dependent tuning.

    Authors: We appreciate this comment and will revise the Methods section to provide a clear derivation of the resolvability metric. The resolvability for each feature is defined as the signal-to-noise ratio, specifically the ratio of the feature's signal variance to the sum of variances from shot noise, detector noise, and quantization noise. This can be shown to be equivalent to selecting directions with the largest eigenvalues in the inverse of the noise covariance matrix. The computation relies exclusively on the known physical parameters of the optical system and sensor (e.g., photon flux, detector characteristics), with no estimation or fitting from the experimental data itself. Thus, the ordering is completely determined by the measurement model and introduces no task-dependent or data-dependent parameters. We will include the explicit mathematical formulation and a proof of its independence from data in the revised version. revision: yes

Circularity Check

0 steps flagged

No significant circularity: eigentask ordering derived independently from noise model

full rationale

The paper constructs eigentasks by ordering readout features according to their resolvability under an explicit measurement noise model (photon shot noise, detector noise, quantization error). This ordering uses only the forward measurement model and contains no downstream task labels or fitted parameters that would make the representation equivalent to task performance by construction. Empirical comparisons to PCA and filtering baselines are performed on held-out classification tasks using experimental data, providing external validation rather than a self-referential derivation. No load-bearing self-citations, uniqueness theorems, or ansatzes are invoked to justify the core representation; the method remains self-contained against the stated noise model and independent benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entities

Only the abstract is available, so the ledger is necessarily incomplete. The central claim rests on an unstated noise model and a definition of resolvability whose parameters are not enumerated here.

invented entities (1)
  • eigentasks no independent evidence
    purpose: measurement-adapted ordering of optical readout features by resolvability under noise
    Core new construct introduced to replace or augment PCA-style bases; no independent evidence outside the paper's experiments is described in the abstract.

pith-pipeline@v0.9.0 · 5544 in / 1409 out tokens · 55311 ms · 2026-05-12T03:53:10.343818+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

65 extracted references · 65 canonical work pages

  1. [1]

    Research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-25-1-0261. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government ...

  2. [2]

    EMCCD gain calibration and photon-budget estimation For the lens-based experiments, the EMCCD gain was calibrated from the shot-to-shot fluctuations of the raw camera readout. In the large-gain limit, the photon-induced pixel valuesX ph satisfy [44, 45, 55]: E[Xph] =ηλg,Var[X ph] = 2ηλg 2,(A1) whereλis the mean incident photon number during one exposure,η...

  3. [3]

    The covariance matrix is defined by taking the expectation value over the inputsV=E u[Σ(u)]

    Estimation of eigentasks and the transformation matrix of eigentask learning For a specific inputu, the covariances are given by the covariance matrixΣ(u)∈R K×K withΣ kk′ = CovX [ζk(u), ζk′(u)] = EX [ζk(u)ζk′(u)]. The covariance matrix is defined by taking the expectation value over the inputsV=E u[Σ(u)]. The Gram matrix is defined byG=E u[x(u)x(u)T]. In ...

  4. [4]

    Estimation of principal components and the transformation matrix of PCA Similar to eigentasks, we also estimate principal components and the transformation based on the training set with the maxi- mum number of shotsS max collected and apply them to the full dataset with fewer shots. Firstly, theS max-shot mean of features ¯X(u)are calculated and the mean...

  5. [5]

    Fourier-domain low-pass filtering and spatial coarse graining In Fourier-domain low-pass filtering, we apply the Fourier transform to the averaged features ¯X(u)and extract the real and imaginary parts of the low-frequency components as input to the output layer. For output features from EMCCD two- dimensional pixel arrays, the features can be expressed a...

  6. [6]

    To emulate operation on a smaller sampling budgetS≤S max, we formed theS-shot feature vector ¯X(u (n)) = 1 S SX s=1 X (s)(u(n)),(A9) using the firstSshots acquired for that input

    Dataset-specific data splits and evaluation protocols For each input imageu (n), we collected up toS max repeated stochastic measurements{X (s)(u(n))}Smax s=1 , whereX (s) ∈R K denotes the raw sensor readout from a single shot. To emulate operation on a smaller sampling budgetS≤S max, we formed theS-shot feature vector ¯X(u (n)) = 1 S SX s=1 X (s)(u(n)),(...

  7. [7]

    For eigentask and PCA, we first computed the full ordered set of transformed features¯Y, normalized these features, and then retained the leadingKr components

    Downstream classifiers, optimization, and reporting For eachK r, features were normalized using statistics computed from the training subset only. For eigentask and PCA, we first computed the full ordered set of transformed features¯Y, normalized these features, and then retained the leadingKr components. For low-pass filtering, the full Fourier-transform...

  8. [8]

    For each inputuin the test set, we applied the learned transform to each single-shot readoutX(s)(u)to obtain the noise-mitigated single-shot featuresY (s)(u)≡WX (s)(u)+b(s= 1,2,

    Estimation of empirical per-feature SNRs After learning the transform with matrixWand biasbfrom the training set, we quantified how noise is redistributed by the transformed features using an empirical per-feature SNR estimate. For each inputuin the test set, we applied the learned transform to each single-shot readoutX(s)(u)to obtain the noise-mitigated ...

  9. [9]

    Ettinger and T

    A. Ettinger and T. Wittmann, Chapter 5 - fluorescence live cell imaging, inQuantitative Imaging in Cell Biology, Methods in Cell Biology, V ol. 123, edited by J. C. Waters and T. Wittman (Academic Press, 2014) pp. 77–94

  10. [10]

    P. P. Laissue, R. A. Alghamdi, P. Tomancak, E. G. Reynaud, and H. Shroff, Assessing phototoxicity in live fluorescence imaging, Nature Methods14, 657 (2017)

  11. [11]

    J. Icha, M. Weber, J. C. Waters, and C. Norden, Phototoxicity in live fluorescence microscopy, and how to avoid it, BioEssays39, 1700003 (2017)

  12. [12]

    Fazel, K

    M. Fazel, K. S. Grussmayer, B. Ferdman, A. Radenovic, Y . Shechtman, J. Enderlein, and S. Press´e, Fluorescence microscopy: A statistics- optics perspective, Reviews of Modern Physics96, 025003 (2024)

  13. [13]

    Klein and R

    T. Klein and R. Huber, High-speed OCT light sources and systems [Invited], Biomedical Optics Express8, 828 (2017)

  14. [14]

    Zhang, J

    J. Zhang, J. Newman, Z. Wang, Y . Qian, P. Feliciano-Ramos, W. Guo, T. Honda, Z. S. Chen, C. Linghu, R. Etienne-Cummings, E. Fos- sum, E. Boyden, and M. Wilson, Pixel-wise programmability enables dynamic high-SNR cameras for high-speed microscopy, Nature Communications15, 4480 (2024)

  15. [15]

    Platisa, X

    J. Platisa, X. Ye, A. M. Ahrens, C. Liu, I. A. Chen, I. G. Davison, L. Tian, V . A. Pieribone, and J. L. Chen, High-speed low-light in vivo two-photon voltage imaging of large neuronal populations, Nature Methods20, 1095 (2023)

  16. [16]

    Healey and R

    G. Healey and R. Kondepudy, Radiometric CCD camera calibration and noise estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence16, 267 (1994)

  17. [17]

    El Gamal and H

    A. El Gamal and H. Eltoukhy, CMOS image sensors, IEEE Circuits and Devices Magazine21, 6 (2005)

  18. [18]

    J. R. Janesick, T. Elliott, S. Collins, M. M. Blouke, and J. Freeman, Scientific charge-coupled devices, Optical Engineering26, 692 (1987)

  19. [19]

    Bigas, E

    M. Bigas, E. Cabruja, J. Forest, and J. Salvi, Review of CMOS image sensors, Microelectronics Journal37, 433 (2006)

  20. [20]

    A. Foi, M. Trimeche, V . Katkovnik, and K. Egiazarian, Practical Poissonian-Gaussian Noise Modeling and Fitting for Single-Image Raw-Data, IEEE Transactions on Image Processing17, 1737 (2008)

  21. [21]

    Farrell, F

    J. Farrell, F. Xiao, and S. Kavusi, Resolution and light sensitivity tradeoff with pixel size, inDigital Photography II, V ol. 6069 (SPIE,

  22. [22]

    S. H. Chan, H. K. Weerasooriya, W. Zhang, P. Abshire, I. Gyongy, and R. K. Henderson, Resolution limit of single-photon lidar, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(2024) pp. 25307–25316

  23. [23]

    J. R. Schott, A. Gerace, C. E. Woodcock, S. Wang, Z. Zhu, R. H. Wynne, and C. E. Blinn, The impact of improved signal-to-noise ratios on algorithm performance: Case studies for Landsat class instruments, Remote Sensing of Environment Landsat 8 Science Results,185, 37 (2016)

  24. [24]

    Dodge and L

    S. Dodge and L. Karam, Understanding how image quality affects deep neural networks, in2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)(2016) pp. 1–6

  25. [25]

    Hendrycks and T

    D. Hendrycks and T. Dietterich, Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, inInternational Conference on Learning Representations(2018)

  26. [26]

    J. Rapp, J. Tachella, Y . Altmann, S. McLaughlin, and V . K. Goyal, Advances in Single-Photon Lidar for Autonomous Vehicles: Working Principles, Challenges, and Recent Advances, IEEE Signal Processing Magazine37, 62 (2020)

  27. [27]

    L. Bian, H. Song, L. Peng, X. Chang, X. Yang, R. Horstmeyer, L. Ye, C. Zhu, T. Qin, D. Zheng, and J. Zhang, High-resolution single- photon imaging with physics-informed deep learning, Nature Communications14, 5902 (2023)

  28. [28]

    R. C. Gonzalez,Digital image processing(Pearson education india, 2009)

  29. [29]

    J. P. Cunningham and Z. Ghahramani, Linear Dimensionality Reduction: Survey, Insights, and Generalizations, Journal of Machine Learning Research16, 2859 (2015)

  30. [30]

    Rasti, P

    B. Rasti, P. Scheunders, P. Ghamisi, G. Licciardi, and J. Chanussot, Noise Reduction in Hyperspectral Imagery: Overview and Applica- tion, Remote Sensing10, 10.3390/rs10030482 (2018)

  31. [31]

    F. Hu, G. Angelatos, S. A. Khan, M. Vives, E. T¨ureci, L. Bello, G. E. Rowlands, G. J. Ribeill, and H. E. T¨ureci, Tackling Sampling Noise in Physical Systems for Machine Learning Applications: Fundamental Limits and Eigentasks, Physical Review X13, 041020 (2023)

  32. [32]

    A. M. Polloreno, Restrictions on physical stochastic reservoir computers, Physical Review Applied24, 014031 (2025)

  33. [33]

    Y . Wang, C. Oh, J. Liu, L. Jiang, and S. Zhou, Advancing quantum imaging through learning theory, Nature Communications17, 1124 (2026). 15

  34. [34]

    Turin, An introduction to matched filters, IRE Transactions on Information Theory6, 311 (1960)

    G. Turin, An introduction to matched filters, IRE Transactions on Information Theory6, 311 (1960)

  35. [35]

    ¨Olc ¸er and A.¨Onc¨u, Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing, Sensors 17, 10.3390/s17061288 (2017)

    ˙I. ¨Olc ¸er and A.¨Onc¨u, Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing, Sensors 17, 10.3390/s17061288 (2017)

  36. [36]

    Walter, P

    T. Walter, P. Kurpiers, S. Gasparinetti, P. Magnard, A. Poto ˇcnik, Y . Salath´e, M. Pechal, M. Mondal, M. Oppliger, C. Eichler, and A. Wallraff, Rapid High-Fidelity Single-Shot Dispersive Readout of Superconducting Qubits, Physical Review Applied7, 054020 (2017)

  37. [37]

    S. A. Khan, R. Kaufman, B. Mesits, M. Hatridge, and H. E. T ¨ureci, Practical Trainable Temporal Postprocessor for Multistate Quantum Measurement, PRX Quantum5, 020364 (2024)

  38. [41]

    C. M. Bishop, Training with Noise is Equivalent to Tikhonov Regularization, Neural Computation7, 108 (1995)

  39. [42]

    Milanfar and M

    P. Milanfar and M. Delbracio, Denoising: A powerful building block for imaging, inverse problems and machine learning, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences383, 20240326 (2025)

  40. [43]

    Goyal, A

    B. Goyal, A. Dogra, S. Agrawal, B. S. Sohi, and A. Sharma, Image denoising review: From classical to state-of-the-art approaches, Information Fusion55, 220 (2020)

  41. [44]

    I. T. Jolliffe and J. Cadima, Principal component analysis: A review and recent developments, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences374, 20150202 (2016)

  42. [45]

    Greenacre, P

    M. Greenacre, P. J. F. Groenen, T. Hastie, A. I. D’Enza, A. Markos, and E. Tuzhilina, Principal component analysis, Nature Reviews Methods Primers2, 100 (2022)

  43. [47]

    Hughes, On the mean accuracy of statistical pattern recognizers, IEEE Transactions on Information Theory14, 55 (1968)

    G. Hughes, On the mean accuracy of statistical pattern recognizers, IEEE Transactions on Information Theory14, 55 (1968)

  44. [50]

    D. J. Field, Relations between the statistics of natural images and the response properties of cortical cells, JOSA A4, 2379 (1987)

  45. [51]

    P. Pope, C. Zhu, A. Abdelkader, M. Goldblum, and T. Goldstein, The Intrinsic Dimension of Images and Its Impact on Learning, in International Conference on Learning Representations(2020)

  46. [54]

    K. B. W. Harpsøe, M. I. Andersen, and P. Kjægaard, Bayesian photon counting with electron-multiplying charge coupled devices (EMC- CDs), Astronomy & Astrophysics537, A50 (2012)

  47. [56]

    L. G. Wright, T. Onodera, M. M. Stein, T. Wang, D. T. Schachter, Z. Hu, and P. L. McMahon, Deep physical neural networks trained with backpropagation, Nature601, 549 (2022)

  48. [57]

    Z. Xue, T. Zhou, Z. Xu, S. Yu, Q. Dai, and L. Fang, Fully forward mode training for optical neural networks, Nature632, 280 (2024)

  49. [58]

    Momeni, B

    A. Momeni, B. Rahmani, B. Scellier, L. G. Wright, P. L. McMahon, C. C. Wanjura, Y . Li, A. Skalli, N. G. Berloff, T. Onodera, I. Oguz, F. Morichetti, P. del Hougne, M. Le Gallo, A. Sebastian, A. Mirhoseini, C. Zhang, D. Markovi ´c, D. Brunner, C. Moser, S. Gigan, F. Marquardt, A. Ozcan, J. Grollier, A. J. Liu, D. Psaltis, A. Al `u, and R. Fleury, Training...

  50. [60]

    Zhou and S

    C. Zhou and S. K. Nayar, Computational cameras: convergence of optics and processing, IEEE Transactions on Image Processing20, 3322 (2011)

  51. [61]

    J. N. Mait, G. W. Euliss, and R. A. Athale, Computational imaging, Advances in Optics and Photonics10, 409 (2018)

  52. [62]

    T. Chen, M. M. Sohoni, S. A. Khan, J. Laydevant, S.-Y . Ma, T. Wang, P. L. McMahon, and H. E. T¨ureci, Measurement-adapted eigentask representations for photon-limited optical readout: data, precomputed results, and code snapshot (2026)

  53. [63]

    Measurement-Adapted Eigentask Representations for Photon-Limited Optical Readout

    M. Hirsch, R. J. Wareham, M. L. Martin-Fernandez, M. P. Hobson, and D. J. Rolfe, A Stochastic Model for Electron Multiplication Charge-Coupled Devices – From Theory to Practice, PLOS ONE8, e53671 (2013). 1 Supplemental Material for “Measurement-Adapted Eigentask Representations for Photon-Limited Optical Readout” Tianyang Chen, Mandar M. Sohoni, Saeed A. ...

  54. [64]

    LeCun, L

    Y . LeCun, L. Bottou, Y . Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE86, 2278 (1998)

  55. [65]

    Bober, MPEG-7 visual shape descriptors, IEEE Transactions on Circuits and Systems for Video Technology11, 716 (2001)

    M. Bober, MPEG-7 visual shape descriptors, IEEE Transactions on Circuits and Systems for Video Technology11, 716 (2001)

  56. [66]

    J. W. Goodman and M. E. Cox, Introduction to fourier optics (1969)

  57. [67]

    A. G. Basden, C. A. Haniff, and C. D. Mackay, Photon counting strategies with low-light-level CCDs, Monthly Notices of the Royal Astronomical Society345, 985 (2003)

  58. [68]

    Lantz, J.-L

    E. Lantz, J.-L. Blanchet, L. Furfaro, and F. Devaux, Multi-imaging and Bayesian estimation for photon counting with EMCCDs, Monthly Notices of the Royal Astronomical Society386, 2262 (2008)

  59. [69]

    Hirsch, R

    M. Hirsch, R. J. Wareham, M. L. Martin-Fernandez, M. P. Hobson, and D. J. Rolfe, A Stochastic Model for Electron Multiplication Charge-Coupled Devices – From Theory to Practice, PLOS ONE8, e53671 (2013)

  60. [70]

    Toninelli,Quantum-enhanced imaging and sensing with spatially correlated biphotons, Ph.D

    E. Toninelli,Quantum-enhanced imaging and sensing with spatially correlated biphotons, Ph.D. thesis, University of Glasgow (2020)

  61. [71]

    P4,https://www.princetoninstruments.com/ products/proem-family/pro-em(2020), accessed: 2026-05-06

    Teledyne Princeton Instruments, ProEM-HS: 512BX3 Datasheet, Rev. P4,https://www.princetoninstruments.com/ products/proem-family/pro-em(2020), accessed: 2026-05-06

  62. [72]

    Hughes, On the mean accuracy of statistical pattern recognizers, IEEE Transactions on Information Theory14, 55 (1968)

    G. Hughes, On the mean accuracy of statistical pattern recognizers, IEEE Transactions on Information Theory14, 55 (1968). 14

  63. [73]

    G. V . Trunk, A Problem of Dimensionality: A Simple Example, IEEE Transactions on Pattern Analysis and Machine IntelligencePAMI- 1, 306 (1979)

  64. [74]

    J. Hua, Z. Xiong, J. Lowey, E. Suh, and E. R. Dougherty, Optimal number of features as a function of sample size for various classification rules, Bioinformatics21, 1509 (2005)

  65. [75]

    S.-Y . Ma, T. Wang, J. Laydevant, L. G. Wright, and P. L. McMahon, Quantum-limited stochastic optical neural networks operating at a few quanta per activation, Nature Communications16, 359 (2025)