pith. machine review for the scientific record.

arxiv: 2605.12085 · v1 · submitted 2026-05-12 · 🧮 math.NA · cs.NA

Recognition: 2 Lean theorem links

A Line-Search-Based Stochastic Gradient Method for 3D Computed Tomography

Elena Morotti, Federica Porta, Ilaria Trombini, Tatiana A. Bubba, Valeria Ruggiero

Pith reviewed 2026-05-13 04:09 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords 3D computed tomography · stochastic gradient method · line search · forward-backward algorithm · inverse problems · volumetric reconstruction · mini-batch sampling · CT reconstruction

The pith

A line-search stochastic gradient method with full-projection mini-batches accelerates 3D CT reconstruction without training data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces FB-LISA, a forward-backward generalization of a line-search stochastic gradient algorithm, to solve the high-dimensional inverse problem of volumetric reconstruction from CT data. It forms stochastic mini-batches from complete 2D projections rather than smaller subsets, which keeps the physical geometry of the acquisition intact and produces faster progress during the first iterations. The approach repurposes optimization ideas common in deep learning to reduce the computational burden of forward and back-projections while avoiding any need for training data or learned priors.
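To make the mechanics concrete, a single forward-backward step with backtracking line search on a mini-batch can be sketched as below. This is a minimal sketch under assumed notation (a dense `A_batch`, a nonnegativity prox, halving backtracking against the standard proximal-gradient sufficient-decrease condition), not the authors' exact FB-LISA iteration:

```python
import numpy as np

def prox_nonneg(x):
    """Backward step: proximal operator of the nonnegativity constraint,
    a common choice of prior in CT reconstruction."""
    return np.maximum(x, 0.0)

def fb_linesearch_step(x, A_batch, b_batch, alpha0=1.0, delta=0.5, alpha_min=1e-10):
    """One forward-backward step on a mini-batch of the CT data.

    A_batch / b_batch: rows of the system matrix and measurements for the
    sampled projections. The step size is found by halving backtracking.
    """
    r = A_batch @ x - b_batch                 # residual on the mini-batch
    f = 0.5 * (r @ r)                         # mini-batch least-squares value
    grad = A_batch.T @ r                      # stochastic gradient
    alpha = alpha0
    while True:
        x_new = prox_nonneg(x - alpha * grad)  # forward (gradient) + backward (prox)
        d = x_new - x
        r_new = A_batch @ x_new - b_batch
        f_new = 0.5 * (r_new @ r_new)
        # Descent-lemma condition; holds once alpha <= 1/L for the batch.
        if f_new <= f + grad @ d + (0.5 / alpha) * (d @ d) or alpha < alpha_min:
            return x_new
        alpha *= delta
```

Because the new iterate is a proximal point, the accepted condition implies the mini-batch objective does not increase; the full objective can still fluctuate between batches, which is precisely what the line search is meant to control.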

Core claim

FB-LISA applies forward-backward splitting to a line-search-based stochastic gradient method so that mini-batches of full 2D projections can be used on the large-scale 3D CT operator; the resulting scheme yields substantial speed-ups in early iterations while preserving the physical structure of the acquisition process and requiring no training data or learned priors.

What carries the argument

FB-LISA, the forward-backward line-search stochastic gradient algorithm that samples mini-batches consisting of entire 2D projections to address the high-dimensional 3D CT inverse problem.
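The full-projection sampling itself can be made concrete with a small sketch. The layout assumed here (rows of the system matrix grouped view-by-view, with `pixels_per_view` detector pixels each) is illustrative, not the paper's notation:

```python
import numpy as np

def sample_projection_batch(num_views, batch_views, rng):
    """Sample a mini-batch as a set of complete 2D projections (view angles).

    Rather than drawing individual detector pixels, whole views are drawn,
    so each batch corresponds to a physically realizable sub-acquisition.
    """
    views = rng.choice(num_views, size=batch_views, replace=False)
    return np.sort(views)

def rows_for_views(views, pixels_per_view):
    """Row indices of the CT system matrix covered by the selected views,
    assuming rows are stored view-by-view (an illustrative layout)."""
    return np.concatenate([np.arange(v * pixels_per_view, (v + 1) * pixels_per_view)
                           for v in views])
```

Drawing whole views, rather than scattered rows, is what keeps the acquisition geometry intact, which is the structure-preservation point made in the pith.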

Load-bearing premise

The forward-backward generalization of the line-search stochastic gradient method keeps reliable convergence and reconstruction quality when applied to the high-dimensional 3D CT operator with full-projection mini-batches.

What would settle it

A test on a standard 3D CT benchmark dataset in which FB-LISA diverges, fails to reach acceptable error within a reasonable number of iterations, or yields image-quality metrics markedly worse than those of a deterministic line-search method would disprove the central claim.
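Such a test reduces to tracking one curve per method: relative reconstruction error against wall-clock time, as plotted in Figure 2. A minimal, method-agnostic harness (names assumed for illustration):

```python
import time
import numpy as np

def relative_error(x, x_true):
    """Relative reconstruction error, the metric tracked in Figure 2."""
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

def track(method_step, x0, x_true, n_iters):
    """Run an iterative method and record (elapsed seconds, relative error)
    after each iteration, for error-vs-time comparison across methods."""
    x, t0, history = x0, time.perf_counter(), []
    for _ in range(n_iters):
        x = method_step(x)
        history.append((time.perf_counter() - t0, relative_error(x, x_true)))
    return x, history
```

Feeding both the stochastic and the deterministic step functions through the same harness gives directly comparable curves, isolating the early-iteration speed-up the paper claims.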

Figures

Figures reproduced from arXiv: 2605.12085 by Elena Morotti, Federica Porta, Ilaria Trombini, Tatiana A. Bubba, Valeria Ruggiero.

Figure 1. Reconstruction results for Case 1. The first column shows the ground truth and [PITH_FULL_IMAGE:figures/full_fig_p008_1.png] view at source ↗
Figure 2. Relative error vs. execution time in seconds for Case 3. [PITH_FULL_IMAGE:figures/full_fig_p010_2.png] view at source ↗
read the original abstract

We introduce FB-LISA, a forward-backward (FB) generalization of a recently proposed line-search-based stochastic gradient algorithm to address the imaging problem of volumetric reconstruction in Computed Tomography, a substantially high demanding problem, which involves orders of magnitude of data, a high computational burden for forward and backprojection, and memory requirements that push current GPU architectures to their limits. Our formulation employs stochastic mini-batches composed of full 2D projections, preserving the physical structure of the acquisition process while enabling significant speed-ups during early iterations. The resulting method demonstrates how concepts traditionally associated with deep learning can be repurposed to accelerate large-scale inverse problems, without relying on training data or learned priors.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript introduces FB-LISA, a forward-backward generalization of a line-search-based stochastic gradient algorithm for volumetric reconstruction in 3D Computed Tomography. The formulation employs stochastic mini-batches of full 2D projections to preserve the physical structure of the acquisition process while enabling speed-ups in early iterations, and it repurposes concepts from deep learning to accelerate large-scale inverse problems without training data or learned priors.

Significance. If the claimed convergence behavior and reconstruction quality hold, the work would be significant for practical acceleration of high-dimensional CT problems on limited hardware. It offers a structure-preserving stochastic approach that bridges optimization ideas from machine learning with classical inverse problems, potentially reducing computational and memory demands without requiring external training data.

minor comments (2)
  1. [Abstract] The abstract asserts speed-ups and structure preservation but would benefit from a brief quantitative statement (e.g., iteration count or wall-clock reduction) to ground the central claim, even if full results appear later in the manuscript.
  2. [Methods] Notation for the stochastic mini-batch selection (full 2D projections) should be introduced with a clear equation or diagram in the methods section to make the preservation of acquisition physics explicit.
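As a sketch of what the equation requested in minor comment 2 could look like, the objective might be split over mini-batches of complete projections as follows; the symbols $S$, $I_s$, $A_{I_s}$, $b_{I_s}$, and $g$ are assumed notation, not taken from the paper:

```latex
\min_{x}\; F(x) \;=\; \frac{1}{S}\sum_{s=1}^{S} f_s(x) \;+\; g(x),
\qquad
f_s(x) \;=\; \frac{1}{2}\,\bigl\| A_{I_s}\, x - b_{I_s} \bigr\|_2^2 ,
```

where each index set $I_s$ collects all detector rows of a few complete 2D projections, and $g$ is a (possibly nonsmooth) regularizer handled by the backward/proximal step.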

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive summary of our work on FB-LISA and for recommending minor revision. No specific major comments were provided in the report.

Circularity Check

0 steps flagged

No significant circularity identified

full rationale

The paper introduces FB-LISA as a forward-backward generalization of a recently proposed line-search stochastic gradient method for 3D CT reconstruction, using stochastic mini-batches of full 2D projections to preserve acquisition physics and accelerate early iterations without training data. No equations, fitted parameters, or explicit predictions are shown in the abstract or description that reduce by construction to inputs. The formulation is presented as a direct methodological extension that repurposes deep learning concepts for inverse problems, with the central claim remaining independent and self-contained rather than tautological or forced by self-citation chains. No self-definitional steps, fitted-input predictions, or load-bearing uniqueness theorems appear.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract supplies no explicit free parameters, axioms, or invented entities; the method is described only at the level of algorithmic structure.

pith-pipeline@v0.9.0 · 5425 in / 1039 out tokens · 75183 ms · 2026-05-13T04:09:53.835828+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

32 extracted references · 32 canonical work pages

  [1] A. H. Andersen and A. C. Kak. Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. Ultrasonic Imaging, 6(1):81–94, 1984.
  [2] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8:141–148, 1988.
  [3] A. Beck. First-Order Methods in Optimization. SIAM, 2017.
  [4] S. Bonettini, I. Loris, F. Porta, and M. Prato. Variable metric inexact line-search-based methods for nonsmooth optimization. SIAM Journal on Optimization, 26(2):891–921, 2016.
  [5] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.
  [6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
  [7] R. Cavicchioli, J. C. Hu, E. Loli Piccolomini, E. Morotti, and L. Zanni. GPU acceleration of a model-based iterative method for digital breast tomosynthesis. Scientific Reports, 10(1):43, 2020.
  [8] A. Chambolle, C. Delplancke, M. J. Ehrhardt, C.-B. Schönlieb, and J. Tang. Stochastic primal–dual hybrid gradient algorithm with adaptive step sizes. Journal of Mathematical Imaging and Vision, 66:294–313, 2024.
  [9] A. Chambolle, M. J. Ehrhardt, P. Richtárik, and C.-B. Schönlieb. Stochastic primal–dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM Journal on Optimization, 28:2783–2808, 2018.
  [10] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
  [11] P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4):1168–1200, 2005.
  [12] L. Condat. A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications, 158(2):460–479, 2013.
  [13] C. Dossal, S. Hurault, and N. Papadakis. Optimization with first order algorithms. arXiv preprint arXiv:2410.19506, 2024.
  [14] M. J. Ehrhardt, Ž. Kereta, J. Liang, and J. Tang. A guide to stochastic optimisation for large-scale inverse problems. Inverse Problems, 41(5), 2025.
  [15] M. J. Ehrhardt, P. J. Markiewicz, and C.-B. Schönlieb. Faster PET reconstruction with non-smooth priors by randomization and preconditioning. Physics in Medicine & Biology, 64:225019, 2019.
  [16] H. Erdogan and J. A. Fessler. Ordered subsets algorithms for transmission tomography. Physics in Medicine & Biology, 44(11):2835, 1999.
  [17] L. A. Feldkamp, L. C. Davis, and J. W. Kress. Practical cone-beam algorithm. Journal of the Optical Society of America A, 1(6):612–619, 1984.
  [18] G. Franchini, F. Porta, V. Ruggiero, and I. Trombini. A line search based proximal stochastic gradient algorithm with dynamical variance reduction. Journal of Scientific Computing, 94:23, 2023.
  [19] G. Franchini, F. Porta, V. Ruggiero, I. Trombini, and L. Zanni. A stochastic gradient method with variance control and variable learning rate for deep learning. Journal of Computational and Applied Mathematics, 451:116083, 2024.
  [20] G. Frassoldati, G. Zanghirati, and L. Zanni. New adaptive stepsize selections in gradient methods. Journal of Industrial and Management Optimization, 4(2):299–312, 2008.
  [21] G. T. Herman and L. B. Meyer. Algebraic reconstruction techniques can be made computationally efficient (positron emission tomography application). IEEE Transactions on Medical Imaging, 12(3):600–609, 1993.
  [22] A. C. Kak and M. Slaney. Principles of Computerized Tomographic Imaging. SIAM, 2001.
  [23] D. Kim, S. Ramani, and J. A. Fessler. Combining ordered subsets and momentum for accelerated X-ray CT image reconstruction. IEEE Transactions on Medical Imaging, 34(1):167–178, 2015.
  [24] M. Lazzaretti, Z. Kereta, C. Estatico, and L. Calatroni. Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 457–470. Springer, 2023.
  [25] A. Meaney. Cone-beam computed tomography dataset of a walnut (1.1.0) [data set]. Zenodo, 2022.
  [26] E. Papoutsellis, M. A. G. Duff, J. S. Jørgensen, S. Porter, C. Delplancke, G. Fardell, E. Pasca, and K. Thielemans. A modular approach to stochastic optimisation for inverse problems using the Core Imaging Library. arXiv preprint arXiv:2603.21230, 2026.
  [27] B. Polyak. Introduction to Optimization. Optimization Software, New York, 1987.
  [28] R. Reisenhofer, S. Bosse, G. Kutyniok, and T. Wiegand. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Processing: Image Communication, 61:33–43, 2018.
  [29] J. Tang, K. Egiazarian, and M. Davies. The limitation and practical acceleration of stochastic gradient algorithms in inverse problems. In ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7680–7684. IEEE, 2019.
  [30] J. Tang, K. Egiazarian, M. Golbabaee, and M. Davies. The practicality of stochastic optimization in imaging inverse problems. IEEE Transactions on Computational Imaging, 6:1471–1485, 2020.
  [31] W. Van Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. De Beenhouwer, K. J. Batenburg, and J. Sijbers. Fast and flexible X-ray tomography using the ASTRA toolbox. Optics Express, 24(22):25129–25147, 2016.
  [32] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.