pith. machine review for the scientific record.

arXiv: 2604.11762 · v1 · submitted 2026-04-13 · 💻 cs.CV · cs.LG · eess.SP · physics.med-ph · stat.ML


MosaicMRI: A Diverse Dataset and Benchmark for Raw Musculoskeletal MRI

Berk Tinaz, Mahdi Soltanolkotabi, Maryam Soltanolkotabi, Mohammad Shahab Sepehri, Paula Arguello


Pith reviewed 2026-05-10 15:48 UTC · model grok-4.3

classification 💻 cs.CV · cs.LG · eess.SP · physics.med-ph · stat.ML
keywords musculoskeletal MRI · raw MRI dataset · accelerated reconstruction · anatomical diversity · cross-anatomy generalization · deep learning · scaling behavior · domain shift

The pith

Training reconstruction models jointly on scans from many body parts improves results when data is scarce, by exploiting shared anatomical features.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The authors introduce MosaicMRI, the largest public collection of raw musculoskeletal MRI data, with thousands of volumes spanning multiple body parts, contrasts, and orientations. Their reconstruction experiments show that when training examples are few, using data from all anatomies at once outperforms training separately on each body part, such as the knee or spine alone. This points to useful correlations between anatomies that models can exploit. They also observe varying degrees of generalization when models trained on one anatomy are tested on another, with some pairs, like foot and elbow, transferring well. The dataset fills the gap left by existing public raw-MRI resources, which focus mainly on brain and knee imaging.

Core claim

MosaicMRI comprises 2,671 volumes and 80,156 slices of fully sampled raw MSK MR measurements, with diversity in orientations, contrasts, anatomies (including spine, knee, hip, and ankle), and coil counts. Baseline experiments on accelerated reconstruction show that models trained on combined anatomies significantly outperform anatomy-specific models in low-sample regimes, a gain the authors attribute to anatomical diversity and cross-anatomical correlations. Cross-anatomy tests reveal groups of body parts that generalize well to one another, and show that performance under domain shift depends on training-set size, anatomy, and protocol.
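The benchmark comparisons behind this claim are reported in PSNR (dB) in the paper's figures. A minimal sketch of that metric as it is commonly computed for MRI reconstruction, with the peak taken as the maximum magnitude of the reference image (a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def psnr_db(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """PSNR in dB; the peak is taken as the max magnitude of the
    reference, a common MRI-benchmark convention (assumed here)."""
    mse = np.mean(np.abs(reference - reconstruction) ** 2)
    peak = np.abs(reference).max()
    return float(10.0 * np.log10(peak ** 2 / mse))

# Toy check: ~1% noise on a unit-range image lands near 40 dB.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
noisy = target + 0.01 * rng.standard_normal((64, 64))
print(round(psnr_db(target, noisy), 1))
```

A higher PSNR means lower reconstruction error relative to the image's dynamic range; differences of a fraction of a dB are routinely reported as meaningful in this literature, which is why the referee's request for error bars matters.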

What carries the argument

The MosaicMRI dataset of diverse raw musculoskeletal MRI volumes, which supports experiments on scaling behavior and cross-anatomy generalization for reconstruction tasks.

Load-bearing premise

Performance gains from combined training arise from models learning shared features across anatomies rather than from increased total data volume or similar acquisition protocols.

What would settle it

Repeating the low-sample reconstruction experiments with a single-anatomy training set whose total slice count matches the combined set and finding no performance difference would indicate the gains do not stem from cross-anatomical correlations.
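Such a cardinality-matched control is straightforward to set up. A sketch, assuming per-anatomy slice lists (the pool names and structure are hypothetical, for illustration only):

```python
import random

def matched_cardinality_sets(slices_by_anatomy, target_anatomy, seed=0):
    """Build two training sets of identical size: the combined
    multi-anatomy set, and a single-anatomy set oversampled (with
    replacement) to match its slice count. Any residual gap between
    models trained on the two sets can then be attributed to anatomical
    diversity rather than to total sample count."""
    rng = random.Random(seed)
    combined = [s for slices in slices_by_anatomy.values() for s in slices]
    single = slices_by_anatomy[target_anatomy]
    # Repeat/sample the single-anatomy pool up to the combined cardinality.
    single_matched = [rng.choice(single) for _ in range(len(combined))]
    return combined, single_matched

# Hypothetical slice identifiers, for illustration.
pools = {"knee": ["k1", "k2"], "spine": ["s1", "s2", "s3"], "hip": ["h1"]}
combined, knee_matched = matched_cardinality_sets(pools, "knee")
print(len(combined), len(knee_matched))  # both 6
```

The complementary control (subsampling the combined set down to the single-anatomy cardinality) follows the same pattern with the roles reversed.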

Figures

Figures reproduced from arXiv: 2604.11762 by Berk Tinaz, Mahdi Soltanolkotabi, Maryam Soltanolkotabi, Mohammad Shahab Sepehri, Paula Arguello.

Figure 1. MosaicMRI overview. (left) Anatomy distribution by volume count, showing a long-tailed composition dominated by spine (49%, 1,316 volumes), followed by shoulder (14%, 373) and knee (14%, 362). (right) Representative slices spanning six anatomy groups and three orientations (axial, sagittal, coronal); overlays report in-plane matrix size, receive-coil count, and number of slices.
Figure 2. PSNR versus training-set fraction for E2E-VarNet.
Figure 3. Mean PSNR (dB) of E2E-VarNet for cross-anatomy transfer.
Figure 4. Protocol generalization in MOSAICMRI. Mean PSNR (dB) for E2E-VarNet trained on each protocol (columns) and tested on each protocol (rows); Baseline is trained on all protocols. Boxes mark single-protocol models within 1 dB of the best per row.
Figure 5. Dataset diversity by anatomy. Violin/box plots summarize the distribution of receive-coil counts and slices per volume.
Figure 6. Qualitative accelerated reconstruction examples across anatomies. For each panel, columns (left to right) show the masked k-space after applying the undersampling pattern, the zero-filled RSS reconstruction, the reconstruction produced by VarNet trained on full MOSAICMRI, and the fully sampled target.
read the original abstract

Deep learning underpins a wide range of applications in MRI, including reconstruction, artifact removal, and segmentation. However, progress has been driven largely by public datasets focused on brain and knee imaging, shaping how models are trained and evaluated. As a result, careful studies of the reliability of these models across diverse anatomical settings remain limited. In this work, we introduce MosaicMRI, a large and diverse collection of fully sampled raw musculoskeletal (MSK) MR measurements designed for training and evaluating machine-learning-based methods. MosaicMRI is the largest open-source raw MSK MRI dataset to date, comprising 2,671 volumes and 80,156 slices. The dataset offers substantial diversity in volume orientation (e.g., axial, sagittal), imaging contrasts (e.g., PD, T1, T2), anatomies (e.g., spine, knee, hip, ankle, and others), and numbers of acquisition coils. Using VarNet as a baseline for accelerated reconstruction task, we perform a comprehensive set of experiments to study scaling behavior with respect to both model capacity and dataset size. Interestingly, models trained on the combined anatomies significantly outperform anatomy-specific models in low-sample regimes, highlighting the benefits of anatomical diversity and the presence of exploitable cross-anatomical correlations. We further evaluate robustness and cross-anatomy generalization by training models on one anatomy (e.g., spine) and testing them on another (e.g., knee). Notably, we identify groups of body parts (e.g., foot and elbow) that generalize well with each other, and highlight that performance under domain shifts depends on both training set size, anatomy, and protocol-specific factors.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper presents MosaicMRI, the largest open-source raw musculoskeletal MRI dataset to date (2,671 volumes, 80,156 slices) spanning multiple anatomies (spine, knee, hip, ankle, etc.), orientations, contrasts (PD, T1, T2), and coil counts. Using VarNet for accelerated reconstruction, the authors report scaling experiments with model capacity and dataset size, demonstrate that combined-anatomy training significantly outperforms anatomy-specific models in low-sample regimes (attributed to cross-anatomical correlations), and evaluate cross-anatomy generalization, identifying well-generalizing groups such as foot and elbow while noting dependence on training size, anatomy, and protocol factors.

Significance. If the empirical claims hold after controls, the work supplies a much-needed large-scale raw MSK benchmark that diversifies beyond brain/knee focus, enabling more reliable studies of DL reconstruction across anatomies. The scaling and cross-anatomy results, if isolated from confounds, would provide concrete evidence for the value of anatomical diversity in low-data regimes and could guide dataset construction and training strategies in MRI DL.

major comments (3)
  1. [Abstract / Experiments] Abstract and experiments section: the headline claim that combined-anatomy training outperforms anatomy-specific models due to 'exploitable cross-anatomical correlations' is not isolated from the confound of total training cardinality. The combined set uses the union of slices across anatomies (substantially larger than any single-anatomy slice count), yet no control experiment equalizing sample volume (e.g., repeating/augmenting single-anatomy data or subsampling to matched cardinality) is described. This directly undermines attribution of gains to correlations rather than data volume or shared protocol factors.
  2. [Abstract / Methods / Experiments] Abstract and methods: full details on data splits (train/val/test ratios per anatomy and combined), exact metrics (beyond implied reconstruction error), and statistical tests (significance of outperformance, error bars, multiple-comparison correction) are absent. Without these, the numerical claims on scaling and cross-anatomy generalization cannot be fully assessed for robustness.
  3. [Experiments] Experiments: all results are reported exclusively with VarNet; no architecture ablation (e.g., U-Net, MoDL, or transformer-based reconstructor) is performed. This limits the generality of the conclusion that anatomical diversity benefits low-sample regimes.
minor comments (2)
  1. [Abstract] Abstract: the phrase 'significantly outperform' should be accompanied by quantitative deltas or p-values once statistical details are added.
  2. [Dataset] Dataset description: clarify whether all volumes are fully sampled raw k-space and provide explicit coil-sensitivity map handling details for the VarNet baseline.
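The zero-filled RSS baseline that appears in the paper's qualitative figures (and that minor comment 2's coil-handling question touches on) can be sketched in a few lines. This is an illustrative reconstruction under assumed FFT conventions, not the paper's exact pipeline:

```python
import numpy as np

def zero_filled_rss(kspace: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero-filled root-sum-of-squares reconstruction from multi-coil
    k-space of shape (coils, H, W): apply the undersampling mask,
    inverse-FFT each coil image, then combine magnitudes across coils.
    The ifftshift convention is an assumption of this sketch."""
    coil_images = np.fft.ifft2(np.fft.ifftshift(kspace * mask, axes=(-2, -1)))
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Toy example: 4 coils, 32x32 k-space, 4x equispaced undersampling.
rng = np.random.default_rng(0)
kspace = rng.standard_normal((4, 32, 32)) + 1j * rng.standard_normal((4, 32, 32))
mask = np.zeros((32, 32))
mask[:, ::4] = 1.0  # keep every 4th phase-encode line
recon = zero_filled_rss(kspace, mask)
print(recon.shape)
```

RSS sidesteps explicit coil-sensitivity estimation, which is exactly why the referee asks how sensitivity maps were handled for the VarNet baseline, where estimated maps do enter the model.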

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback on our manuscript. We address each of the three major comments point-by-point below, indicating where revisions will be made to strengthen the work.

read point-by-point responses
  1. Referee: [Abstract / Experiments] Abstract and experiments section: the headline claim that combined-anatomy training outperforms anatomy-specific models due to 'exploitable cross-anatomical correlations' is not isolated from the confound of total training cardinality. The combined set uses the union of slices across anatomies (substantially larger than any single-anatomy slice count), yet no control experiment equalizing sample volume (e.g., repeating/augmenting single-anatomy data or subsampling to matched cardinality) is described. This directly undermines attribution of gains to correlations rather than data volume or shared protocol factors.

    Authors: We agree that the current presentation does not fully isolate cross-anatomical correlations from the effect of larger total training cardinality. In the revised manuscript we will add explicit control experiments: for the low-sample regimes we will augment or repeat single-anatomy slices (with appropriate randomization) to match the exact slice count used in the combined setting, and we will also report results when the combined set is subsampled to the same cardinality as the largest single-anatomy set. These controls will allow clearer attribution of any remaining gains to anatomical diversity. revision: yes

  2. Referee: [Abstract / Methods / Experiments] Abstract and methods: full details on data splits (train/val/test ratios per anatomy and combined), exact metrics (beyond implied reconstruction error), and statistical tests (significance of outperformance, error bars, multiple-comparison correction) are absent. Without these, the numerical claims on scaling and cross-anatomy generalization cannot be fully assessed for robustness.

    Authors: We will expand the Methods and Experiments sections to provide complete information. The revised manuscript will include: (i) explicit train/validation/test ratios and slice counts for every anatomy and for the combined dataset, (ii) the precise quantitative metrics employed (NMSE, SSIM, PSNR), and (iii) statistical reporting with error bars from multiple random seeds, p-values for key comparisons, and Bonferroni or FDR correction for multiple tests. These additions will make the numerical claims fully reproducible and assessable. revision: yes

  3. Referee: [Experiments] Experiments: all results are reported exclusively with VarNet; no architecture ablation (e.g., U-Net, MoDL, or transformer-based reconstructor) is performed. This limits the generality of the conclusion that anatomical diversity benefits low-sample regimes.

    Authors: We acknowledge that restricting all experiments to VarNet limits the generality of the claim. In the revised manuscript we will add an architecture ablation by repeating the key low-sample-regime scaling and cross-anatomy experiments with at least one additional reconstructor (a standard U-Net and, if space permits, a MoDL variant). This will demonstrate whether the observed benefits of combined-anatomy training hold across different network architectures. revision: yes
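The multiple-comparison correction promised in response 2 is mechanically simple. A sketch of the Bonferroni adjustment on a set of hypothetical p-values (the FDR alternative is analogous but rank-based):

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each p-value by the number of
    comparisons and clip at 1, controlling the family-wise error rate."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Four hypothetical anatomy-pair comparisons (values illustrative).
raw = [0.004, 0.03, 0.2, 0.01]
print(bonferroni(raw))  # [0.016, 0.12, 0.8, 0.04]
```

With many cross-anatomy cells in a transfer matrix, the correction factor grows quickly, so a rank-based FDR procedure may retain more power while still bounding false discoveries.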

Circularity Check

0 steps flagged

No circularity; purely empirical dataset and benchmark results

full rationale

The paper presents a new raw MSK MRI dataset and reports direct experimental observations using VarNet for accelerated reconstruction. The key claim—that combined-anatomy training outperforms anatomy-specific models in low-sample regimes—is an empirical finding from training and testing on the collected data, with no mathematical derivations, no parameters fitted and then relabeled as predictions, no self-citations invoked as load-bearing uniqueness theorems, and no ansatzes or renamings of prior results. All scaling and generalization statements are tied to explicit experiments on the new dataset rather than reducing to inputs by construction. The noted concern about total training volume is a potential confounding factor in interpretation but does not constitute circularity in any derivation chain.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper introduces no new theoretical constructs, free parameters, or invented entities; it relies on standard deep-learning assumptions for MRI reconstruction and the existing VarNet architecture.

axioms (1)
  • domain assumption The forward model for undersampled k-space data used by VarNet accurately represents the acquisition process.
    Implicit in all reconstruction experiments.
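This assumed forward model can be written out explicitly. The textbook parallel-MRI acquisition operator (standard notation, not quoted from the paper) is

```latex
y_i = M \mathcal{F} S_i x + n_i, \qquad i = 1, \dots, C,
```

where $x$ is the underlying image, $S_i$ the $i$-th coil sensitivity map, $\mathcal{F}$ the 2D Fourier transform, $M$ the undersampling mask, $n_i$ measurement noise, and $C$ the number of receive coils. VarNet-style reconstruction alternates data-consistency steps under this operator with learned regularization, so any mismatch between the model and the true acquisition propagates into every experiment.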

pith-pipeline@v0.9.0 · 5631 in / 1270 out tokens · 71620 ms · 2026-05-10T15:48:13.442420+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

43 extracted references · 4 canonical work pages
