pith. machine review for the scientific record.

arxiv: 2605.05447 · v1 · submitted 2026-05-06 · 💻 cs.CV

Recognition: unknown

EchoXFlow: A Beamspace Echocardiography Dataset for Cardiac Motion, Flow, and Function

Elias Stenhede, Joanna Sulkowska, Eivind Bjørkan Orstad, Henrik Schirmer, Arian Ranjbar


Pith reviewed 2026-05-08 16:18 UTC · model grok-4.3

classification 💻 cs.CV
keywords echocardiography · ultrasound dataset · beamspace · cardiac motion · blood flow · multi-modal learning · machine learning

The pith

The EchoXFlow dataset preserves 37,125 echocardiography recordings in native beamspace geometry, with separate anatomy, motion, and flow streams.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces EchoXFlow as a clinical dataset of 37,125 recordings from 666 routine examinations that retains the original ultrasound acquisition format instead of releasing only processed Cartesian videos. It keeps temporally resolved 1D, 2D, and 3D data streams, multiple Doppler modalities, synchronized ECG, and annotations ranging from guideline measurements to dense 2D myocardial contours and 3D left-ventricular meshes. The authors position the dataset as enabling cross-modal learning tasks that combine cardiac anatomy, motion, and blood flow while respecting physical acquisition constraints. This structure is presented as necessary because conventional public datasets discard the raw geometry and modality relationships through vendor display processing.

Core claim

EchoXFlow comprises 37,125 recordings from 666 examinations, each retained as separable modality-specific streams of temporally resolved 1D, 2D, and 3D data alongside multiple Doppler modalities and a synchronized ECG, paired with clinical annotations that span guideline-based measurements, dense 2D myocardial contours, and 3D left-ventricular endocardial meshes, thereby enabling cross-modal, acquisition-aware learning tasks unavailable from scan-converted videos alone.

What carries the argument

Preservation of native beamspace geometry together with separable modality streams that maintain original timing, geometry, and relationships between anatomy, motion, and flow signals.
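Scan conversion is exactly the step the dataset lets users defer. A minimal sketch of what that step does to a sector acquisition, assuming beams at known steering angles and uniformly sampled depths — the geometry, array names, and nearest-neighbour lookup here are illustrative, not EchoXFlow's actual format or tooling:

```python
import numpy as np

def scan_convert(beamspace, angles, depths, grid_size=256):
    """Nearest-neighbour scan conversion of one (n_beams, n_samples)
    sector frame to a Cartesian image. Illustrative only: real
    converters interpolate and use the probe's exact geometry."""
    half_width = depths[-1] * np.sin(angles.max())
    x = np.linspace(-half_width, half_width, grid_size)
    z = np.linspace(0.0, depths[-1], grid_size)
    xx, zz = np.meshgrid(x, z)
    r = np.hypot(xx, zz)          # radial distance of each pixel
    th = np.arctan2(xx, zz)       # steering angle of each pixel
    # Map (angle, radius) back to fractional beam/sample indices.
    bi = np.interp(th, angles, np.arange(len(angles)))
    si = np.interp(r, depths, np.arange(len(depths)))
    out = np.full((grid_size, grid_size), np.nan)
    inside = (th >= angles[0]) & (th <= angles[-1]) & (r <= depths[-1])
    out[inside] = beamspace[np.round(bi[inside]).astype(int),
                            np.round(si[inside]).astype(int)]
    return out
```

The resampling is many-to-one near the apex and one-to-many at depth, which is why training directly in beamspace avoids a spatially varying, irreversible distortion.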

If this is right

  • Models can be trained directly on relationships between cardiac anatomy, myocardial motion, and blood flow without the distortions introduced by scan conversion.
  • The dataset supports 4D vision methods that operate on the original acquisition coordinates and timing.
  • Dense 2D contours and 3D meshes become available for supervised training of segmentation and reconstruction networks.
  • Physically grounded multi-modal learning becomes feasible because ECG, Doppler, and B-mode streams remain aligned at the acquisition level.
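The ECG bullet is concrete enough to sketch. A toy alignment under the assumption that each recording ships frame timestamps plus a synchronized ECG trace; the peak detector is a deliberately crude placeholder (a real pipeline would use something like Pan-Tompkins), and none of these names come from the EchoXFlow tooling:

```python
import numpy as np

def r_peaks(ecg, fs, thresh=0.6):
    """Crude R-peak detector: local maxima above a fraction of the
    signal maximum, with a 200 ms refractory period. Placeholder only."""
    x = ecg / np.max(np.abs(ecg))
    idx = np.flatnonzero((x[1:-1] > thresh) &
                         (x[1:-1] >= x[:-2]) & (x[1:-1] > x[2:])) + 1
    keep, last = [], -np.inf
    for i in idx:
        if i - last > 0.2 * fs:   # suppress double detections
            keep.append(i)
            last = i
    return np.asarray(keep)

def cardiac_phase(frame_times, peak_times):
    """Map each frame timestamp to a phase in [0, 1) within its
    R-R interval, so frames from different recordings of the same
    exam can be compared at matching points in the cardiac cycle."""
    peak_times = np.asarray(peak_times)
    k = np.searchsorted(peak_times, frame_times, side='right') - 1
    k = np.clip(k, 0, len(peak_times) - 2)
    rr = peak_times[k + 1] - peak_times[k]
    return np.clip((frame_times - peak_times[k]) / rr, 0.0, 1.0)
```

Because every stream carries the same acquisition clock, phase computed from the ECG of one recording indexes directly into the frames of another.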

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Head-to-head comparisons between models trained on native versus converted data could quantify how much performance is lost when raw geometry is discarded.
  • The dataset may allow development of acquisition-aware networks that adapt to differences in beam density or probe settings across vendors.
  • Researchers could test whether preserving Doppler as separate channels rather than RGB overlays improves flow estimation accuracy.
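The third bullet can be made concrete: keeping Doppler as a signed velocity channel preserves information that an RGB colour overlay bakes in irreversibly. A sketch of the separate-channel input construction, with all array names hypothetical:

```python
import numpy as np

def stack_streams(bmode, doppler_vel, doppler_mask):
    """Assemble a (C, H, W) network input that keeps Doppler as a
    signed velocity channel plus a validity mask, rather than an RGB
    overlay that mixes flow colour into the anatomy image. Array
    names are illustrative, not EchoXFlow field names."""
    vel = np.where(doppler_mask, doppler_vel, 0.0)  # zero outside the flow ROI
    return np.stack([bmode.astype(np.float32),
                     vel.astype(np.float32),
                     doppler_mask.astype(np.float32)])
```

A flow-estimation model trained on this input sees real velocities and knows where Doppler was actually measured; the overlay baseline has to reverse-engineer both from a colour map.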

Load-bearing premise

The assumption that retaining native beamspace geometry and separate modality streams supplies meaningful advantages for machine learning over existing scan-converted datasets.

What would settle it

A controlled experiment that trains identical model architectures on native EchoXFlow recordings and on scan-converted versions of the same data: if no cardiac motion, flow, or function prediction task improves on the native data, the load-bearing premise fails.

Figures

Figures reproduced from arXiv: 2605.05447 by Arian Ranjbar, Eivind Bjørkan Orstad, Elias Stenhede, Henrik Schirmer, Joanna Sulkowska.

Figure 1: A frame of a color Doppler video in EchoXFlow, which can be scan-converted to a clinical view.
Figure 2: Recordings are paired with ECGs, enabling alignment between recordings within an exam.
Figure 3: Structure of multimodal echocardiography recordings showing one example of the ac…
Figure 4: Three rows from three unique patients in EchoXFlow; each column corresponds to an…
Figure 5: Strain curves over time; each row corresponds to a row in Figure 4.
Figure 6: A single frame of a 3D volume with the LV endocardial segmentation mask. The…
Figure 7: The volume curve calculated from the LV endocardial segmentation mask. This…
Original abstract

We introduce EchoXFlow, a clinical echocardiography dataset for learning from ultrasound in its native acquisition geometry rather than from scan-converted Cartesian videos. Existing public datasets offer limited opportunities to study cross-modal relationships between cardiac anatomy, myocardial motion, and blood flow, as Doppler is typically absent or fused as RGB overlays, and acquisitions are released after lossy vendor display processing. EchoXFlow comprises 37125 recordings from 666 routine-care examinations, preserving the timing, geometry, and modality relationships needed for physically grounded echo learning. Each recording is retained as separable modality-specific streams: temporally resolved 1D, 2D, and 3D data alongside multiple Doppler modalities, paired with a synchronized ECG. Clinical annotations span guideline-based measurements to dense 2D myocardial contours and 3D left-ventricular endocardial meshes. With its associated open-source tooling, EchoXFlow enables cross-modal, acquisition-aware learning tasks that cannot be formulated from conventional scan-converted videos alone, and serves as a testbed for 4D vision and physically grounded multi-modal learning more broadly.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces EchoXFlow, a clinical echocardiography dataset comprising 37,125 recordings from 666 routine-care examinations. It preserves native beamspace geometry and separable modality-specific streams (temporally resolved 1D/2D/3D data, multiple Doppler modalities, synchronized ECG) rather than releasing only scan-converted Cartesian videos with fused RGB overlays. Clinical annotations include guideline-based measurements, dense 2D myocardial contours, and 3D left-ventricular endocardial meshes. The central claim is that the dataset and associated open-source tooling enable cross-modal, acquisition-aware learning tasks impossible to formulate from conventional scan-converted videos, serving as a testbed for 4D vision and physically grounded multi-modal learning.

Significance. If released as described, EchoXFlow would be a valuable resource for the field. By retaining native acquisition geometry and modality separability, it directly supports formulation of tasks that exploit ultrasound physics (e.g., beamspace processing, Doppler-anatomy alignment, ECG synchronization) that are lost in vendor-processed outputs. The scale (over 37k recordings) and breadth of annotations are strengths for supervised and self-supervised 4D/multi-modal work. The enabling property of the dataset holds by construction from the described release format, independent of empirical superiority demonstrations.

major comments (2)
  1. [Methods (implied missing)] The manuscript provides a high-level description of data composition and annotations in the abstract and introduction, but lacks a dedicated Methods section detailing acquisition protocols, quality control, exclusion criteria, and annotation validation procedures (e.g., inter-observer variability for contours and meshes). These details are load-bearing for assessing reproducibility and suitability for training robust models, as noted in the soundness assessment.
  2. [Introduction / Dataset Description] While the claim that native beamspace enables new tasks is true by construction, the paper does not include even a small illustrative example or baseline experiment (e.g., in a Results or Experiments section) showing a concrete cross-modal task formulation that is impossible on scan-converted data. Adding one would make the significance of the release more concrete without requiring full benchmarking.
minor comments (2)
  1. [Abstract / Dataset Overview] Clarify the breakdown of the 37,125 recordings by modality type, view, or patient in a table or supplementary material for better transparency on dataset composition.
  2. [Conclusion] The open-source tooling is mentioned but not described in terms of specific functions or access instructions; a brief usage example or repository link with documentation would improve usability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive review and positive assessment of EchoXFlow. We appreciate the feedback highlighting opportunities to strengthen the manuscript, and we address each major comment below.

Point-by-point responses
  1. Referee: [Methods (implied missing)] The manuscript provides a high-level description of data composition and annotations in the abstract and introduction, but lacks a dedicated Methods section detailing acquisition protocols, quality control, exclusion criteria, and annotation validation procedures (e.g., inter-observer variability for contours and meshes). These details are load-bearing for assessing reproducibility and suitability for training robust models, as noted in the soundness assessment.

    Authors: We agree that a dedicated Methods section is required for proper evaluation of reproducibility. In the revised manuscript we will add a Methods section describing the acquisition protocols for the routine-care examinations, quality control procedures applied to the recordings, exclusion criteria for the 666 examinations, and annotation validation including inter-observer variability metrics for the 2D contours and 3D meshes. revision: yes

  2. Referee: [Introduction / Dataset Description] While the claim that native beamspace enables new tasks is true by construction, the paper does not include even a small illustrative example or baseline experiment (e.g., in a Results or Experiments section) showing a concrete cross-modal task formulation that is impossible on scan-converted data. Adding one would make the significance of the release more concrete without requiring full benchmarking.

    Authors: We acknowledge that an illustrative example would make the enabling properties more tangible for readers. Although the claim holds by construction, we will add a brief, limited-scope example in the revised manuscript (e.g., a new subsection) demonstrating one concrete cross-modal task such as ECG-synchronized Doppler-anatomy alignment in native beamspace, which cannot be formulated from scan-converted videos. revision: yes

Circularity Check

0 steps flagged

No significant circularity

full rationale

This is a dataset release paper with no mathematical derivations, predictions, fitted parameters, or load-bearing self-citations. The central claim—that EchoXFlow enables cross-modal tasks impossible from scan-converted videos—follows directly from the explicit description of its construction (37125 recordings retained as separable modality streams with native geometry, Doppler, ECG, and annotations). No step reduces to an input by definition or self-reference; the contribution is the data itself rather than any derived result.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a dataset release paper with no mathematical derivations, free parameters, axioms, or invented entities. The contribution consists entirely of data collection, curation, and tooling.

pith-pipeline@v0.9.0 · 5511 in / 1290 out tokens · 94056 ms · 2026-05-08T16:18:13.553716+00:00 · methodology

