DyABD: The Abdominal Muscle Segmentation in Dynamic MRI Benchmark
Pith reviewed 2026-05-08 08:40 UTC · model grok-4.3
The pith
DyABD is the first benchmark dataset for abdominal muscle segmentation in dynamic MRIs of exercising hernia patients, where most models reach only 0.82 Dice.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
DyABD establishes the first annotated collection of dynamic abdominal MRIs acquired during various exercises in patients with abdominal hernias, encompassing both pre- and post-corrective-surgery scans, and through systematic testing shows that the majority of current segmentation models across learning paradigms attain a Dice coefficient of 0.82 on this unseen benchmark.
What carries the argument
The DyABD benchmark dataset, which defines the abdominal muscle segmentation task on dynamic MRIs featuring high anatomical variability from exercise and pre/post surgery conditions.
If this is right
- Clinical studies of abdominal hernia recurrence can now incorporate quantitative segmentation of muscle structures from dynamic scans.
- Segmentation models require evaluation on dynamic data with extreme anatomical variation to prove real-world generalization.
- The 0.82 Dice ceiling on this benchmark indicates that progress metrics based on static images overstate capability for motion-heavy cases.
- Pre- and post-surgery image pairs enable direct measurement of muscle changes after corrective procedures.
- Future medical imaging benchmarks should include similar exercise-induced variability to reflect practical diagnostic conditions.
Where Pith is reading between the lines
- Accurate dynamic segmentation could improve surgical planning by revealing how muscles deform under load in hernia patients.
- The dataset format may encourage parallel benchmarks for other moving anatomy such as cardiac or respiratory structures.
- Persistent gaps in zero-shot performance suggest that transfer from static MRI datasets is insufficient without motion-aware architectures.
Load-bearing premise
The high-quality annotations accurately and consistently mark abdominal muscles across all frames of the dynamic sequences despite motion-induced distortions, and the patient cohort with chosen exercises represents typical clinical cases.
What would settle it
Independent experts re-annotating a held-out subset of images and reporting inter-annotator Dice agreement below 0.9, or a new segmentation method achieving sustained Dice scores above 0.90 on the full DyABD test set.
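The inter-annotator half of this test can be made concrete. A minimal sketch of mean pairwise Dice agreement on binary masks (the two-rater setup and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)

def inter_annotator_agreement(masks: list) -> float:
    """Mean pairwise Dice across all rater pairs for one frame."""
    scores = [
        dice(masks[i], masks[j])
        for i in range(len(masks))
        for j in range(i + 1, len(masks))
    ]
    return float(np.mean(scores))

# Illustrative two-rater example on a toy 4x4 frame
r1 = np.zeros((4, 4), dtype=bool); r1[1:3, 1:3] = True  # 4 px
r2 = np.zeros((4, 4), dtype=bool); r2[1:3, 1:4] = True  # 6 px, 4 px overlap
agreement = inter_annotator_agreement([r1, r2])          # 2*4/(4+6) = 0.8
```

Averaging this over a held-out subset of frames is what the 0.9 threshold above would be compared against.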
read the original abstract
This work introduces DyABD, a novel and complex benchmark dataset of dynamic abdominal MRIs from patients with abdominal hernias and associated high quality abdominal muscle annotations. DyABD is the first-of-its-kind in four key ways; (1) it proposes the first abdominal muscle segmentation task, (2) the dynamic MRIs are acquired whilst the patients perform various exercises, introducing extreme anatomical variability, making it one of the most challenging segmentation datasets to date, (3) it includes both pre and post corrective MRIs and (4) DyABD promotes clinical research into the high recurrence rates of abdominal hernias. Beyond dataset introduction, this work provides a comprehensive evaluation of the generalisation capabilities of existing segmentation models across Supervised, Few Shot and Zero Shot paradigms on the unseen DyABD dataset. This work reveals that there is still room for substantial improvement in the field of medical image segmentation, with the majority of techniques achieving a Dice Coefficient of 0.82. This work therefore sheds light on the true progress of the field and redefines the benchmark for progress in medical image segmentation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This manuscript introduces DyABD, a benchmark dataset of dynamic abdominal MRI scans from patients with abdominal hernias, acquired during various exercises and including both pre- and post-corrective surgery images, along with annotations for abdominal muscle segmentation. It claims to be the first such dataset in four respects: defining the abdominal muscle segmentation task, capturing extreme anatomical variability from dynamic exercise imaging, including pre/post scans, and enabling clinical research on hernia recurrence rates. The paper also reports a comprehensive evaluation of existing segmentation models under supervised, few-shot, and zero-shot paradigms on held-out DyABD data, concluding that most techniques achieve a Dice coefficient of 0.82 and that substantial room for improvement remains in medical image segmentation.
Significance. If the annotations are shown to be reliable and the dataset representative of clinical variability, DyABD could serve as a valuable, challenging benchmark for advancing segmentation methods that handle dynamic anatomical changes in MRI, with direct relevance to improving understanding of abdominal hernia recurrence. The evaluation across multiple learning paradigms is a positive step toward assessing generalization. However, the lack of supporting details on annotation quality and experimental protocols currently limits the strength of the claim that current methods have reached a performance ceiling.
major comments (2)
- Dataset description: The central claim that DyABD supplies 'high quality abdominal muscle annotations' for an extremely variable dynamic task is load-bearing for both the benchmark value and the reported performance ceiling (Dice 0.82). The manuscript describes acquisition during exercises and pre/post scans but supplies no quantitative evidence on the annotation process: number of raters, training, slice-by-slice vs. propagated labels, or agreement statistics (e.g., mean Dice or surface distance between independent annotations on the same dynamic sequence). Without these, it is impossible to separate model failure from label noise in the reported generalization results.
- Evaluation and results: The abstract and main text describe a comprehensive evaluation across supervised, few-shot, and zero-shot paradigms on unseen data, yet supply no details on model architectures, training protocols, data splits, or statistical tests. This makes it unclear whether the 0.82 Dice figure fairly represents current capabilities or whether the conclusion of 'substantial room for improvement' is supported by reproducible evidence.
minor comments (1)
- Abstract: The claim that DyABD is 'one of the most challenging segmentation datasets to date' would be strengthened by a brief comparison to existing dynamic or abdominal MRI benchmarks in terms of variability metrics.
Simulated Author's Rebuttal
We thank the referee for their constructive comments on our manuscript. We address each major comment point by point below, agreeing where the manuscript requires additional detail and outlining the revisions we will make.
read point-by-point responses
Referee: Dataset description: The central claim that DyABD supplies 'high quality abdominal muscle annotations' for an extremely variable dynamic task is load-bearing for both the benchmark value and the reported performance ceiling (Dice 0.82). The manuscript describes acquisition during exercises and pre/post scans but supplies no quantitative evidence on the annotation process: number of raters, training, slice-by-slice vs. propagated labels, or agreement statistics (e.g., mean Dice or surface distance between independent annotations on the same dynamic sequence). Without these, it is impossible to separate model failure from label noise in the reported generalization results.
Authors: We agree that quantitative evidence on the annotation process is necessary to support the claim of high-quality annotations and to enable readers to distinguish between model limitations and potential label noise. The current manuscript describes the acquisition protocol and pre/post scans but does not include the requested quantitative details. In the revised manuscript, we will add a dedicated subsection on annotation methodology that specifies the number of expert raters, their training, whether annotations were performed slice-by-slice or with temporal propagation, and inter-rater agreement statistics (including mean Dice and surface distance on a sampled subset of dynamic sequences). revision: yes
Referee: Evaluation and results: The abstract and main text describe a comprehensive evaluation across supervised, few-shot, and zero-shot paradigms on unseen data, yet supply no details on model architectures, training protocols, data splits, or statistical tests. This makes it unclear whether the 0.82 Dice figure fairly represents current capabilities or whether the conclusion of 'substantial room for improvement' is supported by reproducible evidence.
Authors: We acknowledge that the manuscript would be strengthened by explicit details on the experimental setup to support reproducibility and the performance claims. While the paper reports results across the three paradigms and states the average Dice score, it does not provide the specific model architectures, training protocols, data split information, or statistical tests. In the revision, we will expand the evaluation section to include these elements: the exact architectures evaluated, training hyperparameters and protocols, the precise train/validation/test splits on DyABD, and results of any statistical significance tests. This will better substantiate the conclusion that substantial room for improvement remains. revision: yes
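One concrete way to make the promised splits leakage-free, given that DyABD pairs pre- and post-surgery scans from the same patients, is to split at the patient level rather than the scan level. A hedged sketch (the scan-ID scheme, patient mapping, and split fractions are hypothetical, not from the paper):

```python
import random

def patient_level_split(scan_ids, patient_of, val_frac=0.15, test_frac=0.15, seed=0):
    """Assign scans to train/val/test so that every scan of a given patient
    (pre- and post-surgery, all exercise sequences) lands in the same split."""
    patients = sorted({patient_of[s] for s in scan_ids})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    n_val = max(1, int(len(patients) * val_frac))
    test_p = set(patients[:n_test])
    val_p = set(patients[n_test:n_test + n_val])
    return {
        s: "test" if patient_of[s] in test_p
        else "val" if patient_of[s] in val_p
        else "train"
        for s in scan_ids
    }

# Toy example: three patients, each with a pre- and post-surgery scan
scans = ["p1_pre", "p1_post", "p2_pre", "p2_post", "p3_pre", "p3_post"]
patient_of = {s: s.split("_")[0] for s in scans}
split = patient_level_split(scans, patient_of, seed=42)
# Paired scans of one patient never straddle a split boundary
```

Scan-level splitting would instead let near-duplicate frames of one patient appear in both train and test, inflating the reported Dice.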
Circularity Check
No circularity: new dataset introduction with standard empirical evaluation on held-out data
full rationale
The manuscript introduces DyABD as a new benchmark dataset of dynamic abdominal MRIs with annotations and evaluates existing segmentation models (supervised, few-shot, zero-shot) using conventional metrics such as Dice coefficient. No equations, derivations, fitted parameters, or predictions appear in the provided text. Claims of novelty rest on dataset collection details (exercise-acquisition, pre/post scans) rather than any self-referential logic. Reported Dice scores of ~0.82 are direct empirical measurements on unseen data, not reductions to inputs by construction. Annotation quality is asserted but not quantitatively verified in the excerpt; this is a limitation of evidence strength, not circularity in any derivation chain.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: the Dice coefficient is an appropriate and sufficient metric for assessing segmentation performance in this domain.
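This assumption can be probed by pairing Dice with a boundary-sensitive companion metric, since Dice measures volumetric overlap and can look flattering while segment boundaries drift. A sketch using mean symmetric surface distance, a standard companion metric (the referee exchange above mentions surface distance; the toy masks are illustrative, and the implementation assumes non-empty masks):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

def mean_surface_distance(a, b):
    """Mean symmetric distance (in pixels) between two masks' boundaries."""
    a, b = a.astype(bool), b.astype(bool)
    # boundary = mask minus its erosion
    sa = a & ~binary_erosion(a)
    sb = b & ~binary_erosion(b)
    # distance map to the other mask's boundary, sampled at boundary pixels
    da = distance_transform_edt(~sb)[sa]
    db = distance_transform_edt(~sa)[sb]
    return float(np.concatenate([da, db]).mean())

# Toy masks: same-size squares offset by one pixel
a = np.zeros((6, 6), dtype=bool); a[1:5, 1:5] = True
b = np.zeros((6, 6), dtype=bool); b[2:6, 2:6] = True
d = dice(a, b)   # 9 px overlap out of 16+16 -> 0.5625
msd = mean_surface_distance(a, b)
```

Two predictions with equal Dice can differ sharply in surface distance, which is why reporting both is a common check on the sufficiency of Dice alone.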
Reference graph
Works this paper leans on
- [1] Ouyang, C., Biffi, C., Chen, C., Kart, T., Qiu, H., Rueckert, D.: Self-supervision with superpixels: Training few-shot medical image segmentation without annotation. In: Computer Vision – ECCV 2020, Part XXIX, pp. 762–780. Springer (2020)
- [2] Tang, H., Liu, X., Sun, S., Yan, X., Xie, X.: Recurrent mask refinement for few-shot medical image segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3918–3928 (2021)
- [3] Hansen, S., Gautam, S., Jenssen, R., Kampffmeyer, M.: Anomaly detection-inspired few-shot medical image segmentation through self-supervision with supervoxels. Medical Image Analysis 78, 102385 (2022)
- [4] Wu, H., Xiao, F., Liang, C.: Dual contrastive learning with anatomical auxiliary supervision for few-shot medical image segmentation. In: European Conference on Computer Vision, pp. 417–434. Springer (2022)
- [5] Ding, H., Sun, C., Tang, H., Cai, D., Yan, Y.: Few-shot medical image segmentation with cycle-resemblance attention. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2488–2497 (2023)
- [6] Lei, W., Su, Q., Jiang, T., Gu, R., Wang, N., Liu, X., Wang, G., Zhang, X., Zhang, S.: One-shot weakly-supervised segmentation in 3D medical images. IEEE Transactions on Medical Imaging (2023)
- [7] Zhu, Y., Wang, S., Xin, T., Zhang, H.: Few-shot medical image segmentation via a region-enhanced prototypical transformer. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 271–280. Springer (2023)
- [8] Cheng, Z., Wang, S., Xin, T., Zhou, T., Zhang, H., Shao, L.: Few-shot medical image segmentation via generating multiple representative descriptors. IEEE Transactions on Medical Imaging (2024)
- [9] Bhardwaj, P., Huayllani, M.T., Olson, M.A., Janis, J.E.: Year-over-year ventral hernia recurrence rates and risk factors. JAMA Surgery (2024)
- [10] Ma, J., Li, F., Wang, B.: U-Mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv preprint arXiv:2401.04722 (2024)
- [11] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)
- [12] Butoi, V.I., Ortiz, J.J.G., Ma, T., Sabuncu, M.R., Guttag, J., Dalca, A.V.: UniverSeg: Universal medical image segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 21438–21451 (2023)
- [13] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.-Y., et al.: Segment anything. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4015–4026 (2023)
- [14] Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nature Communications 15(1), 654 (2024)
- [15] Ravi, N., Gabeur, V., Hu, Y.-T., Hu, R., Ryali, C., Ma, T., Khedr, H., Rädle, R., Rolland, C., Gustafson, L., et al.: SAM 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714 (2024)
- [16] Stedman, T.: Stedman's Medical Dictionary. Dalcassian Publishing Company (1920)
- [17] Jourdan, A., Rapacchi, S., Guye, M., Bendahan, D., Masson, C., Bège, T.: Dynamic-MRI quantification of abdominal wall motion and deformation during breathing and muscular contraction. Computer Methods and Programs in Biomedicine 217, 106667 (2022)
- [18] Joppin, V., Jourdan, A., Bendahan, D., Soucasse, A., Guye, M., Masson, C., Bège, T.: Towards a better understanding of abdominal wall biomechanics: In vivo relationship between dynamic intra-abdominal pressure and magnetic resonance imaging measurements. Clinical Biomechanics 121, 106396 (2025)
- [19] Landman, B., Xu, Z., Igelsias, J., Styner, M., Langerak, T., Klein, A.: MICCAI multi-atlas labeling beyond the cranial vault – workshop and challenge. In: Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault – Workshop Challenge, vol. 5, p. 12 (2015)
- [20] Ouyang, C., Biffi, C., Chen, C., Kart, T., Qiu, H., Rueckert, D.: Self-supervised learning for few-shot medical image segmentation. IEEE Transactions on Medical Imaging 41(7), 1837–1848 (2022)
- [21] Zhuang, X., Shen, J.: Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Medical Image Analysis 31, 77–87 (2016)
- [22] Chaitanya, K., Erdil, E., Karani, N., Konukoglu, E.: Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation. Medical Image Analysis 87, 102792 (2023)
- [23] Zhuang, X.: Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE Transactions on Pattern Analysis and Machine Intelligence 41(12), 2933–2946 (2018)
- [24] Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.-A., Cetin, I., Lekadir, K., Camara, O., Ballester, M.A.G., et al.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging 37(11), 2514–2525 (2018)
- [25] Heller, N., Sathianathen, N., Kalapara, A., Walczak, E., Moore, K., Kaluzniak, H., Rosenberg, J., Blake, P., Rengel, Z., Oestreich, M., et al.: The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv preprint arXiv:1904.00445 (2019)
- [26] Feng, R., Zheng, X., Gao, T., Chen, J., Wang, W., Chen, D.Z., Wu, J.: Interactive few-shot learning: Limited supervision, better medical image segmentation. IEEE Transactions on Medical Imaging 40(10), 2575–2588 (2021)
- [27] Kavur, A.E., Gezer, N.S., Barış, M., Aslan, S., Conze, P.-H., Groza, V., Pham, D.D., Chatterjee, S., Ernst, P., Özkan, S., et al.: CHAOS challenge – combined (CT-MR) healthy abdominal organ segmentation. Medical Image Analysis 69, 101950 (2021)
- [28] Antonelli, M., Reinke, A., Bakas, S., Farahani, K., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., Ronneberger, O., Summers, R.M., et al.: The medical segmentation decathlon. Nature Communications 13(1), 4128 (2022)
- [29] Bilic, P., Christ, P., Li, H.B., Vorontsov, E., Ben-Cohen, A., Kaissis, G., Szeskin, A., Jacobs, C., Mamani, G.E.H., Chartrand, G., Bakas, S., et al.: The liver tumor segmentation benchmark (LiTS). Medical Image Analysis 84, 102680 (2023)
- [30] Ogier, A., Sdika, M., Foure, A., Le Troter, A., Bendahan, D.: Individual muscle segmentation in MR images: A 3D propagation through 2D non-linear registration approaches. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 317–320. IEEE (2017)
- [31] Wang, K., Liew, J.H., Zou, Y., Zhou, D., Feng, J.: PANet: Few-shot image semantic segmentation with prototype alignment. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9197–9206 (2019)
- [32] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Part III, pp. 234–241. Springer (2015)
- [33] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
- [34] Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: Computer Vision – ECCV 2014, Part V, pp. 740–755. Springer (2014)
- [35] Mazurowski, M.A., Dong, H., Gu, H., Yang, J., Konz, N., Zhang, Y.: Segment anything model for medical image analysis: an experimental study. Medical Image Analysis 89, 102918 (2023)
- [36] Dosovitskiy, A., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
- [37] Ryali, C., Hu, Y.-T., Bolya, D., Wei, C., Fan, H., Huang, P.-Y., Aggarwal, V., Chowdhury, A., Poursaeed, O., Hoffman, J., et al.: Hiera: A hierarchical vision transformer without the bells-and-whistles. In: International Conference on Machine Learning, pp. 29441–29454. PMLR (2023)
- [38] Joppin, V., Belton, N., Hostin, M.A., Bellemare, M.-E., Lawlor, A., Curran, K.M., Bège, T., Masson, C., Bendahan, D.: Automatic muscle segmentation on healthy abdominal MRI using nnUNet. In: Medical Imaging with Deep Learning Short Papers (2024)
- [39] Baid, U., Ghodasara, S., Mohan, S., Bilello, M., Calabrese, E., Colak, E., Farahani, K., Kalpathy-Cramer, J., Kitamura, F.C., Pati, S., Bakas, S.: The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint arXiv:2107.02314 (2021)
- [40] Xie, Y., Zhang, J., Shen, C., Xia, Y.: CoTr: Efficiently bridging CNN and transformer for 3D medical image segmentation. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, Part III, pp. 171–180. Springer (2021)
- [41] Hatamizadeh, A., Tang, Y., Nath, V., Yang, D., Myronenko, A., Landman, B., Roth, H.R., Xu, D.: UNETR: Transformers for 3D medical image segmentation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 574–584 (2022)
- [42] Wang, X., Han, S., Chen, Y., Gao, D., Vasconcelos, N.: Volumetric attention for 3D medical image segmentation and detection. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Part VI, pp. 175–184. Springer (2019)
- [43] Xing, Z., Ye, T., Yang, Y., Liu, G., Zhu, L.: SegMamba: Long-range sequential modeling Mamba for 3D medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 578–588. Springer (2024)
- [44] Ji, G.-P., Xiao, G., Chou, Y.-C., Fan, D.-P., Zhao, K., Chen, G., Van Gool, L.: Video polyp segmentation: A deep learning perspective. Machine Intelligence Research 19(6), 531–549 (2022)
- [45] Chen, Y., Son, M., Hua, C., Kim, J.-Y.: AoP-SAM: Automation of prompts for efficient segmentation. In: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
- [46] Belton, N., Joppin, V., Lawlor, A., Curran, K.M., Masson, C., Bege, T., Bendahan, D.: DyABD: A dataset and technique for synthetically generating dynamic abdominal MRIs with dual class and anatomically conditioned diffusion models. In: Short Papers Medical Imaging with Deep Learning (2024)