pith. machine review for the scientific record. sign in

arxiv: 2604.10970 · v1 · submitted 2026-04-13 · 💻 cs.CV

Recognition: unknown

Using Deep Learning Models Pretrained by Self-Supervised Learning for Protein Localization

Andreas Weinmann, Ben Isselmann, Dilara G\"oksu, Heinz Neumann

Authors on Pith no claims yet

Pith reviewed 2026-05-10 15:52 UTC · model grok-4.3

classification 💻 cs.CV
keywords self-supervised learningprotein localizationmicroscopytransfer learningvision transformersDINOcell imaging
0
0 comments X

The pith

DINO-pretrained vision transformers transfer protein localization features from HPA images to new OpenCell datasets at high accuracy even without fine-tuning.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests whether self-supervised models trained on large collections of microscopy images can be reused on smaller protein localization tasks with different staining methods. It evaluates DINO-based Vision Transformer backbones pretrained on HPA field-of-view images or ImageNet, measuring zero-shot performance on OpenCell and the gains from subsequent fine-tuning. A reader would care because many biology imaging datasets are too small to train robust models from scratch, so reusable pretrained features could make deep learning practical for protein localization studies. The results show the HPA-pretrained model reaches 0.822 macro F1 zero-shot and 0.860 after fine-tuning, while single-cell embeddings from domain-specific pretraining perform best in nearest-neighbor classification.

Core claim

DINO-based ViT backbones pretrained on HPA FOV or ImageNet-1k transfer well to OpenCell even without fine-tuning. The HPA FOV-pretrained model achieved the highest zero-shot performance (macro F1 0.822 ± 0.007). Fine-tuning further improved performance to 0.860 ± 0.013. At the single-cell level, the HPA single-cell-pretrained model achieved the highest k-nearest neighbor performance across all neighborhood sizes (macro F1 ≥ 0.796).

What carries the argument

DINO self-supervised pretraining of Vision Transformer backbones on large domain-specific microscopy sets such as HPA FOV, whose learned embeddings are extracted and used for downstream protein localization classification.

If this is right

  • Zero-shot transfer from HPA-pretrained DINO models outperforms ImageNet pretraining and other baselines on OpenCell protein localization.
  • Fine-tuning the transferred embeddings on even small fractions of OpenCell labels raises macro F1 from 0.822 to 0.860.
  • HPA single-cell pretraining yields the strongest k-nearest-neighbor accuracy at every neighborhood size on labeled OpenCell subsets.
  • Two channel-mismatch handling strategies were tested and did not prevent effective transfer.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pretrained embeddings could be reused for other cell-imaging tasks that share similar visual structures but lack large labeled sets.
  • Scaling the pretraining corpus to more microscopy modalities might further reduce the labeled data needed for new protein localization problems.
  • Combining the zero-shot embeddings with lightweight adapters rather than full fine-tuning could preserve performance while lowering compute cost.

Load-bearing premise

That embeddings from SSL pretraining on HPA FOV or ImageNet-1k capture features robust to differences in staining protocols and channel configurations when transferred to OpenCell.

What would settle it

A new test set with substantially different staining or channel layout where the pretrained models drop below the performance of a model trained from scratch on OpenCell data would disprove the transfer claim.

read the original abstract

Background: Task-specific microscopy datasets are often small, making it difficult to train deep learning models that learn robust features. While self-supervised learning (SSL) has shown promise through pretraining on large, domain-specific datasets, generalizability across datasets with differing staining protocols and channel configurations remains underexplored. We investigated the generalizability of SSL models pretrained on ImageNet-1k and HPA FOV, evaluating their embeddings on OpenCell with and without fine-tuning, two channel-mismatch strategies, and varying fine-tuning data fractions. We additionally analyzed single-cell embeddings on a labeled OpenCell subset. Result: DINO-based ViT backbones pretrained on HPA FOV or ImageNet-1k transfer well to OpenCell even without fine-tuning. The HPA FOV-pretrained model achieved the highest zero-shot performance (macro $F_1$ 0.822 $\pm$ 0.007). Fine-tuning further improved performance to 0.860 $\pm$ 0.013. At the single-cell level, the HPA single-cell-pretrained model achieved the highest k-nearest neighbor performance across all neighborhood sizes (macro $F_1$ $\geq$ 0.796). Conclusion: SSL methods like DINO, pretrained on large domain-relevant datasets, enable effective use of deep learning features for fine-tuning on small, task-specific microscopy datasets.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 3 minor

Summary. The paper evaluates the transferability of DINO-pretrained Vision Transformer embeddings from ImageNet-1k and HPA FOV datasets to the OpenCell protein localization task. It reports strong zero-shot performance (highest macro F1 of 0.822 ± 0.007 for HPA-pretrained model), further gains after fine-tuning (to 0.860 ± 0.013), benefits from two channel-mismatch handling strategies, and competitive single-cell kNN classification (macro F1 ≥ 0.796) on a labeled OpenCell subset, concluding that SSL pretraining on large domain-relevant data enables effective use on small task-specific microscopy datasets.

Significance. If the empirical results hold, the work provides concrete evidence that SSL methods like DINO can produce features robust enough for cross-dataset transfer in fluorescence microscopy despite differences in staining and channels. This directly addresses the common constraint of small labeled microscopy datasets and could reduce reliance on task-specific supervised pretraining. The inclusion of error bars, varying fine-tuning fractions, and single-cell analysis adds empirical rigor to the generalizability claim.

major comments (1)
  1. [Results] Results (zero-shot and fine-tuning paragraphs): The reported macro F1 improvements and standard deviations are load-bearing for the transferability claim, yet the manuscript does not specify the number of independent runs, the precise train/validation/test splits on OpenCell, or the exact implementation of the two channel-mismatch strategies. Without these, it is difficult to judge whether the ±0.007 and ±0.013 intervals reflect true variability or are sensitive to particular splits.
minor comments (3)
  1. [Abstract] Abstract and Results: Consistently label the single-cell kNN metric as macro F1 and report the exact values rather than only the lower bound (≥ 0.796) to allow direct comparison with the image-level results.
  2. [Methods] Methods: Provide a brief description or reference for how the ViT backbone is adapted for the varying number of input channels across HPA, ImageNet, and OpenCell; this is essential for reproducibility given the channel-mismatch focus.
  3. [Results] Figure or Table (if present): Ensure any performance tables include baseline comparisons (e.g., randomly initialized ViT or supervised ImageNet pretraining) so the added value of SSL is quantified.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the positive assessment of our work and the recommendation for minor revision. We address the single major comment below and will update the manuscript accordingly.

read point-by-point responses
  1. Referee: [Results] Results (zero-shot and fine-tuning paragraphs): The reported macro F1 improvements and standard deviations are load-bearing for the transferability claim, yet the manuscript does not specify the number of independent runs, the precise train/validation/test splits on OpenCell, or the exact implementation of the two channel-mismatch strategies. Without these, it is difficult to judge whether the ±0.007 and ±0.013 intervals reflect true variability or are sensitive to particular splits.

    Authors: We appreciate the referee's request for these reproducibility details. The reported means and standard deviations were computed over 5 independent runs using different random seeds for data shuffling, model initialization, and fine-tuning. The OpenCell dataset was partitioned into a fixed 70/15/15 train/validation/test split, stratified by the 17 localization classes to preserve class balance. The two channel-mismatch strategies were implemented as follows: (1) channel averaging, in which the embeddings produced by the available channels are averaged before the classification head; and (2) zero-padding, in which missing channels are replaced by zero-valued images of the same spatial size before being passed through the pretrained backbone. We will add these specifications to the Methods and Results sections (including a new paragraph on experimental protocol) in the revised manuscript. revision: yes

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper reports an empirical evaluation of zero-shot and fine-tuned transfer of DINO-pretrained ViT embeddings from HPA FOV and ImageNet-1k pretraining onto the held-out OpenCell dataset, with explicit metrics (macro F1 0.822 ± 0.007 zero-shot; 0.860 ± 0.013 fine-tuned), channel-mismatch strategies, and single-cell kNN results. All load-bearing claims rest on direct numerical performance on an external test set rather than any derivation, fitted parameter renamed as prediction, or self-citation chain that reduces the result to its own inputs by construction. The evaluation protocol is independent of the pretraining procedure and provides falsifiable external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The work rests on standard transfer learning assumptions in computer vision applied to biological imaging; no free parameters or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption Self-supervised pretraining on large domain-relevant or general image datasets produces embeddings that generalize to downstream protein localization tasks despite variations in staining protocols and channel configurations.
    Invoked in the background and results sections as the basis for evaluating transfer to OpenCell.

pith-pipeline@v0.9.0 · 5552 in / 1309 out tokens · 46842 ms · 2026-05-10T15:52:32.849607+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

46 extracted references · 43 canonical work pages · 5 internal anchors

  1. [1]

    High-throughput single- cell sequencing in cancer research

    Jia Q, Chu H, Jin Z, Long H, Zhu B. High-throughput single- cell sequencing in cancer research. Signal Transduction and Tar- geted Therapy. 2022 May;7(1):1–20. Publisher: Nature Publishing G roup. 24 https://doi.org/10.1038/s41392-022-00990-4

  2. [2]

    Future medical applications of single-cell sequen cing in cancer

    Navin N, Hicks J. Future medical applications of single-cell sequen cing in cancer. Genome Medicine. 2011 May;3(5):31. https://doi.org/10.1186/gm247

  3. [3]

    Single-cell chromatin accessibility reveals principles of regulato ry variation

    Buenrostro JD, Wu B, Litzenburger UM, Ruff D, Gonzales ML, Sny der MP, et al. Single-cell chromatin accessibility reveals principles of regulato ry variation. Nature. 2015 Jul;523(7561):486–490. https://doi.org/10.1038/nature14590

  4. [4]

    Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoa ssay Based on Inductively Coupled Plasma Time-of-Flight Mass Spectrometry

    Bandura DR, Baranov VI, Ornatsky OI, Antonov A, Kinach R, Lo u X, et al. Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoa ssay Based on Inductively Coupled Plasma Time-of-Flight Mass Spectrometry. A nalytical Chemistry. 2009 Aug;81(16):6813–6822. Publisher: American Chem ical Society. https://doi.org/10.1021/ac901049w

  5. [5]

    Single-cell RNA-seq reveals the immune escape and drug resistance mechanisms of mantle cell lymphoma

    Wang L, Mo S, Li X, He Y, Yang J. Single-cell RNA-seq reveals the immune escape and drug resistance mechanisms of mantle cell lymphoma. Cancer Biology and Medicine. 2020;17(3):726–739. https://doi.org/10.20892/j.issn.2095-3941.2020.0073

  6. [6]

    Best practices for single-cell analysis across modalities

    Heumos L, Schaar AC, Lance C, Litinetskaya A, Drost F, Zappia L , et al. Best practices for single-cell analysis across modalities. Nature Re views Genetics. 2023 Aug;24(8):550–572. Publisher: Nature Publishing Gr oup. https://doi.org/10.1038/s41576-023-00586-w

  7. [7]

    High- Content Screening for Quantitative Cell Biology

    Mattiazzi Usaj M, Styles EB, Verster AJ, Friesen H, Boone C, An drews BJ. High- Content Screening for Quantitative Cell Biology. Trends in Cell Biolog y. 2016 Aug;26(8):598–611. https://doi.org/10.1016/j.tcb.2016.03.008

  8. [8]

    Machin e learning and computer vision approaches for phenotypic profiling

    Grys BT, Lo DS, Sahin N, Kraus OZ, Morris Q, Boone C, et al. Machin e learning and computer vision approaches for phenotypic profiling. Journal of Cell Biology. 2016 Dec;216(1):65–71. https://doi.org/10.1083/jcb.201610026

  9. [9]

    Machine learning and image-ba sed profil- ing in drug discovery

    Scheeder C, Heigwer F, Boutros M. Machine learning and image-ba sed profil- ing in drug discovery. Current Opinion in Systems Biology. 2018 Aug;10 :43–52. https://doi.org/10.1016/j.coisb.2018.05.004

  10. [10]

    Pooled multicolour tagging for visualizing subcellular protein dynamics

    Reicher A, Reiniˇ s J, Ciobanu M, R ˚ uˇ ziˇ cka P, Malik M, Siklos M, e t al. Pooled multicolour tagging for visualizing subcellular protein dynamics. Nature Cell Biology. 2024 May;26(5):745–756. Publisher: Nature Publishing G roup. https://doi.org/10.1038/s41556-024-01407-w

  11. [11]

    Phenotypic drug discovery: recent successes, lessons learned and new direc tions

    Vincent F, Nueda A, Lee J, Schenone M, Prunotto M, Mercola M. Phenotypic drug discovery: recent successes, lessons learned and new direc tions. Nature Reviews Drug Discovery. 2022 Dec;21(12):899–914. Publisher: Nat ure Publishing Group. https://doi.org/10.1038/s41573-022-00472-w . 25

  12. [12]

    Harnessing the power of micro scopy images to accelerate drug discovery: what are the possibilities? Exp ert Opinion on Drug Discovery

    Boyd J, Fennell M, Carpenter A. Harnessing the power of micro scopy images to accelerate drug discovery: what are the possibilities? Exp ert Opinion on Drug Discovery. 2020 Jun;15(6):639–642. Publisher: Tay - lor & Francis eprint: https://doi.org/10.1080/17460441.2020.1743675. https://doi.org/10.1080/17460441.2020.1743675

  13. [13]

    Multiplex Cytological Profiling Assay to Measure Diverse Cellular S tates

    Gustafsdottir SM, Ljosa V, Sokolnicki KL, Wilson JA, Walpita D, K emp MM, et al. Multiplex Cytological Profiling Assay to Measure Diverse Cellular S tates. PLOS ONE. 2013 Dec;8(12):e80999. Publisher: Public Library of Scien ce. https://doi.org/10.1371/journal.pone.0080999

  14. [14]

    Cell Painting, a high-content image-based assay for morphological pro filing using multiplexed fluorescent dyes

    Bray MA, Singh S, Han H, Davis CT, Borgeson B, Hartland C, et al. Cell Painting, a high-content image-based assay for morphological pro filing using multiplexed fluorescent dyes. Nature protocols. 2016 Sep;11(9):1 757–1774. https://doi.org/10.1038/nprot.2016.105

  15. [15]

    Optimizing the Cell Painting assay for image-based profilin g

    Cimini BA, Chandrasekaran SN, Kost-Alimova M, Miller L, Goodale A , Fritch- man B, et al. Optimizing the Cell Painting assay for image-based profilin g. Nature Protocols. 2023 Jul;18(7):1981–2013. Publisher: Nature P ublishing Group. https://doi.org/10.1038/s41596-023-00840-9

  16. [16]

    Chandrasekaran SN, Ackerman J, Alix E, Ando DM, Arevalo J, Be nnion M, et al.: JUMP Cell Painting dataset: morphological impact of 136,000 ch emical and genetic perturbations. bioRxiv. Pages: 2023.03.23.534023 Section : New Results. Available from: https://www.biorxiv.org/content/10.1101/2023.03.23.534023v2

  17. [17]

    Thul, Lovisa Åkesson, Mikaela Wiking, Diana Mahdessian, Aikaterini Geladaki, Hammou Ait Blal, Tove Alm, Anna Asplund, Lars Björk, Lisa M

    Thul PJ, ˚ Akesson L, Wiking M, Mahdessian D, Geladaki A, Ait Blal H, et al. A subcellular map of the human proteome. Science. 2017 May;356(6340):eaal3321. Publisher: American Association for the A dvancement of Science. https://doi.org/10.1126/science.aal3321

  18. [18]

    OpenCell: Endogenous tagging for the cartography of human cellula r organiza- tion

    Cho NH, Cheveralls KC, Brunner AD, Kim K, Michaelis AC, Raghavan P, et al. OpenCell: Endogenous tagging for the cartography of human cellula r organiza- tion. Science. 2022 Mar;375(6585):eabi6983. Publisher: American A ssociation for the Advancement of Science. https://doi.org/10.1126/science.abi6983

  19. [19]

    CellProfiler: image analysis software for identifying and quantifying cell phenotypes

    Carpenter AE, Jones TR, Lamprecht MR, Clarke C, Kang IH, Fr iman O, et al. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biology. 2006 Oct;7(10):R100 . https://doi.org/10.1186/gb-2006-7-10-r100

  20. [20]

    Deep learning

    LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015 May;52 1(7553):436–

  21. [21]

    LeCun, Y

    Publisher: Nature Publishing Group. https://doi.org/10.1038/nature14539

  22. [22]

    Single-cell classification, analysis, and its applica- tion using deep learning techniques

    Premkumar R, Srinivasan A, Harini Devi KG, M D, E G, Jad- hav P, et al. Single-cell classification, analysis, and its applica- tion using deep learning techniques. BioSystems. 2024 Mar;237:1051 42. 26 https://doi.org/10.1016/j.biosystems.2024.105142

  23. [23]

    Self-supervision advances morphological profiling by unlocking powe rful image representations

    Kim V, Adaloglou N, Osterland M, Morelli FM, Halawa M, K¨ onig T, et a l. Self-supervision advances morphological profiling by unlocking powe rful image representations. Scientific Reports. 2025 Feb;15(1):4876. Publis her: Nature Publishing Group. https://doi.org/10.1038/s41598-025-88825-4

  24. [24]

    Doron M, Moutakanni T, Chen ZS, Moshkov N, Caron M, Touvro n H, et al.: Unbiased single-cell morphology with self-supervised vision tra nsform- ers. bioRxiv. Pages: 2023.06.16.545359 Section: New Results. Availa ble from: https://www.biorxiv.org/content/10.1101/2023.06.16.545359v1

  25. [25]

    Caron M, Touvron H, Misra I, J´ egou H, Mairal J, Bojanowski P , et al.: Emerging Properties in Self-Supervised Vision Transformers. arXiv. ArXiv:21 04.14294 [cs]. Available from: http://arxiv.org/abs/2104.14294

  26. [26]

    Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Un terthiner T, et al.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv. ArXiv:2010.11929 [cs]. Available from: http://arxiv.org/abs/2010.11929

  27. [27]

    Available from: https://arxiv.org/abs/2406.05308

    Yao H, Hanslovsky P, Huetter JC, Hoeckendorf B, Richmond D.: Weakly Super- vised Set-Consistency Learning Improves Morphological Profiling o f Single-Cell Images. Available from: https://arxiv.org/abs/2406.05308

  28. [28]

    Bao Y, Sivanandan S, Karaletsos T.: Channel Vision Transforme rs: An Image Is Worth 1 x 16 x 16 Words. arXiv. ArXiv:2309.16108 [cs]. Available from : http://arxiv.org/abs/2309.16108

  29. [29]

    Pham C, Caicedo JC, Plummer BA.: ChA-MAEViT: Unifying Channel-A ware Masked Autoencoders and Multi-Channel Vision Transformers for Improved Cross-Channel Learning. arXiv. ArXiv:2503.19331 [cs]. Available fro m: http://arxiv.org/abs/2503.19331

  30. [30]

    Scaling Channel-Adaptive Self-Supervised Learning

    Lorenci A VD, Yi SE, Moutakanni T, Bojanowski P, Couprie C, Caicedo JC, et al. Scaling Channel-Adaptive Self-Supervised Learning. Transactions on Machine Learning Research. 2025 Mar

  31. [31]

    Agrawal V, Peters J, Thompson TN, Sanian MV, Pham C, Moshko v N, et al.: CHAMMI-75: pre-training multi-channel models with hetero ge- neous microscopy images. arXiv. ArXiv:2512.20833 [cs]. Available from : http://arxiv.org/abs/2512.20833

  32. [32]

    Available from: https://arxiv.org/abs/2602.05527

    Isselmann B, G¨ oksu D, Weinmann A.: Generalization of Self-Supe rvised Vision Transformers for Protein Localization Across Microscopy Domains . Available from: https://arxiv.org/abs/2602.05527. 27

  33. [33]

    Berg, and Li Fei-Fei

    Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et a l.: ImageNet Large Scale Visual Recognition Challenge. Available from: https://arxiv.org/abs/1409.0575

  34. [34]

    Analysis of the Human Protein Atlas Image Classification competition

    Ouyang W, Winsnes CF, Hjelmare M, Cesnik AJ, ˚ Akesson L, Xu H, et al. Analysis of the Human Protein Atlas Image Classification competition. Nature Methods. 2019 Dec;16(12):1254–1261. Publisher: Nature Publishin g Group.https://doi.org/10.1038/s41592-019-0658-6

  35. [35]

    Analysis of the Human Protein Atlas Weakly Supervised Single-Ce ll Clas- sification competition

    Le T, Winsnes CF, Axelsson U, Xu H, Mohanakrishnan Kaimal J, Ma hdessian D, et al. Analysis of the Human Protein Atlas Weakly Supervised Single-Ce ll Clas- sification competition. Nature Methods. 2022 Oct;19(10):1221–12 29. Publisher: Nature Publishing Group. https://doi.org/10.1038/s41592-022-01606-z

  36. [36]

    Automating Morphological Profiling with Generic Deep Convo- lutional Networks

    Pawlowski N, Caicedo JC, Singh S, Carpenter AE, Storkey A. Automating Morphological Profiling with Generic Deep Convo- lutional Networks. bioRxiv. 2016; https://doi.org/10.1101/085118. https://www.biorxiv.org/content/early/2016/11/02/085118.full.pdf

  37. [37]

    Ava ilable from: https://arxiv.org/abs/2310.19224

    Chen Z, Pham C, Wang S, Doron M, Moshkov N, Plummer BA, et al.: C HAMMI: A benchmark for channel-adaptive models in microscopy imaging. Ava ilable from: https://arxiv.org/abs/2310.19224

  38. [38]

    Decoupled Weight Decay Regularization

    Loshchilov I, Hutter F.: Decoupled Weight Decay Regularization. Available from: https://arxiv.org/abs/1711.05101

  39. [39]

    Speech and Language Processing: An I ntro- duction to Natural Language Processing, Computational Linguist ics, and Speech Recognition

    Jurafsky D, Martin JH. Speech and Language Processing: An I ntro- duction to Natural Language Processing, Computational Linguist ics, and Speech Recognition. 3rd ed. Pearson; 2025. Online draft available a t https://web.stanford.edu/∼ jurafsky/slp3/

  40. [40]

    Oquab M, Darcet T, Moutakanni T, Vo H, Szafraniec M, Khalidov V, et al.: DINOv2: Learning Robust Visual Features without Supervisio n. arXiv. ArXiv:2304.07193 [cs]. Available from: http://arxiv.org/abs/2304.07193

  41. [41]

    Sim´ eoni O, Vo HV, Seitzer M, Baldassarre F, Oquab M, Jose C, e t al.: DINOv3. arXiv. ArXiv:2508.10104 [cs]. Available from: http://arxiv.org/abs/2508.10104

  42. [42]

    He K, Chen X, Xie S, Li Y, Doll´ ar P, Girshick R.: Masked Autoencod ers Are Scalable Vision Learners. arXiv. ArXiv:2111.06377 [cs]. Available fr om: http://arxiv.org/abs/2111.06377

  43. [43]

    Chen T, Kornblith S, Norouzi M, Hinton G.: A Simple Framework for Contrastive Learning of Visual Representations. arXiv. ArXiv:2002.05709 [cs]. A vailable from: http://arxiv.org/abs/2002.05709. 28

  44. [44]

    Cellpose: a genera list algorithm for cellular segmentation

    Stringer C, Wang T, Michaelos M, Pachitariu M. Cellpose: a genera list algorithm for cellular segmentation. Nature Methods. 2021 Jan;18(1):100–1 06. Publisher: Nature Publishing Group. https://doi.org/10.1038/s41592-020-01018-x

  45. [45]

    Cellpose3: one-click image restoration for improved cel- lular segmentation

    Stringer C, Pachitariu M. Cellpose3: one-click image restoration for improved cel- lular segmentation. Nature Methods. 2025 Mar;22(3):592–599. Pu blisher: Nature Publishing Group. https://doi.org/10.1038/s41592-025-02595-5

  46. [46]

    Deep gene rative model for protein subcellular localization prediction

    Yuan GH, Li J, Yang Z, Chen YQ, Yuan Z, Chen T, et al. Deep gene rative model for protein subcellular localization prediction. Briefings in Bioinformat ics. 2025 Mar;26(2):bbaf152. https://doi.org/10.1093/bib/bbaf152. 29