pith. machine review for the scientific record.

arxiv: 2601.21800 · v3 · submitted 2026-01-29 · 💻 cs.AI


BioAgent Bench: An AI Agent Evaluation Suite for Bioinformatics

classification 💻 cs.AI
keywords bioinformatics, evaluation, models, suite, under, agent, agents, artifacts
abstract

This paper introduces BioAgent Bench, a benchmark dataset and an evaluation suite designed for measuring the performance and robustness of AI agents in common bioinformatics tasks. The benchmark contains curated end-to-end tasks (e.g., RNA-seq, variant calling, metagenomics) with prompts that specify concrete output artifacts to support automated assessment, including stress testing under controlled perturbations. We evaluate frontier closed-source and open-weight models across multiple agent harnesses, and use an LLM-based grader to score pipeline progress and outcome validity. We find that frontier agents can complete multi-step bioinformatics pipelines without elaborate custom scaffolding, often producing the requested final artifacts reliably. However, robustness tests reveal failure modes under controlled perturbations (corrupted inputs, decoy files, and prompt bloat), indicating that correct high-level pipeline construction does not guarantee reliable step-level reasoning. Finally, because bioinformatics workflows may involve sensitive patient data, proprietary references, or unpublished IP, closed-source models can be unsuitable under strict privacy constraints; in such settings, open-weight models may be preferable despite lower completion rates. We release the dataset and evaluation suite publicly.
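The robustness tests the abstract describes (corrupted inputs, decoy files, prompt bloat) can be illustrated with a minimal sketch. The function and task layout below are hypothetical illustrations of the idea, not the paper's actual harness or data format.

```python
import random

def perturb_task(task: dict, mode: str, seed: int = 0) -> dict:
    """Apply one controlled perturbation to a benchmark task.

    Hypothetical sketch: `task` holds a prompt and input files as
    {filename: bytes}; none of these names come from BioAgent Bench.
    """
    rng = random.Random(seed)
    out = {"prompt": task["prompt"], "files": dict(task["files"])}
    if mode == "corrupt":
        # Flip a few distinct bytes in one input file.
        name = rng.choice(list(out["files"]))
        data = bytearray(out["files"][name])
        for i in rng.sample(range(len(data)), min(4, len(data))):
            data[i] ^= 0xFF
        out["files"][name] = bytes(data)
    elif mode == "decoy":
        # Add a plausible-looking but irrelevant file the agent must ignore.
        out["files"]["decoy_counts.tsv"] = b"gene\tcount\nFOO\t0\n"
    elif mode == "bloat":
        # Pad the prompt with irrelevant text to stress instruction-following.
        out["prompt"] += "\n" + ("Note: unrelated boilerplate. " * 50)
    return out
```

Scoring would then compare the agent's output artifacts on the perturbed task against the unperturbed baseline, which is one way to separate high-level pipeline construction from step-level reasoning.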

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Sound Agentic Science Requires Adversarial Experiments

    cs.AI 2026-04 unverdicted novelty 5.0

    Agentic science needs a falsification-first standard in which LLM agents actively search for ways a claim can fail rather than generating supporting narratives.