pith. machine review for the scientific record.

arxiv: 2510.06708 · v3 · submitted 2025-10-08 · 💻 cs.SE · cs.AI


AISysRev -- LLM-based Tool for Title-abstract Screening

keywords: screening, tool, aisysrev, llms, boundary, human, easy excludes
Abstract

Conducting systematic reviews is laborious. In the screening or study selection phase, the number of papers can be overwhelming. Recent research has demonstrated that large language models (LLMs) can perform title-abstract screening and support humans in the task. To this end, we developed AISysRev, an LLM-based screening tool implemented as a containerized web application. The tool accepts CSV files containing paper titles and abstracts, and users specify inclusion and exclusion criteria. Multiple LLMs can be used, such as Gemini, Claude, Mistral, or ChatGPT via OpenRouter. We also support locally hosted models and any model compatible with the OpenAI SDK. AISysRev implements both zero-shot and few-shot prompting, and also allows for manual screening through interfaces that display LLM results as guidance for human reviewers. LLM calls are parallelized, so screening speed is typically between 100 and 300 papers per minute, depending on the model and the host. To demonstrate the tool's use in practice, we conducted a qualitative trial study with 137 papers using the tool. Our findings indicate that papers can be classified into four categories: Easy Includes, Easy Excludes, Boundary Includes, and Boundary Excludes. The Boundary cases, where LLMs are prone to errors, highlight the need for human intervention. While LLMs do not replace human judgment in systematic reviews, they can reduce the burden of assessing large volumes of scientific literature. Video: https://www.youtube.com/watch?v=HeblemlgnAQ Tool: https://github.com/EvoTestOps/AISysRev
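The workflow the abstract describes (CSV of titles and abstracts in, inclusion/exclusion criteria in a zero-shot prompt, parallelized model calls out) can be sketched roughly as follows. This is a minimal illustration, not AISysRev's actual code: the criteria, column names, and the `call_llm` stub are all assumptions, and in a real setup `call_llm` would issue a chat-completions request to an OpenAI-SDK-compatible endpoint such as OpenRouter or a locally hosted model.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

# Hypothetical review criteria; in AISysRev these are supplied by the user.
CRITERIA = {
    "include": ["applies large language models to study selection"],
    "exclude": ["not about systematic reviews"],
}

def build_prompt(title: str, abstract: str) -> str:
    """Zero-shot prompt: criteria plus one paper, asking for a verdict."""
    inc = "; ".join(CRITERIA["include"])
    exc = "; ".join(CRITERIA["exclude"])
    return (
        f"Inclusion criteria: {inc}\n"
        f"Exclusion criteria: {exc}\n\n"
        f"Title: {title}\nAbstract: {abstract}\n\n"
        "Answer with INCLUDE or EXCLUDE."
    )

def call_llm(prompt: str) -> str:
    # Stub standing in for a chat-completions call to an
    # OpenAI-SDK-compatible endpoint. Here it just keyword-matches
    # so the sketch runs offline.
    return "INCLUDE" if "screening" in prompt.lower() else "EXCLUDE"

def screen(csv_text: str, workers: int = 8) -> list[dict]:
    """Screen every row of a CSV with 'title' and 'abstract' columns,
    issuing the model calls in parallel (the source of the tool's
    100-300 papers/minute throughput)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    prompts = [build_prompt(r["title"], r["abstract"]) for r in rows]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        verdicts = list(pool.map(call_llm, prompts))
    return [{"title": r["title"], "verdict": v} for r, v in zip(rows, verdicts)]
```

With a thread pool the per-paper latency of each model call overlaps, which is what makes batch throughput depend mainly on the model and host rather than on the number of papers.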

This paper has not been read by Pith yet.


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TiAb Review Plugin: A Browser-Based Tool for AI-Assisted Title and Abstract Screening

    cs.DL · 2026-04 · accept · novelty 7.0

    A Chrome extension provides no-code, serverless AI-assisted title and abstract screening for systematic reviews by integrating LLMs and ML active learning with Google Sheets.