CT Open: An Open-Access, Uncontaminated, Live Platform for the Open Challenge of Clinical Trial Outcome Prediction
Pith reviewed 2026-05-10 08:00 UTC · model grok-4.3
The pith
CT Open is a live platform for predicting clinical trial outcomes before those outcomes become public.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CT Open is an open-access platform that runs four clinical-trial-outcome-prediction challenges each year. Any method and any data source may be used because a fully automated pipeline repeatedly queries the web with large language models to identify the earliest public mention of each trial's result; human experts validated the pipeline's accuracy on sampled cases. The platform therefore guarantees that every evaluated prediction was made before the outcome was publicly known, and it releases an initial training set plus two time-stamped test benchmarks for Winter 2025 and Summer 2025.
What carries the argument
The decontamination pipeline, which performs iterative LLM-powered web searches to locate the earliest public mention of each trial outcome and thereby prevents leakage into the evaluation set.
If this is right
- Participants may employ any prediction technique without restrictions on data sources or model types.
- Four fresh challenges will be issued each year, each using trials whose outcomes are still private at submission time.
- The same decontamination method can be reused to create additional uncontaminated benchmarks beyond the initial Winter and Summer 2025 sets.
- Successful models on the platform could directly inform improvements in trial design and patient recruitment strategies.
Where Pith is reading between the lines
- The same search-based decontamination idea could be applied to other forecasting domains where information timing is critical, such as economic indicators or policy outcomes.
- If the platform grows, it may reduce the common problem of models inadvertently training on leaked future data.
- The approach highlights the value of maintaining live, time-stamped leaderboards that automatically update once ground truth appears.
- Wider adoption might encourage clinical registries to release results faster, since the pipeline already surfaces the true first-public dates.
Load-bearing premise
The iterative LLM web-search process will locate the absolute earliest public mention of every trial outcome across all possible sources.
What would settle it
An independent expert review that finds a publicly available outcome report dated earlier than the pipeline's identified date for any trial in the test benchmarks.
Original abstract
Scientists have long sought to accurately predict outcomes of real-world events before they happen. Can AI systems do so more reliably? We study this question through clinical trial outcome prediction, a high-stakes open challenge even for domain experts. We introduce CT Open, an open-access, live platform that will run four challenge every year. Anyone can submit predictions for each challenge. CT Open evaluates those submissions on trials whose outcomes were not yet public at the time of submission but were made public afterwards. Determining if a trial's outcome is public on the internet before a certain date is surprisingly difficult. Outcomes posted on official registries may lag behind by years, while the first mention may appear in obscure articles. To address this, we propose a novel, fully automated decontamination pipeline that uses iterative LLM-powered web search to identify the earliest mention of trial outcomes. We validate the pipeline's quality and accuracy by human expert's annotations. Since CT Open's pipeline ensures that every evaluated trial had no publicly reported outcome when the prediction was made, it allows participants to use any methodology and any data source. In this paper, we release a training set and two time-stamped test benchmarks, Winter 2025 and Summer 2025. We believe CT Open can serve as a central hub for advancing AI research on forecasting real-world outcomes before they occur, while also informing biomedical research and improving clinical trial design. CT Open Platform is hosted at https://ct-open.net/.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces CT Open, an open-access live platform that will host four annual challenges for predicting clinical trial outcomes. It proposes a novel, fully automated decontamination pipeline that employs iterative LLM-powered web searches to identify the earliest public mentions of trial outcomes, thereby ensuring that evaluated trials have no prior public results at the time of prediction. The pipeline is validated through human expert annotations, and the paper releases a training set plus two time-stamped test benchmarks (Winter 2025 and Summer 2025).
Significance. If the decontamination pipeline can be shown to reliably exclude contaminated trials, CT Open would offer a valuable, reusable benchmark resource for AI forecasting of real-world events that permits unrestricted methods and data sources. The open, recurring challenge format and public platform could accelerate progress in temporal prediction while providing secondary benefits to biomedical research and trial design. The release of a training set and two test benchmarks is a concrete strength that supports immediate community use.
major comments (1)
- The section describing the decontamination pipeline states that it is validated 'by human expert's annotations' but provides no quantitative details on sample size, precision or recall for earliest-date recovery, false-negative rate, or inter-annotator agreement. Because the central claim of guaranteed uncontaminated evaluation rests on correctly identifying that no public outcome mention existed before each challenge cutoff, the lack of these metrics leaves the pipeline's reliability unquantified and directly weakens the benchmark integrity guarantee.
minor comments (2)
- Abstract: 'four challenge every year' is grammatically incorrect and should read 'four challenges every year.'
- Abstract: 'human expert's annotations' should be 'human experts' annotations' to reflect plural possessive form.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback and recognition of CT Open's potential value as a benchmark resource. We address the single major comment below and will incorporate the requested details in the revised manuscript.
Point-by-point responses
Referee: The section describing the decontamination pipeline states that it is validated 'by human expert's annotations' but provides no quantitative details on sample size, precision or recall for earliest-date recovery, false-negative rate, or inter-annotator agreement. Because the central claim of guaranteed uncontaminated evaluation rests on correctly identifying that no public outcome mention existed before each challenge cutoff, the lack of these metrics leaves the pipeline's reliability unquantified and directly weakens the benchmark integrity guarantee.
Authors: We agree that the current description lacks the quantitative validation metrics needed to fully substantiate the decontamination pipeline's reliability. In the revised manuscript, we will expand the relevant section to report the sample size of trials reviewed by human experts, precision and recall for earliest public mention date recovery, false-negative rate for identifying pre-cutoff outcome mentions, and inter-annotator agreement statistics. These additions will directly address the concern and strengthen the evidence supporting the benchmark's uncontaminated evaluation guarantee.
Revision: yes
Circularity Check
No circularity: platform and pipeline are self-contained contributions
Full rationale
The paper introduces a live challenge platform and an automated decontamination pipeline for identifying earliest public mentions of clinical trial outcomes. No mathematical derivations, equations, parameter fitting, or predictions are present that could reduce to inputs by construction. Validation relies on separate human expert annotations rather than self-referential steps. The central claim (uncontaminated benchmarks) rests on the described search process and external validation, not on any self-definition, fitted-input renaming, or self-citation load-bearing argument. This is a systems and data-resource paper with no derivation chain to inspect for circularity.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Iterative LLM-powered web search combined with human annotation can identify the earliest public mention of trial outcomes with high reliability.
Forward citations
Cited by 1 Pith paper
- DeepImagine: Learning Biomedical Reasoning via Successive Counterfactual Imagining. DeepImagine trains LLMs on counterfactual pairs from clinical trials using supervised fine-tuning and reinforcement learning to improve outcome prediction by approximating causal mechanisms.