Open-Ended Task Discovery via Bayesian Optimization
1 paper cites this work.
Abstract
When applying Bayesian optimization (BO) to scientific workflows, a major yet often overlooked source of uncertainty is the task itself -- namely, what to optimize and how to evaluate it -- which can evolve as evidence accumulates. We introduce Generate-Select-Refine (GSR), an open-ended BO framework that alternates between task generation and task optimization. Starting from a user-provided seed task, GSR generates new tasks in a coarse-to-fine manner while a task-acquisition function schedules optimization. Asymptotically, it concentrates evaluations on the best task, incurring only logarithmic regret overhead relative to single-task BO. We apply GSR to new product development, chemical synthesis scaling, algorithm analysis, and patent repurposing, where it outperforms existing LLM-based optimizers.
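The generate/select/refine alternation described above can be sketched as a simple loop. This is a minimal, hypothetical illustration, not the paper's implementation: the `generate`, `optimize`, and `acquire` callables are assumed interfaces (task generator, one optimization step, and task-acquisition score, respectively), and the toy acquisition rule stands in for whatever schedule GSR actually uses.

```python
def gsr(seed_task, generate, optimize, acquire, rounds=10):
    """Hypothetical sketch of a Generate-Select-Refine loop.

    generate(tasks) -> list of new candidate tasks (coarse-to-fine)
    optimize(task)  -> (input, value) from one optimization step on the task
    acquire(task, history) -> score used to schedule which task to refine
    """
    tasks = [seed_task]
    history = {seed_task: []}
    for _ in range(rounds):
        # Generate: propose new tasks from the current pool.
        for t in generate(tasks):
            if t not in history:
                history[t] = []
                tasks.append(t)
        # Select: the task-acquisition function picks the next task.
        chosen = max(tasks, key=lambda t: acquire(t, history[t]))
        # Refine: spend one evaluation on the chosen task.
        history[chosen].append(optimize(chosen))
    # Asymptotically, evaluations should concentrate on the best task.
    best = max(
        tasks,
        key=lambda t: max((v for _, v in history[t]), default=float("-inf")),
    )
    return best, history
```

With a toy acquisition score that adds an exploration bonus shrinking in the number of evaluations, the loop quickly shifts its budget from the seed task to a better generated task.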
Fields: stat.ML
Years: 2026
Verdicts: UNVERDICTED
Representative citing papers:
- Regret Analysis of Guided Diffusion for Black-Box Optimization over Structured Inputs
A certificate-based regret analysis framework for guided-diffusion black-box optimization is introduced, with mass lift as the central quantity explaining convergence from pretrained generators.