A Tutorial on Bayesian Optimization
22 Pith papers cite this work.
abstract
Bayesian optimization is an approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. It is best-suited for optimization over continuous domains of less than 20 dimensions, and tolerates stochastic noise in function evaluations. It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample. In this tutorial, we describe how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient. We then discuss more advanced techniques, including running multiple function evaluations in parallel, multi-fidelity and multi-information source optimization, expensive-to-evaluate constraints, random environmental conditions, multi-task Bayesian optimization, and the inclusion of derivative information. We conclude with a discussion of Bayesian optimization software and future research directions in the field. Within our tutorial material we provide a generalization of expected improvement to noisy evaluations, beyond the noise-free setting where it is more commonly applied. This generalization is justified by a formal decision-theoretic argument, standing in contrast to previous ad hoc modifications.
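The loop the abstract describes (fit a Gaussian process surrogate, score candidates with an acquisition function such as expected improvement, evaluate the objective at the best-scoring point, repeat) fits in a few dozen lines. The sketch below is a minimal illustration, not the tutorial's own code: the squared-exponential kernel, its fixed hyperparameters, the 1-D toy objective, and the grid search over candidates are all assumptions made for brevity.

```python
# Minimal Bayesian optimization sketch: GP regression surrogate plus
# expected improvement. Kernel, hyperparameters, objective, and grid
# search are illustrative assumptions, not the tutorial's code.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    d = A.reshape(-1, 1) - B.reshape(1, -1)
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xstar, noise=1e-6):
    # GP regression: posterior mean and std dev at Xstar. The small
    # noise term is jitter for numerical stability; the tutorial also
    # treats genuinely noisy evaluations.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf_kernel(X, Xstar)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xstar, Xstar)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f(x) - best, 0)] under the posterior.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x  # toy 1-D objective (maximize)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=3)             # small initial design
y = f(X)
grid = np.linspace(-1.0, 2.0, 500)             # candidate points

for _ in range(10):                            # the BO loop
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print("best x:", X[np.argmax(y)], "best f:", y.max())
```

In a real setting the grid search would be replaced by a continuous optimizer of the acquisition function, and the kernel hyperparameters would be fit by maximum likelihood rather than fixed, as the tutorial discusses.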
citing papers explorer
- Elicitation-Augmented Bayesian Optimization
  A cost-aware value-of-information acquisition function is derived to balance direct observations against noisy pairwise human comparisons in Bayesian optimization, approaching the convex hull of the individual information sources' performance trajectories.
- Bayesian Optimization with Structured Measurements: A Vector-Valued RKHS Framework
  Proposes a vector-valued RKHS framework for Bayesian optimization with structured measurements, deriving concentration bounds and UCB-based regret guarantees that recover sublinear rates.
- Learning myopic mixed-integer nonlinear model predictive control from expert demonstrations
  A myopic MINMPC framework learns a value function offline via inverse optimization from expert data, allowing short horizons with near-optimal performance and strict integer feasibility online for hybrid systems.
- Categorical Optimization with Bayesian Anchored Latent Trust Regions for Structural Design under High-Dimensional Uncertainty
  COBALT performs direct discrete optimization over high-dimensional categorical structural designs by anchoring latent embeddings as graphs and applying trust-region acquisition on additive Gaussian process surrogates fitted to Monte Carlo finite-element data.
- An Efficient Spatial Branch-and-Bound Algorithm for Global Optimization of Gaussian Process Posterior Mean Functions
  PALM-Mean combines sign-aware piecewise-linear relaxations of locally important kernel terms with closed-form analytic bounds on the rest inside a reduced-space branch-and-bound framework, yielding valid lower bounds and ε-global convergence for GP posterior mean optimization.
- Collaborative Contextual Bayesian Optimization
  CCBO enables collaborative contextual Bayesian optimization across clients with sublinear regret guarantees and shows substantial gains over non-collaborative methods in simulations and a hot rolling application even under heterogeneity.
- Self-Supervised Laplace Approximation for Bayesian Uncertainty Quantification
  SSLA approximates the posterior predictive distribution by refitting Bayesian models on self-predicted data, providing a sampling-free method that improves predictive calibration over classical Laplace approximations in regression tasks.
- Ensemble Distributionally Robust Bayesian Optimisation
  A tractable ensemble distributionally robust Bayesian optimization method achieves improved sublinear regret bounds under context uncertainty.
- Bayesian Algorithm for Collaborative Optimization with Application to Aircraft Design
  BACO replaces direct black-box calls in collaborative optimization with Gaussian process surrogates at both subsystem and system levels, achieving lower objectives and near-zero constraint violations on MDO benchmarks and a CRM wing problem within limited evaluations.
- Inducing Permutation Invariant Priors in Bayesian Optimization for Carbon Capture and Storage Applications
  A novel permutation-invariant GP kernel using set divergence is introduced for Bayesian optimization in CCS well placement and tested on synthetic benchmarks plus one real formation case.
- HASOD: A Hybrid Adaptive Screening-Optimization Design for High-Dimensional Industrial Experiments
  HASOD is a hybrid adaptive framework that unifies factor screening via a new CWESS statistic and response optimization using Gaussian processes, achieving 97% detection accuracy in simulations with asymptotic consistency guarantees.
- On the Tradeoffs of On-Device Generative Models in Federated Predictive Maintenance Systems
  Experiments on real industrial time series show that partial model sharing improves diffusion model performance in bandwidth-limited non-IID settings, while full sharing stabilizes GAN training but offers less robustness than VAE or DDPM alternatives.
- ORTHOBO: Orthogonal Bayesian Hyperparameter Optimization
  OrthoBO introduces an orthogonal acquisition estimator subtracting an optimally weighted score-function control variate to reduce Monte Carlo variance, preserve the acquisition target, and improve ranking stability in Bayesian hyperparameter optimization (see the control-variate sketch after this list).
- Harnessing a 256-qubit Neutral Atom Simulator for Graph Classification
  A 256-qubit neutral atom simulator computes Quantum Evolution Kernels for graph classification on the PROTEINS dataset, achieving slightly better performance than classical kernels.
- Caliper-in-the-Loop: Black-Box Optimization for Hyperledger Fabric Performance Tuning
  Bayesian optimization with dimensionality reduction improves Hyperledger Fabric throughput by up to 12% in a 317-dimensional configuration space via an automated Caliper benchmarking loop.
- AgentOpt v0.1 Technical Report: Client-Side Optimization for LLM-Based Agent
  AgentOpt introduces a framework-agnostic package that uses algorithms like UCB-E to find cost-effective model assignments in multi-step LLM agent pipelines, cutting evaluation budgets by 62-76% while maintaining near-optimal accuracy on benchmarks.
- Physics-informed automated surface reconstruction via low-energy electron diffraction based on Bayesian optimization
  A trust-region Bayesian optimization framework integrates LEED multiple scattering models to jointly optimize structural and experimental parameters for automated surface reconstruction.
- Closed-Loop CO2 Storage Control With History-Based Reinforcement Learning and Latent Model-Based Adaptation
  History-conditioned RL policies recover nearly all privileged-state performance with deployable well data, and latent model-based retuning outperforms direct model-free retuning under abnormal reservoir conditions.
- Enhancing Model Based Derivative Free Optimization using Direct Search
  A hybrid switching approach integrates Direct Search into model-based derivative-free optimization, with a convergence proof for single-objective cases and empirical gains on ML tasks and CUTEr benchmarks.
- BayMOTH: Bayesian optiMizatiOn with meTa-lookahead -- a simple approacH
  BayMOTH unifies meta-Bayesian optimization with a usefulness-based fallback to lookahead, demonstrating competitive results on function optimization tasks even under low task relatedness.
- Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
  A comprehensive survey of PEFT algorithms for large models, covering their performance, overhead, applications, and real-world system implementations.
- Efficient and Principled Scientific Discovery through Bayesian Optimization: A Tutorial
  Bayesian optimization automates the scientific discovery cycle by modeling observations with surrogate models and using acquisition functions to select experiments that balance known information with new exploration.
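The ORTHOBO entry above describes a classical variance-reduction device: subtract from a Monte Carlo acquisition estimate a zero-mean control variate g, weighted by the optimal coefficient Cov(h, g)/Var(g). The sketch below illustrates only that generic idea under an assumed Gaussian sampling density and a toy integrand h; it is not OrthoBO's actual estimator.

```python
# Generic score-function control variate for Monte Carlo estimation.
# The Gaussian sampling density, the toy integrand h, and the sample
# sizes are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
mu_q, sigma_q = 0.5, 1.0           # assumed sampling distribution N(mu_q, sigma_q^2)
h = lambda x: np.maximum(x, 0.0)   # toy improvement-like integrand

def plain_mc(n=64):
    # Plain Monte Carlo estimate of E[h(X)], X ~ N(mu_q, sigma_q^2).
    x = rng.normal(mu_q, sigma_q, n)
    return h(x).mean()

def control_variate_mc(n=64):
    # Same target: subtract a weighted score-function term with E[g(X)] = 0,
    # so the estimator is unbiased for any fixed weight beta (estimating
    # beta from the same draws adds only a small, vanishing bias).
    x = rng.normal(mu_q, sigma_q, n)
    hx = h(x)
    g = (x - mu_q) / sigma_q**2            # score of N(mu_q, sigma_q^2)
    beta = np.cov(hx, g)[0, 1] / g.var()   # sample estimate of Cov(h,g)/Var(g)
    return hx.mean() - beta * g.mean()

trials = np.array([[plain_mc(), control_variate_mc()] for _ in range(2000)])
print("plain MC std:        ", trials[:, 0].std())
print("control-variate std: ", trials[:, 1].std())
```

Because h(x) is strongly correlated with the score here, the weighted subtraction cancels much of the sampling noise while leaving the estimated expectation unchanged.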