pith. machine review for the scientific record.

arxiv: 2605.10260 · v1 · submitted 2026-05-11 · 💻 cs.NE

Recognition: no theorem link

Meta-Black-Box Optimization Can Do Search Guidance for Expensive Constrained Multi-Objective Optimization

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 04:12 UTC · model grok-4.3

classification 💻 cs.NE
keywords: meta-black-box optimization · search guidance · constrained multi-objective optimization · surrogate-assisted evolutionary algorithm · region abstraction · diffusion-based initialization · meta-policy

The pith

A meta-policy supplies search guidance for expensive constrained multi-objective optimization by abstracting constraints into scalar region levels.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes that meta-black-box optimization can supply search guidance rather than only controlling optimizers for expensive constrained multi-objective problems. It introduces a bi-level framework in which a meta-policy directs a low-level surrogate-assisted evolutionary algorithm using a new abstraction of feasible regions. This abstraction converts heterogeneous constraint values into ordered scalar levels that remain useful across different problems. Diffusion-based initialization then converts the meta-policy output into concrete starting populations, while an attention mechanism keeps the representation scalable as dimensions and objective counts change. Experiments indicate the resulting method beats existing baselines on multiple benchmarks and transfers to new problem distributions.

Core claim

By defining the Max-Min Constraint-Calibrated Inequality to map constraint evaluations to a single ordered scalar that abstracts feasible regions in a problem-agnostic manner, and feeding the resulting region-level signal into diffusion-based population initialization inside a bi-level meta-framework, a meta-policy can deliver effective search guidance to surrogate-assisted evolutionary algorithms on expensive constrained multi-objective problems.

What carries the argument

Max-Min Constraint-Calibrated Inequality (MM-CCI), a mapping that converts heterogeneous constraint evaluations into an ordered scalar level to create compact, problem-agnostic feasible-region abstractions.
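The review describes MM-CCI only verbally, so its exact definition is unknown; the numpy sketch below shows one plausible reading of a max-min style abstraction: per-constraint violations are normalized to comparable scales, the worst normalized violation per solution is taken, and the result is discretized into ordered levels. The function name `mm_cci_levels` and the normalization scheme are assumptions, not the paper's definition.

```python
import numpy as np

def mm_cci_levels(G, num_levels=40):
    """Hypothetical sketch of a max-min style region abstraction.

    G: (n, m) array of constraint values for n solutions and m
       constraints, with g <= 0 meaning satisfied. Each constraint's
       violation is normalized by its population-wide maximum (making
       heterogeneous scales comparable), the worst normalized violation
       per solution is taken, and the result is discretized into
       ordered integer levels: 0 = feasible, higher = deeper infeasibility.
    """
    V = np.maximum(G, 0.0)            # violation magnitudes
    scale = V.max(axis=0)             # per-constraint normalization scale
    scale[scale == 0.0] = 1.0         # constraints nobody violates
    worst = (V / scale).max(axis=1)   # max over constraints, in [0, 1]
    return np.ceil(worst * (num_levels - 1)).astype(int)
```

Feasible solutions map to level 0 and deeper violations to higher levels, giving an ordered, scale-free signal of the kind the meta-policy is said to consume.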

If this is right

  • The bi-level MetaSG-SAEA framework outperforms state-of-the-art baselines on diverse ECMOP benchmarks.
  • The learned meta-policy generalizes across different problem distributions.
  • Diffusion-based initialization successfully converts region-level meta-guidance into solution-level priors for the low-level SAEA.
  • The attention-based state representation scales the meta-policy to varying numbers of objectives, constraints, dimensions, and population sizes.
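To make the diffusion-initialization bullet concrete: the meta-policy's region decision selects evaluated solutions whose region level falls in a chosen interval, and a generative model trained on them proposes the next starting population. In the sketch below a Gaussian fit stands in for the actual diffusion model purely to keep it short; all names are hypothetical.

```python
import numpy as np

def init_population_from_region(X, levels, level_range, pop_size, rng):
    """Stand-in for diffusion-based initialization: gather evaluated
    solutions whose region level lies in level_range, fit a simple
    generative model to them (here a diagonal Gaussian instead of a
    diffusion model), and sample the next starting population."""
    lo, hi = level_range
    elite = X[(levels >= lo) & (levels <= hi)]
    if elite.size == 0:
        raise ValueError("no evaluated solutions in the selected level interval")
    mu = elite.mean(axis=0)
    sigma = elite.std(axis=0) + 1e-8   # avoid a degenerate sampler
    return rng.normal(mu, sigma, size=(pop_size, X.shape[1]))
```

The design point the sketch preserves is the division of labor: the meta-policy decides *where* (a level interval), and the generative step turns that region-level decision into solution-level priors.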

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The scalar region abstraction could be reused to add guidance to single-objective or unconstrained expensive optimization tasks with only minor changes.
  • Pre-training the meta-policy on synthetic distributions might produce reusable priors that accelerate real-world ECMOP solving.
  • The combination of diffusion models and evolutionary search could be tested on problems where evaluation noise is high rather than merely expensive.

Load-bearing premise

The Max-Min Constraint-Calibrated Inequality supplies a compact abstraction that preserves the information needed to guide search without loss.

What would settle it

Run MetaSG-SAEA and the current baselines on a fresh collection of ECMOPs whose constraint structures differ markedly from the training distribution and check whether the performance gap closes.

Figures

Figures reproduced from arXiv: 2605.10260 by Chongshuang Hu, Haiyue Yu, Haobo Liu, Jiang Jiang, Shengkun Chang, Shuaiwen Tang, Xiaotong Xie, Yukun Du.

Figure 1. Bi-level paradigm of existing MetaBBOs.
Figure 2. MetaSG-SAEA search guidance process. (left) The framework of MetaSG-SAEA. (right) The meta-policy takes objective values and computed MM-CCI levels as input to determine the search regions. The evaluated solutions within the decision region are passed to the diffusion model for training, which generates the initial population.
Figure 3. Illustration of region-level search guidance. (a) At the early stage, most solutions lie in low-λ regions, and the action a(1) provides broad guidance by selecting a wide range of levels. (b) As optimization progresses, the guidance shifts toward more diverse near-feasible/feasible solutions. (c) The action a(2) enables more focused guidance by restricting selection to a specific MM-CCI level interval.
Figure 4. Average reward during meta-policy training.
Figure 5. (left) Decision consistency heatmap across meta-policies over training. (right) MM-CCI comparison with other methods.
Figure 6. The computation graph of attention-based ELA.
Figure 7. Impact of elite solution batch size and MM-CCI segment count on optimization performance.
Figure 8. Ablation study of MetaSG-SAEA on MW problems.
Figure 9. The impact of model complexity on zero-shot performance and training reward.
Figure 10. Optimization performance of DAS-CMOP4, DAS-CMOP5, and DAS-CMOP6.
Figure 11. Comparison of decision variable heatmaps during the sampling process between DAS-CMOP4 and MW1.
Original abstract

Existing Meta-Black-Box Optimization (MetaBBO) methods focus on how to search when controlling optimizers, but largely overlook where to search. We propose MetaSG-SAEA, a bi-level MetaBBO framework for expensive constrained multi-objective optimization problems (ECMOPs), in which a meta-policy provides search guidance to the low-level Surrogate-Assisted Evolutionary Algorithm (SAEA). To achieve this, we introduce Max-Min Constraint-Calibrated Inequality (MM-CCI), a compact, problem-agnostic region abstraction that maps heterogeneous constraint evaluations to an ordered scalar level; we further provide a theoretical analysis of its fundamental properties. Building on this region abstraction, we adopt diffusion-based population initialization to translate the meta-policy's region-level guidance into solution-level priors for the SAEA. To make MetaSG-SAEA scalable, we construct an attention-based state representation across varying problem dimensions, population sizes, and numbers of objectives and constraints. Experimental results demonstrate that MetaSG-SAEA outperforms state-of-the-art baselines across diverse benchmarks and exhibits the ability to generalize across problem distributions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes MetaSG-SAEA, a bi-level Meta-Black-Box Optimization (MetaBBO) framework for expensive constrained multi-objective optimization problems (ECMOPs). A meta-policy supplies search guidance to a low-level Surrogate-Assisted Evolutionary Algorithm (SAEA) via the introduced Max-Min Constraint-Calibrated Inequality (MM-CCI) abstraction, which maps heterogeneous constraints to an ordered scalar region level; this is combined with diffusion-based population initialization to convert region guidance into solution priors and an attention-based state representation for scalability across dimensions, objectives, and constraints. The authors supply a theoretical analysis of MM-CCI properties and report that MetaSG-SAEA outperforms state-of-the-art baselines on diverse benchmarks while generalizing across problem distributions.

Significance. If the empirical claims are substantiated, the work is significant for extending meta-BBO to the 'where to search' problem in expensive constrained settings. The MM-CCI abstraction with its theoretical properties, diffusion initialization, and attention mechanism for variable problem sizes represent a coherent integration of meta-learning with surrogate-assisted constrained MO optimization. Explicit credit is due for the problem-agnostic region abstraction and the attempt at generalization testing, which could influence future work on meta-policies for engineering design tasks if train/test separation is clearly demonstrated.

major comments (2)
  1. [Abstract and experimental results] Abstract and experimental results section: The headline claim that MetaSG-SAEA 'outperforms state-of-the-art baselines across diverse benchmarks and exhibits the ability to generalize across problem distributions' lacks supporting details on experimental design, number of runs, statistical tests, baseline selection, or explicit separation of training versus held-out test distributions (e.g., whether test instances use qualitatively different constraint structures or merely parameter variations within the same benchmark families). This directly undermines confidence in the central empirical result.
  2. [MM-CCI definition and properties] Section introducing MM-CCI (theoretical analysis subsection): The claim that MM-CCI maps heterogeneous constraint evaluations to an ordered scalar without losing critical information for search guidance is load-bearing for the framework. While properties are analyzed, the manuscript should provide a concrete verification (e.g., on a multi-constraint example) showing that the ordering preserves distinctions between feasible/infeasible regions sufficiently to guide the diffusion initialization effectively.
minor comments (2)
  1. [State representation] Clarify the exact form of the attention-based state representation (e.g., how variable numbers of objectives and constraints are embedded) to improve reproducibility.
  2. [Experimental setup] Ensure all benchmark names, constraint counts, and objective dimensions are tabulated for the reported experiments.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which help us improve the clarity and rigor of the manuscript. We address each major comment point-by-point below. Where appropriate, we will revise the paper to incorporate additional details and examples as suggested.

Point-by-point responses
  1. Referee: [Abstract and experimental results] Abstract and experimental results section: The headline claim that MetaSG-SAEA 'outperforms state-of-the-art baselines across diverse benchmarks and exhibits the ability to generalize across problem distributions' lacks supporting details on experimental design, number of runs, statistical tests, baseline selection, or explicit separation of training versus held-out test distributions (e.g., whether test instances use qualitatively different constraint structures or merely parameter variations within the same benchmark families). This directly undermines confidence in the central empirical result.

    Authors: We agree that the abstract is necessarily concise and that the experimental results section would benefit from expanded details to strengthen confidence in the claims. In the revised manuscript, we will add a dedicated subsection on experimental setup that explicitly states: (i) 20 independent runs per problem instance with reported means and standard deviations; (ii) statistical significance via Wilcoxon rank-sum tests (p < 0.05) with Holm-Bonferroni correction; (iii) rationale for baseline selection (including why specific SOTA methods were chosen over others); and (iv) a clear description of the train/test split. Regarding generalization, the test set includes problems with qualitatively different constraint structures (e.g., varying numbers and types of constraints, different feasible region topologies) drawn from held-out benchmark families not used in meta-training, as opposed to mere parameter variations. We will include a new table summarizing the distribution differences between train and test sets to make this separation explicit. revision: yes
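The Holm-Bonferroni step-down the authors commit to is mechanical enough to sketch: sort the p-values (e.g., from per-problem Wilcoxon rank-sum tests) in ascending order, compare the k-th smallest against alpha / (m - k), and stop rejecting at the first failure. A minimal numpy version with a hypothetical helper name:

```python
import numpy as np

def holm_significant(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction. Returns a boolean mask
    marking which hypotheses are rejected at family-wise level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)                  # ascending p-values
    significant = np.zeros(m, dtype=bool)
    for k, idx in enumerate(order):
        if p[idx] <= alpha / (m - k):      # step-down threshold
            significant[idx] = True
        else:
            break                          # first failure stops rejection
    return significant
```

Unlike plain Bonferroni (a fixed alpha / m threshold), the step-down relaxes the threshold as hypotheses are rejected, which is why it is the usual choice for baseline-vs-baseline benchmark tables.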

  2. Referee: [MM-CCI definition and properties] Section introducing MM-CCI (theoretical analysis subsection): The claim that MM-CCI maps heterogeneous constraint evaluations to an ordered scalar without losing critical information for search guidance is load-bearing for the framework. While properties are analyzed, the manuscript should provide a concrete verification (e.g., on a multi-constraint example) showing that the ordering preserves distinctions between feasible/infeasible regions sufficiently to guide the diffusion initialization effectively.

    Authors: We acknowledge that a concrete multi-constraint example would make the MM-CCI properties more accessible and directly illustrate its utility for diffusion-based initialization. In the revision, we will add a new illustrative example (with accompanying figure) in the theoretical analysis subsection. This example will use a problem with two heterogeneous inequality constraints, show the step-by-step computation of the Max-Min CCI scalar, and demonstrate how the resulting ordered region level distinguishes feasible from infeasible areas while preserving the relative ordering needed for effective guidance. We will also explicitly link this to how the diffusion model translates the scalar into solution priors, confirming that no critical information for search guidance is lost. revision: yes

Circularity Check

0 steps flagged

No circularity; new abstractions and empirical claims are independently motivated

full rationale

The abstract and description introduce MM-CCI as a novel mapping with separate theoretical analysis of properties, diffusion initialization to operationalize guidance, and attention-based state representation for scalability. These are presented as problem-motivated constructions rather than reductions of outputs to inputs. The headline result is an empirical outperformance claim on benchmarks, not a derivation that collapses to fitted parameters or self-citations by construction. No equations or steps in the provided text exhibit self-definitional, fitted-prediction, or load-bearing self-citation patterns. The framework remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entity

Only the abstract is available, so free parameters, axioms, and invented entities cannot be audited in detail. The framework likely depends on hyperparameters in the meta-policy training, diffusion model, and attention mechanism, but none are specified. MM-CCI is introduced as a new abstraction whose properties are analyzed theoretically.

invented entities (1)
  • MM-CCI: no independent evidence
    purpose: Compact problem-agnostic region abstraction mapping heterogeneous constraints to an ordered scalar level for search guidance
    Newly proposed in the paper; no external validation or independent evidence provided in the abstract.

pith-pipeline@v0.9.0 · 5511 in / 1331 out tokens · 53899 ms · 2026-05-12T04:12:33.512926+00:00 · methodology


Reference graph

Works this paper leans on

66 extracted references · 66 canonical work pages · 1 internal anchor
