Bayesian Optimization Algorithm (BOA)

From GM-RKB

A Bayesian Optimization Algorithm (BOA) is a global sequential model-based optimization algorithm for black-box function optimization that places a prior distribution over the objective function and uses Bayesian posterior updates to select the most informative points to sample next.



References

2024

  • LLM

ALGORITHM BayesianOptimization:

   INPUT:
       - objective_function: black-box function to optimize
       - domain: search space for input parameters
       - max_iterations: maximum number of iterations
       - acquisition_function: function that scores candidate sampling points
       - initial_samples: number of initial random samples
   OUTPUT:
       - best_parameters: parameters that yield the best observed value
       - best_value: best observed value of the objective function
   // Initialize
   1. Generate initial_samples random points in the domain
   2. Evaluate objective_function at these points
   3. Initialize surrogate_model with these points and their evaluations
   4. best_value = best evaluation so far
   5. best_parameters = parameters corresponding to best_value
   // Main optimization loop
   FOR iteration = 1 TO max_iterations:
       // Update surrogate model
       6. Update surrogate_model with all available data points
       // Determine next sampling point
       7. next_point = ARGMAX(acquisition_function(surrogate_model, domain))
       // Evaluate objective function at the new point
       8. new_value = objective_function(next_point)
       // Update best solution if necessary
       9. IF new_value > best_value:
           10. best_value = new_value
           11. best_parameters = next_point
       // Add new data point to the dataset
       12. Add (next_point, new_value) to dataset
   RETURN best_parameters, best_value

FUNCTION acquisition_function(surrogate_model, domain):

   // Balances exploration (high surrogate uncertainty) and exploitation (high surrogate mean)
   // Common choices: Expected Improvement, Upper Confidence Bound, Probability of Improvement
   // Returns a score surface over the domain; its maximizer is the next sampling point

END ALGORITHM
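The pseudocode above can be sketched as a runnable Python program. This is a minimal sketch, not a production implementation: it assumes a 1-D interval domain, stands in for a Gaussian-process surrogate with a deliberately simple kernel-weighted mean plus distance-based uncertainty, and uses an Upper Confidence Bound acquisition maximized over random candidates. The function names, the kernel bandwidth, and the `kappa` trade-off parameter are illustrative choices, not part of the source pseudocode.

```python
import math
import random

def bayesian_optimization(objective_function, domain, max_iterations=20,
                          initial_samples=5, kappa=2.0, seed=0):
    """Sketch of the pseudocode above (maximization on a 1-D interval).

    The surrogate is a toy stand-in for a Gaussian process: a
    kernel-weighted mean with uncertainty proportional to the
    distance to the nearest evaluated point.
    """
    rng = random.Random(seed)
    low, high = domain

    # Steps 1-2: initial random design and its evaluations
    xs = [rng.uniform(low, high) for _ in range(initial_samples)]
    ys = [objective_function(x) for x in xs]

    # Steps 4-5: incumbent best observation
    best_value = max(ys)
    best_parameters = xs[ys.index(best_value)]

    def surrogate(x):
        # Kernel-weighted mean of observed values (exploitation signal)
        weights = [math.exp(-(x - xi) ** 2 / 0.1) for xi in xs]
        total = sum(weights)
        mean = (sum(w * y for w, y in zip(weights, ys)) / total) if total > 1e-12 else 0.0
        # Uncertainty grows with distance to the nearest sample (exploration signal)
        sigma = min(abs(x - xi) for xi in xs)
        return mean, sigma

    def acquisition(x):
        # Upper Confidence Bound: posterior mean + kappa * uncertainty
        mean, sigma = surrogate(x)
        return mean + kappa * sigma

    for _ in range(max_iterations):
        # Step 7: maximize the acquisition over random candidate points
        candidates = [rng.uniform(low, high) for _ in range(200)]
        next_point = max(candidates, key=acquisition)
        # Step 8: one evaluation of the expensive objective
        new_value = objective_function(next_point)
        # Steps 9-11: update the incumbent if improved
        if new_value > best_value:
            best_value, best_parameters = new_value, next_point
        # Step 12: grow the dataset (the surrogate reads xs/ys directly,
        # which plays the role of steps 3 and 6)
        xs.append(next_point)
        ys.append(new_value)

    return best_parameters, best_value
```

For example, `bayesian_optimization(lambda x: -(x - 2.0) ** 2, (0.0, 5.0), max_iterations=30)` typically returns a point near the maximizer x = 2 while spending far fewer objective evaluations than a dense grid search would.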


2019

  • (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Bayesian_optimization#Strategy Retrieved:2019-9-12.
    • Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it.

      The prior captures beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.
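The acquisition function mentioned above can be made concrete with Expected Improvement, which has a closed form when the posterior at a candidate point is Gaussian with mean μ(x) and standard deviation σ(x): EI(x) = (μ(x) − f* − ξ)Φ(Z) + σ(x)φ(Z), with Z = (μ(x) − f* − ξ)/σ(x), where f* is the incumbent best value, Φ and φ are the standard normal CDF and PDF, and ξ is a commonly used exploration margin (the ξ term is a common convention, not taken from the quoted text). A pure-Python sketch:

```python
import math

def normal_pdf(z):
    # Standard normal density phi(z)
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    # Standard normal CDF Phi(z) via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best_value, xi=0.01):
    """Closed-form Expected Improvement for maximization.

    mu, sigma: surrogate posterior mean and standard deviation at a
    candidate point; best_value: incumbent f*; xi: exploration margin.
    """
    if sigma <= 0.0:
        # No posterior uncertainty: improvement is deterministic
        return max(mu - best_value - xi, 0.0)
    improvement = mu - best_value - xi
    z = improvement / sigma
    return improvement * normal_cdf(z) + sigma * normal_pdf(z)
```

EI is near zero where the posterior mean is far below the incumbent and uncertainty is small, and it increases with σ(x), which is exactly the exploration-exploitation balance the acquisition function is meant to encode.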

2012

  • Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In Proceedings of NIPS.

2010