SLIDE 1

An Experimental Investigation of Model-Based Parameter Optimization: SPO and Beyond

Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown, Kevin P. Murphy

Department of Computer Science, University of British Columbia, Canada
{hutter, hoos, kevinlb, murphyk}@cs.ubc.ca

SLIDE 4

Motivation for Parameter Optimization

Genetic Algorithms & Evolution Strategies are

+ Very flexible frameworks
– Tedious to configure for a new domain
  ◮ Population size
  ◮ Mating scheme
  ◮ Mutation rate
  ◮ Search operators
  ◮ Hybridizations, ...

Automated parameter optimization can help

◮ High-dimensional optimization problem
◮ Automation saves time & improves results


SLIDE 8

Parameter Optimization Methods

◮ Numerical parameters
  – See Blackbox optimization workshop (this GECCO)
  – Algorithm parameters: CALIBRA [Adenso-Diaz & Laguna, ’06]
◮ Few categorical parameters: racing algorithms
  [Birattari, Stützle, Paquete & Varrentrapp, ’02]
◮ Many categorical parameters
  – Genetic algorithms [Terashima-Marín, Ross & Valenzuela-Rendón, ’99]
  – Iterated Local Search [Hutter, Hoos, Leyton-Brown & Stützle, ’07-’09]

Dozens of parameters (e.g., CPLEX with 63 parameters)
For many problems: SAT, MIP, time-tabling, protein folding, MPE, ...


SLIDE 12

Parameter Optimization Methods

Model-free Parameter Optimization

◮ Numerical parameters: see BBOB workshop (this GECCO)
◮ Few categorical parameters: racing algorithms
  [Birattari, Stützle, Paquete & Varrentrapp, ’02]
◮ Many categorical parameters
  [e.g., Terashima-Marín, Ross & Valenzuela-Rendón, ’99; Hutter, Hoos, Leyton-Brown & Stützle, ’07-’09]

Model-based Parameter Optimization

◮ Methods
  – Fractional factorial designs [e.g., Ridge & Kudenko, ’07]
  – Sequential Parameter Optimization (SPO) [Bartz-Beielstein, Preuss & Lasarczyk, ’05-’09]
◮ Can use model for more than optimization
  – Importance of each parameter
  – Interaction between parameters

SLIDE 13

Outline

  • 1. Sequential Model-Based Optimization (SMBO): Introduction
  • 2. Comparing Two SMBO Methods: SPO vs SKO
  • 3. Components of SPO: Model Quality
  • 4. Components of SPO: Sequential Experimental Design
  • 5. Conclusions and Future Work




SLIDE 21

SMBO: Introduction

  • 1. Get response values at initial design points
  • 2. Fit a model to the data
  • 3. Use model to pick the most promising next design point (based on an expected improvement criterion)
  • 4. Repeat 2. and 3. until time is up

[Figure: the first and second steps of SMBO on a 1-D example: DACE mean prediction ± 2·stddev, true function, function evaluations, and scaled EI, plotted as response y over parameter x]
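
To make the loop concrete, here is a minimal sketch of the SMBO loop for a noise-free objective on the unit cube. It substitutes scikit-learn's Gaussian process for the DACE model used in the talk; the function names and the random candidate-sampling scheme are illustrative, not taken from SPO.

```python
# Minimal SMBO sketch (illustrative; not the authors' DACE-based code).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(model, X_cand, y_best):
    """Standard EI for minimization: E[max{0, y_best - Y(x)}] under the GP."""
    mu, sigma = model.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def smbo(f, X_init, n_iter=20, n_cand=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = [np.atleast_1d(np.asarray(x, dtype=float)) for x in X_init]
    y = [f(x) for x in X]                       # 1. initial design
    for _ in range(n_iter):
        model = GaussianProcessRegressor(kernel=Matern(nu=2.5))
        model.fit(np.array(X), np.array(y))     # 2. fit a model to the data
        X_cand = rng.uniform(0.0, 1.0, size=(n_cand, len(X[0])))
        ei = expected_improvement(model, X_cand, min(y))
        x_next = X_cand[np.argmax(ei)]          # 3. most promising point (EI)
        X.append(x_next)                        # 4. evaluate and repeat
        y.append(f(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]
```

For example, `smbo(lambda x: (x[0] - 0.3)**2, X_init=[[0.1], [0.5], [0.9]])` would tune a single parameter on a toy quadratic.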



SLIDE 24

Dealing with Noise: SKO vs SPO

◮ Method I (used in SKO) [Huang, Allen, Notz & Zeng, ’06]
  – Fit standard GP assuming Gaussian observation noise
  – Can only fit the mean of the responses

◮ Method II (used in SPO) [Bartz-Beielstein, Preuss & Lasarczyk, ’05-’09]
  – Compute statistic of empirical distribution of responses at each design point
  – Fit noise-free GP to that

[Figure: Method I: noisy GP fit of the original response. Method II: noise-free DACE fit of the cost statistic. Each panel shows mean prediction ± 2·stddev, true function, and function evaluations over parameter x vs. response y]
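
A minimal sketch contrasting the two strategies, assuming repeated noisy responses at each design point; the toy data and the use of scikit-learn GPs are stand-ins, not the SKO or SPO implementations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
thetas = np.linspace(0.0, 1.0, 25).reshape(-1, 1)   # design points
repeats = 5
# toy noisy responses: runs[i] holds `repeats` observations at design point i
runs = [np.sin(6.0 * t[0]) + rng.normal(0.0, 0.3, repeats) for t in thetas]

# Method I (SKO): give every noisy observation to a GP with a noise kernel.
X_all = np.repeat(thetas, repeats, axis=0)
y_all = np.concatenate(runs)
gp_noisy = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel()).fit(X_all, y_all)

# Method II (SPO): summarize the responses at each design point by a cost
# statistic (here the empirical mean), then fit a noise-free model to it.
c_hat = np.array([r.mean() for r in runs])
gp_stat = GaussianProcessRegressor(Matern(nu=2.5)).fit(thetas, c_hat)
```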


SLIDE 27

Experiment: SPO vs SKO for Tuning CMA-ES

◮ CMA-ES [Hansen et al., ’95-’09]
  – Evolution strategy for global optimization
  – State-of-the-art (see BBOB workshop, this GECCO)
  – Parameters: population size, number of parents, learning rate, damping parameter

◮ Tuning objective
  – Solution cost: best function value found within the budget
  – Here: Sphere function
  – Minimize mean solution cost across many runs

[Figure: mean solution cost c_k (log scale, about 10⁻⁶ to 10⁻³) vs. number of target algorithm runs k, for SKO, SPO 0.3, SPO 0.4, and SPO+]



SLIDE 31

Components of SPO: initial design

◮ Fixed number of initial design points (250) and repeats (2)

– Size of initial design studied before [Bartz-Beielstein & Preuss, ’06]

◮ Here: studied which 250 design points to use

  – Sampled uniformly at random
  – Random Latin Hypercube
  – Iterated Hypercube Sampling [Beachkofski & Grandhi, ’02]
  – SPO’s standard LHD

◮ Result: no significant difference
  – Initial design not very important
  – Using cheap random LHD from here on (sketch below)
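
A minimal sketch of a random Latin hypercube design in the unit cube, the kind of cheap initial design the study settles on (our own rendering, not SPO's code):

```python
import numpy as np

def random_lhd(n_points, n_dims, rng=np.random.default_rng(0)):
    """Each dimension is split into n_points strata; one sample per stratum,
    with strata matched across dimensions by independent random permutations."""
    strata = (np.arange(n_points)[:, None]
              + rng.uniform(size=(n_points, n_dims))) / n_points
    for d in range(n_dims):
        strata[:, d] = strata[rng.permutation(n_points), d]
    return strata

design = random_lhd(250, 4)   # e.g., 250 points for CMA-ES's 4 parameters
```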


SLIDE 36

Components of SPO: Transformations

◮ Compute empirical cost statistics ĉ(θ) first
◮ Then transform cost statistics: log(ĉ(θ))

◮ Data: solution cost of CMA-ES on sphere

  – Training: 250 · 2 data points as above
  – Test: 250 new points, sampled uniformly at random

[Figure: predicted vs. true response (log₁₀ scales) for models fit with no transformation and with a log transformation; the log-transformed model's predictions lie much closer to the true responses]

Note: In newer experiments, SKO with log models was competitive

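
A sketch of the transformation step under these assumptions: fit the noise-free model to log(ĉ(θ)) and exponentiate predictions back to the original scale. The toy cost function below is fabricated for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
thetas = rng.uniform(size=(250, 4))          # 250 training design points
# toy positive costs spanning several orders of magnitude (a stand-in for
# the CMA-ES solution costs used in the study)
c_hat = np.exp(8.0 * thetas[:, 0] - 4.0) * (1.0 + 0.1 * rng.standard_normal(250))

gp_log = GaussianProcessRegressor(Matern(nu=2.5)).fit(thetas, np.log(c_hat))
theta_test = rng.uniform(size=(250, 4))      # 250 uniform test points
mu = gp_log.predict(theta_test)
pred = np.exp(mu)                            # back-transform to original scale
```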



SLIDE 40

Components of SPO: expected improvement criterion

User wants to optimize some objective c

◮ We transform c to improve the model
◮ But that doesn’t change the user’s objective

Have to adapt the expected improvement criterion to handle the untransformed objective

Fix for log-transform: new expected improvement criterion

◮ Want to optimize I_exp(θ) = max{0, f_min − exp[f(θ)]}
◮ There is a closed-form solution (see paper; a sketch follows below)
◮ However: no significant improvement in our experiments
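
The paper gives the closed form; the following is our own rendering of the standard lognormal partial expectation that such a criterion reduces to, assuming the model predicts f(θ) ~ N(μ, σ²) for the log cost and f_min is the best untransformed cost so far.

```python
import numpy as np
from scipy.stats import norm

def ei_after_log_transform(mu, sigma, f_min):
    """E[max{0, f_min - exp(F)}] for F ~ N(mu, sigma^2):
    f_min*Phi(v) - exp(mu + sigma^2/2)*Phi(v - sigma),
    with v = (ln f_min - mu)/sigma."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)
    v = (np.log(f_min) - mu) / sigma
    return f_min * norm.cdf(v) - np.exp(mu + 0.5 * sigma**2) * norm.cdf(v - sigma)
```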


SLIDE 45

Components of SPO: choosing the incumbent parameter setting in the presence of noise

Some algorithm runs can be lucky
→ need an extra mechanism to ensure the incumbent is really good
SPO increases the number of repeats over time

SPO’s mechanism in a nutshell

◮ Compute cost statistic ĉ(θ) for each configuration θ
◮ θ_inc ← configuration with lowest ĉ(θ)
◮ Perform up to R runs for θ_inc to ensure it is good
  – Increase R over time
◮ But what if it doesn’t perform well?
  – Then a different incumbent is picked in the next iteration
  – That might also turn out not to be good...


SLIDE 49

Components of SPO: choosing the incumbent parameter setting in the presence of noise

Simple fix (see the sketch after this slide)

◮ Iteratively perform runs for the single most promising θ_new
◮ Compare against the current incumbent θ_inc
◮ Once θ_new has as many runs as θ_inc: make it the new θ_inc
◮ Maintain invariant: θ_inc has the most runs of all
◮ Substantially improves robustness → new SPO variant: SPO+

[Figure: performance p_k vs. number of algorithm runs k for SPO 0.3, SPO 0.4, and SPO+, tuning CMA-ES on Griewangk (left) and on Rastrigin (right)]
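
A sketch of the challenge mechanism under stated assumptions: `run_algorithm` is a hypothetical hook performing one noisy run, configurations are hashable, and the cost statistic is the mean; the early-abandonment rule shown is a plausible simplification, not necessarily SPO+'s exact rule.

```python
import numpy as np

def challenge_incumbent(theta_new, theta_inc, runs, run_algorithm):
    """runs: dict mapping a (hashable) configuration to its list of observed
    costs. Adds runs for theta_new until it either falls behind the incumbent
    or matches the incumbent's run count."""
    runs.setdefault(theta_new, [])
    while len(runs[theta_new]) < len(runs[theta_inc]):
        runs[theta_new].append(run_algorithm(theta_new))
        # One plausible early-abandonment rule (a simplification): stop as
        # soon as the challenger's cost statistic looks worse.
        if np.mean(runs[theta_new]) > np.mean(runs[theta_inc]):
            return theta_inc
    # theta_new matched the incumbent's run count without falling behind,
    # so promote it; the new incumbent has at least as many runs as any
    # other configuration, preserving the invariant.
    return theta_new
```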


SLIDE 51

Summary of Study of SPO components & Definition of SPO+

Model Quality

◮ Initial design not very important
  → use simple random LHD in SPO+
◮ Log-transforms sometimes improve model quality a lot
  → use them in SPO+ (for positive functions)

Sequential Experimental Design

◮ Expected improvement criterion
  – New one that’s better in theory but not in practice
  → use the original one in SPO+
◮ New mechanism for increasing #runs & selecting incumbent
  – Substantially improves robustness
  → use it in SPO+


SLIDE 55

Comparison to State of the Art for tuning SAPS

◮ SAPS

  – Stochastic local search algorithm for SAT
  – 4 continuous parameters
  – Here: minimize search steps for a single problem instance

◮ Results known for CALIBRA & ParamILS [Hutter et al., AAAI ’07]

[Figure: performance p_k vs. number of algorithm runs k (up to 3·10⁴): comparison to SPO variants (SPO 0.3, SPO 0.4, SPO+) with varying budget]

With a budget of 20,000 runs of SAPS:

Procedure        SAPS median run-time / 10³
SAPS default     85.5
CALIBRA(100)     10.7 ± 1.1
BasicILS(100)    10.9 ± 0.6
FocusedILS       10.6 ± 0.5
SPO 0.3          18.3 ± 13.7
SPO 0.4          10.4 ± 0.7
SPO+             10.0 ± 0.4


SLIDE 58

Conclusions

◮ SMBO can help design algorithms

  – More principled, saves development time
  – Can exploit the full potential of flexible algorithms

◮ Our contribution
  – Insights: what makes a popular SMBO algorithm, SPO, work
  – Improved version, SPO+, often performs better than SPO

SLIDE 60

Ongoing & Future Work

Ongoing Extensions of Model-Based Framework

◮ Use of different models in the SPO+ framework
◮ Dealing with categorical parameters
◮ Optimization for sets/distributions of instances

Use of models for scientific understanding

◮ Interactions of instance features and parameter values
◮ Can help understand and hopefully improve algorithms

SLIDE 61

Thanks to

◮ Thomas Bartz-Beielstein

– SPO implementation & CMA-ES wrapper

◮ Theodore Allen

– SKO implementation
