
4.3.2 Algorithm Performance

To visualize the temporal performance of the algorithms, the average error from ten consecutive optimization runs is plotted against the number of ROHR model function evaluations [1] (Figure 4.5 a). After large initial errors, followed by similar performance characteristics at approximately 7’500 function evaluations, the four algorithms investigated vary in both temporal performance and final error (the optimizations are stopped after a specified CPU time). Whereas the GA and GADS algorithms show a linear performance improvement, however with different slopes, the EA and CMA-ES feature exponential and stepwise improvements, respectively. While the EA and GADS final optimization errors differ by nearly 40% (f_Error,EA = 11’200 vs. f_Error,GADS = 15’600), the only significant deviations found in the global ROHR characteristics are at ϕ_50 and ϕ_90 (Figure 4.5 b).

Analyzing the four calibrated models using both the Pearson’s correlation coefficients [2] and the linear regression slopes [3] yields similar results for the GA, EA and CMA-ES algorithms (Figure 4.6 a). The correlation coefficients for the ϕ_SOC, ϕ_10 and ϕ_50 characteristics determined using these three algorithms are of the order of 0.9, while the ϕ_90 correlation coefficients are slightly worse (0.75 to 0.8). The correlation coefficients for the GADS algorithm drop almost linearly from 0.9 (ϕ_SOC) to 0.55 (ϕ_90).

[1] n_fEval (function evaluations) = n_oc (operating conditions) × n_fCalls,Algorithm (function calls by the algorithm)
[2] Pearson’s correlation coefficient r: a measure of how well a linear equation describes the relation between two variables x and y; defined as the covariance of x and y divided by the product of their standard deviations, r = cov(x, y) / (σ_x · σ_y), with −1 ≤ r ≤ 1; r = 1 denotes perfect linear correlation.
[3] Linear regression slope m: slope of the best-fit line through x and y determined with the method of least squares; y = m·x + b (cf. Figure 4.6 b).

Fig. 4.5 Comparison of the 4 Algorithms Used for the ROHR Model: a) Performance Plot (Error Measurement-Simulation [-] vs. Function Evaluations [-]; GA, EA, Matlab GADS-Toolbox, CMA-ES), b) “1-to-1” Scatter Plot Best vs. Worst Algorithm (ϕ Simulation vs. ϕ Measurement [°CA aTDC]; EA (best) and GADS (worst) at start of combustion and 10%, 50%, 90% energy release)

All four algorithms investigated have linear regression slopes m of approximately unity for the ϕ_SOC and ϕ_10 characteristics, whereas the values for the ϕ_50 and ϕ_90 characteristics are significantly lower than 1, specifically for both the CMA-ES and GADS algorithms. As an example, Figure 4.6 b compares the ϕ_50 data obtained from the GA and GADS calibrated ROHR models, visualizing the effects measured by the linear regression slope m. A slope m lower than unity indicates a reduced sensitivity of the simulation output, i.e. low measurement values are over-predicted by the simulation while higher values are under-predicted.

Fig. 4.6 Comparative Algorithm Study Statistics: a) Pearson’s Correlation Coefficient and Linear Regression Slope (r_corr and m_reg for ϕ_SOC, ϕ_10, ϕ_50 and ϕ_90; GA, EA, CMA-ES, GADS), b) ϕ_50 “1-to-1” Plot (ϕ_50 Simulation vs. ϕ_50 Measurements [°CA aTDC]; EA and GADS with regression slopes m_EA and m_GADS)
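As a concrete illustration of the two statistics defined in footnotes [2] and [3], the short Python sketch below computes r and m for a pair of measured and simulated ϕ_50 series. The numerical values are hypothetical placeholders, not data from Figure 4.6; only the calculations follow the definitions given above.

```python
# Minimal sketch (not thesis code): Pearson's correlation coefficient r and
# least-squares regression slope m between measured and simulated combustion
# characteristics, as evaluated in Figure 4.6. The phi_50 values are
# hypothetical placeholders.
import numpy as np

phi50_meas = np.array([358.0, 362.5, 367.0, 371.5, 376.0, 381.0, 386.5])  # [°CA aTDC]
phi50_sim  = np.array([359.5, 363.0, 366.5, 370.0, 374.5, 378.5, 383.0])  # [°CA aTDC]

# Pearson's r = cov(x, y) / (sigma_x * sigma_y); r = 1 means perfect linear correlation
r = np.corrcoef(phi50_meas, phi50_sim)[0, 1]

# Least-squares best-fit line y = m*x + b through the (measurement, simulation) pairs
m, b = np.polyfit(phi50_meas, phi50_sim, deg=1)

print(f"r = {r:.3f}, m = {m:.3f}")
# m < 1 corresponds to reduced simulation sensitivity: low measured values are
# over-predicted, high measured values are under-predicted.
```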

4.3.3 Stochastic Initialization Evolution

In order to determine the influence of a stochastic initialization on the performance of evolutionary algorithms, 25 consecutive ROHR model calibrations are performed using the EA algorithm. As shown in Figure 4.7 a, the initial variations caused by the stochastic initialization decrease with the number of function evaluations (Δf_Error at initialization: 24’600; after 50’000 function evaluations: 1’700). Furthermore, neither the optimization case with the best nor the one with the worst stochastic parameter initialization remains the best or worst case at the end of the optimization. Thus, although there is a significant influence on the initial phase of the optimization, the stochastic manipulations used during the evolutionary process (i.e. recombination and mutation) have a larger impact on the optimization outcome.

To illustrate the influence of the stochastic initialization on the individual model parameters, Figure 4.7 b shows the development of the combustion induced turbulence scaling factor c_Comb for the 25 consecutive optimization runs. Whereas the initial values are randomly distributed, the solutions tend to approach the best overall value with an increasing number of function evaluations, similar to the decrease of the performance value variation.
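The repeated-calibration experiment described above can be emulated with any stochastic optimizer. The following Python sketch uses a toy (1+1)-evolution strategy on a placeholder objective, not the ROHR model or the EA employed in this work, to show how the spread of the best error value across 25 differently seeded runs would be tracked from initialization to the end of the optimization.

```python
# Minimal sketch (not thesis code): quantify the influence of the stochastic
# parameter initialization by repeating a stochastic optimization with
# different random seeds and tracking the spread of the best error value.
# The quadratic test objective stands in for the ROHR calibration error f_Error.
import numpy as np

def f_error(x):
    # Placeholder objective; the thesis evaluates the ROHR model at all
    # operating conditions instead.
    return float(np.sum((x - 0.7) ** 2))

def simple_es(seed, n_eval=2000, dim=4, sigma=0.3):
    """Toy (1+1)-ES; returns the best error after each function evaluation."""
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(0.0, 1.0, dim)      # stochastic initialization
    e_best = f_error(x_best)
    history = [e_best]
    for _ in range(n_eval - 1):
        x_new = x_best + sigma * rng.standard_normal(dim)  # mutation
        e_new = f_error(x_new)
        if e_new < e_best:
            x_best, e_best = x_new, e_new
        history.append(e_best)
    return np.array(history)

runs = np.array([simple_es(seed) for seed in range(25)])   # 25 repeated runs
spread_start = runs[:, 0].max() - runs[:, 0].min()         # spread at initialization
spread_end = runs[:, -1].max() - runs[:, -1].min()         # spread at the end
print(f"spread at initialization: {spread_start:.3f}, "
      f"after {runs.shape[1]} evaluations: {spread_end:.3f}")
```

Comparing spread_start and spread_end mirrors the Δf_Error comparison reported for Figure 4.7 a; storing the parameter vectors of each run would likewise reproduce the c_Comb convergence picture of Figure 4.7 b.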