Utilizing machine learning algorithms in the ensemble-based optimization (EnOpt) method for enhancing gradient estimation
Master's thesis
Permanent link: https://hdl.handle.net/11250/3013944
Publication date: 2022
Collections:
- Studentoppgaver (TN-IER) [147]
Abstract
High or even prohibitive computational cost is one of the key limitations of robust optimization using the Ensemble-based Optimization (EnOpt) approach, especially when a computationally demanding forward model is involved (e.g., a reservoir simulation model). This is because EnOpt represents uncertainty with many realizations of the forward model, and many forward-model runs must be performed to estimate gradients for the optimization. This work aims to develop, investigate, and discuss an approach, named EnOpt-ML in the thesis, that utilizes machine learning (ML) methods to speed up EnOpt, particularly its gradient estimation.

The significance of any deviations is investigated on three optimization test functions chosen for their different characteristics: Himmelblau, the Bukin function N.6, and Rosenbrock. A thousand simulations are performed for each configuration setting, and the means and standard deviations of the ensembles are compared. Selected cases are shown as examples of the differences in gradient learning curves between EnOpt and EnOpt-ML and of the spread of their samples over the test function.

Objectives:
- Objective 1: Build a code with a main function that allows easy configuration and tweaking of the parameters of EnOpt, of the machine learning (ML) algorithms, and of the test function (or objective functions in general, with two variables). The code for the test functions, the ML algorithms, plotting, and saving of simulation data is defined outside of that main function. The code is attached in the Appendix.
- Objective 2: Test and analyze the results to detect any particular improvement of EnOpt-ML compared to EnOpt. Himmelblau was used as the primary test function, modifying specific parameters one at a time, starting from a base configuration case to enable comparisons.
After characterizing the effects of those configurations, an example in which the improvement looked promising was presented, then applied to the other two test functions and analyzed. The main objective has then been to reduce the number of objective-function evaluations while not considerably reducing the optimization quality. EnOpt-ML yielded slightly better results than EnOpt under the same conditions when the maximum number of objective-function evaluations was fixed through the number of samples and the iteration at which this number is reduced.
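As a minimal sketch of the ensemble gradient at the core of EnOpt, the gradient can be approximated by the cross-covariance between sampled control perturbations and their objective values, here driving a simple descent on the Himmelblau test function. The loop structure, step size, covariance, and sample count below are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

def himmelblau(x):
    """Himmelblau test function; its four global minima all have value 0."""
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def enopt_gradient(f, x, cov, n_samples, rng):
    """Ensemble gradient estimate: the cross-covariance between
    sampled control perturbations and their objective values."""
    X = rng.multivariate_normal(x, cov, size=n_samples)  # ensemble of controls
    fvals = np.array([f(xi) for xi in X])                # forward-model runs
    dX = X - X.mean(axis=0)
    df = fvals - fvals.mean()
    return dX.T @ df / (n_samples - 1)

# Illustrative steepest-descent loop (minimizing, hence the minus sign).
rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
cov = 0.05 * np.eye(2)                                   # perturbation covariance
for _ in range(300):
    g = enopt_gradient(himmelblau, x, cov, n_samples=20, rng=rng)
    x = x - 0.05 * g / (np.linalg.norm(g) + 1e-12)       # normalized step
```

Each iteration spends one true objective evaluation per ensemble sample, which is exactly the cost that motivates replacing some of those evaluations with an ML model.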
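The evaluation-saving idea can be sketched as follows: after a switch iteration, the number of true objective evaluations per iteration is reduced and the ensemble is padded with cheap predictions from a model trained on archived true evaluations. The surrogate here is a simple local quadratic regression standing in for the thesis's ML algorithms, and the schedule values (switch iteration, sample counts) are assumptions for illustration only:

```python
import numpy as np

def himmelblau(x):
    """Himmelblau test function used as the objective."""
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def quad_features(X):
    """Quadratic polynomial features [1, x, y, x^2, x*y, y^2]."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x, y, x**2, x * y, y**2])

def fit_surrogate(X, f):
    """Least-squares quadratic fit; a stand-in for the thesis's ML models."""
    coef, *_ = np.linalg.lstsq(quad_features(X), f, rcond=None)
    return lambda Xq: quad_features(Xq) @ coef

rng = np.random.default_rng(1)
x = np.array([0.0, 0.0])
cov = 0.05 * np.eye(2)
switch_iter, n_full, n_reduced = 50, 20, 4   # assumed schedule (not from thesis)
archive_X, archive_f, true_evals = [], [], 0

for it in range(300):
    n = n_full if it < switch_iter else n_reduced
    X = rng.multivariate_normal(x, cov, size=n)
    f = np.array([himmelblau(xi) for xi in X])           # true evaluations
    true_evals += n
    archive_X.extend(X)
    archive_f.extend(f)
    if it >= switch_iter:
        # Pad the ensemble with cheap surrogate predictions, trained only
        # on the most recent archived true evaluations (a local model).
        surrogate = fit_surrogate(np.array(archive_X[-100:]),
                                  np.array(archive_f[-100:]))
        Xs = rng.multivariate_normal(x, cov, size=n_full - n)
        X = np.vstack([X, Xs])
        f = np.concatenate([f, surrogate(Xs)])
    dX = X - X.mean(axis=0)
    df = f - f.mean()
    g = dX.T @ df / (len(f) - 1)                         # ensemble gradient
    x = x - 0.05 * g / (np.linalg.norm(g) + 1e-12)
```

With this schedule the run spends 50·20 + 250·4 = 2000 true objective evaluations instead of 300·20 = 6000, while the ensemble size used for the gradient stays at 20; this is one simple way to cap the evaluation budget through the number of samples and the iteration at which it is reduced.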