Show simple item record

dc.contributor.advisor  Hong, Aojie
dc.contributor.advisor  Petvipusit, Kurt
dc.contributor.author  Raji, Nidaa
dc.date.accessioned  2022-08-27T15:51:22Z
dc.date.available  2022-08-27T15:51:22Z
dc.date.issued  2022
dc.identifier  no.uis:inspera:107970678:8926733
dc.identifier.uri  https://hdl.handle.net/11250/3013944
dc.description.abstract  High or even prohibitive computational cost is one of the key limitations of robust optimization using the Ensemble-based Optimization (EnOpt) approach, especially when a computationally demanding forward model is involved (e.g., a reservoir simulation model). This is because EnOpt represents uncertainty with many realizations of the forward model, and many forward-model runs must be performed to estimate gradients for optimization. This work aims to develop, investigate, and discuss an approach, named EnOpt-ML in the thesis, that utilizes machine learning (ML) methods to speed up EnOpt, particularly its gradient estimation.

The significance of any deviations is investigated on three optimization test functions chosen for their differing characteristics: Himmelblau, Bukin No. 6, and Rosenbrock. One thousand simulations are performed for each configuration setting, and the analyses compare the means and standard deviations of the ensembles. Selected cases are shown as examples of the differences in gradient learning curves between EnOpt and EnOpt-ML, and of the spread of their samples over the test function.

Objectives:

Objective 1: Build a code base with a main function that allows easy configuration and tweaking of the parameters of EnOpt, the machine learning (ML) algorithms, and the test functions (or, more generally, any objective function of two variables). The code for the test functions, the ML algorithms, plotting, and saving of simulation data is defined outside of that main function. The code is attached in the Appendix.

Objective 2: Test and analyze the results to detect any particular improvement of EnOpt-ML over EnOpt. Himmelblau served as the primary test function, with specific parameters modified one at a time, starting from a base configuration case to enable comparisons. After characterizing the effects of those configurations, an example in which the improvement appeared promising was presented, then applied to the other two test functions and analyzed.

The main objective has thus been to reduce the number of objective-function evaluations without considerably reducing optimization quality. EnOpt-ML yielded slightly better results than EnOpt under the same conditions when fixing a maximum number of objective-function evaluations through the number of samples and the iteration at which this number is reduced.
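The abstract describes EnOpt's ensemble gradient estimate and its ML-assisted replacement only at a high level. As a minimal sketch (not the thesis code), the following assumes minimization of the Himmelblau test function named in the abstract, Gaussian perturbations of the control vector, and a local linear-regression surrogate standing in for the unspecified "ML methods"; the function names and parameter values (`sample_ensemble`, `n_samples=30`, `sigma=0.3`, the 0.05 step length) are illustrative assumptions.

```python
import numpy as np

def himmelblau(p):
    # Classic two-variable test function with four global minima where f = 0.
    x, y = p
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

def sample_ensemble(obj, x, n_samples, sigma, rng):
    # Perturb the current control vector and evaluate the objective once per
    # ensemble member -- the expensive step that EnOpt-ML tries to reduce.
    X = x + sigma * rng.standard_normal((n_samples, x.size))
    J = np.array([obj(xi) for xi in X])
    return X, J

def enopt_gradient(X, J):
    # Classic EnOpt estimate: cross-covariance between the perturbed
    # controls and their objective values.
    dX = X - X.mean(axis=0)
    dJ = J - J.mean()
    return dX.T @ dJ / (X.shape[0] - 1)

def ml_gradient(X, J):
    # ML stand-in: fit a local linear model J ~ a + g.x by least squares on
    # the same, already-evaluated samples; the fitted slope g serves as the
    # gradient estimate without any extra objective-function evaluations.
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, J, rcond=None)
    return coef[1:]

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])          # starting point; himmelblau(x) = 170 here
for _ in range(400):
    X, J = sample_ensemble(himmelblau, x, 30, 0.3, rng)
    g = ml_gradient(X, J)         # or enopt_gradient(X, J) for plain EnOpt
    x = x - 0.05 * g / (np.linalg.norm(g) + 1e-12)  # normalized descent step
```

Because `ml_gradient` reuses the ensemble samples already evaluated for the current iterate, a surrogate of this kind can estimate gradients from fewer (or recycled) objective-function evaluations, which is the cost reduction the abstract's main objective targets.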
dc.language  eng
dc.publisher  uis
dc.title  Utilizing machine learning algorithms in the ensemble-based optimization (EnOpt) method for enhancing gradient estimation
dc.type  Master thesis

