dc.contributor.advisor: Sui, Dan
dc.contributor.author: Gocmen, Emre Baris
dc.date.accessioned: 2022-09-06T15:51:24Z
dc.date.available: 2022-09-06T15:51:24Z
dc.date.issued: 2022
dc.identifier: no.uis:inspera:107970678:64565915
dc.identifier.uri: https://hdl.handle.net/11250/3016072
dc.description.abstract: Various researchers have proposed methods, algorithms, and simulators to control the bottom hole assembly (BHA) while drilling a deviated well. These works combine drilling, control, mechanical, and electrical engineering knowledge. Last year at the University of Stavanger, Jerez carried out work for this purpose, aiming to develop a physical method for controlling the bit direction along the well path inside an RSS simulator environment. The present study builds on the RSS simulator developed at the University of Stavanger and proposes a different perspective on trajectory control. After considerable effort to adapt the RSS model for reinforcement learning, a new environment for trajectory control was obtained. This new environment removes the known errors of the RSS model and makes it possible to control the path through weight on bit and rotational speed. The observation parameters were selected as the coordinates, measured depth, and drilling time; adding tool face angle and dogleg severity to the observation degraded the agents' training. Given the discrete observations and actions, two RL agent options were available in the MATLAB Reinforcement Learning Toolbox: the proximal policy optimization (PPO) agent and the deep Q-network (DQN) agent. After numerous training sessions on a J-shaped well, suitable reward functions were created to test the environment on different well shapes. The first trials, made on the J-shaped well with both RL agents offered by MATLAB, gave satisfactory results. However, simulation attempts for S-shaped and complex-shaped wells were not precise and need further development. Therefore, refinement of the RL environment and the reward function, together with optimization of the time demand, became the crucial outcomes of these attempts.
dc.language: eng
dc.publisher: uis
dc.title: Trajectory Control via Reinforcement Learning with RSS Model
dc.type: Master thesis
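
As a minimal, hypothetical sketch of the setup described in the abstract (not the thesis's actual code), the following MATLAB script defines a numeric observation specification (coordinates, measured depth, drilling time), a discrete action set of weight-on-bit and rotational-speed combinations, wraps placeholder step/reset functions in an rlFunctionEnv, and trains a default DQN agent from the Reinforcement Learning Toolbox. The function names, the action grid, and the training thresholds are illustrative assumptions, and the stub dynamics stand in for the RSS model.

% Illustrative sketch only: discrete-action trajectory-control setup
% with the MATLAB Reinforcement Learning Toolbox.

% Observation: bit coordinates (x, y, z), measured depth, drilling time.
obsInfo = rlNumericSpec([5 1]);
obsInfo.Name = 'bit state';

% Discrete actions: assumed combinations of weight on bit [kN]
% and rotational speed [rpm].
actInfo = rlFiniteSetSpec({[50 80], [50 120], [100 80], [100 120]});
actInfo.Name = 'WOB and RPM';

% Placeholder environment functions stand in for the RSS model.
env = rlFunctionEnv(obsInfo, actInfo, @stubStepFcn, @stubResetFcn);

% Default DQN agent built from the specs; rlPPOAgent(obsInfo, actInfo)
% would work the same way for this discrete action space.
agent = rlDQNAgent(obsInfo, actInfo);

trainOpts = rlTrainingOptions('MaxEpisodes', 500, ...
    'MaxStepsPerEpisode', 200, ...
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 100);

trainStats = train(agent, env, trainOpts);

% --- Placeholder environment functions (replace with the RSS model) ---
function [obs, logged] = stubResetFcn()
    % Start at the surface: x, y, z, measured depth, drilling time.
    obs = zeros(5, 1);
    logged.State = obs;
end

function [obs, reward, isDone, logged] = stubStepFcn(action, logged)
    % Toy dynamics, no real physics: advance depth and time.
    obs = logged.State;
    obs(4) = obs(4) + 0.1 * action(1) / 100;   % depth grows with WOB
    obs(5) = obs(5) + 1;                       % one drilling time step
    logged.State = obs;
    reward = -1;                               % placeholder cost per step
    isDone = obs(4) >= 10;                     % stop at a nominal depth
end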

