TY - JOUR
T1 - Robust Optimal Well Control using an Adaptive Multigrid Reinforcement Learning Framework
AU - Dixit, Atish
AU - Elsheikh, Ahmed H.
Funding Information:
The first author would like to acknowledge the Ali Danesh scholarship to fund his Ph.D. studies at Heriot-Watt University. The authors also acknowledge EPSRC funding through the EP/V048899/1 grant.
Publisher Copyright:
© 2022, The Author(s).
PY - 2023/4
Y1 - 2023/4
AB - Reinforcement learning (RL) is a promising tool for solving robust optimal well control problems in which the model parameters are highly uncertain and the system is partially observable in practice. However, learning robust control policies with RL often requires a large number of simulations, which can become computationally intractable when each simulation is computationally intensive. To address this bottleneck, an adaptive multigrid RL framework is introduced, inspired by the principles of geometric multigrid methods used in iterative numerical algorithms. RL control policies are initially learned using computationally efficient low-fidelity simulations based on coarse-grid discretizations of the underlying partial differential equations (PDEs). The simulation fidelity is then increased adaptively towards the highest-fidelity simulation, which corresponds to the finest discretization of the model domain. The proposed framework is demonstrated using a state-of-the-art, model-free, policy-based RL algorithm, namely the proximal policy optimization algorithm. Results are shown for two robust optimal well control case studies inspired by the SPE-10 model 2 benchmark. Substantial gains in computational efficiency are observed with the proposed framework, which saves around 60-70% of the computational cost of its single fine-grid counterpart.
KW - Adaptive
KW - Multigrid framework
KW - Reinforcement learning
KW - Robust optimal control
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85141399882&partnerID=8YFLogxK
U2 - 10.1007/s11004-022-10033-x
DO - 10.1007/s11004-022-10033-x
M3 - Article
AN - SCOPUS:85141399882
SN - 1874-8961
VL - 55
SP - 345
EP - 375
JO - Mathematical Geosciences
JF - Mathematical Geosciences
IS - 3
ER -