Model-based reinforcement learning for active flow control

Research output: Contribution to journal › Article › peer-review


Abstract

Recent advances in reinforcement learning (RL) have demonstrated potential for active flow control (AFC), driven primarily by model-free algorithms that optimize control strategy through direct interactions with computational fluid dynamics simulators. However, sample inefficiency poses significant barriers due to the high computational costs of flow simulation. Model-based reinforcement learning (MBRL) addresses this limitation by incorporating surrogate models to reduce the number of expensive environment interactions. Despite its potential, key algorithmic choices within MBRL—such as surrogate model training objectives and utilization strategies—remain insufficiently explored in AFC contexts. We adapt probabilistic ensembles of fully connected neural networks as surrogate models for predicting state transitions and rewards. This simple architecture achieves sufficient accuracy when trained with a multi-step predictive loss. We compare two representative MBRL approaches: probabilistic ensembles with trajectory sampling (PETS), which employs model predictive control, and model-based policy optimization (MBPO), which performs direct policy optimization on model-generated rollouts. We evaluate the MBRL algorithms on two challenging AFC problems: drag reduction around a cylinder using jet actuation and subsurface reservoir production maximization using well flow rate control. Both MBRL methods match the performance of a state-of-the-art model-free baseline (proximal policy optimization) while achieving 2–9× better sample efficiency and 2–7× faster convergence. Comprehensive ablation studies reveal that PETS exhibits low hyperparameter sensitivity and minimal data requirements but can converge to local optima. MBPO consistently achieves optimal solutions through superior exploration, despite requiring more hyperparameter tuning and exhibiting occasional training instability. These findings provide crucial insights for practical MBRL implementation in AFC applications.
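To illustrate the two ingredients the abstract highlights — a multi-step predictive training loss and an ensemble surrogate — the following is a minimal sketch, not the paper's implementation. The `model(state, action)` interface, the toy linear dynamics, and all function names are assumptions for illustration only.

```python
import numpy as np

def multi_step_loss(model, states, actions, horizon):
    """Average squared error of an H-step open-loop rollout.

    The surrogate is applied to its own predictions (not the recorded
    states), which penalizes compounding one-step errors -- the idea
    behind a multi-step predictive objective. Illustrative sketch only.
    """
    pred = states[0]
    loss = 0.0
    for t in range(horizon):
        pred = model(pred, actions[t])               # feed prediction back in
        loss += np.mean((pred - states[t + 1]) ** 2)
    return loss / horizon

def ensemble_predict(models, state, action):
    """Mean prediction of an ensemble of surrogates; the spread across
    members serves as a cheap uncertainty signal (in the spirit of
    PETS-style trajectory sampling)."""
    preds = np.stack([m(state, action) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy check with hypothetical linear dynamics s_{t+1} = s_t + a_t.
true_step = lambda s, a: s + a
actions = [np.ones(2) * 0.1 for _ in range(5)]
states = [np.zeros(2)]
for a in actions:
    states.append(true_step(states[-1], a))

perfect = true_step
biased = lambda s, a: s + 1.1 * a        # 10% overestimate of actuation
print(multi_step_loss(perfect, states, actions, horizon=5))  # 0.0
print(multi_step_loss(biased, states, actions, horizon=5))   # > 0
```

Because the rollout feeds predictions back into the model, even a small one-step bias grows with the horizon, which is exactly what a multi-step loss penalizes and a one-step loss cannot see.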
Original language: English
Article number: 093363
Journal: Physics of Fluids
Volume: 37
Issue number: 9
Early online date: 19 Sept 2025
Publication status: Published - Sept 2025
