Energy optimization of wind turbines via a neural control policy based on reinforcement learning Markov chain Monte Carlo algorithm

Date

2023

Publisher

Elsevier Sci Ltd

Access Rights

info:eu-repo/semantics/openAccess

Abstract

This study focuses on the numerical analysis and optimal control of vertical-axis wind turbines (VAWTs) using Bayesian reinforcement learning (RL). We specifically address small-scale wind turbines, which are well suited to local, compact production of electrical energy, such as in urban and rural infrastructure installations. The existing literature concentrates on large-scale wind turbines, which operate in unobstructed, mostly constant wind profiles; urban installations, by contrast, must generally cope with rapidly changing wind patterns. To bridge this gap, we formulate and implement an RL strategy based on the Markov chain Monte Carlo (MCMC) algorithm to optimize the long-term energy output of a wind turbine. The MCMC-based RL algorithm is model-free and gradient-free, so the designer does not need to know the precise dynamics of the plant or its uncertainties. Our method addresses these uncertainties through a multiplicative reward structure, in contrast with the additive rewards used in conventional RL approaches. We show numerically that the method overcomes shortcomings typically associated with conventional solutions, including, but not limited to, component aging, modeling errors, and inaccuracies in the estimation of wind-speed patterns. Our results show that the proposed method is especially successful in capturing power from wind transients: it modulates the generator load, and hence the rotor torque load, so that the rotor tip speed quickly reaches the optimum value for the anticipated wind speed. This ratio of rotor tip speed to wind speed (the tip-speed ratio) is known to be critical in wind power applications. The wind-to-load energy efficiency of the proposed method is shown to be superior to two other methods: the classical maximum power point tracking method and a generator controlled by the deep deterministic policy gradient (DDPG) method.
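As a concrete illustration of the kind of approach the abstract describes, the sketch below sets up a model-free, gradient-free Metropolis-Hastings (MCMC) search over the weights of a small neural control policy that maps wind and rotor speed to a generator load torque, scoring each candidate policy with a multiplicative (per-step product, accumulated as a log-sum) reward on the energy sent to the load. The single-mass rotor surrogate, the toy power-coefficient curve, and all constants and function names (cp, policy, episode_return, perturb) are illustrative assumptions made for this sketch, not the implementation described in the paper.

# Minimal sketch of a gradient-free Metropolis-Hastings search over neural policy
# weights, assuming a simplified single-mass VAWT rotor surrogate. All constants,
# the reward shaping, and the network size below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed surrogate constants: air density, swept area, rotor radius,
# rotor inertia, and integration time step.
RHO, AREA, RADIUS, INERTIA, DT = 1.2, 1.0, 0.5, 0.1, 0.05
LAMBDA_OPT = 2.5  # assumed optimal tip-speed ratio

def cp(tsr):
    # Toy power-coefficient curve peaking at LAMBDA_OPT.
    return np.clip(0.4 - 0.08 * (tsr - LAMBDA_OPT) ** 2, 0.0, None)

def policy(weights, state):
    # Tiny one-hidden-layer network mapping [wind speed, rotor speed] to a load torque.
    w1, b1, w2, b2 = weights
    h = np.tanh(state @ w1 + b1)
    return float(np.clip(h @ w2 + b2, 0.0, 5.0))  # non-negative generator torque

def episode_return(weights, wind_series):
    # Multiplicative reward: product of per-step extracted-energy factors,
    # accumulated as a sum of logarithms for numerical stability.
    omega, log_reward = 1.0, 0.0
    for v in wind_series:
        tau_load = policy(weights, np.array([v, omega]))
        tsr = omega * RADIUS / max(v, 1e-3)
        tau_aero = 0.5 * RHO * AREA * cp(tsr) * v ** 3 / max(omega, 1e-3)
        omega = max(omega + DT * (tau_aero - tau_load) / INERTIA, 1e-3)
        log_reward += np.log(1e-6 + tau_load * omega * DT)  # energy sent to the load
    return log_reward

def init_weights(n_hidden=8):
    return [0.1 * rng.standard_normal((2, n_hidden)), np.zeros(n_hidden),
            0.1 * rng.standard_normal(n_hidden), 0.0]

def perturb(weights, scale=0.05):
    # Random-walk proposal on the policy parameters.
    return [w + scale * rng.standard_normal(np.shape(w)) for w in weights]

# Synthetic gusty wind profile (illustrative).
wind = 6.0 + 2.0 * np.sin(np.linspace(0.0, 20.0, 400)) + 0.5 * rng.standard_normal(400)

theta = init_weights()
score = episode_return(theta, wind)
for step in range(500):
    cand = perturb(theta)
    cand_score = episode_return(cand, wind)
    # Metropolis acceptance on the log-reward (inverse temperature 5.0, assumed).
    if np.log(rng.uniform()) < 5.0 * (cand_score - score):
        theta, score = cand, cand_score
print("final log-return:", score)

The log-product return rewards policies that extract energy consistently at every step, which is one way to read the paper's contrast between multiplicative and additive rewards; the specific form used here is an assumption for illustration only.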

Keywords

Reinforcement Learning (RL), Wind Turbine Energy Maximization, Neural Control, Markov Chain Monte Carlo (MCMC), Bayesian Learning, Deep Deterministic Policy Gradient (DDPG)

Source

Applied Energy

WoS Q Value

N/A

Scopus Q Value

Q1

Volume

341
