Deep reinforcement learning assisted co-evolutionary differential evolution for constrained optimization

dc.authorid: Gong, Wenyin/0000-0003-1610-6865
dc.authorwosid: Gong, Wenyin/A-5916-2009
dc.contributor.author: Hu, Zhenzhen
dc.contributor.author: Gong, Wenyin
dc.contributor.author: Pedrycz, Witold
dc.contributor.author: Li, Yanchi
dc.date.accessioned: 2024-05-19T14:40:25Z
dc.date.available: 2024-05-19T14:40:25Z
dc.date.issued: 2023
dc.department: İstinye Üniversitesi [en_US]
dc.description.abstract: Solving constrained optimization problems (COPs) with evolutionary algorithms (EAs) is a popular research direction due to its potential and diverse applications. One of the key issues in solving COPs is the choice of constraint handling techniques (CHTs), as different CHTs can lead to different evolutionary directions. Combining EAs with deep reinforcement learning (DRL) is a promising and emerging approach for solving COPs. Although DRL can address the problem of pre-set operators in EAs, the neural network must obtain diverse training data within the limited number of evaluations available to an EA. Based on the above considerations, this work proposes a DRL-assisted co-evolutionary differential evolution, named CEDE-DRL, which can effectively use DRL to help EAs solve COPs. (1) This method incorporates co-evolution into the extraction of training data for the first time, ensuring the diversity of samples and improving the accuracy of the neural network model through information exchange between multiple populations. (2) Multiple CHTs are used for offspring selection to ensure the algorithm's generality and flexibility. (3) DRL is used to evaluate the population state, taking feasibility, convergence, and diversity into account in the state setting and using the overall degree of improvement as the reward. The neural network selects suitable parent populations and corresponding archives for mutation. Finally, (4) to avoid premature convergence and local optima, an adaptive operator selection and individual archive elimination mechanism is added. Comparisons with state-of-the-art algorithms on the CEC2010 and CEC2017 benchmark functions show that the proposed method performs competitively and produces robust solutions. Results on the CEC2020 application test set show that the proposed algorithm is also effective on real-world problems. [en_US]
dc.description.sponsorship: National Natural Science Foundation of China [62076225] [en_US]
dc.description.sponsorship: This work was partly supported by the National Natural Science Foundation of China under Grant No. 62076225. All authors approved the version of the manuscript to be published. [en_US]
dc.identifier.doi: 10.1016/j.swevo.2023.101387
dc.identifier.issn: 2210-6502
dc.identifier.issn: 2210-6510
dc.identifier.scopus: 2-s2.0-85169978491 [en_US]
dc.identifier.scopusquality: Q1 [en_US]
dc.identifier.uri: https://doi.org/10.1016/j.swevo.2023.101387
dc.identifier.uri: https://hdl.handle.net/20.500.12713/4957
dc.identifier.volume: 83 [en_US]
dc.identifier.wos: WOS:001072771600001 [en_US]
dc.identifier.wosquality: N/A [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.language.iso: en [en_US]
dc.publisher: Elsevier [en_US]
dc.relation.ispartof: Swarm and Evolutionary Computation [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.snmz: 20240519_ka [en_US]
dc.subject: Constraint Handling Technique [en_US]
dc.subject: Deep Reinforcement Learning [en_US]
dc.subject: Differential Evolution [en_US]
dc.subject: Co-Evolution [en_US]
dc.subject: Evolutionary Operator [en_US]
dc.title: Deep reinforcement learning assisted co-evolutionary differential evolution for constrained optimization [en_US]
dc.type: Article [en_US]
