A multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management

dc.authoridCoskun, Serdar/0000-0002-7080-0340
dc.authoridYAZAR, OZAN/0000-0002-4593-0178
dc.contributor.authorCoskun, Serdar
dc.contributor.authorYazar, Ozan
dc.contributor.authorZhang, Fengqi
dc.contributor.authorLi, Lin
dc.contributor.authorHuang, Cong
dc.contributor.authorKarimi, Hamid Reza
dc.date.accessioned2025-03-17T12:27:21Z
dc.date.available2025-03-17T12:27:21Z
dc.date.issued2024
dc.departmentTarsus Üniversitesi
dc.description.abstractConnected and autonomous vehicles have offered unprecedented opportunities to improve the fuel economy and reduce the emissions of hybrid electric vehicles (HEVs) in vehicular platoons. In this context, a hierarchical control strategy is put forward for connected HEVs. Firstly, we consider a deep deterministic policy gradient (DDPG) algorithm at the upper level to compute the optimized vehicle speed using a trained optimal policy via vehicle-to-vehicle communication. A multi-objective reward function is introduced, integrating vehicle fuel consumption, battery state-of-charge, emissions, and car-following objectives. Secondly, an adaptive equivalent consumption minimization strategy is devised to implement vehicle-level torque allocation in the platoon. Two drive cycles, the HWFET cycle and a human-in-the-loop simulator driving cycle, are utilized for realistic testing of the considered platoon energy management. It is shown that DDPG runs the engine more efficiently than the widely implemented Q-learning and deep Q-network approaches, thus yielding enhanced fuel savings. Further, this paper contributes to accelerating the application of deep learning algorithms to higher-level vehicular control in connected and automated HEV platoon energy management.
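The abstract describes a multi-objective reward that combines fuel consumption, battery state-of-charge, emissions, and car-following performance for the upper-level DDPG speed planner. The snippet below is a minimal sketch of one plausible weighted-sum form of such a reward; the function name, weights, signal names, and target values are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical weighted-sum reward for an upper-level DDPG speed planner.
# All weights and signal names are illustrative assumptions.

def platoon_reward(fuel_rate, soc, emission_rate, gap_error,
                   w_fuel=1.0, w_soc=0.5, w_emis=0.3, w_gap=0.8,
                   soc_target=0.6):
    """Scalar reward combining the four objectives named in the abstract:
    fuel consumption, battery state-of-charge, emissions, and car-following
    (inter-vehicle gap tracking)."""
    r_fuel = -w_fuel * fuel_rate                 # penalize instantaneous fuel use
    r_soc = -w_soc * (soc - soc_target) ** 2     # keep SOC near a reference level
    r_emis = -w_emis * emission_rate             # penalize tailpipe emissions
    r_gap = -w_gap * gap_error ** 2              # track the desired following gap
    return r_fuel + r_soc + r_emis + r_gap


if __name__ == "__main__":
    # Example evaluation with made-up signal values.
    print(platoon_reward(fuel_rate=0.8, soc=0.55, emission_rate=0.2, gap_error=1.5))
```

In such a weighted-sum formulation, the relative weights trade off fuel economy, charge sustainability, emissions, and platoon spacing; the paper's actual reward structure and tuning should be consulted for the specific design.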
dc.description.sponsorshipScientific and Technological Research Council of Turkiye [121E260]; Italian Ministry of University and Research [P2022EXP2W]
dc.description.sponsorshipThis study is supported in part by the Scientific and Technological Research Council of Turkiye with Project No. 121E260 under the grant name CAREER and in part by the Italian Ministry of University and Research under grant 'Learning-based Model Predictive Control by Exploration and Exploitation in Uncertain Environments' (PRIN PNRR 2022 fund, ID P2022EXP2W).
dc.identifier.doi10.1016/j.conengprac.2024.106104
dc.identifier.issn0967-0661
dc.identifier.issn1873-6939
dc.identifier.scopus2-s2.0-85204689826
dc.identifier.scopusqualityQ1
dc.identifier.urihttps://doi.org/10.1016/j.conengprac.2024.106104
dc.identifier.urihttps://hdl.handle.net/20.500.13099/2206
dc.identifier.volume153
dc.identifier.wosWOS:001325038200001
dc.identifier.wosqualityQ1
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherPergamon-Elsevier Science Ltd
dc.relation.ispartofControl Engineering Practice
dc.relation.publicationcategoryArticle - International Refereed Journal - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WOS_20250316
dc.subjectConnected and automated vehicles
dc.subjectDeep learning
dc.subjectDeep reinforcement learning
dc.subjectHybrid electric vehicles
dc.subjectEnergy management
dc.titleA multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management
dc.typeArticle
