A multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management
dc.authorid | Coskun, Serdar/0000-0002-7080-0340 | |
dc.authorid | YAZAR, OZAN/0000-0002-4593-0178 | |
dc.contributor.author | Coskun, Serdar | |
dc.contributor.author | Yazar, Ozan | |
dc.contributor.author | Zhang, Fengqi | |
dc.contributor.author | Li, Lin | |
dc.contributor.author | Huang, Cong | |
dc.contributor.author | Karimi, Hamid Reza | |
dc.date.accessioned | 2025-03-17T12:27:21Z | |
dc.date.available | 2025-03-17T12:27:21Z | |
dc.date.issued | 2024 | |
dc.department | Tarsus Üniversitesi | |
dc.description.abstract | Connected and autonomous vehicles offer unprecedented opportunities to improve the fuel economy and reduce the emissions of hybrid electric vehicles (HEVs) in vehicular platoons. In this context, a hierarchical control strategy is put forward for connected HEVs. Firstly, a deep deterministic policy gradient (DDPG) algorithm is employed in the upper level to compute the optimized vehicle speed using a trained optimal policy via vehicle-to-vehicle communication. A multi-objective reward function is introduced, integrating vehicle fuel consumption, battery state of charge, emissions, and car-following objectives. Secondly, an adaptive equivalent consumption minimization strategy is devised to implement vehicle-level torque allocation in the platoon. Two driving cycles, HWFET and a human-in-the-loop simulator driving cycle, are used for realistic testing of the considered platoon energy management. It is shown that DDPG runs the engine more efficiently than the widely implemented Q-learning and deep Q-network approaches, thus yielding enhanced fuel savings. Further, this paper contributes to accelerating the application of deep learning algorithms to higher-level vehicular control in connected and automated HEV platoon energy management. | |
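The abstract's multi-objective reward, combining fuel consumption, battery state of charge, emissions, and car-following error, could be sketched as a weighted scalarization. This is a minimal illustrative sketch, not the authors' actual formulation: the function name, weights, and SOC target below are all assumptions.

```python
# Hypothetical multi-objective reward in the spirit of the abstract:
# penalize fuel use, SOC deviation from a target, emissions, and
# car-following gap error. All weights are illustrative, not from the paper.
def platoon_reward(fuel_rate, soc, emissions, gap_error,
                   soc_target=0.6,
                   w_fuel=1.0, w_soc=0.5, w_em=0.3, w_gap=0.8):
    """Return a scalar reward (higher is better) from the four objectives."""
    return -(w_fuel * fuel_rate
             + w_soc * (soc - soc_target) ** 2
             + w_em * emissions
             + w_gap * gap_error ** 2)

# A state closer to all targets yields a higher (less negative) reward.
good = platoon_reward(fuel_rate=0.1, soc=0.6, emissions=0.05, gap_error=0.5)
bad = platoon_reward(fuel_rate=0.4, soc=0.3, emissions=0.20, gap_error=3.0)
```

In a DDPG setup such a scalar reward would be returned by the environment at each step of the upper-level speed-planning loop; the relative weights encode the trade-off among the four objectives.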
dc.description.sponsorship | Scientific and Technological Research Council of Turkiye [121E260]; Italian Ministry of University and Research [P2022EXP2W] | |
dc.description.sponsorship | This study is supported in part by the Scientific and Technological Research Council of Turkiye with Project No. 121E260 under the grant name CAREER and in part by the Italian Ministry of University and Research under grant 'Learning-based Model Predictive Control by Exploration and Exploitation in Uncertain Environments' (PRIN PNRR 2022 fund, ID P2022EXP2W). | |
dc.identifier.doi | 10.1016/j.conengprac.2024.106104 | |
dc.identifier.issn | 0967-0661 | |
dc.identifier.issn | 1873-6939 | |
dc.identifier.scopus | 2-s2.0-85204689826 | |
dc.identifier.scopusquality | Q1 | |
dc.identifier.uri | https://doi.org/10.1016/j.conengprac.2024.106104 | |
dc.identifier.uri | https://hdl.handle.net/20.500.13099/2206 | |
dc.identifier.volume | 153 | |
dc.identifier.wos | WOS:001325038200001 | |
dc.identifier.wosquality | Q1 | |
dc.indekslendigikaynak | Web of Science | |
dc.indekslendigikaynak | Scopus | |
dc.language.iso | en | |
dc.publisher | Pergamon-Elsevier Science Ltd | |
dc.relation.ispartof | Control Engineering Practice | |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.snmz | KA_WOS_20250316 | |
dc.subject | Connected and automated vehicles | |
dc.subject | Deep learning | |
dc.subject | Deep reinforcement learning | |
dc.subject | Hybrid electric vehicles | |
dc.subject | Energy management | |
dc.title | A multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management | |
dc.type | Article |