News Release

LearningEMS: A new framework for electric vehicle energy management

Peer-Reviewed Publication

Higher Education Press

The learning-based EMS framework and system design of LearningEMS.

(a) The framework consists of three layers: the EV environment layer, the learning-based algorithm layer, and the application layer. D3QN: dueling DDQN; CQL: conservative Q-learning; BCQ: batch-constrained Q-learning; SB3: Stable-Baselines3; RLlib: reinforcement learning library. (b) Training pipeline of LearningEMS: first, choose an EV environment (users can create new environments or add modules to existing ones); then, select an algorithm and dataset; finally, start training. After the policy is trained in simulation, it can be deployed directly to the controller, enabling hardware-in-the-loop (HIL) or vehicle-in-the-loop (VIL) experiments.

Credit: Yong Wang et al.

A new study published in Engineering introduces LearningEMS, a unified framework and open-source benchmark designed to revolutionize the development and assessment of energy management strategies (EMS) for electric vehicles (EVs).

The automotive industry has recently undergone a transformative shift fueled by the growing global emphasis on sustainability and environmental conservation, and EVs have become a crucial part of the future of transportation. However, effectively managing energy in EVs, especially those with complex power systems such as battery EVs, hybrid EVs, fuel cell EVs, and plug-in hybrid EVs, remains a challenge. An efficient EMS is essential for optimizing the energy efficiency of these vehicles.

LearningEMS provides a general platform that supports various EV configurations. It allows for detailed comparisons of several classes of EMS algorithms, including imitation learning, deep reinforcement learning (RL), offline RL, model predictive control, and dynamic programming. The framework comes with three distinct EV platforms, an EMS policy dataset covering over 10 000 km of driving, ten state-of-the-art algorithms, over 160 benchmark tasks, and three learning libraries.
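To make the environment-algorithm-training workflow concrete, here is a minimal, self-contained sketch in the spirit of the framework. All names (ToyEVEnv, the SOC dynamics, the reward weights) are illustrative stand-ins, not the actual LearningEMS API, and tabular Q-learning stands in for the deep RL algorithms the framework benchmarks:

```python
import random

class ToyEVEnv:
    """Hypothetical stand-in for a LearningEMS EV environment.

    State: battery state of charge (SOC), rounded to one decimal.
    Action: one of three discrete power-draw levels.
    """
    def reset(self):
        self.soc = 0.8
        return round(self.soc, 1)

    def step(self, action):
        # Higher power levels drain the battery faster.
        self.soc = max(0.0, self.soc - 0.02 * (action + 1))
        reward = -abs(self.soc - 0.5)   # prefer SOC near a target level
        done = self.soc <= 0.0
        return round(self.soc, 1), reward, done

def train(env, n_actions=3, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Select an algorithm and train: tabular Q-learning here, in place
    of the DQN/SAC-style learners the framework actually benchmarks."""
    q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:     # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: q.get((s, b), 0.0))
            s2, r, done = env.step(a)
            best = max(q.get((s2, b), 0.0) for b in range(n_actions))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best - q.get((s, a), 0.0))
            s = s2
    # Policy extraction: reduce the learned values to a plain lookup
    # table, the kind of artifact a vehicle controller could execute.
    states = {s for (s, _) in q}
    return {s: max(range(n_actions), key=lambda b: q.get((s, b), 0.0)) for s in states}

policy = train(ToyEVEnv())
```

The final lookup table mirrors, in miniature, the paper's idea of extracting a trained policy into a form deployable on a real controller.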

The researchers rigorously evaluated these algorithms from multiple perspectives, such as energy efficiency, consistency, adaptability, and practicability. For example, the benchmark results show that discrete-action algorithms such as DQN and D3QN perform well on simple EMS tasks but are less efficient when dealing with complex control parameters. On the other hand, off-policy algorithms with continuous action spaces, such as DDPG, TD3, and SAC, show great potential for optimizing energy efficiency and maintaining consistency across different driving conditions. The on-policy algorithm PPO, however, exhibits significant performance variations across vehicles and operating conditions.

The study also delves into important aspects of applying RL to EV energy management, such as the design of the state, reward, and action settings, and discusses how these choices can significantly affect the overall performance of the EMS. Additionally, the researchers introduce a policy extraction and reconstruction method for deploying learning-based EMS onto real-world vehicle controllers and conduct hardware-in-the-loop experiments to demonstrate its feasibility.
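As a rough illustration of what such state, action, and reward design choices look like, the sketch below shows one common pattern for a hybrid EV: a state vector of SOC, speed, and power demand; a continuous action that splits power between engine and battery; and a reward trading fuel use against charge sustenance. The field names and weights are assumptions for illustration, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class EMSState:
    """An illustrative EMS state vector for a hybrid EV."""
    soc: float           # battery state of charge, 0-1
    speed: float         # vehicle speed, m/s
    power_demand: float  # driver power request, kW

def ems_reward(fuel_rate, soc, soc_target=0.6, w_fuel=1.0, w_soc=10.0):
    """Penalize fuel consumption and drift from a charge-sustaining SOC.

    The weights trade energy economy against battery health; the values
    here are illustrative, not taken from the paper.
    """
    return -(w_fuel * fuel_rate + w_soc * (soc - soc_target) ** 2)

def split_power(state: EMSState, engine_fraction: float):
    """A continuous action: the fraction of demand met by the engine."""
    engine_kw = engine_fraction * state.power_demand
    battery_kw = state.power_demand - engine_kw
    return engine_kw, battery_kw
```

A richer state (road grade, predicted speed profile) or a differently shaped reward can change which algorithm performs best, which is one reason the paper evaluates these design choices systematically.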

According to the researchers, LearningEMS has the potential to improve energy efficiency, reduce vehicle operating costs, and extend the lifespan of power systems. The open-source nature of LearningEMS encourages further research and innovation in the field, allowing engineers and researchers to develop more advanced EMS algorithms.

The paper “LearningEMS: A Unified Framework and Open-source Benchmark for Learning-based Energy Management of Electric Vehicles” is authored by Yong Wang, Hongwen He, Yuankai Wu, Pei Wang, Haoyu Wang, Renzong Lian, Jingda Wu, Qin Li, Xiangfei Meng, Yingjuan Tang, Fengchun Sun, and Amir Khajepour. Full text of the open-access paper: https://doi.org/10.1016/j.eng.2024.10.021. For more information about Engineering, follow us on X (https://twitter.com/EngineeringJrnl) and like us on Facebook (https://www.facebook.com/EngineeringJrnl).


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.