ADAPTIVE MULTI-AGENT REINFORCEMENT LEARNING FOR ENHANCING SECURITY AND PRIVACY IN EV FAST-CHARGING NETWORKS FOR SUSTAINABILITY
DOI: https://doi.org/10.14456/aisr.2025.17

Keywords: Adaptive Multi-Agent Reinforcement Learning, Electric Vehicle, Fast Charging Network, Security and Privacy, Sustainability

Abstract
Electric Vehicle (EV) adoption is rapidly increasing, necessitating robust and secure fast-charging networks. However, existing infrastructures face significant security and privacy challenges. This paper proposes an approach based on Adaptive Multi-Agent Reinforcement Learning (MARL) to address these issues. Our methodology formulates the problem within a MARL framework and designs adaptive agents that optimize security protocols while preserving user privacy. We conducted experiments in a simulated EV charging environment, demonstrating that our approach enhances security measures such as intrusion detection and privacy-preserving data handling. Key findings indicate significant improvements in network resilience and user privacy, validated through comprehensive metrics and visualization. This research contributes to advancing the understanding and application of MARL in critical infrastructure security and suggests future directions for integrating adaptive intelligence into EV charging networks for sustainability.
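The MARL formulation described above can be illustrated with a deliberately minimal sketch: several independent station agents each learn, via tabular Q-learning, whether to allow or block incoming charging requests. The environment, reward scheme, and all names below are illustrative assumptions for exposition only, not the paper's actual design.

```python
import random

# Toy sketch: independent Q-learning agents guarding charging stations.
# Each agent observes whether an incoming request looks "normal" (state 0)
# or "anomalous" (state 1) and chooses ALLOW (action 0) or BLOCK (action 1).
N_AGENTS, N_STATES, N_ACTIONS = 3, 2, 2
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 2000

rng = random.Random(0)
# One Q-table per station agent: Q[agent][state][action]
Q = [[[0.0] * N_ACTIONS for _ in range(N_STATES)] for _ in range(N_AGENTS)]

def reward(state, action):
    # +1 for allowing normal traffic or blocking an anomaly, -1 otherwise.
    return 1.0 if action == state else -1.0

for _ in range(EPISODES):
    for agent in range(N_AGENTS):
        state = rng.randint(0, 1)          # observed request type
        if rng.random() < EPSILON:         # epsilon-greedy exploration
            action = rng.randint(0, 1)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[agent][state][a])
        # One-step (bandit-style) Q update; this toy has no next-state term.
        Q[agent][state][action] += ALPHA * (reward(state, action) - Q[agent][state][action])

# Greedy policy per agent: expect ALLOW (0) for normal, BLOCK (1) for anomalous.
policies = [[max(range(N_ACTIONS), key=lambda a: Q[g][s][a]) for s in range(N_STATES)]
            for g in range(N_AGENTS)]
print(policies)
```

A realistic system would replace the binary observation with traffic features, add coordination between agents (e.g., a centralized critic as in MADDPG), and fold privacy constraints into the reward; the sketch only shows the decentralized learn-to-defend loop in its simplest form.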
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.