ADAPTIVE Q-LEARNING FOR SUSTAINABLE ELECTRIC VEHICLE ADAPTIVE CRUISE CONTROL IN THAILAND'S SMART TOURISM
DOI:
https://doi.org/10.14456/aisr.2025.13

Keywords:
Adaptive Cruise Control (ACC), Electric Vehicles (EVs), Adaptive Q-Learning, Thailand, Sustainability

Abstract
This study explores the integration of adaptive Q-learning into Electric Vehicle (EV) Adaptive Cruise Control (ACC) systems, with a focus on enhancing sustainability in Thailand's smart tourism destinations. It presents an adaptive Q-learning approach that improves efficiency, safety, and environmental performance in dynamic environments by learning optimal speed and distance policies through continuous interaction. Simulations demonstrated that adaptive Q-learning significantly improved the ACC system's energy efficiency, reduced traffic congestion, and improved air quality. These improvements are crucial for developing sustainable transportation solutions in environmentally sensitive tourist destinations. The study stresses how adaptive Q-learning transforms EV safety, efficiency, and environmental management, setting a sustainable benchmark for Advanced Driver Assistance Systems (ADAS) in Thailand and elsewhere.
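The page does not reproduce the authors' implementation, so the following Python sketch is only a minimal illustration of the approach the abstract describes: an agent that learns speed and gap-keeping policies through continuous interaction, with a learning rate that adapts per state-action pair. The state discretization, action set, reward weights, and visit-count schedule below are all assumptions for illustration, not the paper's published design.

    # Illustrative sketch only: state bins, actions, reward weights, and the
    # adaptive learning-rate schedule are assumptions, not the paper's design.
    import numpy as np

    N_GAP, N_RELV = 10, 7           # discretized gap and relative-speed bins
    ACTIONS = [-1.0, 0.0, 1.0]      # brake / hold / accelerate (m/s^2)
    Q = np.zeros((N_GAP, N_RELV, len(ACTIONS)))
    visits = np.zeros_like(Q)       # per-(state, action) visit counts
    GAMMA = 0.95

    def select_action(s, eps):
        """Epsilon-greedy action selection over the Q-table."""
        if np.random.rand() < eps:
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(Q[s]))

    def reward(gap_m, rel_v, accel):
        """Trade off a safe following gap against energy-hungry accelerations."""
        safety = -abs(gap_m - 30.0) / 30.0   # deviation from a 30 m target gap
        match = -0.05 * rel_v ** 2           # speed mismatch with lead vehicle
        energy = -0.1 * accel ** 2           # harsh accel/brake wastes energy
        crash = -10.0 if gap_m < 5.0 else 0.0
        return safety + match + energy + crash

    def update(s, a, r, s_next):
        """Q-learning update with a visit-count-based adaptive learning rate."""
        visits[s][a] += 1
        alpha = 1.0 / visits[s][a]           # decays as the pair is revisited
        td_target = r + GAMMA * np.max(Q[s_next])
        Q[s][a] += alpha * (td_target - Q[s][a])

    # One hypothetical interaction step:
    s = (4, 3)                               # (gap bin, relative-speed bin)
    a = select_action(s, eps=0.2)
    r = reward(gap_m=28.0, rel_v=-0.5, accel=ACTIONS[a])
    update(s, a, r, s_next=(4, 2))

The per-pair decaying learning rate is one simple way to make tabular Q-learning "adaptive"; the literature the article draws on also covers alternatives such as value-difference-based adaptive exploration.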
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.