ADAPTIVE Q-LEARNING-BASED IOT INTEGRATION FOR SUSTAINABLE URBAN AUTONOMOUS VEHICLE NAVIGATION
DOI: https://doi.org/10.14456/aisr.2025.12

Keywords: Adaptive Q-Learning, Autonomous Vehicles, Navigation, Internet of Things, Sustainability

Abstract
This research explores a novel method that integrates the Internet of Things (IoT) with adaptive Q-learning (AQL) to enhance urban autonomous vehicle (AV) navigation and improve sustainability. At the core of the method is an AQL algorithm that dynamically adjusts its learning parameters in response to real-time traffic conditions, optimizing decision-making. The model was evaluated in a detailed simulation environment designed to reflect the complexity of urban settings, with a simulated IoT infrastructure comprising sensors, communication protocols, and cloud-based systems. The simulation results show substantial improvements in route optimization, hazard avoidance, and overall vehicle safety, demonstrating that integrating AQL with IoT not only improves the performance of self-driving cars but also promotes more ecological and intelligent urban transportation strategies.
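The adaptation mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes one plausible design, in which a congestion signal from IoT traffic sensors (normalized to [0, 1]) modulates the Q-learning rate so that updates are faster when conditions are volatile and more conservative when they are stable. All names and parameter values here are hypothetical.

```python
import random

class AdaptiveQLearner:
    """Illustrative adaptive Q-learning (AQL) sketch: the learning rate
    is scaled by an IoT-derived congestion signal, so the agent adapts
    faster when traffic conditions change quickly."""

    def __init__(self, n_states, n_actions, gamma=0.95,
                 alpha_min=0.05, alpha_max=0.5):
        # Tabular Q-values, initialized to zero.
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.gamma = gamma          # discount factor
        self.alpha_min = alpha_min  # learning rate under stable traffic
        self.alpha_max = alpha_max  # learning rate under volatile traffic

    def alpha(self, congestion):
        """Map a congestion signal in [0, 1] (e.g. from roadside IoT
        sensors) to a learning rate: volatile traffic -> faster updates."""
        return self.alpha_min + (self.alpha_max - self.alpha_min) * congestion

    def act(self, state, epsilon=0.1):
        """Epsilon-greedy action selection over the Q-table row."""
        if random.random() < epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return max(range(len(row)), key=row.__getitem__)

    def update(self, s, a, reward, s_next, congestion):
        """Standard Q-learning update, with the learning rate chosen
        adaptively from the current congestion signal."""
        lr = self.alpha(congestion)
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += lr * (target - self.q[s][a])
```

Under this sketch, a navigation loop would call `act` to pick the next road segment, observe a reward (e.g. negative travel time), and call `update` with the latest congestion reading, so the effective learning rate tracks real-time traffic rather than following a fixed decay schedule.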
License
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.