Design of a Path-Following Controller for Autonomous Vehicles Using an Optimized Deep Deterministic Policy Gradient Method

Authors

  • Ali Rizehvandi, Faculty of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran
  • Shahram Azadi, Faculty of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran

DOI:

https://doi.org/10.15282/ijame.21.3.2024.18.0901

Keywords:

Autonomous vehicles, DRL method, DDPG algorithm, Path-following

Abstract

The need for safe and reliable transportation has made the advancement of autonomous vehicles (AVs) increasingly significant. To achieve Level 5 autonomy, as defined by the Society of Automotive Engineers, AVs must be able to navigate complex and unconventional traffic environments. Path-following is a core task in autonomous driving, requiring precise and safe tracking of a defined path. Traditional path-tracking methods often rely on manual parameter tuning or rule-based designs, which may not suit dynamic and complex environments. Reinforcement learning has emerged as a powerful technique for learning effective control strategies through agent-environment interaction. This study investigates the efficiency of an optimized Deep Deterministic Policy Gradient (DDPG) method for controlling the acceleration and steering of an autonomous vehicle during path-following. The algorithm converges rapidly, enabling stable and efficient path tracking, and the trained agent achieves smooth control without extreme actions. The performance of the optimized DDPG is compared with that of the standard DDPG algorithm, and the results confirm the improved efficiency of the optimized approach. This advancement could contribute significantly to the development of autonomous driving technology.
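The paper itself presents no code here, but the baseline mechanism named in the abstract, the standard DDPG actor-critic update of Lillicrap et al., can be sketched briefly. The sketch below assumes a three-element state (lateral error, heading error, speed error) and a two-element action (steering, acceleration), both normalized to [-1, 1]; the network sizes, learning rates, and buffer layout are illustrative placeholders, not the authors' tuned, optimized configuration.

# Minimal DDPG sketch for path-following control; illustrative only.
# Assumed interface: the replay buffer holds (s, a, r, s2, done) tuples of
# torch tensors, with r and done shaped (1,). All hyperparameters below are
# placeholders, not the tuned values from the paper.
import copy
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 3   # e.g. lateral error, heading error, speed error (assumed)
ACTION_DIM = 2  # steering, acceleration, both normalized to [-1, 1]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # tanh bounds the actions

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target nets
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = deque(maxlen=100_000)
gamma, tau = 0.99, 0.005

def update(batch_size=64):
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(buffer, batch_size)))
    # Critic step: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # Actor step: deterministic policy gradient, ascend Q along the policy.
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    # Polyak-average the target networks toward the online networks.
    with torch.no_grad():
        for net, tgt in ((actor, actor_t), (critic, critic_t)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.mul_(1.0 - tau).add_(tau * p)

The "optimized" variant the paper evaluates would modify parts of this loop (for example, exploration noise, network structure, or hyperparameter schedules); the sketch shows only the standard update against which it is compared.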


Published

2024-09-20

How to Cite

[1] A. Rizehvandi and S. Azadi, “Design of a Path-Following Controller for Autonomous Vehicles Using an Optimized Deep Deterministic Policy Gradient Method,” Int. J. Automot. Mech. Eng., vol. 21, no. 3, pp. 11682–11694, Sep. 2024.

