arXiv:2410.15607v2 Announce Type: replace-cross Abstract: Reinforcement learning (RL) faces challenges in trajectory planning for urban automated driving due to its poor convergence and the difficulty of designing reward functions. Consequently, few RL-based trajectory planning methods achieve performance comparable to imitation learning-based methods. The convergence problem can be alleviated by combining RL with supervised learning; however, most existing approaches reason only one step ahead and lack the capability to plan multiple future steps. In addition, although inverse reinforcement learning holds promise for solving the reward design problem, existing methods for automated driving impose a linear structure assumption on the reward function, making them difficult to apply to urban automated driving. In light of these challenges, this paper proposes a novel RL-based trajectory planning method that integrates RL with imitation learning to enable multi-step planning. Furthermore, a transformer-based Bayesian reward function is developed to provide effective reward signals for RL in urban scenarios. Moreover, a hybrid-driven trajectory planning framework is proposed to enhance safety and interpretability. The proposed methods were validated on the nuPlan dataset, a large-scale real-world benchmark for urban automated driving. Evaluated with closed-loop metrics, the proposed method significantly outperformed the baseline with an identical policy model structure and achieved performance competitive with the state-of-the-art method. The code is available at https://github.com/Zigned/nuplan_zigned.
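
The abstract describes the approach only at a high level, so the sketch below is purely illustrative: a minimal PyTorch example of the general idea of combining an imitation (behavior-cloning) loss with an RL policy-gradient term for a multi-step trajectory policy. It is not the authors' implementation; the Gaussian waypoint policy, the REINFORCE-style term, the placeholder reward model, and all names and weights are assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal


class TrajectoryPolicy(nn.Module):
    """Hypothetical multi-step policy: predicts a Gaussian over the next
    `horizon` waypoints given an encoded driving-scene state."""

    def __init__(self, state_dim=128, horizon=8, action_dim=2):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.backbone = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU())
        self.mean_head = nn.Linear(256, horizon * action_dim)
        self.log_std = nn.Parameter(torch.zeros(horizon * action_dim))

    def forward(self, state):
        h = self.backbone(state)
        return Normal(self.mean_head(h), self.log_std.exp())


def combined_loss(policy, state, expert_traj, reward_fn, bc_weight=1.0):
    """Behavior-cloning (imitation) term plus a REINFORCE-style RL term
    whose reward comes from a learned reward model (here a placeholder)."""
    dist = policy(state)
    expert_flat = expert_traj.flatten(start_dim=1)
    bc_loss = nn.functional.mse_loss(dist.mean, expert_flat)  # supervised imitation
    sampled = dist.sample()                                    # sampled multi-step plan
    log_prob = dist.log_prob(sampled).sum(dim=-1)
    reward = reward_fn(state, sampled)                         # e.g. a learned reward model
    rl_loss = -(reward.detach() * log_prob).mean()             # policy-gradient term
    return bc_weight * bc_loss + rl_loss


# Minimal usage with random tensors and a placeholder reward model.
if __name__ == "__main__":
    policy = TrajectoryPolicy()
    state = torch.randn(4, 128)
    expert = torch.randn(4, 8, 2)
    dummy_reward = lambda s, a: torch.randn(s.shape[0])
    loss = combined_loss(policy, state, expert, dummy_reward)
    loss.backward()
```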