Abstract: In this article, the authors present a novel method for learning the personalized tactic of discretionary lane-change initiation for fully autonomous vehicles through human-computer interaction. Instead of learning from human-driving demonstrations, a reinforcement learning technique is employed to learn how to initiate lane changes from the traffic context, the action of the self-driving vehicle, and in-vehicle user feedback. The proposed offline algorithm rewards the action-selection strategy when the user gives positive feedback and penalizes it when the feedback is negative. In addition, a multi-dimensional driving scenario is considered to represent a more realistic lane-change trade-off. The results show that the lane-change initiation model obtained by this method can reproduce the personal lane-change tactic, and that the customized models (average accuracy 86.1%) perform considerably better than the non-customized models (average accuracy 75.7%). This method allows continuous improvement of customization during fully autonomous driving, even for users without human-driving experience, which will significantly enhance user acceptance of high-level autonomy in self-driving vehicles.
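To illustrate the feedback-driven learning idea summarized above, the following is a minimal sketch of an offline, Q-learning-style update in which the reward signal is derived from in-vehicle user feedback. It is not the authors' algorithm: the discretized state representation, the `FEEDBACK_REWARD` mapping, the tabular update rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Assumed mapping: +1 reward for positive user feedback,
# -1 for negative feedback on the vehicle's lane-change decision.
FEEDBACK_REWARD = {"positive": 1.0, "negative": -1.0}


class LaneChangeInitiationLearner:
    """Tabular Q-learning sketch for learning when to initiate a lane change.

    States are assumed to be discretized traffic-context features; actions are
    0 = keep lane, 1 = initiate lane change. This is a simplified stand-in for
    the paper's offline algorithm, not its implementation.
    """

    def __init__(self, n_states, n_actions=2, alpha=0.1, gamma=0.9):
        self.q = np.zeros((n_states, n_actions))
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor

    def update(self, state, action, feedback, next_state):
        """Offline update from one logged (state, action, feedback, next_state) tuple."""
        reward = FEEDBACK_REWARD[feedback]
        td_target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])

    def act(self, state):
        """Greedy decision after training: initiate the lane change or not."""
        return int(np.argmax(self.q[state]))


# Usage sketch: replay a logged dataset of user interactions offline.
learner = LaneChangeInitiationLearner(n_states=100)
logged_interactions = [
    (12, 1, "positive", 13),   # user approved an initiated lane change
    (45, 1, "negative", 46),   # user disapproved; the strategy is penalized
]
for s, a, fb, s_next in logged_interactions:
    learner.update(s, a, fb, s_next)
```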