Millimeter-wave (mmWave) base stations can offer abundant high-capacity channel resources to connected vehicles, greatly improving their quality-of-service (QoS) in terms of downlink throughput. mmWave base stations can operate alongside existing base stations (e.g., macro-cell base stations) on non-overlapping channels, and in such heterogeneous networks each vehicle must decide which base station to associate with and which channel to utilize. Furthermore, because mmWave communication is highly directional rather than omnidirectional, each vehicle must also decide how to align its beam toward the mmWave base station in order to associate with it. However, this joint problem is combinatorial and NP-hard, and thus incurs a high computational cost. In this paper, we solve the problem in a 3-tier heterogeneous vehicular network (HetVNet) with multi-agent deep reinforcement learning (DRL) in a way that maximizes the expected total reward (i.e., downlink throughput) of the vehicles. The multi-agent deep deterministic policy gradient (MADDPG) approach is introduced to learn an optimal policy in the continuous action domain.
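To make the MADDPG structure concrete, the following is a minimal sketch of the centralized-training, decentralized-execution setup it relies on: one actor per vehicle maps a local observation to a continuous action (e.g., beam direction and association/channel preference), while a centralized critic evaluates the joint observations and actions. This is an illustrative assumption-laden example in PyTorch; the network sizes, observation/action dimensions, and variable names are placeholders, not the paper's actual implementation.

```python
# Minimal MADDPG structural sketch (assumed PyTorch; dimensions and names are illustrative).
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Per-vehicle policy: local observation -> bounded continuous action
    (e.g., beam alignment angle, association/channel preference)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh())  # Tanh keeps actions in [-1, 1]

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: joint observations and actions of all agents -> Q-value,
    used only during training."""
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

# Toy setup: 3 vehicles (agents), 8-dim local observation, 2-dim continuous action.
n_agents, obs_dim, act_dim = 3, 8, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)

obs = torch.randn(n_agents, obs_dim)                     # per-agent local observations
acts = torch.stack([a(o) for a, o in zip(actors, obs)])  # decentralized execution
q = critic(obs.reshape(1, -1), acts.reshape(1, -1))      # centralized evaluation
print(q.shape)  # torch.Size([1, 1])
```

During execution each vehicle uses only its own actor and local observation; the centralized critic is discarded after training, which is what makes the deterministic policy gradient applicable in the continuous action domain described above.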