Abstract: As the number of devices connected to vehicular networks grows exponentially, effectively allocating spectrum in dynamic vehicular environments becomes increasingly difficult, and traditional methods may not suffice. Because vehicular networks carry safety-critical messages, an efficient spectrum allocation paradigm is essential both for reliable communication and for managing congestion in the network. To tackle this, a Deep Q-Network (DQN) model is proposed, leveraging its ability to learn optimal decision strategies over time. The paper presents results and analyses demonstrating the efficacy of the DQN model in enhancing spectrum sharing efficiency. Deep reinforcement learning methods for spectrum sharing in vehicular networks have shown promising outcomes, demonstrating the system's ability to adapt to dynamic communication environments. Both single-agent (SARL) and multi-agent (MARL) reinforcement learning models have achieved high success rates for V2V communication, with the cumulative reward of the RL models reaching its maximum as training progresses.
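For concreteness, the sketch below illustrates the kind of DQN agent the abstract refers to: an epsilon-greedy policy selecting among discrete spectrum sub-bands, trained from a replay buffer against a periodically synced target network. The state layout, network size, hyperparameters, and the random stand-in environment are illustrative assumptions, not details taken from the paper.

```python
# Minimal DQN sketch for discrete spectrum (sub-band) selection.
# All names, dimensions, and hyperparameters here are illustrative
# assumptions, not values from the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8      # e.g., channel gains, interference, queue length (assumed)
NUM_SUBBANDS = 4   # discrete actions: which sub-band to transmit on (assumed)
GAMMA = 0.99       # discount factor
EPSILON = 0.1      # exploration rate for epsilon-greedy action selection
BATCH_SIZE = 32

class QNetwork(nn.Module):
    """Small fully connected network mapping a state to per-action Q-values."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.net(x)

policy_net = QNetwork(STATE_DIM, NUM_SUBBANDS)
target_net = QNetwork(STATE_DIM, NUM_SUBBANDS)
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of a sub-band given the current state."""
    if random.random() < EPSILON:
        return random.randrange(NUM_SUBBANDS)
    with torch.no_grad():
        return int(policy_net(state).argmax().item())

def train_step():
    """One gradient step on the temporal-difference error over a minibatch."""
    if len(replay) < BATCH_SIZE:
        return
    batch = random.sample(replay, BATCH_SIZE)
    states, actions, rewards, next_states = map(torch.stack, zip(*batch))
    q = policy_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrap target from the frozen target network
        target = rewards + GAMMA * target_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example loop with a random stand-in environment (assumed): in the paper's
# setting the reward would instead reflect V2V transmission success/rate.
state = torch.randn(STATE_DIM)
for t in range(1000):
    action = select_action(state)
    reward = torch.tensor(random.random())
    next_state = torch.randn(STATE_DIM)
    replay.append((state, torch.tensor(action), reward, next_state))
    train_step()
    state = next_state
    if t % 100 == 0:  # periodically sync the target network
        target_net.load_state_dict(policy_net.state_dict())
```

As training progresses, the agent's cumulative reward rises as it learns which sub-band choices avoid interference, which is the behavior the abstract summarizes for both the SARL and MARL variants.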