RF-CVN: Recurrent Reinforcement Learning Framework for Cognitive Vehicular Ad-Hoc Networks Routing
Authors :- Nahar A.; Das D.; Yadav R.; Khujamatov K.; Reypnazarov E.
Publication :- IEEE Wireless Communications and Networking Conference, WCNC, 2024
Deep learning (DL) based cognitive radio networks (CRNs) are a potential solution to the dilemma posed by limited spectrum and the rising demand for routing services in vehicular ad hoc networks (VANETs). However, the unpredictability of VANETs restricts the generalization ability of DL-based techniques: variations in traffic volume, road topology, and radio propagation characteristics significantly affect the training data. Therefore, in this paper, we propose RF-CVN, a recurrent reinforcement learning (RRL) technique that senses the spectrum and discovers a trustworthy path between source and destination using belief transmission (i.e., shared beliefs about channel conditions, interference levels, and vehicle locations). We first devise a deep recurrent Q-network (DRQN) based multi-channel access scheme that allows unlicensed users to exploit available channels. The recurrent structure allows the Q-function to learn hidden states when network sensing is only partially observable or highly time-correlated. The trust values are then used to gain a more nuanced understanding of the network state, thereby enhancing the efficiency and reliability of the routing process. We argue that trust should be an integral part of the routing process and therefore design a trust mechanism for path selection. The trust mechanism aims to detect spectrum bands that are over-utilized or under-utilized relative to their channel capacity during local training. Simulation results indicate that the RF-CVN routing method outperforms traditional cognitive-radio-based VANET routing schemes in terms of network performance and spectrum-sensing efficiency.
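The core mechanism the abstract describes, a recurrent Q-network whose hidden state summarizes partially observed channel-sensing history before selecting a channel, can be sketched as follows. This is a minimal illustration under assumed toy dimensions and an epsilon-greedy policy; the network sizes, observation encoding, and helper names are placeholders, not the authors' implementation.

```python
# Minimal DRQN sketch for multi-channel access under partial observability.
# All dimensions and hyperparameters are illustrative assumptions.
import random
import torch
import torch.nn as nn


class DRQN(nn.Module):
    """LSTM-based Q-network: the recurrent state summarizes the partially
    observed history of channel-sensing outcomes."""

    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)  # one Q-value per channel

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, n_channels) binary sensing results
        out, hidden_state = self.lstm(obs_seq, hidden_state)
        return self.head(out[:, -1]), hidden_state  # Q-values at the last step


def select_channel(q_net, obs_seq, hidden_state=None, epsilon=0.1):
    """Epsilon-greedy channel selection over the recurrent Q-values."""
    with torch.no_grad():
        q_values, hidden_state = q_net(obs_seq, hidden_state)
    if random.random() < epsilon:
        action = random.randrange(q_values.shape[-1])
    else:
        action = int(q_values.argmax(dim=-1))
    return action, hidden_state


if __name__ == "__main__":
    n_channels = 4
    q_net = DRQN(n_channels)
    # Fake one-step observation: channel 2 sensed idle, the rest busy.
    obs = torch.tensor([[[0.0, 0.0, 1.0, 0.0]]])
    action, hidden = select_channel(q_net, obs)
    print(f"selected channel: {action}")
```

In a full pipeline, the chosen channel's reward (e.g., collision-free transmission) would drive standard Q-learning updates, and per-node trust values would then weight candidate paths during route selection, as the abstract outlines.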