A Novel Hybrid GRU-Based Multi-Agent D2QL Model for Enhancing Spectrum Sensing and Resource Allocation with Energy Harvesting in Cognitive Radio Networks: Towards Green Communication
Abstract
Energy Harvesting (EH) has become an important facet of sustainable Cognitive Radio Networks (CRNs), helping to improve spectrum efficiency and alleviate the energy limitations of Cognitive Users (CUs). This research aims to enhance critical components of CRNs, namely spectrum sensing, channel assignment and power allocation, while integrating EH techniques. A multi-agent double deep reinforcement learning methodology utilizing Gated Recurrent Units (GRUs) is proposed to optimize these processes efficiently. The integration of GRUs substantially enhances an agent's ability to comprehend and adapt to variable environments by capturing temporal dependencies in the data. The multi-agent framework further enables superior collaboration and decision-making among CUs. The GRU-based Multi-Agent Double Deep Q-Learning (MAD2QL) framework is thoroughly benchmarked against single-agent deep Q-network (SADQN) and conventional deep Q-network (CDQN) methods with long short-term memory (LSTM) based learning, providing a systematic comparison of deep reinforcement learning (DRL) models. The double deep learning architecture augments the stability and convergence of the learning process, yielding improved performance in EH and resource allocation. Simulation results confirm that the proposed methodology surpasses existing approaches in sensing efficiency, resource utilization and energy sustainability, presenting a promising solution for the next generation of CRNs. The mean square error (MSE) for energy consumption of the MAD2QL model is 0.26106, 25.7% lower than the MSE of 0.3513 for SADQN, and the proposed model achieves an EH efficiency of 71.59%.
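The abstract attributes the improved stability and convergence to the double deep learning architecture, in which action selection and action evaluation are decoupled between an online and a target network. The paper's own implementation is not given here; the following is a minimal, hypothetical NumPy sketch of that double Q-learning target computation, with network outputs mocked as plain arrays (all names and values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def double_q_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Compute the double-DQN bootstrap target for one transition.

    q_online_next / q_target_next: per-action Q-values for the next state,
    produced by the online and target networks respectively (here, mocked
    arrays rather than GRU network outputs).
    """
    if done:
        # Terminal transition: no bootstrapped future value.
        return reward
    # The online network selects the greedy next action...
    a_star = int(np.argmax(q_online_next))
    # ...but the target network evaluates it. This decoupling reduces the
    # overestimation bias of vanilla DQN and stabilizes learning.
    return reward + gamma * q_target_next[a_star]

# Illustrative transition: reward 1.0, online net prefers action 2,
# which the target net values at 3.0, with discount factor 0.9.
q_on = np.array([0.5, 1.2, 2.0])
q_tg = np.array([0.4, 1.0, 3.0])
target = double_q_target(1.0, q_on, q_tg, gamma=0.9)
print(target)  # → 3.7
```

In the full MAD2QL setting, each CU agent would compute such a target from its GRU-based Q-network outputs and regress its online network toward it; the sketch isolates only the target rule itself.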