Abstract:
The recent upsurge in data-intensive applications over wireless communication networks is driving the rapid expansion of such networks and thereby presenting new research challenges pertaining to their efficient deployment. In present-day communication networks, the increased traffic load compels network operators to expand their infrastructure by deploying a large number of base stations (BSs) and access points (APs). Studies have reported that a major portion of energy consumption occurs at the access network entities. This means that the massive data traffic is being served at the expense of an increased carbon footprint and enormous energy consumption. Energy saving has therefore emerged as a major concern in such data-intensive, high-traffic communication networks. Consequently, the energy-efficient operation of BSs and APs has become an important research problem, and it is taken up in this thesis for the case of wireless networks.
In this work, the research aim of energy saving is pursued for both cellular BSs and Wi-Fi APs, thereby covering a major part of the wireless communication landscape. An actor-critic (AC) reinforcement learning (RL) framework is used to enable traffic-based ON/OFF switching of BSs and APs. Furthermore, previously estimated traffic statistics are exploited through transfer learning to further improve energy savings and to speed up the learning process. This novel approach is applied to three cases: the realization of a transfer learning framework for Wi-Fi networks, the implementation of a three-state RL-based BS switching scheme for existing cellular networks, and the application of RL in heterogeneous networks (HetNets) consisting of macro and femto BSs. An important feature of this study is the use of a practical scenario and real-time data collected from the institute's Wi-Fi network to validate the adopted scheme. The superiority of the proposed framework is demonstrated through simulations and supporting mathematical analysis.