| Authors | Javad Zeraatkar Moghaddam, Hosein Mohammadi Firozjae, Mehrdad Ardebilipour |
|---|---|
| Conference | 31st International Conference on Electrical Engineering |
| Conference date | 2023-05-09 |
| Venue | Tehran |
| Pages | 0-0 |
| Presentation type | Oral presentation |
| Conference level | Domestic |
Abstract
While using Unmanned Aerial Vehicles (UAVs) as Flying Base Stations (FBSs) to improve the efficiency of mobile networks is a promising approach, it faces challenges such as the limited energy of the UAV. Applying Reinforcement Learning (RL) algorithms can be a practical way to address this energy problem. Furthermore, combining UAV-aided mobile networks with RL algorithms is a promising direction for helping terrestrial vehicles find the best route. This paper investigates a Vehicle-to-Vehicle (V2V) mobile network in which the UAV plays the role of an FBS and harvests energy from terrestrial users. The limited-energy problem, which prevents the UAV from completing its mission, is then addressed with RL; the RL algorithm used in this paper is a modified Q-Learning algorithm. The simulation results demonstrate the effectiveness of the proposed scenario and show that the proposed RL-based scenario considerably reduces the flight time of the UAV compared with an existing scenario that does not use RL algorithms.
Keywords: Mobile networks, UAV, Reinforcement learning, V2V networks, Energy harvesting
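The abstract refers to a modified Q-Learning algorithm but does not specify the modification, so the sketch below only illustrates a generic tabular Q-Learning update on a hypothetical UAV flight problem; the grid size, action set, reward shaping, and hyperparameters are illustrative assumptions, not the paper's design.

```python
# Minimal tabular Q-Learning sketch for a toy UAV flight-time problem.
# All values here (grid, rewards, hyperparameters) are illustrative
# assumptions and do not reproduce the paper's modified algorithm.
import random

GRID = 5                      # hypothetical 5x5 flight area
START, GOAL = (0, 0), (4, 4)  # UAV start cell and mission target
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # movement directions

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Move the UAV; each step costs -1 (flight time / energy), reaching the goal gives +20."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    if nxt == GOAL:
        return nxt, 20.0, True
    return nxt, -1.0, False

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(500):
    s, done = START, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in range(len(ACTIONS)))
        # Standard Q-Learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```

In this toy setup the negative per-step reward pushes the learned policy toward shorter flights, which loosely mirrors the flight-time reduction objective described in the abstract; the paper's actual state/action design and energy-harvesting constraints would replace these placeholders.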