This study considers a multi-user massive multiple-input multiple-output (MU-mMIMO) system in which an unmanned aerial vehicle (UAV) acts as a decode-and-forward (DF) relay between the base station (BS) and multiple Internet-of-Things (IoT) devices. Our primary objective is to maximize the overall achievable rate (AR) by introducing a novel framework that jointly designs hybrid beamforming (HBF) and UAV localization in dynamic MU-mMIMO IoT systems. In particular, the HBF stages at the BS and the UAV are designed by leveraging slow time-varying angular information, while a deep reinforcement learning (RL) algorithm, namely deep deterministic policy gradient (DDPG) with a continuous action space, is developed to learn the UAV deployment. Using a customized reward function, the RL agent learns an optimal UAV deployment policy capable of adapting to both static and dynamic environments. Illustrative results show that the proposed DDPG-based UAV deployment (DDPG-UD) achieves approximately 99.5% of the sum-rate capacity attained by particle swarm optimization (PSO)-based UAV deployment (PSO-UD), while reducing runtime to approximately 68.5% of that required by PSO-UD, making it an efficient solution for dynamic MU-mMIMO environments.
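To make the continuous-action formulation concrete, the sketch below shows a minimal DDPG-style actor that maps the UAV state (here, simply its 3-D position) to a bounded displacement action, together with a placeholder reward based on the change in achievable sum-rate. This is not the authors' implementation: the network sizes, state/action dimensions, step bound, and the reward shaping are illustrative assumptions.

```python
# Minimal sketch of a continuous-action actor for UAV deployment (DDPG-style).
# All dimensions, layer sizes, and the reward definition are assumptions for illustration.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim=3, action_dim=3, max_step=5.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded output in [-1, 1]
        )
        self.max_step = max_step  # scale to the maximum UAV displacement per step (metres)

    def forward(self, state):
        return self.max_step * self.net(state)

def reward(sum_rate, prev_sum_rate):
    # Hypothetical reward: improvement in the achievable sum-rate after the UAV moves.
    return sum_rate - prev_sum_rate

# Example usage: propose a displacement from the current UAV position.
actor = Actor()
uav_pos = torch.tensor([[10.0, 20.0, 50.0]])  # (x, y, z) position in metres
delta = actor(uav_pos)                        # continuous action in [-max_step, max_step]^3
new_pos = uav_pos + delta
```

In a full DDPG agent, this actor would be paired with a critic network, target networks, and a replay buffer, and trained on transitions whose reward reflects the sum-rate obtained with the HBF stages at the new UAV position.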