Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks, and determining collision-free trajectories in multi-UAV non-cooperative scenarios while collecting data from distributed Internet of Things (IoT) nodes is a challenging task. In this paper, we consider a path planning optimization problem that maximizes the data collected from multiple IoT nodes under realistic constraints. The considered multi-UAV non-cooperative scenarios involve a random number of other UAVs in addition to the typical UAV, and the UAVs do not communicate or share information with each other. We translate the problem into a Markov decision process (MDP) with parameterized states, permissible actions, and detailed reward functions. A dueling double deep Q-network (D3QN) is proposed to learn the decision-making policy of the typical UAV, without any prior knowledge of the environment (e.g., the channel propagation model and the locations of obstacles) or of the other UAVs (e.g., their missions, movements, and policies). The proposed algorithm adapts to various missions in various scenarios, e.g., different numbers and positions of IoT nodes, different amounts of data to be collected, and different numbers and positions of other UAVs. Numerical results demonstrate that real-time navigation can be performed efficiently with a high success rate, a high data collection rate, and a low collision rate.
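To make the D3QN component concrete, the following is a minimal sketch, not the authors' implementation, of a dueling Q-network combined with a double-DQN target computation. The state dimension, action count, and hidden-layer sizes are hypothetical placeholders; the paper's actual parameterized states (IoT node locations, remaining data, nearby UAVs, etc.) and reward functions are not reproduced here.

```python
# Minimal dueling double DQN sketch (illustrative only; hypothetical dimensions).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Dueling heads: state value V(s) and per-action advantages A(s, a).
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)                      # shape (batch, 1)
        a = self.advantage(h)                  # shape (batch, num_actions)
        # Standard dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_targets(online: DuelingQNet, target: DuelingQNet,
                       rewards, next_states, dones, gamma: float = 0.99):
    """Double-DQN target: action selected by the online net, evaluated by the target net."""
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

if __name__ == "__main__":
    # Toy usage with made-up sizes; the paper's exact MDP is not assumed.
    state_dim, num_actions, batch = 16, 5, 32
    online = DuelingQNet(state_dim, num_actions)
    target = DuelingQNet(state_dim, num_actions)
    target.load_state_dict(online.state_dict())

    states = torch.randn(batch, state_dim)
    actions = torch.randint(0, num_actions, (batch, 1))
    rewards = torch.randn(batch)
    next_states = torch.randn(batch, state_dim)
    dones = torch.zeros(batch)

    y = double_dqn_targets(online, target, rewards, next_states, dones)
    q_taken = online(states).gather(1, actions).squeeze(1)
    loss = nn.functional.mse_loss(q_taken, y)
    print(loss.item())
```

The dueling decomposition and the decoupled action selection/evaluation are the two ingredients the abstract refers to; replay buffers, exploration schedules, and target-network synchronization would be added around this core in a full training loop.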