Reachability analysis is a promising technique for automatically proving or disproving the reliability and safety of AI-empowered software systems developed using Deep Reinforcement Learning (DRL). Existing approaches, however, suffer from limited scalability and large overestimation because they must over-approximate the complex and almost inexplicable system components, namely deep neural networks (DNNs). In this paper, we propose a novel, tight, and scalable reachability analysis approach for DRL systems. By training on abstract states, our approach treats the embedded DNNs as black boxes, avoiding the over-approximation of neural networks when computing reachable sets. To tackle the state-explosion problem inherent to abstraction-based approaches, we devise a novel adjacent interval aggregation algorithm that balances the growth in the number of abstract states against the overestimation caused by abstraction. We implement our approach in a tool called BBReach and evaluate it on an extensive benchmark of control systems, demonstrating its tightness, scalability, and efficiency.
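To give a flavor of the interval-aggregation idea, the sketch below greedily merges adjacent one-dimensional intervals (abstract states) while capping the width of any merged interval. This is only an illustrative toy, not the algorithm from the paper: the function name, the greedy left-to-right strategy, and the `max_width` budget are all assumptions introduced here for exposition.

```python
def aggregate_intervals(intervals, max_width):
    """Greedily merge sorted, adjacent [lo, hi] intervals.

    intervals: list of (lo, hi) tuples sorted by lo.
    max_width: cap on the width of any merged interval; a smaller cap
               yields more abstract states but less overestimation,
               a larger cap yields fewer states but coarser sets.
    Illustrative sketch only, not the BBReach algorithm itself.
    """
    if not intervals:
        return []
    merged = [intervals[0]]
    for lo, hi in intervals[1:]:
        mlo, mhi = merged[-1]
        # Merge only if the intervals touch/overlap and the merged
        # interval stays within the width budget.
        if lo <= mhi and max(hi, mhi) - mlo <= max_width:
            merged[-1] = (mlo, max(hi, mhi))
        else:
            merged.append((lo, hi))
    return merged
```

For example, with `max_width = 2`, the intervals `[(0, 1), (1, 2), (2, 4)]` collapse to `[(0, 2), (2, 4)]`: the first two merge within the budget, while absorbing the third would exceed it. Tuning this budget is exactly the trade-off the abstract describes between the number of abstract states and the overestimation introduced by abstraction.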