Recent research has shown that mmWave radar sensing is effective for object detection in low-visibility environments, which makes it an ideal technique for autonomous navigation systems such as autonomous vehicles. However, due to the characteristics of radar signals, such as sparsity, low resolution, specularity, and high noise, it is still quite challenging to reconstruct 3D object shapes via mmWave radar sensing. Building on our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar), we introduce in this paper DeepPoint, a deep learning model that generates 3D objects in point cloud format and significantly outperforms the original 3DRIMR design. The model adopts a conditional Generative Adversarial Network (GAN)-based deep neural network architecture. It takes as input the 2D depth images of an object generated by 3DRIMR's Stage 1 and outputs smooth and dense 3D point clouds of the object. The model consists of a novel generator network that applies a sequence of DeepPoint blocks (layers) to extract essential features from the union of multiple rough and sparse input point clouds of an object observed from various viewpoints, even though those input point clouds may contain many incorrect points due to the imperfect generation process of 3DRIMR's Stage 1. The design of DeepPoint adopts a deep structure to capture the global features of the input point clouds, and it relies on an optimally chosen number of DeepPoint blocks and skip connections to outperform the original 3DRIMR design. Our experiments demonstrate that this model significantly outperforms the original 3DRIMR and other standard techniques in reconstructing 3D objects.
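The abstract does not give implementation details, but the following PyTorch sketch illustrates the general kind of generator it describes: a deep stack of per-point "DeepPoint blocks" with skip connections that refines a fused, rough point cloud into a denser, smoother one. The block structure (shared 1x1 convolutions), layer widths, block count, and coordinate-regression head here are all illustrative assumptions, not the authors' actual architecture.

```python
# A minimal, hypothetical sketch (not the authors' released code) of a
# generator built from stacked per-point "DeepPoint blocks" with skip
# connections. Block widths, depth, and the output head are assumptions.
import torch
import torch.nn as nn


class DeepPointBlock(nn.Module):
    """One assumed block: a shared per-point MLP (1x1 Conv1d) with
    batch norm and ReLU, applied to a (B, C, N) point-feature tensor."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Sketch of the conditional-GAN generator: a deep stack of
    DeepPoint blocks encodes the union of rough input point clouds,
    a long skip connection reinjects early per-point features, and a
    final shared MLP regresses refined 3D coordinates."""
    def __init__(self, num_blocks=6, width=128):
        super().__init__()
        self.stem = DeepPointBlock(3, width)          # xyz -> features
        self.blocks = nn.ModuleList(
            DeepPointBlock(width, width) for _ in range(num_blocks)
        )
        # Long skip: concatenate early features with deep features.
        self.head = nn.Conv1d(2 * width, 3, kernel_size=1)

    def forward(self, points):                 # points: (B, N, 3)
        x = points.transpose(1, 2)              # -> (B, 3, N)
        early = self.stem(x)
        h = early
        for blk in self.blocks:
            h = blk(h) + h                      # residual-style skip per block
        h = torch.cat([early, h], dim=1)        # long skip connection
        return self.head(h).transpose(1, 2)     # -> (B, N, 3) refined points


# Usage: fuse (union) the rough per-view clouds from Stage 1, then refine.
rough = torch.rand(2, 4096, 3)                  # batch of fused sparse clouds
refined = Generator()(rough)                    # (2, 4096, 3)
```

In a full conditional-GAN setup, this generator would be trained jointly with a discriminator that judges refined clouds conditioned on the rough inputs; that training loop is omitted here since the abstract does not specify it.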