This study proposes a novel physics-informed deep learning (DL) approach for reconstructing occluded objects in a terahertz (THz) holographic system. Using angular spectrum theory as prior knowledge, we generate a dataset of diffraction patterns that encode information about the objects. This dataset, combined with unlabeled data measured in experiments, is used for self-training of a physics-informed neural network (NN). During training, the network iteratively predicts outputs for the unlabeled data and reincorporates these predictions into the training set. This recursive strategy not only reduces noise but also minimizes mutual interference during object reconstruction, and it remains effective even when data are scarce. The method is validated on both simulated and experimental data, demonstrating its potential to advance terahertz three-dimensional (3D) imaging. It also sets a new benchmark for rapid, reference-free, and cost-effective power detection.
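
As a rough illustration of the kind of simulated dataset described above, the NumPy sketch below propagates a simple binary object with the standard angular spectrum transfer function and records the resulting intensity pattern. The wavelength, pixel pitch, propagation distance, and object geometry are hypothetical placeholders, not values from this work.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex field over `distance` using the angular spectrum method."""
    ny, nx = field.shape
    k = 2 * np.pi / wavelength

    # Spatial-frequency grids laid out to match the FFT ordering
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)

    # Free-space transfer function; evanescent components are suppressed
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * distance) * (kz_sq > 0)

    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical example: diffraction pattern of a binary occluding object
wavelength = 1.0e-3      # 1 mm, roughly 0.3 THz
pixel_pitch = 0.25e-3    # 0.25 mm sampling
distance = 50e-3         # 50 mm propagation

obj = np.ones((256, 256), dtype=complex)
obj[96:160, 96:160] = 0.0   # opaque square standing in for the occluded object
pattern = np.abs(angular_spectrum_propagate(obj, wavelength, pixel_pitch, distance))**2
```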
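
The recursive self-training strategy summarized above is, in essence, a pseudo-labeling loop. Below is a minimal sketch assuming a PyTorch setup, with a placeholder convolutional network and random tensors standing in for the simulated and measured data; the actual architecture, physics-informed loss, and training schedule of the method are not reproduced here.

```python
import torch
import torch.nn as nn

# Toy reconstruction network standing in for the physics-informed NN (architecture is a placeholder)
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Simulated (pattern, object) pairs and unlabeled measured patterns; random tensors as stand-ins
sim_patterns = torch.rand(32, 1, 64, 64)
sim_objects = torch.rand(32, 1, 64, 64)
measured_patterns = torch.rand(8, 1, 64, 64)   # no ground-truth objects available

train_x, train_y = sim_patterns, sim_objects
for round_idx in range(3):                     # a few self-training rounds
    # Supervised pass over the current (partly pseudo-labeled) training set
    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(train_x), train_y)
        loss.backward()
        optimizer.step()

    # Predict the unlabeled measurements and fold the predictions back in as pseudo-labels
    with torch.no_grad():
        pseudo_labels = model(measured_patterns)
    train_x = torch.cat([sim_patterns, measured_patterns], dim=0)
    train_y = torch.cat([sim_objects, pseudo_labels], dim=0)
```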