Traditional Query-by-Example (QbE) speech search approaches typically rely on frame-level features, whereas state-of-the-art approaches use models based on acoustic word embeddings (AWEs) to transform variable-length audio signals into fixed-length feature vector representations. However, these approaches cannot satisfy the requirements on both search quality and search speed at the same time. In this paper, we propose a novel fast QbE speech search method based on separable models to address this problem. First, we introduce a QbE speech search training framework. Second, we design a novel model inference scheme based on RepVGG that efficiently improves QbE search quality. Third, we modify and improve our QbE speech search model according to the proposed inference scheme. Experiments on a keyword dataset show that our method improves the GPU Real-Time Factor (RTF) from 1/150 to 1/2300 by applying the separable-model scheme alone, and outperforms other state-of-the-art methods.
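To make the RepVGG-based inference scheme concrete, the following is a minimal sketch of generic RepVGG-style structural reparameterization, not the paper's exact model: all class and variable names here are illustrative assumptions. A block trains with three parallel branches (3x3 conv, 1x1 conv, and identity), each followed by BatchNorm, and is algebraically fused into a single 3x3 convolution for inference, which is what turns the deployed network into a fast single-path model.

```python
# Illustrative sketch of RepVGG-style reparameterization (assumed names,
# not the paper's implementation). Fuses 3x3-conv, 1x1-conv, and identity
# branches, each with BatchNorm, into one 3x3 conv for inference.
import torch
import torch.nn as nn

class RepBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn_id = nn.BatchNorm2d(channels)   # identity branch: BN only
        self.fused = None                       # set by reparameterize()
        self.act = nn.ReLU()

    def forward(self, x):
        if self.fused is not None:              # inference: single 3x3 conv
            return self.act(self.fused(x))
        return self.act(self.bn3(self.conv3(x))
                        + self.bn1(self.conv1(x))
                        + self.bn_id(x))

    @staticmethod
    def _fuse_bn(kernel, bn):
        # Fold BatchNorm running statistics into the conv kernel and bias.
        std = (bn.running_var + bn.eps).sqrt()
        scale = bn.weight / std
        return kernel * scale.reshape(-1, 1, 1, 1), bn.bias - bn.running_mean * scale

    @torch.no_grad()
    def reparameterize(self):
        c = self.conv3.out_channels
        k3, b3 = self._fuse_bn(self.conv3.weight, self.bn3)
        # Pad the 1x1 kernel to 3x3 so the branches can be summed.
        k1, b1 = self._fuse_bn(nn.functional.pad(self.conv1.weight, [1, 1, 1, 1]),
                               self.bn1)
        # Express the identity branch as an equivalent 3x3 kernel.
        kid = torch.zeros(c, c, 3, 3, device=k3.device)
        kid[torch.arange(c), torch.arange(c), 1, 1] = 1.0
        kid, bid = self._fuse_bn(kid, self.bn_id)
        self.fused = nn.Conv2d(c, c, 3, padding=1)
        self.fused.weight.copy_(k3 + k1 + kid)
        self.fused.bias.copy_(b3 + b1 + bid)
```

Under these assumptions, calling `block.eval()` followed by `block.reparameterize()` yields a single-branch block whose outputs match the multi-branch training-time block up to floating-point precision, while the inference cost drops to that of one plain 3x3 convolution per block.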