As machine learning advances, the datasets used to train models keep growing, which drives up annotation costs and training time and thereby hinders further progress. Zero-shot learning has therefore attracted considerable attention: it allows objects to be recognized or classified even when they have never been seen during training. However, its accuracy is still low, which limits its practical application. To address this, we propose a video-text matching model that can learn from handcrafted features. Our model can be used on its own to predict action classes, and it can also be attached to any other model to improve that model's accuracy. Moreover, our model can be continuously optimized to further improve its accuracy. It only requires some features to be manually annotated, which incurs a labor cost; in many situations, this cost is worth it. Results on UCF101 and HMDB51 show that our model achieves the best accuracy among the compared methods and also improves the accuracy of other models.
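The abstract does not spell out the architecture, so the sketch below only illustrates one common way a video-text matching model for zero-shot action recognition can be realized: pooled video features and per-class text (or attribute) features are projected into a shared embedding space and scored by cosine similarity, with the highest-scoring class taken as the prediction. The module names, feature dimensions, and PyTorch framing are illustrative assumptions, not the paper's actual design; likewise, fusing the resulting scores with another classifier's logits is just one plausible reading of "added to any other model".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoTextMatcher(nn.Module):
    """Generic video-text matching head for zero-shot action recognition.

    Projects a video feature vector and per-class text/attribute features
    into a shared embedding space and scores classes by cosine similarity.
    All names and dimensions are assumptions for illustration only.
    """

    def __init__(self, video_dim=2048, text_dim=300, embed_dim=256):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, video_feat, class_text_feats):
        # video_feat: (B, video_dim); class_text_feats: (C, text_dim)
        v = F.normalize(self.video_proj(video_feat), dim=-1)       # (B, D)
        t = F.normalize(self.text_proj(class_text_feats), dim=-1)  # (C, D)
        return v @ t.t()  # (B, C) cosine-similarity scores


if __name__ == "__main__":
    model = VideoTextMatcher()
    videos = torch.randn(4, 2048)        # e.g. pooled clip features
    class_texts = torch.randn(101, 300)  # e.g. embeddings of UCF101 class names
    scores = model(videos, class_texts)  # similarity of each video to each class
    preds = scores.argmax(dim=1)         # predicted (possibly unseen) action classes
    print(scores.shape, preds)
```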