Abstract: To address the poor generalization of end-to-end deep learning speech recognition models, this study proposes Conformer-R, a Conformer-based speech recognition model that incorporates the R-drop structure. The Conformer backbone, which has shown promising results in speech recognition, effectively models both local and global speech information, while the R-drop structure reduces overfitting, enhancing the model's generalization ability and overall recognition performance. The model was first pre-trained on the Aishell1 and Wenetspeech datasets for general domain adaptation and then fine-tuned on computer-related audio data. Comparison experiments with classic models such as LAS and Wenet on the same test set demonstrate that Conformer-R effectively improves generalization.
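As a rough illustration of the R-drop regularization mentioned above, the sketch below (PyTorch, assumed setup) runs the same batch through a dropout-enabled model twice and adds a symmetric KL term between the two output distributions to the task loss. The plain cross-entropy loss, tensor shapes, and weight `alpha` are simplifications for illustration, not the paper's exact CTC/attention training objective.

```python
import torch
import torch.nn.functional as F

def r_drop_loss(model, inputs, targets, alpha=0.3):
    """Two stochastic forward passes with dropout active: the task loss is
    averaged over both passes and a symmetric KL term penalizes divergence
    between the two output distributions (R-drop)."""
    logits1 = model(inputs)   # assumed shape: (batch, time, vocab)
    logits2 = model(inputs)   # dropout yields a different sub-network
    ce = 0.5 * (F.cross_entropy(logits1.transpose(1, 2), targets)
                + F.cross_entropy(logits2.transpose(1, 2), targets))
    p = F.log_softmax(logits1, dim=-1)
    q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return ce + alpha * kl
```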
Abstract: To enhance generalization and improve the effectiveness of the Transformer for named entity recognition, this paper proposes the XLNet-Transformer-R model. The XLNet pre-trained model is combined with a Transformer encoder that uses relative positional encodings, strengthening the model's ability to process long text, learn contextual information, and remain robust. To prevent overfitting, the R-Drop structure is used to improve generalization and raise the model's accuracy on named entity recognition tasks. Ablation experiments on the MSRA dataset and comparison experiments with other models on four datasets show excellent performance, demonstrating the effectiveness of the XLNet-Transformer-R design.
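The sketch below is a minimal, assumed illustration of coupling XLNet representations with a relative-position-aware encoder layer for token tagging. The learned relative-distance bias is a simplified Shaw-style stand-in for the paper's relative positional encoding, and the checkpoint name, dimensions, and classification head are illustrative.

```python
import torch
import torch.nn as nn
from transformers import XLNetModel  # checkpoint name below is illustrative

class RelPosSelfAttention(nn.Module):
    """Single-head self-attention with a learned bias over clipped relative
    distances -- a simplified stand-in for relative positional encoding."""
    def __init__(self, d_model, max_dist=128):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.rel_bias = nn.Embedding(2 * max_dist + 1, 1)
        self.max_dist = max_dist
        self.scale = d_model ** -0.5

    def forward(self, x):                       # x: (batch, seq, d_model)
        scores = torch.matmul(self.q(x), self.k(x).transpose(-2, -1)) * self.scale
        pos = torch.arange(x.size(1), device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        scores = scores + self.rel_bias(rel + self.max_dist).squeeze(-1)
        return torch.matmul(scores.softmax(dim=-1), self.v(x))

class XLNetTransformerTagger(nn.Module):
    def __init__(self, num_labels, d_model=768):
        super().__init__()
        self.xlnet = XLNetModel.from_pretrained("xlnet-base-cased")
        self.rel_attn = RelPosSelfAttention(d_model)
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.xlnet(input_ids, attention_mask=attention_mask).last_hidden_state
        h = h + self.rel_attn(h)                # residual over the relative-attention block
        return self.classifier(h)               # per-token label logits for NER
```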
Abstract: Translation models tend to ignore the rich semantic information in triples during knowledge graph completion. To remedy this shortcoming, this paper constructs a knowledge graph completion method that adaptively incorporates enhanced semantic information. The latent semantic information in each triple is obtained by fine-tuning the BERT model, and an attention-based feature embedding method computes semantic attention scores between relations and entities in positive and negative triples; these scores are incorporated into the structural information to form a soft semantic constraint rule. The rule is added to the original translation model to adaptively enhance its semantic information. In addition, to account for the impact of high-dimensional vectors on performance, the BERT-whitening method is used to reduce dimensionality and generate a more efficient semantic vector representation. Experimental comparisons show that the proposed method performs better on both the FB15K and WN18 datasets, with an improvement of about 2.6% over the original translation model, verifying the reasonableness and effectiveness of the method.
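A minimal sketch of the BERT-whitening transform referred to above is given below (NumPy): the sentence embeddings are centered, whitened with the SVD of their covariance, and truncated to a lower dimension. The output dimension, variable names, and usage line are illustrative assumptions.

```python
import numpy as np

def bert_whitening(embeddings, out_dim=256):
    """BERT-whitening: center the embeddings, whiten with the covariance's
    SVD, and keep the top out_dim directions to obtain lower-dimensional,
    more isotropic semantic vectors."""
    mu = embeddings.mean(axis=0, keepdims=True)       # (1, d)
    cov = np.cov((embeddings - mu).T)                 # (d, d)
    u, s, _ = np.linalg.svd(cov)                      # cov = u @ diag(s) @ u.T
    w = (u @ np.diag(1.0 / np.sqrt(s)))[:, :out_dim]  # whitening + truncation
    return (embeddings - mu) @ w, mu, w

# usage (illustrative): reduced, mu, w = bert_whitening(sentence_vecs, out_dim=256)
# new semantic vectors are mapped with (x - mu) @ w before scoring triples
```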
Abstract: The main purpose of relation extraction is to extract the semantic relationship between a tagged pair of entities in a sentence, which plays an important role in sentence-level semantic understanding and in the construction of knowledge graphs. This paper proposes the hypothesis that the key semantic information inside a sentence plays a decisive role in entity relation extraction. Based on this hypothesis, the sentence is split into three segments according to the positions of the entities, and an intra-sentence attention mechanism extracts fine-grained semantic features within the sentence while reducing interference from irrelevant noise. The proposed relation extraction model can thus make full use of the available positive semantic information. Experimental results show that the model improves the precision-recall curves and P@N values compared with existing methods, proving its effectiveness.
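The sketch below illustrates one way the described intra-sentence attention could look (PyTorch, assumed shapes): the token sequence is cut into three segments around the two entity positions and each segment is pooled with its own attention weights. The scoring function and exact segment boundaries are assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn

class IntraSentenceAttention(nn.Module):
    """Splits token representations into three segments around the two entity
    positions and pools each segment with learned attention weights, so
    fine-grained cues near the entities are not drowned out by noise."""
    def __init__(self, d_model):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def attend(self, seg):                      # seg: (seg_len, d_model), seg_len >= 1
        weights = torch.softmax(self.score(seg).squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * seg).sum(dim=0)

    def forward(self, tokens, e1_pos, e2_pos):
        # tokens: (seq_len, d_model); e1_pos < e2_pos are the entity indices
        segments = [tokens[: e1_pos + 1],        # start of sentence up to entity 1
                    tokens[e1_pos: e2_pos + 1],  # span between the two entities
                    tokens[e2_pos:]]             # entity 2 to the end of the sentence
        return torch.cat([self.attend(s) for s in segments], dim=-1)  # (3 * d_model,)
```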