Abstract: We intend to generate low-dimensional explicit distributional semantic vectors. In explicit semantic vectors, each dimension corresponds to a word, so the word vectors are interpretable. In this research, we propose a new approach to obtaining low-dimensional explicit semantic vectors. First, the proposed approach considers three criteria, Word Similarity, Number of Zeros, and Word Frequency, as features of the words in a corpus. Then, we extract rules for obtaining the initial basis words using a decision tree built on these three features. Second, we propose a binary weighting method based on the Binary Particle Swarm Optimization algorithm that obtains N_B = 1000 context words. We also use a word selection method that provides N_S = 1000 context words. Third, we extract the golden words of the corpus based on the binary weighting method. Then, we add the extracted golden words to the context words selected by the word selection method to form the golden context words. We construct the word vectors from the ukWaC corpus and evaluate them on the MEN, RG-65, and SimLex-999 test sets. We report results against a baseline that uses the 5k most frequent words in the corpus as context words and counts co-occurrences within a fixed window. We obtain the word vectors using the 1000 selected context words together with the golden context words. Compared to the baseline, our approach increases the Spearman correlation coefficient on the MEN, RG-65, and SimLex-999 test sets by 4.66%, 14.73%, and 1.08%, respectively.
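The abstract does not spell out how the explicit vectors are built from the selected context words, so the following is only a minimal sketch of fixed-window co-occurrence counting over a chosen context-word set, compared with cosine similarity; the window size of 5, the whitespace-tokenized toy corpus, and all function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch: explicit co-occurrence vectors over selected context words.
from collections import Counter, defaultdict
import math

def build_explicit_vectors(sentences, context_words, window=5):
    """Count co-occurrences of each word with the selected context words."""
    context_index = {w: i for i, w in enumerate(context_words)}
    vectors = defaultdict(Counter)
    for tokens in sentences:
        for i, target in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] in context_index:
                    vectors[target][context_index[tokens[j]]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Toy usage: two short sentences and two context words.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vecs = build_explicit_vectors(corpus, ["sat", "the"])
print(cosine(vecs["cat"], vecs["dog"]))

In the evaluation described above, the cosine similarities of the test-set word pairs would then be ranked and correlated with the human judgments via the Spearman coefficient.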
Abstract: Despite significant advances in Deep Face Recognition (DFR) systems, introducing new DFRs under specific constraints such as varying pose remains a major challenge. In particular, due to the 3D nature of the human head, the facial appearance of the same subject exhibits high intra-class variability when projected onto the camera image plane. In this paper, we propose a new multi-view Deep Face Recognition (MVDFR) system to address this challenge. In this context, multiple 2D images of each subject under different views are fed into the proposed deep neural network, whose design re-expresses the facial features as a single, more compact face descriptor, which in turn provides a more informative and abstract representation for face identification using convolutional neural networks. To extend our system to multi-view facial images, we modify the gold-standard Deep-ID model. The experimental results indicate that our proposed method yields 99.8% accuracy, while the state-of-the-art method achieves 97% accuracy. We also collected the Iran University of Science and Technology (IUST) face database, comprising 6,552 images of 504 subjects, to carry out our experiments.
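The abstract leaves the exact MVDFR architecture and view-fusion mechanism to the body of the paper, so the sketch below only illustrates the general idea of a shared per-view CNN backbone whose embeddings are pooled into one compact descriptor; the layer sizes, the 160-dimensional descriptor, and average pooling across views are assumptions for illustration, not the authors' design.

# Illustrative sketch: fuse multiple views of a face into one descriptor.
import torch
import torch.nn as nn

class MultiViewFaceEncoder(nn.Module):
    def __init__(self, descriptor_dim=160):
        super().__init__()
        # Shared per-view backbone (stand-in for a Deep-ID-style CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, descriptor_dim),
        )

    def forward(self, views):
        # views: (batch, n_views, 3, H, W) -> one descriptor per subject.
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))  # per-view features
        return feats.reshape(b, v, -1).mean(dim=1)            # fuse across views

# Toy usage: 2 subjects, 4 views each, 64x64 crops.
encoder = MultiViewFaceEncoder()
print(encoder(torch.randn(2, 4, 3, 64, 64)).shape)  # torch.Size([2, 160])

The resulting single descriptor per subject can then be fed to any standard classifier or metric-learning head for identification.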