The development of facial biometric systems has contributed greatly to the advancement of the computer vision field. There is a growing need for multimodal systems that combine multiple biometric traits in an efficient, meaningful way. In this paper, we introduce "IdentiFace", a multimodal facial biometric system that combines the core task of facial recognition with some of the most important soft biometric traits, such as gender, face shape, and emotion. We also focused on building the system using only a VGG-16-inspired architecture, with minor changes across the different subsystems. This unification allows for simpler integration across modalities and makes it easier to interpret the features learned between tasks, offering insight into the decision-making process across the facial modalities and their potential connections. For the recognition problem, we achieved a 99.2% test accuracy for five classes with high intra-class variation using data collected from the FERET database [1]. For gender recognition, we achieved 99.4% on our own dataset and 95.15% on a public dataset [2]. We also achieved a test accuracy of 88.03% on the face-shape problem using the celebrity face-shape dataset [3]. Finally, we achieved a test accuracy of 66.13% on the emotion task, which is competitive with related work on the FER2013 dataset [4].
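As a rough illustration of the unified design described above, the following is a minimal PyTorch sketch of a single VGG-16-style classifier template instantiated once per subsystem, with only the output layer changed per task. The helper names (vgg_block, make_subsystem), the exact layer configuration, and the five face-shape classes are illustrative assumptions rather than the paper's actual implementation; only the five identity classes are stated above, while binary gender and the seven FER2013 emotion labels follow from those datasets.

```python
# Minimal sketch (PyTorch): one VGG-16-style template reused across the four
# subsystems, differing only in the size of the final classification layer.
# Layer sizes here are assumptions; the paper's configuration may differ.
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    # A VGG-style block: n_convs 3x3 convolutions, each followed by ReLU,
    # then 2x2 max pooling to halve the spatial resolution.
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

def make_subsystem(num_classes):
    # VGG-16's five-block convolutional stem, then a small classifier head.
    return nn.Sequential(
        vgg_block(3, 64, 2),
        vgg_block(64, 128, 2),
        vgg_block(128, 256, 3),
        vgg_block(256, 512, 3),
        vgg_block(512, 512, 3),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(512, num_classes),
    )

# One subsystem per facial modality: identity covers the five FERET classes
# used above, gender is binary, FER2013 has seven emotion labels, and five
# face-shape classes are an assumption about dataset [3].
subsystems = nn.ModuleDict({
    "identity":   make_subsystem(5),
    "gender":     make_subsystem(2),
    "face_shape": make_subsystem(5),
    "emotion":    make_subsystem(7),
})
```

Because every subsystem shares the same backbone structure, feature maps at corresponding depths can be compared across tasks, which is what makes the cross-modality interpretation mentioned above straightforward.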