This paper presents a novel machine learning framework to consistently detect, localize, and rate congenital cleft lip anomalies in human faces. The goal is to provide a universal, objective measure of facial differences and reconstructive surgical outcomes that agrees with human judgments. The proposed method employs the StyleGAN2 generative adversarial network with model adaptation to produce normalized transformations of cleft-affected faces, enabling subsequent measurement of deformity through a pixel-wise subtraction approach. The complete pipeline consists of the following steps: image preprocessing, face normalization, color transformation, morphological erosion, heat map generation, and abnormality scoring. By exploiting the features of this framework, heat maps are produced that finely localize anatomic anomalies. The framework is validated through computer experiments and surveys of human raters. The anomaly scores produced by the proposed model correlate closely with human ratings of facial differences, achieving a Pearson correlation of r = 0.942.
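To make the measurement step concrete, the sketch below illustrates how a subtraction-based heat map and a scalar abnormality score could be computed once a normalized face has been generated by the StyleGAN2 stage. It is a minimal illustration only: the use of OpenCV/NumPy, the grayscale color transformation, the kernel size, and the mean-based score are assumptions for this sketch and do not reproduce the authors' implementation.

```python
# Minimal sketch of the subtraction-based heat map and abnormality score.
# Assumes the StyleGAN2 stage has already produced a "normalized" face that is
# pixel-aligned with the input; OpenCV/NumPy usage and all parameters here are
# illustrative assumptions, not the paper's implementation.
import cv2
import numpy as np


def abnormality_heatmap(original_bgr: np.ndarray,
                        normalized_bgr: np.ndarray,
                        erosion_kernel: int = 5) -> np.ndarray:
    """Pixel-wise difference between the input face and its normalized version."""
    # Color transformation: compare in grayscale to reduce illumination effects
    # (the choice of color space is an assumption for this sketch).
    orig = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    norm = cv2.cvtColor(normalized_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Pixel-wise subtraction highlights regions where the two faces disagree.
    diff = np.abs(orig - norm)

    # Morphological erosion suppresses small, noise-like responses.
    kernel = np.ones((erosion_kernel, erosion_kernel), np.uint8)
    diff = cv2.erode(diff, kernel)

    # Normalize to [0, 1] so heat maps are comparable across images.
    return diff / (diff.max() + 1e-8)


def abnormality_score(heatmap: np.ndarray) -> float:
    """Collapse the heat map into a single scalar rating (mean response)."""
    return float(heatmap.mean())


if __name__ == "__main__":
    original = cv2.imread("cleft_face.png")         # hypothetical input image
    normalized = cv2.imread("normalized_face.png")  # hypothetical GAN output
    hm = abnormality_heatmap(original, normalized)
    print(f"abnormality score: {abnormality_score(hm):.3f}")
```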