Abstract: Morphing Attack Detection (MAD) is a relevant topic that addresses attempts by unauthorised individuals to gain access to a "valid" identity. One of the main scenarios is printing a morphed image and submitting the respective print in a passport application process. Today, only small datasets are available for training MAD algorithms, because of privacy concerns and because of the effort associated with printing and scanning images in large numbers. In order to improve detection capabilities and spot such morphing attacks, a larger and more realistic dataset is needed that represents the passport application scenario, including the diversity of devices and the resulting printed, scanned, or compressed images. Creating training data that represents this diversity of attacks is a very demanding task because the training material is developed manually. This paper proposes two different transfer-based methods for automatically creating digital print/scan face images and for using such images in the training of a Morphing Attack Detection algorithm. Our proposed methods reach an Equal Error Rate (EER) of 3.84% and 1.92% on the FRGC/FERET databases when adding our synthetic and texture-transfer print/scan images at 600 dpi, respectively, to the handcrafted images.
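For readers unfamiliar with the reported metric, the Equal Error Rate is the operating point at which the attack and bona fide error rates coincide. The following is only an illustrative sketch of how an EER can be approximated from detector scores; it is not part of the paper's pipeline, and the score convention (higher score means "attack") and all names and data are assumptions for the example.

```python
import numpy as np

def compute_eer(bona_fide_scores, attack_scores):
    """Approximate the Equal Error Rate: the threshold at which the
    rate of missed attacks equals the rate of rejected bona fide samples.
    Assumes higher scores indicate 'attack' (illustrative convention)."""
    thresholds = np.sort(np.concatenate([bona_fide_scores, attack_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        # Attacks wrongly accepted as bona fide (score below threshold)
        attack_error = np.mean(attack_scores < t)
        # Bona fide samples wrongly rejected as attacks (score at/above threshold)
        bona_fide_error = np.mean(bona_fide_scores >= t)
        gap = abs(attack_error - bona_fide_error)
        if gap < best_gap:
            best_gap, eer = gap, (attack_error + bona_fide_error) / 2
    return eer

# Toy usage with random scores (not data from the paper)
rng = np.random.default_rng(0)
bona_fide = rng.normal(0.2, 0.1, 1000)  # low scores for bona fide images
attacks = rng.normal(0.8, 0.1, 1000)    # high scores for morphing attacks
print(f"EER ~ {compute_eer(bona_fide, attacks):.2%}")
```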