Abstract: A transfer function approach has recently proven effective for calibrating deep learning (DL) algorithms in quantitative ultrasound (QUS), addressing data shifts at both the acquisition and machine levels. Expanding on this approach, we develop a strategy to 'steal' the functionality of a QUS DL model from one ultrasound machine and implement it on another. This demonstrates how easily the functionality of a DL model can be transferred between machines, highlighting the security risks of deploying such models in commercial scanners for clinical use. The proposed method is a black-box unsupervised domain adaptation technique that combines the transfer function approach with an iterative schema. It does not use any information about the internals of the victim model; it relies solely on access to the victim machine's input-output interface. We additionally assume the availability of unlabelled data from the testing machine, i.e., the perpetrator machine. This scenario could become commonplace as companies begin deploying DL functionality for clinical use: a competing company could acquire the victim machine and, through its input-output interface, replicate the functionality on its own machines. In our experiments, we used a SonixOne machine and a Verasonics machine. The victim model was trained on SonixOne data, and its functionality was then transferred to the Verasonics machine. The proposed method successfully transferred the functionality to the Verasonics machine, achieving 98\% accuracy on a binary classification task. This study underscores the need to establish security measures before deploying DL models in clinical settings.
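To make the threat model concrete, the sketch below illustrates one plausible form of the iterative black-box functionality transfer described above. It is an assumption-laden illustration, not the paper's implementation: `victim_predict` stands in for the victim machine's input-output interface, `transfer_fn` for a transfer function mapping perpetrator (Verasonics-like) data toward the victim (SonixOne-like) domain, and `surrogate` for the perpetrator's model; all three names are hypothetical.

```python
import torch
import torch.nn as nn

# Hedged sketch of black-box functionality stealing via pseudo-labels.
# Assumptions (not from the paper): victim_predict(x) returns class logits
# through the victim machine's I/O interface only; transfer_fn calibrates
# perpetrator data toward the victim's domain; unlabeled_loader yields
# unlabelled perpetrator-machine batches.

def steal_functionality(victim_predict, transfer_fn, surrogate,
                        unlabeled_loader, n_iters=5, epochs=3, lr=1e-4):
    """Iteratively distill the victim's behavior onto a surrogate model
    using only unlabelled perpetrator data and victim query access."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(n_iters):
        for _ in range(epochs):
            for x in unlabeled_loader:
                with torch.no_grad():
                    x_cal = transfer_fn(x)                # calibrate to victim domain
                    pseudo = victim_predict(x_cal).argmax(dim=1)  # query victim
                logits = surrogate(x)                     # surrogate sees raw data
                loss = loss_fn(logits, pseudo)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # In an iterative schema, transfer_fn could be re-estimated here from
        # the surrogate's confident predictions; omitted for brevity.
    return surrogate
```

The key design point this sketch captures is that no gradients, weights, or architecture details of the victim model are ever accessed; only its predictions on calibrated inputs are used as supervision for the surrogate.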