Abstract: In this paper, we study how the success rate of adversarial attacks on deep neural networks depends on the biomedical image type, the attack's control parameters, and the size of the image dataset. With this work, we aim to contribute to the accumulation of experimental results on adversarial attacks for the community working with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing a total of 605,080 chest X-ray images and 317,000 histology images of malignant tumors. We conclude that: (1) Increasing the perturbation amplitude used to generate malicious adversarial images raises the fraction of successful attacks for the majority of image types examined in this study. (2) Histology images tend to be less sensitive to increases in the amplitude of adversarial perturbations. (3) The percentage of successful attacks grows with the number of iterations of the algorithm generating adversarial perturbations, stabilizing asymptotically. (4) The success rate of attacks drops dramatically when the original confidence of the predicted image class exceeds 0.95. (5) The expected dependence of the percentage of successful attacks on the size of the training set was not confirmed.
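
For readers unfamiliar with the attack studied here, the following is a minimal sketch of a white-box L-infinity PGD attack in PyTorch. The parameter names `eps` (perturbation amplitude) and `n_iter` (number of iterations) correspond to the control parameters varied in the abstract, but the specific values, step size, and norm choice shown are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, n_iter=40):
    """Generate adversarial examples via L_inf Projected Gradient Descent.

    eps    -- perturbation amplitude (radius of the L_inf ball); illustrative value
    alpha  -- step size per iteration; illustrative value
    n_iter -- number of PGD iterations

    All hyperparameter values here are assumptions for demonstration only.
    """
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient ascent step on the loss,
        # then project back into the eps-ball around the original image.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in valid range
    return x_adv
```

An attack on a given image is counted as successful when the model's prediction on `pgd_attack(model, x, y)` differs from its prediction on the clean input `x`; sweeping `eps` and `n_iter` over a grid is one way to reproduce the kind of dependence studied in the paper.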