Differential privacy is the gold-standard framework for measuring and guaranteeing privacy in data analysis. It is well known that differential privacy reduces a model's accuracy; however, it remains unclear how it affects the model's security from the standpoint of robustness. In this paper, we empirically observe an interesting trade-off between differential privacy and the security of neural networks. Standard neural networks are vulnerable to input perturbations, whether adversarial attacks or common corruptions. We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts. To explore this, we extensively study several robustness measurements, including FGSM and PGD adversaries, the distance to linear decision boundaries, the curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private training, namely gradient clipping and noise addition, affect the robustness of the model, in some cases decreasing it and in others increasing it.
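As background for the robustness measurements mentioned above, FGSM perturbs an input by a small step in the direction of the sign of the loss gradient; PGD iterates this step with projection back onto the allowed perturbation set. The following is a minimal PyTorch sketch of FGSM, not the paper's evaluation code; the step size eps and the [0, 1] pixel range are illustrative assumptions.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Fast Gradient Sign Method: a one-step perturbation of size eps
    in the direction that increases the loss (eps is illustrative)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # assume inputs live in [0, 1]
    return x_adv.detach()
```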
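Likewise, the two training ingredients studied here, per-example gradient clipping and Gaussian noise addition, are the core of DP-SGD-style training. Below is a minimal, illustrative sketch of one such update step, assuming a PyTorch model and a per-example loop; the clipping norm and noise multiplier are placeholder hyperparameters, not values used in the paper.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD-style update: clip each per-example gradient to an L2
    norm of clip_norm, sum, add Gaussian noise scaled by
    noise_multiplier * clip_norm, then take an averaged gradient step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the full per-example gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale

    with torch.no_grad():
        for p, s in zip(params, summed):
            # Noise is calibrated to the clipping norm, as in DP-SGD.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(xs)
```

Clipping bounds each example's influence on the update, while the added noise masks any remaining individual contribution; the paper examines how each of these two mechanisms separately affects robustness.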