This paper presents a comprehensive evaluation of instance segmentation models under real-world image corruptions and on out-of-domain image collections, i.e., datasets collected with different set-ups than the training data the models learned from. The out-of-domain evaluation measures the generalization capability of models, an essential aspect of real-world applications and an extensively studied topic in domain adaptation. The presented robustness and generalization evaluations are important both when designing instance segmentation models for real-world applications and when selecting an off-the-shelf pretrained model to apply directly to the task at hand. Specifically, this benchmark study covers state-of-the-art network architectures, network backbones, normalization layers, models trained from scratch versus models initialized from ImageNet-pretrained networks, and the effect of multi-task training on robustness and generalization. Through this study we gain several insights: for example, we find that normalization layers play an essential role in robustness; that ImageNet pretraining improves neither the robustness nor the generalization of models, with the exception of JPEG corruption; and that network backbones and copy-paste augmentations affect robustness significantly.