One major drawback of deep neural networks (DNNs) for use in sensitive application domains is their black-box nature. This makes it hard to verify or monitor complex, symbolic requirements. In this work, we present a simple yet effective approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background knowledge. The knowledge may consist of any fuzzy predicate logic rules. For this, we utilize methods from explainable artificial intelligence (XAI): first, using concept embedding analysis, the output of a computer vision CNN is post-hoc enriched by concept outputs; second, logical rules from the prior knowledge are fuzzified to serve as continuous-valued functions on the concept outputs. These can be evaluated with little computational overhead. We demonstrate three diverse use cases of our method on state-of-the-art object detectors: finding corner cases, utilizing the rules for detecting and localizing DNN misbehavior at runtime, and comparing the logical consistency of DNNs. The latter is used to find related differences between the EfficientDet D1 and Mask R-CNN object detectors. We show that this approach benefits from fuzziness and from calibrating the concept outputs.
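To make the fuzzification step concrete, the following minimal Python sketch (not the authors' implementation) evaluates one assumed rule on hypothetical calibrated concept scores. The rule, the concept names, and the choice of Łukasiewicz operators are illustrative assumptions; the paper's concrete rules and fuzzy operators may differ.

```python
"""Illustrative sketch: evaluating a fuzzified logic rule on calibrated
concept outputs attached post-hoc to an object detector's predictions."""

# Hypothetical calibrated concept scores in [0, 1] for one detected object,
# e.g., produced by a post-hoc concept embedding analysis head.
concept_scores = {"person": 0.95, "head": 0.10, "arm": 0.30}

# Łukasiewicz fuzzy operators (one common choice of fuzzy logic).
def f_and(a: float, b: float) -> float:      # t-norm
    return max(0.0, a + b - 1.0)

def f_or(a: float, b: float) -> float:       # t-conorm
    return min(1.0, a + b)

def f_implies(a: float, b: float) -> float:  # residuated implication
    return min(1.0, 1.0 - a + b)

# Assumed example rule: "a detected person should show a head or an arm".
# Its fuzzy truth value is a continuous signal in [0, 1]; low values can
# flag potential misbehavior of the detector at runtime.
truth = f_implies(
    concept_scores["person"],
    f_or(concept_scores["head"], concept_scores["arm"]),
)
print(f"rule truth value: {truth:.2f}")  # -> 0.45 for the scores above
```

Because such rule evaluations reduce to a few arithmetic operations per detection, they add only a small computational overhead on top of the detector's forward pass.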