Recognizing, assessing, countering, and mitigating biases of various kinds arising from heterogeneous sources is a critical problem in designing a cognitive Decision Support System (DSS). An example of such a system is a cognitive biometric-enabled security checkpoint. Biased algorithms affect the decision-making process in unpredictable ways; for instance, differences in face recognition performance across demographic groups may severely impact risk assessment at a checkpoint. This paper addresses a challenging research question: how can an ensemble of biases be managed? We provide performance projections of the DSS operational landscape in terms of such biases. A probabilistic reasoning technique is used to assess the risk posed by these biases. We also report a motivational experiment using the face biometric component of the checkpoint system, which highlights the discovery of an ensemble of biases and techniques for assessing their risks.
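As a simple illustration of the kind of probabilistic reasoning involved, the sketch below applies Bayes' rule to show how demographic differences in a face matcher's error rates propagate into checkpoint-level risk estimates. All group labels, error rates, and the prior are hypothetical placeholders introduced here for illustration; they are not results or parameters from this paper.

```python
# Illustrative sketch only: a minimal Bayesian risk assessment for a biased
# face-matching component. All numbers below are hypothetical assumptions.

def posterior_impostor_given_match(prior_impostor, fmr, fnmr):
    """P(impostor | match decision) via Bayes' rule for one demographic group."""
    p_match_given_impostor = fmr              # impostor wrongly accepted (false match rate)
    p_match_given_genuine = 1.0 - fnmr        # genuine user correctly accepted
    p_match = (p_match_given_impostor * prior_impostor
               + p_match_given_genuine * (1.0 - prior_impostor))
    return p_match_given_impostor * prior_impostor / p_match

# Hypothetical per-group error rates showing how matcher bias shifts the
# posterior risk that an accepted traveller is in fact an impostor.
groups = {
    "group A": {"fmr": 0.001, "fnmr": 0.02},
    "group B": {"fmr": 0.010, "fnmr": 0.08},
}

prior = 0.001  # assumed prior probability that a traveller is an impostor
for name, rates in groups.items():
    risk = posterior_impostor_given_match(prior, rates["fmr"], rates["fnmr"])
    print(f"{name}: P(impostor | match) = {risk:.5f}")
```

Under these assumed rates, the posterior risk for group B is roughly an order of magnitude higher than for group A, which is the sense in which a biased recognition component distorts downstream risk assessment.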