In recent years, an abundance of feature attribution methods for explaining neural networks has been developed. Especially in the field of computer vision, many methods exist for generating saliency maps that provide pixel-level attributions. However, their explanations often contradict each other, and it is not clear which explanation to trust. A natural solution to this problem is the aggregation of multiple explanations. We present and compare different pixel-based aggregation schemes with the goal of generating a new explanation whose fidelity to the model's decision is higher than that of each individual explanation. Using methods from the field of Bayesian Optimization, we incorporate the variance between the individual explanations into the aggregation process. Additionally, we analyze the effect of multiple normalization techniques on ensemble aggregation.
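To illustrate the general idea of pixel-wise aggregation (not the exact schemes evaluated here), the following minimal NumPy sketch combines several saliency maps by their per-pixel mean and optionally penalizes pixels on which the explanations disagree, loosely analogous to a lower-confidence-bound acquisition in Bayesian Optimization. The function names, the min-max normalization, and the variance-penalty form are illustrative assumptions rather than the paper's method.

```python
import numpy as np


def minmax_normalize(s, eps=1e-12):
    """Scale one saliency map to [0, 1]; one of several possible normalizations."""
    return (s - s.min()) / (s.max() - s.min() + eps)


def aggregate(saliency_maps, variance_weight=0.0):
    """Pixel-wise aggregation of normalized saliency maps.

    saliency_maps: array of shape (n_explainers, H, W).
    variance_weight: if > 0, down-weight pixels where the individual
    explanations disagree (illustrative variance penalty).
    """
    normalized = np.stack([minmax_normalize(s) for s in saliency_maps])
    mean = normalized.mean(axis=0)   # simple pixel-wise mean aggregation
    std = normalized.std(axis=0)     # disagreement between explanations
    return mean - variance_weight * std


# Hypothetical usage with three saliency maps for one 224x224 image
maps = np.random.rand(3, 224, 224)
simple_mean = aggregate(maps)                          # plain mean ensemble
variance_aware = aggregate(maps, variance_weight=1.0)  # penalizes contested pixels
```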