One of the most common defense strategies against model poisoning in federated learning is to employ a robust aggregation mechanism that makes training more resilient. Many existing Byzantine-robust aggregators provide theoretical guarantees and are empirically effective against certain categories of attacks. However, we observe that certain high-strength attacks can subvert the aggregator and collapse training. In addition, most aggregators require carefully chosen settings to converge. The impact of attacks becomes more pronounced when Byzantine clients approach a majority, and attacks become harder to defend against when the attacker is omniscient, with access to the data, the honest updates, and the aggregation method. Motivated by these observations, we develop a robust aggregator called FedRISE for cross-silo FL that is consistent and less susceptible to poisoned updates from an omniscient attacker. The proposed method explicitly determines the optimal direction of each gradient through a sign-voting strategy that uses variance-reduced sparse gradients. We argue that vote weighting based on the cosine similarity of raw gradients is misleading, and we instead introduce a sign-based gradient valuation function that ignores the gradient magnitude. We compare our method against 8 robust aggregators under 6 poisoning attacks on 3 datasets and architectures. Our results show that existing robust aggregators collapse under at least some attacks in severe settings, while FedRISE demonstrates better robustness because of its stringent gradient inclusion formulation.
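To give a concrete flavor of the sign-voting aggregation described above, the sketch below implements a simplified variant in NumPy. It is not the FedRISE implementation: the function name, the top-k sparsification, the median-based inclusion rule, and the omission of variance reduction are all illustrative assumptions made for exposition.

```python
import numpy as np

def sign_vote_aggregate(gradients, sparsity=0.1):
    """Illustrative sign-voting aggregation (a simplified sketch, not FedRISE itself).

    gradients : list of 1-D numpy arrays, one flattened gradient per client.
    sparsity  : fraction of coordinates each client keeps (top-k by magnitude).
    """
    grads = np.stack(gradients)               # shape: (num_clients, dim)
    num_clients, dim = grads.shape
    k = max(1, int(sparsity * dim))

    # Sparsify: each client keeps only its top-k coordinates by magnitude.
    # (Variance reduction of the client gradients is omitted in this sketch.)
    sparse = np.zeros_like(grads)
    for i, g in enumerate(grads):
        idx = np.argsort(np.abs(g))[-k:]
        sparse[i, idx] = g[idx]

    signs = np.sign(sparse)                    # per-coordinate votes in {-1, 0, +1}

    # Elementwise majority vote fixes the update direction of each coordinate.
    consensus = np.sign(signs.sum(axis=0))

    # Value each client by how often its signs agree with the consensus,
    # ignoring gradient magnitude (in the spirit of the sign-based valuation).
    agreement = (signs == consensus[None, :]).mean(axis=1)

    # Stringent inclusion (illustrative rule): drop clients whose agreement
    # falls below the median agreement across clients.
    keep = agreement >= np.median(agreement)

    # Average magnitudes over the included clients and re-impose the voted signs.
    magnitude = np.abs(sparse[keep]).mean(axis=0)
    return consensus * magnitude
```

In this toy version, a poisoned update that flips signs or inflates magnitudes contributes little: its votes are outvoted coordinate-wise, its low agreement score excludes it from the magnitude average, and its scale never enters the vote at all, which is the intuition behind weighting by sign agreement rather than by the cosine similarity of raw gradients.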