In this paper, we take an information-theoretic approach to understanding robustness in wireless distributed learning. By measuring the discrepancy between loss functions, we derive an upper bound on the performance degradation caused by imperfect wireless channels. Moreover, we characterize the transmission rate required to guarantee task performance and quantify the channel capacity gain afforded by the inherent robustness of wireless distributed learning. We also develop an efficient algorithm for approximating the derived upper bound in practice. Numerical simulations illustrate the effectiveness of our results.