Two major sources of uncertainty, dataset bias and adversarial perturbation, persist in state-of-the-art AI systems built on deep neural networks. In this paper, we present an intuitive explanation for these issues, together with an interpretation of the performance of deep networks in a natural-image space. The explanation consists of two parts: the philosophy of neural networks and a hypothetical model of natural-image spaces. Following this explanation, we slightly improve the accuracy of a CIFAR-10 classifier by introducing an additional "random-noise" category during training. We hope this paper will stimulate discussion in the community regarding the topological and geometric properties of the natural-image spaces to which deep networks are applied.
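The "random-noise" category amounts to training an 11-class classifier in which the extra class absorbs inputs that lie off the natural-image manifold. Below is a minimal PyTorch sketch of that idea; the architecture (ResNet-18), the uniform noise distribution, the noise-to-data ratio, and the names `RandomNoiseDataset` and `NOISE_LABEL` are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, ConcatDataset
import torchvision
import torchvision.transforms as T

NOISE_LABEL = 10  # CIFAR-10 uses labels 0-9; noise becomes class 10.

class RandomNoiseDataset(Dataset):
    """Uniform-noise images labeled with the extra 'random-noise' class."""
    def __init__(self, n_samples):
        self.n_samples = n_samples

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        # Uniform noise in [0, 1] with the shape of a CIFAR-10 image (3x32x32).
        return torch.rand(3, 32, 32), NOISE_LABEL

cifar = torchvision.datasets.CIFAR10(root="./data", train=True,
                                     download=True, transform=T.ToTensor())
# Mix in noise samples (~10% of the training set here; the ratio is a guess).
train_set = ConcatDataset([cifar, RandomNoiseDataset(5000)])
loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Any CIFAR-10 architecture works; the only required change is 11 outputs.
model = torchvision.models.resnet18(num_classes=11)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model.train()
for images, labels in loader:  # one illustrative epoch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
```

At test time, the noise logit can simply be ignored when classifying natural images, so the standard 10-class accuracy remains directly comparable.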