We show how brain networks, modeled as Spiking Neural Networks, can be viewed at different levels of abstraction. Lower levels include complications such as failures of neurons and edges; higher levels are more abstract, making simplifying assumptions that avoid these complications. We establish precise relationships between executions of networks at different levels, which lets us understand the behavior of lower-level networks in terms of the behavior of higher-level networks. We state our results in terms of two abstract networks, A1 and A2, one used to express firing guarantees and the other non-firing guarantees, and one detailed network D. The abstract networks contain reliable neurons and edges, whereas the detailed network has neurons and edges that may fail, subject to certain constraints; here we consider only initial stopping failures. To define these networks, we begin with the abstract network A1 and modify it systematically to obtain the other two. To obtain A2, we simply lower the firing thresholds of the neurons. To obtain D, we introduce failures of neurons and edges, and incorporate redundancy in the neurons and edges to compensate for those failures. We also define corresponding inputs and corresponding executions for these networks. We prove two main theorems, one relating corresponding executions of A1 and D and the other relating corresponding executions of A2 and D; together, these give both firing and non-firing guarantees for the detailed network D. We also give a third theorem, relating the effects of D on an external, reliable actuator neuron to the effects of the abstract networks on the same actuator neuron.
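For intuition only (the formal definitions appear later, and the notation here is merely suggestive), the kind of discrete-time threshold firing rule we have in mind can be sketched as follows: a non-failed neuron $u$ with firing threshold $\mathrm{thresh}(u)$ fires at time $t+1$ exactly when the total incoming weight from its in-neighbors that fire at time $t$ reaches that threshold,
\[
  u \text{ fires at time } t+1
  \;\iff\;
  \sum_{(v,u)\in E} w(v,u)\,\mathrm{firing}(v,t) \;\ge\; \mathrm{thresh}(u),
  \qquad \mathrm{firing}(v,t)\in\{0,1\}.
\]
In terms of this sketch, the A2 variant would simply use smaller values of $\mathrm{thresh}(u)$, while the D variant would fix $\mathrm{firing}(v,t)=0$ for all $t$ for each neuron $v$ that suffers an initial stopping failure and would drop each failed edge from $E$, with extra (redundant) neurons and edges added to compensate.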