This is a theoretical paper, a companion to the plenary talk at the same conference, ISAIC 2022. The problem addressed is the widespread so-called "Deep Learning" method, namely training neural networks using error backpropagation. The objective is to reason scientifically that so-called "Deep Learning" contains fatal misconduct. In contrast to the main topic of the plenary talk, conscious learning (Weng, 2022b; Weng, 2022c), which develops a single network for a lifetime (many tasks), "Deep Learning" trains multiple networks for each task. Although these projects may use different learning modes, including supervised, reinforcement, and adversarial modes, almost all "Deep Learning" projects apparently suffer from the same misconduct, here called "data deletion" and "testing on training data". This paper reasons that "Deep Learning" was not tested on a disjoint test data set at all. Why? Because the so-called "test data set" was used in the Post-Selection step of the training stage. This paper establishes a theorem that a simple method called Pure-Guess Nearest Neighbor (PGNN) reaches any required error on the validation data set and the test data set, including a zero-error requirement, through the same "Deep Learning" misconduct, as long as the test data set is in the possession of the author and both the amount of storage space and the training time are finite but unbounded. The misconduct of "Deep Learning" methods, clarified by the PGNN method, violates the well-known protocols of transparency and cross-validation. The misconduct is fatal because, in the absence of any disjoint test, "Deep Learning" is clearly not generalizable.
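The PGNN argument can be made concrete with a short sketch. The following Python code is an illustrative assumption, not the paper's implementation; the names PGNN, store, predict, and post_select_on_test are hypothetical. It shows how a pure-guess nearest-neighbor lookup, combined with Post-Selection on a test set that the author possesses, drives the reported "test" error to zero in finite but unbounded time, even though nothing generalizable has been learned.

```python
# Hypothetical sketch of Pure-Guess Nearest Neighbor (PGNN) with Post-Selection.
# It illustrates the misconduct described above: the test set is used to pick
# the best guess set, so the reported test error says nothing about generalization.
import numpy as np


class PGNN:
    """Nearest-neighbor lookup over whatever exemplars have been stored."""

    def store(self, X, y):
        self.X = np.asarray(X, dtype=float)   # stored inputs
        self.y = np.asarray(y)                # stored (possibly guessed) labels

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Label of the nearest stored exemplar (Euclidean distance).
        d = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
        return self.y[d.argmin(axis=1)]


def post_select_on_test(X_test, y_test, n_classes, max_trials=1_000_000, seed=0):
    """Guess labels for the test inputs, memorize them, and keep the guess set
    whose error on that same test set is lowest (the Post-Selection step).
    With unbounded trials, the reported test error reaches zero."""
    rng = np.random.default_rng(seed)
    best_err, best_model = np.inf, None
    for _ in range(max_trials):
        guessed = rng.integers(0, n_classes, size=len(y_test))   # pure guess
        model = PGNN()
        model.store(X_test, guessed)                              # memorize the test inputs
        err = (model.predict(X_test) != np.asarray(y_test)).mean()  # "test" on the same data
        if err < best_err:
            best_err, best_model = err, model                     # Post-Selection
        if best_err == 0.0:
            break
    return best_model, best_err
```

The sketch requires only finite but unbounded storage (the memorized test inputs) and training time (the number of guessing trials), matching the conditions of the theorem stated above.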