In a neural network (NN), \emph{weight matrices} linearly transform inputs into \emph{preactivations}, which are then transformed nonlinearly into \emph{activations}. A typical NN interleaves multitudes of such linear and nonlinear transforms to express complex functions. Thus, the (pre-)activations depend on the weights in an intricate manner. We show that, surprisingly, the (pre-)activations of a randomly initialized NN become \emph{independent} of the weights as the NN's widths tend to infinity, in the sense of \emph{asymptotic freeness} in random matrix theory. We call this the \emph{Free Independence Principle (FIP)}, which has the following consequences: 1) It rigorously justifies the calculation of the asymptotic Jacobian singular value distribution of an NN by Pennington et al. [36,37], essential for training ultra-deep NNs [48]. 2) It gives a new justification of the \emph{gradient independence assumption} used for calculating the \emph{Neural Tangent Kernel} of an NN. FIP and these results hold for any neural architecture. We show FIP by proving a Master Theorem for any Tensor Program, as introduced in Yang [50,51], generalizing the Master Theorems proved in those works. As warmup demonstrations of this new Master Theorem, we give new proofs of the semicircle and Marchenko--Pastur laws, thereby benchmarking our framework against these fundamental mathematical results.
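For reference, the two benchmark laws mentioned above can be stated as follows; the normalization chosen here (entry variance $\sigma^2$, aspect ratio $\lambda$) is one standard convention and is not fixed by the abstract itself. The semicircle law gives the limiting eigenvalue density of an $n \times n$ symmetric matrix with i.i.d.\ entries of variance $\sigma^2/n$, and the Marchenko--Pastur law gives the limiting eigenvalue density of $\frac{1}{n} X X^\top$ for an $m \times n$ matrix $X$ with i.i.d.\ entries of variance $\sigma^2$ and $m/n \to \lambda \in (0,1]$:
\[
\rho_{\mathrm{sc}}(x) = \frac{1}{2\pi\sigma^2}\sqrt{4\sigma^2 - x^2}\,\mathbf{1}_{\{|x| \le 2\sigma\}},
\qquad
\rho_{\mathrm{MP}}(x) = \frac{\sqrt{(\lambda_+ - x)(x - \lambda_-)}}{2\pi\sigma^2 \lambda x}\,\mathbf{1}_{\{\lambda_- \le x \le \lambda_+\}},
\quad
\lambda_\pm = \sigma^2\bigl(1 \pm \sqrt{\lambda}\bigr)^2.
\]
(For $\lambda > 1$, the Marchenko--Pastur distribution acquires an additional atom of mass $1 - 1/\lambda$ at $0$.)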