In this article we present a geometric framework for analyzing the convergence of gradient descent trajectories in the context of neural networks. In the case of linear networks with an arbitrary number of hidden layers, we characterize appropriate quantities that are conserved along the gradient descent system (GDS). We use them to prove that every trajectory of the GDS is bounded, which implies convergence to a critical point. We further focus on the local behavior in the neighborhood of each critical point and study the associated basins of attraction so as to measure the "possibility" of converging to saddle points and local minima.
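As a rough illustration of the kind of conserved quantity at play, the sketch below numerically tracks the balancedness differences W_{j+1}^T W_{j+1} - W_j W_j^T, which are known to be exactly conserved under the gradient flow of the squared loss for deep linear networks. This is an assumption for illustration only: the article's own conserved quantities may be defined differently, and all layer widths, the step size, and the target matrix here are hypothetical. Discrete gradient descent with a small step size only approximately tracks the flow, so the quantities drift by O(eta^2) per step rather than staying exactly constant.

```python
import jax
import jax.numpy as jnp

def loss(params, target):
    # Squared loss of the end-to-end product W_N ... W_1 of a deep linear network.
    prod = params[0]
    for W in params[1:]:
        prod = W @ prod
    return 0.5 * jnp.sum((prod - target) ** 2)

def invariants(params):
    # Balancedness differences W_{j+1}^T W_{j+1} - W_j W_j^T, one per pair of
    # consecutive layers; exactly conserved along the continuous gradient flow.
    return [params[j + 1].T @ params[j + 1] - params[j] @ params[j].T
            for j in range(len(params) - 1)]

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, 4)
dims = [3, 4, 5, 3]  # hypothetical layer widths: input, two hidden, output
params = [0.1 * jax.random.normal(keys[j], (dims[j + 1], dims[j]))
          for j in range(3)]
target = jax.random.normal(keys[3], (dims[-1], dims[0]))

grad_fn = jax.grad(loss)
eta = 1e-3  # small step so gradient descent closely follows the gradient flow
init_inv = invariants(params)
for _ in range(5000):
    grads = grad_fn(params, target)
    params = [W - eta * g for W, g in zip(params, grads)]

# Drift of each conserved quantity after training; small (O(eta) over a
# fixed time horizon), vanishing in the continuous-time limit.
drift = [float(jnp.max(jnp.abs(a - b)))
         for a, b in zip(invariants(params), init_inv)]
print("max drift of each conserved quantity:", drift)
```

Because these differences are fixed by the initialization, they confine each trajectory to a level set, which is the mechanism by which conserved quantities of this type can yield boundedness of trajectories.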