Computational intractability has for decades motivated the development of a wide range of methodologies aimed primarily at trading solution quality for running time. Machine Learning techniques have recently emerged as one possible tool for obtaining approximate solutions to ${\cal NP}$-hard combinatorial optimization problems. In a recent article, Dai et al. introduced a method for computing such approximate solutions for instances of the Vertex Cover problem. In this paper we study the effectiveness of choosing a proper training strategy based on special problem instances called ``obstructions'', which we believe capture some intrinsic properties of the problem itself. Building on the recent work of Dai et al. on the Vertex Cover problem, and using the same case study as well as 19 other problem instances, we demonstrate the utility of training neural networks on obstructions. Our experiments show that training with obstructions substantially reduces the number of iterations needed for convergence, and hence the time needed to train the model.