The main aim of this paper is the development of Lyapunov function based sufficient conditions for stability (almost sure boundedness) and convergence of stochastic approximation algorithms (SAAs) with set-valued mean-fields, a class of model-free algorithms that has become important in recent times. We provide a complete analysis of such algorithms under three different, yet related, sets of sufficient conditions, based on the existence of an associated global/local Lyapunov function. Unlike previous Lyapunov function based approaches, we provide a simple recipe for explicitly constructing the Lyapunov function needed for the analysis. Our work builds on the work of Abounadi, Bertsekas and Borkar (2002), Munos (2005), and Ramaswamy and Bhatnagar (2016). An important motivation for the flavor of our assumptions comes from the need to understand approximate dynamic programming and reinforcement learning algorithms that use deep neural networks (DNNs) for function approximation and parameterization. These algorithms are popularly known as deep reinforcement learning algorithms. As an important application of our theory, we provide a complete analysis of the stochastic approximation counterpart of approximate value iteration (AVI), an important dynamic programming method designed to tackle Bellman's curse of dimensionality. Although motivated by the need to understand deep reinforcement learning algorithms, our theory is more generally applicable. It is further used to develop the first SAA for finding fixed points of contractive set-valued maps, together with a comprehensive analysis of the same.
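For concreteness, a minimal sketch of the class of recursions in question, assuming the standard stochastic recursive inclusion form of Ramaswamy and Bhatnagar (2016) (the notation below is illustrative, not fixed by the text):
\[
  x_{n+1} = x_n + a(n)\left( y_n + M_{n+1} \right),
  \qquad y_n \in H(x_n),
\]
where $H$ is the set-valued mean-field, $\{a(n)\}$ is a step-size sequence, and $\{M_{n+1}\}$ is a martingale difference noise sequence.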