This paper is dedicated to Prof. Eduardo Sontag on the occasion of his seventieth birthday.

In this paper, we build upon the ideas first proposed in Gladyshev (1965) to develop a very general framework for proving the almost sure boundedness and convergence of stochastic approximation (SA) algorithms. These ideas are based on martingale methods and lead to proofs that are in some ways simpler than convergence proofs based on the ODE method, e.g., Borkar-Meyn (2000). First, we study the original version of the SA algorithm introduced in Robbins-Monro (1951), where the objective is to determine a zero of a function when only noisy measurements of the function are available. The proof makes use of the general framework developed here, together with a new theorem on converse Lyapunov stability, which may be of independent interest. Next, we study an alternative version of SA, first introduced in Kiefer-Wolfowitz (1952), where the objective is to find a stationary point of a scalar-valued function, using first-order differences to approximate its gradient. This problem is analyzed in Blum (1954), but with a rather opaque proof. We recover Blum's conclusions using the proposed framework.
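For orientation, the two recursions referred to above can be written schematically as follows. The symbols $\theta_t$, $\alpha_t$, $c_t$, $f$, $J$, and the noise terms $\xi_{t+1}$ are not defined in this abstract and are used here only as standard illustrative notation; sign and noise conventions may differ from those adopted in the body of the paper. The Robbins-Monro scheme seeks a zero of $f$ from noisy evaluations, while the Kiefer-Wolfowitz scheme seeks a stationary point of $J$ via two-sided first-order differences:
\[
\theta_{t+1} = \theta_t + \alpha_t \bigl[ f(\theta_t) + \xi_{t+1} \bigr]
\qquad \text{(Robbins-Monro)},
\]
\[
\theta_{t+1} = \theta_t - \alpha_t \,
\frac{\bigl[ J(\theta_t + c_t) + \xi_{t+1}^{+} \bigr] - \bigl[ J(\theta_t - c_t) + \xi_{t+1}^{-} \bigr]}{2 c_t}
\qquad \text{(Kiefer-Wolfowitz)},
\]
where, among the standard conditions, the step sizes are typically assumed to satisfy $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$, and, for the Kiefer-Wolfowitz scheme, $c_t \to 0$ with $\sum_t (\alpha_t / c_t)^2 < \infty$.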