A Bayesian framework for 3D human pose estimation from monocular images based on sparse representation (SR) is introduced. Our probabilistic approach aims to simultaneously learn two overcomplete dictionaries (one for the visual input space and the other for the pose space) with a shared sparse representation. Existing SR-based pose estimation approaches provide only point estimates of the dictionary and the sparse codes, and may therefore be unreliable when the number of training examples is small. Our Bayesian framework instead estimates a posterior distribution over the sparse codes and the dictionaries from labeled training data, making it robust to overfitting on small training sets. Experimental results on various human activities show that the proposed method outperforms state-of-the-art pose estimation algorithms.
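To make the coupled-dictionary idea concrete, the sketch below illustrates two dictionaries tied by a shared sparse code: a joint dictionary is learned over concatenated feature/pose vectors, and at test time the code inferred from the visual part alone is used to synthesize the pose. This is a minimal point-estimate illustration using scikit-learn, not the paper's Bayesian formulation; the dimensions, regularization weight, and random data are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Hypothetical sizes: 60-D image descriptors, 45-D pose vectors, 500 training pairs.
n_train, d_feat, d_pose, n_atoms = 500, 60, 45, 128

# Placeholder data standing in for extracted image features and mocap poses.
X = rng.standard_normal((n_train, d_feat))   # visual input space
Y = rng.standard_normal((n_train, d_pose))   # pose space

# Couple the two spaces by learning ONE dictionary over concatenated [feature | pose]
# vectors, so each atom has a visual part and a pose part sharing the same sparse code.
joint = np.hstack([X, Y])
dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=20,
                        transform_algorithm="lasso_lars", random_state=0)
dl.fit(joint)
D_feat = dl.components_[:, :d_feat]   # dictionary for the visual input space
D_pose = dl.components_[:, d_feat:]   # dictionary for the pose space

# At test time only the image features are observed: sparse-code them against the
# visual dictionary, then reconstruct the pose with the pose dictionary.
x_test = rng.standard_normal((1, d_feat))
code = sparse_encode(x_test, D_feat, algorithm="lasso_lars", alpha=1.0)
y_pred = code @ D_pose
print("predicted pose vector shape:", y_pred.shape)  # (1, 45)
```

In the proposed framework the dictionaries and codes would carry posterior distributions rather than the single point estimates computed here.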