Human activity modeling operates on two levels: high-level action modeling, such as classification, prediction, detection and anticipation, and low-level motion trajectory prediction and synthesis. In this work, we propose a semi-supervised generative latent variable model that addresses both of these levels by jointly modeling continuous observations and semantic labels. We extend the model to capture dependencies between different entities, such as a human and objects, and to represent hierarchical label structure, such as high-level actions and their sub-activities. In our experiments, we investigate the model's ability to classify, predict, detect and anticipate semantic action and affordance labels, and to predict and generate human motion. We train our models on data extracted from depth image streams from the Cornell Activity 120, the UTKinect-Action3D and the Stony Brook University Kinect Interaction datasets. We observe that our model performs well on all of these tasks and often outperforms task-specific models.
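As an illustrative reference only (the exact objective used in this work may differ), a semi-supervised generative latent variable model of this kind is typically trained with a variational lower bound that treats the semantic label as observed for annotated sequences and marginalizes over it otherwise. For a continuous observation $x$, label $y$ and latent code $z$, with generative model $p_\theta$ and approximate posterior and classifier $q_\phi$, a standard formulation of the labeled-data bound is
\[
\log p_\theta(x, y) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\bigl[\log p_\theta(x \mid y, z) + \log p_\theta(y) + \log p(z) - \log q_\phi(z \mid x, y)\bigr] \;=\; -\mathcal{L}(x, y),
\]
while for unlabeled data the label is marginalized out under the classifier,
\[
\log p_\theta(x) \;\ge\; \sum_{y} q_\phi(y \mid x)\,\bigl(-\mathcal{L}(x, y)\bigr) + \mathcal{H}\bigl(q_\phi(y \mid x)\bigr) \;=\; -\mathcal{U}(x),
\]
usually together with an additional weighted classification loss $-\alpha\,\mathbb{E}\bigl[\log q_\phi(y \mid x)\bigr]$ on the labeled pairs so that the classifier is also trained directly. All symbols here ($x$, $y$, $z$, $\theta$, $\phi$, $\alpha$) are generic and introduced only for this sketch; they are not taken from the model described above.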