Recognizing the actions and intentions of pedestrians in urban settings is a challenging problem for Advanced Driver Assistance Systems as well as for future autonomous vehicles that must maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the actions and intentions of pedestrians in urban traffic. We focus on motion and head orientation to predict whether or not a pedestrian is about to cross the street. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted behind the windshield of a car. Our experiments achieve an accuracy of 72% for head orientation estimation and 85% for motion detection.