Action recognition has received increasing attention from the computer vision and machine learning communities in the last decade. To enable the study of this problem, a vast number of action datasets exist, recorded in controlled laboratory settings or real-world surveillance environments, or crawled from the Internet. Apart from the "in-the-wild" datasets, the training and test splits of conventional datasets often share similar environmental conditions, which leads to near-perfect performance on constrained datasets. In this paper, we introduce a new dataset, namely the Multi-Camera Action Dataset (MCAD), which is designed to evaluate the open-view classification problem in surveillance environments. In total, MCAD contains 14,298 action samples from 18 action categories, performed by 20 subjects and independently recorded with 5 cameras. Inspired by the well-received evaluation approach on the LFW dataset, we designed a standard evaluation protocol and benchmarked MCAD under several scenarios. The benchmark shows that while an average accuracy of 85% is achieved under the closed-view scenario, performance drops significantly under the cross-view scenario. In the worst case, the performance of 10-fold cross validation drops from 87.0% to 47.4%.
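To make the distinction between the two scenarios concrete, the sketch below shows one way closed-view and cross-view splits could be formed from per-sample metadata (subject, camera, action). The field names, toy metadata, and split logic are illustrative assumptions only, not the released MCAD evaluation protocol.

    # Illustrative sketch (not the official MCAD protocol): forming closed-view
    # and cross-view splits from hypothetical per-sample metadata.
    from collections import namedtuple
    import random

    Sample = namedtuple("Sample", ["subject", "camera", "action"])

    # Toy metadata: 20 subjects, 5 cameras, 18 action categories (one clip each).
    samples = [Sample(s, c, a)
               for s in range(20) for c in range(5) for a in range(18)]

    def closed_view_split(samples, test_ratio=0.2, seed=0):
        """Closed view: train and test samples come from the same cameras."""
        rng = random.Random(seed)
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_ratio))
        return shuffled[:cut], shuffled[cut:]

    def cross_view_split(samples, test_camera):
        """Cross view: train on all cameras except one, test on the held-out camera."""
        train = [s for s in samples if s.camera != test_camera]
        test = [s for s in samples if s.camera == test_camera]
        return train, test

    train_cv, test_cv = cross_view_split(samples, test_camera=4)
    print(len(train_cv), len(test_cv))  # 1440 vs 360 with this toy metadata

In a closed-view split the test camera's viewpoint is also seen during training, whereas the cross-view split withholds an entire camera, which is what exposes the performance gap reported above.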