Surveillance video anomaly detection searches for anomalous events, such as crimes or accidents, among normal scenes. Since anomalous events occur rarely, there is a severe class imbalance between normal and abnormal data, and it is impossible to collect all potential anomalous events, which makes the task challenging. Anomaly detection therefore requires learning the patterns of normal scenes in order to detect unseen and undefined anomalies. Since abnormal scenes are distinguished from normal ones by appearance or motion, many previous approaches have relied on explicit pre-trained models, such as optical flow estimators, to obtain motion information, which makes the network complex and dependent on pre-training. We propose an implicit two-path AutoEncoder (ITAE) that adopts the structure of a SlowFast network and focuses on spatial and temporal information through an appearance (slow) encoder and a motion (fast) encoder, respectively. The two encoders and a single decoder learn normal appearance and behavior by reconstructing the normal videos of the training set. Furthermore, using the features from the two encoders, we propose density estimation with flow-based generative models to learn tractable likelihoods of appearance and motion features. Finally, we demonstrate the effectiveness of the appearance and motion encoders and their distribution modeling through experiments on three benchmark datasets, where ITAE outperforms state-of-the-art methods.
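To make the two-path design concrete, the following is a minimal PyTorch sketch of an appearance (slow) encoder, a motion (fast) encoder, and a single shared decoder trained by reconstruction on normal clips. The layer sizes, temporal stride, and fusion scheme (temporal upsampling plus channel concatenation) are illustrative assumptions, not the exact ITAE architecture; names such as `ITAESketch` are hypothetical.

```python
# Illustrative two-path autoencoder in the spirit of ITAE (not the authors' exact model).
import torch
import torch.nn as nn


class ITAESketch(nn.Module):
    def __init__(self, in_ch=3, slow_stride=4, slow_ch=64, fast_ch=8):
        super().__init__()
        self.slow_stride = slow_stride  # temporal subsampling for the appearance path
        # Appearance (slow) encoder: few frames, wide channels -> spatial/appearance detail
        self.slow_enc = nn.Sequential(
            nn.Conv3d(in_ch, slow_ch, (1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(slow_ch, slow_ch, (1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
        )
        # Motion (fast) encoder: all frames, narrow channels -> temporal/motion detail
        self.fast_enc = nn.Sequential(
            nn.Conv3d(in_ch, fast_ch, (3, 3, 3), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(fast_ch, fast_ch, (3, 3, 3), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
        )
        # Single decoder reconstructs the full-frame-rate clip from the fused features
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(slow_ch + fast_ch, 32, (1, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, in_ch, (1, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)),
        )

    def forward(self, clip):  # clip: (B, C, T, H, W)
        slow_feat = self.slow_enc(clip[:, :, ::self.slow_stride])  # appearance features
        fast_feat = self.fast_enc(clip)                            # motion features
        # Upsample slow features in time so the two paths can be concatenated
        slow_up = slow_feat.repeat_interleave(self.slow_stride, dim=2)
        recon = self.decoder(torch.cat([slow_up, fast_feat], dim=1))
        return recon, slow_feat, fast_feat  # features would feed the density model


clip = torch.randn(2, 3, 16, 64, 64)            # batch of normal training clips
model = ITAESketch()
recon, slow_feat, fast_feat = model(clip)
loss = nn.functional.mse_loss(recon, clip)      # reconstruction objective on normal data
```

In the full method, the appearance and motion features returned here would additionally be modeled by flow-based generative models, so that at test time a clip can be scored by both its reconstruction error and the tractable likelihoods of its features, with low likelihood indicating an anomaly.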