The enormous and ever-increasing complexity of state-of-the-art neural networks (NNs) has impeded the deployment of deep learning on resource-limited devices such as those that make up the Internet of Things (IoT). Stochastic computing exploits the inherent tolerance of NNs to approximation in order to reduce their energy and area footprint, two critical requirements of the small embedded devices suited to the IoT. This report evaluates and compares two recently proposed stochastic-computing NN designs: BISC (Binary Interfaced Stochastic Computing) by Sim and Lee, 2017, and ESL (Extended Stochastic Logic) by Canals et al., 2016. Using analysis and simulation, we compare three distinct implementations of these designs in terms of performance, power consumption, area, and accuracy. We also discuss the broader challenges of adopting stochastic computing for building NNs. We find that BISC outperforms the other architectures when executing the LeNet-5 NN model on the MNIST digit-recognition dataset: our analysis and simulation experiments indicate that this architecture is roughly 50X faster, occupies 5.7X and 2.9X less area, and consumes 7.8X and 1.8X less power than the two ESL architectures, respectively.
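To make the premise concrete, the following minimal Python sketch (a generic illustration, not taken from either design under study) shows why stochastic computing is so area- and power-efficient: values in [0, 1] are encoded as random bitstreams whose fraction of 1s equals the value, so multiplication reduces to a single AND gate per bit, at the cost of approximation error that shrinks with stream length.

```python
# Minimal sketch of unipolar stochastic-computing multiplication.
# A value p in [0, 1] is represented as a bitstream where each bit is
# 1 with probability p; ANDing two independent streams yields a stream
# whose 1-density is the product of the two input values.
import random

def encode(value: float, length: int) -> list[int]:
    """Encode a value in [0, 1] as a random bitstream of given length."""
    return [1 if random.random() < value else 0 for _ in range(length)]

def decode(stream: list[int]) -> float:
    """Decode a bitstream back to a value: the fraction of 1 bits."""
    return sum(stream) / len(stream)

def sc_multiply(a: float, b: float, length: int = 4096) -> float:
    """Multiply two values via bitwise AND of independent bitstreams."""
    sa, sb = encode(a, length), encode(b, length)
    return decode([x & y for x, y in zip(sa, sb)])

print(sc_multiply(0.8, 0.5))  # ~0.4; accuracy improves with stream length
```

This trade of logic complexity for stream length is exactly what the architectures compared in this report exploit, and what drives the performance, area, power, and accuracy differences we measure.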