Reservoir computing networks (RCNs) have been successfully employed as a tool for learning and complex decision-making tasks. Despite their efficiency and low training cost, practical applications of RCNs rely heavily on empirical design. In this paper, we develop an algorithm to design RCNs using the realization theory of linear dynamical systems. In particular, we introduce the notion of $\alpha$-stable realization and provide an efficient approach to prune the size of a linear RCN without degrading the training accuracy. Furthermore, we derive a necessary and sufficient condition for the irreducibility of the number of hidden nodes in linear RCNs based on the controllability and observability matrices. Leveraging the linear RCN design, we provide a tractable procedure to realize RCNs with nonlinear activation functions. Finally, we present numerical experiments on forecasting time-delay systems and chaotic systems to validate the proposed RCN design methods and demonstrate their efficacy.
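For context, the controllability and observability conditions referred to above can be illustrated with the classical Kalman minimality test from linear realization theory; the notation below is ours and is only a sketch of the standard result, not necessarily the formulation used in the paper. For a linear state-space model $x_{k+1} = A x_k + B u_k$, $y_k = C x_k$ with $A \in \mathbb{R}^{n \times n}$, define
\[
\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix},
\qquad
\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}.
\]
The realization $(A, B, C)$ is minimal, i.e., the state dimension $n$ cannot be reduced without changing the input-output map, if and only if $\operatorname{rank}(\mathcal{C}) = \operatorname{rank}(\mathcal{O}) = n$.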