Abstract: In this paper, we use a machine learning approach to predict the stationary distribution of the number of customers in a single-station, multi-server system. We consider two systems: the first has $c$ homogeneous servers, namely the $GI/GI/c$ queue; the second has two heterogeneous servers, namely the $GI/GI_i/2$ queue. We train a neural network for these queueing models using the first four inter-arrival and service time moments, and we demonstrate empirically that using the fifth moment and beyond does not increase accuracy. Compared to existing methods, we show that our approach is state-of-the-art in predicting both the stationary distribution and the mean number of customers in a $GI/GI/c$ queue. Further, ours is the only method that predicts the stationary distribution of the number of customers in the system for a $GI/GI_i/2$ queue. We conduct a thorough performance evaluation to verify that our model is accurate; in most cases, we demonstrate that our error is less than 5\%. Finally, we show that inference is very fast: 5000 predictions can be made in parallel within a fraction of a second.
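A minimal sketch of the moment-based setup this abstract describes, assuming a plain feed-forward network in PyTorch; the layer sizes, truncation point `MAX_Q`, and class name `MomentNet` are illustrative assumptions, not the authors' architecture. The network maps the first four inter-arrival and service time moments (8 features) to a probability vector over a truncated queue-length support, and batched inference covers thousands of systems in one forward pass:

```python
# Hypothetical sketch, not the authors' architecture: a feed-forward
# network mapping the first four inter-arrival and service time moments
# (8 features) to a stationary queue-length distribution over {0, ..., MAX_Q}.
import torch
import torch.nn as nn

MAX_Q = 500  # truncation point of the queue-length support (assumption)

class MomentNet(nn.Module):
    def __init__(self, n_moments: int = 8, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_moments, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, MAX_Q + 1),
        )

    def forward(self, moments: torch.Tensor) -> torch.Tensor:
        # Softmax ensures the output is a valid probability vector.
        return torch.softmax(self.net(moments), dim=-1)

model = MomentNet()
# Batched inference: 5000 systems predicted in a single forward pass.
batch = torch.rand(5000, 8)  # placeholder moment vectors
with torch.no_grad():
    dists = model(batch)     # shape (5000, MAX_Q + 1)
```

Because the whole batch is one matrix multiply pipeline, the "5000 inferences in a fraction of a second" claim comes down to a single GPU (or even CPU) forward pass.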
Abstract: In this paper, we apply a supervised machine-learning approach to solve a fundamental problem in queueing theory: estimating the transient distribution of the number in the system for a $G(t)/GI/1$ queue. We develop a neural network mechanism that provides a fast and accurate predictor of these distributions for moderate horizon lengths and practical settings. It uses a Recurrent Neural Network (RNN) architecture fed with the first several moments of the time-dependent inter-arrival distribution and the stationary service time distribution; we call it the Moment-Based Recurrent Neural Network (MBRNN) method. Our empirical study suggests that the MBRNN requires only the first four inter-arrival and service time moments. We use simulation to generate a substantial training dataset and present a thorough performance evaluation examining the accuracy of our method on two different test sets. We show that even under the configuration with the worst performance, the error in the mean number of customers over the entire timeline is less than 3%. While simulation modeling can achieve high accuracy, the advantage of the MBRNN over simulation is runtime: the MBRNN analyzes hundreds of systems within a fraction of a second. This paper focuses on the $G(t)/GI/1$ queue; however, the MBRNN approach demonstrated here can be extended to other queueing systems, since the training data labeling is based on simulation (which can be applied to more complex systems) and the training is based on deep learning, which can capture very complex time-sequence tasks. In summary, the MBRNN can potentially revolutionize our ability to perform transient analyses of queueing systems.
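A minimal sketch of the recurrent variant, assuming a GRU in PyTorch; the hidden size, truncation point, and class name `MBRNNSketch` are assumptions, not the paper's exact MBRNN. Each time step receives the four moments of the current inter-arrival distribution concatenated with the four stationary service-time moments, and the output at every step is a queue-length distribution, so one forward pass yields the whole transient trajectory:

```python
# Hypothetical sketch, not the paper's exact MBRNN: a GRU over per-period
# moment vectors, emitting a transient queue-length distribution per step.
import torch
import torch.nn as nn

MAX_Q = 200  # truncated queue-length support (assumption)

class MBRNNSketch(nn.Module):
    def __init__(self, n_in: int = 8, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_in, hidden, batch_first=True)
        self.head = nn.Linear(hidden, MAX_Q + 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, horizon, 8) -- 4 time-dependent inter-arrival moments
        # plus 4 stationary service-time moments per period.
        h, _ = self.rnn(x)                        # (batch, horizon, hidden)
        return torch.softmax(self.head(h), dim=-1)  # per-step distributions

model = MBRNNSketch()
moments = torch.rand(100, 50, 8)  # 100 systems, horizon of 50 periods
with torch.no_grad():
    transient = model(moments)    # shape (100, 50, MAX_Q + 1)
```

The recurrence is what lets the predictor carry the state of the system across periods as the arrival process changes, which is the essential difference from the stationary, feed-forward setting.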
Abstract: In this paper, we analyze how well a machine can solve a general problem in queueing theory. To answer this question, we use a deep learning model to predict the stationary queue-length distribution of an $M/G/1$ queue (Poisson arrivals, general service times, one server). To the best of our knowledge, this is the first time a machine learning model has been applied to a general queueing theory problem. We chose the $M/G/1$ queue for this paper because it lies "on the cusp" of the analytical frontier: on the one hand, an exact solution for this model is available, though it is both computationally and mathematically complex; on the other hand, the problem (specifically, the service time distribution) is general. This allows us to compare the accuracy and efficiency of the deep learning approach to the analytical solution. The two key challenges in applying machine learning to this problem are (1) generating a diverse set of training examples that provide a good representation of a "generic" positive-valued distribution, and (2) representing the continuous service time distribution as an input. We show how we overcome these challenges. Our results show that our model is indeed able to predict the stationary behavior of the $M/G/1$ queue extremely accurately: the average value of our error metric over the entire test set is $0.0009$. Moreover, our machine learning model is very efficient, computing accurate stationary distributions in a fraction of a second (an approach based on simulation modeling would take much longer to converge). We also present a case study that mimics a real-life setting and shows that our approach is more robust and provides more accurate solutions than existing methods. This shows the promise of extending our approach beyond analytically solvable systems (e.g., $G/G/1$ or $G/G/c$).
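A minimal sketch of the kind of exact-but-computationally-involved baseline this abstract alludes to: the $M/G/1$ stationary queue-length distribution via the embedded Markov chain recursion, with the arrivals-per-service probabilities estimated by Monte Carlo over a service-time sample. The lognormal service distribution, arrival rate, sample size, and truncation point are all illustrative assumptions, not the paper's setup:

```python
# Hypothetical baseline sketch: exact M/G/1 queue-length distribution via
# the embedded Markov chain at departure epochs (by PASTA, this equals the
# time-stationary distribution). All numeric choices are assumptions.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

lam = 0.8                                    # Poisson arrival rate (assumption)
services = rng.lognormal(mean=-0.5, sigma=0.5, size=100_000)
services /= services.mean()                  # normalize E[S] = 1, so rho = lam
K = 200                                      # truncation of the support

# a_k = P(k arrivals during one service) = E[exp(-lam*S) (lam*S)^k / k!],
# estimated by Monte Carlo over the service-time sample.
a = np.array([poisson.pmf(k, lam * services).mean() for k in range(K + 1)])

# Embedded-chain recursion:
#   pi_{n+1} = (pi_n - pi_0*a_n - sum_{k=1}^{n} pi_k*a_{n+1-k}) / a_0.
# Note the repeated subtractions make this numerically delicate, which is
# the "computationally complex" aspect the abstract mentions.
pi = np.zeros(K + 1)
pi[0] = 1.0 - lam                            # P(empty system) = 1 - rho
for n in range(K):
    tail = sum(pi[k] * a[n + 1 - k] for k in range(1, n + 1))
    pi[n + 1] = (pi[n] - pi[0] * a[n] - tail) / a[0]

print(pi[:10], pi.sum())                     # should be close to a proper pmf
```

A learned model is trained to reproduce distributions like `pi` directly from a representation of the service-time distribution, trading this per-instance recursion for a single fast forward pass.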