Abstract: Current techniques and systems for distributed model training mostly assume that clusters are composed of homogeneous servers with constant resource availability. However, cluster heterogeneity is pervasive in computing infrastructure and is a fundamental characteristic of low-cost transient resources (such as EC2 spot instances). In this paper, we develop a dynamic batching technique for distributed data-parallel training that adjusts the mini-batch size on each worker based on its resource availability and throughput. Our mini-batch controller seeks to equalize iteration times across all workers, and facilitates training on clusters composed of servers with different amounts of CPU and GPU resources. This variable mini-batch technique uses proportional control and ideas from PID controllers to find stable mini-batch sizes. Our empirical evaluation shows that dynamic batching can reduce model training times by more than 4x on heterogeneous clusters.
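To make the controller idea concrete, the following is a minimal Python sketch of proportional control over per-worker mini-batch sizes. The function name, the gain value, and the throughput-proportional target allocation are illustrative assumptions, not the paper's actual controller.

# Minimal sketch of proportional control for per-worker mini-batch sizes.
# All names and constants here are hypothetical, for illustration only.

def adjust_batch_sizes(batch_sizes, iter_times, global_batch, gain=0.5):
    """Nudge each worker's mini-batch size toward equal iteration times.

    batch_sizes  : current per-worker mini-batch sizes
    iter_times   : measured per-worker iteration times (seconds)
    global_batch : total mini-batch size to preserve across workers
    gain         : proportional gain (the 'P' term of a PID controller)
    """
    # Per-worker throughput: samples processed per second.
    throughputs = [b / t for b, t in zip(batch_sizes, iter_times)]
    total = sum(throughputs)

    # Allocating the global batch in proportion to throughput
    # equalizes iteration times across workers.
    targets = [global_batch * tp / total for tp in throughputs]

    # Proportional step: move only part of the way toward the target,
    # which damps oscillations when resource availability fluctuates.
    new_sizes = [max(1, round(b + gain * (t - b)))
                 for b, t in zip(batch_sizes, targets)]
    return new_sizes


if __name__ == "__main__":
    # Example: worker 1 has twice the throughput of worker 0.
    sizes = [128, 128]
    times = [2.0, 1.0]          # seconds per iteration
    print(adjust_batch_sizes(sizes, times, global_batch=256))
    # -> [107, 149]: the faster worker receives a larger share.

Applying this adjustment once per iteration (or every few iterations) lets the per-worker batch sizes converge toward an allocation where all workers finish at roughly the same time, while the gain term damps oscillations when resource availability changes.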
Abstract: While the pay-as-you-go nature of cloud virtual machines (VMs) makes it easy to spin up large clusters for training ML models, it can also lead to ballooning costs. The hundreds of virtual machine sizes provided by cloud platforms also make it extremely challenging to select the ``right'' cloud cluster configuration for training. Furthermore, the training time and cost of distributed model training are highly sensitive to the cluster configuration, and present a large and complex tradeoff space. In this paper, we develop principled and practical techniques for optimizing the training time and cost of distributed ML model training on the cloud. Our key insight is that both parallel and statistical efficiency must be considered when selecting optimum job configuration parameters such as the number of workers and the batch size. By combining conventional parallel scaling concepts and new insights into SGD noise, our models accurately estimate the time and cost on different cluster configurations with < 5% error. Using the repetitive nature of training and our models, we can search for optimum cloud configurations in a black-box, online manner. Our approach reduces training times by 2x and costs by more than 50%. Compared to an oracle-based approach, our performance models are accurate to within 2%, and the search imposes an overhead of just 10%.
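The sketch below illustrates how a time/cost model might combine parallel efficiency (per-iteration time with communication overhead) with statistical efficiency (iterations needed as a function of batch size, in the spirit of gradient noise arguments). The functional forms, constants, and function names are assumptions for illustration; they are not the paper's fitted models.

# Illustrative time/cost model for a cloud training job; the functional
# forms and constants are hypothetical, not the paper's fitted model.

def iterations_to_accuracy(batch_size, n0=50_000, b_noise=256):
    """Statistical efficiency: larger batches reduce SGD noise, so fewer
    iterations are needed, but with diminishing returns."""
    return n0 * (b_noise / batch_size + 1)

def time_per_iteration(batch_size, workers, t_compute=1e-3, t_comm=0.05):
    """Parallel efficiency: per-iteration time = compute time for each
    worker's share of the batch + a fixed communication overhead."""
    return t_compute * batch_size / workers + t_comm

def training_time_and_cost(batch_size, workers, price_per_hour=0.9):
    iters = iterations_to_accuracy(batch_size)
    seconds = iters * time_per_iteration(batch_size, workers)
    cost = seconds / 3600.0 * price_per_hour * workers
    return seconds, cost

if __name__ == "__main__":
    # Sweep a few configurations to expose the time/cost tradeoff.
    for workers in (4, 8, 16):
        for batch in (256, 1024, 4096):
            t, c = training_time_and_cost(batch, workers)
            print(f"workers={workers:2d} batch={batch:4d} "
                  f"time={t/3600:5.2f}h cost=${c:6.2f}")

Sweeping candidate configurations through such a model is what makes a black-box, online search over cloud cluster configurations tractable: only a handful of configurations need to be profiled to fit the model's parameters, and the rest can be estimated.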
Abstract: Scientific computing applications have benefited greatly from high-performance computing infrastructure such as supercomputers. However, we are seeing a paradigm shift in the computational structure, design, and requirements of these applications. Increasingly, data-driven and machine learning approaches are being used to support, speed up, and enhance scientific computing applications, especially molecular dynamics simulations. Concurrently, cloud computing platforms are increasingly appealing for scientific computing, providing "infinite" computing power, easier programming and deployment models, and access to computing accelerators such as TPUs (Tensor Processing Units). This confluence of machine learning (ML) and cloud computing represents an exciting opportunity for cloud and systems researchers. ML-assisted molecular dynamics simulations are a new class of workload and exhibit unique computational patterns. These simulations present new challenges for low-cost and high-performance execution. We argue that transient cloud resources, such as low-cost preemptible cloud VMs, can be a viable platform for this new workload. Finally, we present some low-hanging fruit and long-term challenges in cloud resource management, and in the integration of molecular dynamics simulations into ML platforms (such as TensorFlow).