Abstract: Robust ranking and selection (R&S) is an important and challenging variation of conventional R&S, which seeks to select the best alternative from a finite set of alternatives. It captures the input uncertainty common in simulation models by using an ambiguity set that includes multiple possible input distributions, and it shifts the goal to selecting the alternative with the smallest worst-case mean performance over the ambiguity set. In this paper, we aim to develop new fixed-budget robust R&S procedures that minimize the probability of incorrect selection (PICS) under a limited sampling budget. Inspired by an additive upper bound of the PICS, we derive a new asymptotically optimal solution to the budget allocation problem. Accordingly, we design a new sequential optimal computing budget allocation (OCBA) procedure to solve robust R&S problems efficiently. We then conduct a comprehensive numerical study to verify the superiority of our robust OCBA procedure over existing ones. The numerical study also provides insights into the budget allocation behaviors that lead to the enhanced efficiency.
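To make the setting concrete, the following is a minimal sketch of a sequential budget-allocation loop for robust R&S with a finite ambiguity set per alternative. The simulator, the problem sizes, and the allocation rule used here (favoring noisy, under-sampled alternative-scenario pairs) are illustrative assumptions only; they are not the asymptotically optimal allocation derived in the paper from the additive PICS upper bound.

```python
# Sketch of a sequential robust R&S loop: each alternative i has a finite
# ambiguity set of scenarios j; we select the alternative whose worst-case
# (largest) estimated mean is smallest. The allocation rule is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def simulate(i, j):
    """Hypothetical simulator: one noisy observation of alternative i
    under input distribution j of its ambiguity set."""
    true_means = np.array([[1.0, 1.4], [1.2, 1.3], [0.9, 1.8]])
    return true_means[i, j] + rng.normal(scale=0.5)

k, s = 3, 2                      # number of alternatives, scenarios per ambiguity set
n0, total_budget, step = 10, 600, 10

counts = np.zeros((k, s), dtype=int)
sums = np.zeros((k, s))
sq_sums = np.zeros((k, s))

def observe(i, j):
    x = simulate(i, j)
    counts[i, j] += 1
    sums[i, j] += x
    sq_sums[i, j] += x ** 2

# Stage 1: initial sampling of every (alternative, scenario) pair.
for i in range(k):
    for j in range(s):
        for _ in range(n0):
            observe(i, j)

# Stage 2: sequential allocation until the sampling budget is exhausted.
while counts.sum() < total_budget:
    means = sums / counts
    variances = np.maximum(sq_sums / counts - means ** 2, 1e-12)
    # Placeholder allocation: spend the next `step` observations on the
    # pair with the largest variance-to-sample-size ratio.
    scores = variances / counts
    for _ in range(step):
        i, j = np.unravel_index(scores.argmax(), scores.shape)
        observe(i, j)
        scores[i, j] = variances[i, j] / counts[i, j]

means = sums / counts
best = means.max(axis=1).argmin()   # alternative with smallest worst-case mean
print("selected alternative:", best)
```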
Abstract: Ranking and selection (R&S) conventionally aims to select the unique best alternative, namely the one with the largest mean performance, from a finite set of alternatives. However, to better support decision making, it may be more informative to deliver a small menu of alternatives whose mean performances are among the top $m$. Such a problem, called optimal subset selection (OSS), is generally more challenging than conventional R&S, and the challenge becomes even more significant when the number of alternatives is considerably large. The focus of this paper is therefore on addressing the large-scale OSS problem. To this end, we design a top-$m$ greedy selection mechanism that keeps sampling the current top $m$ alternatives, i.e., those with the top $m$ running sample means, and propose the explore-first top-$m$ greedy (EFG-$m$) procedure. Through an extended boundary-crossing framework, we prove that the EFG-$m$ procedure is both sample optimal and consistent in terms of the probability of good selection, confirming its effectiveness in solving large-scale OSS problems. Surprisingly, we also demonstrate that the EFG-$m$ procedure is able to achieve an indifference-based ranking within the selected subset of alternatives at no extra cost. This is highly beneficial, as it delivers deeper insights to decision makers and enables more informed decision making. Lastly, numerical experiments validate our results and demonstrate the efficiency of our procedures.
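Below is a minimal sketch of the explore-first top-$m$ greedy mechanism described above: an exploration phase that samples every alternative a fixed number of times, followed by a greedy phase that keeps sampling the alternatives with the current top-$m$ running sample means. The simulator, the exploration budget `n0`, and the choice of taking one additional observation per round from each current top-$m$ alternative are illustrative assumptions; the paper's exact parameterization and guarantees are not reproduced here.

```python
# Sketch of an explore-first top-m greedy procedure (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)

def simulate(i, true_means, noise=1.0):
    """Hypothetical simulator: one noisy observation of alternative i."""
    return true_means[i] + rng.normal(scale=noise)

def efg_m(true_means, m, n0, total_budget):
    k = len(true_means)
    counts = np.zeros(k, dtype=int)
    sums = np.zeros(k)

    # Exploration phase: sample every alternative n0 times.
    for i in range(k):
        for _ in range(n0):
            sums[i] += simulate(i, true_means)
            counts[i] += 1

    # Greedy phase: keep sampling the current top-m alternatives,
    # i.e., those with the top-m running sample means.
    while counts.sum() + m <= total_budget:
        running_means = sums / counts
        top_m = np.argpartition(running_means, -m)[-m:]
        for i in top_m:
            sums[i] += simulate(i, true_means)
            counts[i] += 1

    running_means = sums / counts
    selected = np.argpartition(running_means, -m)[-m:]
    # The running sample means also order the selected subset at no extra cost.
    return selected[np.argsort(running_means[selected])[::-1]]

# Example: 100 alternatives, select and rank the top m = 5.
means = rng.normal(size=100)
print(efg_m(means, m=5, n0=20, total_budget=5000))
```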