Abstract: Real-world optimisation problems typically have objective functions that cannot be expressed analytically and must instead be evaluated through expensive physical experiments or simulations. Cheap approximations of the objective function can reduce the computational cost of solving these expensive optimisation problems. Such approximations, known as surrogate models, may be machine learning or statistical models. This paper introduces a simulation of a well-known batch processing problem from the literature. Evolutionary algorithms such as the Genetic Algorithm (GA) and Differential Evolution (DE) are used to find the optimal schedule for the simulation. We then compare the quality of solutions obtained by surrogate-assisted versions of these algorithms against the baseline algorithms, where surrogate assistance is provided through the Probabilistic Surrogate-Assisted Framework (PSAF). The results highlight the potential for improving baseline evolutionary algorithms through surrogates. For different time horizons, the solutions are evaluated with respect to several quality indicators. The PSAF-assisted GA (PSAF-GA) and PSAF-assisted DE (PSAF-DE) improve solutions in some time horizons; in others, they either maintain the baseline solutions or deteriorate slightly. The results also highlight the need to tune the hyper-parameters used by the surrogate-assisted framework, as the surrogate, in some instances, performs worse than the baseline algorithm.
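To make the idea of surrogate assistance concrete, the sketch below shows a differential evolution loop in which a cheap surrogate pre-screens trial vectors so that only promising candidates reach the expensive objective. This is an illustrative toy, not the paper's PSAF implementation: the nearest-neighbour surrogate, the sphere stand-in objective, and the screening threshold are all hypothetical choices.

```python
# Hypothetical sketch of surrogate-assisted DE (NOT the paper's PSAF):
# a 1-nearest-neighbour surrogate over previously evaluated points
# pre-screens trial vectors before the expensive evaluation.
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Stand-in for an expensive simulation: the sphere function.
    return float(np.sum(x ** 2))

def surrogate_predict(x, archive_X, archive_y):
    # Cheap approximation: value of the nearest archived point.
    d = np.linalg.norm(archive_X - x, axis=1)
    return archive_y[np.argmin(d)]

def surrogate_assisted_de(dim=5, pop_size=20, gens=50, F=0.5, CR=0.9):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([expensive_objective(x) for x in pop])
    archive_X, archive_y = pop.copy(), fit.copy()
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation and binomial crossover.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            # Pre-screen: skip the expensive call when the surrogate
            # predicts the trial is clearly worse than its parent.
            if surrogate_predict(trial, archive_X, archive_y) > 1.5 * fit[i]:
                continue
            y = expensive_objective(trial)
            archive_X = np.vstack([archive_X, trial])
            archive_y = np.append(archive_y, y)
            if y < fit[i]:
                pop[i], fit[i] = trial, y
    return pop[np.argmin(fit)], float(np.min(fit))

best_x, best_y = surrogate_assisted_de()
print(best_y)
```

The screening threshold (here 1.5× the parent's fitness) trades expensive evaluations against the risk of discarding good candidates, which mirrors the tuning sensitivity noted in the abstract.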
Abstract: Sports data is becoming more readily available, and consequently there has been an increase in sports analysis, prediction, and ranking in the literature. Each sport has its own stochastic character, which makes analysis and accurate prediction valuable to those involved in it. In response, we use Siamese Neural Networks (SNNs) in unison with LightGBM and XGBoost models to predict the importance of matches and to rank teams in rugby and basketball. Six models were developed and compared: LightGBM, XGBoost, LightGBM (Contrastive Loss), LightGBM (Triplet Loss), XGBoost (Contrastive Loss), and XGBoost (Triplet Loss). The models that use a triplet loss function perform better than those using contrastive loss. LightGBM (Triplet Loss) is clearly the most effective model for ranking the NBA, producing state-of-the-art (SOTA) mAP (0.867) and NDCG (0.98). The SNN (Triplet Loss) most effectively predicted the Super 15 rugby competition, yielding SOTA mAP (0.921), NDCG (0.983), and $r_s$ (0.793). Triplet loss produces the best overall results, demonstrating the value of learned representations/embeddings for the prediction and ranking of sports. Overall, there is no single consistently best-performing model across the two sports, indicating that other ranking models should be considered in the future.
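The triplet loss that distinguishes the stronger models above can be illustrated with a minimal numeric sketch: an anchor embedding should sit closer to a "positive" (similar) example than to a "negative" (dissimilar) one by at least a margin. The embeddings, margin, and team interpretation below are hypothetical; this is not the paper's SNN, only the standard triplet-loss formula.

```python
# Hypothetical sketch of the standard triplet loss on team embeddings
# (not the paper's SNN): loss = max(d(a, p) - d(a, n) + margin, 0).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive)  # anchor-to-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-to-negative distance
    return max(d_pos - d_neg + margin, 0.0)

anchor   = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])  # similar team: small distance
negative = np.array([3.0, 0.0])  # dissimilar team: large distance

loss = triplet_loss(anchor, positive, negative)
print(loss)  # 0.1 - 3.0 + 1.0 < 0, so the loss is 0.0
```

Training with this loss pushes similar teams together and dissimilar teams apart in embedding space; the learned embeddings can then be fed to downstream rankers such as LightGBM or XGBoost, as the abstract describes.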