Abstract: Ad-hoc team cooperation is the problem of cooperating with other players that have not been encountered during learning. Recently, this problem has been studied in the context of Hanabi, a game that requires cooperation without explicit communication between players. While self-play strategies trained via reinforcement learning (RL) have shown success, such agents often fail to cooperate with unseen agents once learning is completed. In this paper, we categorize the outcomes of ad-hoc team cooperation into Failure, Success, and Synergy, and analyze the causes of failure. First, we confirm that agents trained via RL each converge to a single strategy, but not necessarily to the same one: agents can adopt different strategies even when trained with identical hyperparameters. Second, we confirm that the larger the behavioral difference between agents, the more pronounced the failure of ad-hoc team cooperation. Hierarchical clustering separates such agents into distinctly different groups, and the Pearson correlation between behavioral difference and ad-hoc team performance is -0.978. Our results improve the understanding of the key factors behind successful ad-hoc team cooperation in multi-player games.
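
As a rough illustration of the analysis pipeline the abstract describes (not the authors' code), the sketch below clusters agents hierarchically by a pairwise behavioral-difference matrix and computes the Pearson correlation between those differences and ad-hoc team scores. The matrices `behavior_diff` and `team_score`, the number of agents, and the clustering threshold are all hypothetical placeholders.

```python
# Minimal sketch of the abstract's analysis: hierarchical clustering of
# agents by behavioral difference, then Pearson correlation between
# behavioral difference and ad-hoc team performance. All data is made up.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import pearsonr

# Hypothetical pairwise behavioral-difference matrix for 4 RL agents
# (e.g., a distance between their action distributions): symmetric,
# zero diagonal.
behavior_diff = np.array([
    [0.0, 0.1, 0.8, 0.9],
    [0.1, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.2],
    [0.9, 0.8, 0.2, 0.0],
])

# Hypothetical ad-hoc team scores for the same agent pairs
# (diagonal = self-play scores).
team_score = np.array([
    [25.0, 24.0, 12.0, 10.0],
    [24.0, 25.0, 13.0, 11.0],
    [12.0, 13.0, 24.0, 23.0],
    [10.0, 11.0, 23.0, 24.0],
])

# Hierarchical clustering over the condensed distance matrix groups
# agents whose strategies are behaviorally similar.
Z = linkage(squareform(behavior_diff), method="average")
clusters = fcluster(Z, t=0.5, criterion="distance")
print("cluster assignment per agent:", clusters)

# Pearson correlation between behavioral difference and ad-hoc team
# performance, computed over distinct agent pairs (upper triangle).
iu = np.triu_indices_from(behavior_diff, k=1)
r, p = pearsonr(behavior_diff[iu], team_score[iu])
print(f"correlation r = {r:.3f} (p = {p:.3g})")  # strongly negative here
```

With placeholder data of this shape, the correlation comes out strongly negative, mirroring the qualitative finding (r = -0.978 in the paper) that larger behavioral differences accompany worse ad-hoc team performance.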