Abstract: The standard bias and variance (B&V) terminology was originally defined for the regression setting, and its extension to classification has led to several different models and definitions in the literature. In this paper, we aim to provide the link between the commonly used frameworks of Tumer & Ghosh (T&G) and James. By unifying the two approaches, we relate the B&V defined for the 0/1 loss to the standard B&V of the boundary distributions given for the squared error loss. The closed-form relationships provide a deeper understanding of classification performance, and their use is demonstrated in two case studies.