This paper focuses on identifying different algorithm-based biases in robot behaviour and their consequences in mixed human-robot groups. We propose to develop computational models that detect episodes of microaggression, discrimination, and social exclusion, informed by a) observing the coping behaviours humans use to regain social inclusion and b) using system-inherent information that reveals unequal treatment of human interactants. Based on this information, we can begin to develop regulatory mechanisms that promote fairness and social inclusion in HRI.
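The paper does not specify a concrete detection model, but signal (b) can be illustrated with a minimal sketch: a toy monitor over a hypothetical log of which participant the robot addresses on each turn, flagging anyone whose share of attention falls well below an equal split. The function name, log format, and threshold are assumptions for illustration only.

```python
from collections import Counter

def detect_exclusion(addressee_log, threshold=0.5):
    """Flag participants who receive disproportionately little robot attention.

    addressee_log: list of participant ids, one entry per robot turn
                   (hypothetical system-inherent log format).
    threshold: fraction of the equal share below which a participant
               is flagged as potentially excluded.
    """
    counts = Counter(addressee_log)
    participants = sorted(counts)
    equal_share = 1.0 / len(participants)
    total = len(addressee_log)
    flagged = []
    for p in participants:
        share = counts[p] / total
        if share < threshold * equal_share:
            flagged.append((p, share))
    return flagged

# Example: the robot addresses "anna" far less often than the others,
# so she falls below half of the equal share (1/3) and is flagged.
log = ["ben", "cam", "ben", "cam", "ben", "anna", "cam", "ben", "cam", "ben"]
print(detect_exclusion(log))  # → [('anna', 0.1)]
```

A real system would of course combine such distributional statistics with the observed human coping behaviours of signal (a); this sketch only shows how unequal treatment can surface directly from interaction logs.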