The maturation of cognition, from introspection to understanding others, has long been a hallmark of human development. This position paper posits that for AI systems to truly emulate or approach human-like interactions, especially within multifaceted environments populated with diverse agents, they must first achieve an in-depth and nuanced understanding of self. Drawing parallels with the human developmental trajectory from self-awareness to mentalizing (also called theory of mind), the paper argues that the quality of an autonomous agent's introspective capabilities is crucial to achieving a comparably rich, human-like understanding of other agents. While counterarguments emphasize practicality, computational efficiency, and ethical concerns, this position proposes a developmental approach that blends algorithmic considerations of self-referential processing into agent design. Ultimately, the vision set forth is not merely of machines that compute but of entities that introspect, empathize, and understand, harmonizing with the complex composition of human cognition.