Abstract: Contrary to Google Search's mission of delivering information from "many angles so you can form your own understanding of the world," we find that Google and its most prominently returned results -- Wikipedia and YouTube -- simply reflect the narrow set of cultural stereotypes tied to the search language for complex topics like "Buddhism," "Liberalism," "colonization," "Iran," and "America." Simply stated, they present, to varying degrees, distinct information for the same search conducted in different languages (we call this 'language bias'). Instead of presenting a global picture of a complex topic, our online searches turn us into the proverbial blind person touching only a small portion of the elephant, unaware that other cultural perspectives exist. The language we use to search ends up acting as a cultural filter that promotes ethnocentric views, in which a person evaluates other people or ideas through the lens of their own culture. We also find that language bias is deeply embedded in ChatGPT. Because it is trained primarily on English-language data, it presents the Anglo-American perspective as the normative view, reducing the complexity of a multifaceted issue to the single Anglo-American standard. In this paper, we present evidence and analysis of language bias and discuss its larger social implications. Toward the end of the paper, we propose a potential framework that uses automatic translation to leverage language bias, and we argue that the task of piecing together a genuine depiction of the elephant is a challenging and important endeavor that deserves a new area of research in NLP and requires collaboration with scholars from the humanities to create ethically sound and socially responsible technology together.