Abstract: The emergence of the metaverse, envisioned as a hyperreal virtual universe facilitating boundless human interaction, stands to revolutionize our conception of media, with significant impacts on addiction, creativity, relationships, and social polarization. This paper aims to dissect the addictive potential of the metaverse arising from its immersive and interactive features, scrutinize the effects of its recommender systems on creativity and social polarization, and explore potential consequences of the metaverse's development. We employed a literature review methodology, drawing parallels from research on new media platforms and examining the historical progression of reality-mimicking features in media to understand this transformative digital frontier. The findings suggest that these immersive and interactive features could exacerbate media addiction, and that the metaverse's recommender systems, while aiding personalization and user engagement, might contribute to social polarization and reduce the diversity of creative output. However, our conclusions rest primarily on theoretical propositions from studies of existing media platforms and lack empirical support specific to the metaverse. This paper therefore identifies a critical gap requiring further research: empirical studies of metaverse use and addiction, and exploration of the privacy, security, and ethical implications associated with this burgeoning digital universe. As the development of the metaverse accelerates, it is incumbent on scholars, technologists, and policymakers to navigate its multilayered impacts thoughtfully and to balance innovation with societal well-being.
Abstract: As Large Language Models (LLMs) become increasingly integrated into everyday life, their capacity to understand and emulate human cognition is under steady examination. This study investigates the ability of LLMs to comprehend and interpret linguistic pragmatics, the aspect of communication concerned with context and implied meaning. Using Grice's communication principles, LLMs and human subjects (N=76) were evaluated on their responses to various dialogue-based tasks. The findings revealed the superior performance and speed of LLMs, particularly GPT4, over human subjects in interpreting pragmatics. GPT4 also demonstrated accuracy in the pre-testing of human-written samples, indicating its potential for text analysis. In a comparative analysis of LLMs against individual and average human scores, the models exhibited marked chronological improvement. Ranked from lowest to highest score, GPT2 placed 78th, GPT3 23rd, Bard 10th, GPT3.5 5th, the Best Human 2nd, and GPT4 first. These findings highlight the remarkable progress made in the development and performance of these LLMs. Future studies should consider diverse subjects, multiple languages, and other cognitive aspects to fully comprehend the capabilities of LLMs. This research holds significant implications for the development and application of AI-based models in communication-centered sectors.
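The ranking reported above is a straightforward sort of participants (models plus human subjects) by task score. The following Python sketch is purely illustrative: the study reports only the final ordering, so the numeric scores below are hypothetical placeholders, and the helper name rank_participants is ours, not part of the study's materials.

```python
# Illustrative sketch only: scores are hypothetical placeholders chosen to
# reproduce the reported ordering (GPT4 > Best Human > GPT3.5 > Bard > GPT3 > GPT2).

def rank_participants(scores):
    """Return (rank, name, score) tuples, highest score first."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, name, score) for rank, (name, score) in enumerate(ordered, start=1)]

if __name__ == "__main__":
    hypothetical_scores = {
        "GPT4": 0.95,
        "Best Human": 0.90,
        "GPT3.5": 0.82,
        "Bard": 0.75,
        "GPT3": 0.60,
        "GPT2": 0.30,
    }
    for rank, name, score in rank_participants(hypothetical_scores):
        print(f"{rank:2d}. {name:<10} {score:.2f}")
```

In the study itself, the 76 human subjects would also appear in this list, which is how GPT2 can land in 78th place while GPT4 takes the top spot.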