Abstract: This study explored how Vision-Language Models (VLMs) process ignorance implicatures when both visual and linguistic cues are available. In particular, we focused on the effects of context (precise vs. approximate contexts) and modifier type (bare numerals, superlative modifiers, and comparative modifiers), treated as pragmatic and semantic factors, respectively. Methodologically, we conducted a truth-value judgment task in visually grounded settings using GPT-4o and Gemini 1.5 Pro. The results indicate that while both models exhibited sensitivity to the linguistic cue (modifier type), they failed to process ignorance implicatures on the basis of the visual cue (context) as humans do. Specifically, the influence of context was weak and inconsistent across models, pointing to challenges in pragmatic reasoning for VLMs. In contrast, superlative modifiers were more strongly associated with ignorance implicatures than comparative modifiers, supporting the semantic view. These findings highlight the need for further advances in VLMs so that they can process language-vision information in a context-dependent way and achieve human-like pragmatic inference.
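As a rough illustration of the kind of visually grounded truth-value judgment task described in this abstract, the sketch below queries GPT-4o with an image stimulus and a modified-numeral sentence and asks for a binary judgment. It is a minimal sketch assuming the OpenAI chat completions API; the prompt wording, image URL, stimulus sentence, and the `judge_truth_value` helper are all hypothetical, since the abstract does not specify the study's actual materials or instructions.

```python
# Minimal sketch of one visually grounded truth-value judgment trial,
# assuming the OpenAI chat completions API for GPT-4o. The prompt text,
# image URL, and helper name are illustrative, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def judge_truth_value(image_url: str, sentence: str) -> str:
    """Ask the model whether `sentence` is true of the pictured scene."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Look at the image. Is the following sentence "
                          "true or false? Answer \"True\" or \"False\" only.\n"
                          f"Sentence: {sentence}")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        temperature=0,  # deterministic-leaning decoding for judgment tasks
    )
    return response.choices[0].message.content.strip()


# Hypothetical trial: an image showing exactly three apples, paired with
# a superlative-modifier sentence that carries an ignorance implicature.
print(judge_truth_value("https://example.com/three_apples.png",
                        "There are at least three apples."))
```

A comparable call against Gemini 1.5 Pro would follow the same trial structure, varying only the client library.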
Abstract: This study investigates how Large Language Models (LLMs), particularly BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), engage in pragmatic inference of scalar implicatures such as the one triggered by some. Two sets of experiments were conducted using cosine similarity and next sentence/token prediction as experimental methods. The results of Experiment 1 showed that both models interpret some with the pragmatic implicature not all in the absence of context, aligning with human language processing. In Experiment 2, in which a Question Under Discussion (QUD) was presented as a contextual cue, BERT showed consistent performance regardless of QUD type, whereas GPT-2 encountered processing difficulties when a certain type of QUD required pragmatic inference to derive the implicature. The findings reveal that, in terms of theoretical approaches, BERT inherently incorporates the pragmatic implicature not all within the term some, adhering to the Default model (Levinson, 2000). In contrast, GPT-2 seems to encounter processing difficulties in inferring the pragmatic implicature in context, consistent with the Context-driven model (Sperber and Wilson, 2002).
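As a rough sketch of the two probing methods this abstract names, the snippet below (1) computes cosine similarity between mean-pooled BERT embeddings of a some sentence and its candidate readings, and (2) scores candidate continuations with GPT-2 next-token log-probabilities via Hugging Face transformers. The example sentences, the mean-pooling choice, and the helper names are assumptions for illustration; the abstract does not specify the actual stimuli or pooling, and the study's next sentence prediction probe (BERT's NSP head) would follow an analogous scoring pattern.

```python
# Minimal sketch of the two probing methods, assuming Hugging Face
# transformers: (1) cosine similarity over BERT embeddings and
# (2) GPT-2 next-token scoring. Sentences are illustrative stimuli.
import torch
from transformers import (BertModel, BertTokenizer,
                          GPT2LMHeadModel, GPT2Tokenizer)

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()


def bert_embed(sentence: str) -> torch.Tensor:
    """Mean-pooled BERT hidden states as a simple sentence embedding."""
    inputs = bert_tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)


target = bert_embed("Some of the students passed the exam.")
pragmatic = bert_embed("Not all of the students passed the exam.")
literal = bert_embed("All of the students passed the exam.")
cos = torch.nn.functional.cosine_similarity
print("some ~ not all:", cos(target, pragmatic, dim=0).item())
print("some ~ all:    ", cos(target, literal, dim=0).item())

gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def continuation_logprob(prefix: str, continuation: str) -> float:
    """Summed log-probability GPT-2 assigns to `continuation` after `prefix`."""
    prefix_ids = gpt2_tok(prefix, return_tensors="pt").input_ids
    cont_ids = gpt2_tok(continuation).input_ids
    ids = torch.cat([prefix_ids, torch.tensor([cont_ids])], dim=1)
    with torch.no_grad():
        logprobs = gpt2(ids).logits.log_softmax(dim=-1)
    # The logits at position j predict the token at position j + 1,
    # so continuation token i is scored at position prefix_len + i - 1.
    return sum(logprobs[0, prefix_ids.shape[1] + i - 1, tok].item()
               for i, tok in enumerate(cont_ids))


prefix = "Some of the students passed the exam, in fact"
print(continuation_logprob(prefix, " not all of them did."))
print(continuation_logprob(prefix, " all of them did."))
```

Under the Default view, a higher similarity (or continuation score) for the not all reading without any context would indicate the implicature is encoded in some itself; the QUD manipulation then tests whether context shifts these scores.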