It is easy to find claims that researchers have made considerable progress in artificial intelligence over the last several decades. However, our everyday interactions with cognitive systems quickly move from intriguing to frustrating. The root of those frustrations lies in a mismatch between the expectations set by our inherent, folk-psychological theories and the real limitations we encounter in existing computer programs. To address the discordance, we find ourselves building mental models of how each unique tool works: how we address Apple's Siri may differ from how we address Amazon's Alexa, and the prompts that create striking images in Midjourney may produce unsatisfactory renderings in OpenAI's DALL-E. Emphasizing intentionality in research on cognitive systems offers a way to reduce these discrepancies, bringing system behavior closer to folk psychology. This paper scrutinizes the propositional attitude of intention to clarify that claim. The analysis is joined with broad methodological suggestions informed by recent practices within large-scale research programs. The overall goal is to identify a novel approach for measuring and making progress in artificial intelligence.