Space-air-ground integrated networks (SAGINs) extend network coverage beyond geographical limitations, enabling users to access ubiquitous intelligence services worldwide. Facing global coverage and complex environments in SAGINs, edge intelligence can provision AI agents based on large language models (LLMs) for users via edge servers at ground base stations (BSs) or via cloud data centers relayed by satellites. Because LLMs with billions of parameters are pre-trained on vast datasets, LLM agents possess few-shot learning capabilities, e.g., chain-of-thought (CoT) prompting for complex tasks, which are difficult to sustain with the limited resources of SAGINs. In this paper, we propose a joint caching and inference framework for edge intelligence to provision sustainable and ubiquitous LLM agents in SAGINs. We introduce "cached model-as-a-resource" for offering LLMs with limited context windows and propose a novel optimization framework, i.e., joint model caching and inference, that utilizes cached model resources, alongside communication, computing, and storage resources, to provision LLM agent services. We design an "age of thought" (AoT) metric that accounts for the CoT prompting of LLMs, and propose a least-AoT cached model replacement algorithm to minimize the provisioning cost. Finally, we propose a deep Q-network-based modified second-bid (DQMSB) auction to incentivize network operators, which enhances allocation efficiency while guaranteeing strategy-proofness and freedom from adverse selection.
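To make the cache replacement idea concrete, the following is a minimal Python sketch of a least-AoT eviction loop. It is an illustration under stated assumptions, not the paper's implementation: the \texttt{CachedModel} structure, the \texttt{thought\_gain} update rule, and the storage units are hypothetical placeholders, and we assume, from the algorithm's name alone, that the cached model with the smallest AoT value is replaced first when edge storage is exhausted.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class CachedModel:
    model_id: str    # hypothetical identifier for a cached LLM
    size_gb: float   # storage footprint (illustrative unit)
    aot: float       # age-of-thought value; update rule is a placeholder

class LeastAoTCache:
    """Minimal sketch of a least-AoT replacement policy.

    Assumption: when storage is exhausted, the cached model with the
    smallest AoT value is evicted first. The AoT bookkeeping below is
    a placeholder counter, not the paper's formal definition.
    """

    def __init__(self, capacity_gb: float):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self.models: dict[str, CachedModel] = {}

    def touch(self, model_id: str, thought_gain: float = 1.0) -> None:
        # Placeholder update: serving a CoT step refreshes the
        # corresponding model's AoT value.
        if model_id in self.models:
            self.models[model_id].aot += thought_gain

    def admit(self, model: CachedModel) -> list[str]:
        """Cache `model`, evicting least-AoT models until it fits."""
        evicted = []
        while self.used_gb + model.size_gb > self.capacity_gb and self.models:
            victim = min(self.models.values(), key=lambda m: m.aot)
            self.used_gb -= victim.size_gb
            del self.models[victim.model_id]
            evicted.append(victim.model_id)
        if self.used_gb + model.size_gb <= self.capacity_gb:
            self.models[model.model_id] = model
            self.used_gb += model.size_gb
        return evicted

if __name__ == "__main__":
    cache = LeastAoTCache(capacity_gb=30.0)
    cache.admit(CachedModel("llm-7b", size_gb=14.0, aot=0.0))
    # Admitting a larger model forces eviction of the least-AoT entry.
    print(cache.admit(CachedModel("llm-13b", size_gb=26.0, aot=0.0)))
\end{verbatim}

In a full system, the AoT value of each cached model would be driven by the CoT prompting statistics that the paper formalizes, rather than the placeholder counter used here, and eviction decisions would be weighed jointly against the communication, computing, and storage costs in the provisioning objective.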