Learning and adaptation are fundamental properties of intelligent agents. In the context of adaptive information filtering, a filtering agent's beliefs about a user's information needs must be revised regularly to reflect the user's most current information preferences. This learning and adaptation process is essential for maintaining the agent's filtering performance. The AGM belief revision paradigm provides a rigorous foundation for modelling rational and minimal changes to an agent's beliefs. In particular, the maxi-adjustment method, which follows the AGM rationale of belief change, offers a sound and robust computational mechanism for developing adaptive agents with enhanced learning autonomy. This paper describes how the maxi-adjustment method is applied to develop the learning components of adaptive information filtering agents, and discusses the possible difficulties of applying such a framework to these agents.