Abstract: We are interested in belief revision involving conditional statements where the antecedent is almost certainly false. To represent such problems, we use Ordinal Conditional Functions that may take infinite values. We model belief change in this context through simple arithmetical operations that allow us to capture the intuition that certain antecedents cannot be validated by any number of observations. We frame our approach as a form of finite belief improvement, and we propose a model of conditional belief revision in which only the "right" hypothetical levels of implausibility are revised.
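To make the role of infinite ranks concrete, the following is a minimal Python sketch, not the paper's formalism: the class, the world names, and the `improve` operation are illustrative assumptions. It shows why, under simple arithmetical shifting, no finite number of observations can validate an antecedent ranked infinitely implausible.

```python
import math

class OCF:
    """An ordinal conditional function over a finite set of worlds; ranks may
    be math.inf, marking worlds that no finite evidence can make plausible."""

    def __init__(self, ranks):
        # ranks: dict mapping each world to a rank in {0, 1, 2, ...} or math.inf.
        assert min(ranks.values()) == 0, "some world must be maximally plausible"
        self.ranks = dict(ranks)

    def beliefs(self):
        # The agent believes exactly the rank-0 (most plausible) worlds.
        return {w for w, r in self.ranks.items() if r == 0}

    def improve(self, evidence, strength):
        # Finite improvement: lower the rank of every evidence-world by a
        # finite 'strength', then renormalize so the minimum rank is 0.
        # Since math.inf - strength == math.inf, an infinitely implausible
        # antecedent survives any number of such observations.
        shifted = {w: (r - strength if w in evidence else r)
                   for w, r in self.ranks.items()}
        low = min(shifted.values())
        return OCF({w: r - low for w, r in shifted.items()})

# Example: even a very strong observation cannot make 'frogs_fall' believed.
kappa = OCF({'rain': 0, 'snow': 2, 'frogs_fall': math.inf})
print(kappa.improve({'frogs_fall'}, strength=1000).beliefs())  # {'rain'}
```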
Abstract: Belief revision is the process by which an agent incorporates a new piece of information into a pre-existing set of beliefs. When the new information comes in the form of a report from another agent, we must first determine whether or not that agent should be trusted. In this paper, we provide a formal approach to modeling trust as a pre-processing step before belief revision. We emphasize that trust is not simply a relation between agents; the trust that one agent has in another is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before performing belief revision. In this manner, we incorporate only the part of a report that falls under the perceived domain of expertise of the reporting agent. Unfortunately, state partitions based on expertise do not allow us to compare the relative strength of trust held with respect to different agents. To address this problem, we introduce pseudometrics over states to represent differing degrees of trust. This allows us to incorporate simultaneous reports from multiple agents in a way that ensures the most trusted reports will be believed.
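A brief sketch of the partition-based pre-processing step, with the function name and the medical scenario as illustrative assumptions rather than the paper's own examples. A report is weakened to the union of the expertise cells it overlaps, so only the part of the report the reporting agent can reliably distinguish is passed on to revision.

```python
def relativize(report, expertise_partition):
    # report: set of states the reporting agent claims are possible.
    # expertise_partition: disjoint state sets covering the state space;
    # the agent is only trusted to distinguish states in different cells.
    # Any cell overlapping the report is included wholesale.
    weakened = set()
    for cell in expertise_partition:
        if cell & report:
            weakened |= cell
    return weakened

# Example: states are (illness, weather) pairs. A doctor is trusted on
# illness but not weather, so the partition groups states by illness only.
states = {('flu', 'sun'), ('flu', 'rain'), ('cold', 'sun'), ('cold', 'rain')}
doctor = [{s for s in states if s[0] == 'flu'},
          {s for s in states if s[0] == 'cold'}]
report = {('flu', 'sun')}          # "you have the flu and it is sunny"
print(relativize(report, doctor))  # all 'flu' states: the weather claim is dropped
```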
Abstract: In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research.
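To illustrate why revision and update must be distinguished, here is a sketch using standard minimal-distance semantics (AGM-style revision and Katsuno-Mendelzon-style update under Hamming distance); this is a textbook contrast, not the paper's transition-system operators, and the book/magazine scenario is a classic example rather than one drawn from the abstract.

```python
def hamming(u, v):
    # Distance between two interpretations given as tuples of truth values.
    return sum(a != b for a, b in zip(u, v))

def revise(belief_worlds, new_worlds, dist=hamming):
    # Revision: keep the new-information worlds globally closest to the
    # belief set, treating the new information as news about a static world.
    best = min(dist(b, w) for b in belief_worlds for w in new_worlds)
    return {w for w in new_worlds
            if min(dist(b, w) for b in belief_worlds) == best}

def update(belief_worlds, new_worlds, dist=hamming):
    # Update: each belief world moves independently to its own closest
    # new-information worlds, modeling a change in the world itself.
    result = set()
    for b in belief_worlds:
        best = min(dist(b, w) for w in new_worlds)
        result |= {w for w in new_worlds if dist(b, w) == best}
    return result

# Worlds are (book_on_table, magazine_on_table) pairs.
beliefs = {(1, 0), (0, 1)}   # exactly one item is on the table
book = {(1, 0), (1, 1)}      # new information: the book is on the table
print(revise(beliefs, book))  # {(1, 0)}: conclude the magazine is not there
print(update(beliefs, book))  # {(1, 0), (1, 1)}: the magazine may remain
```

The divergence on the last two lines is exactly the kind of interaction an alternating sequence of revisions and updates has to manage.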