We define a new class of Bayesian point estimators, which we refer to as risk-averse estimators. We then use this definition to formulate several axioms that we argue are natural requirements for good inference procedures, and show that for two classes of estimation problems the axioms uniquely characterise an estimator. Namely, for estimation problems with a discrete hypothesis space, we show that the axioms lead to the MAP estimate, whereas for well-behaved, purely continuous estimation problems they lead to the Wallace-Freeman estimate. Interestingly, this combined use of MAP and Wallace-Freeman estimation mirrors common practice in the Minimum Message Length (MML) community. There, however, the two estimators serve as approximations to the information-theoretic Strict MML estimator, whereas we derive them exactly, not as approximations, and do so without any use of encoding or information theory.
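As a brief reminder (the notation here, with prior $\pi(\theta)$, likelihood $f(x \mid \theta)$, and Fisher information matrix $I(\theta)$, is introduced for illustration and is not taken from the text above), the two estimators have the standard forms
\[
\hat{\theta}_{\mathrm{MAP}} \;=\; \arg\max_{\theta}\; \pi(\theta)\, f(x \mid \theta),
\qquad
\hat{\theta}_{\mathrm{WF}} \;=\; \arg\max_{\theta}\; \frac{\pi(\theta)\, f(x \mid \theta)}{\sqrt{\det I(\theta)}},
\]
so the Wallace-Freeman estimate differs from MAP by the Fisher-information penalty $\sqrt{\det I(\theta)}$ in the denominator, which makes it invariant under smooth reparameterisation of the continuous parameter space.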