To be coherent with the axioms of the probability calculus, the personal probability functions of ideally rational Bayesian agents must reflect a complete grasp of the logical structure of the propositions over which they are defined.
Let P_1(·) be the personal probability function of a Bayesian agent, and let E describe some piece of well-established evidence such that P_1(E) = 1; then P_1(H | E) = P_1(H) for every H over which P_1(·) is defined.
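The point can be checked numerically: when the evidence already has probability one, conditionalizing on it changes nothing. A minimal sketch, with a hypothetical two-proposition probability space whose numbers are chosen only for illustration:

```python
from fractions import Fraction

# Toy personal probability over four worlds (H/not-H crossed with E/not-E).
# The numbers are illustrative assumptions, not taken from the text.
P = {
    ("H", "E"): Fraction(3, 10),
    ("H", "not-E"): Fraction(0),
    ("not-H", "E"): Fraction(7, 10),
    ("not-H", "not-E"): Fraction(0),
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for world, p in P.items() if pred(world))

p_E = prob(lambda w: w[1] == "E")          # P_1(E) = 1: E is well established
p_H = prob(lambda w: w[0] == "H")          # P_1(H)
p_H_given_E = prob(lambda w: w == ("H", "E")) / p_E  # P_1(H | E)

assert p_E == 1
assert p_H_given_E == p_H  # conditionalizing on certain evidence is idle
```

Since every world incompatible with E carries zero probability, the ratio P_1(H and E) / P_1(E) just returns P_1(H).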
Let P_L(·) be the personal probability function of a Bayesian agent defined over L.
Imagine, then, that our seventeenth-century Bayesian physicist satisfies LO2, so that her personal probability function, P_old(·), assigns a probability to all the propositions that can be formulated within her language L.
Again, as in Section 4, someone might object that on the agent's new personal probability function it must still be the case that:
He tells us that chance is reasonable personal probability, but what could be more reasonable than adjusting your belief about the 'chance' of heads at the next throw to the frequency of heads observed so far?
The conclusion seems inescapable that statements about chances are merely statements of personal probability.
A Bayesian statistical inference depends only on your personal probability for E, given H, and not on whether this value rests entirely on objective probabilities.
Given this H and this E, the prior personal probability of H, given E, is one, since the randomization ensures that the probability of E is zero on any hypothesis other than H.
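The arithmetic behind this is simple Bayes: if the likelihood of E vanishes on every rival hypothesis, the posterior of H given E is one no matter how small its prior. A minimal sketch with hypothetical priors and likelihoods chosen only to make the point:

```python
from fractions import Fraction

# Three rival hypotheses with illustrative priors; the randomization is
# modelled by likelihoods for E that vanish everywhere off H1.
priors = {"H1": Fraction(1, 100), "H2": Fraction(49, 100), "H3": Fraction(50, 100)}
likelihood_E = {"H1": Fraction(1, 2), "H2": Fraction(0), "H3": Fraction(0)}

# Bayes' theorem: P(h | E) = P(h) * P(E | h) / P(E).
p_E = sum(priors[h] * likelihood_E[h] for h in priors)
posterior = {h: priors[h] * likelihood_E[h] / p_E for h in priors}

assert posterior["H1"] == 1  # certainty, despite a prior of only 1/100
```

Only H1 contributes to P(E), so conditioning on E concentrates all probability on it.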
When we infer probabilities from sample statistics, a non-random sample and a random sample can yield just the same conditional personal probability for statistic E given the H under investigation, and so can underpin just the same statistical inference.