Probability provides a formal representation of degrees of belief. This is done by relating beliefs to proportions. For example, in order for us all to agree on what we mean by ‘probability’, we all have to agree to experience a probability of 1.0 (or certainty) that someone has diabetes if we know that the person is one of the people in a room and that 10 out of 10 people in that room have diabetes mellitus. A probability of 0 that someone has diabetes is what we would agree to experience if we knew that the person was in a room where 0/10 were known to have diabetes. A probability of 0.5 that someone has diabetes is what we would agree to experience if all we knew was that 5/10 people in a room were known to have diabetes. This relationship is used as a basis for making calculations with probabilities (see, for example, ‘Bayes’ theorem’). This is also known as ‘mathematical probability’: each member of a set is allocated a probability of x/y when there are y members in the set and x of them turn out to have the predicted attribute, e.g. being diabetic.
Even if a probability is guessed, or if the prediction and the information on which it is based are both unique, in order to carry out calculations with probabilities the estimated probability is regarded AS IF it were based on a theoretical proportion equal to that probability. For example, if some calculation gave a probability of 0.786, then this would be equivalent to the degree of certainty felt if there were, theoretically, 786 patients with diabetes out of 1000 in a big room.
The accuracy of an estimated probability cannot usually be checked against a theoretical population because a population of the appropriate size may not exist. However, if the outcome of every prediction made with a probability of 0.786 were checked, irrespective of what the prediction was about, then 78.6% of all such predictions should turn out to be correct if the probability estimates were appropriate. If the proportion of correct predictions did not correspond with the probability estimate, then future estimates could be adjusted or ‘calibrated’ so that they correspond to the appropriate proportion. For example, if only 68% of predictions were correct for all probability estimates of 0.786, then this would represent ‘overconfidence’, and all probabilities of 0.786 should be changed to become probability estimates of 0.68.
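The calibration check described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the text; the function name and the figures (100 predictions, 68 correct) are hypothetical:

```python
def calibration_check(outcomes):
    """Return the observed proportion of correct predictions.

    outcomes: list of True/False, one entry per prediction that was
    made with the same stated probability (e.g. 0.786).
    """
    return sum(outcomes) / len(outcomes)

# Suppose 100 predictions were each assigned a probability of 0.786,
# but only 68 of them turned out to be correct:
outcomes = [True] * 68 + [False] * 32
observed = calibration_check(outcomes)
print(observed)  # 0.68 -- future estimates of 0.786 would be recalibrated to 0.68
```

In practice the same comparison would be repeated for each distinct probability value used, giving a calibration table.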
People appear to recalibrate their probabilities constantly in their day-to-day lives by adjusting their sense of certainty to compensate for over-confidence or under-confidence. Doctors do this especially during training or when adjusting to new environments, e.g. when moving to different specialities or countries. It is something that could be done formally as part of audit by keeping a diary of predictions made and their associated probabilities, then following them up and recording what happened. This is not common practice at present.
The precision of a probability
A difficulty can arise for a probability of 0.5, for example, because it could be based on a set of 1 out of 2, or 2 out of 4, or 3 out of 6, and so on, up to millions out of twice as many millions. Which do we choose? One approach is to regard a probability as a measurement whose precision depends on the actual numbers. This can be done by regarding the next observation as a new member of the set, so that after observing 1 out of 2, the new member could make the set 1 out of 3 (0.333…) or 2 out of 3 (0.667…). We could then say that when an observed set of 1 out of 2 is about to be joined by a third member, the probability to be experienced about that third member is either 0.333 or 0.667.
This is analogous to saying that the length of a piece of string measured with a ruler marked in cm is ≥ 103 cm and < 104 cm, which is the correct way of describing the observation. If the set contains millions out of twice as many millions, then the pair of probabilities would be 0.4999… and 0.5000…; they are essentially the same, so there is no need to mention both values. The advantage of this convention is that the pair of probabilities allows us to recalculate the numbers in the set on which they are based and, as a result, to calculate confidence intervals, credibility intervals, etc. In practice, pairs of probabilities are not quoted; instead the original proportion is usually given as well, e.g. probability = 0.5 (1/2).
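The pair of probabilities produced by the ‘next member’ convention above can be computed directly. This is a small sketch (the function name is hypothetical): given x observed out of n, the next member makes the set either x out of n+1 or x+1 out of n+1:

```python
from fractions import Fraction

def probability_bounds(x, n):
    """Given x members out of n with the attribute, return the two
    probabilities that the next (new) member of the set could produce."""
    lower = Fraction(x, n + 1)      # the new member lacks the attribute
    upper = Fraction(x + 1, n + 1)  # the new member has the attribute
    return float(lower), float(upper)

print(probability_bounds(1, 2))
# 1 out of 2 gives the wide pair (0.333..., 0.666...)

print(probability_bounds(1_000_000, 2_000_000))
# millions out of twice as many millions gives a pair so close to 0.5
# that the two values are essentially the same
```

The width of the pair thus records how precise the underlying observation was, just as the cm markings on the ruler do for the string.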
Probabilities based on random sampling
The idea of random sampling is very important in applications of probability. This involves using one set to create a second, ‘sample’ set. For example, a patient in a room of 10 people, 5 of whom are diabetic, could be chosen ‘at random’ by blindfolding everyone in the room as well as someone who wanders about the room until they bump into someone. The person bumped into declares that they are ‘diabetic’ or ‘non-diabetic’, and an observer writes this down on a card, which becomes the first member of the ‘card sample set’. The process is repeated a large number of times to create a large number of cards.
If the selection method is random, then about 50% of the cards in the set should end up with ‘diabetic’ written on them. This is an example of a ‘frequentist probability’, because the probability corresponds to the proportion in a real set. It is a physical observation, based on a study, that supports the theory that the selection was random and that the blindfolded wanderer knew nothing about the people bumped into other than that they were in the room. Scientific measurements can be regarded as ‘samples’ drawn from a set of all possible results. Confidence intervals, Bayesian credibility intervals and the probability of replication within a range all depend on ‘sampling models’ of this kind.
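The blindfolded-wanderer scenario can be simulated to show the frequentist behaviour described above. This is a sketch only; random bumping is approximated by the standard library's pseudo-random choice, and the number of cards (100,000) is an arbitrary illustrative figure:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# The room: 5 diabetics and 5 non-diabetics
room = ['diabetic'] * 5 + ['non-diabetic'] * 5

# The wanderer bumps into someone at random, many times;
# each declaration is written on a card in the 'card sample set'.
cards = [random.choice(room) for _ in range(100_000)]

proportion = cards.count('diabetic') / len(cards)
print(proportion)  # close to 0.5 if the selection is truly random
```

A proportion far from 0.5 over many cards would be evidence against the selection being random, which is exactly the logic behind the sampling models mentioned above.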
Dice and roulette wheels are mechanical devices that are designed to select members of a set randomly. The members of a die’s set are 1, 2, 3, 4, 5 and 6, whereas the members of a European roulette wheel’s set are the numbers 0 to 36.
© Huw Llewelyn 2016