Phil Wall gets his brain round some complicated numbers…
Most of us think that we’re quite good at assessing odds. In fact, surveys show that nearly everyone thinks they’re better than average at working out chances, because we rate ourselves highly at ‘common sense’. But are most of us really any good at it at all? Try this puzzle: you’re on a game show and you’re faced with three doors, one of which conceals the star prize. The host invites you to select a door; let’s say you choose A. The host then tells you that the prize is not behind one of the other doors, say B, and asks if you’d like to change your mind and select C. What should you do? Is it better to stick with A, or change to C, or doesn’t it matter?
At first it seems that there’s no difference – with one door ruled out, there are two possibilities left and a 50/50 chance of a correct guess, so why switch? In fact, switching doubles your chances of winning. (Skip the rest of this paragraph if the detailed explanation doesn’t interest you and all you need is the answer for potential future game show opportunities.) The reason is this: you had a one-in-three chance of selecting the right door at first, so two-thirds of the time the prize is behind one of the others. If you choose A, the prize is behind B or C two-thirds of the time; once the host rules one of those out (B in this case), then by always switching you win two-thirds of the time. You lose by switching only in the one-third of cases where you happened to guess right to begin with. That’s life.
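For readers who’d rather see it than believe it, a few lines of Python will check the argument – a minimal sketch, with arbitrary door labels and trial count:

```python
import random

def monty_hall(trials=100_000):
    """Simulate the game show: return the win rates for sticking
    with the first pick versus always switching."""
    stick_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.choice("ABC")
        pick = random.choice("ABC")
        # The host opens a door that is neither the pick nor the prize.
        opened = next(d for d in "ABC" if d != pick and d != prize)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in "ABC" if d != pick and d != opened)
        stick_wins += (pick == prize)
        switch_wins += (switched == prize)
    return stick_wins / trials, switch_wins / trials
```

Run it and sticking wins about a third of the time, switching about two-thirds – the simulation settles the argument that ten thousand letter-writers couldn’t.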
This puzzle caused a furore in the US in 1990, when the solution was explained in the country’s most popular Sunday magazine. Approximately 10,000 people wrote to complain that the magazine was wrong and that there was definitely no benefit in switching. Nearly 1,000 of these signed themselves as PhDs, so as to justify their comments and their indignation! A restating of the problem and solution weeks later only served to stir up more controversy, with mathematicians and statisticians wading in on both sides.
Here’s another puzzle: how many people do you need to gather at random to have a better than even chance that two share the same birthday? Although there are 365 options, amazingly the answer is just 23. Similarly, for a roughly even chance that two of a set of random two-digit numbers (00-99) match, you need a sample of just 12 – the chance is 49.7% at 12, and first passes even at 13. In other words, if you’re betting on duplicates appearing in a series of two-digit random numbers, say the last two digits of old car number plates, you have a better than even chance of seeing a duplicate once only 13 cars have gone by. Many people would bet the opposite.
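The exact figures are easy to compute: the trick is to work out the chance that there is no duplicate, and subtract it from one. A short sketch:

```python
from math import prod

def p_duplicate(n, values):
    """Chance that at least two of n random draws from `values`
    equally likely options coincide: one minus the chance that
    every draw is different from all the earlier ones."""
    return 1 - prod((values - k) / values for k in range(n))

# p_duplicate(23, 365) ≈ 0.507  – the birthday puzzle
# p_duplicate(13, 100) ≈ 0.557  – two-digit number plates
```

The same function covers birthdays, number plates, or any other pool of equally likely values.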
All this is good fun, and potentially profit-making, but there are more serious consequences of our general inability to estimate chance correctly. In one study, doctors were asked what the odds of breast cancer would be for a woman initially assessed as having a 1% risk of cancer but who then had a positive mammogram result. The doctors knew that mammograms accurately identify tumours about 90% of the time. Nearly all estimated the probability of cancer to be about 75%. In fact the maths shows that the real chance was only around a tenth of that: because 99% of such women don’t have cancer, the test’s false alarms vastly outnumber its true ones. This could lead to a lot of unnecessary distress and treatment unless further checks are made.
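The calculation itself is a one-liner of Bayes’ rule. The article doesn’t state the test’s false-positive rate, so the 9% used below is an assumption (it is the figure usually quoted in this classic study, but treat it as illustrative):

```python
def p_cancer_given_positive(prior, sensitivity, false_positive_rate):
    """Bayes' rule: the chance of cancer given a positive mammogram."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# With the article's figures (1% prior risk, 90% sensitivity) and an
# assumed 9% false-positive rate, the answer is about 9%, not 75%:
# p_cancer_given_positive(0.01, 0.9, 0.09) ≈ 0.092
```

In a group of 10,000 such women, about 90 true cancers are flagged – but so are roughly 890 healthy women, which is why a positive result still means cancer is unlikely.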
It’s when statistics start being used in court that we really find trouble. At the OJ Simpson murder trial the prosecution presented evidence that blood at the scene matched Simpson’s – a match that would occur in only 1 in 400 other people. The defence countered that even at those odds they could fill a football stadium with LA citizens whose blood matched, so the evidence didn’t implicate their client at all – overlooking that this was merely one item in a very long list of evidence against Simpson, and that the match made his guilt 400 times more likely than if the blood hadn’t matched. Simpson was acquitted of murder.
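That “400 times more likely” is a likelihood ratio: whatever the odds of guilt were before the blood evidence, it multiplies them by 400. A sketch, using an entirely hypothetical prior that has nothing to do with the real trial:

```python
def posterior_probability(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds times the
    likelihood ratio, then converted back to a probability."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Purely hypothetical prior odds of 1 in 1000, multiplied by the blood
# evidence's likelihood ratio of 400:
# posterior_probability(1/1000, 400) ≈ 0.29
```

Neither side’s argument captures this: the evidence alone proves nothing, but it is far from worthless – it shifts the odds by a factor of 400, to be combined with everything else.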
You may also remember the case of solicitor Sally Clark, who was convicted in 1999 of killing her two infant children. At Mrs Clark’s trial, paediatrician Professor Sir Roy Meadow claimed that the chances of two babies in the same family dying from unexplained causes – Sudden Infant Death Syndrome (SIDS), or ‘cot death’ – were 1 in 73 million. Meadow calculated this figure by multiplying what he believed was the chance of one unexplained infant death – 1 in 8543 – by itself. His calculation was soon rebutted by the Royal Statistical Society.
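Meadow’s arithmetic is easy to reproduce, and its weakness easy to see, in a couple of lines – the dependence factor below is invented purely for illustration, not a measured figure:

```python
# Meadow multiplied the single-death figure by itself, which treats
# the two deaths as completely independent events:
independent = 8543 ** 2          # 72,982,849 - the "1 in 73 million"

# If a second SIDS death were, say, ten times more likely once one has
# already occurred (an illustrative figure only), the odds collapse:
dependent = 8543 * (8543 // 10)  # roughly 1 in 7.3 million
```

Squaring is only valid for independent events – the same rule that lets you square 1/6 for two dice showing a six – and, as the next paragraph explains, these deaths were anything but independent.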
Meadow’s first mistake was to treat the two deaths as entirely independent events: genetic and environmental factors actually make a second SIDS death more likely in a family that has already suffered one. Secondly, his figure of 1 in 8543 is probably several times too high, and has since been calculated as around 1 in 1300 by Professor of Mathematics Ray Hill of Salford University. Thirdly, and perhaps most importantly, although double SIDS is rare, double murder is rarer still. Two deaths had taken place, and there were two available explanations: murder or SIDS. In addition to the physical evidence of the particular case, what matters is the relative chance of one explanation against the other. If double murders almost never occur, then however unlikely double SIDS appears, it is still the more probable answer.
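The point about competing explanations can be made concrete with a toy calculation – the figures below are invented for illustration and bear no relation to the real case:

```python
def p_sids_given_deaths(p_double_sids, p_double_murder):
    """Comparing just two explanations for the same two deaths:
    the chance it was SIDS, given only that the deaths occurred."""
    return p_double_sids / (p_double_sids + p_double_murder)

# Invented figures: if double SIDS were a 1-in-a-million event but
# double murder a 1-in-10-million event, SIDS would still be favoured
# ten to one, despite being astronomically unlikely in itself:
# p_sids_given_deaths(1e-6, 1e-7) ≈ 0.91
```

However small both numbers are, only their ratio matters – quoting one of them in isolation, as the jury heard, answers the wrong question.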
It’s quite baffling that courts take the kind of statistical evidence provided by Professor Meadow on trust. Perhaps the problem is that it’s presented in simple words, and that it sounds logical at first when we apply ‘common sense’ to it. Expert witnesses in ballistics or chemistry don’t talk in layman’s terms, and most people don’t expect to judge those subjects by their own rules of common sense. Statisticians talk in language that we like to think we have a grasp of. We are, however, easily fooled.
In a similar case to Sally Clark’s, a Dutch nurse called Lucia De Berk was finally cleared of seven charges of murder and three of attempted murder in April this year, having served five years in prison. Her case received much publicity as the evidence against her was entirely circumstantial – effectively she was in the wrong place at the wrong time. Or, sometimes, not. Read on…
Lucia happened to work on wards where people sometimes died. This is not unusual for nurses. In Lucia’s case, however, other staff thought they saw patterns in entirely random events: to some colleagues she appeared to be present at the scene of deaths more often than was plausible if the deaths were all natural. At her trial a statistical analysis was presented by Henk Elffers, a Professor of Law (not of Mathematics or Statistics), who calculated that the chance of one person being present at all the incidents in question was 1 in 342 million. Pretty big odds – and very wrong.
Statistician Richard Gill immediately recognised flaws in Elffers’ argument and campaigned long and hard for a retrial. Gill wrote several papers on the subject, one of which concluded that an innocent nurse has about a 1 in 9 chance of being present at the number of incidents Lucia was really at. Additionally, it turned out that Lucia was not even on duty at the times of some of the deaths. (That blew the statistics, too.)
For Sally Clark and Lucia De Berk justice was eventually done, though not without devastating effects on their lives. Judges and juries would do well to remember that even if the odds appear overwhelming, unusual things do happen. Look at it this way: the chance of winning a lottery jackpot is 1 in 14 million, yet someone does just that most weeks. Despite the phenomenally long odds, lottery winners are not routinely arrested for fraud.