Bayes' Theorem is, apparently, the new sexy. Everyone's quoting it, from psychologists to ecologists to cosmologists, even the TV show The Big Bang Theory. Named after an 18th-century English minister and statistician, the theorem is basically a way to calculate the validity of one's beliefs. In a nutshell: initial belief + new evidence = improved belief. Obvious, but sometimes counterintuitive. Take the real-life example of breast cancer screening by mammography.
Mammograms are notoriously imperfect. Quoted accuracy rates regularly change, but using typical numbers, mammograms detect about 80 percent of breast cancers while returning "false positives" for 9.6 percent of healthy women. Around 1 percent of 40-year-old women have breast cancer (a rather arbitrary number, since experts disagree on how to diagnose cancer or even what counts as cancer).
First test: A 40-year-old woman is told her mammogram result is positive. What's the probability she has breast cancer? Does your intuition (like mine) put it around 1 in 2 or 1 in 3? It's actually a non-intuitive 1 in 13. This is scant reassurance if you happen to be the 13th, but hopefully it will help keep the actual risk in perspective. (I'm about to show how to calculate this, but if you're numberphobic, skip to "Summing up.")
Using the above figures, if 10,000 40-year-old women are given mammograms, 100 (1 percent) have breast cancer, of whom 80 (80 percent) will test positive. Of the remaining 9,900 healthy women, 950 (9.6 percent) will test positive for a total of 1,030 testing positive. Meaning that, because of the many false positives, a woman whose mammogram result is positive has a 7.8 percent (80/1030 or about 1 in 13) chance of having breast cancer.
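The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration using the article's figures; the variable names (cohort, sensitivity, and so on) are my own labels, not standard terminology from the piece.

```python
# First-test calculation, using the article's figures:
# 1 percent prevalence, 80 percent detection, 9.6 percent false positives.
cohort = 10_000
prevalence = 0.01
sensitivity = 0.80          # fraction of cancers the test detects
false_positive_rate = 0.096  # fraction of healthy women who test positive

with_cancer = cohort * prevalence                 # 100 women
true_positives = with_cancer * sensitivity        # 80 test positive
healthy = cohort - with_cancer                    # 9,900 women
false_positives = healthy * false_positive_rate   # about 950 test positive

# Of everyone who tests positive, what fraction actually has cancer?
posterior = true_positives / (true_positives + false_positives)
print(round(posterior, 3))  # about 0.078, i.e. roughly 1 in 13
```

The key step is the last line: the 80 true positives are swamped by the roughly 950 false positives, which is why the final probability is so much lower than intuition suggests.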
Second test: With one positive result, she gets tested again. Whereas her original prior probability (her "prior") was 1 percent, it's now 7.8 percent. Using the cohort of 10,000 again: with a 7.8 percent prior, 780 of the 10,000 have cancer, of whom 624 (80 percent) test positive; of the 9,220 healthy women, 885 (9.6 percent) test positive. So 624 of the 1,509 women who test positive a second time have breast cancer, that is, about 41 percent.
Summing up: One positive mammogram result returns a 7.8 percent chance of having breast cancer; two positive results increase the risk to 41 percent. And since you asked, three positives indicate an 85 percent chance.
(With all the hype about screening for breast cancer, it may be hard for many people, including physicians, to accept that a positive mammogram result means that a woman's risk, which was 1 in 100 before the test, is still only 1 in 13.)
This "iterative" approach is one way to think about Bayes' Theorem. In the case of mammography results, you start with your 1 percent prior, take new evidence into account (the first result) and arrive at a more accurate belief. Repeat, using the new prior of 7.8 percent, to arrive at a yet more-accurate belief. Repeat.
The main problem with Bayes is its potential for misuse, starting with the value you pick for your initial prior. In the above breast cancer example, a 1-percent prior is reasonable — everyone can agree it's not 0.1 percent or 10 percent. But what if you're estimating the likelihood of, say, the existence of God? Alien abductions? The benefits of smoking? With Bayes, by minimizing alternative explanations ("God or Darwin? Pick one!"), you can artificially boost any non-zero prior, even one in a trillion; the evidence will simply increase your original belief.
Can I interest you in the Hollow Earth theory?
Barry Evans' (firstname.lastname@example.org) takeaway is: The more alternative explanations there are, the less I can trust my belief.