Originally posted by Acolyte
A particular type of coin is suspected of being biased towards tails if you toss it. Assume that all coins are equally biased, if they're biased at all.
Experimenter A tests this by tossing one of these coins six times. His results ar ...[text shortened]... alculations were correct. So why are their conclusions different?
So - which tester had the better test?
Assume that a biased coin throws heads, on average, only once in a hundred tosses.
Tester A will presumably declare the coin biased if he throws 6 tails (otherwise his test was useless)
He has a chance of (0.99)^6 that a biased coin will show all six tails, so his chance of declaring a biased coin fair is 1 - (0.99)^6, about 5.85%.
He has a chance of 0.5^6 that a good coin will show six tails, i.e. his chance of declaring a good coin biased is about 1.56%.
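Here's a quick Python check of those two figures (just a sketch; the tail probabilities 0.99 and 0.5 come straight from the assumptions above, and the variable names are mine):

```python
# Tester A declares "biased" only when all six tosses come up tails.
p_tails_biased = 0.99   # biased coin: P(tails), from the assumption above
p_tails_fair = 0.5      # fair coin: P(tails)

false_negative_a = 1 - p_tails_biased ** 6   # biased coin declared fair
false_positive_a = p_tails_fair ** 6         # fair coin declared biased

print(f"A: P(biased coin declared fair) = {false_negative_a:.4f}")   # ~0.0585
print(f"A: P(fair coin declared biased) = {false_positive_a:.4f}")   # ~0.0156
```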
Tester B will presumably declare the coin biased if her first head is after the fifth toss.
She has a chance of 0.01 + 0.99*0.01 + 0.99^2*0.01 + 0.99^3*0.01 + 0.99^4*0.01 = 1 - 0.99^5, about 4.9%, of declaring a biased coin fair,
and a chance of 0.5^5, about 3.1%, of declaring a good coin biased.
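The same check for Tester B (again a sketch; the geometric sum above telescopes to 1 - 0.99^5, which is what I compute here):

```python
# Tester B tosses until the first head and declares "biased" if the
# first head arrives after the fifth toss, i.e. tosses 1-5 are all tails.
p_tails_biased = 0.99
p_tails_fair = 0.5

# P(first head within five tosses) = 1 - P(five tails in a row)
false_negative_b = 1 - p_tails_biased ** 5   # biased coin declared fair
false_positive_b = p_tails_fair ** 5         # fair coin declared biased

print(f"B: P(biased coin declared fair) = {false_negative_b:.4f}")   # ~0.0490
print(f"B: P(fair coin declared biased) = {false_positive_b:.4f}")   # ~0.0312
```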
Now, whose method is better depends on how many coins are biased.
If 1 in every 100 coins is biased, Tester A will make about 16 mistakes per 1000 coins, whereas Tester B will make about 31.
Even if 50 in every 100 coins are biased, Tester A still wins, with about 37 mistakes per 1000 coins against Tester B's 40. Tester B only pulls ahead once more than roughly 62% of the coins are biased: at 90 in every 100, Tester A makes about 54 mistakes per 1000 coins and Tester B about 47.
Given that most coins in circulation are fair, I think Tester A's method is better in real-world scenarios. However, in the "real world" I can beat them both by immediately declaring all coins to be unbiased! That way I only make 10 mistakes in 1000 coins, and I take much less time to do my test!
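To pull the whole comparison together, here's a sketch that tabulates expected mistakes per 1000 coins for Tester A, Tester B, and my lazy declare-everything-fair strategy, and finds the crossover point where B overtakes A (the error rates are the ones derived above; the function and names are mine):

```python
# Error rates derived above: (false negative, false positive) per coin.
fn_a, fp_a = 1 - 0.99 ** 6, 0.5 ** 6   # Tester A: six tails => biased
fn_b, fp_b = 1 - 0.99 ** 5, 0.5 ** 5   # Tester B: no head in five => biased
fn_lazy, fp_lazy = 1.0, 0.0            # lazy: declare every coin fair

def mistakes_per_1000(fn, fp, biased_fraction):
    """Expected errors in 1000 coins: missed biased coins plus false alarms."""
    return 1000 * (biased_fraction * fn + (1 - biased_fraction) * fp)

for frac in (0.01, 0.50, 0.90):
    print(f"biased fraction {frac:.2f}: "
          f"A = {mistakes_per_1000(fn_a, fp_a, frac):5.1f}, "
          f"B = {mistakes_per_1000(fn_b, fp_b, frac):5.1f}, "
          f"lazy = {mistakes_per_1000(fn_lazy, fp_lazy, frac):5.1f}")

# Crossover: solve fn_a*p + fp_a*(1-p) == fn_b*p + fp_b*(1-p) for p.
crossover = (fp_a - fp_b) / ((fp_a - fp_b) - (fn_a - fn_b))
print(f"B overtakes A once the biased fraction exceeds {crossover:.2f}")  # ~0.62
```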