Hi
I was doing an exercise in the Real Python book. The task was to simulate a coin toss: "I keep flipping a fair coin until I've seen it land on both heads and tails at least once each – in other words, after I flip the coin the first time, I continue to flip it until I get a different result. On average, how many times will I have to flip the coin total?"
So I wrote this code:
from __future__ import division
from random import randint
flips = 0
trials = 10000
for trial in range(0, trials):
    first_flip = randint(0, 1)
    while randint(0, 1) == first_flip:  # keep flipping while it matches the first flip
        flips += 1
    flips += 1  # count the flip that finally came up different
print "Average =", flips / trials
This produces an average of about 2, which I thought would be right considering a coin has two sides.
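To sanity-check what my loop actually counts, I also rewrote it per trial as a small function (the name count_one_trial is just mine for illustration); it comes out around 2 as well:
from __future__ import division
from random import randint

def count_one_trial():
    # one trial, counted exactly the way my loop above counts it
    flips = 0
    first_flip = randint(0, 1)
    while randint(0, 1) == first_flip:  # same side as the first flip, keep going
        flips += 1
    flips += 1  # the flip that finally came up different
    return flips

trials = 10000
total = sum(count_one_trial() for trial in range(trials))
print "Average =", total / trials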
However, the book's solution is different:
from __future__ import division
from random import randint
flips = 0
trials = 10000
for trial in range(0, trials):
    flips += 1  # first flip
    if randint(0, 1) == 0:  # flipped tails on first flip
        while randint(0, 1) == 0:  # keep flipping tails
            flips += 1
        flips += 1  # finally flipped heads
    else:  # otherwise, flipped heads on first flip
        while randint(0, 1) == 1:  # keep flipping heads
            flips += 1
        flips += 1  # finally flipped tails
print flips / trials
And this produces an average of about 3.
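I did the same per-trial rewrite of the book's version, following its comments as closely as I could (again, the function name is just mine), and it also lands around 3:
from __future__ import division
from random import randint

def count_one_trial_book():
    # one trial, counted the way the book's solution counts it
    flips = 1  # first flip
    if randint(0, 1) == 0:  # flipped tails on the first flip
        while randint(0, 1) == 0:  # keep flipping tails
            flips += 1
        flips += 1  # finally flipped heads
    else:  # otherwise, flipped heads on the first flip
        while randint(0, 1) == 1:  # keep flipping heads
            flips += 1
        flips += 1  # finally flipped tails
    return flips

trials = 10000
total = sum(count_one_trial_book() for trial in range(trials))
print "Average =", total / trials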
The only difference I can see is that in my code I don't care whether it's heads or tails; I just check whether each flip is the same as the first flip.
Why are the averages different?