# Zadorozhniy and the Monty Hall problem


We consider a slightly different statement of the problem, proposed by Zadorozhniy: say there is a hostile aircraft at $L$, one of the locations $\{1,2,3\}$, and you are a fighter pilot asked to destroy it. You pick some location $X\in\{1,2,3\}$. Now suppose you receive intel from the HQ informing you that the location $R\neq X$ is clear (i.e. $R\neq L$).

Should you switch your target?

First, a recap of Monty Hall: there are three doors, with a car behind one of them and goats behind the rest. You pick one door; the host of the show (who knows where the car is) opens another door and (always) reveals a goat. Although the first thought may be that both closed doors are now equally likely to hide the car, the fact is that switching is better. Here is why: when making the initial choice, you pick a goat door in two out of three cases. Then there is only one goat door left for the host to choose from, and the only unmentioned door is the one with the car. The only case in which switching leaves you with a goat is when your initial choice was a hit, and that happens in just one of the three cases.
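
The counting argument can be checked by brute force: enumerate all nine equally likely (car, pick) pairs and note that switching wins exactly when the initial pick missed the car.

```
from fractions import Fraction

# Enumerate the 9 equally likely (car, pick) pairs.
doors = (1, 2, 3)
switch_wins = Fraction(0)
for car in doors:
    for pick in doors:
        # After the host reveals a goat, switching lands on the car
        # precisely when the initial pick was a goat door.
        if pick != car:
            switch_wins += Fraction(1, 9)

print(switch_wins)  # prints 2/3
```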

More rigorously: the probability space is $$\Omega=\{ (L,X,R):\ L,X,R{=}\overline{1,3} \}$$ with $L$ the true location, $X$ the initial choice and $R$ the host-revealed door. The natural assumption is that $L$ and $X$ are independent and uniform, i.e. the marginals are $$Pr(L=l)=\frac13,\qquad Pr(X=x)=\frac13.$$ The asymmetry of the Monty Hall problem comes from the conditional distribution of $R$. If you happen to pick the right door ($X=L$), the host opens either of the two remaining doors at random: $$Pr(R=r\ |\ X=L) = \frac12,\qquad r\neq X.$$ But whenever you draw a goat, the host is bound to reveal the only losing position left: $$Pr(R=r\ |\ X=x,\ L=l,\ x\neq l) = \left\{\begin{aligned} & 1, && r\notin \{x, l\},\\ & 0, && r\in \{x, l\}. \end{aligned}\right.$$

Suppose we initially pick the door $x=1$ and the host opens door $3$. $$Pr(X\neq L\ |\ X=1,\ R=3,\ R\neq L) = \frac{ Pr\left\{ (2,1,3) \right\} }{ Pr\left\{ (1,1,3), (2,1,3) \right\} } = \frac{ \frac19 }{ \frac12\cdot\frac19 + \frac19 } = \frac23$$
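
This posterior can be verified exactly, without simulation, by building the joint distribution $Pr(L,X,R)$ from the marginals and the host's rule (a sketch using exact rational arithmetic):

```
from fractions import Fraction

joint = {}
for L in (1, 2, 3):
    for X in (1, 2, 3):
        for R in (1, 2, 3):
            if R == X:
                continue                 # the host never opens the picked door
            if X == L:
                p_r = Fraction(1, 2)     # host opens either remaining door
            else:
                # host is forced to open the single non-car door left
                p_r = Fraction(1) if R != L else Fraction(0)
            joint[(L, X, R)] = Fraction(1, 9) * p_r

num = joint[(2, 1, 3)]                   # the tuple where switching wins
den = joint[(1, 1, 3)] + joint[(2, 1, 3)]
print(num / den)  # prints 2/3
```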

```
import numpy as np
import pandas as pd

N_ITERS = int(1e6)
np.random.seed(431)

class IntelProblem:
    def __init__(self):
        self.cnt = dict()

    def test(self, exclude=('X', 'L')):
        # L and X are independent uniform draws (replace=True by default)
        X, L = np.random.choice([1, 2, 3], size=2)
        known = {'X': X, 'L': L}
        # the revealer never opens the excluded locations:
        # exclude=('X', 'L') is the Monty Hall host (knows where L is),
        # exclude=('X',) is the HQ, which may scout the aircraft itself
        forbidden = {known[v] for v in exclude}
        remaining = [d for d in (1, 2, 3) if d not in forbidden]
        R = np.random.choice(remaining)
        # store tuples in the same (L, X, R) order as in the text
        self.cnt[(L, X, R)] = self.cnt.get((L, X, R), 0) + 1

    def distribution(self):
        n = sum(self.cnt.values())
        return {lxr: cnt / n for lxr, cnt in self.cnt.items()}

    def distribution_df(self):
        df = pd.DataFrame(columns=['LXR', 'p'],
                          data=list(self.distribution().items()))
        df[['L', 'X', 'R']] = df['LXR'].apply(pd.Series)
        df.drop(['LXR'], axis=1, inplace=True)
        return df.sort_values(by=['p', 'L', 'X', 'R'], ascending=False)
```

```
mh = IntelProblem()
for i in range(N_ITERS):
    mh.test()
mh.distribution_df()
```

```
def switch_stick_probs(model):
    p_switch_wins, p_stick_wins = 0, 0
    for (L, X, R), p in model.distribution().items():
        if L == X:
            # the initial pick was right: only sticking wins
            p_stick_wins += p
        else:
            # switching wins: either the aircraft was discovered at R
            # (retarget to R), or it sits at the one remaining location
            p_switch_wins += p
    return p_switch_wins, p_stick_wins
```

```
SWITCH_STICK_MSG = 'In general: switching wins with prob. %s and sticking wins with prob. %s'
p_switch_wins, p_stick_wins = switch_stick_probs(mh)
print(SWITCH_STICK_MSG % (p_switch_wins, p_stick_wins))
mh_dist = mh.distribution()
print('Pr(switch | chosen 1, revealed 3) = ', mh_dist[(2,1,3)]/(mh_dist[(2,1,3)] + mh_dist[(1,1,3)]))
```

Now let us consider the jet fighter's problem. Suppose that the aircraft is located at $L\in\{1,2,3\}$ (equiprobably). Then we make the initial choice $X$ uniformly at random.

The question is: how do the headquarters behave?

Suppose that once we report our initial choice, HQ picks and scouts one of the remaining locations: $R\neq X$. The search team then either discovers the aircraft (in which case you of course switch to the right target) or finds the sector clear.
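
The HQ's revealing rule described above can be sketched as a tiny helper (`hq_reveal` is a hypothetical name, not part of the original code):

```
import random

def hq_reveal(x, locations=(1, 2, 3)):
    """Scout one of the locations other than x, uniformly at random.

    Unlike the Monty Hall host, HQ does not know where the aircraft is,
    so the scouted location may turn out to contain it.
    """
    return random.choice([loc for loc in locations if loc != x])
```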

In this model, if you pick $X=1$ initially and are then informed that location $R=3$ is clear ($R\neq L$), the posterior probability that you picked the wrong target is:

$$Pr(X\neq L\ |\ X=1,\ R=3,\ L\neq R) = \frac{ Pr\left\{ (2,1,3) \right\} }{ Pr\left\{ (1,1,3), (2,1,3) \right\} } = \frac{ \frac12\cdot\frac19 }{ \frac12\cdot\frac19 + \frac12\cdot\frac19 } = \frac12$$

```
fhq = IntelProblem()
for i in range(N_ITERS):
    fhq.test(exclude=['X'])
display(fhq.distribution_df())
print(SWITCH_STICK_MSG % switch_stick_probs(fhq))
fhq_dist = fhq.distribution()
print('P(switch | chosen 1, revealed 3) = ', fhq_dist[(2,1,3)]/(fhq_dist[(2,1,3)] + fhq_dist[(1,1,3)]))
```
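
The 1/2 can likewise be confirmed exactly for the HQ model, where $R$ is uniform over the two locations other than $X$ regardless of where the aircraft is:

```
from fractions import Fraction

joint = {}
for L in (1, 2, 3):
    for X in (1, 2, 3):
        for R in (1, 2, 3):
            if R == X:
                continue                      # HQ scouts a location we did not pick
            joint[(L, X, R)] = Fraction(1, 9) * Fraction(1, 2)

num = joint[(2, 1, 3)]                        # clear intel, switching wins
den = joint[(1, 1, 3)] + joint[(2, 1, 3)]     # clear intel, either target
print(num / den)  # prints 1/2
```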

Uh-huh. It looks like in any specific realization of this model, the intel provided by HQ leaves us equally uncertain about both remaining locations. But averaging over all possible scenarios (including those where HQ discovers the aircraft itself) still gives switching a $2:1$ advantage!

So if we were to perform this experiment repeatedly, switching would still be the better strategy, where "switching" means retargeting to $R$ whenever the aircraft is discovered there and to the remaining location otherwise. This strategy wins exactly when the initial pick missed:

$$Pr(\text{switching wins}) = Pr(X\neq L) = \frac23.$$

At the same time, conditioned on the intel being "clear", the two remaining locations are genuinely symmetric:

$$Pr(X\neq L\ | \ R\neq X,\ R\neq L) = \frac{ Pr \left\{ (1,2,3), (2,1,3), (1,3,2), (3,1,2), (2,3,1), (3,2,1) \right\} }{ Pr \left\{ (1,1,3), (1,2,3), (2,1,3), (2,2,3), (1,1,2), (1,3,2), (3,1,2), (3,3,2), (2,2,1), (2,3,1), (3,2,1), (3,3,1) \right\} } = \frac{6/18}{12/18} = \frac12.$$

This is even less intuitive than the Monty Hall problem: in any specific "clear" realization our knowledge about the two locations is exactly the same, and engaging either target succeeds with equal probability. Yet from the frequentist perspective switching is better, because it also wins every time the aircraft is discovered at $R$!
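
The averaging claim can be checked exactly as well. The sketch below assumes, as described earlier, that discovering the aircraft at $R$ counts as a win for the switching strategy (you simply retarget to $R$):

```
from fractions import Fraction

switch_wins = stick_wins = Fraction(0)
for L in (1, 2, 3):
    for X in (1, 2, 3):
        for R in (1, 2, 3):
            if R == X:
                continue              # HQ never scouts our own pick
            p = Fraction(1, 18)       # all 18 remaining tuples are equiprobable
            if X == L:
                stick_wins += p
            else:
                # switching wins both when R == L (aircraft discovered)
                # and when R != L (the one remaining location holds it)
                switch_wins += p

print(switch_wins, stick_wins)  # prints 2/3 1/3
```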

This is closely reminiscent of Simpson's paradox and of gerrymandering: the aggregate ratio can differ from what every individual subgroup suggests.
