Zadorozhniy and Monty hall problem

Zadorozhnii and Monty Hall problem

We consider slightly different statement proposed by Zadorozhnii: say there's a hostile aircraft at $L$ --- one of the locations $\{1,2,3\}$ and you're a fighter pilot that is asked to destroy the aircraft. You pick some location $X\in\{1,2,3\}$. Now suppose you get an intel from the HQ that informs you that the location $C\neq X$ is clear (i.e. $C\neq L$).

Shall you switch your target?


First on Monty Hall: there are three doors with a car behind one of them and goats behind the rest. You pick one door; the host of the show (who knows where the car is) opens another door and (always) reveals a goat. Despite the first thought may be that both closed doors are now equally likely to contain the car the fact is that switching is better. Here's why: when making an initial choice you end up picking a goat-door in two out of three cases. Then there's only one goat-door left for a host to choose from and the only unmentioned door is the one with the car. The only case when switching leaves you with a goat is when your initial choice was a hit which happens rarely: it is only one out of the three cases.

More rigorously. The probability space is $$\Omega=\{ (L,X,R);\ L,X,R{=}\overline{1,3} \}$$ with $L$ the true location, $X$ the initial choice and $R$ the host-revealed door. The natural assumption is that $L$ and $X$ are uniform, i.e. the marginals are: $$P(L=l)=\frac13,$$ $$P(X=x)=\frac13.$$ The skewedness of the Monty Hall problem comes from the conditional distribution of $R$. If you happen to pick the right door ($X=L$) then the host might open any of the two doors left at random: $$Pr(R=r\ |\ X=L,\ R\neq X) = \frac12.$$ But whenever you draw a goat, the host is bound to reveal the only losing position left: $$Pr(R=r\ |\ X=x,\ L=l,\ x\neq l) = \left\{\begin{aligned} & 1, && r\notin \{x, l\},\\ & 0, && r\in \{x, l\}. \end{aligned}\right.$$

Suppose we initially pick the door $x=1$ and the host opens the door $3$. $$Pr(X\neq L\ |\ X=1,\ R=3,\ R\neq L) = \frac{ Pr\left\{ (2,1,3) \right\} }{ Pr\left\{ (1,1,3), (2,1,3), \right\} } = \frac{ \frac19 }{ \frac12\frac19 + \frac19 } = \frac23$$

In [1]:
import numpy as np, pandas as pd


N_ITERS = int(1e6)
np.random.seed(431)


class IntelProblem:

    def __init__(self):
        self.cnt = dict()
        
    def test(self, exclude=['X', 'L']):
        X, L = np.random.choice([1, 2, 3], size=2)
        remaining = [1, 2, 3]
        for v in exclude:
            try:
                remaining.remove(locals()[v])
            except:
                pass
        R = np.random.choice(remaining)
        self.cnt[(X, L, R)] = self.cnt.get((X, L, R), 0) + 1
        
    def distribution(self):
        n = sum(self.cnt.values())
        return { lxr: cnt/n for lxr, cnt in self.cnt.items() }
        
    def distribution_df(self):
        df = pd.DataFrame(columns=['LXR', 'p'],
                          data=[i for i in self.distribution().items()])
        df[['L', 'X', 'R']] = df['LXR'].apply(pd.Series)
        df.drop(['LXR'], axis=1, inplace=True)
        return df.sort_values(by=['p', 'L', 'X', 'R'], ascending=False)
In [2]:
mh = IntelProblem()
for i in range(N_ITERS):
    mh.test()
mh.distribution_df()
Out[2]:
p L X R
9 0.111499 1 3 2
6 0.111268 3 1 2
10 0.111211 2 1 3
3 0.111209 1 2 3
0 0.110904 2 3 1
4 0.110473 3 2 1
2 0.055789 2 2 3
11 0.055741 3 3 1
5 0.055690 1 1 2
8 0.055534 2 2 1
7 0.055400 1 1 3
1 0.055282 3 3 2
In [3]:
def switch_stick_probs(model):
    p_switch_wins, p_stick_wins = 0, 0
    for (L, X, R), p in model.distribution().items():
        if L == R:
            continue
        if L == X:
            p_stick_wins += p
        else:
            p_switch_wins += p
    return p_switch_wins, p_stick_wins
In [4]:
SWITCH_STICK_MSG = 'In general: switching wins with prob. %s and sticking wins with prob. %s'
p_switch_wins, p_stick_wins = switch_stick_probs(mh)
print(SWITCH_STICK_MSG % (p_switch_wins, p_stick_wins))
mh_dist = mh.distribution()
print('Pr(switch | chosen 1, revealed 3) = ', mh_dist[(2,1,3)]/(mh_dist[(2,1,3)] + mh_dist[(1,1,3)]))
In general: switching wins with prob. 0.6665639999999999 and sticking wins with prob. 0.333436
Pr(switch | chosen 1, revealed 3) =  0.6674889413063964

Now let us consider the jet fighter's problem. Suppose that the aircraft is located at $L\in\{1,2,3\}$ (equiprobably). Then we make at random the initial choice $X$.

The question is: how the headquarters behave?

Suppose that once we report our initial choice HQ decides to pick and scout one of the locations left: $R\neq X$. After that the search team might have either discovered the aircraft (then of course you'll switch to the right target) or found out the sector's clear.

In this model, if you pick $X=1$ initially and then you get informed that $R=3$ and this location is clear ($R\neq L$), the posterior probability of you picking the wrong target is:

$$Pr(X\neq L\ |\ X=1,\ R=3,\ L\neq R) = \frac{ Pr\left\{ (2,1,3) \right\} }{ Pr\left\{ (1,1,3), (2,1,3) \right\} } = \frac{ \frac12\frac19 }{ \frac12\frac19 + \frac12\frac19 } = \frac12$$
In [5]:
fhq = IntelProblem()
for i in range(N_ITERS):
    fhq.test(exclude=['X'])
display(fhq.distribution_df())
print(SWITCH_STICK_MSG % switch_stick_probs(fhq))
fhq_dist = fhq.distribution()
print('P(switch | chosen 1, revealed 3) = ', fhq_dist[(2,1,3)]/(fhq_dist[(2,1,3)] + fhq_dist[(1,1,3)]))
p L X R
15 0.055859 3 2 2
12 0.055798 3 2 1
9 0.055760 2 2 3
10 0.055678 1 2 3
0 0.055676 3 3 1
5 0.055631 1 3 2
7 0.055618 2 3 1
16 0.055607 2 2 1
6 0.055603 1 1 2
4 0.055572 3 1 2
13 0.055563 2 1 1
14 0.055547 3 3 2
2 0.055468 2 3 3
17 0.055446 2 1 3
8 0.055430 1 2 2
11 0.055291 1 1 3
1 0.055245 3 1 1
3 0.055208 1 3 3
In general: switching wins with prob. 0.666516 and sticking wins with prob. 0.333484
P(switch | chosen 1, revealed 3) =  0.5006998564165546

Uh-huh. It looks like in a specific implementation of this model, with the intel provided by HQ we are equally uncertain about both remaining locations. But averaging over all possible scenarios still gives us the $2:1$ ratio!

So if we're to perform this experiment repeatedly, switching still seems more promising. Indeed the overall probability is:

$$Pr(X\neq L\ | \ R\neq X,\ R\neq L) = \frac{ Pr \left\{ (1,2,3), (2,1,3), (1,3,2), (3,1,2), (2,3,1), (3,2,1) \right\} }{ Pr \left\{ (1,1,3), (1,2,3), (2,1,3), (2,2,3), (1,1,2), (1,3,2), (3,1,2), (3,3,2), (2,2,1), (2,3,1), (3,2,1), (3,3,1) \right\} } = \frac69,$$$$Pr(X\neq L\ | \ R\neq X,\ R\neq L) = \frac23.$$

It is even less intuitive than Monty Hall problem in the sense that in any specific realization our knowledge about both locations is exactly the same. Engaging either target results in success with equal probabilities. And yet in frequentist perspective switching is better!

This definitely has something to do with Simpson's paradox and Gerrymandering.

Comments

Comments powered by Disqus