Bayes MLE, MAP

I'm not sure how to go about this, as the statement is just to

  • find the Bayes estimate of $\theta$ under the square loss function

  • show that the derived Bayes estimate converges to the MLE as the sample size becomes large

I'm a bit confused by the phrase 'sample size becomes large', because I'm not sure whether that refers to $n$ only, or to $n$ and $x$ increasing together (so that the proportion stays the same).

Here's the context:

I have $n$ observations and $x$ successes, $\theta$ is the proportion of successes and this is what we're interested in.

The prior distribution for $\theta$ is $\mathrm{Beta}(\alpha , \beta)$.

Which means that we have

\begin{align} E(\theta \vert \vec{x}) &= \frac{\alpha + \sum_i x_i}{\alpha + \beta + n} \\ &= \frac{\alpha + \beta}{\alpha + \beta + n}E(\theta) + \frac{n}{\alpha + \beta + n}\hat{\theta} \end{align}


\begin{align} E(\theta) = \frac{\alpha}{\alpha + \beta} , \;\;\;\; \hat{\theta} = \frac{\sum_i x_i}{n} \end{align}

So it seems that I need to consider $\vec{x}$ here, since I need it for $\hat{\theta}$?
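As a sanity check on the algebra above, here is a small numerical sketch (the prior parameters and data are made up for illustration, not from the problem) confirming that the posterior mean equals the convex combination of the prior mean and $\hat{\theta}$:

```python
# Illustrative check of the Beta-Binomial posterior mean identity.
# alpha, beta, n, x are made-up values, not from the question.
from math import isclose

alpha, beta = 2.0, 5.0   # assumed Beta prior parameters
n, x = 20, 8             # n observations, x successes

# The posterior is Beta(alpha + x, beta + n - x), so its mean is:
post_mean = (alpha + x) / (alpha + beta + n)

# The same quantity as a convex combination of prior mean and MLE:
prior_mean = alpha / (alpha + beta)
theta_hat = x / n
combo = ((alpha + beta) / (alpha + beta + n)) * prior_mean \
        + (n / (alpha + beta + n)) * theta_hat

print(post_mean, combo)  # both 10/27 ≈ 0.3704
```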

Anyway, the square loss function gives

\begin{align} E((\hat{\theta} - \theta)^2 \vert \vec{x})
&= E(\hat{\theta}^2 - 2\hat{\theta}\theta + \theta^2 \vert \vec{x} ) \\ &= \hat{\theta}^2 - 2 \hat{\theta} E(\theta \vert \vec{x}) + E(\theta^2 \vert \vec{x}) - E^2(\theta \vert \vec{x}) + E^2(\theta \vert \vec{x}) \\ &= \left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x}) \end{align}

$\longrightarrow \min_{\hat{\theta}} E( (\hat{\theta} - \theta)^2 \vert \vec{x}) = Var(\theta \vert \vec{x})$, attained at $\hat{\theta} = E(\theta \vert \vec{x})$, so the Bayes estimate under square loss is the posterior mean.
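To convince myself of this step numerically, a grid-based sketch (with made-up parameter values) showing that $E((d - \theta)^2 \vert \vec{x})$ is minimised at $d = E(\theta \vert \vec{x})$, with minimum value $Var(\theta \vert \vec{x})$:

```python
# Grid sketch (illustrative parameters): the expected square loss
# E((d - theta)^2 | x) over the Beta posterior is minimised at the
# posterior mean, and the minimum equals the posterior variance.
import numpy as np

alpha, beta_prior = 2.0, 5.0
n, x = 20, 8
a, b = alpha + x, beta_prior + n - x          # Beta posterior parameters

theta = np.linspace(1e-6, 1 - 1e-6, 20001)
dtheta = theta[1] - theta[0]
pdf = theta**(a - 1) * (1 - theta)**(b - 1)   # unnormalised Beta density
pdf /= pdf.sum() * dtheta                     # normalise on the grid

d = np.linspace(0.0, 1.0, 2001)               # candidate decisions
risk = np.array([((di - theta)**2 * pdf).sum() * dtheta for di in d])
best = d[np.argmin(risk)]

post_mean = a / (a + b)
post_var = a * b / ((a + b)**2 * (a + b + 1))
print(best, post_mean)       # agree to grid precision
print(risk.min(), post_var)  # agree to grid precision
```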

The maximum likelihood estimate for $\theta$ is found by maximising the likelihood $p^x(1 - p)^{n-x}$, which is maximal at $p = x/n$.
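A quick grid check of this (with illustrative $n$ and $x$, not from the problem), confirming the log-likelihood is maximised, not minimised, at $p = x/n$:

```python
# Grid check (illustrative n, x): the log-likelihood
# x*log(p) + (n - x)*log(1 - p) is MAXIMISED at p = x/n.
from math import log

n, x = 20, 8
ps = [i / 10000 for i in range(1, 10000)]  # interior grid of p values
loglik = [x * log(p) + (n - x) * log(1 - p) for p in ps]
p_hat = ps[loglik.index(max(loglik))]
print(p_hat, x / n)  # both 0.4
```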

I really don't see how the following works though, that as $n \to \infty$ we have

\begin{align} \left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x}) \to x/n \end{align}

The right hand side of this will tend to zero, or should I be holding the right hand side fixed and adjusting the LHS?

If I sub some terms in then I get

\begin{align} \left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x}) &= \left( \hat{\theta} - \left( \frac{\alpha + \beta}{\alpha + \beta + n}E(\theta) + \frac{n}{\alpha + \beta + n}\hat{\theta} \right) \right)^2 + Var(\theta \vert \vec{x}) \\ &= \left( \frac{\sum_i x_i}{n} - \left( \frac{\alpha (\alpha + \beta)}{(\alpha + \beta)(\alpha + \beta + n)} + \frac{n}{\alpha + \beta + n} \frac{\sum_i x_i}{n} \right) \right)^2 + Var(\theta \vert \vec{x}) \end{align}

I don't really see why this doesn't just give $Var(\theta \vert \vec{x})$ as $n$ increases.
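One way I tried to probe this is to hold the proportion fixed and let $n$ grow, which seems to suggest that the quantity converging to $x/n$ is the posterior mean itself, while the posterior variance goes to zero. A sketch with made-up prior parameters:

```python
# Sketch: fix the success proportion p = x/n and let n grow, using
# made-up prior parameters. The posterior mean tends to p (the MLE)
# and the posterior variance tends to 0.
alpha, beta = 2.0, 5.0   # assumed Beta prior parameters
p = 0.4                  # fixed proportion of successes (illustrative)

for n in (10, 100, 1000, 100000):
    x = p * n
    a, b = alpha + x, beta + n - x   # Beta posterior parameters
    post_mean = a / (a + b)
    post_var = a * b / ((a + b)**2 * (a + b + 1))
    print(n, post_mean, post_var)
```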

Hopefully the error I'm making is quite obvious, so I'll leave this for now and wait for assistance.


Sunday, 17 March 2019 10:35 GMT