Bayesian Approach to Parameter Estimation
Treat $\theta$ as a random variable with prior $p(\theta)$.
According to Bayes' Rule,

$$p(\theta \mid X) = \frac{p(X \mid \theta)\, p(\theta)}{p(X)}$$
The ML estimate is given by:

$$\hat{\theta}_{ML} = \arg\max_{\theta}\, p(X \mid \theta)$$
The MAP estimate is given by:

$$\hat{\theta}_{MAP} = \arg\max_{\theta}\, p(\theta \mid X)$$
The Bayes estimate is given by:

$$\hat{\theta}_{Bayes} = E[\theta \mid X] = \int \theta\, p(\theta \mid X)\, d\theta$$
(the integral becomes a summation for discrete values)
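To make the three definitions concrete, here is a minimal Python sketch (the function name and the NumPy dependency are illustrative choices, not from the original notes) that computes all three estimates when $\theta$ ranges over a discrete set of candidates:

```python
import numpy as np

def estimates(thetas, prior, likelihood):
    """ML, MAP, and Bayes estimates over a discrete set of candidates.

    thetas     : candidate parameter values
    prior      : prior probability of each candidate, p(theta)
    likelihood : p(X | theta) for each candidate
    """
    thetas = np.asarray(thetas, dtype=float)
    prior = np.asarray(prior, dtype=float)
    likelihood = np.asarray(likelihood, dtype=float)

    posterior = likelihood * prior
    posterior /= posterior.sum()              # normalize by p(X)

    theta_ml = thetas[np.argmax(likelihood)]  # argmax of p(X | theta)
    theta_map = thetas[np.argmax(posterior)]  # argmax of p(theta | X)
    theta_bayes = float(np.sum(thetas * posterior))  # posterior mean
    return theta_ml, theta_map, theta_bayes
```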
Consider a distribution that is uniform on $[0, \theta]$, parameterized by $\theta$:

$$p(x \mid \theta) = \begin{cases} 1/\theta & 0 \le x \le \theta \\ 0 & \text{otherwise} \end{cases}$$

Say the discrete prior on $\theta$ is supported on two hypotheses: $P(\theta = 1) = p_1$ and $P(\theta = 2) = p_2 = 1 - p_1$.
Suppose $X = \{0.5, 1.3, 0.7\}$. Given $X$, we know that $\theta \ge \max_i x_i = 1.3$ and therefore $p(X \mid \theta = 1) = 0$. So, the ML, MAP, and Bayes hypotheses are all 2.
Now, suppose $X = \{0.5, 0.7, 0.1\}$.

The posterior of $\theta$ given $X$ is given by:

$$P(\theta \mid X) = \frac{p(X \mid \theta)\, P(\theta)}{\sum_{\theta'} p(X \mid \theta')\, P(\theta')}$$

Given $X$, we have:

$$p(X \mid \theta = 1) = (1/1)^3 = 1 \qquad p(X \mid \theta = 2) = (1/2)^3 = \frac{1}{8}$$

So, $P(\theta = 1 \mid X) \propto p_1$ and $P(\theta = 2 \mid X) \propto p_2/8$.

Therefore, in this case the ML hypothesis is 1 (the prior plays no role in it), and for any prior with $p_1 > p_2/8$ the MAP hypothesis is also 1. The Bayes estimate can be computed as

$$\hat{\theta}_{Bayes} = 1 \cdot P(\theta = 1 \mid X) + 2 \cdot P(\theta = 2 \mid X) = \frac{8 p_1 + 2 p_2}{8 p_1 + p_2}$$
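A short sketch of both datasets in this example. The prior values $p_1 = 0.7$, $p_2 = 0.3$ are an illustrative assumption (the notes keep the prior symbolic); any prior with $p_1 > p_2/8$ gives the same ML and MAP hypotheses:

```python
import numpy as np

thetas = np.array([1.0, 2.0])
prior = np.array([0.7, 0.3])   # assumed, for illustration only

def uniform_likelihood(X, theta):
    # p(X | theta) = (1/theta)^n if all samples lie in [0, theta], else 0
    X = np.asarray(X)
    return (1.0 / theta) ** len(X) if X.max() <= theta else 0.0

for X in ([0.5, 1.3, 0.7], [0.5, 0.7, 0.1]):
    lik = np.array([uniform_likelihood(X, t) for t in thetas])
    post = lik * prior
    post /= post.sum()
    print(X,
          "ML:", thetas[np.argmax(lik)],     # 2 then 1
          "MAP:", thetas[np.argmax(post)],   # 2 then 1
          "Bayes:", np.sum(thetas * post))   # 2 then ~1.05
```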
Assume the data $X = \{x_1, \ldots, x_n\}$ is drawn from a Gaussian with a known variance $\sigma^2$ and an unknown mean $\mu$ (the mean is now the $\theta$ we want to estimate).

Assume a Gaussian prior on $\mu$, i.e. $\mu \sim N(\mu_0, \sigma_0^2)$, where $\mu_0$ and $\sigma_0$ are known.
The generative story is: first, draw $\mu$ from $N(\mu_0, \sigma_0^2)$ (this is the mean of the Gaussian from which $X$ was chosen; it is what we need to estimate). Then, generate $X$ from $N(\mu, \sigma^2)$.
The posterior over $\mu$ is again Gaussian, $p(\mu \mid X) = N(\mu_n, \sigma_n^2)$, where

$$\mu_n = \frac{n \sigma_0^2}{n \sigma_0^2 + \sigma^2}\, m + \frac{\sigma^2}{n \sigma_0^2 + \sigma^2}\, \mu_0 \qquad \frac{1}{\sigma_n^2} = \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}$$

and $m = \frac{1}{n}\sum_i x_i$ is the sample mean. As $n \to \infty$, $m$ dominates the weighted sum of $m$ and $\mu_0$, so the Bayes estimate approaches the ML estimate.
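A minimal sketch of this posterior update (the function name is an illustrative choice):

```python
import numpy as np

def gaussian_mean_posterior(X, sigma2, mu0, sigma0_2):
    """Posterior N(mu_n, sigma_n^2) over the unknown mean, given
    known noise variance sigma2 and prior N(mu0, sigma0_2)."""
    n = len(X)
    m = np.mean(X)                                # sample mean (ML estimate)
    w = n * sigma0_2 / (n * sigma0_2 + sigma2)    # weight on the data
    mu_n = w * m + (1 - w) * mu0                  # weighted sum of m and mu0
    sigma_n_2 = 1.0 / (n / sigma2 + 1.0 / sigma0_2)
    return mu_n, sigma_n_2
```

As the sketch makes explicit, the data weight $w \to 1$ as $n$ grows, which is exactly the sense in which $m$ dominates $\mu_0$.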
For comparison, the ML estimate for the mean is $m$, i.e. the sample mean. The ML estimate of variance is $\hat{\sigma}^2 = \frac{1}{n}\sum_i (x_i - m)^2$; this is biased since $E[\hat{\sigma}^2] = \frac{n-1}{n}\sigma^2 \ne \sigma^2$. Note that the estimate is lower than the actual value (in expectation) because we use the sample mean $m$ to compute it instead of the actual mean. However, $\frac{1}{n-1}\sum_i (x_i - m)^2$ is an unbiased estimate.
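A quick simulation illustrating the bias (all constants here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, trials = 0.0, 4.0, 5, 100_000

# Draw many datasets of size n and average the two variance estimators.
X = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
m = X.mean(axis=1, keepdims=True)                          # per-dataset sample mean
ml = np.mean(((X - m) ** 2).sum(axis=1) / n)               # divides by n
unbiased = np.mean(((X - m) ** 2).sum(axis=1) / (n - 1))   # divides by n - 1

print(ml)        # ~ (n-1)/n * sigma2 = 3.2, biased low
print(unbiased)  # ~ sigma2 = 4.0
```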