## S-114.600 Introduction to Bayesian Modeling

**a)** The Metropolis algorithm

Write a program which uses the Metropolis algorithm to sample from a 2D Gaussian distribution. See Bayesian Data Analysis, p. 289.

In this case the proposal distribution q is a Gaussian distribution
with mean at the current state tt_i and a diagonal covariance matrix, that is,
the distribution is isotropic (equally wide in all directions). Use a Gaussian
proposal distribution with scale *sp*.
If you use mvnrnd, note that it takes the covariance matrix as an argument:
if *sp* is the standard deviation of the proposal distribution, you can produce
the desired covariance matrix in Matlab with sp^2*eye(2).
With a diagonal covariance the components are independent, so you can
also use randn(1,2)*sp to produce the proposals.
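The proposal step above can be sketched in a few lines. This is a Python/NumPy illustration of the same idea (the exercise itself is in Matlab); the names `sp`, `tt` and `tt_new` follow the text, the seed and scale value are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

sp = 0.5                   # proposal scale (standard deviation); example value
tt = np.array([0.0, 0.0])  # current state tt_i of the chain

# Isotropic Gaussian proposal centred at the current state.
# This is the NumPy equivalent of Matlab's mvnrnd(tt, sp^2*eye(2))
# or, using independence of the components, tt + randn(1,2)*sp.
tt_new = tt + sp * rng.standard_normal(2)
```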

Note that even though here both the target distribution and the proposal distribution are Gaussian, this is not necessary. Make sure that you understand the difference between the target and the proposal distributions.

The target distribution is Gaussian with covariance `S = [1 r; r 1]`.
Start with r = 0.8, after which you can experiment with increasing r, that is,
increasing correlation between the variables. For the sake of the exercise
we assume that we cannot draw samples directly from the target distribution
but can compute the probability density at a specified point (this is
the usual case with models and distributions more complex than the Gaussian).
`mnorm_lpdf(ttnew,[y1 y2],S)` computes the logarithm of the probability
density of a multivariate Gaussian distribution.

Because of machine precision it is advisable to use the logarithm of the probability density and also to perform all other computations with logarithms. All necessary log-pdf-functions are provided.

An essential part of the Metropolis algorithm is the acceptance term
`alpha = min(1, a/b)`. This can be computed directly, but
better precision is achieved by using
`alpha = min(1, exp(log(a) - log(b)))`. With the
log-pdf-functions we can compute `log(a)` and `log(b)` directly.
Typically `a` and `b` can be either very large or very small, in which case
the division is not exact (over/underflows possible). Subtracting the
logarithms is more precise.
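The underflow problem is easy to demonstrate. In this Python sketch (the values -800 and -790 are arbitrary example log-densities, not from the exercise), the raw densities both round to zero in double precision, so the direct ratio is lost, while the log-space version is exact:

```python
import numpy as np

# Example log-densities so small that the raw densities underflow to zero.
log_a = -800.0
log_b = -790.0

a = np.exp(log_a)   # underflows to 0.0 in double precision
b = np.exp(log_b)   # underflows to 0.0 as well

with np.errstate(invalid="ignore"):
    alpha_naive = np.divide(a, b)            # 0/0 -> nan, information lost

alpha_log = min(1.0, np.exp(log_a - log_b))  # exp(-10), computed accurately
```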

Experiment with different values of *r*, different starting values,
variable chain lengths and proposals of different widths (e.g.
0.1 < *sp* < 4). Return your program code, a sample figure and
short answers to the following questions:

- What happens to autocorrelation (acorr, acorrtime) when *r* increases?
- How does the initial value affect the length of the burn-in?
- How does the scale of the proposal *sp* affect the rate of rejection, the length of the burn-in and the autocorrelation?
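`acorr` and `acorrtime` are provided as course Matlab files; purely as an illustration of what the sample autocorrelation measures, a minimal NumPy estimator (my own helper, not the course API) could look like this. The AR(1) chain below stands in for MCMC output, since an AR(1) chain with coefficient 0.9 has autocorrelation roughly 0.9^k at lag k:

```python
import numpy as np

def sample_autocorr(x, max_lag):
    """Normalised sample autocorrelation of a 1D chain up to max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# A synthetic chain: AR(1) with coefficient 0.9 (high autocorrelation,
# similar to a Metropolis chain with a large r or a poorly scaled proposal).
rng = np.random.default_rng(0)
z = rng.standard_normal(20000)
chain = np.empty_like(z)
chain[0] = z[0]
for i in range(1, len(z)):
    chain[i] = 0.9 * chain[i - 1] + z[i]

rho = sample_autocorr(chain, 10)   # rho[0] = 1, rho[1] close to 0.9
```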

sampling1.m contains initial information and the code for plotting the sample image. Just add the code for the algorithm (<15 lines).
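The algorithm itself fits comfortably in the stated budget. The following is a self-contained Python/NumPy sketch of the same setup (Gaussian target with covariance `[1 r; r 1]`, isotropic Gaussian proposal of scale sp, acceptance computed in log space); it is not the contents of sampling1.m, and names such as `log_target` and `metropolis` are my own:

```python
import numpy as np

def metropolis(n, sp, tt0, r, seed=0):
    """Metropolis sampler for a 2D Gaussian target with covariance [[1,r],[r,1]]."""
    rng = np.random.default_rng(seed)
    S = np.array([[1.0, r], [r, 1.0]])
    Sinv = np.linalg.inv(S)

    def log_target(x):
        # Unnormalised log-density; the constant cancels in the ratio a/b.
        return -0.5 * x @ Sinv @ x

    tt = np.asarray(tt0, dtype=float)
    lp = log_target(tt)
    samples = np.empty((n, 2))
    n_accept = 0
    for i in range(n):
        tt_new = tt + sp * rng.standard_normal(2)   # isotropic proposal
        lp_new = log_target(tt_new)
        # Accept with probability min(1, exp(log a - log b)).
        if np.log(rng.uniform()) < lp_new - lp:
            tt, lp = tt_new, lp_new
            n_accept += 1
        samples[i] = tt
    return samples, n_accept / n

samples, acc = metropolis(5000, sp=1.0, tt0=[0.0, 0.0], r=0.8)
```

Comparing `acc` across runs with different `sp` values is one quick way to answer the rejection-rate question above.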

**b)** Gibbs sampling

Write a program which uses the Gibbs sampling algorithm to sample
from a 2D Gaussian distribution. See Bayesian Data Analysis, p. 287.

For Gibbs sampling you will need the conditional distributions (p. 288).
Note that 1-r^2 is the *variance* of the conditional Gaussian distribution,
while e.g. `normrnd` takes the standard deviation sqrt(1-r^2).

Note that for Metropolis it is enough to evaluate the density in specified points. In Gibbs sampling we have to be able to draw samples from the conditional distributions of the target distribution.
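For a standard bivariate Gaussian with correlation r, the conditional referred to above is x1 | x2 ~ N(r·x2, 1-r^2). A Python/NumPy illustration of one conditional draw (the values of `r` and `x2` are arbitrary examples), highlighting the variance-versus-deviation pitfall:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.8
x2 = 1.5   # conditioning value of the other coordinate (example)

# Conditional of a standard bivariate Gaussian with correlation r:
#   x1 | x2  ~  N(r * x2, 1 - r^2)
# A Gaussian sampler such as normrnd takes the *standard deviation*,
# so the scale argument must be sqrt(1 - r^2), not 1 - r^2.
x1_draws = rng.normal(r * x2, np.sqrt(1 - r**2), size=100_000)
```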

Experiment with different values of *r*, different starting values,
and variable chain lengths. Return your program code, a sample figure and
short answers to the following questions:

- What happens to autocorrelation (acorr, acorrtime) when *r* increases?
- How does the initial value affect the length of the burn-in?

sampling2.m contains initial information and the code for plotting the sample image. Just add the code for the algorithm (<10 lines).
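As with part a), here is a Python/NumPy sketch of the Gibbs loop under the same setup, alternating the two conditional draws; it is not the contents of sampling2.m, and the function name `gibbs` is my own:

```python
import numpy as np

def gibbs(n, tt0, r, seed=0):
    """Gibbs sampler for a 2D Gaussian with covariance [[1,r],[r,1]]."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - r**2)   # conditional standard deviation, sqrt(1-r^2)
    x1, x2 = tt0
    samples = np.empty((n, 2))
    for i in range(n):
        x1 = rng.normal(r * x2, sd)   # draw x1 | x2
        x2 = rng.normal(r * x1, sd)   # draw x2 | x1
        samples[i] = x1, x2
    return samples

samples = gibbs(5000, tt0=(2.0, 2.0), r=0.8)
```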

Auxiliary files:

- ellipse.m
- acorr.m
- acorrtime.m
- mnorm_lpdf.m

This page is maintained by Toni.Tamminen@hut.fi

This page has been updated 9. 10. 2003

URL: http://www.lce.hut.fi/teaching/S-114.600/ex/hints.html