Markov Chain Monte Carlo

Today we're going to be talking about a broad class of methods known as MCMC (Markov Chain Monte Carlo) methods!
Monte Carlo (MC) methods are a class of methods that use repeated sampling to obtain numerical results. Example: evaluating an expectation.
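To make the expectation example concrete, here is a minimal Monte Carlo sketch. The target function (x²) and the number of samples are my choices for illustration: we estimate E[X²] for X ~ Uniform(0, 1), whose exact value is 1/3.

```python
import random

# Minimal Monte Carlo sketch (illustrative; f and n_samples are my choices).
# We estimate E[f(X)] for X ~ Uniform(0, 1) by averaging f over random draws.
def mc_expectation(f, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.random()        # draw a sample from Uniform(0, 1)
        total += f(x)           # accumulate f(x)
    return total / n_samples    # the average approximates E[f(X)]

estimate = mc_expectation(lambda x: x * x)
print(estimate)  # close to the exact value 1/3
```

The estimate gets better as the number of samples grows, with error shrinking roughly like 1/√n.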
In particular, MCMC methods do the job by constructing a Markov chain whose stationary distribution is the distribution we wish to sample from.
Intuition behind MCMC
Ideally, while computing expectations, we want to sample from regions of high probability. So we explore the state space, preferring to stay in regions of high probability.
 Image courtesy: Wikipedia 
In this image, the regions with red samples are the ones we need; the initial samples are discarded as the burn-in period.
We're going to see, with the help of an example, how we can sample from a distribution!
Before we get into it, here is an image with a few terms I have summarized for easy understanding.


Clearly, the transition matrix should be constructed so that the required distribution is invariant under the transition!
Let us look at a popular method known as the Metropolis algorithm. One big advantage is that we need to know the probability density only up to a normalization constant!
This can be advantageous in cases where the probability density function cannot be normalized because the integral is analytically difficult to compute.
Metropolis Algorithm
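A minimal sketch of the Metropolis algorithm, assuming a symmetric Gaussian random-walk proposal. The target here (my choice for illustration) is the unnormalized density p̃(x) = exp(−x²/2), a standard normal without its constant, which demonstrates the "only up to a normalization constant" point above. The step size and sample count are also my choices.

```python
import random
import math

# Unnormalized target density: standard normal WITHOUT its constant.
# Metropolis only ever uses ratios p_tilde(x') / p_tilde(x),
# so the normalizing constant cancels and is never needed.
def p_tilde(x):
    return math.exp(-0.5 * x * x)

def metropolis(n_samples=20_000, step=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)   # symmetric random-walk proposal
        # Accept with probability min(1, p_tilde(x') / p_tilde(x)).
        if rng.random() < min(1.0, p_tilde(x_prop) / p_tilde(x)):
            x = x_prop                      # accept: move to the proposal
        samples.append(x)                   # on rejection we stay at x
    return samples

samples = metropolis()
burned = samples[1000:]                     # discard the burn-in period
print(sum(burned) / len(burned))            # mean close to 0, as expected
```

Note that because the proposal is symmetric, the proposal densities cancel in the acceptance ratio; the more general Metropolis–Hastings algorithm keeps them in.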


                                       
Gibbs sampling plots from the code I have shared above

Gibbs sampling is a famous MCMC technique: if we can sample from the conditional distributions, we can eventually sample from the required full distribution. Each step of the algorithm alternately samples from the conditional distributions; the full distribution is invariant under these steps, which ensures we sample correctly.
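The alternating-conditionals idea can be sketched on a toy target (my choice for illustration): a bivariate normal with correlation ρ, where both conditionals are one-dimensional Gaussians, x | y ~ N(ρy, 1 − ρ²) and symmetrically for y. All names and parameter values below are assumptions for the example.

```python
import random
import math

# Gibbs sampling sketch for a bivariate standard normal with correlation rho.
# Each sweep alternates two one-dimensional conditional Gaussian draws:
#   x | y ~ N(rho * y, 1 - rho^2)   and   y | x ~ N(rho * x, 1 - rho^2).
def gibbs_bivariate_normal(n_samples=20_000, rho=0.8, seed=0):
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)    # sample x from p(x | y)
        y = rng.gauss(rho * x, sd)    # sample y from p(y | x)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal()
xs = [s[0] for s in samples[1000:]]   # drop the burn-in samples
print(sum(xs) / len(xs))              # marginal mean of x, close to 0
```

Because each conditional is easy to sample exactly, no accept/reject step is needed, which is exactly the appeal of Gibbs sampling when the conditionals are tractable.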


First ten samples.


After 300 samples. Clearly, we can now reliably sample from the required PDF.


Cheers.