2.2 Markov Chain

A Markov chain is a stochastic process in which we move from one state to the next by a simple sequential procedure. We start the chain at some state $x^{(1)}$ and use a transition function $p(x^{(t)}|x^{(t-1)})$ to determine the next state $x^{(2)}$ conditional on the last state. We then keep iterating to create a sequence of states: $$x^{(1)} \rightarrow x^{(2)} \rightarrow \cdots \rightarrow x^{(t)} \rightarrow \cdots$$ Each such sequence of states is called a Markov chain, or simply a chain. The procedure for generating a sequence of $T$ states from a Markov chain is the following:

(1) Set t = 1

(2) Generate an initial value $u$, and set $x^{(t)}=u$

(3) Repeat 

  t=t+1;

  Sample a new value $u$ from the transition function $p(x^{(t)}|x^{(t-1)})$;

  Set $x^{(t)}=u$;

(4) Until t = T
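As a minimal sketch of the procedure above in MATLAB, assuming a hypothetical Gaussian random-walk transition $p(x^{(t)}|x^{(t-1)}) = N(x^{(t-1)}, 1)$ (this particular transition function is chosen only for illustration, not taken from the text):

T = 1000;                    % number of states to generate
x = zeros(T, 1);             % storage for the chain
x(1) = 0;                    % step (2): pick an initial value u
for t = 2:T                  % steps (3)-(4): repeat until t = T
    x(t) = x(t-1) + randn;   % sample u from the transition function, set x(t) = u
end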

Each Markov chain wanders around the state space, and the transition to a new state depends only on the last state. If we start a number of chains, each with a different initial condition, each chain will initially stay in states close to its starting point; this early period is called the burn-in period (see the picture below). After a sufficiently long sequence of transitions, however, the starting state no longer affects the state of the chain, and the chain is said to have reached its steady state (see the picture below). This property, that Markov chains converge to a stationary distribution regardless of where they started, is quite important.
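In code, discarding the burn-in portion simply means dropping the first part of each chain before using the samples. A minimal sketch, continuing the chain x from the sketch above and using an illustrative burn-in length of 100 iterations (this length is an assumption, not prescribed here):

burnIn  = 100;                 % illustrative burn-in length (an assumption)
samples = x(burnIn+1:end);     % keep only states generated after the burn-in period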

HOMEWORK

Implement the Markov chain on a single continuous variable $x$ whose transition distribution is $$\text{Beta}\bigl(200(0.9x^{(t-1)}+0.05),\; 200(1-0.9x^{(t-1)}-0.05)\bigr).$$ Create an illustration similar to the Figure above. Start the Markov chain with four different initial values drawn uniformly from $[0,1]$.

Tip: if X is a $T \times K$ matrix in Matlab such that X(t, k) stores the state of the $k$-th Markov chain at the $t$-th iteration, the command plot(X) will simultaneously display the $K$ sequences in different colors.

fa = @(x) 200*(0.9*x + 0.05);        % Beta parameter a as a function of the previous state
fb = @(x) 200*(1 - 0.9*x - 0.05);    % Beta parameter b as a function of the previous state
no4mc = 4;                           % number of Markov chains
states = unifrnd(0, 1, 1, no4mc);    % initial states, drawn uniformly from [0,1]
N = 200;                             % number of iterations for each chain
X = states;                          % X(t, k): state of the k-th chain at iteration t
for i = 1:N
    states = betarnd(fa(states), fb(states));   % one transition for all chains at once
    X = [X; states];                             % append the new states as a row
end
plot(X);                             % display the four chains in different colors
pause;
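In the resulting plot, the four chains start from different initial values but, after the burn-in period, wander over the same region of the state space, illustrating that the stationary behaviour does not depend on where each chain started.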
Original article: https://www.cnblogs.com/chaseblack/p/5221814.html