Probability and Statistics 2

Markov chains

example 1

example 2

a fly which may get caught in a web … absorbing states

df: stochastic process (náhodný proces)

df: Markov process/chain

df: transition diagram/graph

Markov chain is equivalent to a random walk on the transition diagram

from state $i$ we go to $j$ with probability $p_{ij}$, independent of the past

in example 1, assume $X_0$ is given; what is $P(X_2 = w)$?
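
A quick numerical sketch of this computation (the matrix below is hypothetical — example 1's actual chain is on the card's other side): the PMF of $X_2$ is the row vector of $X_0$'s PMF multiplied by $P^2$.

```python
import numpy as np

# Hypothetical 2-state chain; state 0 plays the role of "w".
# (Not example 1's real matrix.) Rows sum to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

x0 = np.array([0.0, 1.0])    # PMF of X_0: start surely in state 1

# PMF of X_2 = x0 P^2
x2 = x0 @ np.linalg.matrix_power(P, 2)
print(x2[0])                 # P(X_2 = w) = 0.52
```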

probability mass function (PMF, pravděpodobnostní funkce) for $X_0, X_1, \dots$

theorem

theorem

df: probability of $k$-step transition … $r_{ij}(k)$

theorem (Chapman-Kolmogorov)
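
The theorem states $r_{ij}(k+l) = \sum_{m\in S} r_{im}(k)\,r_{mj}(l)$, i.e. the $k$-step matrix $R(k)$ equals $P^k$ and $R(k+l) = R(k)R(l)$. A numerical check on a hypothetical chain:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],    # hypothetical 3-state chain
              [0.2, 0.3, 0.5],
              [0.0, 0.6, 0.4]])

k, l = 3, 4
Rk  = np.linalg.matrix_power(P, k)      # r_ij(k)
Rl  = np.linalg.matrix_power(P, l)      # r_ij(l)
Rkl = np.linalg.matrix_power(P, k + l)  # r_ij(k+l)

# Chapman-Kolmogorov: R(k+l) = R(k) R(l)
print(np.allclose(Rkl, Rk @ Rl))        # True
```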

df: $j$ is accessible from $i$ (where $i, j \in S$)

proof

df: a state $i \in S$ is recurrent if we return to it with probability 1

df: $T_i = \min\set{t \geq 1 : X_t = i}$

$i$ is transient

example – random walk on a line
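
A simulation sketch of this example, assuming the usual walk on $\mathbb Z$ (step $+1$ with probability $p$, else $-1$): for $p = 1/2$ the walk is recurrent, while for $p \neq 1/2$ it is transient, which shows up in the observed frequency of returning to 0.

```python
import random

def returned(p, max_steps=2000):
    """One walk on Z started at 0: did it revisit 0 within max_steps steps?"""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if random.random() < p else -1
        if pos == 0:
            return True
    return False

random.seed(0)
freqs = {}
for p in (0.5, 0.8):
    freqs[p] = sum(returned(p) for _ in range(1000)) / 1000
    print(f"p = {p}: observed return frequency {freqs[p]:.3f}")
# p = 0.5: recurrent, frequency near 1 (truncation loses a little mass)
# p = 0.8: transient, theoretical return probability 0.8*(1/4) + 0.2*1 = 0.4
```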

theorem

if $i \leftrightarrow j$, then either both $i$ and $j$ are recurrent or both are transient

for finite Markov chains

stationary distribution / steady state distribution

theorem: if a Markov chain is finite, aperiodic (→ not periodic) and irreducible, then

  1. $\exists$ unique stationary distribution $\pi$
  2. $\forall i: \lim_{n\to\infty}(P^n)_{ij} = \pi_j$
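
A numerical illustration on a hypothetical chain satisfying the hypotheses: raising $P$ to a high power makes every row converge to the same stationary vector $\pi$.

```python
import numpy as np

# Hypothetical finite, irreducible, aperiodic chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])

Pn = np.linalg.matrix_power(P, 200)
pi = Pn[0]                       # every row of P^n tends to pi
print(pi)                        # ≈ [0.75, 0.1667, 0.0833]
print(np.allclose(Pn, Pn[0]))    # all rows agree -> True
print(np.allclose(pi @ P, pi))   # pi is stationary -> True
```
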

df: $i \in S$ has period $d_i := \gcd\set{t : r_{ii}(t) > 0}$

$i \in S$ is aperiodic if $d_i = 1$
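
The period can be computed straight from the definition (a sketch; `t_max` truncates the infinite set of return times):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, t_max=50):
    """gcd of all t <= t_max with (P^t)_ii > 0; 0 if no return is possible."""
    ts = []
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if Pt[i, i] > 0:
            ts.append(t)
    return reduce(gcd, ts) if ts else 0

# Deterministic 3-cycle: every state has period 3.
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(period(C, 0))   # 3

# A self-loop at state 0 makes the state (and the chain) aperiodic.
C2 = np.array([[0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])
print(period(C2, 0))  # 1
```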

df: $i$ is null recurrent if $i$ is recurrent and $\mathbb E(T_i \mid X_0 = i) = \infty$

$i$ is positive recurrent if $i$ is recurrent and $\mathbb E(T_i \mid X_0 = i) < \infty$

theorem

theorem

the proof is not easy, here is a cheat proof

to find $\pi$, solve the system of linear equations $\pi P = \pi$ and add $\sum_{i\in S}\pi_i = 1$
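
A sketch in code: $(P^T - I)\,\pi^T = 0$ has rank $|S| - 1$ for an irreducible chain, so one of its equations can be replaced by the normalization $\sum_i \pi_i = 1$ (the matrix below is hypothetical).

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]

A = P.T - np.eye(n)
A[-1, :] = 1.0              # replace last equation by sum(pi) = 1
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)                       # ≈ [0.75, 0.1667, 0.0833]
print(np.allclose(pi @ P, pi))  # True
```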

detailed balance equation

to imagine this: an ant colony where each ant moves independently according to the Markov chain

stationary distribution $\iff$ the same number of ants at each state at each time – ants don't "accumulate"

detailed balance equation implies $\pi P = \pi$

detailed balance equation is stronger than $\pi P = \pi$
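
The implication is a one-line computation: summing the detailed balance equations $\pi_i p_{ij} = \pi_j p_{ji}$ over $i$,

```latex
(\pi P)_j \;=\; \sum_{i\in S} \pi_i p_{ij}
          \;=\; \sum_{i\in S} \pi_j p_{ji}
          \;=\; \pi_j \underbrace{\sum_{i\in S} p_{ji}}_{=\,1}
          \;=\; \pi_j
```

the last step uses that the rows of $P$ sum to 1.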

MCMC algo. sketch
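
A minimal Metropolis-style sketch (the target distribution and proposal below are hypothetical, not from the lecture): the acceptance rule is chosen so that detailed balance holds for the target, which is therefore the chain's stationary distribution.

```python
import random
from collections import Counter

target = [1, 2, 3, 2, 1]   # hypothetical unnormalized target pi on states 0..4

def step(i):
    j = i + random.choice((-1, 1))          # symmetric proposal on a path
    if not 0 <= j < len(target):
        return i                            # proposal off the path: stay put
    # accept with prob min(1, pi_j / pi_i) -> detailed balance w.r.t. target
    return j if random.random() < target[j] / target[i] else i

random.seed(1)
steps = 200_000
counts = Counter()
state = 0
for _ in range(steps):
    state = step(state)
    counts[state] += 1

Z = sum(target)
for s in range(len(target)):
    print(s, round(counts[s] / steps, 3), target[s] / Z)  # empirical vs target
```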

absorbing state $i$: $p_{ii} = 1$

example: $0 \in A$ (?)

$a_i = P(\exists t : X_t = 0 \mid X_0 = i)$

$\mu_i = \mathbb E(T \mid X_0 = i)$

$T = \min\set{t : X_t \in A}$

theorem: $(a_i)_{i\in S}$ are the unique solution to

theorem: $(\mu_i)_{i\in S}$ are the unique solution to
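
The systems themselves are on the cards' other side; presumably they are the standard ones: $a_i = \sum_j p_{ij} a_j$ off the absorbing set with the boundary values fixed, and $\mu_i = 1 + \sum_j p_{ij}\mu_j$ with $\mu_i = 0$ for $i \in A$. A sketch on a gambler's-ruin chain (a hypothetical example, not from the cards):

```python
import numpy as np

# Gambler's-ruin chain on {0,...,4}: fair coin, absorbing at 0 and N.
N = 4
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

A = {0, N}                         # absorbing set
n = N + 1

# a_i: a_0 = 1, a_N = 0, else a_i = sum_j p_ij a_j
# mu_i: mu = 0 on A,      else mu_i = 1 + sum_j p_ij mu_j
M = np.eye(n) - P
b_a, b_mu = np.zeros(n), np.zeros(n)
for i in A:                        # fix the boundary rows
    M[i] = 0.0
    M[i, i] = 1.0
b_a[0] = 1.0
b_mu[[i for i in range(n) if i not in A]] = 1.0

a  = np.linalg.solve(M, b_a)
mu = np.linalg.solve(M, b_mu)
print(a)    # [1, 0.75, 0.5, 0.25, 0]
print(mu)   # [0, 3, 4, 3, 0]
```

For the fair walk these match the known closed forms $a_i = (N-i)/N$ and $\mu_i = i(N-i)$.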

proof

example: drunk person on their way home

Hooray, you're done! 🎉
If my flashcards helped you, you can buy me a beer.