Markov Chains (Fall 2020)

A Markov chain is a probabilistic model in which the future depends on the present, but not on the past. It consists of a set of states, together with probabilities of transitioning from one state to another. Two important types of Markov chains that we will investigate are absorbing Markov chains and ergodic Markov chains. In an absorbing Markov chain, we eventually end up trapped in a single state, and we can ask how likely we are to end up at each absorbing state, or how long it takes to get there. A typical question of this form: given two sequences of coin flips, how likely is it that we see one of them before the other? Or, given a single sequence of coin flips, how many flips does it take, on average, to see that sequence consecutively?
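As a taste of this kind of question, here is a minimal Python sketch (not part of the course materials; the patterns are chosen just for illustration) that estimates by simulation the average number of fair-coin flips needed before a given sequence appears consecutively.

```python
import random

def average_flips(pattern, trials=100_000, seed=0):
    """Monte Carlo estimate of the expected number of fair-coin flips
    until `pattern` first appears as a consecutive run."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        window, flips = "", 0
        while window != pattern:
            # Flip once and keep only the last len(pattern) outcomes.
            window = (window + rng.choice("HT"))[-len(pattern):]
            flips += 1
        total += flips
    return total / trials

print(average_flips("HTH"))  # exact answer: 10
print(average_flips("HHH"))  # exact answer: 14
```

Perhaps surprisingly, the answers differ: HTH takes 10 flips on average, while HHH takes 14, even though both patterns have length three.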

In an ergodic Markov chain, we move around among all the states, visiting each of them infinitely often. Such a chain has a stationary distribution, which tells us the proportion of time spent in each state in the long run. We will see several interpretations of these proportions.
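To make this concrete, the sketch below (using a made-up 3-state transition matrix, not an example from the course) finds the stationary distribution by repeatedly applying the transition matrix to a starting distribution; for an ergodic chain this converges to the same answer no matter where we start.

```python
import numpy as np

# Transition matrix of a toy 3-state ergodic chain (rows sum to 1);
# the specific numbers are made up for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Start deterministically in state 0 and repeatedly take one step.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    pi = pi @ P

print(pi)           # the stationary distribution
print(pi @ P - pi)  # ~0: one more step leaves pi unchanged,
                    # which is exactly the defining property pi P = pi
```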

Our main focus in this class will be on mixing times. The mixing time of an ergodic Markov chain is, roughly, the amount of time it takes before the chain reaches its stationary distribution and forgets its starting point. We will investigate several approaches to mixing times, including coupling arguments, which pair the Markov chain we're interested in with one that is easier to analyze, and linear algebra techniques, especially eigenvalues. A typical question about mixing times is to determine the number of riffle shuffles we need to perform on a deck of cards before it becomes well mixed. We will develop all the tools from linear algebra in the class, so no background in linear algebra is required.
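To hint at the eigenvalue connection, here is a rough sketch (again using a made-up 3-state chain, not the riffle-shuffle chain) that tracks how far the chain is from stationarity after each step, measured in total variation distance; the distance shrinks roughly geometrically, at a rate governed by the second-largest eigenvalue modulus of the transition matrix.

```python
import numpy as np

# A made-up 3-state ergodic chain (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# The second-largest eigenvalue modulus controls the speed of mixing.
slem = sorted(np.abs(vals))[-2]
print("second-largest eigenvalue modulus:", slem)

# Total variation distance to stationarity, starting from state 0.
mu = np.array([1.0, 0.0, 0.0])
for t in range(1, 11):
    mu = mu @ P
    tv = 0.5 * np.abs(mu - pi).sum()
    print(f"step {t}: TV distance {tv:.6f}")
```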