Efficient way to simulate thousands of Markov chains

While the other answers focus on circumventing the simulations, I focus on how to speed up the simulations themselves. (Sometimes, simulations might be unavoidable.)

In this situation, when calling RandomFunction, it is much faster to generate many simulations at once by specifying the number of simulations as the third argument, instead of calling RandomFunction many times. (Recently, I have also observed different behavior of RandomFunction when it is applied to simulating Itô processes.)

Moreover, it is a bit faster to perform the replacement of states by their observable values with Part. So my implementation of traces looks like this:

single = DiscreteMarkovProcess[
   {1., 0., 0.},
   {{0.99, 0.0099, 0.0001}, {0, 0.95, 0.05}, {0, 0.5, 0.5}}
   ];
reporule = Developer`ToPackedArray[{0., 3., 1.}];
ClearAll[traces]
traces[pathCount_, pathLength_] := Partition[
   Part[
    reporule,
    Flatten[RandomFunction[single, {0, pathLength}, pathCount]["ValueList"]]
    ],
   pathLength + 1
   ];

Now we can compute meanobs, a list of 100 mean traces, in the following way. Notice that I use ParallelTable only for the outermost loop; this is often beneficial.

pathCount = 5000;
pathLength = 500;
meanobs = ParallelTable[Mean[traces[pathCount, pathLength]], {100}]; // AbsoluteTiming // First

7.10128

Instead of 15 minutes, this takes only about 7 seconds on my notebook's Haswell quad-core CPU.

In general, ParallelTable can also slow things down when it is applied at too deep a looping level or in situations where the computation cannot be split into sufficiently many independent parts.
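As a rough illustration of that point, here is a sketch contrasting the two placements; the inner-loop variant and the names meanobsOuter and meanobsInner are mine, and this is not a benchmark:

(* outermost-loop parallelism (as above): 100 large, independent tasks *)
meanobsOuter = ParallelTable[Mean[traces[pathCount, pathLength]], {100}];

(* hypothetical inner-loop parallelism: 100*pathCount tiny tasks, where the
   per-task scheduling and data-transfer overhead typically dominates *)
meanobsInner = Table[
   Mean[Flatten[ParallelTable[traces[1, pathLength], {pathCount}], 1]],
   {100}];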


I essentially derive a second distribution - without any need for stochastic modelling - from DiscreteMarkovProcess time slices according to your state-to-value mapping, and simply apply Mean and Variance to it. I skip the scaling by 5000 here, as it seems a somewhat arbitrary multiplier for the problem at hand. That is, I compute these values symbolically instead of through simulation:

With[
 {mdist = DiscreteMarkovProcess[
    {1, 0, 0},
    {{0.99, 0.0099, 0.0001}, {0, 0.95, 0.05}, {0, 0.5, 0.5}}],
  values = {0, 3, 1}},
 With[
  {pdist = ProbabilityDistribution[
     Piecewise@MapIndexed[{PDF[mdist[t], First@#2], v == #1} &, values],
     {v, Min@values, Max@values, 1}]},
  Plot[Re@{Mean@pdist, Variance@pdist}, {t, 0, 500}, Evaluated -> True]]]

(plot of the mean and variance of pdist for t from 0 to 500)

You can also plot the probabilities of individual values over time with LogPlot[Evaluate[Re[PDF@pdist /@ Sort@values]], {t, 0, 500}, PlotLegends -> Sort@values].

(log plot of the probabilities of the individual values over time)

For this code to work, values must consist only of integers, although one might expect at least rationals to work. (Oddly enough, only a step value of 1 in a ProbabilityDistribution of this form seems to work with functions taking probability distributions as arguments. A workaround with GCD and TransformedDistribution seems to work, but I am not including it here so as not to complicate this answer further. This looks like a bug; I reported it under CASE:4156809.)


In general, Mathematica is not always the best tool for this type of simulation. That said, you do not need to compute sample paths to compute the mean or variance.

For a Markov chain with probability transition matrix $P_{i,j}:=\mathbb{P}(X_1=j\mid X_0=i)$, the $n$-step transition probabilities are given by the matrix power: $\mathbb{P}(X_n = j \mid X_0 = i) = (P^n)_{i,j}$. Therefore, given an initial distribution $x$ (as a row vector), the probability of being in a given state after $n$ steps is the corresponding entry of $xP^n$.

Given a discrete random variable $Y$, we can compute its expectation using the formula $$ \mathbb{E}[Y] = \sum_{k=0}^{\infty} k\, \mathbb{P}(Y=k). $$

In your case, we compute the expected value of the observable under the distribution $xP^n$ for each $n$. Since you only have three states, there are only three terms in this sum (for $k=0$, $k=3$, and $k=1$).
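To make the three-term sum concrete, here is a small sketch for a single step count, say $n = 100$; the names P and p100 are mine, with P being the transition matrix from the question:

P = {{0.99, 0.0099, 0.0001}, {0, 0.95, 0.05}, {0, 0.5, 0.5}};
p100 = {1, 0, 0}.MatrixPower[P, 100];    (* state probabilities after 100 steps *)
0*p100[[1]] + 3*p100[[2]] + 1*p100[[3]]  (* expected observable value at n = 100 *)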

In your case, with your weighting of the state space, you can do this as follows:

ListPlot@Table[Dot[{1, 0, 0}.MatrixPower[P, n], {0, 3, 1}], {n, 0, 500}]

Similarly, the variance is $\mathbb{E}[Y^2]-\mathbb{E}[Y]^2$, where $Y$ is again the observable value after $n$ steps. You can compute this by:

ListPlot@Table[
  Dot[{1, 0, 0}.MatrixPower[P, n], {0, 3, 1}^2] - 
   Dot[{1, 0, 0}.MatrixPower[P, n], {0, 3, 1}]^2, {n, 0, 500}]

Both of these could be made more efficient by using the previous iterate of the matrix power to compute the next one, but since $P$ is small, these commands run instantly, so efficiency is not a concern.
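If $P$ were large, one way to reuse the previous iterate would be to propagate the distribution forward one step at a time; a minimal sketch (the names dists, means, and vars are mine; P is the transition matrix as above):

dists = NestList[#.P &, {1, 0, 0}, 500];  (* distributions x.P^n for n = 0, ..., 500 *)
means = dists.{0, 3, 1};                  (* expected observable value at each step *)
vars = dists.{0, 3, 1}^2 - means^2;       (* variance of the observable at each step *)
ListPlot[{means, vars}]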

With regards to your code:

It looks like you compute the mean of 500 traces, and plot 100 of these. Did you mean to compute these 100 quantities, or did you want a single one? When I run your code, my plot of meanobs only goes up to a bit above 2.5; I'm not sure why your plot has such large numbers.

You also use the replacement after computing the chain. You could permute the entries of the probability transition matrix ahead of time to avoid this and save a bit of computation.