In Part I on this subject, we saw the real benefit of LHS for a one-task network. Now we will try to extend that benefit to two tasks, and then to life-size problems.

The true generalization of stratified sampling to two or more dimensions (which in the case of SRA means two or more tasks) is not LHS but *orthogonal* sampling. In orthogonal sampling, *every combination* of strata is equally likely. In our simple case, there are 36 possible combinations, and if we take 36 orthogonal samples we will get exactly one sample for each possible outcome. Suppose we are interested in the probability of getting two 1’s. We know that this is 1/36, and orthogonal sampling will produce this answer every time with zero error.
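In this discrete case the orthogonal sample is simply an enumeration of every combination, so we can write it out directly. A minimal Python sketch (illustrative only, not part of any SRA tool):

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two six-sided dice.
# With 6 strata per die, orthogonal sampling with 36 samples takes
# exactly one sample from each combination of strata -- i.e. this list.
outcomes = list(product(range(1, 7), repeat=2))
double_ones = sum(1 for a, b in outcomes if a == b == 1)

print(len(outcomes))   # 36 combinations
print(double_ones)     # exactly 1, so the estimate of P(two 1's) is 1/36, zero error
```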

If we did regular random sampling, getting two 1’s could happen any number of times from 0 (quite likely) to 36 (most unlikely). The average is still 1: the count of double 1’s is binomial with 36 trials and success probability 1/36, so its standard error is sqrt(36 × (1/36) × (35/36)) = sqrt(35/36), or about 0.986.
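A quick simulation confirms both figures. This is a Python sketch; the trial count of 100,000 is an arbitrary choice, large enough to pin the standard error down to a couple of decimal places:

```python
import math
import random

rng = random.Random(1)  # fixed seed so the run is repeatable

def double_ones_in_36_rolls():
    """Count double 1's in one run of 36 purely random rolls of two dice."""
    count = 0
    for _ in range(36):
        a, b = rng.randint(1, 6), rng.randint(1, 6)
        if a == 1 and b == 1:
            count += 1
    return count

# Repeat many times to estimate the mean and standard error of the count.
counts = [double_ones_in_36_rolls() for _ in range(100_000)]
mean = sum(counts) / len(counts)
sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))

print(round(mean, 2))                # close to 1
print(round(sd, 3))                  # close to sqrt(35/36)
print(round(math.sqrt(35 / 36), 3))  # 0.986
```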

So, orthogonal sampling works. (Remember, in the discrete examples we are using for simplicity it is not really sampling at all, but enumeration.) The trouble is that it does not scale well. If we had three tasks, we would need 6³ = 216 samples. If we had 10 tasks we would need about 60 million! Real projects typically have hundreds or thousands of tasks, and the number of samples needed would be astronomical.
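The growth is easy to check: with 6 strata per task, orthogonal sampling needs one sample per combination of strata, i.e. 6ⁿ for n tasks:

```python
# Samples needed for orthogonal sampling: 6**n for n tasks, 6 strata each.
for n_tasks in (2, 3, 10):
    print(n_tasks, 6 ** n_tasks)   # 2 -> 36, 3 -> 216, 10 -> 60466176
```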

Hence the compromise of LHS. Each duration is stratified *independently*, but no attempt is made to ensure that *every possible combination* of durations is equally represented. Reworking our 2-dice example with LHS, stratifying each distribution into 6 strata, each die will come up 1 exactly 6 times out of 36. However, these will not in general coincide, so the number of times we get two 1’s can be anything from 0 to 6. Without loss of generality, we can look at just the 6 rolls on which the first die comes up 1. The score of the other die within this restricted set of outcomes is more or less random, but not quite, because it is subject to the overall constraint that each score comes up exactly 6 times in total. Allowing for this constraint (the count of double 1’s follows a hypergeometric distribution, which makes the calculation rather complicated), the result is a standard error of about 0.845, significantly better than the 0.986 standard error from purely random sampling.
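The 0.845 figure can also be verified empirically. The sketch below (Python, same arbitrary trial count as before) represents each LHS die as a random permutation of 36 rolls containing every face exactly six times, pairs the two dice positionwise, and measures the spread of the double-1 count:

```python
import math
import random

rng = random.Random(1)  # fixed seed so the run is repeatable
faces = [f for f in range(1, 7) for _ in range(6)]  # six of each face

def lhs_double_ones():
    # Each die is stratified: every face appears exactly 6 times in 36 rolls.
    # LHS then pairs the two stratified sequences in random order.
    a = rng.sample(faces, len(faces))  # shuffled copy of the 36 rolls
    b = rng.sample(faces, len(faces))
    return sum(1 for x, y in zip(a, b) if x == y == 1)

counts = [lhs_double_ones() for _ in range(100_000)]
mean = sum(counts) / len(counts)
sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))

print(round(mean, 2))  # still 1 on average
print(round(sd, 3))    # close to 0.845, vs 0.986 for purely random sampling
```

The double-1 count here ranges only from 0 to 6, as described above, which is where the reduction in spread comes from.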

However, the randomness in LHS increases, and consequently the benefit of LHS over random sampling decreases, as the number of dimensions (tasks) increases. Typical project networks have anywhere from hundreds to tens of thousands of tasks, and hence dimensions. Broadly speaking, LHS reduces the amount of uncertainty in an n-dimensional problem to what it would be if it had (n-1) dimensions. So for any given number of iterations, LHS would process a 1,000-task network to about the same accuracy as random sampling on a 999-task one: an insignificant difference.

In a paper on the subject, David Vose has demonstrated empirically that even with just 9 dimensions the benefits of LHS are minimal.

Furthermore, LHS does not come without costs, and Vose lists a number of them. Some affect the sampling speed, so that the apparent benefit of needing fewer iterations may be more than offset by each iteration taking longer. Others include the inability to model correlations properly under LHS.

David concludes that there is no place for LHS in modern Monte Carlo simulation, implying that it might have been useful when computers were slower. In the case of high-dimensional problems like SRA, I would go further and say that it never was worthwhile.