How to efficiently calculate a running standard deviation?
The answer is to use Welford's algorithm, which is very clearly defined after the "naive methods" in:
- Wikipedia: Algorithms for calculating variance
It's more numerically stable than either the two-pass or online simple sum-of-squares collectors suggested in other responses. The stability only really matters when you have lots of values that are close to each other, because they lead to what is known as "catastrophic cancellation" in the floating-point literature.
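For concreteness, here is a minimal Java sketch of Welford's update (illustrative class and method names of my own choosing, not the OnlineNormalEstimator implementation linked below):

```java
/** Minimal running mean/variance accumulator using Welford's update (illustrative sketch). */
public class WelfordAccumulator {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0;  // running sum of squared deviations from the current mean

    public void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);  // note: uses the mean *after* the update
    }

    public double mean() { return mean; }

    /** Unbiased sample variance (divides by n - 1); see the note on N vs. N - 1 below. */
    public double variance() { return n > 1 ? m2 / (n - 1) : Double.NaN; }

    public double standardDeviation() { return Math.sqrt(variance()); }
}
```

Each add() is O(1), and because the accumulator tracks deviations from the running mean rather than a raw sum of squares, it avoids the cancellation problem mentioned above.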
You might also want to brush up on the difference between dividing by the number of samples (N) and by N - 1 in the variance calculation (the mean squared deviation). Dividing by N - 1 leads to an unbiased estimate of the variance from the sample, whereas dividing by N on average underestimates the variance (because it doesn't account for the fact that the sample mean itself deviates from the true mean).
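The underlying identity (a standard result, written in the same plain notation as the formulas below): if the samples are independent draws from a distribution with true variance sigma^2, then

E[ sum_i (x_i - sample_mean)^2 ] = (N - 1) * sigma^2

so dividing the summed squared deviations by N - 1 gives an unbiased variance estimate, while dividing by N gives (N - 1)/N * sigma^2 on average, i.e. a slight underestimate.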
I wrote two blog entries on the topic which go into more detail, including how to delete previous values online:
- Computing Sample Mean and Variance Online in One Pass
- Deleting Values in Welford’s Algorithm for Online Mean and Variance
You can also take a look at my Java implementation; the javadoc, source, and unit tests are all online:
- Javadoc: stats.OnlineNormalEstimator
- Source: stats.OnlineNormalEstimator.java
- JUnit Source: test.unit.stats.OnlineNormalEstimatorTest.java
- LingPipe Home Page
The basic answer is to accumulate the running sum of x (call it 'sum_x') and of x^2 (call it 'sum_x2') as you go. The value of the standard deviation is then:
stdev = sqrt((sum_x2 / n) - (mean * mean))
where
mean = sum_x / n
The formula above gives the population standard deviation (divisor 'n'); for the unbiased sample standard deviation, divide the sum of squared deviations, sum_x2 - n * mean * mean, by 'n - 1' instead.
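A minimal Java sketch of this running-sums approach (illustrative names, not taken from any library mentioned above):

```java
/** Running standard deviation from the sums of x and x^2 (illustrative sketch). */
public class RunningSums {
    private long n = 0;
    private double sumX = 0.0;   // running sum of x
    private double sumX2 = 0.0;  // running sum of x^2

    public void add(double x) {
        n++;
        sumX += x;
        sumX2 += x * x;
    }

    /** Population standard deviation: sqrt(sum_x2 / n - mean^2). Callers should guard n == 0. */
    public double populationStdDev() {
        double mean = sumX / n;
        return Math.sqrt(sumX2 / n - mean * mean);
    }

    /** Unbiased sample standard deviation: squared deviations divided by n - 1. Needs n >= 2. */
    public double sampleStdDev() {
        double mean = sumX / n;
        return Math.sqrt((sumX2 - n * mean * mean) / (n - 1));
    }
}
```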
You may need to worry about the numerical stability of taking the difference between two large numbers if you are dealing with large samples. See the external references in the other answers (Wikipedia, etc.) for more information.