A tremendous advantage of the moving average filter is that it can be implemented with an algorithm that is very fast. To understand this
algorithm, imagine passing an input signal, x[ ], through a seven-point moving average filter to form an output signal, y[ ]. Now look at how two adjacent output points, y[50] and y[51], are calculated:

y[50] = ( x[47] + x[48] + x[49] + x[50] + x[51] + x[52] + x[53] ) / 7

y[51] = ( x[48] + x[49] + x[50] + x[51] + x[52] + x[53] + x[54] ) / 7
These are nearly the same calculation; points x[48] through x[53] must be added for y[50], and again for y[51]. If y[50] has already been calculated, the most efficient way to calculate y[51] is:

y[51] = y[50] + ( x[54] - x[47] ) / 7

That is, reuse the previous output point: add the sample entering the window and subtract the sample leaving it.
Once y[51] has been found using y[50], then y[52] can be calculated from y[51], and so on. After the first point is calculated in y[ ], all of the other points can be found with only a single addition and subtraction per point. This can be expressed in the equation:

y[i] = y[i-1] + ( x[i+p] - x[i-q] ) / M          (Eq. 15-3)

where M is the number of points in the filter kernel, p = (M-1)/2, and q = p + 1. (In practice, the division by M can be deferred by keeping a running sum, leaving only the addition and subtraction inside the loop.)
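The recursion described above can be sketched in a few lines of Python (function and variable names are my own, not from the text; the book's own programs are written in BASIC):

```python
def moving_average_recursive(x, M):
    """Symmetric M-point moving average (M odd); the first and last
    p output points are left at zero, as they have no full kernel."""
    p = (M - 1) // 2          # points on each side of the center sample
    q = p + 1
    y = [0.0] * len(x)
    # The first valid output point is computed directly, by summing
    # the entire kernel span.
    acc = sum(x[0:M])
    y[p] = acc / M
    # Every later point costs one addition and one subtraction on the
    # running sum, regardless of the kernel length M.
    for i in range(p + 1, len(x) - p):
        acc += x[i + p] - x[i - q]
        y[i] = acc / M
    return y
```

For example, filtering the ramp 0, 1, ..., 9 with a seven-point kernel gives y[3] = 3.0 and y[6] = 6.0, since the average of seven consecutive ramp samples equals the center sample.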
Notice that this equation uses two sources of data to calculate each point in the output: points from the input and previously calculated points from the output. This is called a recursive equation, meaning that the result of one calculation is used in future calculations. (The term "recursive" also has other meanings, especially in computer science). Chapter 19 discusses a variety of recursive filters in more detail. Be aware that the moving average recursive filter is very different from typical recursive filters. In particular, most recursive filters have an infinitely long impulse response (IIR), composed of sinusoids and exponentials. The impulse response of the moving average is a rectangular pulse (finite impulse response, or FIR).
This algorithm is faster than other digital filters for several reasons. First, there are only two computations per point, regardless of the length of the filter kernel. Second, addition and subtraction are the only math operations needed, while most digital filters require time-consuming multiplication. Third, the indexing scheme is very simple. Each index in Eq. 15-3 is found by adding or subtracting integer constants that can be calculated before the filtering starts (i.e., p and q). Fourth, the entire algorithm can be carried out with integer representation. Depending on the hardware used, integers can be more than an order of magnitude faster than floating point.
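The all-integer version of the loop can be illustrated as follows (again a sketch with my own names, not the book's program): the running sum is kept as an exact integer, and the division by M is deferred until the sums are actually needed, so nothing but integer addition and subtraction happens per point.

```python
def moving_sum_recursive(x, M):
    """Return the symmetric M-point sliding sums (M odd) of an integer
    sequence x, using only integer addition and subtraction per point."""
    p = (M - 1) // 2
    q = p + 1
    s = [0] * len(x)
    acc = sum(x[0:M])          # first sum computed directly, exactly
    s[p] = acc
    for i in range(p + 1, len(x) - p):
        acc += x[i + p] - x[i - q]   # one add, one subtract; no round-off
        s[i] = acc
    return s
```

Dividing each sum by M afterwards (or folding the scale into a later processing stage) recovers the moving average without touching the inner loop.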
Surprisingly, integer representation works better than floating point with this algorithm, in addition to being faster. The round-off error from floating point arithmetic can produce unexpected results if you are not careful. For example, imagine a 10,000 sample signal being filtered with this method. The last sample in the filtered signal contains the accumulated error of 10,000 additions and 10,000 subtractions. This appears in the output signal as a drifting offset. Integers don't have this problem because there is no round-off error in the arithmetic. If you must use floating point with this algorithm, the program in Table 15-2 shows how to use a double precision accumulator to eliminate this drift.
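The effect of accumulator precision can be demonstrated with a short experiment (my own illustration, not the book's Table 15-2 program, which is in BASIC). Both filters below run the same recursion; one rounds the running sum to single precision after every update, the other keeps it in double precision. Each is compared against a reference that sums the kernel directly for every output point:

```python
import random
import struct

def f32(v):
    """Round a Python float (double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', v))[0]

def recursive_average(x, M, single_acc):
    """Seven-style recursive moving average; if single_acc is True, the
    running sum is rounded to 32-bit precision at every step."""
    p = (M - 1) // 2
    q = p + 1
    y = [0.0] * len(x)
    acc = sum(x[0:M])
    y[p] = acc / M
    for i in range(p + 1, len(x) - p):
        acc += x[i + p] - x[i - q]
        if single_acc:
            acc = f32(acc)     # round-off error injected at every point
        y[i] = acc / M
    return y

random.seed(0)
x = [random.random() for _ in range(10000)]
# Reference: each output point summed directly, so no error accumulates.
exact = [sum(x[i - 3:i + 4]) / 7 for i in range(3, len(x) - 3)]
drift_f32 = max(abs(a - b) for a, b in
                zip(recursive_average(x, 7, True)[3:-3], exact))
drift_f64 = max(abs(a - b) for a, b in
                zip(recursive_average(x, 7, False)[3:-3], exact))
```

With the single-precision accumulator the error grows steadily along the signal, while the double-precision accumulator stays within a few units of machine epsilon of the direct computation, which is the drift-elimination idea the text describes.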