Analysis Math Formulas
Moving Average (MA)
The moving average smooths out short-term fluctuations and highlights longer-term trends in data. It is commonly used in sales forecasting to identify trends over a specified number of periods.
If a product has weekly sales data: [150, 200, 250, 300, 350] and using a 3-week moving average: \(\text{MA}_5 = \frac{250 + 300 + 350}{3} = 300\) Interpretation: the 3-week moving average for week 5 is 300 units.
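The calculation above can be sketched in a few lines of Python (the function name is illustrative):

```python
def moving_average(data, window):
    """Simple moving average over each full window of `window` points."""
    return [sum(data[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(data))]

sales = [150, 200, 250, 300, 350]
print(moving_average(sales, 3))  # [200.0, 250.0, 300.0]; the last value is the week-5 MA
```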
Weighted Moving Average (WMA)
The weighted moving average assigns different weights to data points, typically giving more importance to recent observations. This allows the average to be more responsive to recent changes while still smoothing out fluctuations.
- Higher weights for recent data → more responsive to recent trends
- Weights should sum to 1 or be normalized
If a product has weekly sales data: [150, 200, 250, 300, 350] and using weights [0.5, 0.3, 0.2] (most recent week first) for a 3-week weighted moving average: \(\text{WMA}_5 = 0.5 \times 350 + 0.3 \times 300 + 0.2 \times 250 = 315\) Interpretation: the weighted moving average for week 5 is 315 units.
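A minimal Python sketch of this calculation, assuming the first weight applies to the most recent observation (the function name is illustrative):

```python
def weighted_moving_average(data, weights):
    """WMA of the most recent len(weights) points; weights[0] weighs the newest."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    recent = data[-len(weights):][::-1]  # newest first, matching the weight order
    return sum(w * x for w, x in zip(weights, recent))

sales = [150, 200, 250, 300, 350]
print(weighted_moving_average(sales, [0.5, 0.3, 0.2]))  # 0.5*350 + 0.3*300 + 0.2*250 = 315.0
```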
Exponential Moving Average (EMA)
The exponential moving average gives more weight to recent data points, making it more responsive to recent changes. The smoothing factor $\alpha$ determines the rate at which older data points’ influence decreases.
- Higher $\alpha$ → more responsive to recent changes
- Lower $\alpha$ → smoother, less responsive
If a product has weekly sales data: [150, 200, 250, 300, 350] and using a smoothing factor $\alpha$=0.5, with the EMA seeded at the first observation: \(\text{EMA}_t = \alpha x_t + (1 - \alpha)\,\text{EMA}_{t-1}\), giving \(150 \to 175 \to 212.5 \to 256.25 \to 303.125\). Interpretation: the EMA for week 5 is about 303 units.
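The recursion above, seeded with the first observation, looks like this in Python (the function name is illustrative):

```python
def ema(data, alpha):
    """Exponential moving average, seeded with the first observation."""
    value = data[0]
    for x in data[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

sales = [150, 200, 250, 300, 350]
print(ema(sales, 0.5))  # 303.125
```

A higher $\alpha$ pulls the result closer to the latest observation; with $\alpha$=1.0 the EMA simply tracks the most recent value.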
Coefficient of Variation (CV)
The coefficient of variation measures the relative variability of a dataset. It expresses the standard deviation as a proportion of the mean, allowing you to compare variability across products, periods, or units that may have different scales.
- High CV → more variability relative to the mean
- Low CV → more stable, predictable data
If a product has average weekly sales $\mu$=200 units, with standard deviation $\sigma$=50 units: \(\text{CV} = \frac{\sigma}{\mu} = \frac{50}{200} = 0.25\) Interpretation: weekly sales vary by 25% relative to the mean.
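In Python this is a one-line ratio (the function name is illustrative):

```python
def coefficient_of_variation(mu, sigma):
    """CV: standard deviation as a fraction of the mean."""
    return sigma / mu

print(coefficient_of_variation(200, 50))  # 0.25, i.e. 25% variability relative to the mean
```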
Range to Mean Ratio (Range/Mean)
The range to mean ratio provides a simple measure of variability by comparing the spread of the data (range) to its average level (mean).
- High ratio → wide range relative to mean, indicating more variability
- Low ratio → narrow range relative to mean, indicating more stability
If a product has average weekly sales $\mu$=200 units, with max sales=300 units and min sales=100 units: \(\frac{\text{Range}}{\text{Mean}} = \frac{300 - 100}{200} = 1.0\) Interpretation: weekly sales vary by 1.0 times the mean.
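A short Python sketch computing the ratio directly from a data series (the function name is illustrative; [100, 200, 300] is a sample series matching the stated mean, max, and min):

```python
def range_to_mean(data):
    """Spread of the data (max - min) relative to its mean."""
    mean = sum(data) / len(data)
    return (max(data) - min(data)) / mean

print(range_to_mean([100, 200, 300]))  # (300 - 100) / 200 = 1.0
```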
Mean Absolute Deviation (MAD)
The mean absolute deviation measures the average distance between each data point and the mean. It provides a straightforward measure of variability that is less sensitive to extreme values than standard deviation.
- High MAD → more variability
- Low MAD → more stability
If a product has weekly sales data: [150, 200, 250, 300, 350] with mean $\mu$=250: \(\text{MAD} = \frac{100 + 50 + 0 + 50 + 100}{5} = 60\) Interpretation: on average, weekly sales deviate from the mean by 60 units.
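A minimal Python sketch (the function name is illustrative):

```python
def mean_absolute_deviation(data):
    """Average absolute distance of each point from the mean."""
    mean = sum(data) / len(data)
    return sum(abs(x - mean) for x in data) / len(data)

print(mean_absolute_deviation([150, 200, 250, 300, 350]))  # 60.0
```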
Mean Absolute Percentage Error (MAPE)
MAPE measures the average percentage error between actual and forecasted values. It provides insight into the accuracy of forecasts relative to the size of the actual values.
If actual sales are [100, 200, 300] and forecasted sales are [110, 190, 310]: \(\text{MAPE} = \frac{1}{3}\left(\frac{10}{100} + \frac{10}{200} + \frac{10}{300}\right) \times 100\% \approx 6.11\%\) Interpretation: on average, forecast errors are about 6.11% of actual sales.
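In Python (the function name is illustrative; note MAPE is undefined when an actual value is zero):

```python
def mape(actual, forecast):
    """Mean absolute percentage error; actual values must be nonzero."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

print(round(mape([100, 200, 300], [110, 190, 310]), 2))  # 6.11
```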
Root Mean Square Error (RMSE)
RMSE measures the average magnitude of forecast errors, giving more weight to larger errors due to squaring. It provides insight into the accuracy of forecasts in the same units as the data.
- Lower RMSE → better forecast accuracy
- Higher RMSE → larger forecast errors
If actual sales are [100, 200, 300] and forecasted sales are [110, 190, 310]: \(\text{RMSE} = \sqrt{\frac{10^2 + (-10)^2 + 10^2}{3}} = 10.0\) Interpretation: the typical forecast error is about 10.0 units.
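A minimal Python sketch (the function name is illustrative):

```python
import math

def rmse(actual, forecast):
    """Root mean square error: square root of the mean squared error."""
    return math.sqrt(sum((f - a) ** 2 for a, f in zip(actual, forecast)) / len(actual))

print(rmse([100, 200, 300], [110, 190, 310]))  # 10.0
```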
Forecast Bias
Forecast bias measures the average difference between forecasted and actual values. It indicates whether forecasts tend to overestimate (positive bias) or underestimate (negative bias) actual outcomes.
- Positive bias → forecasts are generally too high
- Negative bias → forecasts are generally too low
- Zero bias → forecasts are unbiased on average
If actual sales are [100, 200, 300] and forecasted sales are [110, 190, 310]: \(\text{Bias} = \frac{10 + (-10) + 10}{3} \approx 3.33\) Interpretation: on average, forecasts are 3.33 units higher than actual sales.
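In Python, using the forecast-minus-actual convention stated above (the function name is illustrative):

```python
def forecast_bias(actual, forecast):
    """Mean signed error: positive means over-forecasting, negative under-forecasting."""
    return sum(f - a for f, a in zip(forecast, actual)) / len(actual)

print(round(forecast_bias([100, 200, 300], [110, 190, 310]), 2))  # 3.33
```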
Forecast Bias Significance
This metric assesses the significance of forecast bias relative to overall forecast accuracy (RMSE). It helps determine if the bias is substantial compared to the typical size of forecast errors.
- High positive value → significant over-forecasting
- High negative value → significant under-forecasting
- Value near zero → bias is small relative to forecast errors
If forecast bias is 3.33 units and RMSE is 10.0 units: \(\frac{\text{Bias}}{\text{RMSE}} = \frac{3.33}{10.0} \approx 0.333\) Interpretation: forecast bias is about 33.3% of the typical forecast error.
Forecast Bias Confidence Interval
This formula calculates the confidence interval for forecast bias, providing a range within which the true bias is likely to fall with a specified level of confidence. It accounts for the variability in forecast errors (RMSE) and the sample size (n).
If forecast bias is 3.33 units, RMSE is 10.0 units, n=3, and using Z=1.96 for 95% confidence: \(\text{Forecast Bias Confidence Interval} = 3.33 \pm 1.96 \times \frac{10.0}{\sqrt{3}} \approx 3.33 \pm 11.32\) \(\text{Interval} \approx [-7.98, 14.65]\) Interpretation: we are 95% confident the true forecast bias lies between -7.98 and 14.65 units.
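A minimal Python sketch of the interval, assuming a normal approximation with RMSE standing in for the error standard deviation as in the formula above (the function name is illustrative):

```python
import math

def bias_confidence_interval(bias, rmse, n, z=1.96):
    """Normal-approximation confidence interval for forecast bias."""
    half_width = z * rmse / math.sqrt(n)
    return bias - half_width, bias + half_width

low, high = bias_confidence_interval(10 / 3, 10.0, 3)
print(round(low, 2), round(high, 2))  # -7.98 14.65
```

With only n=3 observations this interval is very wide; since it contains zero, the bias is not statistically distinguishable from zero at the 95% level.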