dm.test {forecast}    R Documentation
Description:

The Diebold-Mariano test compares the forecast accuracy of two forecast methods. The null hypothesis is that they have the same forecast accuracy.
Usage:

dm.test(e1, e2, alternative = c("two.sided", "less", "greater"), h = 1, power = 2)
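As a quick illustration of the call with its defaults written out (a minimal sketch using simulated forecast errors rather than errors from real fitted models; see the Examples section for a realistic workflow):

library(forecast)

# Two artificial forecast-error series, for demonstration only
set.seed(1)
e1 <- rnorm(100)            # hypothetical errors from method 1
e2 <- rnorm(100, sd = 1.2)  # hypothetical errors from method 2

# Two-sided test of equal forecast accuracy at horizon 1 with squared-error loss
dm.test(e1, e2, alternative = "two.sided", h = 1, power = 2)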
Arguments:

e1: Forecast errors from method 1.

e2: Forecast errors from method 2.

alternative: a character string specifying the alternative hypothesis; must be one of "two.sided" (default), "greater" or "less".

h: The forecast horizon used in calculating e1 and e2.

power: The power used in the loss function. Usually 1 or 2.
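For context, the statistic is built from the loss differential of the two error series; a sketch of the standard Diebold-Mariano construction (the variance estimator actually used internally may differ in small-sample details) is

\[
d_t = |e_{1,t}|^{\text{power}} - |e_{2,t}|^{\text{power}}, \qquad
\mathrm{DM} = \frac{\bar{d}}{\sqrt{\left(\hat{\gamma}_0 + 2\sum_{k=1}^{h-1} \hat{\gamma}_k\right)/n}},
\]

where \(\bar{d}\) is the mean loss differential, \(\hat{\gamma}_k\) is the lag-k sample autocovariance of \(d_t\), and \(n\) is the number of forecast errors. Under the null hypothesis the statistic is asymptotically standard normal; finite-sample implementations often compare it with a t distribution instead.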
Value:

A list with class "htest" containing the following components:
statistic: the value of the DM-statistic.

parameter: the forecast horizon and loss function power used in the test.

alternative: a character string describing the alternative hypothesis.

p.value: the p-value for the test.

method: a character string with the value "Diebold-Mariano Test".

data.name: a character vector giving the names of the two error series.
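Because the result is a standard "htest" object, individual components can be extracted directly; for example (a minimal sketch reusing the in-sample comparison from the Examples section):

library(forecast)

fit1 <- ets(WWWusage)
fit2 <- auto.arima(WWWusage)
res <- dm.test(residuals(fit1), residuals(fit2), h = 1, power = 2)

res$statistic   # DM test statistic
res$p.value     # p-value under the null of equal forecast accuracy
res$parameter   # forecast horizon and loss function power used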
Author(s):

George Athanasopoulos and Rob Hyndman
References:

Diebold, F.X. and Mariano, R.S. (1995) Comparing predictive accuracy. Journal of Business and Economic Statistics, 13, 253-263.
Examples:

# Test on in-sample one-step forecasts
f1 <- ets(WWWusage)
f2 <- auto.arima(WWWusage)
accuracy(f1)
accuracy(f2)
dm.test(residuals(f1), residuals(f2), h = 1)

# Test on out-of-sample one-step forecasts
f1 <- ets(WWWusage[1:80])
f2 <- auto.arima(WWWusage[1:80])
f1.out <- ets(WWWusage[81:100], model = f1)
f2.out <- Arima(WWWusage[81:100], model = f2)
accuracy(f1.out)
accuracy(f2.out)
dm.test(residuals(f1.out), residuals(f2.out), h = 1)