A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction

Journal contribution, posted on 2022-11-09, authored by Christoph Bergmeir, Rob J Hyndman and Bonsoo Koo.
One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). However, when it comes to time series forecasting, because of the inherent serial correlation and potential non-stationarity of the data, its application is not straightforward, and it is often omitted by practitioners in favour of an out-of-sample (OOS) evaluation. In this paper, we show that the particular setup in which time series forecasting is usually performed using machine learning methods renders the use of standard K-fold CV possible. We present theoretical insights supporting our arguments. Furthermore, we present a simulation study in which we show empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
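
The sketch below is our own illustration (not code from the paper), assuming the usual machine-learning setup the abstract refers to: the series is embedded into vectors of lagged values so that each row becomes a (features, target) pair, after which standard K-fold CV and a conventional last-block OOS split can both be applied. The simulated series, lag order, model and error measure are arbitrary choices for demonstration.

```python
# A minimal sketch of a purely autoregressive ML formulation: each row holds
# lagged values of the series, so standard K-fold CV can be run on the
# embedded data and compared with a final-block out-of-sample evaluation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Simulated AR(2)-like series; the lag order p = 2 is an illustrative choice.
n, p = 500, 2
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.5)

# Embed the series: X[i] = (y[i], ..., y[i+p-1]), target[i] = y[i+p].
X = np.column_stack([y[i:n - p + i] for i in range(p)])
target = y[p:]

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Standard K-fold CV on the embedded data.
cv_scores = cross_val_score(
    model, X, target,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_mean_squared_error",
)
print("5-fold CV MSE:", -cv_scores.mean())

# Conventional OOS evaluation on the final 20% of the embedded data.
split = int(0.8 * len(target))
model.fit(X[:split], target[:split])
oos_mse = np.mean((model.predict(X[split:]) - target[split:]) ** 2)
print("OOS MSE:", oos_mse)
```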

History

Classification-JEL: C52, C53, C22
Creation date: 2015-04-01
Working Paper Series Number: 10/15
Length: 16
File-Format: application/pdf
Handle: RePEc:msh:ebswps:2015-10
