TSCV: A Python package for Time Series Cross-Validation

Train-Gap-Test

Cross-validation, a popular tool in machine learning and statistics, is crucial for model selection and hyperparameter tuning. This tool typically requires that the data be independent and identically distributed. However, this assumption is violated by time series, where successive observations are interdependent.

Many cross-validation packages, such as scikit-learn, rely on the independence assumption and therefore cannot handle time series. To solve this problem, I developed a Python package, TSCV, which enables cross-validation for time series without requiring independence.

The intuition behind this package is that temporal dependence can be mitigated by introducing gaps between the training set and the test set. Once the gaps are in place, leave-p-out, K-Fold, and similar schemes become valid again. Research shows that cross-validation with gaps outperforms cross-validation without them.

The best feature of this package is that it works seamlessly with scikit-learn. You can pass every class (i.e., cross-validator) in this package as the cv argument to scikit-learn functions such as cross_validate and cross_val_score. Indeed, this package is designed as an extension of scikit-learn rather than a standalone package.

In the following, I will present the various cross-validators in my package.

  • Gap leave-p-out
  • Gap K-Fold
  • Gap walk-forward
  • Gap train-test split

At the end, I will demonstrate how to use this extension with scikit-learn seamlessly.

Gap leave-p-out

An ordinary leave-p-out cross-validation uses every combination of $p$ data samples as the test set and the remaining samples as the training set. The test sets need not be contiguous.

[Figure: Leave-p-out]
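
For comparison, ordinary leave-p-out is already available in scikit-learn. Here is a minimal, purely illustrative sketch (not part of TSCV) showing that every combination of $p$ samples serves as a test set once:

from sklearn.model_selection import LeavePOut

# Ordinary leave-p-out: every combination of p samples is a test set once.
cv = LeavePOut(p=2)
for train, test in cv.split(range(4)):
    print("train:", train, "test:", test)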

Gap leave-p-out, as its name suggests, introduces gaps between the training set and the test set. Since it is not economical to “shatter” the test set, contiguous test sets are preferred. Also, the gaps before and after the test set need not be of equal size.

[Figure: Gap leave-p-out]

The gap leave-p-out cross-validation can be reproduced with the GapLeavePOut class, as in the following code.

>>> from tscv import GapLeavePOut
>>> cv = GapLeavePOut(p=3, gap_before=1, gap_after=2)
>>> for train, test in cv.split(range(7)):
...    print("train:", train, "test:", test)

train: [5 6]   test: [0 1 2]
train: [6]     test: [1 2 3]
train: [0]     test: [2 3 4]
train: [0 1]   test: [3 4 5]
train: [0 1 2] test: [4 5 6]
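
To make the gap arithmetic explicit, here is a small illustrative sketch (my own re-derivation, not TSCV internals) that reproduces the first split above:

import numpy as np

# 7 samples, test block [0 1 2], a gap of 1 before and 2 after the test set.
n, gap_before, gap_after = 7, 1, 2
test = np.arange(0, 3)                                        # [0 1 2]
blocked = np.arange(test[0] - gap_before, test[-1] + gap_after + 1)
train = np.setdiff1d(np.arange(n), blocked)                   # [5 6]
print("train:", train, "test:", test)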

Gap K-Fold

An ordinary K-Fold splits the data into $K$ folds, then uses each fold in turn as the test set and the remaining folds as the training set. The data are preferably shuffled before being split into $K$ folds.

[Figure: K-Fold]
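
For reference, ordinary K-Fold is provided by scikit-learn; a minimal sketch for comparison (shuffling is appropriate only for i.i.d. data):

from sklearn.model_selection import KFold

# Ordinary K-Fold with shuffling, suitable for i.i.d. data but not for time series.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for train, test in cv.split(range(10)):
    print("train:", train, "test:", test)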

Gap K-Fold also splits the data into $K$ folds. The test sets are left untouched, while the training sets have the gaps removed. Unlike ordinary K-Fold, gap K-Fold does not shuffle the data, so the temporal order is preserved.

[Figure: Gap K-Fold]

The gap K-Fold cross-validation can be reproduced with the GapKFold class as in the following code.

>>> from tscv import GapKFold
>>> cv = GapKFold(n_splits=5, gap_before=2, gap_after=1)
>>> for train, test in cv.split(range(10)):
...    print("train:", train, "test:", test)

train: [3 4 5 6 7 8 9] 	 test: [0 1]
train: [5 6 7 8 9] 	 test: [2 3]
train: [0 1 7 8 9] 	 test: [4 5]
train: [0 1 2 3 9] 	 test: [6 7]
train: [0 1 2 3 4 5] 	 test: [8 9]

Gap walk-forward

Walk-forward is very similar to K-Fold, except that the training set contains only the data preceding the test set; the data after the test set are ignored.

[Figure: Walk forward]
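
In scikit-learn, this plain walk-forward scheme is provided by TimeSeriesSplit; a minimal sketch for comparison (no gap is involved here):

from sklearn.model_selection import TimeSeriesSplit

# Ordinary walk-forward: each training set contains only earlier observations.
cv = TimeSeriesSplit(n_splits=3)
for train, test in cv.split(range(10)):
    print("train:", train, "test:", test)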

Gap walk-forward works similarly: it introduces a gap between the training set and the test set, and the samples in this gap are removed from the training set.

[Figure: Gap walk-forward]

The gap walk-forward cross-validation can be reproduced with the GapWalkForward class as in the following code.

>>> from tscv import GapWalkForward
>>> cv = GapWalkForward(n_splits=3, gap_size=1, test_size=2)
>>> for train, test in cv.split(range(10)):
...    print("train:", train, "test:", test)

train: [0 1 2] 	         test: [4 5]
train: [0 1 2 3 4]       test: [6 7]
train: [0 1 2 3 4 5 6] 	 test: [8 9]

Gap walk-forward is less efficient than gap K-Fold and gap leave-p-out in that it does not make full use of the data set. However, it can be advantageous when the time series is non-stationary.

Gap train-test split

Unlike the classes above, gap train-test split is not a cross-validator but a one-line function that splits the data set into a training set and a test set while removing the gap between them.

[Figure: Gap train-test split]

The above split can be reproduced with the gap_train_test_split function as in the following code.

import numpy as np
from tscv import gap_train_test_split
X, y = np.arange(20).reshape((10, 2)), np.arange(10)
X_train, X_test, y_train, y_test = gap_train_test_split(X, y, test_size=2, gap_size=2)
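
Continuing the snippet above, one can inspect the returned arrays. With 10 samples, a test size of 2, and a gap of 2, the training set should keep the remaining 6 samples; the 2 samples falling in the gap are discarded (the expected shapes below are my own arithmetic, not program output):

# Inspect the split: 10 samples = 6 training + 2 gap (discarded) + 2 test.
print(X_train.shape, X_test.shape)   # expected: (6, 2) (2, 2)
print(y_train, y_test)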

Use them with scikit-learn

As mentioned earlier, the best feature of this package is that you can use it seamlessly with scikit-learn. Let me show you with an example.

First, let us load the data, the algorithm, and the evaluation function.

import numpy as np
from sklearn import datasets
from sklearn import svm
from sklearn.model_selection import cross_val_score
from tscv import GapKFold

iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1)

Then we construct a GapKFold object and pass it as the cv argument to the cross_val_score function.

cv = GapKFold(n_splits=5, gap_before=5, gap_after=5)
scores = cross_val_score(clf, iris.data, iris.target, cv=cv)

As you can see, the classes in this package are used in exactly the same way as the cross-validators in scikit-learn.
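
The same object also works anywhere scikit-learn accepts a cv splitter, for instance in a grid search. Here is a minimal sketch continuing the example above; the parameter grid is chosen purely for illustration:

from sklearn.model_selection import GridSearchCV

# Any scikit-learn utility that takes a cv splitter accepts GapKFold as well.
param_grid = {'C': [0.1, 1, 10]}   # illustrative values only
search = GridSearchCV(svm.SVC(kernel='linear'), param_grid, cv=cv)
search.fit(iris.data, iris.target)
print(search.best_params_)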

Resources

  • This package is open-source and is hosted on GitHub. The user guide can be found in the README file. If you like this package, please star the repository.
  • I have opened a pull request on scikit-learn. If you would like to see it merged and use it directly within scikit-learn, please comment on the pull request.

Acknowledgment

  • I would like to thank Christoph Bergmeir, Prabir Burman, and Jeffrey Racine for helpful discussions.

Bibliography

  • Bergmeir, Christoph, and José M. Benítez. “On the use of cross-validation for time series predictor evaluation.” Information Sciences 191 (2012): 192-213.
  • Bergmeir, Christoph, Rob J. Hyndman, and Bonsoo Koo. “A note on the validity of cross-validation for evaluating autoregressive time series prediction.” Computational Statistics & Data Analysis 120 (2018): 70-83.
  • Burman, Prabir, Edmond Chow, and Deborah Nolan. “A cross-validatory method for dependent data.” Biometrika 81.2 (1994): 351-358.
  • Racine, Jeff. “Consistent cross-validatory model-selection for dependent data: hv-block cross-validation.” Journal of Econometrics 99.1 (2000): 39-61.
  • Roberts, David R., et al. “Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure.” Ecography 40.8 (2017): 913-929.
Written on May 14, 2019