Evaluating State-of-the-Art Forecasting Ensembles and Meta-Learning Strategies for Model Fusion
release_a7kkx7jbo5cyplozwiyaywvzda
by
Pieter Cawood, Terence van Zyl
2022
Abstract
Hybridisation and ensemble learning are popular model fusion techniques for
improving the predictive power of forecasting methods. With
limited research investigating the combination of these two promising approaches, this
paper focuses on the utility of the Exponential-Smoothing-Recurrent Neural
Network (ES-RNN) in the pool of base models for different ensembles. We compare
against several state-of-the-art ensembling techniques, with arithmetic model
averaging as a benchmark. We experiment with the M4 forecasting data set of
100,000 time-series, and the results show that the Feature-based Forecast Model
Averaging (FFORMA), on average, is the best technique for late data fusion with
the ES-RNN. However, on the M4's Daily data subset, stacking was the only
ensemble that successfully handled the case where all base models perform
similarly. Our experimental results indicate that we attain state-of-the-art
forecasting results relative to the N-BEATS benchmark. We
conclude that model averaging is a more robust ensemble than model selection
and stacking strategies. Further, the results show that gradient boosting is
superior for implementing ensemble learning strategies.
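To make the fusion strategies in the abstract concrete, the following is a minimal sketch of late data fusion over base-model forecasts: arithmetic model averaging (the paper's benchmark) versus a weighted combination of the kind FFORMA produces. The forecast values and the fixed weights are illustrative assumptions; in FFORMA the weights come from a gradient-boosted meta-learner trained on time-series features, which is not reproduced here.

```python
import numpy as np

# Hypothetical base-model forecasts for one series over a 4-step horizon.
# The paper's base-model pool includes the ES-RNN; these numbers are
# invented purely for illustration.
forecasts = np.array([
    [10.0, 11.0, 12.0, 13.0],  # e.g. ES-RNN
    [ 9.0, 10.5, 12.5, 14.0],  # e.g. a statistical base model
    [11.0, 11.5, 11.5, 12.0],  # e.g. a naive/seasonal base model
])

def arithmetic_average(f):
    """Benchmark late fusion: simple (unweighted) model averaging."""
    return f.mean(axis=0)

def weighted_average(f, w):
    """FFORMA-style late fusion: a convex combination of base forecasts.
    The weights here are fixed assumptions; FFORMA learns them per series
    from features via gradient boosting."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # normalise so the weights sum to 1
    return w @ f     # weighted sum over the model axis

simple = arithmetic_average(forecasts)
weighted = weighted_average(forecasts, [0.5, 0.3, 0.2])
```

Model selection is the degenerate case of the weighted average where one weight is 1 and the rest are 0, which is one way to see why averaging tends to be the more robust strategy when base-model performances are close.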
Archived Files and Locations
application/pdf, 13.5 MB (file_56peckguwnelxk72sxrysxbupa)
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2203.03279v3