Maas, Benedikt (2019): Nowcasting and forecasting US recessions: Evidence from the Super Learner.
Abstract
This paper introduces the Super Learner to nowcast and forecast the probability of a US recession in the current quarter and in future quarters. The Super Learner is an ensemble algorithm that selects an optimal weighted average of several machine learning algorithms. In this paper, elastic net, random forests, gradient boosting machines and kernel support vector machines serve as the underlying base learners of the Super Learner, which is trained on real-time vintages of the FRED-MD database. The Super Learner's ability to classify future time periods as recessions versus expansions is compared with eight alternatives based on probit models. Relative model performance is evaluated with receiver operating characteristic (ROC) curves. In summary, the Super Learner predicts recessions very reliably across all forecast horizons, although on each horizon it is outperformed by a different individual benchmark model.
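The stacking procedure the abstract describes, namely fitting several base learners, generating cross-validated ("level-one") predictions, and then choosing nonnegative ensemble weights that minimise cross-validated loss, can be sketched as follows. This is a toy illustration only, not the paper's code: the base learners here are deliberately simple stand-ins (a constant predictor, a linear probability model, and k-nearest neighbours) rather than the paper's elastic net, random forest, GBM, and kernel SVM, and the data, function names, and grid-search weight solver are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (recession = 1, expansion = 0).
n, p = 200, 5
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

def learner_mean(Xtr, ytr, Xte):
    """Constant predictor: unconditional recession frequency."""
    return np.full(len(Xte), ytr.mean())

def learner_linear(Xtr, ytr, Xte):
    """Least-squares linear probability model, clipped to [0, 1]."""
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return np.clip(Xte @ beta, 0.0, 1.0)

def learner_knn(Xtr, ytr, Xte, k=15):
    """k-nearest-neighbour vote as a crude nonlinear learner."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

learners = [learner_mean, learner_linear, learner_knn]

# Step 1: out-of-fold ("level-one") predictions via K-fold cross-validation.
K = 5
folds = np.array_split(rng.permutation(n), K)
Z = np.zeros((n, len(learners)))
for te in folds:
    tr = np.setdiff1d(np.arange(n), te)
    for j, f in enumerate(learners):
        Z[te, j] = f(X[tr], y[tr], X[te])

# Step 2: pick nonnegative weights summing to 1 that minimise the
# cross-validated squared error (coarse grid search over the simplex;
# the vertices are included, so the ensemble is never worse than the
# best single base learner on the level-one data).
best_w, best_loss = None, np.inf
grid = np.linspace(0, 1, 21)
for w1 in grid:
    for w2 in grid:
        if w1 + w2 > 1:
            continue
        w = np.array([w1, w2, 1 - w1 - w2])
        loss = np.mean((Z @ w - y) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss

# Step 3: the Super Learner prediction is the weighted average.
sl_pred = Z @ best_w

# The paper evaluates models via ROC curves; the area under the curve
# (probability that a random recession period scores above a random
# expansion period, with ties counted half) summarises them.
pos, neg = sl_pred[y == 1], sl_pred[y == 0]
auc = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()
print("weights:", best_w, "CV loss:", round(best_loss, 4), "AUC:", round(auc, 3))
```

Because the weight search includes the simplex vertices, the cross-validated loss of the weighted ensemble is bounded above by that of every individual base learner, which is the "oracle" property motivating the Super Learner.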
Item Type: | MPRA Paper |
---|---|
Original Title: | Nowcasting and forecasting US recessions: Evidence from the Super Learner |
Language: | English |
Keywords: | Machine Learning; Nowcasting; Forecasting; Business cycle analysis |
Subjects: | C32 - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes; State Space Models · C53 - Forecasting and Prediction Methods; Simulation Methods · C55 - Large Data Sets: Modeling and Analysis · E32 - Business Fluctuations; Cycles |
Item ID: | 96408 |
Depositing User: | Benedikt Maas |
Date Deposited: | 16 Oct 2019 05:37 |
Last Modified: | 16 Oct 2019 05:37 |
URI: | https://mpra.ub.uni-muenchen.de/id/eprint/96408 |