Matkovskyy, Roman (2012): Forecasting the Index of Financial Safety (IFS) of South Africa using neural networks.
This paper investigates neural network tools, in particular the nonlinear autoregressive model with exogenous input (NARX), for forecasting the future values of the Index of Financial Safety (IFS) of South Africa. Based on the time series used to construct the IFS for South Africa (Matkovskyy, 2012), a NARX model was built to forecast future values of the index, and the results are benchmarked against those of Bayesian vector-autoregressive (BVAR) models. The results show that a NARX model applied to the IFS of South Africa and trained by the Levenberg-Marquardt algorithm can produce forecasts of adequate quality at lower computational expense than BVAR models with different priors.
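The NARX setup described in the abstract (lagged values of the index plus lagged exogenous inputs feeding a small network, fit by Levenberg-Marquardt) can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the data are synthetic stand-ins for the IFS and one exogenous driver, the lag orders and network size are arbitrary choices, and the Levenberg-Marquardt step uses SciPy's MINPACK wrapper rather than whatever toolbox the author used.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic stand-ins: y plays the role of the index, x an exogenous driver.
T = 200
x = np.sin(np.linspace(0, 8 * np.pi, T)) + 0.1 * rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * x[t - 1] \
        + 0.05 * rng.standard_normal()

# NARX design matrix: predict y(t) from p lags of y and q lags of x.
p, q = 2, 2
rows, targets = [], []
for t in range(max(p, q), T):
    rows.append(np.r_[y[t - p:t][::-1], x[t - q:t][::-1]])
    targets.append(y[t])
X = np.array(rows)
d = np.array(targets)

# One-hidden-layer tanh network with a flattened parameter vector.
H = 4                      # hidden units (arbitrary for the sketch)
n_in = X.shape[1]

def unpack(theta):
    W1 = theta[:H * n_in].reshape(H, n_in)
    b1 = theta[H * n_in:H * n_in + H]
    w2 = theta[H * n_in + H:H * n_in + 2 * H]
    b2 = theta[-1]
    return W1, b1, w2, b2

def residuals(theta):
    W1, b1, w2, b2 = unpack(theta)
    hidden = np.tanh(X @ W1.T + b1)
    return hidden @ w2 + b2 - d

theta0 = 0.1 * rng.standard_normal(H * n_in + 2 * H + 1)
fit = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt

print("in-sample RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```

In practice a NARX forecaster of this kind is run recursively (feeding predictions back in as lagged inputs) for multi-step forecasts, and the fit would be validated on a held-out segment before being compared against a BVAR benchmark.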
Item Type: MPRA Paper
Original Title: Forecasting the Index of Financial Safety (IFS) of South Africa using neural networks
Keywords: Index of Financial Safety (IFS); neural networks; nonlinear dynamic network (NDN); nonlinear autoregressive model with exogenous input (NARX); forecast
Subjects:
C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics > C45 - Neural Networks and Related Topics
E - Macroeconomics and Monetary Economics > E4 - Money and Interest Rates > E44 - Financial Markets and the Macroeconomy
G - Financial Economics > G0 - General > G01 - Financial Crises
Depositing User: Roman Matkovskyy
Date Deposited: 23 Oct 2012 19:22
Last Modified: 22 Aug 2015 10:03
ABIAD, A. (2003). Early Warning Systems: A Survey and a Regime-Switching Approach. IMF Working Paper No. 03/32.
AKAIKE, H. (1970). Statistical predictor identification, Annals of the Institute for Statistical Mathematics 22, 203-217.
AMARI, S. (1995). Learning and statistical inference, in M. A. Arbib, ed., ‘The Handbook of Brain Theory and Neural Networks’, MIT Press, Cambridge, Massachusetts, 522-526.
BACHA, H. & MEYER, W. (1992). Neural Network architecture for Load Forecasting, Proceedings of International Joint Conference on Neural Networks, IEEE Publisher, 442-447.
BARTLETT, P. (1993). Vapnik-Chervonenkis dimension bounds for two- and three-layer networks, Neural Computation 5(3), 371-373.
COHN, D. and TESAURO, G. (1992). How tight are the Vapnik-Chervonenkis bounds?, Neural Computation 4(2), 249-269.
ELISSEEFF, A. and PAUGAM-MOISY, H. (1997). Size of multilayer networks for exact learning: analytic approach. Advances in Neural Information Processing Systems 9, Cambridge, MA: The MIT Press, 162-168.
FAHLMAN, S. E. and LEBIERE, C. (1991). The Cascade-Correlation Learning Architecture, Available at http://www.cs.iastate.edu/~honavar/fahlman.pdf [Accessed 07 February 2012].
FORESEE, F.D. and HAGAN, M.T. (1997). Gauss-Newton approximation to Bayesian regularization, Proceedings of the 1997 International Joint Conference on Neural Networks, pp. 1930-5.
FRASER, A. M. and SWINNEY, H. L. (1986). Independent coordinates for strange attractors from mutual information, Physical Review A, 33, 1134-1140.
GIRGUIS, S., AHMED, K. M., El MAKKY, N. M., & HAFEZ, A. M. (2006). Mining the Future: Predicting Itemsets' Support of Association Rules Mining. Retrieved January 28, 2007, Available at: http://www.cs.pitt.edu/~shenoda/files/miningthefuture-fdm06.pdf [Accessed 07 February 2012].
HAYKIN, S. (1999). Neural Networks, A Comprehensive Foundation, 2nd Edition, Prentice Hall.
HSU, HWEI P. (1995). Signals and Systems. McGraw-Hill. p.18.
KAISER, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23(3): 187-200.
KELLEY, C.T. (1999). Iterative Methods for Optimization. SIAM Press, Philadelphia.
KOOPMAN, S. J., LUCAS, A. and SCHWAAB, B. (2010). Macro, frailty, and contagion effects in defaults: Lessons from the 2008 credit crisis. Tinbergen Institute Discussion Paper 2010-004/2, 1–40.
LAWRENCE, S., GILES, C.L., and TSOI, A.C. (1996). What size neural network gives optimal generalization? Convergence properties of backpropagation. Technical Report UMIACS-TR-96-22 and CS-TR-3617, Institute for Advanced Computer Studies, University of Maryland, College Park.
LEONTARITIS, I. L., BILLINGS, S.A. (1985). Input-output parametric model for nonlinear systems – Part II: stochastic nonlinear systems, Int.J.Control, 41(2): 329-344.
LEVENBERG, K. (1944). A Method for the Solution of Certain Non-linear Problems in Least Squares. Quarterly of Applied Mathematics, 2(2): 164-168.
LIN, T., HORNE, B. G., TINO, P. and GILES, C. L. (2000). Learning long-term dependencies in NARX recurrent neural networks, in Recurrent Neural Networks: Design and Applications, CRC Press, 133-146.
MAASS, W. (1995). Vapnik-Chervonenkis dimension of neural networks, in M. A. Arbib, ed., The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, Massachusetts, 522-526.
MACKAY, D.J.C. (1992). Bayesian Interpolation, Neural Computation, 4(3): 415-447.
MARQUARDT, D.W. (1963). An Algorithm for the Least-Squares Estimation of Nonlinear Parameters. SIAM Journal of Applied Mathematics, 11(2): 431-441.
MATKOVSKYY, R. (2012). The Index of the Financial Safety (IFS) of South Africa and Bayesian estimates for IFS Vector-Autoregressive Model. Forthcoming.
MENEZES, J. M. P. and BARRETO, G. A. (2006). A new look at nonlinear time series prediction with NARX recurrent neural network, Proceedings of the Ninth Brazilian Symposium on Neural Networks, 160-165, 23-27 Oct. 2006.
MOODY, J. (1992). The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems, in J. Moody, S. J. Hanson and R. P. Lippmann, eds, Advances in Neural Information Processing Systems, Vol. 4, Morgan Kaufmann, San Mateo, CA, 847-854.
NIELSEN, H.B. (1999). Damping Parameter in Marquardt's Method. Technical Report IMM-REP-1999-05, Technical University of Denmark. Available at http://www.imm.dtu.dk/~hbn.
NOCEDAL, J. and WRIGHT, S.J. (1999). Numerical Optimization. Springer, New York.
OPPENHEIM, A. V., WILLSKY, A. S. and NAWAB, S. H. (1997). Signals & Systems. Prentice Hall.
RIPLEY, B. (1995). Statistical ideas for selecting network architectures, Invited Presentation, Neural Information Processing Systems 8.
SIEGELMANN, H. T., HORNE, B. G. and GILES, C. L. (1997). Computational capabilities of recurrent NARX neural networks. IEEE Transactions on Systems, Man and Cybernetics, Part B, 27(2): 208-215.