Korobilis, Dimitris and Shimizu, Kenichi (2021): Bayesian Approaches to Shrinkage and Sparse Estimation.

Abstract
In all areas of human knowledge, datasets are increasing in both size and complexity, creating the need for richer statistical models. This trend also holds for economic data, where high-dimensional and nonlinear/nonparametric inference is the norm in several fields of applied econometric work. The purpose of this paper is to introduce the reader to the realm of Bayesian model determination by surveying modern shrinkage and variable selection algorithms and methodologies. Bayesian inference is a natural probabilistic framework for quantifying uncertainty and learning about model parameters, and this feature is particularly important for inference in modern models of high dimensions and increased complexity. We begin with a linear regression setting in order to introduce various classes of priors that lead to shrinkage/sparse estimators of comparable value to popular penalized likelihood estimators (e.g. ridge, lasso). We explore various methods of exact and approximate inference, and discuss their pros and cons. Finally, we explore how priors developed for the simple regression setting can be extended in a straightforward way to various classes of interesting econometric models. In particular, the following case studies, which demonstrate the application of Bayesian shrinkage and variable selection strategies in popular econometric contexts, are considered: i) vector autoregressive models; ii) factor models; iii) time-varying parameter regressions; iv) confounder selection in treatment effects models; and v) quantile regression models. A MATLAB package and an accompanying technical manual allow the reader to replicate many of the algorithms described in this review.
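The correspondence between shrinkage priors and penalized likelihood estimators mentioned in the abstract can be illustrated with the simplest case: under a Gaussian prior beta ~ N(0, tau^2 I) with known noise variance sigma^2, the posterior mean of the regression coefficients coincides exactly with the ridge estimator with penalty lambda = sigma^2/tau^2. A minimal numerical sketch (synthetic data; all variable names are illustrative, not from the paper's MATLAB package):

```python
import numpy as np

# Synthetic regression data with a sparse true coefficient vector.
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

sigma2, tau2 = 0.25, 1.0        # known noise variance, prior variance
lam = sigma2 / tau2             # implied ridge penalty

# Penalized-likelihood (ridge) solution: (X'X + lam I)^{-1} X'y.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian posterior mean under the conjugate Gaussian prior:
# posterior precision = X'X/sigma^2 + I/tau^2, mean = V X'y/sigma^2.
V_post = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
beta_post = V_post @ (X.T @ y) / sigma2

# The two estimators agree to numerical precision.
assert np.allclose(beta_ridge, beta_post)
```

The hierarchical priors surveyed in the paper (e.g. the lasso's Laplace prior as a scale mixture of Gaussians) generalize this same conjugate structure with latent prior variances.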
Item Type:  MPRA Paper 

Original Title:  Bayesian Approaches to Shrinkage and Sparse Estimation 
Language:  English 
Keywords:  Bayesian inference, sparsity, shrinkage, hierarchical priors, computation 
Subjects (JEL codes, under C - Mathematical and Quantitative Methods):
  C11 - Bayesian Analysis: General
  C12 - Hypothesis Testing: General
  C13 - Estimation: General
  C15 - Statistical Simulation Methods: General
  C20 - Single Equation Models; Single Variables: General
  C30 - Multiple or Simultaneous Equation Models; Multiple Variables: General
  C45 - Neural Networks and Related Topics
  C46 - Specific Distributions; Specific Statistics
  C51 - Model Construction and Estimation
  C52 - Model Evaluation, Validation, and Selection
  C53 - Forecasting and Prediction Methods; Simulation Methods
  C55 - Large Data Sets: Modeling and Analysis
  C58 - Financial Econometrics
  C61 - Optimization Techniques; Programming Models; Dynamic Analysis
  C63 - Computational Techniques; Simulation Modeling
  C88 - Other Computer Software
Item ID:  111631 
Depositing User:  Dimitris Korobilis 
Date Deposited:  24 Jan 2022 09:13 
Last Modified:  24 Jan 2022 09:13 
Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305. Wang, H. and Pillai, N. S. (2013). On a class of shrinkage priors for covariance matrix estimation. Journal of Computational and Graphical Statistics, 22(3):689–707. Wang, Y. and Blei, D. M. (2019). Frequentist consistency of variational Bayes. Journal of the American Statistical Association, 114(527):1147–1161. Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11:3571–3594. Watanabe, S. (2013). A widely applicable Bayesian information criterion. Journal of Machine Learning Research, 14(1):867–897. West, M. (2003). Bayesian factor regression models in the “large p, small n” paradigm. In Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Heckerman, D., Smith, A., and West, M., editors, Bayesian Statistics 7, pages 723–732. Oxford University Press. West, M. and Harrison, J. (1997). Bayesian Forecasting and Dynamic Models, volume Springer Series in Statistics. SpringerVerlag New York. Yu, K., Chen, C., Reed, C., and Dunson, D. (2013). Bayesian variable selection in quantile regression. Statistics and its Interface, 6(2):261–274. cited By 22. Yu, K. and Moyeed, R. A. (2001). Bayesian quantile regression. Statistics & Probability Letters, 54(4):437 – 447. Yuan, M. and Lin, Y. (2005). Efficient empirical Bayes variable selection and estimation in linear models. Journal of the American Statistical Association, 100(472):1215–1225. Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with gprior distributions. In Goel, P. and Zellner, A., editors, Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, pages 233–243, New York. Elsevier Science Publishers, Inc. Zhang, Y. and Bondell, H. D. (2018). 
Variable selection via penalized credible regions with Dirichlet–Laplace globallocal shrinkage priors. Bayesian Analysis, 13(3):823 – 844. Ziniel, J. and Schniter, P. (2013). Dynamic compressive sensing of timevarying signals via approximate message passing. IEEE Transactions on Signal Processing, 61(21):5270–5284. Zou, X., Li, F., Fang, J., and Li, H. (2016). Computationally efficient sparse Bayesian learning via generalized approximate message passing. In 2016 IEEE International Conference on Ubiquitous Wireless Broadband (ICUWB), pages 1–4. 
URI: https://mpra.ub.uni-muenchen.de/id/eprint/111631