Wohlrabe, Klaus and Bornmann, Lutz (2017): Normalization of citation impact in economics.
Abstract
This study is intended to facilitate fair research evaluations in economics. Field- and time-normalization of citation impact is the standard method in bibliometrics. Since citation rates for journal papers differ substantially across publication years and Journal of Economic Literature (JEL) classification codes, citation rates should be normalized when papers from different time periods and economic subfields are compared. Without normalization, both factors, although independent of research quality, bias the results of citation analyses. We introduce into economics the two most important normalized indicators in bibliometrics: (1) the mean normalized citation score (MNCS), which compares the citation impact of a focal paper with the mean impact of similar papers published in the same economic subfield and publication year, and (2) PPtop 50%, the share of papers that belong to the more highly cited half of papers in a given subfield and time period. Since the MNCS is based on arithmetic averages despite skewed citation distributions, we recommend using PPtop 50% for fair comparisons of entities in economics (e.g. researchers, institutions, or countries). In this study, we apply the method to 294 journals (computing normalized scores for 192,524 papers), assigning the journals to four citation impact classes and identifying 33 outstandingly cited economics journals.
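To make the two definitions concrete: a paper's normalized citation score is its citation count divided by the mean citation count of all papers in the same subfield and publication year, and PPtop 50% is the share of an entity's papers that lie in the more highly cited half of their subfield/year distribution. The following is a minimal sketch (not the authors' implementation), assuming a hypothetical pandas DataFrame with columns `jel_code`, `year`, and `citations`; ties at the subfield median are handled naively here.

```python
import pandas as pd

# Toy reference set: citation counts by JEL subfield and publication year
# (hypothetical data for illustration only).
papers = pd.DataFrame({
    "jel_code":  ["A11", "A11", "A11", "A12", "A12", "A12"],
    "year":      [2010, 2010, 2010, 2010, 2010, 2010],
    "citations": [0, 4, 8, 1, 1, 10],
})

# Reference groups: papers from the same JEL subfield and publication year.
group = papers.groupby(["jel_code", "year"])["citations"]

# Normalized citation score: citations divided by the group mean, so a
# value of 1.0 means exactly average impact for that subfield and year.
papers["ncs"] = papers["citations"] / group.transform("mean")

# Top-50% flag: strictly above the group median. Papers tied exactly at
# the median are excluded here; fractional counting of ties is one of the
# refinements discussed in the percentile literature.
papers["top50"] = papers["citations"] > group.transform("median")

# Aggregate over an entity's papers (here, the whole toy set).
print("MNCS:", papers["ncs"].mean())        # mean normalized citation score
print("PPtop50%:", papers["top50"].mean())  # share of papers in the top half
```

On a complete reference set the MNCS averages to 1.0 by construction, while the naive strict-inequality rule pushes PPtop 50% slightly below 50% whenever citation counts are tied at the median.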
Item Type: | MPRA Paper |
---|---|
Original Title: | Normalization of citation impact in economics |
Language: | English |
Keywords: | Bibliometrics, citations, JEL codes, journal ranking, mean normalized citation score (MNCS), citation percentile, PPtop 50% |
Subjects: | A - General Economics and Teaching > A1 - General Economics > A11 - Role of Economics • Role of Economists • Market for Economists; A - General Economics and Teaching > A1 - General Economics > A12 - Relation of Economics to Other Disciplines; A - General Economics and Teaching > A1 - General Economics > A14 - Sociology of Economics |
Item ID: | 80384 |
Depositing User: | Klaus Wohlrabe |
Date Deposited: | 26 Jul 2017 19:53 |
Last Modified: | 27 Sep 2019 21:13 |
URI: | https://mpra.ub.uni-muenchen.de/id/eprint/80384 |