Pérez-Asurmendi, Patrizia and de Andrés Calle, Rocío (2023): Self-betrayal voters: The Spaniards Case.
PDF: MPRA_paper_119372.pdf (862 kB)
Abstract
This contribution deals with the measurement of concordance between two characteristics reported by the same individuals at different points in time. Although intuition tells us that the declared opinion should be the same each time, the evidence does not always bear this out. For such cases, several measures for calculating agreement can be found in the literature, but using them discards much of the available information, since they do not take disagreement into account. To overcome this drawback, we propose Cohen’s Kappa statistic, an easy-to-interpret tool that measures the agreement between two categorical characteristics while also accounting for disagreement. To illustrate its applicability, we analyze the concordance between declared voting intention and voting decision, and between declared sympathy for political parties before and after the election, with data from the 2015 General Election in Spain.
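Since the paper's central tool is Cohen's Kappa, a minimal sketch of its computation from a cross-tabulation may help fix ideas. The party counts below are hypothetical illustrations, not data from the paper; the function name is likewise an assumption for this sketch.

```python
# Minimal sketch: Cohen's Kappa for two categorical variables,
# e.g. declared voting intention (rows) vs. actual vote cast (columns).
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    """Cohen's Kappa from a square contingency table.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement (mass on the diagonal) and p_e is the agreement
    expected by chance from the marginal distributions.
    """
    total = table.sum()
    p_o = np.trace(table) / total           # observed agreement
    row_marg = table.sum(axis=1) / total    # intention shares
    col_marg = table.sum(axis=0) / total    # decision shares
    p_e = np.dot(row_marg, col_marg)        # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 3-party cross-tabulation; the off-diagonal cells
# are the "self-betrayal" voters who switched between interviews.
table = np.array([[120,  15,   5],
                  [ 10,  90,  20],
                  [  5,  10,  75]])
print(f"kappa = {cohens_kappa(table):.3f}")  # 1 = perfect, 0 = chance-level
```

For interpreting the magnitude, the conventional benchmarks of Landis and Koch (1977), cited in the references below, read values of 0.61–0.80 as "substantial" agreement. The same computation is available off the shelf as scikit-learn's `cohen_kappa_score`, which takes the two paired label vectors directly.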
| Item Type: | MPRA Paper |
|---|---|
| Original Title: | Self-betrayal voters: The Spaniards Case |
| English Title: | Self-betrayal voters: The Spaniards Case |
| Language: | English |
| Keywords: | Cohen’s Kappa statistic; Concordance; Categorical data; Voting behaviour |
| Subjects: | C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C10 - General<br>D - Microeconomics > D7 - Analysis of Collective Decision-Making > D72 - Political Processes: Rent-Seeking, Lobbying, Elections, Legislatures, and Voting Behavior<br>Z - Other Special Topics > Z1 - Cultural Economics; Economic Sociology; Economic Anthropology > Z13 - Economic Sociology; Economic Anthropology; Social and Economic Stratification |
| Item ID: | 119372 |
| Depositing User: | Mrs Patrizia Pérez-Asurmendi |
| Date Deposited: | 23 Dec 2023 08:42 |
| Last Modified: | 23 Dec 2023 08:42 |
| References: | Abraira, V., 2001. El índice kappa. Medicina de Familia. SEMERGEN 27, 247–249.<br>Bennett, E.M., Alpert, R., Goldstein, A.C., 1954. Communications through limited-response questioning. Public Opinion Quarterly 18, 303–308.<br>Cohen, J., 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 37–46.<br>Goodman, L., Kruskal, W., 1979. Measures of association for cross classifications, in: Measures of Association for Cross Classifications. Springer Series in Statistics. Springer, New York, pp. 2–34.<br>Gwet, K., 2008. Computing inter-rater reliability and its variance in the presence of high agreement. The British Journal of Mathematical and Statistical Psychology 61, 29–48.<br>Gwet, K., 2012. Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters.<br>Landis, J.R., Koch, G.G., 1977. The measurement of observer agreement for categorical data. Biometrics 33, 159–174.<br>McHugh, M., 2012. Inter-rater reliability: the kappa statistic. Biochemia Medica 22, 276–282.<br>Scott, W.A., 1955. Reliability of content analysis: The case of nominal scale coding. The Public Opinion Quarterly 19, 321–325.<br>Tinsley, H.E., Weiss, D.J., 1975. Inter-rater reliability and agreement of subjective judgments. Journal of Counseling Psychology 22, 358–376.<br>Tinsley, H.E., Weiss, D.J., 2000. Inter-rater reliability and agreement, in: Handbook of Applied Multivariate Statistics and Mathematical Modeling. Academic Press, San Diego, pp. 95–124. |
| URI: | https://mpra.ub.uni-muenchen.de/id/eprint/119372 |