Green, Kesten C. (2008): Assessing probabilistic forecasts about particular situations.
How useful are probabilistic forecasts of the outcomes of particular situations? Potentially, they contain more information than unequivocal forecasts and, because they allow a more realistic representation of the relative likelihood of different outcomes, they might be more accurate and therefore more useful to decision makers. To test this proposition, I first compared a Squared-Error Skill Score (SESS) based on the Brier score with an Absolute-Error Skill Score (AESS), and found that the latter more closely coincided with decision makers’ interests. I then analysed data obtained in researching the problem of forecasting the decisions people make in conflict situations. In that research, participants were given lists of decisions that might be made and were asked to make a prediction either by choosing one of the decisions or by allocating percentages or relative frequencies to more than one of them. For this study I transformed the percentage and relative-frequency data into probabilistic forecasts. In most cases the participants chose a single decision. To obtain more data, I used a rule to derive probabilistic forecasts from structured analogies data, and transformed multiple singular forecasts for each combination of forecasting method and conflict into probabilistic forecasts. When compared using the AESS, probabilistic forecasts were not more skilful than unequivocal forecasts.
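The contrast between the two error measures can be sketched in code. This is a minimal illustration, not the paper's exact formulas: it assumes the observed decision is encoded as a one-hot vector over the listed decisions (the standard Brier setup), and the `skill_score` helper uses the generic form 1 − score/reference, where the choice of reference forecast (e.g. a uniform allocation across the listed decisions) is an assumption for illustration.

```python
def brier_score(probs, outcome_idx):
    """Mean squared error between forecast probabilities and the
    observed outcome encoded as a one-hot vector (Brier, 1950)."""
    outcome = [1.0 if i == outcome_idx else 0.0 for i in range(len(probs))]
    return sum((p - o) ** 2 for p, o in zip(probs, outcome)) / len(probs)

def absolute_error(probs, outcome_idx):
    """Mean absolute error against the same one-hot outcome; the
    basis for an absolute-error skill score."""
    outcome = [1.0 if i == outcome_idx else 0.0 for i in range(len(probs))]
    return sum(abs(p - o) for p, o in zip(probs, outcome)) / len(probs)

def skill_score(score, reference_score):
    """Generic skill score: 1 - score/reference. A forecast matching
    the reference scores 0; a perfect forecast scores 1."""
    return 1.0 - score / reference_score

# A uniform forecast over three listed decisions, outcome is decision 0:
uniform = [1/3, 1/3, 1/3]
print(brier_score(uniform, 0))      # squared-error loss of the hedged forecast
print(absolute_error(uniform, 0))   # absolute-error loss of the same forecast
# An unequivocal (single-choice) forecast of the correct decision:
print(brier_score([1.0, 0.0, 0.0], 0))  # 0.0 under either measure
```

Note how the absolute-error measure penalises the hedged uniform forecast more heavily relative to the squared-error measure, which is one way hedged and unequivocal forecasts can rank differently under the two skill scores.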
|Item Type:||MPRA Paper|
|Original Title:||Assessing probabilistic forecasts about particular situations|
|Keywords:||accuracy, error measures, evaluation, forecasting methods, prediction|
|Subjects:||D - Microeconomics > D7 - Analysis of Collective Decision-Making > D74 - Conflict; Conflict Resolution; Alliances
C - Mathematical and Quantitative Methods > C0 - General
F - International Economics > F5 - International Relations and International Political Economy > F51 - International Conflicts; Negotiations; Sanctions|
|Depositing User:||Kesten Green|
|Date Deposited:||23. May 2008 07:51|
|Last Modified:||16. Feb 2013 01:04|
Armstrong, J. S. (2001). Evaluating forecasting methods. In Armstrong, J. S. (Ed.), Principles of forecasting: a handbook for researchers and practitioners. Norwell, MA: Kluwer Academic Publishers, 443-472.
Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1-3.
Doggett, K. (1998). Glossary of verification terms (revised June, 1998). National Oceanic and Atmospheric Administration. Retrieved November 13, 2002, from http://www.sel.noaa.gov/forecast_verification/verif_glossary2.html.
Fuller, S. (2000). Verification: probability forecasts. NWP Gazette, December 2000. Retrieved November 10, 2002, from http://www.met-office.gov.uk/research/nwp/publications/nwp_gazette/dec00/verification.html.
Green, K. C. & Armstrong, J. S. (2007a). Value of expertise for forecasting decisions in conflicts. Interfaces, 37, 287-299.
Green, K. C. & Armstrong, J. S. (2007b). Structured analogies for forecasting. International Journal of Forecasting, 23, 365-376.
Green, K. C. (2005). Game theory, simulated interaction, and unaided judgment for forecasting decisions in conflicts. International Journal of Forecasting, 21, 463-472.
Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: the state of the art to 1980. In Kahneman, D., Slovic, P., & Tversky, A. (Eds.), Judgement under uncertainty: heuristics and biases. New York: Cambridge University Press, 306-334.