Munich Personal RePEc Archive

Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight

Gorny, Paul M. and Groos, Eva and Strobel, Christina (2024): Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight.

MPRA_paper_121065.pdf

Abstract

Regulators of artificial intelligence (AI) emphasize the importance of human autonomy and oversight in AI-assisted decision-making (European Commission, Directorate-General for Communications Networks, Content and Technology, 2021; 117th Congress, 2022). Predictions are the foundation of all AI tools; thus, if AI can predict our decisions, how might these predictions influence our ultimate choices? We examine how salient, personalized AI predictions affect decision outcomes and investigate the role of reactance, i.e., an adverse reaction to a perceived reduction in individual freedom. We trained an AI tool on previous dictator game decisions to generate personalized predictions of dictators’ choices. In our AI treatment, dictators received this prediction before deciding. In a treatment involving human oversight, the decision of whether participants in our experiment were provided with the AI prediction was made by a previous participant (a ‘human overseer’). In the baseline, participants did not receive the prediction. We find that participants sent less to the recipient when they received a personalized prediction, but the strongest reduction occurred when the AI’s prediction was intentionally not shared by the human overseer. Our findings underscore the importance of considering human reactions to AI predictions in assessing the accuracy and impact of these tools, as well as the potential adverse effects of human oversight.


Contact us: mpra@ub.uni-muenchen.de


MPRA is a RePEc service hosted by the University Library of LMU Munich.