Yang, Nanyin and Palma, Marco and Drichoutis, Andreas C. (2023): Humanization of Virtual Assistants and Delegation Choices.
Abstract
Virtual assistants powered by artificial intelligence are present in virtually every aspect of daily life. Although they are computer algorithms, most are represented with humanized personal characteristics. In two online experiments, we study whether assigning virtual assistants a gender affects the propensity to delegate a search task to them, and we compare them to human counterparts with identical characteristics. Virtual assistants generally receive higher delegation rates than humans. Gender has differential effects on delegation rates, with consequences for users' welfare. These results are driven entirely by female subjects. We also find mild spillover effects, primarily a decreased selection of male humans after subjects interact with low-quality male virtual assistants.
Item Type: | MPRA Paper |
---|---|
Original Title: | Humanization of Virtual Assistants and Delegation Choices |
Language: | English |
Keywords: | anthropomorphic features; artificial intelligence; autonomy; delegation; gender |
Subjects: | C - Mathematical and Quantitative Methods > C9 - Design of Experiments > C90 - General; D - Microeconomics > D2 - Production and Organizations > D23 - Organizational Behavior, Transaction Costs, Property Rights; D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D82 - Asymmetric and Private Information, Mechanism Design; O - Economic Development, Innovation, Technological Change, and Growth > O3 - Innovation, Research and Development, Technological Change, Intellectual Property Rights > O33 - Technological Change: Choices and Consequences, Diffusion Processes |
Item ID: | 119275 |
Depositing User: | Andreas Drichoutis |
Date Deposited: | 28 Nov 2023 16:13 |
Last Modified: | 28 Nov 2023 16:13 |
URI: | https://mpra.ub.uni-muenchen.de/id/eprint/119275 |