Rosa-García, Alfonso (2024): Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video.
Abstract
This study explores student responses to AI-generated educational content, specifically a teaching video delivered by an AI-replicant of their professor. Using ChatGPT-4 for scripting and Heygen technology for avatar creation, the study investigates whether students' awareness of the AI's involvement influences their perception of the content's utility. Among 97 participants from first-year economics and business programs, the findings reveal a significant difference in valuation between students informed of the AI origin and those who were not: the informed group valued the content less, indicating a bias against AI-generated materials based on their origin. The paper discusses the implications of these findings for the adoption of AI in educational settings, highlighting the need to address student biases and ethical considerations in the deployment of AI-generated educational materials. This research contributes to the ongoing debate on the integration of AI tools in education and their potential to enhance learning experiences.
| Item Type: | MPRA Paper |
| --- | --- |
| Original Title: | Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video |
| Language: | English |
| Keywords: | AI-Generated Content; Virtual Avatars; Student Perceptions; Technology Adoption |
| Subjects: | I - Health, Education, and Welfare > I2 - Education and Research Institutions > I23 - Higher Education; Research Institutions. O - Economic Development, Innovation, Technological Change, and Growth > O3 - Innovation; Research and Development; Technological Change; Intellectual Property Rights > O33 - Technological Change: Choices and Consequences; Diffusion Processes |
| Item ID: | 120135 |
| Depositing User: | Alfonso Rosa-Garcia |
| Date Deposited: | 21 Feb 2024 10:14 |
| Last Modified: | 21 Feb 2024 10:14 |
| URI: | https://mpra.ub.uni-muenchen.de/id/eprint/120135 |