{"id":49799,"date":"2024-10-30T15:00:13","date_gmt":"2024-10-30T15:00:13","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=49799"},"modified":"2024-10-30T15:00:34","modified_gmt":"2024-10-30T15:00:34","slug":"are-we-prepared-for-the-looming-shadow-of-ai-hallucination","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/are-we-prepared-for-the-looming-shadow-of-ai-hallucination\/49799\/","title":{"rendered":"Are we prepared for the looming shadow of AI hallucination?"},"content":{"rendered":"

The King's Institute for AI discusses the implications of growing GenAI use in education and the risks of false information disguised by eloquent writing.

The rapid integration of generative artificial intelligence (GenAI) into education presents a double-edged sword. While its potential to personalise learning and enhance teaching is undeniable, concerns regarding ethical implications, accuracy, and student vulnerability to AI-generated misinformation are also rising.

This commentary delves into the current state of knowledge surrounding GenAI in education, highlighting the critical gap in understanding students' ability to detect AI hallucinations, defined in our research as GenAI-generated responses that contain incorrect information.

One particular concern for educators is how to embrace GenAI while creating effective assessments and maintaining academic integrity.¹ A broader concern for society is GenAI's tendency to generate false information that is often masked by coherent and eloquent writing. If undetected, unverified, and unrectified, such false information can be inadvertently used or misused, with varying degrees of danger.

In this paper, we propose the first experiment to study whether and how students at a top UK business school can detect false information created by GenAI, often termed AI hallucinations, in a high-stakes assessment context. While we confine our paper to the educational context, it is highly relevant to emerging research on the key traits and socioeconomic factors that underlie news readers' ability to recognise false information and fake news.

Our setting presents a situation in which readers (students) have abundant resources and training, as well as a vested interest, to investigate and evaluate the information (an AI-generated response to an assessment question). We aim to shed light on the extent to which educators on economics and business-related courses can evaluate students' academic performance given recent developments in GenAI and in settings where proctored, in-person examinations are to be avoided.

Our evidence on students' ability to detect incorrect information beneath the cohesive, well-written prose of GenAI responses contributes to the scholarship of teaching and learning on AI literacy in education.

A spectrum of attitudes

Research reveals a spectrum of attitudes towards GenAI in education. Studies from the UK, Norway, and Hong Kong indicate a generally optimistic outlook, with some reservations regarding accuracy, privacy, and ethical considerations.² However, a more cautious approach is evident in African contexts, where concerns about academic integrity and the misuse of tools like ChatGPT prevail.³

Interestingly, American studies suggest individual differences significantly influence GenAI perception, with students exhibiting higher confidence and motivation being more likely to trust and engage with these tools.⁴ These findings emphasise the need for a nuanced approach, balancing innovation with ethical considerations and robust oversight mechanisms.

The risks of AI hallucination

AI hallucinations encompass various misleading outputs generated by large language models (LLMs) and pose significant risks. These fabricated responses can be ambiguous, making interpretation difficult. Additionally, potential biases inherent in training data can be inadvertently reproduced by AI, potentially exacerbating existing societal inequalities. Furthermore, fragmented and inconsistent information generated by LLMs can adversely impact online safety and public trust.

Mitigating the risks: Intervention vs caution

Two main approaches exist for mitigating AI-related concerns in education: an intervention-based approach and a more cautious one. Intervention involves implementing policies and fostering open discussions about AI use, promoting transparency and accountability among stakeholders. Additionally, reviewing training data is crucial for ensuring the integrity and reliability of AI outputs. Conversely, the cautious approach advocates limiting or even refraining from using GenAI tools altogether. While intervention seeks to actively manage risks, the cautious approach prioritises complete avoidance, potentially hindering valuable practical educational applications.

The knowledge gap: Understanding student vulnerability

Existing research primarily focuses on the benefits and challenges of GenAI integration, neglecting to identify the factors influencing students' ability to detect factually inaccurate information. To address this gap, our research at King's Business School employed a multi-pronged approach to assess students' ability to identify AI hallucinations within a high-stakes economics assessment.

Assessment design

The assessment strategically incorporated AI-generated responses within a sub-question worth 15% of the assessment grade. To ensure focus on factual accuracy, explicit instructions directed students to evaluate the econometrics content of the AI response, excluding stylistic qualities.

Post-course survey

Following the assessment, a post-course survey delved deeper into student attitudes towards AI. This survey employed a four-point Likert scale (1 = Strongly Disagree to 4 = Strongly Agree) covering four key areas of AI hallucination and AI literacy.
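As a purely illustrative sketch of how such four-point Likert responses can be coded for analysis (the survey items, column names, and data below are hypothetical and are not those used in the study):

```python
# Illustrative only: coding four-point Likert responses (1 = Strongly Disagree,
# 4 = Strongly Agree) and summarising the mean score per survey item.
# The items and data are hypothetical, not taken from the King's survey.
import pandas as pd

LIKERT = {"Strongly Disagree": 1, "Disagree": 2, "Agree": 3, "Strongly Agree": 4}

responses = pd.DataFrame({
    "can_spot_hallucinations": ["Agree", "Strongly Agree", "Disagree"],
    "trusts_genai_output": ["Disagree", "Agree", "Strongly Disagree"],
})

# Map the verbal responses onto the 1-4 scale, then report the mean per item.
scored = responses.apply(lambda col: col.map(LIKERT))
print(scored.mean())
```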

Cohort exposure

We randomly divided the student cohort into two equal groups. One group received information about the overall detection rate of AI hallucinations in the course, while the other group did not. This manipulation allowed us to investigate how student exposure to such information might influence their confidence levels regarding AI detection abilities.
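A minimal sketch of this kind of random assignment is shown below (a simple shuffled split into two equal halves; the exact randomisation procedure used in the study is not described here and may differ):

```python
# Illustrative only: randomly splitting a student cohort into two equal groups,
# one shown the overall hallucination-detection rate and one not.
# The exact randomisation procedure used in the study may differ.
import random

def split_cohort(student_ids, seed=42):
    ids = list(student_ids)
    random.Random(seed).shuffle(ids)   # reproducible shuffle
    half = len(ids) // 2
    return ids[:half], ids[half:]      # (informed group, uninformed group)

informed, uninformed = split_cohort(range(1, 101))
print(len(informed), len(uninformed))  # 50 50
```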

Findings

Our preliminary findings reveal a fascinating interplay between academic skills, critical thinking, and student confidence in the context of AI detection.