The challenge of AI: adapt, improve, incorporate 

There can be no doubt that AI will have a major impact on education, for both students and educators.

ChatGPT, an AI chatbot from Silicon Valley start-up OpenAI, is just the beginning. Google has since launched its own competitor, known as Bard, and Microsoft has announced an AI-powered version of Bing – the first of what will no doubt be many AI integrations to come. As has been noted in just about every article written on the subject, the genie is out of the bottle.

While some have called the current abilities of ChatGPT superficial, the leap forward in technology has understandably generated a strong and varied reaction in the education community, with some wondering whether the essay is finished as a form of assessment and, indeed, whether the machines are coming for educators’ jobs.

Some institutions, such as the New York City public school system, have banned ChatGPT outright, citing ‘negative impacts on student learning, and concerns regarding the safety and accuracy of content’. Other institutions are rethinking plagiarism, which has historically been defined as using someone else’s work or ideas without giving proper credit – note someone, not something. Brown University, for instance, asks who is being stolen from when students use AI-generated content. Does an algorithm count as a person?

Yet other institutions, including some of our customers, are incorporating AI into their teaching and approach to assessment – asking students to document the queries they used and changing rubrics to weight more heavily those aspects of the assessment that require human input.

Our position   

As with all new technology, AI text generators present both a threat and an opportunity for higher education. On the one hand – the positive view – if harnessed well, can ChatGPT be the catalyst to help realise the goal of more authentic assessment? On the other hand, some faculty may take the route of least short-term resistance and double down on detection, reference-checking and proctoring. Is the latter just a finger in the dyke?

As a responsible provider and partner to HE, we will evolve and develop our position as the technology itself evolves, and as we understand more about its consequences for HE. Guided by ethical principles and best practices, we will ensure that our platform supports universities’ current and future assessment strategies in a sustainable way.

For now, we are in agreement with Jisc’s take that the sector should not view AI-generated content simply as a threat – a take that ‘highlights the need to work towards integrating these tools into education rather than legislating against them’.

Blocking or banning such tools is not feasible in the long term. OpenAI is looking into adding watermarks – cryptographic signals – to ChatGPT outputs to make them more easily identifiable. In the meantime, various output detectors have emerged that flag text which may have been written by the chatbot, with plagiarism-checking providers working to detect AI-generated text. But, as many educators have pointed out, this is an arms race that is unlikely to be won by technology alone.
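To make the watermarking idea concrete, below is a minimal, purely illustrative sketch of statistical watermark detection in Python. OpenAI has not published its scheme, so everything here – the shared key, the keyed split of the vocabulary into a ‘green’ half, and the detection threshold – is an assumption for illustration only.

    import hashlib
    import math

    SECRET_KEY = b"shared-secret"  # assumption: generator and detector share this key

    def is_green(prev_token, token):
        # Pseudorandomly assign roughly half of all tokens to a 'green' list,
        # keyed on the secret and the preceding token. A watermarking generator
        # would bias its sampling towards green tokens.
        digest = hashlib.sha256(SECRET_KEY + prev_token.encode() + token.encode())
        return digest.digest()[0] % 2 == 0

    def watermark_z_score(tokens):
        # Under the null hypothesis (unwatermarked text), each token is green
        # with probability 0.5, so the green count is Binomial(n, 0.5).
        n = len(tokens) - 1
        if n <= 0:
            return 0.0
        greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
        return (greens - 0.5 * n) / math.sqrt(0.25 * n)

    # A large z-score (say, above 4) would suggest the text carries the watermark:
    # print(watermark_z_score("some generated text ...".split()))

A real detector would operate on model tokens rather than words and would need to cope with paraphrasing, but the statistical principle is the same – which is also why detection can only ever be probabilistic, not certain.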

The keywords, then, when it comes to assessment in the light of AI text generators, are adapt, improve and incorporate. (The AUA offers a good summary of what ChatGPT means for educators.)

We may expect ChatGPT to act as a further catalyst for the return to on-campus digital exams, as institutions respond to fears about how to mitigate its impact. Some faculty may advocate a return to the old days of pen-and-paper exams, but going backwards – or even standing still – is not a feasible strategy – not for the good of students and not for the longevity of the university as an institution. We as a sector can’t do nothing. Rather than try to contain the technology, we will have to confront its implications and embrace it in our pedagogies and policies.

Next steps 

Our view is that universities and other educational institutions should approach the use of ChatGPT and other AI tools with a focus on ethical principles and best practices. Specifically, they should consider:  

  • Developing clear policies and guidelines for the use of AI tools, such as ChatGPT, in education and research. These policies should define the acceptable use of these tools and the consequences for misuse.  

  • Educating students and faculty about the capabilities and limitations of AI tools, as well as the ethical considerations surrounding their use. This will help ensure that the tools are used in a responsible and effective way. AI literacy will be a crucial skill going forward.

  • Incorporating AI tools into the curriculum in a way that enhances student learning and supports the goals of the institution, and updating syllabi accordingly. For example, AI tools can be used to enhance research, writing and critical thinking skills.

  • Continuously monitoring the use of AI tools to ensure that they are being used in accordance with policies and guidelines and that they are having the intended impact on student learning.

  • Collaborating with other universities, research institutions and industry partners like UNIwise to share best practices and stay up to date with the latest developments within AI.  

  • Being transparent about the use of AI tools and ensuring that students are accountable for their actions. A code of conduct or honour code – to help establish norms of academic integrity – can be useful here. 

Overall, by approaching the use of ChatGPT and other AI tools in a responsible and ethical way, educational institutions can ensure that these tools benefit students, emphasise the value of human instruction and of a university education, and support the mission of the institution.

How digital assessment can help 

In addition to bringing digital assessment back to campus, using a diverse range of assessment formats can be an effective countermeasure against cheating with AI text generators like ChatGPT. Formats that make AI-assisted cheating more difficult include:

  • Open-book or take-home assessments or exams that require distinctly human cognitive tasks, such as reflection, the application of knowledge, explanations of thought processes and demonstrations of sophisticated thinking.

  • Oral assessments or exams, in which students must answer questions verbally, making it difficult to use AI tools.

  • Randomised questions, where each student’s exam is drawn from a larger pool of questions, can make it more difficult for students to share answers and cheat (see the sketch after this list).

  • And many other combinations and variations that likewise can transcend the written format through the use of video, audio, graphics, applications, debates, podcasts, etc.
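As a simple illustration of the randomised-questions approach mentioned above, the sketch below draws each student’s paper from a shared pool. The pool, names and seeding strategy are illustrative assumptions, not a description of how any particular platform implements this.

    import random

    QUESTION_POOL = [f"Question {i}" for i in range(1, 51)]  # assumed 50-item pool

    def build_exam(student_id, num_questions=10):
        # Seed on the student ID so the same paper can be regenerated for
        # marking, while different students receive different draws.
        rng = random.Random(student_id)
        return rng.sample(QUESTION_POOL, num_questions)

    print(build_exam("student-042"))

Seeding deterministically on the student ID is one possible design choice; a platform might instead store each draw explicitly.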

As a digital assessment platform that supports a wide range of exam and assessment types, WISEflow can help institutions counter and adapt to ChatGPT’s abilities. WISEflow allows educators to create assessments tailored to the specific needs of their students and the material being taught; in the long run, this will help secure academic integrity while also preparing students for the 21st century. In the end, it is not about taming the AI to protect old ways of assessment, but about embracing its potential and reimagining and furthering assessment practices to align with the reality we are living in.

 
