A new artificial intelligence chatbot that can generate realistic, human-like text is causing intense debate among educators, with schools, universities and students divided about whether it poses a threat to learning or will enhance it.
Key points:
- ChatGPT writes sophisticated essays and songs, and answers questions
- Cheating and ethical concerns have been raised about the AI chatbot
- But some in the education sector say the technology should be embraced
Chat Generative Pre-trained Transformer, known as ChatGPT, fluently answers questions from users online and has the ability to write bespoke essays and exam responses.
Teachers are worried that students will use the tool to cheat and plagiarise, with some universities moving quickly to rewrite exams, essay questions and integrity procedures.
Three states — New South Wales, Queensland, and Tasmania — have already banned ChatGPT in public schools, and Western Australia’s Education Department will next week decide whether to form a similar policy, in time for the start of the school year.
‘Helpful for initial draft’: student guild
ChatGPT can quickly pump out a multitude of written responses — from explaining a topic and writing speeches and computer code, to composing songs, poems, and short stories.
More than a million users signed up within a week of the tool's launch in November 2022.
In Western Australia, Curtin University student guild president Dylan Botica said students were quick to jump on board.
“For me, it’s still a bit rudimentary in its early stages, but you can definitely see how it will get better and be harder to detect,” he said.
“It is really helpful to start with that sort of initial draft or getting some ideas on paper.
“I think other people see it as a tool that they’re going to use.
[But] there have been a few students concerned their degrees won’t mean as much if everyone is using these tools.”
‘Tertiary experience’ at risk
Mr Botica said universities needed to design assessments in a variety of ways and ensure students were genuinely engaged in the learning process, so they would be less tempted to use AI.
“I don’t think you’re ever going to stop people from being able to use these services, especially as they get more sophisticated,” he said.
Curtin University student Ryan said he did not think ChatGPT was the answer, but regulations were needed to ensure academic integrity.
“It undermines the tertiary experience of students coming out of university. Because if they don’t have that foundational knowledge, then they’re probably not going to do as good a job in industry,” he said.
Fellow student Imari was apprehensive about using the tool.
“How much do you just trust this AI? Is it completely accurate? Is it taking from other sources without you realising it?” they said.
Embrace technology: headmaster
While WA’s Education Department mulls over how to respond to the technology, one independent school in Perth has already made up its mind.
Scotch College headmaster Alec O’Connell said the department should be embracing the technology, not banning it.
“I am not a great one for prohibition … I think it’s better to look for ways to work with it. Don’t be scared, go find out more,” he said.
Dr O’Connell said while screening for cheating in 2023 was complex, good teachers knew their students well enough to know when they submitted work that was not their own.
“A while ago we would’ve been sitting here discussing Wikipedia. We had to work our way through that as well,” he said.
“We need to teach students the difference between right and wrong, and submitting work that is not your own is morally incorrect.”
Cheating concerns downplayed
A law and technology expert at the University of Western Australia (UWA), Julia Powles, felt the cheating concern was “overblown”.
“Ever since we’ve had the ability to search the web or access material on Wikipedia, people have been able to draw on digital resources,” she said.
“And if you’re setting assessments that could be addressed simply by drawing on web resources, then you may have a problem.”
Associate Professor Powles said it was important to talk about technology, its ethics and where the line was as a society.
“During COVID, we were forced to use lots of technologies, [such as] contact tracing,” she said.
“In education, we had tools — eye tracking [when students sat online] assessments — and we really didn’t look at the various compromises involved in those technologies when we deployed them.
“We have the chance now. There is no rush.”
She said many technologies, including ChatGPT, had a significant environmental and social cost.
“Young people are curious about technology. But they should be curious too about the implicit compromises of products developed by foreign companies that are scraping material from all kinds of sources,” she said.
Associate Professor Powles pointed to an investigation by Time magazine, which found the multi-billion-dollar owner of ChatGPT, OpenAI, employed workers in Kenya for $2 an hour to weed the most abhorrent and sensitive content on the internet out of the tool.
Workers reportedly had to sift through sexually explicit, racist, and offensive content for hours a day, with many saying they experienced long-term mental health effects and PTSD from the work.
“There is also a significant environmental cost in terms of computational intensity to train a model like this,” she said.
“Also, what does it mean for the sustenance of our creators and writers, if their works can be taken for free, without compensation or consent, and regurgitated in a model like this?
“There is a corporate entity that’s behind ChatGPT. They have their own commercial drivers and they are backed by some of the biggest companies and most wealthy individuals in the world, whose ends are not the same as those of the people of Western Australia.”