Some of the country’s top universities will adapt their teaching and learning to incorporate the “ethical” use of artificial intelligence.
A set of principles that will help universities take advantage of AI has been published by the Russell Group, which includes institutions such as Oxford, Cambridge, Bristol and Durham.
Backed by the vice-chancellors of the 24 Russell Group universities, the statement is intended to support the ethical and responsible use of tools such as ChatGPT while upholding academic integrity.
ChatGPT is a text-based generative AI tool that can answer questions, respond to scenarios and compose essays.
But there are fears that some students could use it to complete assignments.
However, the statement from the universities says using generative AI in teaching and assessments “has the potential to enhance the student learning experience, improve critical-reasoning skills and prepare students for the real-world applications” of generative AI technologies.
It said: “All staff who support student learning should be empowered to design teaching sessions, materials and assessments that incorporate the creative use of generative AI tools where appropriate.”
Education Secretary Gillian Keegan issued a call for evidence last month on how generative AI could be used in education settings, after some of the country’s exam boards suggested that schools should be able to make students complete coursework “in class under direct supervision” amid cheating fears linked to AI.
The Russell Group said: “Ensuring academic integrity and the ethical use of generative AI can also be achieved by cultivating an environment where students can ask questions about specific cases of their use and discuss the associated challenges openly and without fear of penalisation.”
Professor Andrew Brass, head of the School of Health Sciences at the University of Manchester, said: “We know that students are already utilising this technology, so the question for us as educators is how do you best prepare them for this, and what are the skills they need to have to know how to engage with generative AI sensibly?
“From our perspective, it’s clear that this can’t be imposed from the top down, but by working really closely with our students to co-create the guidance we provide.
“If there are restrictions for example, it’s crucial that it’s clearly explained to students why they are in place, or we will find that people find a way around it.”
He added: “Assessment will also need to evolve – as it has always done in response to new technology and workforce skills needs – to assess problem-solving and critical-reasoning skills over knowledge recall.”
Dr Tim Bradshaw, chief executive of the Russell Group, said: “AI breakthroughs are already changing the way we work and it’s crucial students get the new skills they need to build a fulfilling career.
“The transformative opportunity provided by AI is huge and our universities are determined to grasp it. This statement of principles underlines our commitment to doing so in a way that benefits students and staff and protects the integrity of the high-quality education Russell Group universities provide.”