UK universities have come together to establish a set of guiding principles aimed at ensuring that students and staff are well-versed in artificial intelligence (AI). This move comes as the education sector grapples with adapting teaching methods and assessment practices to accommodate the increasing use of generative AI. The principles have been endorsed by vice-chancellors from all 24 Russell Group universities, who believe they will enable institutions to harness the potential of AI while safeguarding academic rigor and integrity.
Previously, there were discussions about prohibiting the use of software such as ChatGPT in education to prevent cheating. However, the new guidance suggests that students should be taught how to use AI appropriately in their studies, while also being made aware of the risks of plagiarism, bias, and inaccuracy in generative AI. Additionally, staff will receive training to equip them with the skills needed to assist students, many of whom are already incorporating ChatGPT into their assignments. As a result, new methods of assessing students are likely to emerge to mitigate the risk of cheating.
All 24 Russell Group universities have reviewed their academic conduct policies and guidance to account for the rise of generative AI. The updated guidance serves to clarify to both students and staff where the use of generative AI is deemed inappropriate. Its purpose is to support individuals in making informed decisions and empower them to utilize these tools appropriately, while also encouraging them to acknowledge their use when necessary.
Developed in collaboration with AI and education experts, these principles represent an initial step in what is expected to be a challenging period of transformation in higher education, as AI continues to reshape the world. The five guiding principles state that universities will facilitate AI literacy for both students and staff, ensure that staff are equipped to guide students in the appropriate use of generative AI tools, adapt teaching and assessment methods to incorporate the ethical use of AI while ensuring equal access, uphold academic integrity, and share best practices as the technology evolves.
Dr. Tim Bradshaw, the chief executive of the Russell Group, emphasized the immense transformative potential of AI and the commitment of universities to harness it. He stated, “This statement of principles underlines our commitment to doing so in a way that benefits students and staff and protects the integrity of the high-quality education Russell Group universities provide.”
Prof. Andrew Brass, head of the School of Health Sciences at the University of Manchester, highlighted the need to prepare students for the increasing use of generative AI and to equip them with the skills necessary to engage with it sensibly. He emphasized the importance of working closely with students to co-create the guidance provided, as top-down imposition would not be effective. He also stressed the need for clear explanations to students if any restrictions are put in place, to prevent them from finding ways to circumvent them.
Prof. Michael Grove, deputy pro-vice-chancellor (education policy and standards) at the University of Birmingham, expressed his belief that the rapid growth of generative AI presents an opportunity to reevaluate assessment practices. He stated, “We have an opportunity to rethink the role of assessment and how it can be used to enhance student learning and help students appraise their own educational gain.”
Last month, Gillian Keegan, the education secretary, launched a call for evidence on the use of generative AI in education. This initiative seeks to gather views on the risks, ethical considerations, and training requirements for education workers.