
Introduction to Generative Artificial Intelligence (AI) tools in education

Artificial Intelligence (AI) content creation tools have received extensive media coverage since the launch of ChatGPT in late November 2022.

The service reached 100 million users within two months of launch. Media coverage generated both excitement and panic, with examples of outputs successfully passing professional exams, writing passable job applications and producing work which could pass many assessments in education. Academia has been ‘stunned by ChatGPT’s essay-writing skills and usability’ (Hern 2022), but some in the sector have approached the issue from a more critical, nuanced and welcoming perspective, urging us to ‘get off the fear carousel’ (Mihal 2023), and some are just plain ‘AIngry’ and frustrated that we are being distracted by the shiny new toy (Lanclos 2022).

Ulster has established an AI in Learning & Teaching working group to consider guidance and policy changes in response to increased sector discussion on the topic. This article summarises some of the discussion at the group’s initial meetings.

ChatGPT was the first experience that many of us had with conversational interfaces to Large Language Models (LLMs): models trained on large text datasets drawn from a wide variety of sources. These conversational interfaces let us ask the system to complete tasks iteratively, refining the output it produced in response to our prompts. The system could also respond to requests to generate computer code, perform common tasks and workflows, or create images and video, as sketched in the example below.
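
To make this interaction pattern concrete, the sketch below shows iterative prompting of a conversational LLM. It is illustrative only: it assumes the OpenAI Python library as it existed in early 2023 (the interface has since changed), an API key in the OPENAI_API_KEY environment variable, and example prompts of our own choosing.

```python
# A minimal sketch of iterative, conversational prompting. Assumes the
# OpenAI Python library as available in early 2023 and an API key in the
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The conversation is a growing list of messages; the model sees the full
# history, which is what makes iterative refinement possible.
messages = [{"role": "user",
             "content": "Summarise photosynthesis in two sentences."}]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=messages)
draft = response["choices"][0]["message"]["content"]
print(draft)

# Refinement step: feed the model's own reply back with a new instruction.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Rewrite that for a ten-year-old."})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=messages)
print(response["choices"][0]["message"]["content"])
```

The same loop underlies the chat interface itself: each refinement is simply the previous exchange plus a new prompt.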

The tools have evolved over several years within research laboratories, and indeed many of our academic teams at Ulster are actively involved in AI research and teaching. In the research world there are some who dispute whether an LLM is really Artificial Intelligence, and some who are navigating the debate from a centrist perspective (Pallaghy 2022). Despite these different philosophical positions, the simplified interface and reasoned responses provided by ChatGPT seemed like a huge leap in our expectations of these tools. This has resulted in much greater visibility of the opportunities and challenges of these technologies across subject disciplines, including creative disciplines, where AI can produce images and video from natural language prompts.

Assessment and Academic Integrity

The wider HE discourse has naturally focused on assessment and concerns about academic integrity. This has prompted many technology companies, including Turnitin, to offer commercial solutions to detect the use of AI tools.

Turnitin plans to release an AI-detection feature within its product in early April 2023, which has caused concern across the sector because the feature will be released without adequate testing or a clear understanding of how it works. The continual development and evolution of these detection solutions has been described as an arms race, and arguably one that is a distraction and unlikely to be won as the technology and approaches evolve. The working group does, however, recognise that many academic teams would draw some reassurance from integrated detection tools, in Turnitin for instance, and Ulster is continuing to contribute to sector work to understand the implications of implementing such a solution.

These tools may also provide reassurance to Professional, Statutory and Regulatory Bodies (PSRBs), but some bodies may have concerns about AI developments and, together with some academic colleagues, may be tempted to insist on assessment by examination in response. We encourage PSRBs and colleagues to consider inclusive and authentic assessment design as the default, while recognising that PSRBs may mandate other approaches.

Some organisations have tried to ban the use of AI tools on networks within educational settings, again a solution that the working group considered neither effective nor beneficial. Many of the currently available AI tools will evolve and become more integrated into existing software such as the Microsoft Office suite and search engines. Banning the use of AI technology does not sit comfortably with Ulster’s approach to Learning & Teaching and will not be part of our recommendations.

The Ulster Context

Ulster has a long history of active learning pedagogies combined with authentic assessment design, and the working group felt that the current discussion of AI in assessment can help us refocus on assessment design that measures active learning, critical thinking, problem-solving and reasoning skills, rather than written assignments that measure declarative knowledge. Personalised, reflective accounts, developed iteratively as understanding develops, are also valuable, and some subject disciplines have been using video and oral presentations to measure understanding and create a more personalised approach to assessment. These diverse approaches are recognised as good practice across the sector, being more inclusive while reducing the risk of plagiarism.

Ulster recognises that staff and students are using AI technology now and will continue to do so in both personal and professional settings. Indeed, AI will be part of many of our students’ future working lives, and new roles and job opportunities in the sector will follow.

The working group has heard from academic teams who have been exploring how AI tools can be used in their context and how they encourage the use of AI tools within the curriculum. It seems appropriate to help develop staff and students’ digital literacy skills so that they can use the tools appropriately and responsibly. Much of this existing work sits within subject disciplines that understand the limitations of the tools, and the working group recognises that further guidance and support will be necessary before this approach is more widely adopted. The working group has recommended creating space for critical dialogue, particularly on ethical and sustainability issues, while recognising that there will be great diversity in approaches across the organisation.

The working group is aware that many of the tools will move to paid subscription models, which will restrict how they can be used in an educational setting.

OpenAI publishes excellent documentation for educator use of its tools, which describes ethical issues and limitations in detail and includes the following:

“Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't, or, by misrepresenting the relative strength of two opposing arguments.”

For these reasons, it is crucial that students and staff know how to evaluate the trustworthiness of information using external, reliable sources.

Resources 

As a general starting point, the QAA briefing paper is a useful note to support staff in tackling the challenges to academic integrity brought about by the rise of artificial intelligence tools.

Times Higher Education’s The Campus has a spotlight feature titled ‘AI transformers like ChatGPT are here, so what next?’

As mentioned earlier in this article, OpenAI’s documentation for educator use of its tools offers some useful insight into the limitations and opportunities of the tools.

What’s next? 

The working group is currently:

  • Authoring changes to the academic integrity policy to ensure that students understand what constitutes appropriate use of AI tools.

  • Reviewing student declarations that can be used at the point of submission.

  • Developing AI guidance for students.

  • Developing AI guidance for staff.

  • Developing referencing guidelines.

  • Considering the potential impact of AI on the day-to-day working lives of staff; guidelines will emphasise the limitations of AI and the ethical considerations for research, educational and administrative contexts.

  • Informing academic development activity around assessment design and authentic assessment.

  • Developing AI evaluation strategies to help monitor its use and impact. This will include understanding the funding models, as these tools will not be free once the evaluation phases end.

  • Considering accessibility implications of AI.
