...

ChatGPT was the first experience that many of us had with conversational interfaces to Large Language Models (LLMs) – models trained on large datasets of text drawn from a variety of sources. These conversational interfaces allowed us to ask the system to complete tasks iteratively, refining the output that was produced based on our prompts. The system could also respond to requests to generate computer code, perform common tasks and workflows, or create images and video.

These tools have evolved over several years within research laboratories, and many of our academic teams at Ulster are actively involved in AI research and teaching. In the research world, some dispute whether an LLM is really Artificial Intelligence, while others navigate the debate from a centrist perspective (Pallaghy 2022). Despite these different philosophical positions, the simplified interface and resulting reasoned responses provided by ChatGPT seemed like a huge leap in our expectations of these tools. This has resulted in much greater visibility of the opportunities and challenges of these technologies across subject disciplines, including creative disciplines where AI can produce images and video from natural language prompts.

Assessment and Academic Integrity

The wider HE discourse has naturally been in relation to assessment and concern about academic integrity. This has resulted in many technology companies, including Turnitin, offering commercial solutions to detect the use of AI tools.

...

Some organisations have tried to ban the use of AI tools on networks within educational settings, again a solution that the working group did not think was effective or beneficial. Many of the currently available AI tools will evolve and become more integrated into existing software such as the Microsoft Office suite and search engines. Solutions that ban the use of AI technology do not sit comfortably with Ulster’s approach to Learning & Teaching and will not be part of our recommendations.

The Ulster Context

Ulster has a long history of active learning pedagogies combined with authentic assessment design. The working group felt that the current discussions around AI in assessment can help us refocus on assessment design that measures active learning, critical thinking, problem-solving and reasoning skills, rather than written assignments that measure declarative knowledge. Personalised, reflective accounts, developed iteratively as understanding develops, are also valuable approaches, and some subject disciplines have been using video and oral presentations to measure understanding and create a more personalised approach to assessment. These diverse approaches to assessment are identified as good practice across the sector, being more inclusive while reducing the risk of plagiarism.

Ulster recognises that staff and students are using AI technology now and will continue to do so in both personal and professional settings. Indeed, AI will be part of many of our students’ future working lives, and new roles and job opportunities in the sector will follow.

The working group has heard from academic teams who have been exploring how AI tools can be used in their context and how they can encourage the use of AI tools within the curriculum. It seems appropriate to help develop staff and students’ digital literacy skills so that they can use the tools appropriately and responsibly. Much of this existing work sits within subject disciplines that understand the limitations of the tools, and the working group recognises that further guidance and support will be necessary before this approach is more widely adopted. The working group has recommended creating space for critical dialogue, particularly on ethical and sustainability issues, but also recognises that there will be great diversity in approaches across the organisation.

The working group is aware that many of the tools will move to paid subscription models, which will restrict how the tools can be used in an educational setting.

The working group will also consider the potential impact of AI on the day-to-day working lives of staff. Guidelines will emphasise the limitations of AI, and the ethical considerations for research, educational and administrative contexts.

...

For these reasons, it is crucial that students and staff know how to evaluate the trustworthiness of information using external, reliable sources.

Resources 

As a general starting point, the QAA briefing paper is a useful resource to support staff in tackling the challenges to academic integrity brought about by the rise of artificial intelligence tools.

...

As mentioned earlier in this article, OpenAI’s documentation for educator use of their tools offers some useful insight into the limitations and opportunities of the tools.

What’s next? 

The working group is currently:

  • Authoring changes to the academic integrity policy to ensure that students understand what constitutes appropriate use of AI tools.

  • Reviewing student declarations that can be used at the point of submission.

  • Developing AI guidance for students.

  • Developing AI guidance for staff.

  • Developing referencing guidelines.

  • Considering the potential impact of AI on the day-to-day working lives of staff.

  • Informing academic development activity around assessment design and authentic assessment.

  • Developing AI evaluation strategies to help monitor its use and impact. This will include understanding the funding models, as these tools will not be free once the evaluation phases end.

  • Considering accessibility implications of AI.