...

ChatGPT was the first experience that many of us had with conversational interfaces to Large Language Models (LLMs) – models trained on large datasets of text drawn from a wide range of sources. These conversational interfaces allowed us to ask the system to complete tasks iteratively, refining the output that was produced based on our prompts. The system could also respond to requests to generate computer code, perform common tasks and workflows, or create images and video.

These tools have evolved over several years within research laboratories, and many of our academic teams at Ulster are actively involved in AI research and teaching. In the research world, some dispute whether an LLM is really Artificial Intelligence, while others navigate the debate from a centrist perspective (Pallaghy 2022). Despite these different philosophical positions, the simplified interface and reasoned responses provided by ChatGPT seemed like a huge leap in our expectations of these tools. This has resulted in much greater visibility of the opportunities and challenges of these technologies across subject disciplines, including creative disciplines where AI can produce images and video from natural language prompts.

...

Some organisations have tried to ban the use of AI tools on networks within educational settings, again a solution that the working group did not think was effective or beneficial. Many of the currently available AI tools will evolve and become more integrated into existing software such as the Microsoft Office suite and search engines. Solutions that ban the use of AI technology do not sit comfortably with Ulster’s approach to Learning & Teaching and will not be part of our recommendations.

...

Ulster recognises that staff and students are using AI technology now and will continue to do so in both personal and professional settings. The working group has heard from academic teams who have been exploring how AI tools can be used in their context and how they encourage the use of AI tools within the curriculum. It seems appropriate to help develop staff and students’ digital literacy skills so they can use the tools appropriately and responsibly. Much of this existing work is within subject disciplines that understand the limitations of the tools, and the working group recognises that further guidance and support will be necessary before this approach is more widely adopted. The working group has recommended creating space for critical dialogue, particularly on ethical and sustainability issues, but also recognises that there will be great diversity in approaches across the organisation.

The working group is aware that many of the tools will move to paid subscription models, which will restrict how the tools can be used in an educational setting.

The working group will also consider the potential impact of AI on the day-to-day working lives of staff. Guidelines will emphasise the limitations of AI, and the ethical considerations for research, educational and administrative contexts.

...

  • Authoring changes to the academic integrity policy to ensure that students understand what constitutes appropriate use of AI tools.

  • Reviewing student declarations that can be used at the point of submission.

  • Developing AI guidance for students.

  • Developing AI guidance for staff.

  • Developing referencing guidelines.

  • Considering the potential impact of AI on the day-to-day working lives of staff.

  • Informing academic development activity around assessment design and authentic assessment.

  • Developing AI evaluation strategies to help monitor its use and impact. This will include understanding the funding models, as these tools will not be free once the evaluation phases end.

  • Considering accessibility implications of AI.