This guide has been informed by the work of the Artificial Intelligence (AI) in Teaching and Learning Working Group. It is recognised that AI is rapidly evolving, and this guide should therefore be treated as a living document. This is version 1.1, created in August 2023.

There is a significant amount of media coverage, interest, and experimentation around generative AI. These are tools that can be prompted in conversational ways to create new content, including text, images, audio, video and computer code. As the tools develop, they are becoming integrated into existing business and personal applications such as web browsers, the Microsoft Office suite and Google Docs. New plugin architectures are also evolving, allowing other businesses to integrate their services within the tools.

Tools such as ChatGPT, Google Bard, DALL-E 2 and Copilot can be helpful for generating content. This has obvious implications for assessment, and some institutions have banned their use. Here at Ulster, we believe that these tools will be part of our personal and professional lives, and we wish to explore their use with students in ethical, transparent and reasonable ways. Our position is to:

  • Encourage a University culture that upholds the value of integrity.

  • Reinforce the expectation that work submitted for assessment is students' own original work.

  • Remain open to the benefits of the use of AI whilst highlighting the dangers of relying on the outputs as accurate sources of information.

  • Develop guidance about how to accurately acknowledge the reasonable use of AI in student work.

  • Encourage critical dialogue when AI tools are used within the curriculum.

As an Ulster student, you are expected to comply with the University Regulations, which include appropriate academic conduct. The Academic Misconduct Policy has been updated to explicitly reference the use of AI, and the student declaration for coursework submission states that:

...

Some of the current limitations of Large Language Model (LLM) AI tools include:

  • With text generation, the tools do not understand the meaning of the words they produce.

  • The tools will often generate arguments that are wrong.

  • The tools will often generate false references and quotations.

  • Content generated is not checked for accuracy.

  • The tools can distort the truth and overstate the strength of an opposing argument.

  • The tools do not perform well on subjects that do not have a lot of public online discourse.

  • The content generated is based on an historical data set which is fixed in time.

  • Generated content can include harmful bias and reinforce stereotypes. These biases can be reinforced through further human interaction with the model.

  • The models are trained on data sets written largely from a Western, English-speaking perspective, again reinforcing particular viewpoints.

  • There are copyright concerns in creative disciplines where existing creative works are used to generate new work without the permission of the original makers.

Over-reliance on these tools will limit the development of your writing and evaluation skills, which you will need in your future career. You should therefore approach these tools with a critical lens, understanding the limitations and biases in their output.

...