Using AI critically

In higher education, AI technology is becoming ubiquitous, embedded in both online platforms and everyday software. As we begin to integrate AI tools into teaching practice, it's crucial to remember that many generative AI models are still in their experimental stages. This calls for a thoughtful and analytical approach to using AI tools, ensuring we critically evaluate their capabilities and the content they produce.

The Ulster University framework for using AI in teaching and assessment aims to:

“Promote the equitable, ethical and inclusive use of AI for education to benefit staff and students”

“Encourage a balanced approach to AI adoption that uses AI as complementary tools while recognising their benefits and limitations.”

This post promotes the adoption of a critical lens to encourage a more ethical and balanced approach to AI use.

How to use AI critically

AI is an evolving technology with inherent potential for bias and errors. These programs are trained on large datasets, using neural networks to ‘learn’ and ‘recognise’ patterns. Once trained, they apply this knowledge to identify and respond to similar patterns in new data. The effectiveness of an AI tool is directly linked to the quality of its inputs, including the design of the neural network, the source data, and the training process.

Figure: Simplified view of the development of an AI model, showing inputs and outputs.
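
To make this train-then-apply cycle concrete, here is a minimal, hypothetical sketch in Python (using only NumPy): a single artificial neuron is fitted to a handful of invented labelled examples, then applied to unseen data. Real models have billions of parameters and training examples, but the principle is the same, and so is the caveat: the learned ‘knowledge’ can only be as good as the data it came from.

```python
import numpy as np

# Invented toy training data: each row is [hours_studied, hours_slept];
# the label is 1 if the student passed, 0 if they failed.
X = np.array([[8.0, 7.0], [6.0, 8.0], [2.0, 4.0], [1.0, 5.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # the weights: the 'knowledge' the model learns
b = 0.0                  # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly nudge the weights towards the patterns
# present in the labelled data (gradient descent).
for _ in range(2000):
    predictions = sigmoid(X @ w + b)
    error = predictions - y
    w -= 0.1 * (X.T @ error) / len(y)
    b -= 0.1 * error.mean()

# Inference: apply the learned pattern to new, unseen data.
new_student = np.array([7.0, 6.0])
print(f"Probability of passing: {sigmoid(new_student @ w + b):.2f}")
```

If the training rows were skewed, for example if they described only one group of students, the learned weights would quietly encode that skew, which is exactly the concern the questions below address.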

To assess the quality of a tool, we need to ask questions about the model and its output:

Is it accurate and representative?

Many large language models (LLMs), such as ChatGPT, are trained on a limited number of datasets, including user-generated sources such as Wikipedia (Minaee et al., 2024). This training data may not be up to date, representative, or appropriate for all cultures or nationalities.

AI identifies and replicates patterns in data rather than ‘understanding’ and ‘responding’ to prompts. This can lead to hallucinations, where the technology spots a pattern and tries to replicate it without understanding the subject or context. For instance, this is what the prompt ‘a woman playing a violin in the orchestra’ produced:

Figure: Image generated using ImagineArt from the prompt ‘a woman playing a violin in the orchestra’ (Imagine AI, 2024).

Therefore, always review generated text for inaccurate arguments and fictitious quotes. Treat AI content as a single source: always compare its claims against other scholarly or reputable sources.
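
As a loose illustration of why this happens, the hypothetical Python sketch below builds the crudest possible statistical language model: it records which word follows which in a tiny invented text, then generates new text purely by replaying those patterns. Production LLMs are vastly more sophisticated, but they too generate by continuing statistical patterns rather than by consulting facts.

```python
import random
from collections import defaultdict

# A tiny invented 'training corpus'.
corpus = ("the violinist played in the orchestra and the orchestra "
          "played in the hall and the hall was full of music").split()

# 'Training': record which words follow each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# 'Generation': repeatedly emit a statistically plausible next word.
random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    options = follows.get(word)
    if not options:          # dead end: no observed continuation
        break
    word = random.choice(options)
    output.append(word)

# Each step is locally plausible, but the model has no idea what a
# violinist, an orchestra or a hall actually is.
print(" ".join(output))
```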


Is it biased?

According to the US Equal Employment Opportunity Commission, there is a lack of gender, racial and ethnic diversity in the high-tech industries of Silicon Valley (USEEOC, 2024), home of OpenAI, the creator of ChatGPT and DALL-E. The commission found that women make up 30% of the workforce at the leading 75 Silicon Valley tech firms. Within these companies, only 1.6% of executives were Hispanic, despite Hispanic people making up 39.8% of the state population, and less than 1% were African American, compared with 5.29% of the state population (Data USA, 2025; USEEOC, 2024). Always assess generated content for bias: the lack of diversity in the development of AI tools may be reflected in their output.


Is it transparent?

Is there information on how the AI system was designed and trained? Transparency is a crucial attribute, as it helps us understand how the AI model makes decisions and allows us to assess the accuracy and trustworthiness of its outputs. This issue is particularly important in industries such as law and healthcare.


Is it fair?

Some authors believe that AI reinforces inequality, benefitting the already advantaged in a number of ways (e.g. Rotman, 2022; Selwyn, 2022).

AI models have been found to exhibit gender and racial biases. When these models are used in high-stakes processes like hiring or admissions, they can perpetuate harmful stereotypes and disadvantage specific demographic groups (Marinucci et al, 2023).

Many AI tools operate a tiered system, offering a free service with reduced capabilities and a paid subscription with more sophisticated functionality; this could advantage students from wealthier backgrounds.

When you are using AI, consider how the program was trained: on vast quantities of annotated data.

Much of this data is labelled by people employed by third-party companies (Wang et al., 2022). This manual labour (the repetitive, tedious labelling of images, video and text) is largely hidden, performed in developing countries by low-paid workers (Muldoon et al., 2024). However, there are also data annotation companies, like Isahit, that are concerned with social impact and aim to empower their data-labelling workers (Kaye, 2019; Milne, 2024).
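
To see what ‘annotated data’ means in practice, here is a hypothetical sketch of the kind of record a human labeller produces: a raw item paired with the judgements a person attached to it. Every field and value below is invented for illustration.

```python
# A hypothetical annotation record of the kind produced by a human
# labeller; every value here is invented.
annotation = {
    "image_file": "orchestra_0412.jpg",
    "objects": ["violin", "woman", "music stand"],
    "caption": "A woman playing a violin in an orchestra",
    "contains_people": True,
    "annotator_id": "worker_7731",
    "seconds_spent": 41,
}

# A model only ever sees the labels, never the labour behind them:
# a dataset of a million images represents a million such judgements.
print(f"{len(annotation['objects'])} objects labelled by "
      f"{annotation['annotator_id']} in {annotation['seconds_spent']}s")
```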

Also consider that the training data may be derived from a pirated source. An investigation by The Atlantic found that Meta used Library Genesis, an online pirated library that contains millions of books and research articles, to train its AI models (Reisner, 2025).

Is it sustainable?

According to the United Nations Environment Programme, AI has a large environmental impact. The data centres that facilitate AI require the mining of minerals and metals, are large consumers of electricity and water, and produce damaging electronic waste (UNEP, 2024). For instance, it is estimated that by 2027 the water withdrawn to cool the data centres that support AI could be equivalent to half of the UK’s annual water use (Li et al., 2023).


Is it secure?

Although the EU AI Act governs the use and development of AI in the European Union, many countries have yet to adopt such regulations. Therefore, check how an AI tool will use any data you input. Check the terms and conditions: does the tool comply with GDPR? If you are using a free tool, it may use your data to train the AI model, so only use paid or local versions of these tools to process any proprietary data. Companies like Microsoft have responsible AI initiatives that clearly state how they use your data; this is why you can use the university’s Microsoft Teams application for the transcription of interviews.

Do not input personal details or copyrighted, sensitive or confidential data into online AI tools such as ChatGPT. If it is a public platform, the information you input can be shared publicly. One simple precaution, sketched after the list below, is to strip obvious identifiers from text before it reaches an online tool.

  • Never input your own personal information.

  • Never use cloud-based AI to process sensitive research data, e.g. online transcription tools to transcribe interviews.
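
As a minimal sketch of the precaution mentioned above, the hypothetical Python snippet below strips two obvious identifiers (email addresses and phone numbers) from text before it would be pasted into an online tool. The patterns are illustrative only; a real redaction pipeline needs far broader coverage.

```python
import re

# Illustrative patterns for two common identifiers; real redaction
# would need to cover names, addresses, ID numbers and more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text leaves your machine."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

raw = "Contact Jane at jane.doe@example.com or +44 28 9036 6666."
print(redact(raw))
# -> Contact Jane at [EMAIL] or [PHONE].
# Note that the name still leaks: automated redaction is a safety net,
# not a guarantee.
```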


AI is integrated into a wide range of platforms and everyday tools such as Google and MS Word. Therefore, it is crucial that we learn to use it with a critical eye, questioning both the technology and its output in order to leverage its benefits and avoid potential pitfalls.


Further reading

Marcus on AI - American psychologist Gary Marcus writing about AI on Substack

One Useful Thing - American professor of management Ethan Mollick writing about AI on Substack

IBM AI explainers - A range of articles explaining different aspects of AI technology

Marinucci, L., Mazzuca, C., & Gangemi, A. (2023). Exposing implicit biases and stereotypes in human and artificial intelligence: State of the art and challenges with a focus on gender. AI & Society, 38, 747–761. https://doi.org/10.1007/s00146-022-01474-3

Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57, 620–631. https://doi.org/10.1111/ejed.12532

Gillani, N., Eynon, R., Chiabaut, C., & Finkel, K. (2023). Unpacking the “Black Box” of AI in Education. Educational Technology & Society, 26(1), 99–111. https://www.jstor.org/stable/48707970


References


Centre for Digital Learning Enhancement
ulster.ac.uk/learningenhancement/cdle