Artificial Intelligence, or AI, is the science of making machines that can think like humans. It can be defined as "the use or study of computer systems or machines that have some qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognise or create images, solve problems, and learn from data supplied to them" (Cambridge Dictionary).
AI is a tool that can fulfil various functions. In an academic setting, the most common of these include text generation, text analysis, grammar checking, and image generation. This guide will go into more detail on some of the types of AI you may encounter.
There may be times when you need help understanding a topic, or perhaps want more guidance on how to structure an assignment. Here are some examples of cases where it is ethical to use AI.
Limited knowledge - AI is constantly improving and updating, but it does come with limitations. Examples of these limitations include:
Bias - AI is heavily dependent on the data it is fed to generate results; models are essentially trained on existing text, images and other material that appears online. If that material contains bias, such as sexist, racist, homophobic, xenophobic or politically slanted content, the bias may be reproduced in the final results.
Fake responses and hallucinations - A 'hallucination' in AI terms is "a plausible but false or misleading response generated by an artificial intelligence algorithm" (Merriam-Webster).
This means that even though the information or text generated may sound plausible, it can be misleading or simply wrong.
One of the main ways we see this at the moment is the creation of false references. LLMs such as ChatGPT, when asked to write academic work, often simply invent citations and references that don't exist in the real world. These will be picked up by tutors and by Turnitin.
*Information courtesy of University of Huddersfield