Fabrication, as defined by Georgian College's (n.d.) academic integrity regulations, is:
"falsification or invention of any information or citation in an academic work or program and employment documentation" (section 8.2.2).
Examples of fabrication include:
Generative AI tools (ChatGPT, Bard, and others) have been known to fabricate sources and information, which is sometimes referred to as a hallucination (MIT Sloan Teaching & Learning Technologies, 2024). Always ensure you have permission to use these tools, and check the generated output carefully to avoid introducing fabricated information or evidence into your work.
Watch the following video, created by students, to get a better understanding of Fabrication and Breaches of Academic Integrity.
Video source: Liam P. IUPUI. (2017, August 1). Academic integrity fabrication [Video]. YouTube. https://youtu.be/Cfg3IrzEN50
Avoid putting yourself in a situation where you may feel the need to fabricate information, data, research sources, or documentation:
Penalties for Breaches of Academic Integrity range in severity, depending on the situation and the number of prior breaches.
The initial penalties for breaches of academic integrity at Georgian College may include:
Depending on the breach, your professor may also ask you to complete additional academic integrity training, attend a workshop, or seek individual help to ensure that you can learn from the mistake and go on to be successful in your future assignments.
Breaches of Academic Integrity are recorded by the Registrar's office.
After one or more offences, penalties increase in severity and may also include:
Review the Georgian College Academic Integrity Regulations for full details about academic integrity and breaches, including definitions, policies and procedures, and consequences.
Except where otherwise noted, "Fabrication" by Georgian College Libraries and Academic Success is licensed under a CC BY-NC-SA 4.0 licence.
Georgian College. (n.d.). 8. Academic integrity. Retrieved August 9, 2024, from https://cat.georgiancollege.ca/academic-regulations/integrity/
MIT Sloan Teaching & Learning Technologies. (2024, May 7). When AI gets it wrong: Addressing AI hallucinations and bias. https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/