Generative AI Literacy

Is the use of GenAI Ethical?

The truth is, it depends! It depends on why, when, how, and where you're using the tool.

There is a wide variety of ethical concerns surrounding GenAI that go beyond academic and personal integrity and having permission from your professor, though those are important considerations for the use of GenAI in a college setting.

Watch this video for a short discussion of some of the ethical concerns around the use of Generative AI.


Problems viewing? Watch or read the transcript for Ethical Issues with Generative AI (3:28 mins) on YouTube

Want to learn more? Click on the items in the A to Z list below to explore some of the ethical concerns surrounding Generative AI use. You can also learn more by doing a library or web search on particular topics.

A to Z of Ethical Considerations of Using AI Models

Accessibility

Basic standards under the AODA (Accessibility for Ontarians with Disabilities Act) require web-based tools to allow for keyboard navigation and for assistive technology compatibility. A review of the accessibility of AI interfaces conducted by Langara College suggests that many AI tools are not in compliance with AODA requirements or present other accessibility barriers.

Even technologies compliant with AODA standards may still present barriers to access for some users and learners. 

Accountability

Determining responsibility for AI-driven decisions and actions can be challenging, especially in cases of error or harm.

Authority

AI models cannot legally "author" content. Researchers should cite how they use AI in their academic work.

Autonomy and human agency

As AI becomes more advanced, there are concerns about maintaining human control and decision-making power.

Bias and fairness

AI models can perpetuate or amplify existing societal biases present in their training data, leading to unfair or discriminatory outcomes.

AI output depends entirely on its inputs: the prompt it is fed, the dataset used for training, and the engineers who create and develop it. This can result in explicit and implicit bias, both unintentional and intentional.

To “train” the system, generative AI ingests enormous amounts of training data from across the internet. Using the internet as training data means generative AI can replicate the biases, stereotypes, and hate speech found on the web. In addition, as of January 2024, 52% of the information available on the internet is in English, which means this linguistic bias is built into the system through the training data.

A little less than 70% of people working in AI are male (World Economic Forum, 2024 Global Gender Gap Report), and the majority are white (Georgetown University, The US AI Workforce: Analyzing Current Supply and Growth, January 2024). As a result, there have been numerous cases of algorithmic bias in generative AI systems, which occurs when algorithms make decisions that systematically disadvantage certain groups.

While this does not mean that content generated by AI has no value, users should be aware of the possibility of bias influencing AI output.

Copyright

AI models are trained on datasets that may include copyrighted materials without explicit permission. This raises concerns about whether the use of such data constitutes copyright infringement.

Environmental impact and nuclear energy

Training large AI models and using AI technology require significant computational resources, contributing to energy consumption and carbon emissions. Amazon, Google, and Microsoft are all signing deals with nuclear energy providers to meet the surging energy demand of data centers and artificial intelligence.

AI is typically associated with virtuality and the cloud, yet these systems rely on vast physical infrastructures that span the globe and require tremendous amounts of natural resources, including energy, water, and rare earth minerals. A 2019 study found that training large language models "can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself)" (MIT Technology Review).

Inequality

Access to AI technologies and their benefits may not be evenly distributed, potentially exacerbating existing societal inequalities.

Any technology presents issues of equitable access related to cost. Even tools that are free often offer a more feature-rich or advanced version at a cost. Requiring the use of GenAI tools can create access issues in educational settings, as not all students may be able to afford access.

Job displacement

AI automation may lead to significant changes in the job market, potentially causing unemployment in certain sectors.

Labour Issues

AI still needs human intervention to function properly, but this necessary labor is often hidden. For example, ChatGPT uses the prompts entered by users to train its models. Since these prompts also help train the paid subscription version of the product, many consider this unpaid labor.

Taylor & Francis recently signed a $10 million deal giving Microsoft access to data from approximately 3,000 scholarly journals. The authors published in those journals were not consulted or compensated for the use of their articles. Some argue that using scholarly research to train generative AI will result in better AI tools, but authors have expressed concern about how their work will be used, including whether its use by AI tools will negatively impact their citation counts.

In a more extreme case, investigative journalists discovered that OpenAI paid workers in Kenya, Uganda, and India only $1–$2 per hour to review data for disturbing, graphic, and violent content. In improving its product, the company exposed its underpaid workers to psychologically scarring material. One worker referred to the work as “torture.”

Misinformation, manipulation, and deepfakes

AI can be used to create convincing fake content or manipulate information, posing risks to public discourse and trust.

Deepfakes are videos, images, or audio that appear very realistic but are fake. Using AI tools, people can create deepfakes that make it seem as though someone has done or said something they have not. This guide from MIT goes in-depth on deepfakes and how to spot them.

Privacy and data protection

Input of user-provided data may be retained and used in training, which raises concerns about individual privacy and data security.

There are ongoing privacy concerns and uncertainties about how AI systems harvest personal data from users. Some of this personal information, like phone numbers, is voluntarily given by the user. However, users may not realize that the system is also harvesting information like the user’s IP address and their activity while using the service. This is an important consideration when using AI in an educational context, as some students may not feel comfortable having their personal information tracked and saved.

Additionally, OpenAI may share aggregated personal information with third parties in order to analyze usage of ChatGPT. While this information is only shared in aggregate after being de-identified (i.e. stripped of data that could identify users), users should be aware that they no longer have control of their personal information after it is provided to a system like ChatGPT.

Safety and reliability

Ensuring AI systems behave safely and reliably, especially in critical applications like healthcare or autonomous vehicles, is crucial.

Transparency and explainability

Many AI models, especially deep learning systems, operate as "black boxes," making it difficult to understand how they arrive at decisions.

Further Reading

Attribution

A to Z of Ethical Considerations of Using AI Models is adapted from Ethics of Using AI by Tulane University Libraries, CC BY-NC 4.0.

Sources from original content - Ethics of Using AI

  • "Google Signs Nuclear Deal." prompt, Perplexity, Perplexity, 16 Oct. 2024, https://www.perplexity.ai
  • "What are the copyright concerns for using AI in academic work?" prompt, ChatGPT, OpenAI, 16 Oct. 2024, https://chatgpt.com/
  • “What are the ethical concerns associated with using Al models?" prompt. Claude, Anthropic, 16 Oct. 2024, https://claude.ai/