Basics of Generative AI

This guide uses elements created by the US Military Academy, used with USMA permission.

Understanding the Implications of Generative AI

AI literacy is more than knowing how to enter prompts into a Generative AI tool and receive a result. It also means knowing how to verify AI outputs and understanding the context and implications of AI use. Cadets, faculty, and other users can use this page to learn about:

  • How to evaluate AI outputs: Learn what to be wary of when reading and analyzing AI-generated work.
  • The harm that AI can cause: Learn about the harm that AI can cause in both its creation and its use.
  • Privacy and security concerns: Understand the privacy and security concerns that need to be taken into account by AI stakeholders at every level.

Evaluating Outputs

There are ethical considerations to take into account when using AI. Watch the video below for a brief introduction to hallucinations and deepfakes.

KI-Campus. (2023, September 25). Generative AI explained in 2 minutes [Video]. YouTube. https://youtu.be/rwF-X5STYks?si=BznVY5O7Xe00sm3J
"Generative AI explained in 2 minutes" by KI-Campus is licensed under CC BY-SA 4.0.

Hallucination Examples

Generative AI sometimes creates hallucinations: it produces false events, stories, sources, or general nonsense and presents them as factual. If you were to ask ChatGPT to write an essay and include sources for the information presented, those sources may not even exist! Here are a few notable real-life examples:

  • In May 2024, Google's AI Overviews generated outputs that included telling users to use non-toxic glue to keep cheese from sliding off their pizza and to eat rocks daily. These errors resulted from Generative AI being unable to distinguish satire, humor, and forum posts from well-researched articles. Just as there are websites that promote misinformation, disinformation, or malinformation, Generative AI outputs can present false information as true.
  • In February 2023, Kevin Roose, a writer for The New York Times, published "Bing's AI Chat: I Want to Be Alive," in which he describes a conversation with an early version of Copilot during which the chatbot tried to convince him to leave his wife. A copy of the transcript is included in the article.
  • In June 2023, two New York attorneys were sanctioned for filing a brief that included six hallucinated cases. The lawyers admitted to using ChatGPT to write the brief.

There is inherent bias in the data that Generative AI LLMs are trained on: the data that goes in shapes the data that comes out. Because humans create and select the data used to train the LLMs behind Generative AI, the models contain that same bias and reproduce it in their outputs.

This video, created by The London Interdisciplinary School, demonstrates the ways that AI image generators can be complicit in and perpetuate bias.

Harm that Generative AI Causes

Generative AI is not only beneficial. AI creates environmental, economic, and societal harm in the way it is created and used. It is important to understand these impacts in order to make informed decisions about when and how often to use Generative AI. This chart, created by Rebecca Sweetman, demonstrates the different ways that Generative AI can harm the environment, the economy, and the way society functions.

Privacy and Security Concerns

This diagram breaks down the areas where privacy and security concerns come into play for each level of Generative AI stakeholder.


A. Golda et al., "Privacy and Security Concerns in Generative AI: A Comprehensive Survey," IEEE Access, vol. 12, pp. 48126-48144, 2024, doi: 10.1109/ACCESS.2024.3381611. Retrieved May 6, 2024, from https://ieeexplore.ieee.org/document/10478883