As I was scrolling through Reddit a few weeks back, I saw a (misleading, click-baity, and false) post that read something like, “Kamala Harris intentionally left off of Montana ballot.” I was in a rush and didn’t have time to explore this shocking bit of news, but I suspected there was more to the story. After all, Reddit isn’t exactly an unbiased news source. Still, the post bugged me all day, and I wondered, “What’s going on in Montana?!” Later that day, I did some quick research, checked a few unbiased sources, and discovered that the story was false: Harris had been left off the overseas absentee ballot by accident, and the error was fixed as soon as it was discovered. This story is a great example of misinformation... or even disinformation, depending on the source. So how might we combat disinformation? What about AI-generated misinformation? And how can we help our students do the same as we approach the presidential election?
Information, Misinformation, & Disinformation
Ok, so what’s the difference between these three concepts? Put simply: information is true knowledge, misinformation is false information shared without the intent to deceive, and disinformation is false information shared deliberately to mislead or harm. Here’s an example of each:
- Information: An individual on Instagram posts, “Let’s get out and vote this November! This only happens every 4 years!”
- Misinformation: An individual on Instagram with a long account history and no record of spreading disinformation posts, “Let’s get out and vote this November! This only happens every 2 years!”
- Disinformation: An individual on Instagram with a short account history, but several thousand oddly written inflammatory posts, posts, “There's no point in voting this November. Your electoral college representatives have already decided on the winner for your state and actually voted back in July. Don’t waste your time.”
AI Misinformation & Disinformation In the 2024 Presidential Election Cycle
As AI applications have become easier to access and use, we’ve seen several high-profile examples of AI-created disinformation in the past few months. For instance, Elon Musk shared a fake AI voice-cloned video of Kamala Harris a few months ago. More recently, a Harris voice-clone appeared again in a fake pro-abortion AI video. These were both attempts to purposefully discredit the Vice President, so these examples are considered disinformation.
Using Media Evaluation Criteria and Technology, Including AI, To Help
How can we identify misinformation and disinformation, especially AI-produced political disinformation, as we approach the November presidential election? Here are two techniques:
- Media Evaluation Criteria: All of the traditional criteria we use to determine the trustworthiness of information still apply, even to information that may be AI-generated. There are also specific evaluation criteria for spotting AI generation in video, images, and text.
- Technology Solutions: A quick first step in spotting fake AI-generated news is to run the content through a reputable, AI-backed checker tool, which is a good place to start for images and video. For text-based information, try one of the more reliable AI detectors, like GPTZero or ZeroGPT. Although not perfect, AI checkers are getting better. Our secure generative AI tool at GSU, Copilot, is also a reasonably good checker itself. (NOTE: CETLOE does not recommend using these tools to evaluate student work. As we noted in this post, “AI detectors are unreliable and potentially biased against non-native speakers. If you have concerns regarding academic honesty, be sure to collect several kinds of evidence during your investigations.”)
Tips for Teaching & Learning
So, you’ve decided that you’d like to do an AI misinformation and disinformation activity with your students. Great! Here are a few research-backed tips for thoughtfully and carefully designing your activities and teaching in this area:
- Direct Instruction: We have a lot of evidence that most learners learn best from direct instruction. This looks like carefully planned, sequenced, and scaffolded lessons where learners get to practice with proper support and feedback. You’re probably doing this already.
- Primacy Effect: We also know that people tend to remember what they learn first, even if that information is wrong. This is one reason misinformation and disinformation can be so pernicious. As you develop a lesson, consider how you can keep students from remembering misinformation or disinformation as true.
- Labeling: One super simple way to help students with the primacy effect is to clearly and consistently label information, including any fake AI examples that you use. Although we don’t yet have a large body of evidence about labeling AI content, this helpful policy brief from MIT offers direction in this area.
- Contact a Librarian: Our GSU subject librarians are all experts in media literacy and evaluation. They can help you out with all sorts of things related to this topic.
For more ideas and information, check out the recording and resources from our recent workshop, “Combating Misinformation and Disinformation: Exploring AI Problems and Possibilities with Students During the 2024 Electoral Cycle.”