
Terms like “game-changing” and “disruptive” have become clichés in technology circles. As educators, we’ve seen the hype of 1:1 devices, paperless classrooms, interactive whiteboards, and a host of other education fads. However, AI feels different. We are living in a cultural moment where generative AI specifically, and machine learning more generally, seem to be changing our world at a rapid pace. While there are many positive trends for educators, one potential danger lies in how AI is changing the information landscape. This is especially true with deepfakes. In my latest article, I share four potential solutions for rethinking information literacy in an age of AI.

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.


The Rise of the Chatbot

In 2014, Microsoft launched a hugely successful A.I. bot named Xiaoice in China. Across more than forty million conversations, users often described feeling as though they were interacting with a real human. Microsoft co-founder Bill Gates described it this way: “Xiaoice has attracted 45 million followers and is quite skilled at multitasking. And I’ve heard she’s gotten good enough at sensing a user’s emotional state that she can even help with relationship breakups.”

Xiaoice has published poetry, recorded musical albums, hosted television shows, and released audiobooks for children. She’s such a superstar that it’s easy to forget she is merely a set of complex algorithms. Some have pointed to this as evidence of the ELIZA Effect and the way we humanize AI chatbots.

In 2016, Microsoft hoped to duplicate Xiaoice’s success with the introduction of Tay in the U.S.A. This would be the virtual assistant of the future. Part influencer and part helper. Part content creator and part counselor. However, within hours, Tay began posting sexist and racist rants on Twitter. She spouted off unhinged conspiracy theories and engaged in trolling behaviors.

So, what happened? As trolls and bots spammed Tay with offensive content, the A.I. began to mimic racist and sexist speech. As the bot attempted to “learn” how to interact with a community, it picked up on the mores and norms of a group that deliberately spammed it with racist and sexist content. By the time Microsoft shut Tay down, the bot had begun promoting Nazi ideology.

While this might be an extreme example, deep learning systems will always contain biases. There’s no such thing as a “neutral” A.I. because it pulls its data from the larger culture, and it will “learn” social norms and values from its vast data set. It’s easy to miss the subtler biases and the misinformation embedded within generative A.I. when it so often produces accurate content. In some sense, ChatGPT poses a different challenge than Tay because the bias is less overt but still powerful.

And yet, when people interact with A.I. bots, they are more likely to assume that the information is unbiased and objective. I’ve already noticed people saying things like, “I treat ChatGPT like a search engine” or “AI can teach you pretty much anything.” There’s a growing assumption that AI is inherently unbiased and accurate, which is a really dangerous place to be.

In the upcoming decade, A.I. will fundamentally change the larger media landscape. Our students will inhabit a space where generative A.I. can create instant content that seems inherently human. They’ll have to navigate a world of deepfakes and misinformation. But this isn’t a new phenomenon. A.I. has been changing the larger information landscape for decades. Before diving into deepfakes, let’s explore how A.I. has already transformed the way we interact with media. Ultimately, if we want to understand how generative A.I. will influence information literacy, we need to recognize how much rudimentary forms of A.I. have already shaped our current information landscape.

We cannot make sense out of the future if we don't understand the present


The Rise of Filter Bubbles

As social media platforms emerged in the early 2000s, computer scientists designed algorithms to focus on relevance. For the first time, users experienced an entirely personalized media feed tailored to their ideas, beliefs, values, and preferences. At the same time, users could create and share their own content without the need for official gatekeepers. Combined with the advent of new media technology (like podcasting and blogging), the media landscape shifted from broadcasting to narrowcasting.

With this democratization of media, it was easier than ever to create, edit, and publish one’s work to the world. Voices that had previously been excluded found a place online to share their ideas and insights. People without formal journalistic training could now set up a blog and write about topics that might not appeal to a mass audience.

With such a glut of information and no official gatekeepers, algorithms became more and more important, functioning as the new gatekeepers by prioritizing media based on relevance. For the last two decades, children have grown up with a worldview shaped as much by algorithms as by geography. This has created some phenomenal learning opportunities in a connected world. However, relevance-based algorithms have also led to filter bubbles.

This often leads to an “echo chamber,” where information gets reinforced over and over again.
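
To make that feedback loop concrete, here is a minimal sketch of a relevance-based feed ranker, written in Python. To be clear, this is a toy for illustration, not any platform’s actual algorithm; the posts and the scoring are invented. It simply scores each post by how much it overlaps with topics the user has already clicked, so a single click on partisan content pushes similar content to the top of the feed.

```python
# A toy relevance-ranked feed -- an illustration of "show people more
# of what they already clicked," not any real platform's algorithm.
from collections import Counter

posts = [
    {"id": 1, "topics": ["local-news", "schools"]},
    {"id": 2, "topics": ["politics", "partisan-take"]},
    {"id": 3, "topics": ["sports"]},
    {"id": 4, "topics": ["politics", "partisan-take"]},
]

def rank_feed(posts, engagement_history):
    """Order posts by overlap with topics the user has engaged with."""
    interest = Counter(t for p in engagement_history for t in p["topics"])
    return sorted(posts, key=lambda p: sum(interest[t] for t in p["topics"]),
                  reverse=True)

history = [posts[1]]                    # the user clicks one partisan post...
for post in rank_feed(posts, history):
    print(post["id"], post["topics"])   # ...and similar posts now rank first
```

Every new click feeds back into the engagement history, so the ranking narrows a little more each time. That self-reinforcing loop is the echo chamber.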

Filter bubbles and echo chambers are driven in part by clicks and “likes.” But these signals are often manipulated using A.I. bots. Typically, bots have used simple algorithms or scripts to boost certain ideas or to attack members of a different group or ideology. But with generative A.I., there’s a very real concern about the human-like quality of this newer type of bot. Microsoft shut down Tay when she spewed Nazi rhetoric, but what happens when people deliberately engineer bots to amplify totalitarian rhetoric or to promote racist ideology?


The Rise of the Deepfake

With generative A.I., bots can mimic the style of an actual user. This creates huge concerns for misinformation and catfishing. It’s easy to imagine how an army of generative A.I. bots could manipulate public opinion by crafting text with a tone of credibility or by engaging in conversational tactics designed to sway the larger population.

Part of how we develop our beliefs is by interacting with one another. This is especially true of our students, who are often trying to make sense of their world. With generative A.I., these bots won’t simply like or retweet a post. They’ll engage in a full conversation in a tone that can show a range of emotions – from authentic to authoritative to approachable. So, text-based bots already mimic human speech. But there’s another element as well: the rise of the deepfake in video and audio.

The term “deepfake” is a combination of “deep learning” and “fake.” It’s essentially an A.I.-generated video or audio clip that has been manipulated to make it look or sound like someone said something they never said.

On an individual level, there’s a very real concern about deepfakes leading to catfishing. In the podcast episode I did with Alec Couros, he described how deepfakes have been used in catfishing schemes that prey on teenagers, who are then falsely accused of sexting. These kids panic, and the catfishers scam them through extortion. Another example might be a grandmother who gets a deepfake phone call from her granddaughter claiming she has been kidnapped. The scammer then makes off with hundreds of thousands of dollars.

On a more social level, deepfakes pose a threat to democracy as people share doctored video and audio clips relating to current events. We have already seen examples of deepfakes from both the left and the right falsely manipulating a politician’s speech. In some cases, authoritarian leaders might use deepfakes to manipulate videos of their political rivals or of protestors.

This will only accelerate as deepfakes grow more realistic. As Couros points out, “Down the road, there will be a more powerful tool. So you’ve got voice. You’ve got style. You’ve got video. In terms of catfishing, you’ve got everything that you possibly need to fool people – whether it’s on a personal front or if it’s something political.”

So, where do we go from here? How do we help our students determine what is real in a world of disinformation and deepfakes? How do we protect them from generative A.I. catfishing schemes? How do we help them seek out new ideas and opinions when they’re surrounded by an echo chamber? How can they discover truth when there’s so much misinformation?

Traditional approaches to information literacy aren’t enough. We need new approaches.


Solution #1: Engage in Lateral Reading

For years, students have used information literacy techniques such as the CRAAP Test. However, Mike Caulfield, the Director of Blended and Networked Learning at Washington State University Vancouver, popularized the term “lateral reading” in 2017 as a different approach.

To practice lateral reading, you might start by conducting research on the author or publisher of the original source to determine their credibility. This could include checking their credentials, affiliations, and track record to ensure that they are a trustworthy source of information. Additionally, you could investigate the publication or website where the information was first shared to determine its reliability.

Next, seek out alternative viewpoints and additional sources of information to help you evaluate the original source. This might involve reading articles from different news outlets or using fact-checking websites to determine if the information presented in the original source is accurate and well-supported.

Only at this point do you render a judgment about credibility. This includes considering the author’s motives, the reliability of the information presented, and the broader social and political context in which the information is being shared. By engaging in lateral reading, you can develop a more nuanced understanding of complex issues and avoid being misled by misinformation.

Caulfield has developed what he calls the SIFT Method as an alternative to the CRAAP Test. It’s an acronym that stands for Stop, Investigate the source, Find better coverage, and Trace claims. Here’s how it works:

  1. Stop: Before engaging with the information, take a moment to assess your emotional response and consider the motivations of the person or organization that shared the information.
  2. Investigate the Source: Evaluate the credibility of the source of the information, including the author, publisher, and website. Look for signs of bias, conflicts of interest, and expertise in the topic.
  3. Find Trusted Coverage: Verify the accuracy of the information by finding multiple sources that corroborate the claims. Look for reliable news sources and fact-checking websites.
  4. Trace Claims, Quotes, and Media Back to the Original Context: Follow the information back to its original source, including any quotes or media used to support the claims. Look for any misrepresentations or distortions of the information.

By following the SIFT method, readers can develop a critical understanding of the information they encounter online and make informed decisions about its credibility and accuracy. This tends to work well in formal research projects. However, students won’t always be accessing information formally.

Lateral reading might not be enough in a world of A.I. Information literacy expert Jennifer LaGarde has pointed out that lateral reading doesn’t take into consideration the emotional aspects of reading. We often buy into fake information and deepfakes because of the way this content makes us feel.

Lateral reading also assumes students will engage in media literacy on laptops while doing online research. That doesn’t capture the way students actually consume media. They scroll through social media on a smartphone, quickly reading articles, looking at memes, watching videos, and engaging in rapid-fire conversations. As LaGarde points out, most of our students view content on mobile devices in a fast, informal, media-consumption mode. They’re not using acronyms or checklists. For this reason, we need to treat information literacy as a habit and a mindset rather than merely a process.


Solution #2: Teach Students to Be Digital Detectives

The future of information literacy needs to be more than just a set of skills. Students will need to adopt it as a mindset and continue it as a habit. Jennifer LaGarde and Darren Hudgins use the metaphor of a digital detective to describe this mindset.

  1. The Triggers Lens: Digital Detectives use this lens to make sense out of the ways information elicits an emotional response.
  2. The Access Lens: Digital Detectives use this lens to see how platforms and devices shape what counts as a credible source. For example, a news story looks different on a mobile device than in a desktop browser. This lens is also a chance to think about how community shapes the interpretation of information: the way people interact with a news story on Twitter will differ from the way they do on Facebook.
  3. The Forensics Lens: This is the lens that we tend to think of as information literacy. Here is where students investigate the information from a place of curiosity.
  4. The Motives Lens: This is where students think about the motives of those who are creating the information. Like any great detective, they consider why people might manipulate information.

LaGarde and Hudgins argue that the solution goes beyond simply developing a set of information literacy skills. Instead, students need to develop these mindsets in conjunction with the broader SEL Competencies of Self-Awareness, Self-Management, Social Awareness, Relationship Skills, and Responsible Decision-Making.

The Digital Detectives approach recognizes the distinction between informal information literacy and formal information literacy. We tend to read informally and formally, perusing articles for fun and doing close reading when it’s highly academic. We tend to write in formal and informal registers. A text message contains poop emojis. An essay contains complex sentences and citations.

Similarly, students need to use different information literacy approaches based on the type of device they’re using and the purpose of their information consumption. If we treat information literacy only as a skill for doing academic research (think lateral reading), our students will fail to develop the informal information literacy they need for things like Instagram posts in an information landscape dominated by A.I.


Solution #3: Seek Out Librarians

The media landscape has been changing for the last three decades and it will continue to transform in ways that we can’t even predict. The newest forms of A.I. present huge challenges with deepfakes, catfishing, and misinformation. We cannot rely entirely on older tools like the CRAAP test, and we cannot assume that students will use an academic approach similar to lateral reading every time they consume content. Information literacy will not only need to be a skill. It will need to be a habit and a mindset of critical thinking and adaptability. The approaches we use will change as the landscape continues to change.

Librarians are more important than ever. We cannot lean on a single model or process for information and media literacy. Our students will need to learn these skills in a dynamic and human way. Schools need to tap into the expertise of librarians to help students learn this newer type of information literacy and develop it as a mindset rather than just a skill.

As we shift into the future, librarians can work across the curriculum to teach students how to engage in prompt engineering that leads not only to better questions but also to better analysis of the results they see. I created the FACTS Cycle as a prompt engineering process that students can use as they interact with chatbots.

The goal is to get students to slow down as they interact with chatbots. We want them to be more intentional in how they craft their prompts and more critical as they look for potential bias and misinformation. Librarians can also recommend tools that have been trained on better data. For example, Consensus tends to skew toward peer-reviewed journals rather than engaging in A.I. “hallucinations” and simply making up information along the way.
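
To give a flavor of what slowing down might look like in practice, here is a minimal sketch of a two-step prompting routine in Python. This is an invented illustration, not the FACTS Cycle itself, and it assumes the official OpenAI Python client with an API key in the environment: first ask for an answer along with its sources, then ask the model to flag the claims most worth checking, which hands students concrete starting points for lateral reading.

```python
# A sketch of an intentional "prompt, then interrogate" routine.
# Illustrative only (not the FACTS Cycle). Assumes the official
# OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder choice; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a deliberate prompt that asks for sources up front.
answer = ask(
    "Explain how filter bubbles shape what teenagers see online. "
    "List the sources or studies your answer draws on."
)

# Step 2: have the model critique its own answer, so students leave
# with specific claims to verify through lateral reading.
critique = ask(
    "Here is an AI-generated answer:\n\n" + answer +
    "\n\nWhich claims are most likely to be wrong or unverifiable, and why?"
)

print(answer, critique, sep="\n" + "-" * 40 + "\n")
```

The code itself matters less than the habit it models: never accept the first response as settled fact, and always leave the exchange with a short list of claims to verify in outside sources.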


Solution #4: Look Back to Go Forward

I recently read about a future era of deepfakes and misinformation. With the democratization of media, people take on fake names, hiding in anonymity to trick others into joining their side. They put words into politicians’ mouths. With the democratization of the publishing process, polarization flourishes on the fringes, and flaming, angry words fill the comment sections of newspapers.

But here’s the thing: that era wasn’t in the future. It was Revolutionary America. While reading a biography of Samuel Adams, I went down a rabbit trail of historical monographs, dissertations, and journal articles.

And my key takeaway? We’ve been here before.

Newspapers of that era regularly published fabricated quotes and letters from politicians. They exaggerated eyewitness accounts. Authors took false names and hid behind anonymity. Many newspapers had large margins where people would write comments that often shifted into flame wars. Others clipped pieces of articles and pasted them into commonplace books that were passed around. On a more individual level, it was a time of catfishing schemes conducted through letters.

One of the best solutions to this challenging environment was to engage in open dialogue about the media people consumed. Often, that meant discussing multiple, sharply contrasting sources in conversation at a pub. In a classroom, this might look less like a pub and more like a Socratic Seminar.

But they also took on the role of being citizen journalists. I’ve written before about why journalism might be the class of the future. When students engage in journalism, they learn to take on a journalistic mindset that’s similar to what LaGarde and Hudgins describe as “digital detectives.”

But students also learn to see how media is constructed. Think about it this way: you learn soccer best by playing soccer. You learn to appreciate art when you create art. When students engage in journalism, they learn how information literacy works by actually creating information in a more ethical way.

The key idea here is that if we want to make sense out of the future of information literacy, it might help to look backward as well. Because even though the technology is entirely new, there’s another sense in which we’ve been here before.


Final Thoughts

In the end, we can’t predict how the media landscape will change. Our students will need to be adaptable in a world of misinformation. These challenges are big. These challenges are real. And they aren’t going away. But as educators, we can equip students to be critical consumers of information who can engage with this changing landscape.


Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! My hope is that you find this book practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.



John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
