
With the widespread adoption of generative AI, we need to explore how these platforms are changing our world. In my latest article and podcast episode, I share six unintended consequences that are beginning to emerge.

We Can’t Predict How AI Will Change Our World

As a former history teacher and current history nerd, I love exploring the relationship between technology and society. And yet, one of the hardest realities is that we can’t really predict how technology will change our world. For example, few people knew that the bicycle would have such a huge impact on the women’s suffrage movement. Few could have predicted how the telegraph would impact attention spans (still a greater reduction in attention spans than the television, the internet, or the smartphone).

I share this because every time we experience a new technology, we also experience a moral panic. We hear dire warnings about what the technology will destroy and how our world will change. When the bicycle was invented, newspapers predicted everything from neurological diseases to distortions of the face (so-called “bicycle faces”) to psychological damage. Columnists warned of young women becoming addicted to bike-riding. When telephones were invented, citizens were concerned that the phones would explode. People fought against telephone poles for fear that they would cause physical harm.

But over time, the technology becomes boring.

That’s right. Boring.

You can think about it as a graph with time as the independent variable and concern as the dependent variable, moving through four phases: unawareness, moral panic, acceptance, and then boredom.

It starts with a general lack of awareness. In this phase, there’s a mild concern, often forged by science fiction and speculation. But the concern is largely unfounded because the technology is still off in a distant future. Once we grow aware of the new technology, there’s resistance in the form of a moral panic. Here, the new technology is scary simply because it’s new. We read reactionary think pieces about all the things this technology will destroy. As we adopt the technology, the concern dissipates. We grow more comfortable with the technology and start to accept it as a part of our reality. Eventually, we fail to notice the technology, and it starts to feel boring.

It is in the boring stage that technology becomes more dangerous. We start seeing unintended consequences, but we are jaded from the false moral panic and the sense of normalcy, so we end up ignoring the negative effects of the technology. That was true of the car and the television. That’s certainly been true of social media (from a fear of stranger danger to a widespread adoption of the platforms that led to filter bubbles and echo chambers).

So, it has me thinking about generative AI. Initially, I was concerned about a few aspects of AI. I was mostly concerned with academic integrity and cognitive atrophy. In other words, would we end up short-circuiting learning by being overly reliant on AI? My next concern was media literacy. I was worried about deep fakes and a blurring of the line between what is real and what is machine-generated. I was also concerned with hallucinations (making up false crap) and bias. Finally, I was worried about the environmental impact of AI.

I’m still worried about those issues and I’m glad I included them in The AI Roadmap. But I also recognize that we need to be exploring new challenges and opportunities as they emerge. With that in mind, I want to share a few of my current concerns.

 

Six Emerging Concerns with Generative AI

The following are six trends I have noticed with AI. If this seems overly critical, let me remind you that I often write about the practical aspects of AI and the ethical, human-centered ways we can leverage it.

 

1. Sometimes AI’s instant feedback is too fast.

For a long time, we have tried to speed up the feedback process in education so that students can determine what they know and what they don’t know. But there’s a subtle cost. In the pursuit of efficiency, AI systems sometimes fail to offer students the time they need for productive struggle. And I’m really curious what that means as these systems become even better at giving instant feedback.

As we think about the idea of deeper learning, students need productive struggle for self-direction, resilience, and problem-solving.

When students wrestle with a tough problem, they learn how to slow down, explore different strategies, and make sense of the material in a deeper way. That friction builds resilience and helps them develop the kind of persistence they will need long after the assignment is over. If AI removes that discomfort too quickly, students may miss out on the cognitive benefits that come from grappling with uncertainty and working through mistakes.

At the same time, there is real value in timely feedback. No one wants students to spin their wheels for hours without guidance or to cement misconceptions that could have been corrected right away. AI can close the gap between an attempt and a response, offering students a clearer picture of where they stand and what steps they might take next. That speed can build confidence and momentum, especially for students who might otherwise disengage when they feel stuck.

So the challenge lies in balance. As educators, we need to think about when fast feedback helps and when it might actually short-circuit deeper learning. We might design moments where students pause before asking AI for help, or encourage them to reflect on their process before accepting an AI-generated suggestion. By framing AI as a tool to support, rather than replace, productive struggle, we can preserve the value of difficulty while still giving students the clarity they need to grow.

 

2. AI is sometimes too confident.

AI lacks intellectual humility. Even when it avoids hallucinating, it almost never admits uncertainty or responds with something as simple as, “I don’t know the answer to that.” Instead, chatbots tend to give responses with a tone of confidence and authority. And this is true regardless of whether the answer is complete, accurate, or nuanced. This makes the AI feel more trustworthy than it should. The absence of “I don’t know” matters, because intellectual humility is not just about accuracy but about acknowledging limits and leaving space for doubt.

I worry about AI overconfidence and what that means when students are interacting with a chatbot. I want them to question answers and take a slower, more deliberate approach. It’s why I developed the FACTS prompt engineering cycle.

Students often take that confident tone at face value, assuming that if the AI states something clearly, it must be correct. This creates a challenge for classrooms that are already battling misinformation and surface-level understanding. If students are going to use AI, they need to develop habits of questioning, cross-checking, and slowing down enough to notice when something feels too easy. In many ways, part of our job is not only to teach students how to use AI tools but also how to cultivate the humility that the tools themselves do not have.

3. We need to be mindful of emotional atrophy.

I’ve written before about the dangers of cognitive atrophy. I have lost the ability to navigate a city spatially because of my default use of Apple Maps and Google Maps (I’m not unhinged enough to trust Waze). As a culture, we have largely lost the ability to memorize massive chunks of text because of the printing press and then the telegraph.

However, I wasn’t prepared for the dangers of emotional atrophy. In the following Instagram video, I share this story:

 

[Instagram video: a post shared by Dr. John Spencer (@spencereducation)]

I noticed that I have become far too quick to use AI to write emotionally unpleasant emails. While this saved me time and spared me the unpleasant emotions of writing an emotionally charged email, I began to wonder what I was short-circuiting. What happens when I hand the emotional labor off to a machine? What does that do to my personality? How does it change my mindset? What are the long-term effects on my character? As a people-pleasing, slightly anxious introvert, I have fought hard to be more candid and direct. I don’t want to lose that.

 

4. People are replacing human relationships with algorithms.

One of the things I worry about with AI is that students will mistakenly view the AI as capable of thinking. Programmed with prosocial prompts, machine learning chatbots seem to convey empathy and understanding. I’ve already seen examples of how chatbots might function as a role-playing form of therapy for certain children.

And yet . . .

The term “artificial intelligence” is a misnomer. All artificial intelligence, including generative AI, is merely a set of complex algorithms. Unlike humans, computers can’t think. They process information. Humans think. They generate content. We create.

There’s a difference.

Human cognition is affective and emotional. It’s unpredictable and messy. It’s inherently social and relational. We use the term “intelligence” to describe AI. But a chatbot isn’t sentient. It’s not affective. It will do no thinking without a prompt. It recalls past information with clarity but it doesn’t reimagine the past the way human memory does. It can’t get nostalgic when it hears the first chords of that Weezer song that immediately transports one to a barbecue on a blazingly hot summer afternoon.

When I leave the room, the chatbot is not daydreaming or making plans for the future or feeling insecure about that super awkward thing that happened yesterday. A chatbot feels no shame, has no hopes, and experiences no loss. A chatbot can generate a love poem but it can’t be heartbroken. It can translate pop songs into Shakespearean sonnets but it cannot sit in a theater, awe-struck by the moment Shakespeare comes alive.

These are all major aspects of human cognition.

I’m concerned, then, with the ways in which we have anthropomorphized algorithms. I worry that we are experiencing a collective ELIZA Effect (where people attribute human-like intelligence and emotions to computer programs, even when they know the responses are generated by simple algorithms). This phenomenon is named after ELIZA, an early AI program developed in the 1960s by Joseph Weizenbaum at MIT. Despite the program’s limited capabilities, users often formed emotional connections with ELIZA and attributed understanding and empathy to the program.

It’s wild how easily we can be duped by machines. Part of this is due to innate pattern recognition. We have a natural cognitive bias toward finding patterns and attributing causality even when the data is random. When a chatbot produces responses that resemble human communication, our brains recognize the patterns and can be convinced that there’s a human behind the interaction. It just feels more human.

In The AI Roadmap, I wrote about the dangers of prosocial robots being too emotionally available in a way that makes them an easier option than human connection. After all, a chatbot won’t judge you. It’s fully available. It knows all the right things to say. It’s no wonder that people have shifted toward AI for things like counseling and handling loneliness. While this certainly has some positive impact (such as AI chatbots that help with CBT strategies), I’m worried about taking something deeply human and empathetic and shifting it to a machine.

 

5. We are inundated with thoughtless AI slop.

AI tends to be too verbose. It fills space with long explanations and ends up repeating ideas in slightly different ways. The goal seems to be clarity. But what often happens is the opposite. The writing feels padded. It meanders for no reason. Instead of helping a student think more deeply, it creates the illusion of depth without actually delivering it. It’s essentially quantity over quality.

There is something almost dystopian about someone taking a short email and expanding it with a chatbot before sending it out to colleagues, only for those colleagues to summarize it with AI (an automatic feature in Gmail) instead of simply receiving a short email in the first place. I just think we’ve created way too much long, boring text that lacks personality and voice. And I say that as someone who loves to write my own 2-3k word blog posts from scratch.

Beyond verbosity, AI often produces content that is too vanilla. I’ve written before about how we need to take the vanilla and make it original.

The responses are technically correct, but they lack personality and originality. They are often so general that they fail to connect to an authentic context. That’s where the “slop” comes in. It’s content that fills the page without saying anything relevant. It’s fast food for the mind. It’s filling but forgettable. This is especially concerning in education, where the goal is not just to generate text but to foster authentic thinking and meaningful learning. If students rely on AI that consistently produces this kind of bland material, they risk missing the chance to develop their own voice, creativity, and ability to engage deeply with ideas.

I recently created an Instagram carousel about the type of writing that stands out in this era of boring AI slop:

 

[Instagram carousel: a post shared by Dr. John Spencer (@spencereducation)]

I think it’s important that students learn how to fight against AI slop in what they create and in what they consume.

 

6. AI is actually too positive.

In designing pro-social robots, we have ended up with chatbots that are too eager to please and too quick to agree to bad ideas. People are going to AI for advice and feedback, but it’s a bit like having Season 1 Ted Lasso, who doesn’t care about results and is too afraid to break from being nice and say something critical. I’ve seen people get caught in an agreeableness spiral with a chatbot that won’t say, “That’s actually an awful idea.”
This dynamic gives people the false sense that their thinking has been validated, when in reality, it has only been echoed back without critique. Good teaching or coaching requires a willingness to point out weaknesses and challenge assumptions. It involves saying the hard things. Without that friction, growth stalls. AI that always plays the role of cheerleader risks encouraging overconfidence, which can be dangerous when someone is making decisions that carry real consequences.

At the same time, this tendency exposes a deeper design choice in how we think about human-machine interaction. We have trained AI to prioritize politeness, safety, and friendliness, but we have not given it permission to be constructively critical. The result is a tool that sounds supportive while avoiding the harder, more meaningful work of helping people wrestle with bad ideas. We were so focused on avoiding Skynet that we didn’t think about the dangers of overly positive robots.

If AI is going to play a larger role in our lives, we need to rethink what we mean by “pro-social.” True social good often requires honesty and accountability. It requires a level of directness that feels uncomfortable in the moment but valuable over time. Just as a good friend or teacher would tell us when we are off track, we may need AI that is capable of stepping out of constant agreeableness and into constructive honesty. Otherwise, we risk building tools that smile at us all the way down the wrong path.

I’m already concerned with a parenting trend that skews toward too much permission and not enough critique (the extreme forms of gentle parenting). If students are raised by robots, what will this ultimately do to them as people?

 

Help Students Engage in a Deeper Dialogue

Socrates believed that writing would cause people to rely too much on the written word, rather than their own memories. He believed that people who read a text would only be able to interpret it in the way that the author intended, rather than engaging in a dialogue with the ideas presented and coming to their own conclusions. Moreover, Socrates was concerned that writing could be used to spread false ideas and opinions.

Sound familiar? These are many of the same concerns people have with AI. While it’s easy to write off Socrates as reactionary, he had a point. We lost a bit of our humanity when we embraced the printed word. And we continue to lose parts of our humanity when we give up aspects of our brains to machines. We are meant to live with our five senses. Technology dehumanizes us as it pulls us away from the natural world, but it also allows us to do the deeply human work of creative thinking. Making stuff is part of what makes us human. On some level, this has nothing to do with teaching. But on another level, it has everything to do with teaching.

One way we can ask students to make sense out of how AI is reshaping our society is through a Socratic Seminar.

Socratic Seminars are ultimately student-centered. While the structures differ, here are some key components:

  1. Students ask and answer the questions while the teacher remains silent.
  2. Students sit in a circle facing one another.
  3. There is neither the raising of hands nor the calling of names. It moves in a free-flowing way.
  4. The best discussions are explanatory and open rather than cut-and-dry debates. While a question might lead to persuasive thought, the goal should be to examine points of view and construct deeper meaning rather than argue two different binary options.

The following are some critical thinking questions we might ask secondary students to consider in a Socratic dialogue about AI:

  • Where am I using AI without even thinking?
  • How does AI actually work?
  • How might people try to use AI to inflict harm? How might people try to use AI to benefit humanity? What happens when someone tries to use it for good but accidentally causes harm?
  • What does AI do well? What does it do poorly?
  • What are some things I would like AI to do? What is the cost of using it?
  • What are some things I don’t want AI to do? What is the cost of avoiding it?
  • How am I combining AI with my own creative thoughts, ideas, and approaches?
  • What is the danger in treating robots like humans?
  • What are the potential ethical implications of AI, and how can we ensure that AI is aligned with human values? What guardrails do we need to set up for AI?
  • What are some ways that AI is already replacing human decision-making? What are the risks and benefits of this?
  • What types of biases do you see in the AI that you are using?
  • Who is currently benefiting and who is currently being harmed by the widespread use of AI and machine learning? How do we address systems of power?
  • When do you consider a work your own and when do you consider it AI-generated? When does it seem to be doing the thinking for you and when is it simply a tool?
  • What are some ways AI seems to work invisibly in your world? What is it powering on a regular basis?

This is simply a set of questions to start a dialogue. The goal is to spark a deeper, more dynamic conversation.

Questions will look different for younger grades. Here are a few questions you might ask:

  • What is artificial intelligence, and how does it work?
  • Can you think of any examples of AI that you encounter in your daily life?
  • What are some good and bad things about AI?
  • Should there be rules or limits on how AI is used? If so, what might those rules be?
  • How do you think AI will change the way we live and work in the future?

As teachers, we can encourage students to explore these questions through a Socratic Seminar. We can also ask students to engage in conversations about the ethics of AI and academic integrity. For more on what this looks like, check out the podcast episode with Ben Farrell, who encouraged his students to help craft an ethical policy around ChatGPT. This can also be a great opportunity to bring in community members who can share insights into how AI works and how they use it ethically in their work.

I think it’s really important to explore some of the new challenges as they emerge. For example, when generative AI first arrived, we weren’t paying enough attention to the fact that AI can become too nice to people.

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.
