
Throughout this summer, I’ve had the opportunity to lead professional development on ethical AI integration across the United States. In some cases, it’s been a keynote with customized sessions. In other cases, it’s been a deep-dive, 1-3 day workshop. The audience has ranged from K-12 teachers to educational leaders to university faculty. Even when I lead sessions on project-based learning and design thinking, student empowerment, or deeper learning, AI has been a part of the context. It’s redefining the way we do our work.

In talking to so many educators, I’m struck by how many of them are being intentional and creative in their approaches to AI. They’re not merely reacting to it. They’re not ignoring AI’s presence. They’re asking hard questions, discovering innovative solutions, and thinking ethically about what it means to live in an era of intense change.

We’re in the midst of a massive shift. Not a subtle pivot or gentle curve, but a full-on tectonic plate movement. AI is already transforming how we write, communicate, plan, and even think. But with any disruptive change, the real challenge isn’t just keeping up. It’s slowing down long enough to ask better questions. What kind of learning do we want to protect? What human skills do we want to amplify? What do we risk losing if we chase efficiency at the expense of intentionality?

This is where educators have a unique opportunity. We can’t ignore the changes. We shouldn’t. But we also don’t need to sprint toward every shiny tool. Instead, we can take a thoughtful, curious approach. Try things out. Observe the impact. Keep what works. Leave what doesn’t. The goal is not to reject AI or to embrace it blindly. It’s to integrate it wisely, always grounded in the kind of deep, human-centered learning we know matters most.

The shift is real, but we don’t have to rush. Let’s lead with purpose, not panic.

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via Apple Podcasts or Spotify.

 

I’m an Accidental Techie

I’m an accidental techie. I was not the kid who grew up loving audio-visual equipment. We had a Tandy computer and I learned a tiny bit of code, but I spent my days playing outside, drawing pictures, painting, and getting lost in fantastical fictional worlds. Don’t get me wrong. I watched television and I played a fair bit of Sega Genesis. But I never loved technology. In fact, I grew skeptical of technology in college while reading Wendell Berry, reading everything Neil Postman had written (along with most of Marshall McLuhan), and analyzing the primary sources of the Luddites in a historical methods course.

In other words, I was not a techie. That is, until I started teaching. I first fell in love with blogging in 2002 as I wrote my first “weblog” called “Musings from a Not-So-Master Teacher.” And yet, it wasn’t about the technology. It was the connection to others. It was the creativity. Technology has the ability to make the creative process faster and cheaper and suddenly, I could do things I never had the chance to do. As a classroom teacher, I pursued ed tech with a blend of skepticism and love. I know “love” is a strange word here. But that’s what it was. I grew to love the potential that technology offered.

I refurbished a bunch of old desktops and ran them on Linux so my students could use Google Docs and Blogger. I set up a podcasting station and had students film documentaries. I would later work as a technology coach, help develop the digital journalism course, and ultimately design my own makerspace. I earned a degree in Ed Tech.

But I was also skeptical of passing trends. Remember when Marzano said we all needed Interactive Whiteboards and suddenly districts spent millions on them? Remember when one-to-one devices were touted as the end all be all? I remember attending a huge technology conference and seeing a flatscreen device that could work as a xylophone and thinking, “Or maybe we just let kids use xylophones.” Still, I slipped into tech hype as well. I remember “going paperless” for a full quarter only to have a student tell me how much she missed sketchnotes, sticky notes, and Socratic Seminars.

It was at this point that I realized I wanted to take more of a “vintage innovation” approach to tech:

So, here I am entering my twenty-third year as an educator and I’m still an accidental techie. I don’t know every single AI platform for education. I don’t get particularly excited about things like App Smashing (though I think it’s cool). Instead, I am thinking about the bigger picture of what learning looks like in a generative age. And so, I’d like to share a few thoughts about how we can approach machine learning as we start out the school year.

Let’s lead with curiosity, not fear. Let’s stay grounded, not reactive.

Start with Curiosity

For the longest time, I felt like I was ahead of the curve with technology. I avoided the fads (well, except for that quarter when I went paperless and the short period where I was obsessed with Wordle – not to be confused with the game). But I would also adopt new technology if it could lead to deeper learning.

And yet . . .

Generative AI caught me off guard. In 2017, when I first witnessed the newest iterations of generative AI, I was overwhelmed. I immediately started to ask, “Is this good or bad?” followed by “How do we leverage the strengths and avoid the weaknesses?” But then, I talked to an expert who said, “What does all of this make you wonder?”

We spent an hour talking about what it means to be human, how we use our tools, and how our tools transform our minds. I left that meeting wondering if maybe I needed to start from a place of curiosity rather than critical analysis. I needed to ask questions and chase rabbit trails. I needed some play time to explore what AI could and could not do.

So, this has me thinking about schools and districts. Yes, we need policies. True, we need meaningful professional development. But I wonder if maybe we also need curiosity. You might start out by having some sandbox time where you explore how AI actually works by going through the FACTS Cycle of prompt engineering:

 

You might also need to come up with a set of questions that you, as an institution, need to ask before having teachers or students use AI. About a year ago, I wrote this article about seven questions you might ask before having students use AI. You could start with those questions and then move into a larger dialogue. You might even engage in a Socratic Seminar.

This stance of curiosity has the added bonus of keeping us humble. The hard reality is that we cannot predict how technology will change our world. Few people thought the bicycle would help amplify the Suffrage Movement. Few people could have predicted how social media would lead to filter bubbles, echo chambers, and changing attention spans (including the short-term dopamine spikes leading to challenges in attention and emotional regulation in many young people).

When we start with questions and we ask, “What’s going on here?” we are more likely to avoid snap judgments and the dangerous dead ends of both Techno-Futurism and the Lock It and Block It approach.

 

Taking an Ethical Approach

Two and a half years ago, when ChatGPT first made a splash in the public consciousness, I wrote an article and created a video asking how we should approach AI in education. I shared two dead ends to avoid and one approach that we might want to embrace.

Avoiding Two Dead Ends

The first dead end is Techno-Futurism. This is what happens when we start with the question, “What can AI do that humans can’t?” and then scrap all those pieces that a machine can do instead. This dead end sees the promise of transformation but sadly places technology as the driver of that transformation. I’ve seen this with the bold statements that AI will replace the essay or that we won’t even need teachers because students will simply sit alone with a chatbot and learn self-paced content. Students will have personalized learning at their fingertips.

This dead end mistakes teaching for content delivery and fails to recognize that learning is deeply human and dynamic. It is often collaborative and creative. We end up mistaking adaptive learning systems for personalized learning. Furthermore, Techno-Futurism mistakes novelty for innovation and we end up chasing shallow trends rather than sustainable change.

But there’s a second dead end that takes us in the opposite direction. This is the Lock It and Block It approach. Here, schools block all forms of AI. They often shift toward paper and pencil and ban technology altogether. In some cases, schools use AI detection software (which creates massive legal challenges and the high likelihood of falsely accusing students of cheating). Meanwhile, students never have the opportunity to learn how to use AI in a way that is ethical and intentional.

A Third Way: Being Ethical and Intentional

Fortunately, there’s a third approach that avoids the extremes of Techno-Futurism and Lock It and Block It. This is a human-driven approach that focuses on allowing the learning to drive the AI usage while also being open to ways that generative AI can transform the learning process. This approach is inherently blended. It’s yes / and.

(Venn diagram: an overlap of AI and the human voice with the word “blended” in the middle.)

This third way begins with the question, “What does it mean to use AI ethically?” and from there, it recognizes that we will likely change our use of AI based on the content. Instead of being pro-AI or anti-AI, this approach sees AI as a powerful tool that we need to use wisely with a hefty dose of humility.

Sometimes it helps to think of AI use as a continuum from rejecting to embracing its use.

As we navigate the rapid evolution of generative AI in education, it’s helpful to think in terms of a continuum rather than a hierarchy. It’s not about one being better than the other, but about choosing intentionally based on your goals, your learners, and your context. At one end is the AI-Resistant approach, which centers on tactile, human-centered learning with no AI use. Then comes AI-Assisted, where teachers use AI behind the scenes to streamline prep and planning while students do not use it directly. Next is AI-Integrated, where students and teachers both use AI as a support for learning, with tools woven into instruction in a thoughtful, standards-driven way. At the far end is the AI-Driven model, where machine learning reshapes what and how students learn, inviting educators to reimagine the entire learning experience in a world shaped by AI.

As educators, we are going to move back and forth between these modes. You might switch between these models on an assignment-by-assignment basis or even within a single lesson. In my assessment course, I had students complete an in-class AI-Resistant task, then switch to something AI-Integrated, then move toward a homework assignment that was much more AI-Driven.

The goal isn’t to compete with AI. It’s to stay deeply human in how we teach and how we learn.

Taking a Cyborg or a Centaur Approach

This third way also blends together the Cyborg and Centaur approaches to using AI. Ethan Mollick’s book Co-Intelligence has me thinking about a metaphor he used at the end of the book: centaurs and cyborgs. A centaur divides the task between human and machine. The human does some parts, the AI does others. It’s collaborative but separate. You might see this in a writer who brainstorms ideas with ChatGPT, then drafts and edits everything themselves. The machine offers structure or inspiration, but the creative decisions stay with the human. I take this centaur approach in my own writing. I outline a piece and then ask for feedback. I might stop and have a conversation to add clarity on a topic. I use Consensus to explore research, but then I write it in my own words.

A cyborg, on the other hand, blends the work together in real time. It’s less about switching roles and more about co-creating. A cyborg approach might look like someone generating ideas, asking questions, revising with AI feedback, and shifting approaches midstream in a constant back and forth. I have a friend who describes his writing process as being closer to that of a writers’ room for a TV show. He bounces ideas off two characters he has prompted in ChatGPT, and then he drafts in a back-and-forth format, modifying AI-generated text into his own style.

Mollick argues that cyborgs tend to outperform centaurs. Not because they rely more on AI, but because they’ve learned to weave it into their thinking as a kind of extended cognition. They’re not outsourcing the work. They’re enhancing their own abilities by tapping into what AI does well with things like generating ideas, testing variations, and offering structure. In this way, AI becomes more like a thought partner than a tool. In other words, the most effective users aren’t just skilled at prompting. They’re reflective, iterative, and curious. They stay in control of the process, but they also allow the process to be shaped by the interaction.

So, as educators, we might want to do some centaur tasks. You might let AI run DIBELS testing, for example. You might use it to do some aspect of the job that feels tedious and time-consuming. But you might also want to keep some things fully human and gate off the AI. At other times, you’ll blend the AI with the human, like flavors swirled together in ice cream.

As we consider this third way of ethical integration, we need to think critically about the pros and cons of AI.

 

Being Cognizant of the Concerns and Limitations of Generative AI

The other day, my daughter watched me do a Google Search and saw me point to something with the AI Overview. She immediately shook her head and said, “Dad, you’re asking about the set list for Regina Spektor’s concert. It’s going to give you an overview of so many different concerts. Maybe you should look at Reddit.”

“Yeah, I don’t know what I was thinking,” I answered.

“It’s fine. We all hit autopilot,” she said.

This is a small example of a limitation of AI: it doesn’t know the immediate context. The best choice here was actually to hop onto Reddit and look at multiple threads to see what her recent set lists have been.

As we think about AI, contextual understanding is a huge limitation. AI cannot “read the room.” It doesn’t know about current events, even when it tries to stay up to date. It doesn’t know about the local community. AI has no idea what a student just said in that Socratic Seminar ten minutes ago. We often have to give it context using a RAFT (an idea taken from Project CRISS).

Even when we do give it context, AI often defaults to the training data. It will essentially “forget” a correction you gave it a few prompts before and it struggles to keep pace with information when you change your mind. I saw this firsthand when I was using it as a thought partner to ask questions and give feedback on a novel I was writing. At one point, after analyzing 70,000 words, it simply failed to understand what I had shared and it thought the ending was what I had written in Chapter 4. It confused characters’ names. The bottom line is that it doesn’t always maintain consistency. The main culprit is that while AI is getting better with deeper reasoning, it still doesn’t engage in causal reasoning and logic. Which leads to my next point.

Generative AI will contain factual inaccuracies. Even when the temperature is low (more accurate, less creative), there will be hallucinations, where it simply makes things up. It’s getting better, but we still need to engage in media literacy when interacting with a chatbot. In addition, AI contains biases. Here’s a quick challenge: ask AI to create an image of a left-handed painter. It struggles with this because it makes predictions based on the most common training data available. It’s no surprise, then, that AI models can flatten cultural complexity, misrepresent communities, or rely on biased training data that doesn’t reflect the lived experiences of diverse groups.
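Temperature is easier to grasp with a toy example. The sketch below (plain Python, with made-up numbers, not any vendor’s actual implementation) shows how temperature rescales a model’s raw scores before it picks the next word: a low temperature makes the top choice nearly certain, while a high temperature spreads the picks across all the options.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick an index from raw scores (logits) after temperature scaling."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Roll a die weighted by those probabilities.
    roll = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if roll <= cumulative:
            return index
    return len(probs) - 1

# Three made-up "next word" scores; index 0 is the model's favorite.
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(1000)]
print(low.count(0))    # low temperature: almost always the favorite
print(len(set(high)))  # high temperature: all three options show up
```

Notice that even at a low temperature the sampling is still probabilistic, which is one reason a “more accurate” setting never eliminates hallucinations entirely.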

Furthermore, AI misses many of the deeply human elements that a machine cannot replicate. AI cannot be truly creative. It can remix and reword, but it doesn’t wrestle with ideas. It doesn’t struggle through drafts or stare out the window waiting for a breakthrough. It mimics creativity, but it doesn’t originate. Same with empathy. The tone might be warm. The phrasing might sound human. But it doesn’t actually care. And that matters, especially in classrooms, where students need to feel seen, heard, and valued.

We also see broader unintended consequences. AI cannot engage in ethical thinking. Without a grounding in ethics, we can end up with tools that amplify bias, spread misinformation, or cause real harm in the name of innovation. We’ve already seen what happens when deep fakes blur the line between fact and fiction. Trust erodes. Truth feels negotiable. But sometimes it’s more subtle. We grow reliant on AI tools and experience cognitive atrophy. That’s why it’s important that we don’t let AI do all the thinking for us.

Embracing the Potential Uses of AI

If the last section makes it seem like I’m in the Lock It and Block It camp, I’m not. We need to recognize that AI is a powerful tool that can help us do amazing work:

  • Creating learning supports: Designing sentence stems, writing frames, and graphic organizers that give students a way into the learning without lowering the bar. It can be a great tool for helping students with Executive Function challenges stay on track, visualize time, or break down large tasks.

  • Using it as a co-creation tool: As teachers and students, we might integrate AI into every part of the writing process.

  • Using it as a study aid: Students can submit their work to AI, which analyzes it, identifies trends, and helps them study. It can create leveled texts and differentiated reading passages so all students can access content at just the right challenge level. It can also generate quick tutorials, anchor charts, or models that reinforce concepts without requiring hours of prep. Students can ask the AI questions, and it will test them and offer feedback on why their answers were right or wrong.

  • Using it as an inquiry tool: You can let students ask their own follow-up questions in real time. They can build background knowledge on concepts, learn specific skills, interview historical figures, etc.

  • Using it as a thought partner: You can use it as a brainstorming partner during inquiry or research, helping students refine questions and build background knowledge. In fact, we can even use it as a thought partner within PBL.

  • Using it as an assessment tool: This might include offering feedback on student work in the moment, so revision happens while the ideas are still fresh. It could involve helping students reflect by summarizing their thinking and asking clarifying questions. As teachers, we can use it to design rubrics or to analyze formative data and student responses to find trends that can inform next steps.

  • Using it as a productivity tool: Drafting parent emails, rubrics, or lesson outlines so teachers can spend more time connecting with students. We might use it to build slide decks, tutorials, and quick videos that support instruction without burning out the teacher.

These are just a few ideas off the top of my head. I’ve seen teachers use AI as a cognitive coaching partner to guide them in reflection. I’ve seen them use it to help with logistics around small group formation. Again, the sandbox time is critical here.

One of the biggest questions I wrestle with is, “Does this lead to deeper learning for my students?” In other words, will it give them the depth advantage they need in a changing world?

We can start out by asking, “What am I already doing?” and then build on it. Some of the most creative uses of AI come from the teachers right down the hall from you.

Finally, we need to consider the role of policy. I’m working with a school in New Hampshire as part of a back-to-school kickoff and a keynote in the fall. In our conversations, we have talked about national policy but also statewide policy and guidance, as well as local community perspectives.

Here’s where it also helps to invite community members to talk about what a blended approach might look like. You might do a Socratic Seminar with students and listen to their fears, concerns, and aspects that they find exciting. You might do an in-depth interview with local experts. You might hold a town hall meeting with parental figures.

 

Start Out with Intentionality

So here we are. A new school year is beginning, and the ground is still shifting. Generative AI isn’t a passing trend. It’s a tool, a context, and in many ways, a mirror that reflects how we think, learn, and create. But it doesn’t have to take over. As educators, we get to decide how we respond. We can lead with wisdom, not panic. We can make space for nuance, not quick fixes. And we can keep students at the center of it all.

I’ve seen some schools that seem worried about falling behind. Are we doing enough? Are we moving too slowly? But I think the pace is less important than the attention and the conversations. This is why I keep coming back to curiosity, conversation, and thoughtful experimentation. We can listen to students. We can design learning experiences that are both deeply human and genuinely innovative.

We can create policies that are flexible, grounded, and informed by the people who live this work every day. Most of all, we can remind ourselves that our goal is not to compete with AI. It is to cultivate thinking, creativity, empathy, and connection. These are the very things that make learning matter.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

Fill out the form below to access the FREE eBook:

 

Spark curiosity.
Ignite creativity.

Join over 90,000 educators who receive teacher-tested tools, fresh ideas, and thought-provoking articles every week straight to your inbox.

John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
