AI has made writing faster and easier than ever before. But it has also nudged writing toward something more generic and predictable. We’re starting to see polished work that looks right on the surface but lacks depth, voice, and real thinking. So the question isn’t simply how we catch students using AI. It’s how we design writing in a way that still demands something human. In this article, I want to explore a more proactive approach, one that focuses on making writing AI-resistant while keeping the focus on what writing is actually for, which is thinking, learning, and making meaning.
Listen to the Podcast
If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.
The Nightmare of Pleasant AI Communication
Initially, the conversation felt pleasant. The man I spoke with was patient while I described my issue. He asked great follow-up questions and responded with empathy. But eventually, I grew frustrated.
“I just need to change the location for my rental car return. Please help me,” I said.
He couldn’t do so. I asked again. Still nothing. I grew more frustrated, but he met my frustration with a pleasant, patient tone. Too pleasant. Too patient. The empathy felt artificial. Because it was. And that’s when I begged for a human on the other end of the line.
In many respects, this verbal chatbot was a customer service dream. He was pleasant, patient, and eager to help. He was positive. If my tone got sarcastic or angry, he wouldn’t care. But is that really what we want? Do I want to become the kind of person who feels free to lose my temper with no consequence? And do I want to interact with someone who doesn’t actually demonstrate empathy?
I’ve been thinking about how pervasive AI has become. I don’t want an AI summary of my Instagram direct messages with friends. I don’t need AI rewriting my emails to be less direct and more professional and verbose (just so the other person can read the AI summary of my email). What I actually want is something more human, more distinct, more real. I want authenticity.
Which leads me to a kind of counterintuitive thought. With AI making it easier than ever to generate content, I actually think it’s more important than ever that students become great writers.
Do We Even Need Writing?
I recently gave a talk on AI in education at a university teaching and learning conference. It was a great chance to meet professors in different fields and ask, “How is AI transforming your discipline, and how is that similar to or different from what you anticipated?”
I met a psychologist who said, “We were worried that people would use really bad chatbots for counseling. That’s still a concern but a bigger unintended consequence was just how fast we would see young people choosing non-judgmental, positive chatbots for friends and romantic relationships rather than people. We had a hunch but it just moved so much faster than we thought.”
An environmental scientist said, “I was the first to speak out against AI for ecological reasons but it’s so mixed. Efficiencies in supply chains and crop yields have been phenomenal.”
At one point, someone turned the question back to me. She was a business professor who focused on machine learning integration within finance (and the bigger ethical implications).
“Do you think the essay is dead?” she asked.
“I mean, I think it will continue to evolve,” I answered.
“Yeah, but do you think professors will start doing oral exams? If AI can do so much of the writing for us, will verbal communication become the sought after skill?”
“I imagine that verbal communication will become more important than ever before but I still feel that writing is critical. Maybe more important than ever.”
She nodded. “Interesting.”
“Given the sheer amount of AI slop that exists in our world, I think students will need to excel as writers in the areas where AI struggles. They’ll need to become really good with authentic empathy in a way that a chatbot can only mimic. They’ll need to understand contextual thinking at a deeper level. They’ll need to find their voice.”
Later, I was thinking about the power of writing for learning as well. True, students will need to find their voice and communicate in a way that is authentic and relevant. But we also need to remember that writing is important for learning. Our memory can only hold so much information in verbal exchanges, but writing allows us to hang on to complexity by creating something more permanent and concrete. When ideas stay in our heads, they slip and blur and simplify too quickly. They’re ephemeral. But when we write, we can slow down, really think through a key idea, and go back to it later. Writing becomes a kind of mental workspace where we can wrestle with contradictions, refine our thinking, and make connections.
It’s not just that we write in order to explain what we learned. We also learn through writing.
But How Do We Stop Cheating?
I was recently at a meeting with K-12 asynchronous English teachers who mentioned how many of their students were now using AI to write everything.
Their principal said, “We need to start with engagement. Students need a reason to want to write.”
I see her point here but it’s a little more complicated. Teachers have to teach specific standards and content and not every student comes in excited about the motifs present in a shared novel they’re reading. Also, some students are going to choose the easy way out because they struggle with focus and resilience. If AI can offer a quick fix, it can be really tempting.
These teachers want their students to develop a strong voice as writers. They also want students to write their way through understanding challenging texts. But in asking the question, “How do we catch kids cheating?” they ran the risk of falsely accusing students of academic dishonesty. They knew that the two options of “make it more interesting” or “catch them cheating” ultimately wouldn’t solve the problem of students using AI to write their papers for them. Both approaches were reactive. Instead, they need to think proactively about how to prevent AI usage throughout the writing process.
Some nuance here. I’ve written before about how we might integrate AI into the writing process and we’ll often move within this continuum as we take this approach.
But I want to focus on AI-resistance for a moment. What does it look like to prevent AI usage before, during, and after the writing process?
Ahead of Time: Craft AI-Resistant Prompts
If we think about making writing AI-resistant, we can take a proactive approach by crafting writing prompts that are AI-resistant. Many educators feel exhausted by student cheating and academic dishonesty. They feel demoralized when they craft a high-interest, critical thinking prompt only to get back a sea of chatbot-generated writing.
Simply saying, “Make it interesting” essentially blames teachers for student cheating. At the same time, if left to their own devices, students will grow overly reliant on AI and lose the ability to write.
Some teachers have responded by requiring paper and pencil for all writing assignments. I’m actually a fan of paper and pencil. It can help with long-term memory and information retention. It can allow us to use visuals and sketch-noting techniques. A handwritten approach is a great option when we are doing a single draft as a “learn through writing” exercise. We’ll be exploring that later in this article.
I do think there’s sometimes a cost to the paper and pencil approach. If we are taking more of a “demonstrate what you are learning in writing” rather than “learn through writing” approach, the handwritten process can be laborious and time-consuming. Most of us, as educators, would feel frustrated if we couldn’t type our drafts. Why would our students feel any different?
But there’s another approach that has both a push and a pull to it. The pull is the desire to write. Here’s where that principal had a valid point. Motivation is critical. Students are more likely to write when they find the prompt challenging but also meaningful. This is why argumentative writing works so well. Students tend to enjoy making a claim and backing it up with facts.
Even small differences in a prompt can lead to an increase in motivation. I tested this out with the causes of World War I, offering four options:

1. “What were the causes of World War I and how were they interconnected?” (Note that this is higher order but not opinion-based.)
2. “Which of the causes of World War I do you see in our world? How does this impact the likelihood of another world war?”
3. “Could World War I have been prevented? Can we prevent it from happening again?”
4. “Describe how World War I happened. Include each of the causes.”

Note that all but the final option required deeper critical thinking. When I tested it, nearly every student chose the second or third option. Many of them wrote more than what was required of them. Part of this is that these prompts pushed students to engage in higher order thinking. They were hitting those top levels of Bloom’s Taxonomy.
Even so, students could easily have used AI to write their answers for any of those options. This is why it helps to craft AI-resistant writing prompts that incorporate a push element, where we are actively making it harder to use AI.
We can start with intrapersonal knowledge. Here students must make personal connections to what they are learning. We can tie the question directly to their lived experiences. An example would be, “Write about a time when you faced a challenge that connects to the theme of resilience in the novel we just read.”
We can also focus on tying writing prompts to a specific, often local, context. Prompts that live in the school, neighborhood, or city at large force students to use knowledge that isn’t available to AI. A chatbot can’t “read the room.” It doesn’t know what happened in your school and neighborhood. An example might be, “Interview someone in our community about how our town has changed in the last 10 years. Connect their perspective to what we’ve studied about urban development.” Or in writing about The Great Gatsby, it might be, “How does this novel relate to the American Dream? How do you see this theme play out in our school and in your neighborhood? What aspects seem to confirm or contradict the core theme of the novel?”
Sometimes this contextual aspect involves quoting actual conversations from class. You might have students cite quotes from your classroom Socratic Seminar or discussion.
An example would be, “Ask three classmates what they think the biggest challenge is in solving climate change. Summarize their answers and explain which one you agree with most and why.”
Or you might have students take notes and summarize an interview they did with an expert and then incorporate that into their written piece. This can be really powerful in a student journalism project.
In a similar vein, you might incorporate real-time information into a writing prompt. While generative AI has improved in keeping information up to date, there’s still a lag. If this information is local, it’s especially AI-resistant. This strategy is admittedly the hardest to pull off because it forces you to modify prompts on the fly. However, it can definitely add an element of relevance. It might even be something as simple as a prompt in third grade that connects to something that happened that day in PE or on the playground.
We can also incorporate an element of metacognition by asking students to shift perspectives. You can ask students to compare how their ideas evolve over multiple drafts or discussions. While they might still use AI to help, this kind of reflection on one’s own thinking is something most chatbots struggle with. An example would be, “Look back at your first journal entry on the Civil War. How has your perspective changed after our debates and readings? Be specific about what shifted and why.”
With this approach, the goal is not simply AI-resistance. It’s about centering the writing process on our human experience. It’s a focus on the aspects of writing that humans do well that chatbots will always struggle with. But we need to recognize that this approach is not AI-proof. It’s merely resistant. Students can still use AI to generate initial drafts. So, let’s look at what we can do in the moment.
During Writing: Incorporate Transparency
As students begin to write, we can take an approach that focuses on trust and transparency. Here, we take a “show your work” approach centered on soft accountability. Let’s think about longform writing for a moment. When students are engaged in the research portion of writing, we can ask them to fill out a graphic organizer by hand or to show their thinking through a sketchnote or concept map. When they create their outlines, we can have them sketch out their outlines by hand or on sticky notes or notecards that they maneuver around physically.
During the actual writing phase, we could go old school and have students write by hand. This actually works really well for short form quick writes. But another option might be to require students to do all of their main writing on a Google Doc where you can track time stamps and check for large chunks of copied text. Again, could they get around this by copying AI-generated text onto their phone and then hand-typing? Sure. But the goal is to make it more challenging through soft accountability.
During editing, we might ask them to take structured notes on the peer feedback they receive. We might even print out a first draft and have them handwrite some of the corrections they want to make. I know that sounds old school, but there’s something powerful in the tactile element of reading a draft aloud and making revisions by scratching ideas in the margins or between the lines.
After Writing: Use AI Checkers (But Not in the Way You Think)
After designing AI-resistant prompts and incorporating soft accountability, we may still encounter student work that feels AI-generated. So, let’s think about how we handle that after the fact.
Let’s Avoid Making Assumptions
I recently read an article about a professor who claims to know when students are using AI. If they use the word “moreover,” that’s apparently a “telltale sign of AI usage.” I immediately went back to my oldest blog posts and checked for “moreover.” Apparently, it’s one of my favorite transition words. Unless I have a DeLorean and a Flux Capacitor, I don’t think I used generative AI back in 2004. For what it’s worth, I use “moreover” in speech as well.
Similarly, I have stopped using em dashes because AI tends to use them frequently.
And yet . . . em dashes have been my go-to for years. I’m often unsure whether I should use a comma or a semicolon, and my ADHD brain knows that I use parentheses way too often. The em dash, however, functions as the fast writer’s go-to. I love it. Or I did. That is, until I had to scrap it from my writing to avoid people claiming my work was AI-generated.
I find it odd how quick people are to try to spot the patterns in AI-generated text when that’s precisely what AI does. At a basic level, it’s a prediction machine creating “thoughts” based on patterns in data. “AI uses ‘moreover’ often in academic writing.” Okay, but where did AI “learn” this from? That’s right: massive amounts of human-generated writing.
But I actually think this points to a larger issue. In a world with so much AI slop, we have this collective desire to ask, “Is this human?” and if we can find certain common patterns, we can avoid the machines and connect with humans.
And yet, it feels awful when you spend hours writing and someone accuses you of using AI to generate text. It feels like a personal shot to your integrity and character. Which has me thinking about academic dishonesty and AI checkers. If I feel genuinely hurt by the claim that AI wrote something for me, I can’t imagine how a student might feel.
On a basic level, an AI checker uses this same process of pattern recognition. It’s an AI trying to spot AI. A bit like Blade Runner. Maybe too much like Blade Runner for me. But the problem is that if you write in a way that is similar to the training data or too similar to the AI process of writing (consistent verb tenses, formulaic writing, sharp contrasts), you’ll get flagged. We often see high achievers, Gifted Learners, Autistic Learners (note that I use Autistic here instead of the “person first” version of “with Autism” because I view it from a neurodiversity-positive mindset), and English Learners all being flagged for AI usage.
If just 5% of the writing is falsely flagged, then a typical high school English teacher could easily have 5-20 students being falsely accused. This not only ruins the relationship between the teacher and student but can derail a child’s future and lead to huge lawsuits.
So, why would I suggest using an AI checker?
Instead of using it as a punitive measure, teachers can use it as a diagnostic tool. They can run student writing through an AI checker and then point out key areas that are flagged as “AI generated.” Students are then asked to rewrite that section from scratch. Not necessarily edit the writing (though that is an option) but come up with something new with more of a distinct voice, then run the checker again to see the score.
The goal here isn’t catching cheating. It’s helping students learn to write in a way that’s less formulaic and that incorporates more of a distinct voice.
AI tends to write in a way that is too general and too bland. I made the comparison in the video to ice cream. AI creates vanilla.
Surrounded by vanilla, our students will need to write in a way that is unique. They’ll need to find their voice. Too often, students think good writing means sounding formal or generic. They have this vague notion of professional or academic writing. And yet, what actually makes writing powerful is clarity (especially in terms of knowing your audience) mixed with personality and perspective.
We can help students get there by inviting them to take risks and to experiment with tone. We can ask them to curate great writing that stands out to them in a way that they might emulate (we all do this). We can ask them to write about ideas that matter to them and to revise not just for accuracy but for impact. That means asking questions like, “Where do you sound most like yourself?” and “What part of this feels flat or forced?” Over time, students begin to see that their voice is not something extra layered on top of writing. Instead, it is the deeper essence of how they write.
We’ll Never Be Able to Stop All Cheating
It’s important to recognize that cheating isn’t new. When I was in high school and college, certain students cheated by paying classmates to write their essays for them. Even as the technology evolved, they could still pull that off. Turnitin couldn’t catch that type of cheating if it tried.
But with AI, it is easier than ever to cheat. It’s cheaper, faster, and harder to detect. And yet we know how important it is that students develop into great writers. This will continue to be an uphill battle but we can fight this battle proactively by focusing on an approach that is, at times, AI-integrated and at times AI-resistant. And we can be strategic about crafting our assignments and lessons to be AI-resistant before, during, and after writing.