
When I do workshops on student-centered assessment practices or teach the Assessment Design course as a professor, I’ve noticed that people feel strong emotions about assessment. As educators, all of our assessment practices come from a set of values and beliefs that we hold. Do we give extra credit? If so, are we valuing accuracy or effort? Do we take points off for late work? If so, is this about measuring the mastery of learning or teaching students how to stick to deadlines and work hard? Do we grade for effort or grade based upon a set of criteria? Do we give group grades on projects or is that unfair to individuals who did all the work? Should we use weighted categories? Should students get the opportunity to resubmit work or retake tests? Does that help them gain perseverance or lead to laziness?

For the last decade, I have watched teachers engage in heated debates over these issues. And I understand why. These assessment practices are rooted in our values about fairness and justice. They connect to what we believe about teaching and learning and why we are in the classroom as educators. Tell me to try a new discussion protocol with small groups? I’m down for that. Tell me to eliminate zeroes from the grade book? I’ve got strong feelings about it.

At a basic level, most of us can agree that assessment exists for a few reasons:

  • Assessment helps teachers know how individual students are doing. Did this particular student master the learning target? Does this student understand the concepts? Has this student mastered the skills? Is this student engaged in the learning? Does the student need additional supports or scaffolds? These questions help us identify which students need enrichment and which need intervention (like a small group pull-out).
  • Assessment helps teachers know how the class as a whole is doing. What are the overall trends that you notice? This helps teachers with the larger lesson planning cycle of planning, implementing, and assessing.
  • Assessment helps students know their mastery level. They should be able to understand what they know, what they don’t know, and where they will go next. But assessment might also help increase student motivation or self-efficacy. It might be less evaluative and more descriptive.

So, I’d like to consider how AI might change our assessment practices in the upcoming years. The following are five potential trends we might see.

[Sketchnote: all 5 ideas. 1. Less grading, more assessment. 2. Empowering students to own the assessment process. 3. Faster feedback. 4. Increased differentiation of assessments. 5. Predictive analytics.]

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).

 

Trend #1: Less Grading and More Assessment

When I interviewed Mark Schneider, he described how AI will be used in the future within speech therapy. Currently, speech pathologists spend hours doing the initial intake tests. The work can be tedious and time-consuming. With the combination of generative AI and voice recognition, we will eventually have AI tools that can run the speech test and provide the initial data that the pathologist needs. This data can go beyond the simple score and actually provide insights into what type of speech therapy the student might need. Speech pathologists can then engage in ongoing assessment in a way that’s more dynamic and relational.

Now imagine a similar technology for something like reading fluency. Currently, early elementary teachers spend hours doing reading fluency tests on a one-on-one level. It can get boring at times and that’s when human error begins to slip through. Fortunately, AI doesn’t get tired. So, instead, we might see speech recognition technology used within AI for a reading fluency test. Students will take the test on a computer or a smart device, get an immediate score, and see how it compares to their reading goals. This frees teachers up to engage in student-teacher conferences and small groups. Teachers will also be able to adjust the small groups because the data is more frequent.
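To make that idea concrete, here is a toy sketch (not any real product) of how such a tool might turn a speech-recognition transcript into a fluency score. The passage, timing, and scoring thresholds here are all hypothetical, and a real system would be far more sophisticated, but the basic move is just comparing what the student said to what the passage says.

```python
# A toy sketch of how an AI fluency tool might score a reading,
# assuming a speech-recognition transcript is already available.
# The passage, transcript, and timing below are hypothetical.
from difflib import SequenceMatcher

def fluency_score(passage: str, transcript: str, seconds: float) -> dict:
    """Compare what the student said to the passage and estimate
    words correct per minute (WCPM), a common fluency measure."""
    target = passage.lower().split()
    spoken = transcript.lower().split()
    # Count the words the student read correctly, in order.
    matcher = SequenceMatcher(None, target, spoken)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return {
        "words_correct": correct,
        "accuracy": round(correct / len(target), 2),
        "wcpm": round(correct / (seconds / 60), 1),
    }

result = fluency_score(
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumped over the dog",
    seconds=6.0,
)
print(result)  # {'words_correct': 7, 'accuracy': 0.78, 'wcpm': 70.0}
```

The point isn’t the score itself. It’s that the machine handles the tedious counting so the teacher can spend that time conferencing with students.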

It’s not that AI replaces teachers in the assessment process. It’s just that we outsource the areas most prone to human error and focus instead on the more relational elements of assessment.

 

Trend #2: Faster Feedback

When I was a kid, I loved playing baseball. I spent hours hitting a ball off a tee. My twin brother and I would do soft toss in the backyard. I probably averaged 40+ hours per week in the summer just practicing the game of baseball. You’d think I would have been a stellar hitter. After all, practice makes perfect. But actually, I was awful. I couldn’t see the ball well. When I hit it, I often fouled it away. Then, on the rare occasion that my coach corrected my swing, I found it nearly impossible to fix. So, what was going on? I suffered from a lack of feedback, and I had spent hours committing a poor swing to muscle memory.

For decades, we’ve known that it’s vital for students to get immediate feedback. When learning a skill, a lag in feedback can mean students practice things incorrectly. It’s not all that different from my baseball swing. When learning a concept, a lag in feedback can mean a misconception lasts for weeks and leads to multiple misconceptions. For some students, a delay in feedback leads to a loss in motivation.

With AI, we have the ability to provide faster feedback. A teacher might run an essay through an AI assessment tool to generate a set of suggestions that the student then uses. A teacher might use an AI tool with a math problem to get a quick diagnosis of what mistakes might be occurring in computational fluency or mathematical reasoning. With AI, we will see faster feedback that can lead to more timely corrections. While I generally embrace this trend, I think we need to be a little worried about the lack of struggle. If we are identifying mistakes fast and providing instant feedback, I worry that we might accidentally quash productive struggle, and this could lead to learned helplessness.

As a diagnostic tool, AI has the potential to help us identify mistakes that we don’t see and to provide targeted feedback quickly. But it might also move a step further in predicting mistakes before they happen.

 

Trend #3: Predictive Analysis

Machine learning tends to excel in finding patterns and using those patterns for predictive analysis. In assessment, this might look like:

  • Adaptive learning: AI algorithms can analyze data on students’ past performance to develop adaptive learning plans that can help students learn more effectively and efficiently. They might take into consideration student interests, attention span, work completion rate, or any number of data factors to help predict what kind of assignment might lead to “stickier” learning for the student. For more on this idea, see my previous article.
  • Early identification of at-risk students: AI algorithms can analyze data on students’ attendance, grades, and other factors to identify students who are at risk of falling behind or dropping out of school. This can help teachers and administrators intervene early to provide support and prevent students from falling through the cracks.
  • Adaptive assessments: AI algorithms can analyze data on students’ responses to assessment questions to adjust the difficulty level of subsequent questions, ensuring that each student is challenged appropriately and accurately assessed.
  • Predictive modeling: AI algorithms can use data on past student performance to develop predictive models that can identify factors that are likely to lead to success or failure. This can help teachers and administrators make data-driven decisions to improve student outcomes.
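The adaptive assessment idea is simpler than it sounds. Here is a minimal sketch of the core rule: raise the difficulty after a correct answer, lower it after a miss. This "staircase" rule is a stand-in for the item response theory models real tools use, and the 1–5 difficulty scale is hypothetical, but the intuition is the same.

```python
# A minimal sketch of adaptive assessment: step the difficulty up after
# a correct answer and down after a miss. Real adaptive tests use item
# response theory; this hypothetical 1-5 scale just shows the intuition.

def next_difficulty(current: int, was_correct: bool,
                    lowest: int = 1, highest: int = 5) -> int:
    """Pick the difficulty of the next question on a 1-5 scale."""
    step = 1 if was_correct else -1
    return max(lowest, min(highest, current + step))

# Simulate one student's path through a short assessment.
responses = [True, True, True, False, True]  # correct / incorrect
difficulty = 3
path = [difficulty]
for was_correct in responses:
    difficulty = next_difficulty(difficulty, was_correct)
    path.append(difficulty)

print(path)  # [3, 4, 5, 5, 4, 5]
```

Even this crude version shows why adaptive tests can pinpoint a student’s level faster than a fixed test: every question is spent near the edge of what the student can do.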

I have mixed feelings on the predictive element. For example, if we use AI to identify “at-risk” students, are we creating a self-fulfilling prophecy? Are we falling into a deficit mindset? There’s something vaguely Black Mirror about the use of Big Data and AI for predicting a student’s performance. And yet, there’s another side that feels more hopeful. If AI can look at past math performance, we might be able to predict upcoming struggles and actually move toward compacting, where students skip the standards they’ve already mastered and spend longer on the areas where they struggle.

 

Trend #4: More Differentiation within Assessments

Differentiation takes time. There’s no way around it. Even when we empower our students to self-select scaffolds, it takes time to design those scaffolds. With AI, differentiation is now faster and easier. We might see a trend toward differentiated assessment. It might look similar to adaptive learning programs, where the assessment closely matches a student’s current skill level. AI-powered adaptive assessments can provide teachers with more detailed insights into student performance by adjusting the difficulty level of assessment items based on student responses. This can help teachers figure out what students know, what they don’t know, and what they need to do next. The result is more personalized instruction at each student’s skill level.

Other times, we might use AI to take a current assessment and differentiate it for English Language Learners. Here, the AI might simplify verb tenses, provide a glossary of vocabulary, or even generate sentence stems. In some cases, you might create multiple types of assessments and let students choose their method. But we might also use AI to create targeted formative assessments for students with learning differences. It could be a rubric or checklist for a task related to an IEP goal. Here, a teacher starts with the AI-generated assessment and then modifies it based on what they know about the individual student.

 

Trend #5: Empowering Students to Own the Assessment

While differentiated assessment is important, we also want to empower students to own the assessment process.

Here are a few examples we might see:

  • Goal-Setting: Students can use AI to get personalized recommendations for areas to improve. They can take ownership of the learning process by evaluating their strengths and weaknesses and coming up with next steps.
  • Resource Recommendations: Students can use AI as a curation device to help find the scaffolds, tutorials, and resources they need when learning a concept. Whether it’s a set of tutorials in an applied math exploration or a recommendation of sites to visit in a PBL, AI has the potential for students to move from “What do I need to know?” to “Where can I find this?”
  • Performance Feedback: As educators, we can use machine learning in video for surveillance purposes (like checking for misbehavior or catching cheating), but the combination of video and AI can also mean better feedback on physical performance. Imagine having AI listen to a musical practice and give feedback. Added bonus: teachers don’t have to listen to “Hot Cross Buns” on the recorder. My heart goes out to every elementary teacher who has to listen to recorder music. Similarly, in a PE class, AI might be able to examine a video and provide feedback on form to increase safety. Students could essentially “chat” with the AI to learn how they might make adjustments.
  • Learning Portfolios: We tend to think of digital portfolios as a personal endeavor – which they are. But they are also inherently social. Students are sharing their work with a larger audience. AI can be used to help students create and maintain digital portfolios, which can showcase their achievements and growth over time. A student might select three artifacts and then choose two more from a recommendation algorithm.
  • Time management: AI can be used to help students manage their time more effectively, by providing personalized reminders and recommendations for how to prioritize their tasks. It doesn’t have to be heavy-handed or attached to a penalty. It could simply be an on-task score. If a student is working on a project and finding social media distracting, they might use AI to create a system to stay focused. That being said, there’s also a time for ignoring metrics and embracing human inefficiency. Daydreaming, going for walks, taking brain breaks – these all lead to better problem-solving and improved creativity.
  • Facilitating Reflection: Assessment isn’t always purely evaluative. Sometimes it’s more descriptive in nature. Students can work with chatbots that can facilitate reflection in a way that allows for follow up questions.
  • Find what’s missing: Sometimes we just need another set of eyes that can say, “Have you considered _______?” AI can function like a peer who looks at a piece of work and says, “You might want to consider adding something here.”
  • Personalized feedback: Students can submit an example and get immediate feedback. It could be diagnostic feedback (which we’ll address in-depth in our chapter on math) to help determine mistakes. Or it could be an open-ended set of pros and cons. However, we can also teach students how to ask for specific and actionable feedback.

In some cases, the AI acts almost like a tutor. In other cases, it’s simply a tool that students wield.

What Values Will Drive the Assessment Practices?

Earlier I mentioned how assessment practices are values-driven with significant variance and disagreement among educators. As we think about how we will use AI within our assessment practices, we need to recognize the role of these values and beliefs. Machine learning will not occur in a vacuum. Values, beliefs, and policies will shape the way engineers design and educators implement these AI systems. As a teacher, leader, or coach, it can help to look at any AI assessment tool and ask, “What are the values driving this design?”

I mention this because some of the AI assessment tools seem to be driven by a strongly behaviorist perspective of learning, with an emphasis on surveillance and accountability. Think plagiarism detectors or facial recognition systems used to gauge student engagement and behavior. Others are quasi-behaviorist with elements of gamification (badges, levels, etc.). Some treat knowledge as something obtained, retained, and explained rather than internally constructed. Some assessment tools are more diagnostic, others more descriptive (an extra set of eyes), and others evaluative.

My fear is that we will see AI assessment tools chosen largely by convenience and cost rather than pedagogical soundness or core values. I worry that we will see tools gain popularity due to clever marketing and cute design. As districts purchase these tools in a top-down fashion, some educational leaders will push for compliance in the name of having common assessments. Instead of providing a tool and trusting teachers to decide when and how to use it, we might end up with policies that outsource most of the assessment processes to a machine in a way that might conflict with a teacher’s core beliefs.

On the other hand, if we empower teachers to use AI tools wisely, we will end up with a more human-driven approach. I think it’s critical that we retain the human element in assessment.

Remember the Human Element

I showed my son how well ChatGPT does in analyzing writing and giving feedback. I explained that it could do feedback almost instantaneously in a way that I, as a teacher, cannot. I gave an example of one of my blog posts. It was a personal narrative about the lessons our greyhound taught me.

His response?

“That’s horrible feedback.”

“What do you mean? It’s practical and actionable. It includes more positive than negative feedback. Everything about it is true.”

My son said, “If a teacher did that it would be heartless. If someone writes about their pet dying the only response is ‘I’m so sorry. Want to talk about it?’ That’s it.”

He went on to say, “Feedback is fine. I know that we need it and all, but my favorite English teachers give feedback in a way that makes me feel known. The feedback makes me want to write. It’s critical, yeah, but it’s critical in a way where I think, ‘My teacher gets me.’ Does that make sense?”

It does make sense. What he’s alluding to is that feedback isn’t mechanical. It’s relational. It’s dynamic. It’s contextual. It’s even, at times, empathetic. I still think there’s some potential promise in AI as a feedback tool but I don’t think it will ever replace the relational aspect of assessment.

At the most human level, assessment is a conversation. Sometimes it’s an internal dialogue we do in isolation. Sometimes it’s a conversation with trusted peers. Often, it’s a conversation between a student and a teacher. These conversations might be about word choice or style or argumentation. But they might also be about hopes and dreams and grief and loss. I never want to lose those conversations.

Similarly, there is something powerful about self-assessment in true solitude. I know it’s inefficient and probably not eco-friendly, but every time I write a book, I print everything up in a manuscript format and mark it up by hand. I jot notes in the margins. I rip sentences to shreds. I circle segments and draw an arrow to a new spot. If I’m frustrated, I might just draw a cartoon character on the side. This messy annotation is a private space where I’ve learned to trust my voice and my inner critic. I don’t want to invite an algorithm into this ritual. It’s mine.

AI can function as a form of peer feedback. The algorithm can generate targeted feedback on voice and style and word choice. It can help me tighten up my writing and get rid of unnecessary words. But it can’t tell me what it’s like for a person to read it. For that, I need a human on the other end who can say, “John, that piece moved me.”

That’s the kind of feedback that makes me want to write more.

Where I am most hopeful is that teachers will spend more time giving the more human-oriented feedback while AI provides some of the instant feedback as a diagnostic tool. Many of my current students (preservice teachers at the elementary level) spend hours testing students on reading fluency. In the upcoming years, we will likely have AI that can listen to students do fluency practice and catch the mistakes to provide a fluency score but also diagnose areas where they might be struggling in phonics, blending, and phonemic awareness. This would then free teachers up to do conferencing or pull small groups.

My hope is not that AI will replace teachers in the realm of assessment. Instead, my hope is it will free teachers up to give the kind of feedback that only a human can give.

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

 

 
