
Artificial Intelligence can feel overwhelming. With so many tools and strategies, you might be wondering where to start or what to do. But I actually think that before we integrate AI into our lessons with students, we should ask some critical questions that will guide our approach. In this week’s article, I share seven different questions you might ask.

Instead of starting with a list of tools, what if we started with a series of questions?

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, you can subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.

Seven Key Questions to Consider When Having Students Use Machine Learning

The following are seven questions you might want to consider as you craft policy and design lessons.

 

1. What are the learning tasks?

Over the last two years, I’ve had the honor of delivering keynotes and conducting workshops on AI and education. One of the questions I get asked most often is, “When is it okay for students to use AI?”

Often, the goal here is to nail down a specific school-wide policy that every teacher can adhere to. Some schools have even developed a table or chart with “acceptable help” and “unacceptable help” for AI usage. While I love these charts for the clarity and unity they offer, I wonder if it might work best to use that type of chart on individual assignments or projects rather than as a singular school-wide policy.

In the long run, we want students to learn how to use AI ethically and wisely. But this requires students to think critically about the context of the task at hand. We want students to develop a mindset of curiosity and critical thinking, where they ask, “Is this the right time to use AI?” This will vary from course to course. If you’re teaching a coding class, you might want to be strict with students about using generative AI to create any kind of computer code. You might want students to learn how to code by hand first and then, after mastering the language, use AI-generated code as a time-saving device. By contrast, if you’re teaching a history class and a student wants to demonstrate their understanding of a historical concept by designing a game, you might allow that student to generate the game code using machine learning.

It might feel like cheating for a student in a film class to use AI for video editing, but AI-generated jump cuts might save loads of time in a science class where students demonstrate their learning in a video. In a film class, it’s critical for students to learn how to edit by hand in order to tell a story. In science, AI-generated jump cuts allow students to create videos quickly so they can focus on the science content.

This isn’t new. Technology has always helped us save time and money in doing creative work.

Technology makes the creative process faster and cheaper.

As an eighth grader, when I made a slide presentation, I had to find all the pictures in books and magazines, take photos of those pictures with a camera, and take the film to Thrifty’s Drugstore to get my slides for the carousel. I don’t miss that. Okay, I do miss the cylindrical ice cream scoops from Thrifty’s. It was a big thing in California. But I now make slides using paper, pens, an Apple Pencil, and Photoshop. It’s way easier and faster.

The danger with automation is that the AI can do so much of the work that students miss out on the learning. This is why it’s still vital for students to take notes by hand or do prototyping with cardboard and duct tape.

So, what does this mean for schools crafting universal policies for all students? I’m on an AI committee at my university. We are rewriting our university policy, which we will include in our syllabi. I don’t have any easy answers for crafting an airtight policy that also allows for contextual flexibility. However, I think there are a few things we can often agree on:

  • Give educators leeway in how their students use AI
  • Research how AI is used in different disciplines, domains, and industries and allow students to learn how to use it wisely
  • Make sure educators give clear expectations for how students can use AI within a given assignment
  • Require students to share when and how they have used AI

The challenge is in creating a policy that is universal for an entire school but allows for flexibility given the context and learning targets of specific lessons. A simple statement might be, “Generative AI may only be used on graded assignments when the teacher has granted explicit permission.”

 

2. How will we ensure that the AI supports rather than replaces the thinking process?

I have lived in Salem, Oregon, for nearly a decade. However, I still have moments when I struggle to orient myself around town. I hear people refer to landmarks and find myself unable to pinpoint where they are in relation to the rest of the city. This is a sharp contrast to Phoenix, where I spent my formative driving years memorizing key locations and keeping a mental map of the city and its sprawling suburbs.

This is a small example of something called cognitive atrophy. Just as a muscle can atrophy when we don’t use it, our brains can experience a similar process with specific thinking processes. This isn’t new, by the way. Modern humans do not memorize anywhere near as much text as they did two or three hundred years ago.

So, as we think about the use of AI, we need to be cognizant of cognitive atrophy. I love the question-and-answer nature of a chatbot, but I worry about the lack of productive struggle it might cause. I worry about instant answers and the loss of things like boredom and confusion that are so necessary for the learning process. I love how AI can help with ideation, but I never want it to be my default in brainstorming. I can see value in using AI throughout the creative process (especially within project-based learning), but I worry about outsourcing creative work to a machine. When that happens, students don’t become the makers and problem-solvers that they can be. In other words, I worry that we might become so dependent on AI that we lose the ability to engage in certain types of thinking.

As we engage in lesson planning, it can help to take a T-chart and separate the human thinking from the machine thinking. When we type on a computer, we lose aspects like handwriting (which can be important for motor development and can help make learning more memorable). When we use spellcheck, we get instant feedback (which can improve spelling), but if we run on auto-pilot, we run the risk of never learning the correct spelling. When we use spreadsheets and calculators, we sometimes forget to engage in number sense and ask, “Is this reasonable?”

This is why I love the notion of vintage innovation. We want to maintain an overlap of the lo-fi and the high-tech.

 

3. How does this AI tool align with what we know about how the brain works?

If I am going to connect machine learning to human learning, I need to consider the way that the human mind works. So, if I am using AI as a tool for project management or work completion, it helps to think about human motivation and ask, “What type of motivational technique does this AI tool use?” Here, I might use the Self-Determination Theory continuum from extrinsic to intrinsic motivation.

Here, I might use AI for designing and tracking goals. I might use elements of extrinsic motivation through AI to gamify certain habits.

If I am focusing solely on acquiring new knowledge, I might start with the idea of cognitive load and ask, “Does the use of AI here reduce cognitive load or does it create extraneous cognitive load?” I might also draw on the neuroscience of long-term and short-term memory.

When I first learned about neuroscience and the brain, I was surprised to learn that so many study strategies don’t work. Re-reading and highlighting, for example, are highly ineffective. However, testing oneself on the material creates additional retrieval practice and allows students to determine what they know, what they don’t know, and what to do next. In terms of AI, a student can use a chatbot as a study tool to increase retention of knowledge. Here’s how it works.

1. Gather and Upload Your Work

  • Collect assignments or notes: Gather essays, homework, or notes from students. If working in a group or class, ask others to share their work for analysis.
  • Format for easy upload: Ensure all documents are in text-based formats like Word, PDF, or plain text (for easier processing by the chatbot).
  • Upload documents: Use a chatbot or AI tool that supports file uploading, and upload the gathered student work. Some chatbots may have a drag-and-drop interface or an upload button.
  • Work Together (Optional): Have small groups of students share their work together and identify who has expertise in what areas.

2. Analyze Trends in Your Work

  • Prompt the chatbot to analyze trends: Ask the chatbot to identify common themes, mistakes, or gaps in understanding across the uploaded documents. For example, you might say, “Analyze the uploaded files and identify common mistakes or areas where students need improvement.”
  • Receive feedback: The chatbot will provide feedback on what concepts or skills students struggle with most. It may generate a report with trends such as recurring errors, misunderstood concepts, or areas where students are excelling.
  • Note critical areas for improvement: Pay attention to the areas identified by the chatbot where most students are struggling, as these should guide your teaching or study focus.
  • Ask for resources (Optional): Ask the chatbot to create text-based tutorials, sample problems, or a curated list of resources targeting the areas where you are struggling.

3. Create Targeted Multiple-Choice Questions

  • Request MCQs based on identified gaps: Once trends are identified, ask the chatbot to create multiple-choice questions based on these gaps. For example, say, “Create 10 multiple-choice questions focused on the common mistakes identified in the analysis.” Another prompt might be, “You are a test-master. Create a multiple choice question based on the trends you just gave me. If I get the answer wrong, share the correct answer with an explanation. For each correct answer, make the test progressively harder.”
  • Specify difficulty levels: To make the questions more effective, ask the chatbot to generate questions at varying levels of difficulty. For instance, “Create 5 easy, 3 medium, and 2 difficult multiple-choice questions based on the identified trends.”
  • Include distractors: Ensure the chatbot creates questions with plausible distractors (wrong answers). This is crucial for effective retrieval practice. For instance, ask, “Ensure that the incorrect answers are similar to the common mistakes students make.”
  • Add open-ended questions: Over time, move from multiple-choice questions to ones that rely solely on recall (rather than recognition) and eventually to open-ended questions. Ask for specific feedback that goes beyond right/wrong.
4. Practice and Review Your Answers

  • Request explanations: For questions you got wrong, ask the chatbot for a detailed explanation. For example, “Explain why the correct answer is X for question 3.”
  • Identify weak areas: After each practice session, ask the chatbot to analyze your responses and suggest areas where you need more review. For instance, “Analyze my answers and tell me which areas I need to study more.”

5. Track Progress Over Time

  • Keep uploading new work: Continuously upload new student work, practice tests, or notes for trend analysis.
  • Request progress reports: Periodically ask the chatbot to summarize your progress, e.g., “Show me how my performance has improved in [subject/concept] over time.”
  • Adjust study plan: Use these progress reports to adjust your study strategy, focusing more on areas where improvement is slower.

Note that this works best for advanced courses at the high school and university levels.
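For readers comfortable with a little scripting, the same workflow can run through an API instead of a chat window. Here is a minimal sketch in Python, assuming the OpenAI Python client; the model name, file name, and prompts are placeholders you would swap for your own, and a chatbot’s built-in upload button accomplishes the same thing without any code.

```python
# A minimal sketch of the study-tool workflow above, using the OpenAI
# Python client (pip install openai). The model name, file path, and
# prompts are placeholders, not a recommendation of a specific tool.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: Gather your work (here, a plain-text file of notes or essays).
with open("my_notes.txt", encoding="utf-8") as f:
    student_work = f.read()

# Step 2: Ask the model to analyze trends, mistakes, and gaps.
analysis = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a study coach."},
        {
            "role": "user",
            "content": "Analyze the following student work and identify "
            "common mistakes or areas that need improvement:\n\n" + student_work,
        },
    ],
)
trends = analysis.choices[0].message.content
print("TRENDS:\n", trends)

# Step 3: Generate targeted multiple-choice questions from those gaps.
quiz = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Create 5 easy, 3 medium, and 2 difficult "
            "multiple-choice questions with plausible distractors, "
            "based on these trends:\n\n" + trends,
        },
    ],
)
print("QUIZ:\n", quiz.choices[0].message.content)
```

From here, steps 4 and 5 are just additional prompts: send your answers back and ask for explanations of the ones you missed, or save each session’s results so the model can summarize your progress over time.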

 

4. How is machine learning changing the learning domain?

The first few questions took a learner-centered approach by focusing on the learning tasks, the thinking process, and our understanding of the human brain. But we also need to recognize the power of technology in changing a domain of learning. For example, in a CTE class, we might ask, “How does a particular industry use machine learning and how can we prepare students for that reality?” In a science class, we might ask, “In what ways will AI transform our understanding of the world? How do we adjust the learning tasks as a result?”

We can think of this as a reciprocal relationship between AI and learning, where we start with the learning standards and use AI in a way that is driven by the standards. But we then ask, “How is AI transforming these standards?” and thus modify the standards themselves. This isn’t new. In our reading standards, we now include multimedia content creation and the ability to decode digital graphics. We integrate the internet into online research standards. It now seems obvious to infuse research standards with internet-based research, but at one time that was a very deliberate decision people made.

In terms of AI, I imagine we will see new definitions emerge around information literacy. For a deeper dive on this topic, check out the podcast interviews I did with Jennifer LaGarde and Alec Couros.

 

5. What are the ethical implications I should consider?

After thinking through the learning targets and the use of AI, we need to consider the policies that govern any kind of technology integration. Here in the U.S., we need to consider a few key policies:

  1. Family Educational Rights and Privacy Act (FERPA): FERPA protects the privacy of student education records. It grants parents rights to their children’s education records, which transfer to the student, or “eligible student,” at age 18 or upon entering a postsecondary institution at any age. When using AI tools that process student data, we, as educators, need to ensure these tools comply with FERPA. This has big implications for using AI for things like creating IEPs, giving feedback on student work, or writing letters of recommendation.
  2. Children’s Online Privacy Protection Act (COPPA): COPPA imposes requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age. We need to ensure that any AI tool used in class is COPPA compliant, especially when these tools collect data from students. It’s important that we pay close attention to the Terms of Service and the age limits of different AI apps.
  3. Children’s Internet Protection Act (CIPA): CIPA requires K-12 schools and libraries in the U.S. to use internet filters and implement policies to protect children from harmful online content as a condition of receiving federal funding. If AI tools are used to access internet resources or incorporate online research, teachers need to ensure that these tools do not bypass the school’s internet filters. AI applications should be vetted for their ability to filter and block access to inappropriate content.
  4. District Policies and Acceptable Use Policies (AUP): School districts often have their own set of policies regarding technology use, including acceptable use policies (AUPs) that outline what is considered appropriate use of school technology and internet access. Teachers should review their district’s AUP to understand limitations and guidelines for AI tool use. This review helps ensure that the integration of AI into teaching and learning aligns with district standards for ethical and responsible technology use.
  5. Americans with Disabilities Act (ADA) Compliance: Legislation such as the Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act requires that educational materials and technologies are accessible to all students, including those with disabilities. When selecting AI tools, teachers need to ensure these technologies are accessible to students with disabilities, supporting a range of learning styles and needs. AI tools should not create barriers to learning but should enhance accessibility and inclusivity.

The policy level is the baseline of compliance. But beyond policies like FERPA and COPPA, there are broader concerns about data security and privacy in the use of technology in education. We need to be cognizant of how these AI tools use and store data, and we need to give students some autonomy over what types of personal data they want to contribute to large language models. At a basic level, we need to consider the role of data privacy. However, we also need to delve deeper into the ethics of AI in a way that goes beyond policy compliance.

Consider the role of pro-social robots. We have already seen therapy chatbots that can help students who are dealing with mental health challenges. For example, Woebot engages users in conversations that encourage self-reflection and cognitive restructuring to help change a person’s mental scripts. Trained on the principles of Cognitive Behavioral Therapy (CBT), Woebot might be used to help with emotional regulation and targeted support for ADHD, anxiety, depression, OCD, and mood disorders. But what are the ethical implications of these chatbots? Are they a form of therapy or are they replacing the role of an empathetic human therapist? We need to engage in hard conversations about when and how we might use these prosocial bots.

Consider the use of AI in crafting chatbots where students have fictional conversations with historical figures. Many of us are okay with a historical figure in historical fiction. But given the ELIZA Effect (where people tend to treat chatbots as if they were actual people), it can feel almost icky to have young students engaging in conversations with the dead. Even if the chatbots are trained on primary source material, we need to engage in critical conversations before using these types of tools. Are you okay using an AI chatbot that mimics a historical figure?

Or consider the role of AI in key learning tasks like image generation. Are we essentially stealing from pre-existing art? Are we participating in the demise of creative industries? This is why it’s helpful to engage students in discussions about the nature of AI and how it works.

 

6. What is developmentally appropriate?

Most AI tools have been designed by grown-ups for grown-ups. So, as we use AI tools, we need to ask, “What does a child this age need, and has this tool been developed in a developmentally appropriate way?”

Consider the role of AI tutors. We need to know exactly how these tools adapt to the developmental needs of students. If you hired a human tutor, you would likely ask what experience that person had working with children of a certain age range. The same is true of AI tutors. As educators, we need to ask how the machine learning algorithms have been trained to engage with children of various age levels. We need to know what safeguards have been put in place to make sure that the content is age-appropriate.

In this respect, I’ve been encouraged by Khan Academy. In interviewing Salman Khan on my podcast, I was struck by the intentionality they had with issues of bias, human development, and aligning the AI to learning theory as they developed Khanmigo.

 

7. Are my students ready to use AI responsibly?

We cannot assume that students will automatically use AI ethically and wisely. As educators, we need to model what it looks like. This starts with providing clear guidelines for when and how we will use AI in an assignment. We might use something like this color-coded system:

  • Blue: AI-generated text
  • Green: AI-generated but revised by a human
  • Pink: Human-generated but edited by AI (think Grammarly or spell check)
  • Black: Human-generated (with no modifications)

It also helps to teach students about the nature of AI. We are in a cultural moment where AI is treated as magic. The most popular AI tool I see in schools right now is Magic School. We use magical sparkles to represent AI integration into existing platforms. But when we treat it as magical, we fail to grasp how machine learning works and we end up feeling disappointed by its limitations (think Gartner’s Hype Cycle). This is why it helps to teach students how an LLM works and what strengths and weaknesses we can expect from it as a tool.

We can also clarify the inherent bias and the potential for inaccuracies (often called “hallucinations”) in AI. We might need to teach students how to engage in prompt engineering through something like the FACTS Cycle.

Note that the goal here is to teach students how to slow down when using AI. We want them to be more deliberate and mindful of how AI works.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.


 

John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
