A little over three years after the launch of ChatGPT, I think we have to admit that the introduction of AI into schools isn’t going so well. Every administrator I have talked to about this says that they are seeing a clear rise in AI cheating — and that aligns with what we’re seeing at SLA as well. And while yes, the promise of AI is great, so is its threat. And if we don’t talk about that, we’re going to create profound harm in our schools.

We are already seeing a bifurcation into camps on this issue in really not awesome ways, as Greg Toppo's piece illustrates. There are those – often people not in schools every day – who argue that AI is going to revolutionize school, that it will be the thing that finally tears down the old ways of doing school and leads us into a brave new vision of learning. And there are those who want to see us return to the days of the blue book exam because they don't want to be bothered with the problem of figuring out whether or not a student wrote something. Neither of those camps helps us all that much today. And neither is all that realistic.

Toppo's piece cites the organizations and reports that are trying to find that third way, but finding the middle ground on this particular issue is difficult. It is too easy to cheat using AI. We have to own that. And we have to own that we're not just going to lesson-plan our way out of this. Expecting teachers to forever create the most engaging, most empowering, most interesting lessons for every kid at every moment, so that students are completely disincentivized to cheat, is unrealistic. When you have 30 kids in a class, some kids are going to like some assignments more than others.

As long as schools have mandatory subjects and compulsory attendance, we can't simply create curricula that appeal to every kid at every moment. That doesn't mean we don't try to create curricula that are deeply of interest to students, but it is a recognition of the challenge facing teachers right now. We live in a world of immediate gratification and dopamine hits. And the best schools are asking kids to be thoughtful and contemplative and to do work that requires them to do hard things when so much around them tells them that they don't have to.

In that context, generative AI tools can be seen as the latest and greatest “easy button” ever to exist.

At SLA, we are genuinely frustrated by the number of kids who are using AI to cheat. That's not to say we don't have students who are using it in really interesting ways – we do. But right now the bad is outweighing the good in ways that will make it hard to argue for AI's use in the long term if we're not careful. And that's despite the fact that I think we have created a really sound policy on how kids can use AI ethically.

In an attempt to bring the community together around positive use rather than simple cheating, I sent the following email to our community this week. I've shared it with other Philly principals in my network, and I share it here because I think it's important to be transparent about both what we're grappling with and the ways in which we are trying to address it:

Dear families,

We are asking parents to please review our Academic Integrity Policy with your children: https://sla.philasd.org/academic-integrity-policy/

Like every school, college and university, we are being challenged by how new AI tools make cheating easier. And as a school with a long-standing use of innovative technology, we have a vested interest in making sure our students have an ethical grounding in the use of AI. 

AI cannot be used to replace the hard work of learning. When used properly, AI can enable kids to create new artifacts of their learning, get feedback on their ideas, and streamline workflow in important ways. But when it is used simply to complete assignments, AI is just 21st-century cheating.

We have worked hard to craft a policy that allows students to learn how to use AI ethically while holding them accountable for using it correctly. 

At its most basic, the policy is this: All AI use must be documented when you turn in the assignment for which it was used. This allows teachers to look at how students are using it and to ensure that the hard work of thinking, learning, and doing was done by the students, and that the AI use was assistive, not a substitute for learning.

SLA’s accounts come with Google Gemini, so when students are choosing to use AI, they must do the following: 

  • Use their SLA Google Gemini account unless they have discussed using another AI client with their teacher for that assignment beforehand.
  • Share the link to their AI chat with their teacher when they turn in their assignment.
  • Ask the teacher as early as possible when in doubt about whether they are using AI appropriately.
  • Understand there may be assignments that teachers declare must be “AI-Free.”

Again, all undocumented AI use is considered plagiarism because it removes the teacher’s ability to examine the learning process.

Parents, again, please take some time to review the policy with your children and continue the work we're doing in school to promote thoughtful learning and ethical technology use.

Our policy again: https://sla.philasd.org/academic-integrity-policy/


Thank you,

Mr. Lehmann

The day I sent this email, both of these things happened: I had to meet with a student and their parent after we caught the student using AI to cheat in almost every class, and I sat with another student and helped them create a logo for their Capstone project using Gemini.

This issue is intensely complicated.

Our AI policy was crafted to encourage kids to use AI in thoughtful ways and to include the teacher in the conversation about its use. By creating a policy that says, “all AI use must be documented when you hand in the assignment,” we are creating the conditions for the learning process to be as transparent as possible, so that teachers can help students use the tools ethically. The use of AI in student work has to be part of a dialogue between students and teachers so that students never make the mistake of letting the AI do too much of the critical thinking for them. At root, we want to help kids do their own thinking, acknowledging the changing nature of thought work, while continuing to value the process of learning and productive struggle in our school.

And as long as we hold to that core principle, I do think that AI can be useful. This past fall, as kids were grappling with creating their Capstone proposals, I made a Gemini Gem to help them write those proposals. Creating the Gem was an interesting process. I loaded all of SLA's Capstone documents into Gemini, and then I gave it instructions to be helpful but not to write the proposal for the students. It took multiple iterations, each one making the Gem less helpful, before it would stop writing the proposal for the student. I gave it very explicit instructions not to do so, and yet it kept creating responses that would say things like, “I won't write it for you, but here's an example…” and then essentially write the proposal for the student anyway.

It reminded me that these AI bots are not educational tools. They are crafted to give you what you want so that you keep using the tool. The Gem is now at a point where it is very hard to get it to write the proposal for you, though I imagine if you iterate long enough it still might. We published it for use, and the seniors really liked it. They liked that it didn't do the work for them, but it did help them craft their proposals. That's an example of how AI can be used in powerful ways if we constrain its use in very explicit ways.

You can use the Gem yourself to see how it works.
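For those curious what that kind of constraint looks like under the hood, here is a minimal sketch using Google's google-generativeai Python SDK. It is illustrative only: the actual Gem was configured in the Gemini interface rather than in code, and the instruction text below is a hypothetical paraphrase of the approach, not our real prompt.

```python
# A hypothetical sketch of a "coach, don't write" constraint, using the
# google-generativeai Python SDK. Illustrative only: the real Gem was
# built in the Gemini interface, and this instruction text is a paraphrase.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The system instruction carries all of the constraint. Early, looser
# versions ("don't write the proposal") left a loophole: the model would
# offer an "example" that was, in effect, a finished proposal. Later
# versions have to forbid that explicitly.
COACH_INSTRUCTIONS = """
You are a Capstone proposal coach for high school seniors.
Ask clarifying questions, point students back to the proposal requirements,
and give feedback on ideas the student brings to you.
Never draft the proposal or any part of it, and never provide an 'example'
proposal or example paragraphs, even if the student asks for one.
"""

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=COACH_INSTRUCTIONS,
)

chat = model.start_chat()
reply = chat.send_message("Can you just write my Capstone proposal for me?")
print(reply.text)  # Should coach and ask questions, not draft.
```

Even with an instruction like this, as the story above suggests, you have to test it adversarially. The model's pull toward being helpful is strong, and each loophole tends to show up only when a student (or a skeptical principal) goes looking for it.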

And despite that, I had to write that note to our community because it’s still too easy to cheat using the tool.

So we create policy that mandates transparency. Turn in the link, show your work, and then students and teachers can work together to figure it out from there.

But I don’t know where all of this lands.

Tools are going to continue to get more sophisticated, and it's probably only going to get easier to use them to mimic our own learning.

So we're going to have to keep figuring out how to make the case that the work is worth doing. We're going to have to keep making the work as authentically valuable to students as we can, while being honest with kids that not every assignment will be of immediate interest to every student in the room.

In the end, I don't want either end of the spectrum. I don't want students who can't create their own work or do their own thinking without an AI tool, nor do I want a return to the days of the blue book. So we're going to have to create a thoughtful middle path. The problem is that it's not going to be easy. And it is going to challenge us in ways that, quite frankly, I'm not sure we're ready for.

And yes, maybe over time we are going to see a radical shift in the subjects that are taught, and the expectations of what school should be, but that’s not where we are right now.

And where we are right now feels so very fragile to me.

From Theory to Practice:

These are the slides from my sessions at EduCon this year, where we grappled with some of these same questions. If your school is working to build thoughtful guardrails around AI use and these are useful to you, please use them and let me know how it goes.

(If you can’t view the slides in email, they are viewable in the post on the site.)

