Today’s entry is informed by this story about Adam Raine’s suicide in the NY Times. It is time – actually well past time – to discuss that there is a multi-trillion dollar industry designed to keep us using apps no matter what those apps may mean for our individual and collective mental health. (And by the way, if you want to read two wonderful novels that also deal with this issue, let me recommend An Absolutely Remarkable Thing and A Beautifully Foolish Endeavor by Hank Green.)
You don’t have to agree with everything Jonathan Haidt wrote in The Anxious Generation to know that the way kids are growing up today isn’t awesome. It’s been more than two decades since Facebook launched, and we’ve learned a lot about the way we interact with technology since then. And for me, what we’ve learned about the attention economy and how these social media sites are designed to keep us using their apps is the most important lesson for helping kids navigate the digital world. (By the way — click that link when you finish reading this entry. It’s a fantastic article to frame social media for high school students.)
And it seems pretty clear to me that the AI companies have learned the most basic business lesson from the social media apps — if you want to monetize these apps, you have to keep people using them. And that was bad enough when “the algorithm” was putting content in front of us that incentivized us to stay in the app for hours on end. It’s even worse when the algorithms create content that is personalized to our perceived needs in order to keep us using the app. The potential for abuse is exponentially greater.
This is obvious when it comes to apps like Character.ai — but as the article about Adam Raine’s suicide suggests, it’s every bit as true for more general-use AI apps like ChatGPT. When people — and here I am thinking most about our students who are at risk for anxiety, depression and self-harm — are using these apps as would-be friends, life coaches or therapists, the potential for real harm is astronomical.
And yes, all of these companies are talking about how they are putting in safeguards to prevent abuse, but there is more and more evidence that their apps are giving kids horrible advice about issues of real and profound concern.
And I’m not putting any stock in the hope that the folks running these companies care about kids in any meaningful way; almost nothing I have seen in the last twenty years of technology development suggests they do. If anything, it’s gotten much, much worse as the developers get better and better at keeping us hooked on using the apps.
Maybe… maybe the best we can hope for is that bad PR will force them into putting some obvious guardrails in place. But we need to be thinking so much more deeply than that.
An AI bot that is programmed not to tell a kid how to commit suicide isn’t going to stop this problem. Yes, it is a necessary thing, and all of these companies need to transparently publish exactly how they are working to prevent their users from engaging in self-harm as a result of their interactions with the app, but it isn’t nearly enough.
Because off the top of my head, I can imagine a half-dozen questions that, if a kid used one of them to start a dialogue with an AI bot, could lead to further despair.
- Does life get better?
- Is there any meaning to life?
- Why doesn’t anyone love me?
- Why don’t I have any friends?
- Why is life so hard?
- When will I stop being in pain?
These questions are hard enough to answer when teens are asking them, and anyone who has ever worked with teenagers knows it. Those conversations are fraught with peril, and even innocuous answers can have devastating effects if we’re not incredibly careful.
At SLA – much like at so many schools I know – we workshop what conversations we can handle on our own, and what conversations require more than one adult in the room for safety. And we work hard to make sure that, if we are ever hearing things from students that suggest self-harm, we involve parents, guardians and sometimes other social services immediately. Everything we do is to avoid putting kids into a bunker mentality with us, where a student could over-invest in our words and in our relationships with them to the point where it could do them harm. As a school, we have no investment in keeping the relationship between a student and the school an exclusive one. Therapists are even better trained to know when they must actively involve more people and more services to help a child.
The apps are intentionally and actively designed to do the opposite. They are designed to keep you using the app. And as such, we do not want any kind of AI chat empathy bot giving kids answers to these questions ever.
To suggest anything else is to put kids in harm’s way.
So what do we do?
The answer isn’t just hoping AI companies put in enough guardrails — because they won’t. And the answer probably isn’t simply banning the technology in schools, because putting our heads in the sand has never worked when it comes to new technologies, and if we aren’t talking – cautiously and carefully – about these tools in schools, we lose the opportunity to help kids learn how to use them well. We need positive steps that we can take.
First, we must resist anthropomorphizing these AI systems so that no kid is even tempted to think that there is a thinking — or more importantly, a caring — brain behind the responses we get. This is hard to do. The AI clients are designed to make you want to anthropomorphize them – the bots even give you better answers when you are polite with your requests. And why? Because it makes us want to use the software more. As educators, it is essential we do not use anthropomorphic language when referring to this software. I challenge everyone who reads this to actively work at this, and if it’s harder than you thought, ask yourself what the implications of that are.
Second, we need AI literacy in schools – and not AI literacy that overhypes the technology and skitters away from the deep ethical questions that AI raises. All introduction of AI in schools must include a discussion of the ethical, moral and societal implications of AI use. Societal adoption of AI has a profound impact on the world kids will inherit, and we owe it to them to introduce how these systems could fundamentally alter the way we live, work, and learn. For example, FullScale published Valley New Schools’ curricular guide for middle and high schools to help students develop a nuanced view of AI — and it is a good place to start. Figuring out AI ethics and AI literacy is going to fall to every teacher, and it will impact every subject we teach. Whether we like it or not, we all have a role — and responsibility — in helping students examine the ethical questions raised by AI if we want our students to become healthy citizens of our world.
Finally, and I can’t believe I have to say this, “AI therapist” software should not be bought and used in schools. With ever-shrinking school budgets, the mental health workforce shortage, and the ways in which powerful people are treating widespread adoption of AI in schools as a fait accompli, there will be a greater and greater push for schools to use AI mental health platforms. There’s no way we should be experimenting on kids who have mental health needs with these programs.
I don’t pretend to know all the answers here. I don’t even know all the questions we should be asking. But I know that we must turn a deeply critical lens to how students are interacting with these bots, and how we, as educators, are involving ourselves in the discussion of their use. It’s not just the academics we have to concern ourselves with; in an age of exploding mental health concerns among youth, we must also help our students critically understand how these apps are intentionally designed to keep us using them, and the very real risks that creates for our mental health and well-being.