I’m reading The AI Con right now, and it reminded me of a story I’d heard a long time ago about how Joseph Weizenbaum – inventor of Eliza, one of the first chatbots – grew horrified by the way people who knew it was just a simple program still projected human intelligence onto it. Years later, his book Computer Power and Human Reason became a seminal work in considering the ethics of artificial intelligence.
A fascinating piece of the AI puzzle is our own capacity as humans for empathy. I will argue that empathy is one of the most important human traits we have. It is what allows us to care about the world, about people, about building something better for everyone. However, when it comes to AI, if we are not careful, our ability to empathize can create the conditions by which we anthropomorphize artificial intelligence. Weizenbaum wrote the code that allowed Eliza to mimic the way a therapist takes the statements a patient makes and turns them into the next question. He then saw his own secretary treat Eliza like a real human therapist, and it terrified him. And anyone who has used AI tools to create a piece of writing or a piece of art has seen how “human” these programs can seem. When I use Claude as a “writing partner,” I admit – I talk to it, and I laughed at myself the first time I said, “Thank you” to an AI.
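To appreciate how little machinery it takes to produce that illusion, here is a toy sketch of the pattern-and-reflect trick Eliza relied on. To be clear, this is my own illustrative example in Python, not Weizenbaum’s actual program or script; the rules and wording are invented for the demonstration.

```python
import re

# A handful of invented rules for illustration; the real Eliza "DOCTOR" script had many more.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Swap first-person words for second-person so the echo reads like a therapist's question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the default nudge when nothing matches

print(respond("I feel stuck on this essay"))
# -> Why do you feel stuck on this essay?
```

There is no understanding anywhere in that loop, just string matching and substitution, and yet something very much like it was enough to fool people who knew better.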
But therein lies the problem. Our very human ability — and need — to empathize and anthropomorphize can cause us to overestimate the good that AI can do for us. When we say that AI can be a thought partner for us or our students, we have to remember, computers don’t think, not the way we do. And that really matters.
One of the big distinctions that Weizenbaum writes about is the difference between choosing and deciding. Deciding, in his definition, is a computational problem and thus can be solved by a computer. Choosing, however, is a moral judgement, and moral judgements can never be made by a program that has no empathy and no wisdom. What Generative AI does is mimic morality and wisdom, because the programs synthesize the web. But those choices are personal, and while, yes, I understand that reading how a synthesized summary of the entire web would answer my question can be really seductive, figuring out what I want to say and how I want to say it is still pretty important. So when there is one objective right answer to be had, there is a decision to be made. But when we have to figure out what is good, what is best, what we think, then we are making choices, and choices involve empathy and wisdom, and those are profoundly human endeavors.
I understand how we can use these tools as a way to get off the blank page, how we can use these tools to help us figure out what we want to say, even how we can use these tools to clarify our ideas for readers. On a personal note, writing has always been a struggle for me. And the idea that I can use a tool that helps me get out of my head and onto the page is seductive. Right now, as I create this blog entry, I am using a form of AI (not generative) because I am using speech-to-text. Talking out these ideas is easier for me than trying to write them, and talking them out and then seeing them on the written page (screen) helps me learn what I think and what I believe. So whenever I get stuck on an entry, I turn on dictation and talk out my ideas and then edit them into something that is – hopefully – readable. And so yes, I understand how adaptive tools allow more people to access the world of ideas – their own ideas and others’.
And yet.
I can’t shake the nagging feeling in my head that much of what we get when we use AI as a thought partner is the CliffsNotes version of what we want to know about. And interestingly, AI doesn’t cite its sources, so when we ask it to help us think through a problem, we don’t know where its answers come from. And part of the struggle of becoming learned is knowing who thought what you thought before you. Some of what AI does allows us to live in the worst parts of the modern world, where we think every idea is a new idea and we become deeply ahistorical in our thinking. And I don’t want that for our students — it’s why I’ve never liked the textbook version of history, and it’s what forever rings false for me about AI answers to questions. Our thoughts, our ideas, should be almost “controversial,” and rarely does the homogenized summary provide us with a creative synthesis that leads us somewhere new.
When we ask ourselves and our students to write about something, I want them to know where their ideas come from. When an educator talks or writes about productive struggle, I don’t know if they’ve read David Perkins or Vygotsky or even Steven Johnson, but I hope they have. I want our ideas to come from more than just the CliffsNotes. I want our students to be big thinkers. I want them to have struggled to figure out what they think and why they think it, and I want them to have the skills to put words to paper and write what they think.
In the never-ending gChat of Zac Chase, Diana Laufenberg, Marcie Hull and me, Zac made a great point — “It [AI] can save me time if the thing I’m using it for is something I know how to do.” In a calculus class, we don’t expect the students to do the computational math by hand, because we trust that they have the a priori knowledge of how to do computational math, and as such, the calculator is the right tool for the job. When I write, I appreciate that the AI of spell-check can find my typos and even suggest where my writing is clunky, but I know how to write. Where AI allows us to do what we already know how to do more efficiently, it’s a useful tool, but that’s not the same as learning.
As educators, we are going to have to grapple with the very hard question that Weizenbaum framed 50 years ago. When AI can be used to help us “decide” by doing the equivalent of the computational work of writing, it is a powerful tool for us and our students to use. But we can never let it substitute for the “choosing” of what we believe and what we think. Because it is in the choosing that we figure out who we are, what we believe, and what we truly know.
We never want to outsource that to AI, because that struggle is where wisdom is gained. And the thing is, it’s a blurry line. We’re going to make mistakes. We are going to overuse the tools, and we’re going to underuse the tools. For me, right now, I think I want to err on the side of under-using the tools. For the most part, I want kids to figure out what they think the old-fashioned way. I want them to read, and I want them to talk to other people about the ideas in their heads. And then I want them to write or podcast or design, and I want them to make choices when they do. And whatever mistakes they make, I want them to be their own.
Remember how a million years ago people said computers should either do things better or do better things? The same goes for AI, and technology in general. You make a strong case for also keeping at the forefront the human element, the growth of student thinking, analysis, creativity.
There was a vintage method we used about forty years ago called I-Search. It required the students to choose a topic, and instead of doing traditional research they were charged with gathering all the information and resources hands-on themselves.
A rubric was developed through student-teacher conferences. The student would decide in these conferences which methods they’d use: interviews, surveys, community events, experiments, living history given through discussions with elders, etc.
This method motivated the student to be vested in the project: they would be the director, producer, cast selector, editor, etc. It could also include the involvement of family members and the school and neighborhood community, which could enhance the experience.
This method worked for many students but not for all. It was worthwhile because students were allowed the freedom to express themselves, discover their talents and further develop communication skills.
Your last two posts focus on maintaining the productive struggle of learning, and you describe it in the ways we always have as educators. And that’s good, and a worthy focus of every school. But I wonder if the productive struggle can evolve and become something new that includes all of the ways you describe it, but also involves a more expansive view of what is available to support learning, AI included. It’s my perspective that ignoring the potential uses of AI in the learning process diminishes the preparation of students. Why shouldn’t they ask the same questions you ask and have the opportunity to develop their own insights and perspectives about its role in their lives? Focusing on the productive struggle of traditional school seems to negate this opportunity. It makes sense to me that they should be able to write about their struggle with AI in the same way you have done. That means having the experience of using AI, under the tutelage of professional educators. I just think the productive struggle of learning now has additional components to negotiate, not ignore. Thanks for your two posts.