AI in Education: the Good, the Bad, and What It Means for You
- horizonshiftlab
 - Oct 16
 - 16 min read
 

The rise of artificial intelligence has sparked a heated debate about AI's role in education. While some view AI as a powerful tool to enhance learning, others fear it will hinder critical thinking and creativity. To explore this complex issue, we spoke with Sarah Levine, a teacher and academic director at the Stanford University Graduate School of Education's Center for the Support of Excellence in Teaching (CSET). Dr. Levine, who also researches the role of AI in students' writing, offers a nuanced perspective on the good, the bad, and the unknown of AI in the classroom.
AI as a Tool for Augmentation, Not Automation
Dr. Levine and her colleagues have been studying how high school students use large language models like ChatGPT for their writing assignments. Their research indicates that these bots can be surprisingly beneficial.
Rethinking Assignments: The ability of a bot to write a standard essay in seconds forces educators to re-evaluate the value of the assignments they give. Instead of relying on traditional essays, teachers can design more human-centered and personalized assignments that a bot can't replicate. This encourages a shift from rote learning to critical, human-centric thinking.
Personalized Feedback: AI can automate the tedious parts of a teacher's job, such as providing repetitive feedback on common errors in essays. This allows teachers to offload tasks and focus on more impactful, human interactions, giving them back valuable time and energy.
Encouraging Creativity: Contrary to the belief that AI stifles creativity, it can actually help students and teachers think better. A large language model can generate five different versions of the same essay in 30 seconds, providing contrasting examples that help students understand differences in style, voice, and logic.
The Critical Human Element: Where AI Falls Short
Despite the potential benefits, Dr. Levine also raises serious concerns about the widespread adoption of AI in education, emphasizing that AI is built on human data, and therefore, reflects the biases within that data.
The Risk of "Generic" Learning: There's a risk that without careful design, AI will simply replicate a generic, "middle-class, Western-centered" way of learning, rather than offering a truly personalized approach. AI alone cannot help students find the personal relevance in their work, which is a fundamentally human task for a teacher.
A "Cheating" Epidemic? The fear of students using AI to cheat is a major concern for teachers. However, data from surveys shows that there hasn't been a significant increase in cheating since the introduction of AI. The more critical question, according to Dr. Levine, is not about the technology, but the underlying human motivations for cheating. Are students cheating because they feel the assignment is irrelevant, or they lack the time or confidence?.
The Problem of Misinformation: AI makes it even more crucial for schools to teach "AI literacy". This includes teaching students how to be skeptical, check sources, and understand that AI models can be wrong. The danger is that students, who already tend to be poor at source-checking, will treat AI like a search engine and accept its output as fact.
The Path Forward for Professionals
For professionals in a mid-career transition or contemplating a career shift, the questions posed by AI in education are highly relevant. The focus on what and why we learn is essential for navigating a future where skills like critical thinking, problem-solving, and emotional intelligence will be paramount. As AI automates more tasks, the uniquely human skills—creativity, communication, and empathy—become more valuable than ever. The role of the human teacher will not disappear but will instead shift to guiding students and fostering a love for learning that no bot can replicate. It’s not just about learning with AI, but learning how to be more human in a world shaped by it.
Selected Links:
https://hai.stanford.edu/events/human-centered-ai-for-a-thriving-learning-ecosystem
"AI Will Transform Teaching and Learning—Let's Get It Right." Stanford HAI, Stanford University, 11 May 2023, hai.stanford.edu/news/ai-will-transform-teaching-and-learning-lets-get-it-right.
Episode Transcript:
Raakhee: (00:00)
Hello and welcome to Signal Shift with me, Raakhee, and I'm joined by a wonderful guest today, Sarah Levine. Welcome, Sarah. Thank you for being here.
Sarah Levine: (00:09)
Thanks, I'm excited.
Raakhee: (00:11)
Absolutely, and we are recording on a Sunday, so I am especially grateful for your time on a weekend. We'll try to make it as snappy as possible, but it should be a really impactful conversation. As you'll notice, Sue isn't here today and will of course be missed on this episode. A little bit about Sarah: Sarah works at Stanford University's Graduate School of Education as the academic director for the Center for the Support of Excellence in Teaching.
Sarah has her PhD in learning sciences from Northwestern University. And before working at the university level, she worked for many years as an English teacher at a Chicago public high school. Now she helps teachers develop their skills for teaching, specifically reading and writing. And in the last few years, she has been researching the role of AI in students' writing. She sees reasons for hope and concern when it comes to AI and education.
We have been exploring AI in the form of education and lifelong learning and re-skilling, even in our 40s, and all these big questions around learning that I think society is grappling with. And AI and its use is so new to all of us, and we're so excited to explore that. I came to learn of Sarah's work through the Institute for Human-Centered AI at Stanford University and their AI Plus Education Summit.
The summit showcases the latest research and thinking on AI and education. And so today we are going to be talking about AI in education and teaching: the good, the bad, the scary, and the very unknown. And there's a lot of unknown. Simply, how is AI reshaping the classroom and the very nature of how we learn, and how is it making us reconsider what and why we learn? So yeah, Sarah, thank you again for being here, and I'm excited to get into this with you today. But to start off with, can you give us a sense of the work that you do?
Sarah Levine: (02:11)
Sure. I'm a teacher first, and I work at Stanford with pre-service teachers, those who will become English teachers; they're our next generation of English teachers. And I research... right now, everybody in my field has been touched by some aspect of AI in education. And in particular, I think when a lot of people say AI, they're thinking about large language models. There are obviously many, many characters in the AI theater, but I think the one that's really hit teachers has to do with large language models and being able to talk to bots and have bots look at your work.
So that's where I've been doing some research with some colleagues like Sarah Beck over at New York University. And we are looking at ways that students could use AI, how they are using AI, and what teachers can do when it comes to the teaching of writing in the classroom. So far, we've been looking at, for example, whether students are more likely to use AI for, let's say, planning a piece of writing. If you think about a typical piece of writing like an argument essay or a piece of literary analysis, which, if you went to high school in the US, you've done: are students more likely to use a chatbot like ChatGPT to plan an essay? Or how likely are they to be just lifting, you know, paragraphs from a chatbot? How likely are they to put the prompt for their essay into ChatGPT and have ChatGPT spit out an entire essay? How likely are they to critique what's coming out of the bot, and so on? And we do really small studies, because we're really interested in how students are thinking about what they're reading, what the bot is giving them. And so far, part of what we've found is that something like ChatGPT can be really useful for students and teachers
because it offers many alternative ways to write about the same thing. Many students think there's just one way to write about this: I just need to answer the question and move on. I'm not thinking about my style, I'm not thinking about my voice, I just need to finish this assignment. Something like ChatGPT can give you in 30 seconds
five versions of the same essay. A teacher can use those five versions to help students understand very quickly the differences in style, the differences in voice, the differences in logic, and teachers on their own just don't have the bandwidth to personalize and create multiple contrasting cases of something that a kid is reading or writing. So there are
opportunities for kids to see these contrasting cases and reflect on their own style, their own voice.
Raakhee: (05:43)
The belief is that it's gonna hinder our creativity, it's gonna hinder our ability to think. And so I think that's a really interesting example to showcase that actually it can help us think better.
Speaking a little bit more then about the benefits, and maybe what you're seeing, both in the work you're doing and in the sort of greater atmosphere of learning that's happening right now: what do you think are some of the benefits and the positive use cases?
Sarah Levine: (06:09)
I'm feeling pretty gloomy about a lot of AI right now, so I will answer your question with that caveat. And maybe I'll slip in, before we start: I'm very worried about the ecological damage, the amount of power that AI requires.
I am worried about the mountain of new offerings, new chatbots, that people are trying to sell schools. So with those two things in mind, rampant profit motive and ecological harm, let's go into the classroom and talk about what could be helpful.
I'll stay in, let's say, the humanities classroom for now. And again, if you went to secondary school, you have a sense of being given a book to read and being asked to respond to it. So here are some things that I think could be benefits of the emergence of AI.
The first is because a chat bot can now write a standard essay in 30 seconds, I believe that this is an opportunity for teachers and schools of education to really rethink the kinds of assignments we are asking our students to do. To the degree that a bot can now
do the work that teachers ask students to do. And that doesn't mean we shouldn't do it. You should learn how to add. You should learn the principles of addition, even though we have calculators. But to the degree that a bot can now do your basic assignments in English.
We need to rethink: all right, how valuable is it for me to assign an essay of literary analysis to my student? How much has that ever served them? Which parts have served them? How much have I been assigning this essay simply because this essay was assigned to me, and because we haven't really figured out other, richer ways to engage with literature? Or how much am I assigning this because there's a test at the end of the year that's really kind of forcing me to assign this? The emergence of a robot that can do what we're asking kids to do should force us all to rethink the value of the assignments we're currently giving. It's a very human value: I need to reflect on what it is I'm asking students to do, whether it's good for them, and what the alternatives might be. Number two, there are things that teachers can offload without losing pedagogical value. So for example, any teacher you talk to who responds to a class set of essays will tell you that they wish they had a little stamp. In fact, some teachers do have little stamps with the comments that you'd typically make, you know: your thesis seems unclear, or remember, you need to support your claim with evidence. And because we write that over and over, we could instead, as teachers, note the trends in a set of essays, design lessons to address those trends, and then pass these essays off to a bot to offer specified feedback, feedback that we, the teachers, ask the bot to give. And then
each kid can have personalized feedback, and I, the teacher, might actually get eight hours of sleep. So there are certainly values like that: things that AI can replicate that a teacher is doing, that are really all about one-on-one feedback, that no secondary teacher, for example, has the time to offer.
That can be useful. And then there are just a ton of, I think, really exciting new developments in AI that involve virtual reality, that involve being able to explore, for example, a forest without being in the forest, to be able to understand, you know, cycles in science. There are all sorts of things that allow kids to step out of the worlds that they're in, which is part of what education should do.
Raakhee: (11:04)
I think it seems like, you know, what's beneficial about all of these technologies, from the more simple ones like gamification that we've been using for a while, right through to now virtual reality and what that can do, is that it's going to help learners and students at different skill levels, and with different sociological backgrounds and different economic levels, kind of even the playing field, in the sense that now there are many technologies that are going to assist learners to learn better and learn faster.
Sarah Levine: (11:35)
I don't know. I don't want to rush to assume that that's where we're headed. Here are a couple of reasons why.
First of all, whatever a bot is doing, it's doing based on all of the data that it's gathering from all of its sources, and all that data is human. And that data understands learning in kind of a generic way, and teaching in a generic way. And as you know, bots are responding to all of the bias that's baked into all of the data that the bots are drawing on. So it's not necessarily true that there's going to be an explosion in different ways of learning, or that I can really learn differently now, learn my way, now that I have this bot. If that is to happen, it needs to be designed. And each kid needs their own way of communicating with these bots. So sure, it's possible, in anywhere from one day to ten years, that I've got a personalized bot who knows how to talk to me because I've told it, and who can speak to me in a way I understand and help me understand what I'm reading and writing. Yes, that's totally possible. Or it's possible that I get a generic, probably, you know, middle-class, Western-centered bot that assumes I'm going to learn in a middle-class, Western-centered way. And it's not that much better for me than it was before.
I think some people might feel that the emergence of AI will make learning personalized and will help each student write the best possible essay about The Great Gatsby that they can. But unless we explore why it's valuable to write an essay about The Great Gatsby to begin with, and unless we explore why a student would turn to ChatGPT and say, write me this essay because I don't want to, then we aren't moving. We aren't evolving in education.
So what we don't want is AI to take over just where we left off to teach the exact same thing in the exact same way, but with more one-on-one attention.
Instead, we have to think about making our assignments more human, more personal, and we need to think about ways to help students see reasons that what we are doing in school is valuable to them. And I can tell you from experience that no bot can do that. That's going to be a human thing.
Raakhee: (14:47)
I think what's happening is it's "we have to reshape education, dot dot dot." It's not just about the technology, right? It's exactly that question of: what are we learning? Why are we learning it?
What is it going to do for us? And who are we becoming in this journey?
Naturally, what's coming up when you think about all of this is: what happens to human teachers, or what is this journey like for them?
Sarah Levine: (15:11)
Yeah, yeah, it's very concerning. So, you know, one way that all of this could go is: wealthy schools recognize the value of AI and understand that without human interaction, there is no learning. Learning is social. We understand that. I don't know yet how social we can be with a bot. But we need our classmates and we need our teachers. We need our guides. So one way that all of this could go, and that some teachers are worried about, is wealthy schools buy up a lot of cool AI to use, and they've got all of their humans. So now we have this beautiful melange of human guides, teachers, and students working together with technology. And in under-resourced schools and poorer schools, we just get an AI curriculum, because we can't afford both things and we're feeling the push to use AI. And we have fewer and fewer teachers and more screen time. So that's one set of concerns for teachers.
I think the concern that's probably most present right now in teachers' minds is cheating: AI as a cheating tool.
I have two thoughts about that. Number one, there's always been cheating. There always will be cheating. The data that we're seeing out of large surveys, one done by a couple of colleagues of mine at Stanford through Challenge Success, which is a program there that is kind of interested in how students are dealing with the pressures of school, shows that there's really not that much more cheating with AI than there was before AI. So there always has been cheating, there always will be cheating. The question that we have to ask is: why are students cheating?
How valuable is this assignment to them? How can I, with their partnership, make sure that what I'm asking them to learn feels relevant to them? That's the question that AI allows us to re-ask ourselves.
And I think the worry underneath that is: am I helping my students understand the relevance to their teenage lives of this stuff that I'm asking them to do? And do I need to change what I'm asking them to do? And can I, within the constraints of standardized tests and state mandates and my principal's concerns, do that? So that's a big one. I think the emergence of AI is also having teachers rethink just the amount of work that they do every day, as one human to, in a high school, 150 or 170 students, and whether there are ways that they can harness new AI models to help them. That can be exciting. But I think it does highlight just the enormous amount of work that we ask our teachers to do for little pay and little appreciation. And then I think the larger question: what does it mean to be a human learner? Where is the humanity in my classroom, in my lessons, in my interactions with students? And how can I make sure that they are not moving towards a world where they're just interacting with bots and with screens?
Raakhee: (19:12)
Learning has to be social. It just has to, right? And not just the learning of facts or data or information, but the learning of how we become in life and who we become as a society. And we're seeing so many challenges with that in the world we're living in today.
I think one specific one that is a little scary for everyone right now is things like AI hallucinations, you know, people getting far too close to these virtual chatbots and challenges from those relationships. So just, you know, young people again being caught up now with technology that might fool them into thinking this is a human that I'm communicating with, and the impacts of that. But also things like deepfakes, and just living in a world where reality and online, you know, that line is far too blurred.
Sarah Levine: (20:07)
Yeah, yeah, yeah, yeah, yeah. It's enough to make you wanna crawl into a hole. Yes, of course. I think for educators, things like misinformation, disinformation, deepfakes, those are all things that we now have a responsibility to teach about. I would say one of our newest and biggest responsibilities. It's always been about teaching what people call critical thinking, and there are, you know, lots and lots of definitions for what that is. But in this case now, with the emergence of AI, it's about skepticism. It's about source checking. And we're already pretty bad, you know, as a society, at least in the US, about checking our sources. And we're much more excited about, I think, the human enterprise of getting excited about things that we've read or heard and having emotions about them. So I would say a couple of things. One little concern is that schools may feel the need, because AI is here, to use AI.
They don't want to be left behind. So while in 2022, when OpenAI introduced ChatGPT, the immediate response at many schools was we've got to ban this thing, now schools are going the other way: we need professional development about it, we need our students to be using it, we need our teachers to make sure they're using it so that our students have experience with it. So now there's an embrace of, and a kind of urgency to, using AI, and I think that might be a little bit of a mistake. I think we don't need to be rushing. What we should be doing is teaching our students how to be skeptical, and insisting on, inculcating in them, the habits of mind of skepticism and source checking and restraint in terms of jumping to emotional conclusions.
Right now, I think what kids tend to do is treat AI as they treat Google, as a search engine: take for granted that the output they're receiving is true, and use it to support their ideas. And that, of course, is a terrible mistake. And I think it's AI literacy that we need to be teaching right now: how to understand how AI works, let's say how large language models work, how often they can be wrong, even in little ways, and how to be skeptical about what we're seeing that AI is creating.
Raakhee: (23:16)
Last week we spoke about some of the most important skills we're going to need, and we got a chance to dig into the data around numeracy, literacy, and problem solving. And to your point, you know, America is not doing that great in terms of those numbers, and we've seen a decrease in those skills. So there are definitely some questions to be asked about whether technology is helping us here or, you know, hurting us.
Sarah Levine: (23:43)
I think maybe one useful thing to think about is:
whenever we see humans engaging with AI in ways that make us nervous or uncomfortable, we should be thinking not about the tech, but about the underlying motivations to use the tech. That is, I'll take it back to the classroom: if a student goes to AI and has AI write their paper and submit their paper, what was the motivation for doing that? Could it be, and these are all human motivations, I don't care? In which case now we teachers have to address that. Could it be I'm working two jobs and I don't have time? We as a society have to address that. Could it be I don't think I'm capable, I know that whenever I write it's stupid? Now, as a person to a person, a community to a student, we need to address that. So not the tech, but the human motivations for using the tech. Those are the things educators need to be looking at. And those are very human things.
Raakhee: (24:57)
Sarah, thank you so, so much. I appreciate all your insights today. This was such a great conversation. These are big questions, and I don't know if we had answers, but it's such an interesting exploration. So I really appreciate it, as well as your time on a weekend.
For everyone watching, thank you so much for being here. We appreciate it, and we will see you again next week. Bye for now.