It's officially mid-August, which in my mind is peak summer, but for college students across the country, it's time to start thinking about returning to campus. And for college professors, it's high time to figure out where you stand on artificial intelligence in the classroom. ♪
Students using AI to write their papers, lack of clarity in some classes, both mine and others, about what the official university policy is, what a department policy is, what an individual classroom policy is around the use of ChatGPT to do student academic work. Sounds kind of stressful, to be honest. Does not feel very summery. And so going into this summer teaching my own class, I decided to write my own AI policy prohibiting AI in my classroom.
How professors are readying themselves for the robots this semester on Today Explained. Hey, everybody. I'm Ashley C. Ford, and I'm the host of Into the Mix, a Ben & Jerry's podcast about joy and justice produced with Vox Creative. And in our new miniseries, we're talking about voter fraud.
For years now, former President Donald Trump has made it a key talking point despite there being no evidence of widespread fraud. But what impact do claims like these have on ordinary voters?
People like Olivia Coley-Pearson, a civil servant in Douglas, Georgia, who was arrested for voter fraud because she showed a first-time voter how the voting machines worked. Hear how she fought back on the latest episode of Into the Mix. Subscribe now, wherever you listen.
They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets. They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at citi.com slash weareciti. That's C-I-T-I dot com slash weareciti.
You're listening to Today Explains. Is it Today Explain or Today Explains? Explain-da. Explain-da. Olivia Stowell is a Ph.D. candidate and an instructor at the University of Michigan. Where I study media studies, specifically reality television. She also teaches a course on reality TV, which sounds like a lot of fun, but she's banning AI in her classrooms, which sounds...
A little less fun. We asked her when this became an issue. Yeah, it was fall of last year, so fall 2023. I was teaching, I was the TA for a TV class with my advisor. And the students had an assignment where they had to write about the social media response to a TV show of their choice.
I noticed sort of the repeated use of phrases and repeated kinds of sentence structures. I was like, this does not feel like student writing to me. I was pretty certain that a student had used ChatGPT, and they ended up admitting to it. So, confirmed case. Here is something to get you started. Good enough. At that point in the fall semester of 2023,
Did you have a policy? Did your class have a policy? Did your institution have a policy? There was no institution-wide policy, and there still isn't that I know of. So professors are kind of setting their own. And this is kind of a common thing at various universities where...
You know, the university will present a range of possible policies or suggestions for possible policies. ChatGPT is the new kid on campus, and some professors are banning it from their classes. Others are encouraging their students to test the limits of how AI can help. Duke University is doing this where they have kind of four levels of policy from, like, totally prohibit,
use with permission, use with citation, and totally allow are kind of like the four tiers. University of Delaware also has that same kind of scaffold available. And so at that point, in that class that I was in, the use of AI was prohibited, but I didn't write the policy. I was the assistant in that class. Okay, so you, seeing this sort of...
gap in policy, decided to fill that space with one of your own? Yes. Yeah. Tell us about it. How did you write it? What did you consult? What did you do with it once you wrote it? Obviously, as someone who is, like, terminally online and also studies media, I was aware of ChatGPT, right? And aware of the possibilities. But when it first came out, and when DALL-E and all the other ones kind of first came out, I was like,
And I still kind of feel this way. My instinct was that it would kind of be a flash in the pan and then sort of be residual or recede from the front stage of public conversation. But when it became clear that that didn't seem to be the case for students,
I felt like, okay, I have my gut reaction, but I need to also be informed. You know, I spent a lot of time reading a lot of academic papers, journalistic articles, media reporting, things like that. And so, after all of that research, I came to my policy, which is that I prohibit AI, and I present students with
five reasons that I think also represent kind of the five central ethical questions around AI for the university context. A dramatic reading of Olivia Stowell's AI policy. Reason one, this class is designed to improve your writing skills. If you are not writing, you're not improving.
Really the undergirding question there is what's the purpose of college, right? What's the purpose of being in the classroom? To me, the job of the student is learning. Rather than achieving perfection, the goal is to learn process. And so to me, if you're outsourcing that process,
there is a mismatch between the goal of what my classroom space is and what the student is doing. Like if your task is to learn how to write an outline and you have something else generate an outline for you, there's kind of a problem. So that first one is really, like, are you actually learning if you have ChatGPT or other AI forms produce your work for you? Reason two, using AI opens up academic honesty issues.
A lot of people didn't consent to have their work
be used to train AI. Taylor & Francis, which is like a big publishing company, just recently signed like a $10 million deal with Microsoft to allow Microsoft to train its AI on Taylor & Francis publications. I think our company signed one of those kinds of deals too. OpenAI said it has inked licensing agreements with The Atlantic and Vox Media. The latest in a flurry of deals the startup has made with publishers to support the development of its artificial intelligence products.
As part of the deals, OpenAI will be able to display news from The Atlantic and Vox Media, which owns the... Yes, yeah, exactly. Right. And like, I have an article in a Taylor & Francis journal, and I didn't get to say yes or no. Like, I didn't get to opt out on having my academic work, which is the product of...
years of research and study, you know, I didn't even have the chance to say no. And beyond that, I don't see any of that $10 million, obviously, you know. And so there's all of these kinds of questions about compensation, about consent. And beyond those larger questions of plagiarism or theft or, you know, access and training, there's also the issue that if a student misrepresents ChatGPT's work as their own, that's a different kind of plagiarism and theft.
Reason three, using AI does not produce reliably accurate results.
When, like, ChatGPT generates a fake paper that doesn't exist and cites it as a source in a student essay that it generates, this is a problem. And some people call this an AI hallucination. But what I actually prefer, there's a good paper by Michael Hicks and James Humphries and Joe Slater that says that it's bullshit. Their definition of bullshit is a claim made without regard for the truth, and ChatGPT mainly works
to just produce writing that seems human-like, with no regard for the truth or accuracy of those claims. I think those first three reasons are really kind of about the space of the classroom itself in some ways. But I also have two reasons that are more about sort of the social context of the world in which we live.
Reason four, ChatGPT has serious negative environmental impacts. Microsoft's water usage went up like over 30% from 2021 to 2022. And journalists and researchers think that that's because of the development of AI, because a lot of water is needed to cool the AI systems, especially in summer. So then there's also a global warming intersection: as the planet gets warmer, the systems are more inclined to overheat, which means you need more water to cool them. So it's kind of a...
circular sort of issue. Reason five, OpenAI and other AI companies have exploited workers. Time did an investigation about workers in Kenya. The Washington Post did an investigation about workers in the Philippines. There are lots of others about the
really underpaid and exploited workers, often concentrated in the global South, but also in refugee camps and prisons elsewhere around the world, where these are the people who are very often doing the image sorting, and often for very, very little money, below minimum wage, certainly below U.S. minimum wage, but even sometimes below the minimum wage in their own countries. The Washington Post, if I'm remembering right, called it digital sweatshops. And so there are all of these people in the global South
upon whom AI depends. And so I think there's an illusion that AI is this human-less entity.
But actually, the human-less machine runs upon a ton of invisibilized humans who are, you know, struggling to make ends meet. So that, you know, university students in the global North can what? Not write papers? It just seems wrong to me. What did your students say when you finally introduced it to them? Did you do that this summer or are you waiting until the fall semester? Yeah, I did this for my summer class. So I'm currently teaching a class this summer that has...
about 25 students in it. And so on the first day of class, I was going over like all the other classroom policies, like, you know, here's what the assignments look like. Here's how participation is graded. And here's why we're not going to use AI this summer. And they were really, really receptive, actually. Yes, they were. And I was really sort of excited by that. I have one student who's a computer science major. And, you know, they were like, I don't even know what I think yet.
But I like talking about why. I'm curious. You went for this outright ban. You're taking this sort of hardline approach. Yeah. AI tools completely prohibited in your class. You know, I'm a professional. I fraternize with fellow professionals. And a lot of professionals I know use these tools in their work to write emails, to synthesize information, to look for potential people to talk to for a project they're working on, whatever it might be.
Did any students want to make the argument that by making this ban outright, you were ill-preparing them for the work they might have to do one day in a professional space? No students made that argument, but I think that's an argument worth considering. I think that for me, and I, you know, want to be open to revising my position, part of the difference there
is that college is the space to learn. This is the space where you want to learn how to write an email. You know, this is the space where you want to learn how to set up a meeting. This is the space where you want to learn how to find sources. Part of the difference there is, like, you know, presumably a professional who uses some kind of AI technology to write an email actually does know how to write an email. Whereas some students come into
college not having ever learned how to do that. And so those are sometimes things we talk about in writing classes. And, you know, I'm not naive. I don't think my students are all going to never use AI again or something after my class. But I hope that in their use of it, they are more informed and not, you know, just using things uncritically. Do you think there's a day in the near or distant future where AI
feels more ethical and you might be more willing to let students use it in the classroom? I think that's possible. I think right now, for me, part of it is that I feel like I've yet to encounter a benefit that seems to outweigh the costs. So that's where I am right now: there are a lot of downsides and very few upsides.
I'm hopeful that we do build a better world and that that better world might include the tech we have now. And so, like, I think my policy has a hardline stance, but I don't think I have a hardline stance that's set in stone for all time.
That was Olivia Stowell. It's hopefully not too late to register for her class this fall at the University of Michigan in Ann Arbor. When we're back on Today Explained, we're going to hear from a professor who wants to give the bots a chance in his classroom.
Canva presents a work love story like no other. Meet Productivity. She's all business. The Canva doc is done. Creativity is more of a free thinker. Whiteboard brainstorm. They're worlds apart, but sometimes opposites attract. Thanks to Canva.
The data is in the deck. And now it's an animated graph. Canva, where productivity meets creativity. Now showing on computer screens everywhere. Love your work at Canva.com. Support for this show comes from Amazon Business. We could all use more time. Amazon Business offers smart business buying solutions so you can spend more time growing your business and less time doing the admin. I can see why they call it smart.
Learn more about smart business buying at AmazonBusiness.com. Church's original recipe is back. You can never go wrong with original. Still tastes the same like back in the day. Right now, get two pieces of chicken starting at only $2.99 or 10 pieces starting at only $10.99. Church's. Offer valid at participating locations.
Today Explained is back with Dr. Antonio Byrd. He's an English professor at the University of Missouri, Kansas City. But he's also been thinking a lot about generative AI tools in the classroom because he's on a joint task force of college professors from across the country who are trying to develop standards around writing with artificial intelligence.
My own approach is really to give students a little bit more agency in how they want to use language models. I believe that students have a right to learn about different types of writing technologies. If it's, like, a podcast, for example, they should know how to make a podcast, because that is a type of writing that happens there. But I think it's also really important that students understand the risks and rewards
that do come with using language models.
It's kind of like with using the internet. Like, these tools are pretty much everywhere. They are present. Right now, they're present even within Google Chrome. If you right-click and try to copy and paste, you'll see a "Help me write" option, which is their version of Gemini being able to produce text for you, or, as it says, help you write.
We are taking the next step in Gmail with Help me write. Let's say you got this email that your flight was canceled. You could reply and use Help me write. Just type in the prompt of what you want, an email to ask for a full refund, hit Create, and a full draft appears.
So because it feels a little inevitable that students will come to use these tools, I think you're going to see a generation of students who are aware of generative artificial intelligence, and they're kind of looking to faculty for some real guidance. How can I use this so I can be competitive on the job market, so that way I know that I am up to date on what some of these employers are really looking for?
I think some people in our audience might be surprised to hear that you, an English professor, are on board with AI in the classroom. Could you tell us, I mean, what that looks like in your class? Have you...
integrated AI into your courses, I don't know, this past summer, this past spring, this past fall? So one of the things that I do is that at the very start of the class, I have a video that I created where I explain to students the very basics of generative artificial intelligence.
So you should have a right to know what generative AI is. You should have a right to know also how it is actually really dangerous. And that's something that I have done here in this video. Then throughout my classes, I give students the option to use it.
not copying and pasting the language that's generated from AI, but using it to think about your creativity, to think about your critical thinking, to produce your own words. So that way you can try to be more effective as a thinker on the page.
So there's this online class that I teach for first-year students. It is the second course in first-year writing, where they learn research methods. And so I tell students, you can go to the library database and do a keyword search, a very old-school, traditional way of looking for sources for your literature review. Or, here is this tool called Research Rabbit.
You can go to Research Rabbit and you can put in just one PDF article that you found, and it's going to surface other articles that have cited that article or are related to it. And it shows up as a map on the screen. So you can kind of visualize the different types of articles that might be useful for your own research. But there is the caveat. You actually have to read those articles.
You can't just pull out the citations and start putting them into your paper. You need to know if they really fit with the argument of what you're actually doing. And this Research Rabbit is not actually a rabbit. It's AI.
No, it's not like a rabbit at all. It is, exactly, AI-powered. I would even say it's kind of like Google Scholar. When you do a search on Google Scholar, sometimes they list an article and they will tell you how many other people cited that article.
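The episode doesn't get into the mechanics, but here is a minimal sketch of the kind of citation-graph lookup a tool like Research Rabbit automates: start from one paper you already have and list the papers that cite it. Research Rabbit's own backend isn't public, so this leans on the public Semantic Scholar Graph API as a stand-in; the seed DOI and the helper name `papers_citing` are illustrative, not anything named in the episode.

```python
import requests

# Public Semantic Scholar Graph API, used here as a stand-in for
# Research Rabbit's (non-public) backend.
API = "https://api.semanticscholar.org/graph/v1/paper"

def papers_citing(paper_id: str, limit: int = 20) -> list[tuple[str, int]]:
    """Return (title, year) pairs for papers that cite `paper_id`.

    `paper_id` may be a DOI ("DOI:10.1145/3442188.3445922"), an arXiv id
    ("arXiv:2005.14165"), or a Semantic Scholar paper id.
    """
    resp = requests.get(
        f"{API}/{paper_id}/citations",
        params={"fields": "title,year", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    # Each item in "data" wraps the citing paper's requested fields.
    return [
        (c["citingPaper"].get("title"), c["citingPaper"].get("year"))
        for c in resp.json().get("data", [])
    ]

if __name__ == "__main__":
    # Example seed: Bender et al.'s "Stochastic Parrots" paper (a real DOI).
    for title, year in papers_citing("DOI:10.1145/3442188.3445922"):
        print(year, "-", title)
```

A tool like Research Rabbit repeats this expansion from each result and draws the map on screen; as Byrd says, the reading is still on you.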
When ChatGPT arrived and became inescapable, it felt like a lot of people were declaring the personal essay dead.
Are you at all afraid that students might go a little further? Oh, you know, I'm using AI in this class, you know, for research. Maybe I'll just, you know, turn in a paper that was written by a robot too. Yeah, I think one of the issues with that, though, is that language models still have a particular kind of wording when they're actually doing their output,
and writing teachers, many of them, kind of start to pick up that there's something kind of off or odd about it. Like there are no real specific details that you would usually see if someone was actually telling a story of some kind. So, you know, students might be able to use that, but the language is very distinct, I think, when you're actually doing those types of outputs.
So you're saying you're not too worried about AI fooling you? You know, that's a really good question. I think when it comes to students, if they haven't taken the time to really work with the language or work with that artificial intelligence, then the language that they give teachers will be kind of glaringly obvious, like, this doesn't sound like you at all. So that's what I'm thinking where we are right now.
You know, it's clear that colleges and universities across the country, maybe across the globe, have not yet figured this out, that it's sort of the Wild West, that professors are trying to come up with their own policies for their own particular classes. What do you want to see more of as universities across the planet try to figure this out? I think the biggest thing that I would like to see is a lot more consistency in what we message to students.
Because, say, within a single hallway, a student can go down to one class and that professor says, "AI is completely banned."
And then that student can go two doors down to the next professor for a different class. And that professor says, use AI. Here's how you can use it. It's a great technology. Let's go for it. So when students get this inconsistent messaging, they're not entirely sure, like, how am I supposed to think about AI and how it works with my writing and how I can actually learn with it? I did have one student who
took that research methods class. And when she saw that there was an option on how to use AI, she really appreciated having a professor who kind of tells them, here's how to think about it. Here's the guidance on it. Because for so long, all she really heard was: don't use it. Plagiarism is a real problem. I'm going to ban it. So some consistent messaging for students is really important when they go from class to class.
Dr. Antonio Byrd, English, University of Missouri, Kansas City. He's also on the MLA-CCCC Joint Task Force on Writing and AI. It's really called that. I wonder if AI came up with the name. Our show today was made by Peter
Balonon-Rosen with an assist from Toby. We were edited by Amina Alsadi, fact-checked by Laura Bullard, and mixed by Patrick Boyd and Andrea Kristinsdóttir. I'm Sean Rameswaram, and this is Today Explained.