
( Richard Drew, File / AP Photo )
Stuart Selber, a professor in Penn State's English department who studies the applications and implications of digital technologies for writing, discusses whether concerns about ChatGPT threatening the quality of education are reasonable and how the technology could be used to foster fairer educational outcomes.
Brian Lehrer: Brian Lehrer on WNYC. Does it seem like wherever you turn, conversations are happening about AI, artificial intelligence, and in particular ChatGPT? They're all over the place, people talking about these things and in different directions. People are promising that AI will radically reshape so many things. There are copyright concerns about the data sets the technology was trained on. Can it write its own revisions to the software that trained it? There's so much to talk about, also with Google and Microsoft Bing both announcing they're integrating AI into their search functions. The competition for the best AI technology for consumers to use is heating up on that level.
One of the most interesting discussions happening right now is the way AI will change education. You probably know that some teachers and professors have sounded the alarm about how the technology will lead to widespread cheating and reduce reading comprehension. We've had one segment on that already on the show. We've heard some teachers and college professors call in. On the other hand, some educators are embracing the technology and believe it will open up educational opportunities. Our guest, Stuart Selber, is a professor in Penn State's English department and director of the Penn State Digital English Studio. He's here to discuss whether concerns about ChatGPT threatening the quality of education are reasonable, and how the technology could be used to foster fairer educational outcomes. Professor Selber, thanks for coming on with us, and welcome to WNYC.
Stuart Selber: Hi, Brian. Thanks for having me.
Brian: Now, I think that you're one of the few people out there who is making the argument that there's the potential here for ChatGPT and other such programs to make education fairer. Is that correct?
Stuart: We'll see. It's a possibility. Technologies are always infused with possibilities. It's going to be a mixed bag like every other technology but I'm not really about the gloom and doom if you know what I mean.
Brian: Because?
Stuart: Take the cheating piece, for example. There's a lot of media panic about the cheating piece. It's understandable, because these robots generate very human-like texts. If we don't know how to think about them, if we don't know how to integrate ChatGPT into our educational system, into our classes in particular, there are going to be problems. But there's a history here. All technologies, or most writing technologies anyway, have opened the door to cheating and plagiarism on some level. In some ways, this really is nothing new.
Brian: Based on your understanding of how the technology works, how do you think that ChatGPT might force teachers to change their methods, if at all?
Stuart: I'm glad you asked that question, because in reading and watching many interviews, I see very few people talking about how it actually works. I realize people don't want to get too far into the weeds, but what your listeners need to understand is that the responses to prompts are mathematical responses to human communication problems. They're really statistical responses. The robot returns the statistically most likely next sentences; that's what it does. It looks for patterns. Patterns could be wrong. Patterns could be off-point. What we're seeing is very much a machine response to what is probably the hardest skill that humans can develop, which is writing and communication skills.
Brian: Let me ask it this way, then: as someone who studies how digital technology might impact writing and communication, how do you think ChatGPT could benefit how we teach people to write?
Stuart: This is what we're going to be working on. This was released, I believe, November 30th of last year. It's probably been on people's radar screens for really four to six weeks now. Schools are scrambling. As you know, some public schools, in New York City and out in Seattle, have banned it, which I don't think is the right way to think about it. You mentioned the integration of AI into search engines; it's going to be more than that. It's going to be the integration of AI into our word-processing programs. Students are going to open Microsoft Word to write something and the writing robot is going to be there as an assistant. That's just a matter of time.
Brian: Just like what we have now in so many writing programs, where it will already suggest a way to complete a sentence that you've started, based on how sentences that start like that commonly end. You think those digital assistants, depending on how you set your defaults, are automatically going to give you whole paragraphs, whole pages of things that you might want to take from them?
Stuart: I don't know how it will be implemented in the interface if it will be a default feature or if you will have to select it somehow, opt-in, but it's going to be there. We need to learn to write with it. That's going to be our best strategy.
Brian: Do you have tips for using ChatGPT while writing to best effect, whatever that is?
Stuart: I think where most educators should begin is with an assignment that asks students to write with ChatGPT. In other words, let's make ChatGPT, let's make writing robots and artificial intelligence a subject of education. Let's talk to students about how this works. Let's talk to students about what it can do, and then let's explore with students what might be productive about it. I'll just give you an example. You run a search in Google or Bing and it returns links for you to follow and read. You type a prompt into ChatGPT and it sends you a summary. It doesn't send you links to then explore. This is why Google is a bit worried, because I think people will find that to be an attractive mode of search. Tell me what we know about the state of X. [unintelligible 00:08:23] [crosstalk]
Brian: Right. Summarize for me, don't just give me links where I can start poking around.
Stuart: Exactly.
Brian: How does ChatGPT know what links are authoritative? If I go ask for some health advice, for example, I'm going to get some legitimate sites and I'm going to get some quackery or things that want to sell me stuff.
Stuart: Yes.
Brian: How does ChatGPT know the difference?
Stuart: It doesn't. This is what people need to understand. It doesn't know what's true and not true. It's not authoritative. It is giving you what is statistically the most likely next set of texts in response to your prompt. On the one hand, there's going to be some skill in writing good prompts. There are already people out there who are spinning up businesses around prompt writing, because if you ask ChatGPT the same thing in different ways, you will get different responses, and some of those will be better than others, but it doesn't have any authority around understanding, around knowledge interpretation. If there's bad information in the corpus that it's using, you will get bad information. Of course, what is being black-boxed-
Brian: Go ahead.
Stuart: -by both Google and Microsoft is we don't know what these robots are being trained with. We don't know what those texts are. We know a lot of them come from the internet, but not all of it.
Brian: When you say being black-boxed by Google and Microsoft, meaning they're not telling us the formulas for how their AI search engines do their research and come up with descriptive answers for us, is that what you're saying?
Stuart: What I'm saying is that we know how these robots are trained broadly speaking, but we don't know the texts upon which they are trained. When they give you a response they are responding using a very large corpus of texts, but we don't know what that library, what that corpus is.
Brian: Listeners, we have a little bit of time for phone calls for Stuart Selber, Penn State English professor and director of the Penn State Digital English Studio, as we talk about the advent of artificial intelligence and its increasing use by students, by K-12 teachers and college professors in education to some degree, and also as search engine bots. Listeners, first of all, have you tried Bing's integrated ChatGPT search engine function, the Microsoft Bing search engine? Have you tried it yet?
I'm not sure if Google, which says it also has an AI function, has integrated it in the same way yet. You could tell me. You could compare the two, or any other search engine that you use that might have an AI component now. Just tell us about your early explorations of that: 212-433-WNYC, 212-433-9692. Anybody who does teach, are you hearing Professor Selber when he says this could be used positively in education, not just as a way around actually doing the work for students? Tell us your stories, ask your questions. 212-433-WNYC. Here's Dominic in the Bronx, who says he teaches creative writing at the Borough of Manhattan Community College. Hey, Dominic. Thanks so much for calling in. Hi there.
Dominic: Hi, Brian. Hi, Stuart. Thank you for taking my call. I'm one of those Luddites who resists technology as much as possible, and when I was calling the station, the bot figured out what phone number to call. I have no problem with it, but I do feel like the conversation at times suggests that we can no longer do anything about it. Sort of like the way people talk about climate change: well, what do we do, this is just normal. I think we need to step back and remember that the technology was created by corporations who stand to make a lot of money.
Learning how to write, and I agree it's true, is probably the most difficult thing we do as humans. Learning how to read, these are difficult things, and we know how to teach reading and writing, and we can start very early on. We don't need to rely on the technology. I think incorporating it opens up a tremendous opportunity for even more malfeasance. It's bad enough we've already got dishonesty in Congress at that level and in corporations. I think as educators we need to do everything in our power to fight back and say, "No, we don't accept this." Yes, we understand that this could benefit, but for whom? Who benefits from this is what we need to ask, and I don't think [unintelligible 00:13:52].
Brian: Dominic, thank you, and call us again. It's a great question, of course, to ask about anything: who benefits? Here's another very important critique, I think, coming in from a listener on Twitter, Professor. It says-- Wait, I just got off that screen. I know I can get it back. There it is. "What are the social and political values encoded into ChatGPT and all other digital technologies? That's the thing media don't ask about or investigate effectively. The software is written by humans." Here's the nub of it as far as I'm concerned: the software is written by humans with values and biases, which inform the selection of data. Do you agree?
Stuart: Oh, I agree 100%. As for Dominic, thank you for your question. Dominic is absolutely right. There's this talk of inevitability, and with something like artificial intelligence, we're actually in a liminal moment where we can see it. It hasn't been integrated to the point where it disappears. We can see it and we can ask these very important questions about technology and its role in education. We have to remember books are technology, the printing press is technology, pencils and paper are technology. They've become so integrated into our educational system that we no longer see them as technology anymore. Yet people centuries ago dealt with those very issues. We're actually lucky that this has not yet become fully integrated. We have a chance to look at what you pulled off of Twitter, which is the social values.
First of all, there is no software without a social value or a bias. As the person who tweeted pointed out, these are human inventions, so there's going to be a point of view. What's so interesting is that when people like a piece of software, it's usually because they like the point of view; it accommodates them, it works for them. It's not that it doesn't have a point of view, which is impossible.
To talk about social values: the writing robot has to learn, for example, what kind of material needs to be flagged as offensive and unacceptable. How does that happen? We pay human readers in developing countries a dollar or two an hour to read the worst content on the internet in order to flag it as inappropriate, so the robot can then learn what's inappropriate. Folks are reporting being traumatized by having to read through the worst of the worst of the internet in order to help train the robot about what is offensive and not acceptable. There are layers here, layers and layers of social values and bias, but I would say that that's anywhere and everywhere. Talk to me about a technology that doesn't have a social value or a bias built into it.
Brian: To what you said before about AI search technology looking for the statistically most likely patterns of information to respond to you with, that would also seem to reinforce majoritarian biases, if you will. Whatever's out there the most, whatever would be majoritarian in whatever realm of life, that's what it's going to spit back to you as truth.
Stuart: This is what people are worried about around misinformation, that you can use these robots to generate all kinds of misinformation quickly and to spread it wildly into that corpus. If we think we have a problem now in sorting out what's true and not true, just wait till these bots get involved.
Brian: WNYC FM HD and AM New York, WNJT-FM 88.1 Trenton, WNJP 88.5 Sussex, WNJY 89.3 Netcong, and WNJO 90.3 Toms River. We are New York and New Jersey Public Radio, live streaming at wnyc.org. Ronnie in Manhattan has a story, I think. Hi, Ronnie, you're on WNYC.
Ronnie: Thank you for taking my call. I am starting a new business in the garment industry and I am working on my website. For my landing page, I created some text to put in. Writing is just not my thing, so through AI, I used ChatGPT, and what came out was absolutely amazing. I had to check through and maybe change one letter or two, but it was phenomenal.
Brian: Thank you very much. Andy in Westchester has a story too. Andy, you're on WNYC. Hello?
Andy: All right. My kid is in high school, and the thing that I've noticed is that teachers already assign less classic great literature, because there's so much written on the internet that kids can crib from. They also assign less writing for the same reason, and there are different kinds of assignments, like video assignments and things like that. If kids are not going to be reading great literature and trying to write it and write about it, then we're going to lose that ability to write and the ability to think. I wonder if the professor has any advice for teachers, parents, and students about what we can do about this.
Brian: Thank you, Andy. Do you think that's happening? Do you see any evidence, as you study this field, Professor, that K-12 teachers, especially in high school I guess, are actually assigning less writing and less reading because of what's out there on the internet, including ChatGPT?
Stuart: I think something that has happened is that reading is more than one thing now, and writing is more than one thing now. We used to think about reading books linearly, chapter to chapter, page to page, deep reading, patient reading. Think about how people read on the web today; they read differently. Some of your listeners may not think about skimming, scanning, and searching, they may not consider that to be reading, but in fact, that's 21st-century reading. Part of that has to do with just managing the massive amounts of text that are out there.
I would not throw out patient reading. I agree with Andy: people need to learn to read patiently and carefully. They need to learn to write longer-form arguments, essayistic or verbal arguments, but they also need to learn how to write for the web, which is a place where people might do some patient reading, if they're very invested in doing it, but are also doing other kinds of reading and other kinds of writing. It has just made the job of teachers all the more difficult.
Brian: We're almost out of time, but we have one more caller who I want to get in because I think she has an interesting question. It's Molly in Jersey City. You're on WNYC. Hi, Molly.
Molly: Hi. Good morning. I have a question about ELLs or bilingual people who are using the tools, and what the outcome of using those prompts would be if they're coming from that different perspective.
Brian: Are you asking can it be helpful to students in a bilingual setting or as they learn a second language?
Molly: Right. What are the implications? Would it be less helpful because it might interfere with their learning or would it be more helpful because it will give them feedback and help them learn language?
Brian: Thank you, Molly. Professor, we've got about a minute left.
Stuart: Yes. I'm not sure how it helps there, to be honest. Things like Google Translate have not particularly helped. A lot of those translations are not particularly good. They may help you in a spot, they may help you order something off the menu, but they're most likely not going to help you in any kind of high-stakes translation situation. They might give you a starting point. I think that's what ChatGPT ultimately could do for writers: give them a starting point for thinking about things. In higher education, we would call that invention. When someone sits down to work on a piece of writing, they usually start with some invention techniques. Asking the robot to summarize for you the themes and issues around a particular topic could give you a starting point, and a very productive one. It's not going to return the cogent essay that you need to hand in to your high school teacher or your college teacher.
Brian: Molly, I hope that's responsive and helpful to you. There we leave it, just the latest of many conversations we're certainly going to continue to have about artificial intelligence integrating itself into our lives, and us integrating it into our lives. Who's in control of this, are we or are the bots? That's one of the conversations. Stuart Selber is a professor in Penn State's English department and director of the Penn State Digital English Studio, who studies all this. Thank you so much for talking about AI in education and the other things that we touched on. We really appreciate it.
Stuart: It was great to be here, Brian. Thank you.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.