Forget AI policies. We need difficult conversations
The fall semester is opening for universities across the world, with instructors worried about AI for many reasons, including cheating on tests, the environmental impact, and academic integrity policies for classrooms. It can feel like AI concerns are everywhere. But the conversation on AI misses a broader problem: students have a difficult time understanding that mechanically correct prose does not make good writing.
Until the release of GenAI, I could rely on calling out bad writing by identifying mechanical and organizational issues. I could usually see kernels of good ideas in rushed prose as the student developed their ideas. Together, the students and I would hone those ideas while we worked on their prose. In my classes, as I helped the students rewrite, their ideas improved with revision and by talking about those revisions. These conversations necessitated some conflict, but that conflict always had an eye toward improving the thought going into the writing. The ancient Greeks even had a word for this kind of productive conflict: agonism. Agonism embraces conflict and struggle with the understanding that such conflict will make one’s ideas better. When students revise, they clarify their thinking and rethink their ideas.
Recently, though, I’ve encountered a different problem: when students use GenAI, they end up with meaningless and boring ideas wrapped up in mechanically correct prose. They use ChatGPT to write uninspired ideas with designer syntax. I can’t see kernels of good ideas anymore. There is no thinking from GenAI. It is not interesting or novel. In fact, large language models are specifically designed not to write anything particularly novel. Rather, they find averages or likelihoods of related words. The words ChatGPT produces all make sense, and the sentences have complex syntax. But if I read AI prose with any critical eye, as Gertrude Stein might say, there’s no there there.
Yet I have listened to news anchors, podcasters, and corporate CEOs shill AI technologies. If you were to listen to AI evangelists, you’d mistakenly think AI-written text is high quality. But GenAI tech is devoid of audience awareness. It doesn’t understand genre conventions or why people use them. It has no intentions.
In fact, companies trying to sell educators on AI miss the point of learning to write. The act of writing in a classroom is the point. Writing is an activity. Learning to write, learning to organize one’s thoughts, and crafting one’s rhetoric toward a target audience—these are central elements of being a good writer. These elements are quite scary. They’re difficult and frustrating and certainly more complicated than mechanically correct sentences.
What we need, then, is a language about the quality of ideas. Such language, I think, is found in the virtue of discernment. Discerning writers and readers judge well. As discerning writers and readers, we identify what is superficial and derivative while seeking out and valuing ideas that offer genuine insight. Discernment means examining a text and slowly thinking about the meaning of the sentences and the quality of the ideas expressed in those sentences.
Part of this discernment is recognizing that there is not a single type of good writing. A biologist, a chemist, a mechanical engineer, a physicist, a literature professor, and a linguist all have different concepts of what good writing means. Each has a set of disciplinary conventions and expectations. Those conventions reveal the expertise of the person writing. GenAI, though, has no expertise; it’s wrong about a great many things. If we value discernment, and by extension expertise, students need to be taught to recognize where judgment is coming from and why their work, especially their writing, is judged.
To do this, we need to embrace difficult conversations in which teachers challenge their students’ writing and ideas, even when the writing and ideas are already decent. Because decent writing and ideas can always get better. The enemy of great writing is decent writing. What I’m really talking about here is, as I mentioned earlier, agonism. If we embrace agonism, we can resolve some of the AI drama and hand-wringing. GenAI can’t generate good ideas, not yet at least. We don’t need AI policies, then. What we need instead are policies around honest and direct debate. Openness to criticism, including giving and taking feedback, is a way to short-circuit AI use because thinking is the emphasis, not a paper. When we try to be both critical and helpful, students can flourish as discerning thinkers.
Discernment takes time and patience on the part of both student and teacher. It takes getting to know students and building trust with them that your criticism comes from a genuine place. With students, I have found conversations helpful when they center on what good feedback looks like. Good feedback is neither blind criticism nor effusive praise. As a reader, I tell my students that I need to be direct and transparent in my criticism. I remind them the goal is to improve their prose and their ideas. I won’t tell them what they want to hear but what they need to hear. GenAI can do none of this. Only a teacher can.