I’ve been teaching college Earth science courses as a part-time faculty member for a long time now, all while juggling other jobs. I started because it was enjoyable; no one gets into this line of work for the famously poor pay or complete lack of job security. Working with students is just one of those genuinely fulfilling experiences that is addictive enough that they ought to warn people about it.
But thanks to generative AI, it has become mostly miserable, at least in certain settings.
For the last few years, I’ve been exclusively teaching asynchronous online courses, meaning recorded videos rather than live sessions. These have always been more challenging than face-to-face classes, where it’s easier to keep students on track. If a student doesn’t have to show up in a room at a scheduled time, and no one can see their involuntary facial expressions when they don’t understand something, it becomes far more likely that they’ll just… fall off.
But since the appearance of ChatGPT, the instructor’s job isn’t just to teach the subject and frantically attempt to keep every student’s plate spinning. Increasingly, it’s to moonlight as a detective and prosecutor because students without the motivation to do the work don’t have to skip it anymore. They can turn in a work-shaped simulacrum almost as easily. And a substantial number do—in a recent College Board survey of 600 high school students, 84 percent said they had used generative AI for schoolwork.
Teachers are certainly no strangers to cheating. But peeking at concealed notes during an exam or plagiarizing paragraphs from Wikipedia are quaint stone tools compared to the WMDs known as LLMs. I long for the binary comfort of a simple question like “cheating or not?” Now, I’m forced to adjudicate 256 shades of gray and provide enough documentation to defend each decision in case a student appeals the grade through multiple levels of institutional review.