Oh, I really didn’t want to discuss AI. Unless it is about how ChatGPT, when said aloud in French, sounds like the speaker is saying ‘Cat, I farted’.
There has already been a tonne of talk about how ChatGPT is ruining our critical thinking abilities, contributing to the acceleration of climate chaos, and providing voters with inaccurate information ahead of elections.
Elsewhere, people are praising ChatGPT for helping them prepare for job interviews and come up with solid travel itineraries, and for assisting those of us with neurodivergences in a number of different ways.
Moreover, Meta has - allegedly - used my published words without my permission to train its AI. These were words I sweated over, based on knowledge that took me years to acquire, and that articulated ideas that required deep human thought. Meta refuses to pay authors (‘content creators’), and the technology itself is already replacing writers, graphic designers and so on. It’s a threat to humanity.
And, to be honest, I had no interest in even experimenting with ChatGPT. I’m a Luddite and, even when people suggested it’d help me write difficult emails, I had no desire to mess around with any of that stuff.
But ChatGPT found me.
In fact, I was reading AI-generated writing way before any of ‘the discourse’ really got going. And I was doing so without even realising what the technology was capable of (disappointingly little, but more on that below).
It began when I was grading student essays in one of my roles as an adjunct professor. I’ve been doing this for nearly 15 years now and, in that time, I’ve learned that it’s not uncommon for students to use the same arguments, visual examples, or theoretical concepts. Especially undergraduates because, of course, they are all drawing on the same module content.
However, this time something felt off. I was about a quarter of the way through the batch when I realised several students had used subtitles to divide their essays, something I’ve rarely encountered before. In my experience, students tend to struggle to come up with their own essay titles, let alone put the effort into writing five or six solid subtitles. Moreover, not many students have enough confidence in - or even awareness of - the coherence of their thesis.
But perhaps this was a skill they were bringing from one of their other classes. Who knows? In academic skills sessions I often recommend that students use subtitles to help them understand the building blocks of their argument.
But then I noticed other formatting quirks. Like the students who had used bullet points for each of their visual examples. This is a definite no-no. But, again, writing styles can vary across the world and this was the first time I had taught an academic programme in this country.
There were lots of visual examples referenced that we hadn’t explored in class. Again, maybe the students were just confident independent researchers. But something was still niggling at me.
Then, as I got over halfway through the substantial batch of essays, I noticed very specific phrases started cropping up frequently. Too many essays talked about ‘the rich tapestry of Southeast Asian arts’. Were they borrowing that from one of the readings? I checked and it didn’t seem like it.
Then the penny dropped.
I reluctantly opened ChatGPT for the first time with no idea what I was going to encounter.
I was pleasantly surprised by the clean layout, the intuitive chat feed. I typed a loose version of the essay question. And, in less than five seconds, I had an impressive response.
I felt sick.
I could instantly see how this might appeal to a stressed-out student, an unconfident undergraduate. The essay itself looked pretty alright, at least on a first skim. The vocabulary, syntax choices, and metaphorical flourishes were, on the face of it, highly sophisticated. But this was just gloss; these impressive-sounding paragraphs had nothing to them. No nutritional value. It was over-processed garbage which lacked critical depth and was riddled with inaccuracies and completely fictitious references.
But, perhaps worse than all of that, was the little postscript beneath the assignment:
Let me know if you would like me to help create a shorter version of this for a presentation? Would you like this rewritten in a more academic tone?
The sheer confidence of it all! The pure self-belief! (Can AI have self? That’s for another essay.)
The arrogance!
It reminded me of students who have learned the art of speaking with confidence and gravitas, and who use that skill to dominate discussions without ever offering any evidence of academic intellect. Meanwhile their classmates, who are vastly smarter, are so wowed that they keep schtum.
To err is to be human
There are far bigger problems posed by generative AI, but its sheer arrogance and inability to doubt itself terrifies me.
Would you like me to help put this information into a PDF document for you? Would you like me to draft the final text for you?
It terrifies me most because it is seductive. It doesn’t allow for doubt. It never says, I may have parsed that theoretical concept incorrectly so I recommend you check the reference yourself.
And it doesn't leave room for our own doubt. It soothes us and tells us all our ideas are brilliant and it can help us with exactly what we need next.
I’m not a total refusenik
Since then, I’ve played around with ChatGPT here and there. I need to understand it because my students will use it. (For example, the Open University, where I also teach, has a generative AI policy that allows students to use generative AI to support their learning.)
As someone with executive function issues, I have also found GoblinTools immensely useful as I start to launch my own business. It allows me to brain dump all my ideas and then gives me a clear to-do list that stops me from becoming overwhelmed and paralysed by decision fatigue.
Also, I work from home and much of my work is self-employed. I don’t have a colleague at the desk next to me. My neurodivergence can make me anxious about sharing ideas with people I know. So, occasionally, I like to bounce an idea off GoblinTools and it helps me sort out my thoughts. BUT
I could never and would never use it to generate writing. I am a writer. I have always written. My prose is not always the best, and sometimes lacks the clarity I wish for. But it is never as bad as the godawful garbage AI produces. I'd no sooner ask ChatGPT to write a book on my behalf than I would ask it to comfort my child after a nightmare.
Why outsource my experience putting word beside word? It's as joyful and frustrating as watching my toddler peel an orange.
Student consequences
BTW, the university had no AI policy, so I had to defer to the existing academic integrity policy. In the obvious cases, students who had simply copied and pasted the entire ChatGPT response were sent a very frightening email and summoned to a meeting with me. They all admitted their use, full of fear that they would be kicked out of the university or lose their scholarships. On this basis, I decided that they could resubmit an alternative assignment, but it would only receive a bare pass.
In the less clear-cut cases, I awarded them their grade but left stark warnings in the feedback that AI use was suspected and could constitute plagiarism. Incidentally, they all received lower grades anyway, because the quality of their work was lower than that of students who submitted less polished essays - perhaps with grammatical errors or more straightforward language - but who nevertheless demonstrated critical thinking and a complexity of ideas.