
I did what any responsible, well-adjusted academic would do when Grammarly’s AI detector told me my writing was probably written by AI.
I panicked. Okay, I’ll be honest. I spiraled into despair, then I panicked.
After I got the self-flagellation out of the way, I did the most reasonable thing available to me in the year of our lord 2026: I went to other AI tools to soothe my nerves about being mistaken for AI.
This is where the story turns.
Immediate Relatability (or, I Did Not Have This on My Bingo Card)
I wasn’t trying to cheat. I wasn’t trying to outsource thinking.
I was writing. As thoughtfully and carefully as I could, in the voice I’ve spent literal decades cultivating.
So when Grammarly’s detector squinted at my prose and said, “Hmm… suspicious,” my brain did not respond with calm, philosophical reason. It responded with:
Wait. What???
As for many of us, this all felt familiar. Accusations. Credibility. Professional side-eye. The quiet, steady dread of being misunderstood by a system that doesn’t explain itself but feels oddly authoritative (Does this theme feel like déjà vu? 🤔).
So I did what many people like me do these days when they’re anxious and spinning out into orbit at 11 p.m.:
I opened ChatGPT.
And then Claude.
The Irony Crescendos Beautifully
Here’s the part I didn’t know what to do with at first. I had actually used Grammarly earlier in the writing process. It helped me clean up grammar, smooth transitions, and structure paragraphs and the overall essay.
So the same system that helped me polish my human writing later turned around and questioned whether that writing was human at all.
At this point, the robots weren’t just arguing with me. They were arguing with themselves.
What happened next was… not what I expected.
Neither AI insisted that detectors are flawless arbiters of truth. Instead, both were reassuring in a stranger way: they explained to me, at length, why AI detection tools are unreliable, methodologically shaky, and prone to false positives.

They critiqued the very category of “AI-sounding writing.” They explained how fluency, structure, and clarity often get misread as artificial. They named the training data problems and the institutional misuse.
And then, the pièce de résistance, Claude offered me tips to make my writing sound more “human.”
Let me say that again.
I went to an AI for illumination–by extension, some comfort. The AI told me the detector was nonsense. Then the AI coached me on how to perform humanity better.
Oh, before I forget–there’s whipped cream to go with that cherry on top: Claude asserted that AI is trained on academic writing. So yeah. It all makes sense, see? Just ignore Grammarly.
At this point, I had to laugh.
Because if you don’t laugh, you’re just an anxious mammal staring into the abyss of recursive automation.
Academic Writing in the Age of Being Side-Eyed by Machines
Here’s the part where the joke stops being just a joke.
The writing the detector flagged wasn’t sloppy or generic. It was clear, carefully structured, cautious, analytic, professionally toned to suit the serious topics it was trying to stitch together and convey to readers.
In other words: academic writing.
The kind we train students to produce.
The kind scholars refine over years of peer review.
The kind many of us—especially those who’ve had to be extra careful—learn to wield as armor.
If AI systems are trained on institutional prose, and humans are trained by institutions to write in that same way, then misrecognition isn’t a glitch. It’s inevitable.
The absurdity isn’t that the robot thinks I’m a robot.
The absurdity is that sounding competent now triggers suspicion.
Embracing the Absurd, Releasing the Anxiety
Eventually, somewhere around my third “it’s not you, it’s the detector” explanation from an AI, something in me loosened.
This is ridiculous. Objectively. Structurally. Existentially.
I am a human. I wrote the thing.
A tool misfired. Other tools explained why.
No agents from the Office of Student Conduct and Integrity arrived at my door.
I now choose to treat this moment as what it is: a snapshot of a strange transitional era, where machines argue with each other about whether I count as real.
The robots think I’m a robot.
The robots also say that’s nonsense.
I’m going to believe the part where I’m still allowed to write.
And maybe—just maybe—I’ll let this be funny instead of terrifying.
Because if we don’t learn how to laugh at the absurdities of the AI age, we’ll spend all our time trying to prove we’re human to systems that don’t actually know what that means.
And honestly?
I’ve already done enough of that.
