Performance Of Imperfection

Notes from an Academic Ecosystem Under Observation

—Paige is currently on Higher Ed Perimeter Watch


So just to say it clearly: students are using AI programs to write papers. Professors, teachers, and teaching assistants are trying to detect it. That part is known.

But what’s newer is what happens next, or what seems to be happening.

Because detection systems tend to flag writing that is:

  • too clean,
  • too structured,
  • or too consistent.
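To make "too consistent" a little more concrete: here is a toy sketch, not any real detector's method, that scores text by how uniform its sentence lengths are. Real systems reportedly lean on much richer signals (perplexity, token statistics); this is only a caricature of the general idea that low variation reads as machine-like.

```python
import re
import statistics

def consistency_score(text):
    """Toy proxy for 'too consistent': low variation in sentence
    length reads as suspiciously uniform. Not a real detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    # Coefficient of variation: lower = more uniform = "cleaner"
    return statistics.stdev(lengths) / mean if mean else 0.0

uniform = "This is a line. Here is another. Now a third one. And a fourth."
varied = "Short. This one runs quite a bit longer than the first. Okay."
# A lower score means more uniform sentence lengths.
print(consistency_score(uniform) < consistency_score(varied))  # True
```

Under this toy metric, evenly paced prose scores lower (more "suspicious") than prose that alternates short and long sentences.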

Now some students are starting to add small imperfections back in. On purpose, I mean.

Not enough to look careless—just enough to look human. A slightly awkward sentence. A minor grammatical slip. A phrase that doesn’t quite land, or lands a little off.

At first glance, this just looks like weaker writing. But it might actually be something else. A kind of camouflage, an adaptation—if that’s the right word.



I’ve also seen mention of tools that do this automatically—things like QuillBot or StealthWriter, and others sometimes just called:

“AI humanizers.”

The idea being you run the text back through and it roughs it up a little so it doesn’t look too polished.
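I have no idea how those tools actually work internally; nothing about their methods is public as far as I can tell. But as a caricature of "roughing it up a little," a sketch might look like this: swap a few words for clunkier near-synonyms, drop an occasional comma. (The word list and rates here are invented for illustration.)

```python
import random

def rough_up(text, rate=0.15, seed=0):
    """Toy caricature of an 'AI humanizer': randomly swap a few
    words for clunkier near-synonyms and drop an occasional comma.
    Purely illustrative; real tools are far more sophisticated."""
    swaps = {"use": "utilize", "also": "as well", "very": "quite",
             "shows": "kind of shows", "clearly": "pretty clearly"}
    rng = random.Random(seed)
    out = []
    for word in text.split():
        bare = word.strip(",.").lower()
        if bare in swaps and rng.random() < rate:
            word = word.replace(bare, swaps[bare])
        elif word.endswith(",") and rng.random() < rate:
            word = word[:-1]  # drop a comma: slightly off, on purpose
        out.append(word)
    return " ".join(out)

print(rough_up("The evidence clearly shows that students also use these tools, very often.", rate=1.0))
```

The point of the sketch is just that the "imperfections" are themselves generated mechanically, which is the irony the rest of this post circles around.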

This is where it gets a little confusing. Some people say awkward phrasing or uneven tone is now a sign of AI. Others say it’s the writing that’s too smooth and polished that looks suspicious. I’m not sure which is supposed to be right anymore.

And meanwhile, those same features are being introduced deliberately to avoid detection.

So:

  • smooth writing = possibly AI
  • slightly off writing = also possibly AI

And then there’s the full-circle part. AI-generated writing is being edited—or even, as we said, processed again by AI—to introduce irregularities.

So the sloppiness still comes from AI, even when it’s there to prove that the writing didn’t.

Students who naturally write clearly might start to look more suspicious than students who don’t.

Fluency becomes a liability, which is strange to say.

There are already names floating around for this—I’m not sure which one sticks. Maybe none of them do.

  • AI laundering,
  • humanizing AI text,
  • adversarial writing.

It feels like writing is being judged not just by what it says, but by how well it performs being written by a person who maybe struggled a little, but not too much.


Marginal note:
I’m starting to think AI kind of wins this either way. It cleans the writing, and then we add the same human sloppiness back in. It’s almost like a habitat restoring itself, or something like that.
