Athens

Orwell's 6 Rules for Writing, Applied to AI-Generated Text

- Moritz Wallawitsch

In 1946, George Orwell wrote an essay called "Politics and the English Language." It is the sharpest diagnosis of bad writing ever published. And it describes AI-generated text with eerie precision - nearly 80 years before such text existed.

Orwell's central complaint was that writers had stopped choosing words. Instead, they were "gumming together long strips of words which have already been set in order by someone else." The result was prose that sounded correct but said nothing. Sentences assembled from pre-existing phrases rather than constructed from thought.

This is literally how large language models work. They predict the next most likely token based on patterns in their training data. They are, by design, gumming together strips of words set in order by someone else. Orwell was describing the failure mode of human laziness. He accidentally described the operating principle of LLMs.
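Orwell's "gumming together strips of words" can be demonstrated in a few lines. The sketch below is a toy bigram model - an illustration of the principle, not how production LLMs are actually built (they use neural networks over subword tokens, not raw word counts). Every word it emits is chosen only because it followed the previous word somewhere in its "training data":

```python
import random
from collections import defaultdict

# Toy "training data": a string of corporate filler phrases.
corpus = ("at the end of the day we are navigating the landscape "
          "of the day to day business at the end of the quarter").split()

# Count which word follows which - the entire "model".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by repeatedly sampling a likely next word.

    No meaning is involved at any point: each word appears only
    because it followed the previous one in the corpus.
    """
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("at", 8))
```

The output is grammatical-sounding strips of the corpus reassembled at random - fluent, familiar, and empty, which is exactly Orwell's complaint scaled down to a dozen lines.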

The essay ends with six rules. Each one is a lens for understanding why AI writing feels wrong and how to fix it.

Rule 1: Never Use a Metaphor, Simile, or Other Figure of Speech Which You Are Used to Seeing in Print

Ask an AI to write about a startup and you will get "navigating the landscape," "at the end of the day," "a double-edged sword," and "the tip of the iceberg." Ask it to write about challenges and everything becomes a "journey." Ask it about technology and you will hear about "unlocking potential" and "pushing boundaries."

These are dead metaphors. Orwell called them images that have "lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves." They signal that nobody is thinking. When a human uses them, it means they reached for the nearest cliche instead of describing what they actually see. When an AI uses them, it means the pattern was so common in the training data that it became the default output.

The fix is the same in both cases. Delete the dead metaphor. Say what you actually mean. If you can't, you probably don't yet know what you mean - which is useful information.

When AI writes "this is just the tip of the iceberg," replace it with a specific claim. "There are six more problems we haven't addressed." The specificity forces precision. The dead metaphor let the writer (or the model) avoid being precise.

Rule 2: Never Use a Long Word Where a Short One Will Do

AI text is addicted to Latinate vocabulary. "Utilize" instead of "use." "Implement" instead of "do." "Facilitate" instead of "help." "Demonstrate" instead of "show." "Comprehensive" instead of "full." "Subsequently" instead of "then."

This happens because the training data over-represents formal, corporate, and academic writing. These domains reward long words. They mistake polysyllabic vocabulary for intelligence. The AI learns this association and reproduces it everywhere, even when you're writing a blog post or a personal essay.

Orwell used a devastating example. He took a verse from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Then he translated it into modern bureaucratic English:

Objective consideration of contemporary phenomena compels the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

The original: 49 words, 60 syllables. The translation: 38 words, 90 syllables. Fewer words, more syllables. The meaning is the same. The humanity is gone. The second version is what AI produces by default.

When editing AI output, count the syllables. Swap long words for short ones. The writing will immediately sound more human, because humans who are actually thinking tend to reach for plain words.

Rule 3: If It Is Possible to Cut a Word Out, Always Cut It Out

AI is verbose. This is not a bug. It is the predictable result of optimizing for "helpfulness." Models are trained to be thorough, to cover all angles, to never leave a question half-answered. The result is padding. Filler words, redundant clauses, throat-clearing introductions.

A human writer might say: "The meeting was pointless." An AI will say: "It is worth noting that the meeting, while well-intentioned in its aims, ultimately failed to produce any actionable outcomes that could be considered meaningful in terms of advancing the project's core objectives."

Every "it is worth noting that," every "in terms of," every "it is important to remember that" can be cut. The sentence that remains will be stronger. As Klinkenborg puts it in Several Short Sentences About Writing, "there's often a fine sentence lurking within a bad sentence."

AI-generated text is full of fine sentences trapped inside bloated ones. Your job as editor is to free them.

Rule 4: Never Use the Passive Where You Can Use the Active

AI defaults to passive voice constantly. "The decision was made" instead of "we decided." "It should be noted" instead of "note this." "The results were analyzed" instead of "we analyzed the results."

The passive voice hides the actor. It strips agency from sentences. Orwell argued that this was politically useful - you could describe terrible things without naming who did them. "Mistakes were made" is the classic example.

AI uses the passive for a different reason. It has no self and no perspective. It cannot naturally write "I think" or "we decided" because it has no "I" and made no decisions. So it retreats to passive constructions that avoid committing to an actor.

This is why Orwell's observation that "his brain is not involved, as it would be if he were choosing his words for himself" is so apt. The AI has no brain to involve. The passive voice is its natural habitat.

When editing AI output, find every passive construction and ask: who did this? Then rewrite the sentence with that actor as the subject. The writing will become more direct, more honest, and more alive.

Rule 5: Never Use a Foreign Phrase, a Scientific Word, or a Jargon Word if You Can Think of an Everyday English Equivalent

Ask AI to write about business and you get "synergize," "leverage," "paradigm shift," "holistic approach," and "move the needle." Ask it about AI itself and you get "hallucination," "alignment," "emergent capabilities," and "reasoning." Each domain has its jargon, and the AI reproduces it faithfully because jargon is dense in its training data.

Jargon serves a purpose among specialists. Between a surgeon and an anesthesiologist, technical terms are precise and efficient. But AI uses jargon even when writing for a general audience. It cannot gauge its reader. It doesn't know if it's writing for a specialist or a curious beginner, so it defaults to the terminology most common in its training data for that topic.

Orwell's deeper point was that jargon enables lazy thinking. When you say "leverage our core competencies," you avoid specifying what you're actually good at and how you plan to use that advantage. The jargon fills the space where thought should be.

Replace every piece of jargon with a plain explanation. If the sentence collapses, the original had no substance. If it survives, it's now accessible to everyone instead of just insiders.

Rule 6: Break Any of These Rules Sooner Than Say Anything Outright Barbarous

This is the most important rule and the one AI cannot follow.

Orwell understood that rules are heuristics, not laws. Sometimes a cliche is the right choice. Sometimes the passive voice is more natural. Sometimes the long word carries a shade of meaning the short one doesn't. A good writer knows when to break the rules because they have taste, judgment, and a sense of what sounds right in context.

AI has none of these. It can follow instructions mechanically. Tell it to use active voice and it will rewrite every sentence in active voice, even the ones that sound better passive. Tell it to be concise and it will strip out detail that matters. It cannot weigh one rule against another and decide which takes priority in this particular sentence.

This is why the writer must remain in the loop. AI can apply rules. Only a human can know when to break them. Orwell's warning that "they will construct your sentences for you - even think your thoughts for you" is not a prediction about AI. It is a description of what happens when you accept AI output without judgment.

The Deeper Problem

Orwell wrote: "The great enemy of clear language is insincerity."

AI is not insincere. It has no intentions at all. But the effect is the same. It produces language that sounds like it means something without anyone meaning it. The words are assembled from patterns rather than generated from thought. The result is prose that occupies space without illuminating anything.

Orwell also wrote that "the slovenliness of our language makes it easier for us to have foolish thoughts." This cuts both ways with AI. If you accept sloppy AI output, you start thinking in its patterns. Its cliches become your cliches. Its bloated constructions become your sense of what "good writing" sounds like. The tool shapes the user.

The solution is not to avoid AI. It is to read its output the way Orwell read the political prose of his day: with suspicion, a red pen, and clear principles for what good writing looks like.

Using Orwell's Rules as an Editing Framework

Here is a practical workflow. Generate a draft with AI. Then run through Orwell's checklist:

  1. Find every dead metaphor. Replace it with something specific or delete it.
  2. Find every long word. Try the short one. Keep whichever is clearer.
  3. Read every sentence and ask: can I cut a word? Then cut it.
  4. Find every passive construction. Rewrite it with a named actor.
  5. Find every piece of jargon. Replace it with plain language.
  6. Read the result aloud. If any change sounds barbarous, undo it.
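Some of these passes can be partly mechanized. The sketch below is a crude pattern-based linter for Rules 1 through 4 - the word lists and the passive-voice regex are illustrative assumptions, not exhaustive, and Rule 6 (judgment) still belongs to you:

```python
import re

# Illustrative lists only - extend them with the cliches your drafts favor.
DEAD_METAPHORS = ["tip of the iceberg", "double-edged sword",
                  "at the end of the day", "navigating the landscape"]
LATINATE = {"utilize": "use", "facilitate": "help", "demonstrate": "show",
            "subsequently": "then", "comprehensive": "full"}
FILLER = ["it is worth noting that", "it is important to remember that",
          "in terms of"]
# Crude passive detector: a form of "to be" followed by an -ed word.
PASSIVE = re.compile(r"\b(?:is|are|was|were|been|being|be)\s+\w+ed\b",
                     re.IGNORECASE)

def orwell_flags(text):
    """Return (rule, match) pairs flagging likely violations of Rules 1-4."""
    low = text.lower()
    flags = []
    for phrase in DEAD_METAPHORS:                      # Rule 1
        if phrase in low:
            flags.append(("rule 1: dead metaphor", phrase))
    for word, plain in LATINATE.items():               # Rule 2
        if re.search(r"\b" + word + r"\b", low):
            flags.append(("rule 2: long word", f"{word} -> {plain}"))
    for phrase in FILLER:                              # Rule 3
        if phrase in low:
            flags.append(("rule 3: filler", phrase))
    for m in PASSIVE.finditer(text):                   # Rule 4
        flags.append(("rule 4: passive voice", m.group(0)))
    return flags

sample = "It is worth noting that the report was completed to utilize the tool."
for rule, match in orwell_flags(sample):
    print(rule, "-", match)
```

A tool like this only raises flags; deciding whether a flagged sentence is actually barbarous - Rule 6 - is the part no pattern matcher can do.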

This process is tedious in a chat-based workflow. You would need to paste text into ChatGPT, ask it to apply each rule, then manually compare the output with your original. By the time you've done six passes, you've lost track of what changed where.

This is where an editor with inline diffs changes the game. Athens shows every AI edit as a visible change in your document - red for deletions, green for additions. You can accept or reject each one individually. Ask the AI to tighten a paragraph and you see exactly which words it cut. Ask it to switch to active voice and you see exactly which sentences it restructured. You apply Orwell's rules with precision instead of guessing what the AI changed.

Orwell wrote his rules for humans who had gotten lazy. They apply even better to a technology that was never thinking in the first place. The writer who takes AI output and subjects it to these six tests will produce something better than either the AI or a lazy human would alone.

The AI generates. You judge. Orwell gave you the criteria nearly 80 years ago.