Understanding AI in Academic Writing

How AI reveals thinking problems you didn’t know you had


Last week, a PhD student messaged me: “I used ChatGPT to write my Introduction, but the reviewer said it ‘lacks clear argument.’ I don’t understand what went wrong. I told it to write academically.”

This wasn’t an AI failure. It was a misunderstanding of what AI actually does: the common mistake of treating AI as a thinking engine when it is actually a clarity amplifier.

The student assumed AI could generate academic thinking. What it actually did was expose that the thinking wasn’t there yet.

AI doesn’t replace cognitive work. It makes unclear thinking visible.

If you don’t know what you’re trying to argue, AI will produce fluent sentences that say nothing at all.


AI doesn’t understand your research—it recognizes patterns

Tools like ChatGPT or Claude are trained on vast text corpora that include enormous amounts of academic writing. They know what academic writing looks like: formal register, citation density, hedging language, structured paragraphs.

What they don’t know is what you’re actually trying to say.

When you prompt: “Write an academic introduction about X,” the AI produces something that sounds academic. It will include transition phrases, cautious qualifiers, and plausible-sounding claims.

But it cannot know:

  • What gap your research addresses
  • Why your question matters in your specific context
  • What argument you’re building toward
  • What position you are ultimately taking (descriptive, explanatory, or argumentative)

This is the crucial point: AI generates text based on probability, not understanding.

If your own thinking is vague, the AI’s output will be fluent but empty. It mirrors your clarity—or lack of it.


Example: when fluency hides the absence of thought

A colleague of mine once showed me an AI-generated Methods section. Grammatically perfect. Structurally sound. Completely unusable.

The problem wasn’t the language. It was that my colleague hadn’t decided:

  • Which variables to control for
  • How to operationalize key terms
  • What limitations to acknowledge upfront

The AI filled those gaps with generic phrasing. It wrote sentences like:

“Appropriate statistical methods were applied to ensure validity and reliability.”

Technically correct. Completely uninformative. A reviewer can’t evaluate what isn’t specified—and that usually results in rejection, not revision.

The AI had no way to know which methods were “appropriate” because my colleague hadn’t decided yet.

When he clarified those decisions himself, the AI became useful—not for generating content, but for refining phrasing and checking logical flow.

The lesson: AI can help you express clear thinking. It cannot create clear thinking for you.


Where AI actually helps: making your assumptions visible

Here’s where AI becomes genuinely useful in research: it forces you to articulate what you already know.

When you write a detailed prompt, you’re already doing intellectual work:

  • Defining your population
  • Specifying your outcome
  • Clarifying your methods
  • Identifying potential confounders

AI doesn’t do that work. You do it while constructing the prompt.

The act of prompting is itself a form of structured thinking.

Example from my own work:

I was struggling to clarify a research question about postoperative complications. My initial prompt to Claude was vague:

“Help me write about surgical complications in children.”

Claude produced generic text. Unhelpful.

Then I rewrote the prompt:

“I’m investigating whether preoperative nutritional status (measured by albumin and weight-for-height Z-score) predicts surgical site infection rates in children under 5 undergoing abdominal surgery. The main confounder I’m concerned about is surgical duration. Help me identify what I’m assuming about causality.”

Claude’s response was immediately more useful—not because it wrote better text, but because my prompt revealed I’d already done the conceptual work. The AI simply helped me see assumptions I hadn’t made explicit.

AI functioned as a thinking partner, not a replacement.


The real workflow shift: using AI to refine thinking, not generate it

Most people approach AI the wrong way. They treat it as a content generator:

“Write my introduction.”
“Summarize these papers.”
“Generate research questions.”

This inverts the proper relationship between researcher and tool.

AI should refine thinking that already exists, not create thinking from scratch.

A better workflow:

  1. Write a rough draft yourself (even if it’s terrible)
  2. Use AI to identify unclear logic, weak transitions, or unsupported claims
  3. Revise based on that feedback
  4. Repeat

The draft doesn’t need to be good. It just needs to exist. Once you’ve committed something to text, AI can help you see where the thinking breaks down.

Without that initial draft, AI has nothing to work with except pattern-matching.


What this means practically

If you’re using AI for academic work, the most important question to ask yourself is:

“Do I know what I’m trying to say?”

If the answer is no, AI will not help. It will produce fluent confusion.

If the answer is yes—even tentatively—AI becomes useful as:

  • A clarity check (“Does this paragraph actually support my claim?”)
  • A phrasing assistant (“How can I say this more precisely?”)
  • A structural mirror (“Where does my logic break down?”)

None of these uses replace judgment. They support it.


A practical test

Before using AI to write any section of your work, try this:

Write one paragraph in your own words explaining what you’re trying to argue.

If you can’t do that clearly, AI won’t save you. Fix the thinking first.

If you can do it, even roughly, AI can help you refine, restructure, and polish.

Clarity comes before automation.


Final point

AI is powerful, but it’s not intelligent in the way we often assume. It doesn’t think. It doesn’t understand. It pattern-matches based on probability.

For academic work, that means AI reveals the quality of your thinking. If your thinking is sharp, AI helps sharpen it further. If your thinking is vague, AI makes that vagueness look professionally formatted.

The tool is neutral. The clarity—or lack of it—comes from you.