Why Most Discussions Fail

The Discussion section is where most papers lose reviewers. Not the Methods, which can be verified. Not the Results, which speak through data. The Discussion—where the author is supposed to demonstrate what the findings mean.

The reason most Discussions fail is not that researchers write them poorly. It is that researchers misunderstand what the section is for.

A bad Discussion repeats the Results in prose. A mediocre Discussion compares findings to previous studies. A good Discussion does something much harder: it interprets the results in the context of what the field does not yet know.

That difference—between summarizing and interpreting—is the gap that separates papers reviewers accept from papers they send back with the comment “lacking insight.”

The Summary Trap

The most common pattern in failed Discussion sections follows a predictable formula:

“We found that X was associated with Y. This is consistent with the findings of Smith et al. (2019) and Lee et al. (2021), who also reported a similar association. However, Wang et al. (2020) found no such association, which may be due to differences in sample size.”

This structure appears in thousands of published papers. It is technically correct. And it says almost nothing.

What it does is narrate. It places your findings alongside other findings and notes whether they agree or disagree. If they disagree, it offers a surface-level explanation—usually sample size, population differences, or methodological variation.

The problem is that this kind of writing does not require any interpretation. A reader could construct this paragraph from your Results table and a quick search of the literature. The author’s expertise—their understanding of the field’s unresolved questions, their judgment about what these findings change—is nowhere in the paragraph.

Reviewers sense this immediately. They may not articulate it precisely, but the feedback is consistent: “The Discussion is descriptive.” “The authors do not go beyond their data.” “Lacking critical analysis.”

These comments are not asking for speculation. They are asking for thinking.

What Interpretation Actually Means

Interpretation is the act of connecting your specific findings to the broader uncertainty in your field. It answers a question that no one has answered yet—or at least advances the field’s understanding of that question.

To interpret, you need to know three things:

  1. What your field currently believes — the dominant framework, the accepted mechanisms, the working assumptions.
  2. Where that understanding breaks down — the contradictions, the unexplained observations, the questions that remain open.
  3. How your findings interact with those gaps — whether they support the current framework, challenge it, refine it, or add a new dimension.

Without knowledge of the gaps, there is nothing to interpret against. Your findings exist in isolation, and the best you can do is compare them to other isolated findings. That is description, not analysis.

This is why researchers who are deeply embedded in their field write better Discussions: they know what is uncertain, contested, or missing. They know which questions their findings speak to. They can place their results in the landscape of unknowns, not just the landscape of knowns.

The Architecture of a Working Discussion

A Discussion that works follows a specific intellectual progression. Each section answers a different question, and the order matters.

What did we find, and why does it matter?

The opening paragraph of the Discussion should do one thing: state the principal finding and immediately connect it to why it matters. Not why the topic matters—why this specific finding matters.

This is not a restatement of the Results. It is a framing move. You are telling the reader what to pay attention to and why.

A weak opening: “In this study, we found that treatment A was associated with a 15% reduction in complication rates compared to treatment B.”

A stronger opening: “The observation that treatment A reduced complication rates by 15% suggests that [mechanism] may play a larger role in [condition] than current guidelines assume.”

The difference is that the second version makes a claim. It moves from what was observed to what it might mean. The first version just reports.

How do these findings fit with—or challenge—what we thought we knew?

This is where comparison to the literature belongs. But the comparison should serve interpretation, not replace it.

When your findings agree with previous work, the relevant question is not “Who else found this?” It is “What does this convergence tell us about the underlying mechanism or principle?”

When your findings disagree with previous work, the relevant question is not “Why might they be different?” It is “What does this disagreement reveal about the limits of our current understanding?”

The difference is subtle but fundamental. In the first approach, the literature is the subject. In the second approach, the field’s understanding is the subject, and both your findings and the literature are evidence.

What are the implications—and what are the boundaries?

Implications are not conclusions. A conclusion is what you found. An implication is what should change—in thinking, practice, or future research—because of what you found.

Most Discussion sections either avoid implications entirely (too cautious) or overstate them (too ambitious). Both fail for the same reason: the author has not clearly defined the boundary between what the data supports and what it suggests.

The most effective approach is to state the implication explicitly and then immediately define its limits:

“These findings suggest that [implication]. However, this interpretation is constrained by [specific limitation], and confirmation would require [specific study design].”

This is not hedging. It is precision. Reviewers distinguish between authors who limit their claims because they understand the boundaries and authors who limit their claims because they are afraid to commit. The former signals expertise. The latter signals uncertainty.

What don’t we know yet?

The final section of a strong Discussion identifies what remains unknown—not as a generic call for “future research,” but as a specific articulation of the next question that needs answering.

“Future studies are needed” is the most meaningless sentence in academic writing. Every Discussion contains it. It communicates nothing.

What communicates something: “The mechanism linking A to B remains unclear. Specifically, it is unknown whether the effect operates through [pathway 1] or [pathway 2], and distinguishing between these would require [specific methodology].”

This kind of statement demonstrates that the author understands the limits of their own contribution. It also does something strategically valuable: it defines the space your paper occupies. By specifying what you did not answer, you clarify what you did answer.

Five Patterns That Signal a Failing Discussion

The summary trap is the first of these patterns. Four others consistently produce reviewer rejection.

Pattern 2: The defensive limitation section. Limitations are framed as apologies rather than as honest maps of the study’s boundaries. “A limitation of our study is the small sample size” tells the reviewer nothing useful. “The sample size of 87 limits the precision of the effect estimate but is consistent with published studies of this population; a larger cohort would be required to detect effects smaller than [X]” tells the reviewer you understand what your data can and cannot say.

Pattern 3: The misplaced conclusion. The Discussion ends with a summary of findings rather than an interpretive statement. The final paragraph should answer: “Given everything discussed here, what does this study change about how the field should think?” Not “In conclusion, we found that…”

Pattern 4: The disconnected literature. References are cited as supporting evidence without explaining why the agreement or disagreement matters. Citing ten papers that found the same result is not analysis. Explaining what the convergence implies about mechanism or practice is.

Pattern 5: The hypothesis-less interpretation. The Discussion makes claims without specifying the mechanism. “Treatment A may work through anti-inflammatory pathways” is not interpretation; it is speculation. “These findings are consistent with the hypothesis that treatment A reduces complication rates via [specific pathway], given that [specific evidence]” is interpretation. Mechanism matters because reviewers in clinical and biological sciences are trained to ask for it.

The Discussion Blueprint

For a Discussion section that functions, here is a working structure:


Paragraph 1 — Principal finding + immediate significance
State the single most important finding. Connect it immediately to what this means for how the field currently thinks. Avoid restating the Results. Make a claim.

Paragraphs 2–3 — Contextualization
Place the finding within the existing literature. When findings agree: what does convergence suggest? When findings disagree: what does the disagreement reveal? Keep the field’s understanding as the subject, not the individual papers.

Paragraph 4 — Mechanism or explanation
If the study is observational: what is the most plausible explanation for the observed association? If the study is experimental: what does the finding suggest about how the intervention works? This is the section where domain expertise is most visible.

Paragraph 5 — Implications
What should change—in clinical practice, policy, or research design—because of these findings? State it directly. Then define exactly where the evidence ends and where uncertainty begins.

Paragraph 6 — Limitations with boundaries
Describe each limitation as a specific constraint on interpretation, not as a general weakness. For each limitation, specify what kind of evidence would be required to address it.

Paragraph 7 — Future directions
Identify the single most important unanswered question that follows directly from this study. Specify what methodology would be required to answer it. Avoid generic calls for “larger studies.”


This is a template, not a formula. Some papers require more contextualization; others require more mechanistic explanation. What does not change is the requirement that each paragraph serves interpretation, not narration.

Why Discussions Fail: The Deeper Reason

The structural explanation—that authors summarize instead of interpret—is accurate but incomplete. The deeper question is why.

The answer, in most cases, is that interpretation feels risky.

Summarizing your results is safe. Nobody can argue with a summary. Comparing your findings to previously published findings is safe. You are standing on the authority of the literature.

But interpretation—stating what you think the findings mean, how they change understanding, what they suggest about mechanism or practice—exposes you. It is your judgment, not your data, that is on display.

This is exactly what reviewers and editors want. But it is also exactly what researchers are trained to avoid.

Graduate training emphasizes objectivity, caution, and deference to the literature. These are valuable instincts in the Methods and Results. They are destructive in the Discussion, where the reader needs to hear the author’s informed interpretation, not a neutral summary.

The Discussion is the one section of a paper where the author is expected to think out loud. When authors refuse to do so—out of caution, habit, or fear—the section fails regardless of how well it is written.

The Difference Between Discussion and Conclusion

A related problem is the confusion between Discussion and Conclusion.

The Discussion is an analytical space. It is where you examine what the findings mean, how they relate to existing knowledge, and where uncertainty remains.

The Conclusion is a summary space. It restates the principal findings and their main implications in condensed form. It should add no new reasoning.

When these two sections blur—when the Discussion reads like an extended Conclusion, or the Conclusion introduces new analysis—reviewers notice. The paper feels structurally confused, even if no individual sentence is wrong.

The test is simple: if you can delete the Discussion and still understand the paper’s main claims from the Conclusion alone, the Discussion did not do its job. It added words but not thinking.

What a Working Discussion Signals

A Discussion that interprets rather than summarizes sends a clear signal to editors and reviewers: this author understands their field deeply enough to position their own work within it.

It demonstrates that the author knows what questions are open, where the evidence is conflicting, and how their findings move the conversation forward—even modestly.

This is the signal that separates a “descriptive” paper from an “insightful” one. The data can be identical. The interpretation is what creates the difference.

Most researchers have the knowledge to write this kind of Discussion. They know their field. They know the gaps. They know what their findings suggest.

What they often lack is permission—internal permission to claim an interpretation, to go beyond the safe territory of results and comparisons, and to say what they actually think the data means.

The Discussion is not a place for speculation. But it is also not a place for silence. It is the place where the author earns their contribution—by thinking, not just reporting. The blueprint above is a scaffold. The thinking is yours to supply.



Tuyen Tran

Pediatric surgeon and independent clinical researcher. I write about how real clinical research actually works — built from real manuscripts, real mistakes, and AI used deliberately as a thinking tool.