Why Most Studies Lack a Real Contribution

Most studies that fail peer review do not fail because the methodology is wrong. They fail because the contribution is unclear—or because, once the methodology is stripped away, nothing new is being added to what was already known. Understanding research contribution is what separates papers that get accepted from those that don’t.

This is an uncomfortable diagnosis because it implies that the problem preceded the study. By the time a manuscript reaches a reviewer, the contribution problem cannot be fixed through revision. It is baked into the question the study was designed to answer.

Understanding why this happens—and what a real contribution actually looks like—is more useful than any writing advice applied after the fact. What editors actually mean by “lack of depth” often traces back here: not to writing quality, but to whether the study had something genuinely new to say.

What “Contribution” Actually Means

A contribution is not novelty. Novelty means the study has not been done before. Contribution means the study changes something: a clinical belief, a methodological assumption, an understanding of mechanism, or the precision of an existing estimate.

These are different. A study can be novel—studying a combination of variables that has never been examined—and contribute nothing, because the answer was predictable before the study was run, or because the answer does not affect any decision that anyone makes.

A contribution adds something that was missing and that was worth having. The “worth having” part is the criterion that most pre-study evaluations skip.

The Increment Problem

Most clinical studies are incremental by design. They take an established question, a familiar methodology, and a locally available patient population, and produce a result that is consistent with—but slightly more detailed than—the existing literature.

Incremental studies can be published. Strong journals publish them when the increment is large enough, specific enough, or when it fills a gap that the existing literature cannot fill. Weaker journals publish them because the volume of submissions requires accepting work that advances the field only marginally.

The problem arises when researchers fail to articulate what the increment actually is. “We replicated this finding in a different population” is an increment—if the population is meaningfully different in a way that was not previously established. “We replicated this finding in a slightly different time period” is usually not.

The contribution statement—what this study adds, stated explicitly—is the most important sentence in the Discussion. Most papers either don’t include it or include a version that restates the finding without explaining why the finding matters.

Why the Gap Analysis Fails

The standard approach to justifying a study is the literature gap: find what has not been studied and study it. This approach is structurally flawed in ways that produce contribution failures at scale.

Gaps in the literature exist for reasons. Some gaps exist because the question is difficult to study: the population is rare, or the methodology required is beyond the reach of most centers. Filling those gaps requires unusual resources. When researchers identify gaps without examining why they exist, they often design studies that cannot fill them properly—and produce underpowered, single-center studies on questions that require something larger.

Other gaps exist because the question is not important enough to have attracted funding or research effort. An absence of studies on a particular subgroup combination is not, by itself, evidence that the question is worth studying. It may simply reflect that the question is peripheral to anything clinicians or researchers actually care about.

A gap becomes a research opportunity only when it exists despite the question being important and the question being technically answerable. Identifying that combination—important question, answerable with available resources, not yet addressed—is the actual work of finding a contribution.

The Predictability Trap

A study that produces a result that everyone in the field would have predicted before it was run contributes confirmation, not knowledge. Confirmation has value when the stakes are high and the existing evidence is weak. It has almost no value when the finding was already accepted by the field.

The practical test is to ask a senior colleague in the area: “If this study shows X, would that change how you approach this clinically or think about this research area?” If the honest answer is “not really—that’s what we already expect,” the study may produce a publishable paper in a lower-tier journal but will struggle in strong journals.

This is not about avoiding well-designed confirmatory studies. It is about being honest about what they contribute and matching submission strategy accordingly.

The Single-Center Limitation

A specific and recurring contribution failure is the single-center retrospective study submitted to journals that have already published multicenter or prospective data on the same question.

Single-center retrospective data adds value when the question is early-stage and no multicenter data exist. Once multicenter data are available, a smaller single-center study confirms the finding with less statistical power while introducing selection biases that the larger study controlled for. The contribution is not zero—but it is small enough that strong journals will consistently decline it.

This mismatch between study design and evidence stage is one of the most common sources of rejection that researchers experience as unfair. The methodology is fine. The execution is careful. The problem is that the study was designed for a stage of evidence development that has already passed.

What a Real Contribution Looks Like

The clearest contributions in clinical research share a few characteristics.

They address a question that clinicians face and do not currently have good evidence for. The absence of prior data is not because the question is unimportant—it is because the study was difficult to conduct.

They produce a result large enough to be clinically meaningful, not just statistically significant. The effect size is large enough that it would change a decision at the bedside or a recommendation in a guideline.

They use methodology that was not previously applied to this question, allowing inference that prior studies could not make—a prospective design replacing retrospective series, patient-reported outcomes replacing surrogate endpoints, multicenter data replacing single-center.

Or they resolve a specific ongoing debate—a question on which prior studies have produced contradictory results and for which existing evidence is genuinely uncertain.

Any one of these is sufficient for a real contribution. None of them requires the study to be large, expensive, or complicated. They require the question to have been chosen carefully.

For a related perspective, see The Academic Publishing Game Nobody Explains.

The Honest Accounting

Doing this accounting before a study is designed is uncomfortable. It sometimes leads to the conclusion that the study you planned to run is not worth running in the form you planned it. That conclusion is useful, even when it is disappointing.

The alternative is to design and execute a study that was never going to contribute meaningfully, invest time in a manuscript that will circulate through rejections, and receive feedback that attributes the problem to writing quality when the real issue was the question.

The contribution check is not gatekeeping. It is a practical step that saves significant effort by ensuring the study was worth doing before the work of doing it begins.


Tuyen Tran

Pediatric surgeon and independent clinical researcher. I write about how real clinical research actually works — built from real manuscripts, real mistakes, and AI used deliberately as a thinking tool.