Most researchers spend more time designing studies than selecting questions. The assumption, often unstated, is that any gap in the literature is worth filling—that novelty alone justifies a research question.
It doesn’t.
A research question worth asking passes several tests that have nothing to do with whether it has been studied before. It needs to be specific enough to be answerable. It needs to produce results that matter to someone. And it needs to be the right question, not just an available one.
Most questions that reach the manuscript stage pass only the last of these, and only by default. That is where the problem starts: the question itself rarely gets examined before the study begins.
The Difference Between a Gap and a Question
Literature reviews consistently produce gaps. Every systematic search ends with a list of what has not been studied, what subgroups were excluded, what comparisons were not made.
Gaps are not research questions. A gap is an absence of evidence. A question is a claim about why that absence matters.
Filling a gap produces information. Answering a question produces knowledge.
The distinction is not semantic. Studies designed around gaps frequently generate data without generating insight, because there was never a clear argument for why the data would be useful. A comparison that has never been made before is not automatically a comparison worth making.
A research question has to make a case—implicitly or explicitly—that the answer would change something: clinical practice, understanding, methodology, or the direction of future research.
Specificity Is Not the Same as Narrowness
One of the most common misunderstandings about research question design is that specificity means restriction.
A specific question is not necessarily a narrow one. Specificity means the question is answerable with a defined methodology producing a defined type of result. Narrowness refers to scope—how many cases, contexts, or populations it covers.
A question can be narrow and vague. “Is surgeon experience associated with outcomes in pediatric surgery?” is narrow in scope but vague enough that it cannot be answered without first defining surgeon experience, which outcomes, across what patient population, over what time period, and with what comparison.
A question can also be broad and specific. “Does laparoscopic versus open approach affect anastomotic leak rates in primary repair of esophageal atresia in neonates under 2.5 kg?” covers a large clinical territory but is answerable with a defined study design.
Specificity is about being answerable, not being small.
The PICO Framework—and Its Limits
In clinical research, PICO (Population, Intervention, Comparator, Outcome) is the standard framework for structuring research questions. It works well for straightforward comparative questions in intervention research.
Its limits become apparent in several contexts:
Observational research. Where there is no intervention being evaluated, the I in PICO becomes a predictor, exposure, or characteristic—and the framework gets stretched in ways that sometimes produce vague questions.
Complex outcomes. PICO works best when the outcome is singular and measurable. When the outcome is multidimensional—quality of life, functional recovery, composite endpoints—the framework can encourage false precision about what is actually being measured.
Explanatory versus descriptive questions. PICO is designed for questions about whether something works. It is less useful for questions about why something happens, how a mechanism operates, or what the experience of patients looks like.
A more flexible framing is to ask three questions of any proposed research question:
1. What would the answer look like?
2. Who would change their behavior or thinking based on that answer?
3. What would they change?
If none of these can be answered specifically, the question is not yet ready.
The “So What?” Test
In clinical research, the most reliable filter for a research question is clinical relevance. This does not mean that every study must immediately change practice. Basic science, mechanism studies, and exploratory work all have legitimate roles.
But every question should have a plausible pathway to “so what?”—a clear argument for how, if the study goes well, the results would be useful to someone.
This pathway does not need to be direct. A study confirming that a particular biomarker correlates with outcome is useful if it supports a future study that could use that biomarker to select patients. A study documenting rates of a complication is useful if it establishes baseline data for intervention evaluation.
The requirement is that the pathway exists and is articulated. “This hasn’t been studied” is not a pathway. “This would allow us to…” is.
The “so what?” test eliminates a large proportion of questions that feel legitimate because they are novel but would not produce anything actionable, even if perfectly executed.
Identifying Research Gaps Versus Identifying Research Opportunities
Not every gap is an opportunity. This distinction is consistently under-discussed in methods training.
A gap becomes an opportunity when three conditions align:
The gap exists for resolvable reasons. Some gaps persist because the question is technically unanswerable with current methods, because the patient population is too rare to study at a single center, or because variation in practice is too high for a comparison to be meaningful. Gaps that exist for these reasons are not opportunities—they are constraints.
The infrastructure for answering it exists. A question about a five-year outcome requires five years of follow-up data. A question about rare complications requires a sample size that most institutions cannot achieve alone. The opportunity is real only when the infrastructure to answer the question is available or accessible.
The timing is right. Research questions exist in a literature context. A question that was premature five years ago—when the intervention was still being refined—may now be timely. A question that was important three years ago may now have been answered well enough that a new study would add marginal value.
Identifying a research opportunity means finding the intersection of these three conditions, not just identifying an absence in the literature.
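The infrastructure condition above is often a matter of simple arithmetic. As a rough illustration—not a substitute for a proper power analysis—the standard normal-approximation formula for comparing two proportions shows why rare-complication questions exceed single-center reach. The rates below are hypothetical:

```python
from math import sqrt, ceil

def per_group_n(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to compare two proportions
    (two-sided alpha = 0.05, 80% power, normal approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a drop in a complication rate from 2% to 1% requires
# on the order of 2,300 patients per arm—well beyond most
# single-center series.
print(per_group_n(0.02, 0.01))
```

If the number this sketch returns exceeds what any accessible cohort could supply, the gap is a constraint, not an opportunity, until multi-center infrastructure exists.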
Common Question Design Mistakes
Replicating without justification. Repeating a published study in a different population without an argument for why that population would produce different results. The implicit assumption that “our patients are different” is not sufficient.
Combining outcomes that answer different questions. Creating composite endpoints that include outcomes with different clinical meanings, making the primary result uninterpretable even when statistically significant.
Confusing mechanism with effect. Asking whether a mechanism exists when the clinically relevant question is whether the mechanism is large enough to matter in practice.
Selecting the outcome for measurability rather than relevance. Using a surrogate endpoint because it is available in the data rather than because it is the outcome clinicians actually care about.
Anchoring to existing data. Designing a question around what the dataset already contains rather than what the question requires. This produces studies optimized for available data rather than for useful answers.
How a Strong Question Shapes the Entire Study
The payoff for getting the question right is not just intellectual. A well-formed question has practical consequences throughout the research process.
It defines the study design. A question about whether an intervention reduces complication rates requires a comparative study. A question about what predicts a complication requires a cohort or case-control design. The question determines the methodology, not the other way around.
It specifies the primary outcome. A specific question makes it unambiguous what the primary outcome should be. This prevents post-hoc switching and the statistical problems that come with it.
It constrains the sample. A specific population in the question prevents scope creep—the tendency to include patients who are adjacent to the target population but not actually within it.
It defines what counts as a meaningful result. A question with a clear “so what?” allows you to specify, in advance, what magnitude of effect would be clinically meaningful. This is distinct from statistical significance, and it is what reviewers and clinicians actually care about.
It anchors the Discussion. A well-formed question makes the Discussion section straightforward: you answer the question you asked, explain what your answer means, and describe its limits. Papers that begin with vague questions produce Discussions that meander.
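The gap between statistical significance and clinical meaning noted above can be made concrete with a toy calculation. The numbers are illustrative only; this is a two-proportion z-test under the normal approximation:

```python
from math import sqrt, erfc

def two_prop_p(p1, p2, n_per_group):
    """Two-sided p-value for a two-proportion z-test with equal
    group sizes (pooled normal approximation)."""
    p_bar = (p1 + p2) / 2
    se = sqrt(p_bar * (1 - p_bar) * 2 / n_per_group)
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))

# A 0.2-percentage-point difference in complication rates becomes
# "significant" with 200,000 patients per arm (p is about 0.003)...
print(two_prop_p(0.050, 0.048, 200_000))
# ...but whether a 0.2-point absolute reduction matters to patients
# is a clinical judgment no p-value can make.
```

Specifying the clinically meaningful effect size in the question itself, before seeing the data, is what prevents a trivially small but significant difference from being oversold.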
Formulating the Question Before Committing to the Study
The best time to stress-test a research question is before study design begins—not during manuscript writing, and not after data collection.
At the pre-design stage, the question can still change. Population can be redefined. Outcomes can be refined. Comparison groups can be reconsidered. Once data collection has started, these decisions are fixed, and a flawed question produces a study that cannot be salvaged at the writing stage.
A useful exercise is to write the abstract—specifically the Conclusions section—before the study begins. If you cannot write a plausible, specific conclusion statement, you do not yet have a specific, answerable question. The inability to draft a conclusion is not a writing problem. It is a question problem.
This reversal—conclusion first, design second—forces early clarity about what the study is actually trying to establish. It is uncomfortable, but it is far less uncomfortable than finishing a study and realizing the question never had a clear answer.
For a related perspective, see What Editors Actually Mean by ‘Lack of Depth’.
What Makes a Question Worth Publishing
The final filter is whether the answer is worth publishing—not whether it will be published, but whether, if everything goes right, the result deserves a place in the literature.
A publishable result:
– Shifts the prior probability of a clinical belief,
– Establishes a baseline that enables future comparative work,
– Identifies a subgroup with meaningfully different outcomes,
– Or resolves a methodological question that has limited the interpretation of prior research.
A study that produces a result that merely confirms what was already expected, at a sample size too small to be definitive, on an outcome too distant from clinical decision-making to be acted on—that study may complete successfully and still have nothing to publish.
The research question is where this determination is made. Not at the analysis stage, not at the writing stage, but when the question is first formed.
That is when the work of making it worth asking begins.

