AI for Academic Practice
April 8, 2026

Evaluating Research Ideas: The 'Kill It Early' Protocol

Most research projects that fail do so slowly. The data collection stalls, the analysis gets stuck, the writing produces nothing usable — and months later, the conclusion is that the project was probably never viable. The idea had problems that were knowable at the start.

Evaluating research ideas early and honestly is one of the most underappreciated skills in academic work. Not because abandoning an idea feels good, but because the cost of a bad study is not just the time spent on it — it's the time not spent on better work.

Why Most Ideas Survive Longer Than They Should

There are predictable reasons why weak ideas proceed to data collection. The researcher is already invested in the idea. The data already exists, so collecting it feels like the path of least resistance. The topic is interesting — interesting enough that it's easy to assume interest equals publishability.

None of these are good reasons to run a study.

The question a research idea needs to answer isn't "is this interesting?" It's "is there a journal that would publish this, and is there a population that would care about the answer?" Those are harder questions, and they require actually thinking before opening a spreadsheet.

The Three Questions

Before committing to any study — whether it's a retrospective chart review, a prospective collection, or a systematic review — I run through three questions. They're designed to fail ideas that shouldn't proceed.

Question 1: Has this been answered well enough already?

A scoping search of three to five recent papers should establish whether the question has been answered, partially answered, or genuinely open. "Partially answered" is not the same as "worth answering again." It matters why it's partial — was the previous study underpowered, did it use a different population, was the outcome measure different in a meaningful way?

If the gap is real, name it specifically. If the justification for a new study requires multiple hedges and qualifications to survive, that's usually a sign the gap isn't as real as it seemed.

Question 2: Can I actually answer it?

This is the feasibility question, and it has three components:

  • Data: Is the required data accessible, and in what form? A dataset that exists in principle but requires extensive reconstruction is not the same as one that's ready to analyze.
  • Sample size: Is there a plausible route to the n needed to detect a clinically meaningful difference? A rough sample size estimate takes an hour. Running a study for six months and then discovering it was underpowered takes much longer.
  • Timeline: Does the project fit within a realistic window given competing obligations? An idea that requires 18 months of prospective data is a different proposition depending on where you are in a training program or academic cycle.
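The "rough sample size estimate" in the feasibility check can be done in a few lines. The sketch below uses the standard normal-approximation formula for comparing two proportions; the function name and default values are illustrative, and a real study would use dedicated software and adjust for dropout, clustering, and multiplicity.

```python
# Back-of-the-envelope sample size for comparing two proportions
# (two-sided test, normal approximation). Illustrative sketch only.
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect p1 vs p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a drop in complication rate from 30% to 20%
# requires roughly 290 patients per arm at 80% power.
```

If that number is larger than any plausible recruitment route, the idea fails Question 2 before a single chart is pulled.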

Question 3: Would anyone change practice based on the answer?

This is the question most researchers skip, because it requires stepping outside the idea and asking whether it matters to someone other than the person who had it. A finding that confirms what everyone already suspects is not a contribution. A finding that would force a reconsideration of clinical practice or research direction is.

If the honest answer to this question is "probably not," the study isn't worth doing — regardless of how well it could be executed.

When to Stop vs. When to Redesign

The protocol isn't always to kill the idea. Sometimes the answer is to redesign it.

A question that's been answered in one population may be genuinely unanswered in another. A study that's infeasible at one sample size might be viable as a pilot designed to produce preliminary data rather than definitive conclusions. And a scoping search that reveals a methodological flaw in previous work is actually a good reason to proceed.

The difference between killing and redesigning is whether the changes address a real gap or simply rescue an idea from its problems. If the revisions keep making the original question harder to recognize, it's usually a sign that something more fundamental is wrong.

The Idea Feasibility Scorecard

Use this before any protocol development. Score each item 0–2: 0 = clear barrier, 1 = manageable concern, 2 = no issue.

Originality
- [ ] Gap identified: a prior study does NOT already answer this question cleanly (0–2)
- [ ] Gap is specific: the reason for the gap is named, not inferred (0–2)

Feasibility
- [ ] Data exists and is accessible in usable form (0–2)
- [ ] Sample size estimate done and is achievable (0–2)
- [ ] Timeline fits current obligations (0–2)

Impact
- [ ] Finding would change clinical practice OR research direction (0–2)
- [ ] A journal exists that publishes this type of study with this scope (0–2)

Score interpretation:
- 12–14: Proceed to protocol development
- 8–11: Redesign before proceeding — identify which items scored 0–1 and address them
- Below 8: Stop. The idea needs more development or should be set aside
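For those who prefer to keep the scorecard in a notebook, the arithmetic above is trivial to encode. The item names mirror the checklist; the function itself is a hypothetical sketch, not a published tool.

```python
# Illustrative sketch of the scorecard arithmetic. Each item is
# scored 0 (clear barrier), 1 (manageable concern), or 2 (no issue).

ITEMS = (
    "gap_identified", "gap_specific",              # Originality
    "data_accessible", "sample_size", "timeline",  # Feasibility
    "changes_practice", "journal_exists",          # Impact
)

def interpret(scores: dict) -> str:
    """Sum the 0-2 item scores and map to the protocol's verdict."""
    if set(scores) != set(ITEMS):
        raise ValueError("score every item exactly once")
    if any(s not in (0, 1, 2) for s in scores.values()):
        raise ValueError("each item must be scored 0, 1, or 2")
    total = sum(scores.values())
    if total >= 12:
        return "proceed"   # 12-14: proceed to protocol development
    if total >= 8:
        return "redesign"  # 8-11: address the items scored 0-1 first
    return "stop"          # below 8: set the idea aside
```

The value isn't the automation, of course; it's that writing the scores down forces an honest 0 where a mental tally would round up to 1.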

A score below 8 doesn't mean the topic is wrong. It means this specific study, in this form, isn't ready.

The Point of Killing Early

The purpose of this kind of early evaluation isn't skepticism for its own sake. It's resource allocation.

Clinical researchers almost always have more ideas than time. Running a rigorous feasibility check on every new idea is how you protect the projects that deserve full execution. A question worth asking is one that has survived this kind of pressure — and that makes the work that follows feel different from work built on an untested assumption.


If you found this helpful for your manuscript, you might want to check out my Prompt Pack: Paper Structuring.
