
The overlooked rule about sleep research that quietly saves time and money


It was 02:11 in a sleep lab office, the kind with a humming PC and a stack of consent forms that never seems to shrink. A junior researcher had copied a paragraph from an email thread into the protocol and left the line, “of course! please provide the text you would like me to translate.” Someone else replied by pasting the exact same line and, somehow, it still made it into the version sent to the ethics team. Everyone laughed, then everyone groaned, because the real cost wasn’t the typo: it was the week of back-and-forth it represented.

Sleep research is full of expensive kit, specialist staff, and “just one more” measure that feels harmless until it isn’t. The overlooked rule that quietly saves time and money is boring on purpose: decide what you will not measure before you start, and write that decision into the protocol so it can’t drift.

The rule nobody says out loud: pre-commit to fewer outcomes

Most sleep studies don’t fail because the hypothesis is silly. They fail because the study becomes a Christmas tree: a new bauble every time someone says, “While we’re at it, could we also collect…?” You add one extra questionnaire, then a second actigraphy device “for validation”, then nightly diaries “to capture nuance”. Then you spend months cleaning data you never truly needed.

Pre-committing to fewer outcomes isn’t anti-science. It’s pro-answer. When you define one primary outcome (and a small number of secondary outcomes) with a clear analysis plan, everything else becomes optional by design, not by enthusiasm. The admin load drops, the participant burden drops, and your statistical story stops wobbling.

The quiet savings come from friction you don’t see in the budget: fewer amendments, fewer missing data emails, fewer variable naming wars, fewer “why did we collect this?” meetings that end with nobody remembering.

Why the “more data” instinct costs you twice

There’s a specific moment it happens: a team meeting where someone says, “It’s only five extra minutes.” In sleep research, five minutes is never five minutes. It’s training, support, reminders, troubleshooting, scoring, adjudication, data dictionaries, quality checks, and a methods paragraph that suddenly needs three more citations.

And the bill arrives twice. First, in collection: participants drop out because the burden is heavy, or they comply poorly because the instructions are fussy, or they wear the device wrong because it’s one more thing. Second, in analysis: every extra measure invites extra comparisons, extra modelling choices, and extra reviewer questions. You pay again in time, and time is the most expensive consumable in a lab.

A small study with crisp endpoints often publishes faster than a sprawling one with “interesting” extras. Not because it’s simpler, but because it’s defendable.

A practical way to apply the rule (without starting a fight)

Make one page that your whole team can point at when scope creep begins. Call it an Outcome Lock, and keep it blunt.

  • Primary outcome: one variable, one timepoint, one analysis approach.
  • Secondary outcomes: two to four, each with a reason (“mechanism”, “safety”, “behavioural adherence”).
  • Exploratory outcomes: allowed, but clearly labelled as such, and limited.
  • Non-outcomes: a short list of what you are explicitly not collecting (and why).

That last bullet feels awkward, but it’s the shield. It stops the “we could just…” spiral because you’ve already decided that you won’t. The trick is to frame it as respect: respect for participants’ time, for the budget, and for the integrity of the answer.

If your study involves wearables or PSG, add a line about scoring decisions too: who scores, what rules, what reliability checks, and what happens when signals are bad. Ambiguity here is where weeks vanish.
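If your team prefers the Outcome Lock in a machine-readable form alongside the protocol, it can be sketched as a small data structure with a self-check. This is a hypothetical illustration, not a standard tool; the class, field names, and example outcomes are all made up for this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeLock:
    """One-page pre-commitment: what the study will (and will not) measure."""
    primary: str                                          # one variable, one timepoint, one analysis
    secondary: list[str] = field(default_factory=list)    # each with a stated reason
    exploratory: list[str] = field(default_factory=list)  # allowed, but labelled and limited
    non_outcomes: list[str] = field(default_factory=list) # explicitly excluded, with reasons

    def validate(self) -> list[str]:
        """Return plain-English warnings when the lock drifts from the rule."""
        warnings = []
        if not self.primary.strip():
            warnings.append("Primary outcome is missing.")
        if not (2 <= len(self.secondary) <= 4):
            warnings.append("Aim for two to four secondary outcomes.")
        if not self.non_outcomes:
            warnings.append("List at least one thing you will NOT collect.")
        return warnings

# Hypothetical example lock for a four-week actigraphy study
lock = OutcomeLock(
    primary="Total sleep time (actigraphy) at week 4, linear mixed model",
    secondary=["Sleep onset latency (mechanism)", "Adverse events (safety)"],
    non_outcomes=["Nightly free-text diaries (no budget to process them)"],
)
print(lock.validate())  # an empty list means the lock passes its own checks
```

The point of the self-check is social, not technical: when someone proposes a new measure, the question “which field does it go in, and does the lock still validate?” is easier to ask than “are you sure we need that?”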

The hidden bonus: better recruitment and cleaner results

Participants don’t drop out because they hate sleep science. They drop out because the study doesn’t fit inside their life. A tighter outcome set usually means fewer forms, fewer reminders, fewer “did you remember to…?” messages that make people feel like they’re failing a course.

You also get cleaner data. When you measure less, you can measure it properly: clearer instructions, better training, fewer device swaps, more consistent timing. And when reviewers ask why you didn’t include a fashionable metric, you can answer calmly: it wasn’t necessary to test the primary question, and we prioritised quality over breadth.

Consistency beats cleverness here. It’s the same principle clinicians use when they say, “Treat the main problem first.” In research, the main problem is usually: can we answer this one question with confidence?

“Simple wins because simple gets finished,” a senior PI told me once, after watching a team spend three months trying to rescue a diary measure nobody had budgeted to process.

A short checklist before you hit “submit” on the protocol

Run this in ten minutes. If you can’t answer quickly, your study will probably wobble later.

  1. Can every team member name the primary outcome without looking it up?
  2. Does every measure have an owner (collection, scoring, cleaning, analysis)?
  3. Have you written down what you will do with missing data before you see it?
  4. Is the participant burden realistic for someone who is tired, busy, or anxious?
  5. Do you have at least one thing you deliberately chose not to measure?

If this feels strict, good. Strictness up front is kindness later.

Key point | Detail | Why it matters to the reader
Outcome lock | Primary + limited secondary outcomes, plus explicit non-outcomes | Prevents scope creep and reduces admin load
Burden control | Fewer forms, devices, and timepoints | Improves retention and data quality
Defendable analysis | Pre-specified plan and missing-data approach | Faster review cycles, fewer reviewer traps

FAQ:

  • Isn’t collecting more data always better for future analyses? Not if it lowers compliance or creates messy, underpowered exploratory results. Bank high-quality essentials first; add extras only with clear resourcing and a plan to analyse them.
  • What if a collaborator insists on adding their favourite measure? Ask them to name the decision it will change, the time it adds, and who will own cleaning and analysis. If those answers are vague, it’s a strong sign it should be exploratory or excluded.
  • How many outcomes is “too many” in a sleep study? There’s no magic number, but a single primary outcome and a small set of secondary outcomes is usually more publishable than a long list you can’t process well.
  • Does pre-commitment reduce discovery? It separates discovery from confirmation. You can still explore, but you label it honestly, which protects you from overclaiming and protects the reader from confusion.
  • Where should the “non-outcomes” list live? In the protocol and the analysis plan (and ideally in the pre-registration). It’s most useful where future-you and reviewers can actually see it.
