The Five Questions Worth Asking Before Any Video Brief Gets Written

Walk into any production kickoff meeting at a well-run marketing department and you'll find alignment on the things that are easy to align on. Budget is confirmed. Timeline is agreed. Deliverables are listed. The platform is decided. Everyone is ready to move.


What rarely makes it into the brief — even when it has been thoroughly discussed — is a clear answer to what the video actually needs to accomplish in the specific environment where it will run. The strategic thinking exists. The gap is in how much of it survives the handoff into production.


These five questions take about twenty minutes to answer before a brief gets written. They don't replace the brief — they change the shape of everything that goes into it.


Question 1: Where exactly is this living, and what are the real constraints of that environment?

Not which platforms it will run on. That's a distribution checklist, and most briefs have one. The actual question is about attention context.


What is the viewer doing in the five seconds before they encounter this video? How much time will they give it before they decide whether to keep watching — and what does the environment demand in terms of format, pacing, and opening hook? A video built for a homepage hero, where the visitor has chosen to be there and is in an evaluative mindset, performs completely differently than the same video adapted for pre-roll, where the viewer is trying to get past it. A video built to run in both environments without being rebuilt for either will underperform in both.


This question forces a specific answer before any creative decision gets made. The brief that says "this will run on our website, LinkedIn, and as a paid social ad" without distinguishing between those contexts is a brief that is asking for one video to do three incompatible jobs.


Question 2: What does the viewer need to feel or do immediately after watching — and what is standing in the way of that right now?

Most briefs capture the strategic intent in brand terms — communicate our values, introduce the product line, tell the origin story. Those are legitimate goals. But they describe what the video will contain, not what it will cause the viewer to feel or do. That translation — from strategic intent to audience outcome — is where briefs most often have a gap.


An outcome is something the viewer feels or does after watching. Submits a form. Shifts from skeptical to curious. Understands for the first time why this product solves a problem they actually have. Shares it because it articulated something they've been trying to say.


The second half of this question is where the actual creative insight lives. What is standing in the way of that outcome right now — before the video exists? Is it awareness? Trust? A specific misconception? Friction in the buying process? The answer to that question tells you what the video needs to address. Without it, the creative has no real problem to solve, and a video with no problem to solve tends to describe rather than persuade.


This is the question most productions skip — and it is the reason most productions that look fine don't do anything. If you want to understand how that failure happens, the mechanics are worth examining in more depth.


Question 3: Who is approving this creative, and what will they actually use to evaluate it?

Not who signs off — that is org chart information and everyone already knows it. The real question is what the approver will use to decide whether the work is good.


If the answer is "they'll know it when they see it" — or "it needs to feel on-brand" — the brief has no objective standard against which to evaluate creative decisions. That vacuum gets filled during review by personal taste and risk aversion — not because the team isn't thinking carefully, but because without an agreed standard, taste is the only available measure. The version that makes nobody uncomfortable tends to win. This happens in good departments as reliably as anywhere else.


Naming the approval criteria in advance — even imperfectly — changes what gets presented and how it gets evaluated. If the criterion is "does this video make someone who has never heard of us understand what we do within the first fifteen seconds," that is a standard the creative can be measured against. It moves the review from "do I like this" to "does this do what we said it needed to do." That is a harder conversation, but it produces better work.


Question 4: How will you measure whether this worked — and can you define that before production starts?

Most campaigns establish success metrics after launch, when the data is already arriving. The problem with that sequence is that the production has already been built by then, and what gets built is shaped by what you planned to measure.


If the metric is landing page conversion rate, the video gets structured differently than if the metric is time-on-site, brand search lift, or share rate. The hook is different. The pacing is different. The ending is different. A video built without a defined metric is being asked to succeed at a test that hasn't been written yet.


This question is sometimes uncomfortable to answer before production starts. The discomfort is informative. If the team cannot agree on what success looks like before the camera rolls, they are unlikely to agree on it afterward either — and the post-campaign review will default to subjective impressions rather than data. Naming the metric early does not guarantee the video hits it. But it gives the production a definition of success that everyone is working toward.


Question 5: What does the next version of this look like?

Is this a standalone piece, or is there a plan to learn from it and build on it?

The answer changes how the production gets scoped — what assets get captured, what variations get built, what data will be collected to inform the next iteration. A video built as a one-off gets structured differently than a video built as the first version of an ongoing campaign. The one-off optimizes for the finished product. The iterative version builds in the ability to change.


Most productions get treated as one-offs even when the intent is ongoing. The brand runs the video, watches the results, draws some conclusions, and then starts the brief process from scratch six months later without a formal connection between what was learned and what gets built next. The institutional knowledge from each production stays inside the campaign and doesn't carry forward.


This question is not about committing to a content calendar before you know if the first piece works. It is about deciding, before production starts, whether you are building something to learn from or just something to run. Those are different briefs.


None of these questions is complicated. The disconnect isn't in the strategic thinking — most marketing teams have done that work. It's that the brief format inherited from traditional agency workflows was never designed to carry strategy across the handoff into production.


What changes when these questions get answered before the brief is written is not the production — it is the clarity everyone is working from. The creative team knows what problem they are actually solving. The approval process has a standard beyond taste. The measurement plan exists before the work goes out the door.


That twenty minutes is the most leveraged time in any production. The cost of changing direction before a brief is zero. The cost of changing direction after a shoot is substantial. And the cost of running something that doesn't work, then figuring out afterward why it didn't, is what most campaigns absorb without naming it.


If the brief is where you want to start — with the function questions answered before the format questions get asked — that is exactly where we start too.
