After Sora: Why Brands Need Human‑Led Production Intelligence (Not Just Prompts)
When OpenAI shut down its Sora video app and walked away from a billion‑dollar Disney licensing deal, it looked - for a moment - like a verdict on AI video itself. In reality, it was a verdict on a particular idea of AI video: a public, deepfake‑adjacent playground that sat miles away from what studios and brands actually need.
The lesson is clear. Hollywood, and by extension serious brands in beauty, healthcare, pharma and beyond, are not looking for another viral app that can churn out synthetic clips on demand. They are looking for production intelligence: workflows that make real stories easier to tell, protect their IP and their people, and give their teams more usable material without burning more days on set.
That is exactly where my practice sits - as a human bridge between emerging AI tools and the kind of authentic, emotionally grounded storytelling brands still value most.
From deepfake playgrounds to production intelligence
Sora’s promise was seductively simple: type a prompt, get a photorealistic clip, share it. The more spectacular the output, the better. But that same openness is what made it almost unusable for rights‑sensitive work. If anyone can generate a politician, a patient, or a famous character doing anything, you don’t have a tool - you have a moderation and legal nightmare.
Contrast that with where serious AI video is heading. Newer tools and pipelines are being designed to behave more like parts of a camera department or post house: controllable, auditable and actually pluggable into existing workflows. Instead of asking, “How do we replace production?”, they ask, “How do we support directors, DPs, editors and producers with smarter, faster ways to explore and deliver?”
The direction of travel is away from chaotic, public‑facing ‘AI content’ and towards private, IP‑aware systems trained on a studio’s or brand’s own material. In that world, the most valuable assets are not prompts - they’re well‑organised libraries of trusted footage, performances and imagery that models can help you search, adapt and extend.
Why brands don’t really want ‘AI content’
This is the disconnect at the heart of many AI conversations. On the surface, a prompt‑only approach sounds efficient: five people in a room for a few weeks, generating endless variations until something ‘works’. But when you talk to teams in beauty, healthcare and pharma, their actual needs look very different:
They need genuine human connection - patients, clinicians, customers and contributors who feel believable, not synthetic.
They need trust - stories and likenesses that can withstand ethical, medical and regulatory scrutiny.
They need continuity - visual language that feels like their brand, not like a generic model’s best guess.
They need clarity around rights - no ambiguity about who owns what, or whether a shot can safely be used.
A purely AI‑generated library struggles here. You may save on shoot days, but you pay it back in time spent debating whether an image feels ‘off’, arguing about brand fit, or worrying that you’re drifting into legal grey areas. You also lose something harder to quantify: the subtle, unscripted moments that only happen when a real person relaxes in front of a camera.
Real production as the core, AI as the amplifier
My approach reverses that equation. We start with real production - carefully designed shoots that put people at ease and find the emotional core of your brief - and then use AI intelligently around that core.
On set and in the field, the priorities are human:
Creating a space where contributors feel seen and safe.
Listening for the unscripted lines and gestures that reveal who they are.
Composing images that carry your brand’s aesthetic without feeling staged or sterile.
At the same time, we’re shooting with system‑thinking in mind. We’re not just making a single hero film; we’re building a content bank:
Multiple locations, scenarios and characters that map to your key stories.
A range of coverage - wide, mid, close, detail; movement and stillness - captured with sound so it can flex across formats.
Clean metadata and structure so that material is easy to search and repurpose later.
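To make the ‘clean metadata’ idea concrete, here is a minimal sketch of what one record in such a content bank might look like, with a simple search helper. The field names, clip IDs and function are illustrative assumptions for this article, not a production schema.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One entry in a hypothetical content-bank index (illustrative only)."""
    clip_id: str
    shot_type: str                     # e.g. "wide", "mid", "close", "detail"
    location: str
    tags: list = field(default_factory=list)
    has_sound: bool = True

def find_clips(library, shot_type=None, tag=None):
    """Filter the bank by shot type and/or tag so editors can pull coverage fast."""
    results = []
    for clip in library:
        if shot_type and clip.shot_type != shot_type:
            continue
        if tag and tag not in clip.tags:
            continue
        results.append(clip)
    return results

# A toy bank with hypothetical clip IDs and tags.
bank = [
    Clip("A001", "wide", "clinic", tags=["patient", "waiting-room"]),
    Clip("A002", "close", "clinic", tags=["patient", "hands"]),
    Clip("B001", "mid", "lab", tags=["clinician"]),
]

patient_closeups = find_clips(bank, shot_type="close", tag="patient")
print([c.clip_id for c in patient_closeups])  # ['A002']
```

Even a structure this simple shows why tagging at the point of capture pays off: the library can be queried later by story, shot type or subject instead of being scrubbed through by eye.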
Once that foundation exists, AI becomes a force multiplier rather than a substitute:
Helping find the right moments in hours of footage.
Suggesting rough assemblies, alternates or trims you can then refine with human judgment.
Generating safe, on‑brand derivatives - different aspect ratios, cut‑downs, light localisations - without re‑shooting the core story.
The result is a library that still feels unmistakably human, but behaves like a smart system in the background. It is far more efficient than weeks of prompt‑tweaking, because the material is already what it needs to be; we’re simply making it easier to access and adapt.
Being the bridge between AI and creativity
In this landscape, my role is to act as that bridge. I speak both languages: the emotional language of performance, composition and story, and the technical language of workflows, models and versioning. Practically, that looks like:
Starting every project by defining the human truth at its centre, not the toolset we’ll use.
Designing shoots that anticipate how assets will be re‑used, so every hour on set multiplies across your future pipeline.
Using AI in post to remove drudge work and open creative options, while keeping final calls on tone, ethics and representation firmly human.
Advising on how your content bank can evolve into an AI‑ready library without compromising contributor trust or IP control.
For brands, this means you don’t have to choose between ‘hand‑crafted’ and ‘hi‑tech’, or between emotional resonance and operational efficiency. You can have a process that treats people and stories with care, and still gives your teams a flexible, intelligent asset base to work from.
What clients are really looking for
If you strip away the jargon, most clients want the same thing: stories that feel true, at a scale that feels sustainable. They want to know that the faces in their campaigns are comfortable with how they’re being portrayed. They want reassurance that their content won’t explode into a rights issue six months down the line. They want material that can stretch across channels and markets without feeling templated or tired.
The post‑Sora world is telling us that the answer is not another deepfake app. It’s a new layer of production intelligence built around each brand’s own IP and creative culture. My job is to help you build that layer in a way that never loses sight of the humans at its centre - on camera, behind the camera, and in the audience.