I spent a lot of time at TikTok watching AI-generated ad scripts fail in ways that were instructive. The model could write, and the words were coherent. Yet the outputs were genuinely worse than what a mid-level human copywriter would produce, not because the language was bad but because the context was thin.

The problem wasn’t capability. It was input. Someone would write a one-line prompt — “write an ad for our sports drink targeting Gen Z” — and expect something good. That’s not how it works. A good copywriter working that brief would spend twenty minutes asking clarifying questions before writing a word. The AI just started writing, because nobody told it what it needed to know.

This isn’t a model problem. It’s a workflow problem. The quality of any AI content output is roughly determined by the context that went in: not the volume of words, but how relevant and specific they are. The target user’s exact situation. What they’ve seen before. What they’re likely to be skeptical about. The tone the brand has established. What worked last time and why.
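One way to make that list concrete is to treat the brief as structured data rather than free text, and render it into the generation request. This is a minimal sketch, not how any particular product works; the schema, field names, and sample brief are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CreativeBrief:
    """Structured context for an ad-generation request (hypothetical schema)."""
    product: str
    audience: str                      # who exactly, not just a demographic label
    prior_exposure: str                # what this audience has already seen
    likely_skepticism: str             # objections the copy should preempt
    brand_tone: str                    # established voice to match
    past_learnings: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render every field; a missing field is a question the product
        # should have asked the user before generating anything.
        learnings = "\n".join(f"- {x}" for x in self.past_learnings) or "- none recorded"
        return (
            f"Write ad copy for {self.product}.\n"
            f"Audience: {self.audience}\n"
            f"They have already seen: {self.prior_exposure}\n"
            f"They will be skeptical of: {self.likely_skepticism}\n"
            f"Brand tone: {self.brand_tone}\n"
            f"What worked before and why:\n{learnings}"
        )

brief = CreativeBrief(
    product="a sports drink",
    audience="college athletes who train before 7am",
    prior_exposure="generic 'fuel your hustle' energy-drink ads",
    likely_skepticism="sugar content and vague health claims",
    brand_tone="dry, understated, no hype",
    past_learnings=["specific training scenarios outperformed lifestyle shots"],
)
prompt = brief.to_prompt()
```

The point isn’t the dataclass. It’s that every field here is a clarifying question the human copywriter would have asked, made into something the workflow can’t skip.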

When you provide that kind of context, the outputs change dramatically. Not because the model got smarter, but because you gave it something to reason about instead of asking it to generate from nothing.

The way I came to think about this: a single prompt is like asking someone to cook you dinner without telling them what you’re hungry for, what’s in the fridge, or whether you have any restrictions. A good cook can make something, but probably not the thing you wanted. Specifying the constraints is itself the skill.

This has implications for how you build content products. Most of the early AI content tools were essentially prompt UIs. They put a text box in front of you and sent whatever you typed to the model. That design assumes the user knows how to specify what they need, which is rarely true. The better products recognize that users don’t know how to prompt well, and that the product’s job is to extract context through the interface rather than expecting users to supply it through text.

The TikTok work made this concrete for me. The ads that performed well were the ones where we’d pulled signals from actual audience behavior — what they’d engaged with, what they’d scrolled past, the specific language patterns that appeared in comments from different segments — and built those signals into the generation context. The model wasn’t doing anything magical. It was just working with richer information.
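A sketch of what “building those signals into the generation context” could look like. This is not the actual pipeline; the function names, the bigram heuristic, and the sample comments are illustrative assumptions:

```python
from collections import Counter
import re

def extract_segment_language(
    comments_by_segment: dict[str, list[str]], top_n: int = 3
) -> dict[str, list[str]]:
    """For each audience segment, surface frequent phrases from real comments
    (approximated here by the most common word bigrams)."""
    result = {}
    for segment, comments in comments_by_segment.items():
        bigrams = Counter()
        for comment in comments:
            words = re.findall(r"[a-z']+", comment.lower())
            bigrams.update(zip(words, words[1:]))
        result[segment] = [" ".join(b) for b, _ in bigrams.most_common(top_n)]
    return result

def build_generation_context(segment: str, phrases: list[str]) -> str:
    """Fold observed language patterns into the context sent to the model."""
    return (
        f"Target segment: {segment}\n"
        f"Language patterns observed in this segment's comments: {', '.join(phrases)}\n"
        "Echo this register without quoting it verbatim."
    )

signals = extract_segment_language({
    "early-morning lifters": [
        "no cap this actually hits before a lift",
        "this actually hits different pre workout",
    ],
})
context = build_generation_context(
    "early-morning lifters", signals["early-morning lifters"]
)
```

In practice the signal extraction would be far richer (engagement data, scroll-past rates, segment models), but the shape is the same: observed behavior in, structured context out.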

What changes when you get this right is that the outputs start feeling like they were written for someone, not at someone. That’s the actual difference between content that converts and content that doesn’t. The technology can get you to “competent.” Context is what gets you to “resonant.”

I think about this now every time I see a demo of a content generation product. The demo prompt is always fully specified. The user always knows exactly what they want and describes it perfectly. Real users don’t work that way. The gap between demo and reality is almost entirely a context gap.