I spent a summer interning at China Impact Investing Network, a few blocks from Huangzhuang, earning 100 RMB a day doing translation and RSS aggregation. It was mostly tedious. Someone would point me at a feed, I’d scan through items, pull anything that seemed relevant to impact investing in China, and write a short summary. Three months of that. Now GPT does it in seconds.
That’s the part of the AI story that doesn’t get told enough: the first wave of value wasn’t replacing creative professionals, it was replacing the clerical work that used to sit between information and understanding. Aggregation, summarization, format conversion, translation — tasks that were expensive enough to require humans but not interesting enough for humans to do well.
The thing I’ve been thinking about more recently is what comes after that. Once generation is cheap, what’s the new bottleneck?
My guess is distribution — but not in the simple “getting content to people” sense. In the sense of: getting the right version of something to the right person. The broadcast model assumed one piece of content, many readers. The recommendation model tried to route existing content to the people most likely to engage with it. Neither is the same as saying: take this idea, and instantiate it in whatever form actually fits this specific person right now.
That last thing is what “generation as distribution” means to me. It’s not creating content and then distributing it. It’s creating content in the act of distributing it — generating the specific version that serves this person in this context, rather than finding the existing version that comes closest.
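The contrast between the three models can be sketched in a few lines of toy Python. Everything here is invented for illustration — the `Reader` fields, the variant list, the prompt template — and `generate_for` just assembles a prompt rather than calling a model:

```python
from dataclasses import dataclass

@dataclass
class Reader:
    industry: str
    background: str

def broadcast(core_idea: str) -> str:
    # Broadcast model: one artifact, every reader gets the same text.
    return f"{core_idea}, framed for a general audience"

def recommend(variants: list[str], reader: Reader) -> str:
    # Recommendation model: route the closest EXISTING version —
    # pick a variant that mentions the reader's industry, else fall back.
    for v in variants:
        if reader.industry in v:
            return v
    return variants[0]

def generate_for(core_idea: str, reader: Reader) -> str:
    # Generation as distribution: instantiate the idea for this reader.
    # In practice this string would be sent to a model as a prompt.
    return (f"Explain '{core_idea}' with examples from {reader.industry}, "
            f"assuming a background in {reader.background}.")

r = Reader(industry="logistics", background="operations research")
print(generate_for("network effects", r))
```

The point of the toy is only the shape of the interfaces: the first two functions choose among finished artifacts, while the third takes the idea and the reader as separate inputs and produces the artifact last.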
This is a real difference. A business book written for general audiences misses most of the readers who might have benefited from it, because the framing doesn’t match their background, or the examples don’t map to their industry, or the pace is wrong. If you can generate personalized versions — same core ideas, different surface — you’re not making the distribution problem go away. You’re converting it into a generation problem, which is now cheap.
The limitation is context. To generate the right version, the system needs to know enough about the person to make the right choices. That’s still hard. Most “personalization” is shallow — it adjusts reading level or inserts your name. The meaningful personalization requires understanding not just what you know but how you think, what analogies will land, what framing will be legible versus opaque. That’s a different kind of modeling than recommendation engines typically do.
I don’t think this entirely displaces human content creation. Something is lost in pure generation — the particular perspective, the idiosyncratic observation, the detail that makes you feel like you’re inside someone’s mind. That has value that personalization can’t substitute for. But for a certain class of knowledge transfer — the kind where the goal is just understanding, not the pleasure of the text itself — generated-and-personalized is probably going to beat broadcast-and-optimized.
I’m still uncertain about the economics of this. If every user gets a different version, what does “ownership” of the content even mean? Who captures the value — the model provider, the system designer, the person who wrote the original text the generation draws from? These aren’t solved problems. But they’re interesting ones, and I think they’re the real questions in the AI content space right now, not “can AI write.”