When Knowledge Becomes Fluid: How AI Transforms Learning

Exploring how generative AI transforms static knowledge into dynamic, adaptive learning experiences. From fixed formats to fluid, contextual understanding that reshapes education and content consumption.

There’s a concept Dan Shipper wrote about (he called it “free energy for text”) that’s been sitting with me for a while. The idea is that language models let you transform any piece of knowledge through compression, expansion, or translation at basically zero cost. The same thing that used to take an editor days now takes seconds.

I started thinking about what this actually changes, practically. Not in the “AI will revolutionize education” sense, but in the specific sense of: what’s different now when you sit down to learn something?

The clearest change I’ve noticed is that the format of information is no longer fixed. If I’m trying to understand a concept from a dense academic paper, I can have it explained like I’m a first-year undergrad, or like I already know the adjacent field, or through three analogies to things I already understand. The information itself doesn’t change, but the way it meets me does. Before this, you either found a good explainer or you didn’t. Now the good explainer can be synthesized on demand.
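To make the “format is no longer fixed” point concrete, here’s a minimal sketch of the operation. It assumes the OpenAI Python SDK; the model name, prompt wording, and file path are illustrative choices of mine, not anything from the original post.

```python
# One source text, many on-demand explainers. The substance stays
# fixed; only the framing changes per audience.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reexplain(source_text: str, audience: str) -> str:
    """Re-render the same content for a different reader."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": (f"Explain the following text for {audience}. "
                         "Preserve the substance; change only the framing.")},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

paper_excerpt = open("dense_paper_section.txt").read()  # hypothetical file
for audience in ("a first-year undergraduate",
                 "someone fluent in the adjacent field",
                 "a reader who wants three analogies to familiar systems"):
    print(reexplain(paper_excerpt, audience))
```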

This sounds like a modest improvement. I think it’s actually bigger than that, but it’s hard to articulate why. Maybe it’s because the friction in learning has usually been less about not having access to information and more about information not fitting where you are. Books are written for a reader the author imagined. Courses move at one pace. Even the best teachers can only meet you partway. When the format becomes malleable, the meeting-in-the-middle problem gets a lot easier.

What I’m less sure about is what this does to how knowledge is retained. When I read a textbook chapter and struggle with it, the struggle is doing something. It’s creating the kind of encoding that lasts. If I can always get the frictionless version, do I actually know the thing afterward? I don’t have a good answer to this. My guess is that the answer varies by the kind of knowledge involved: declarative facts, procedural skills, and conceptual understanding all work differently.

The part that strikes me most when thinking about Aibrary and similar products is the synthesis layer. Not just compression or translation, but genuine connection-making: a system that can tell you the idea in chapter 3 of this book appears in a different form in another context you’re already familiar with. That’s hard. Most tools still just compress. They summarize, they simplify, they translate across reading levels. The cross-domain connection requires something closer to reasoning.

I think that’s the actual frontier. Not “make knowledge easier to consume” but “help people see the same idea appearing in different places,” which is how expertise actually works. An expert in a domain doesn’t just know more things; they have more connections between the things they know. If you could accelerate that, you’d be doing something real.

For now, the fluid knowledge tools are genuinely useful for the simpler operations. I use them mostly for bridging gaps: when I understand the shape of something but need to fill in a detail, or when I need the same idea explained through a different lens because the first one didn’t click. That’s not nothing. The hard part is resisting the temptation to use them for everything, which would get you the frictionless version of learning without the friction that makes it stick.

I’m still figuring out the right balance. Probably everyone is.

Deciding the Right Question Is More Important Than the Answer

When problems are ambiguous, the limiting factor is often not answer quality but question quality. This piece explores why framing better questions is the real leverage point in AI-assisted thinking and product work.

There’s a category of problem I keep running into when building AI products for learning: people don’t know what they don’t know. This sounds obvious, but it has a specific implication that’s easy to overlook. If you build a search box, you’re assuming the user can formulate a query. If the user is in unfamiliar territory, they often can’t. They know they’re confused. They can’t always say about what.

I ran into this personally when I started trying to understand the financial analysis side of venture investing. I could search “how to read a cap table” and get an answer. But I didn’t know that cap tables were the right thing to be reading. I didn’t know what I was missing. The gap was invisible to me until someone more experienced pointed at it.

Google’s Learn About is interesting precisely because it tries to address this. Instead of a search box, it gives you something more like a map: here’s where you currently are, here are adjacent territories, here are questions you might not know to ask. The distinction matters because exploration and retrieval are different cognitive modes. Retrieval assumes you know what you want. Exploration doesn’t.

The design problem for exploration-first learning interfaces is different from the design problem for search. For search, the main metric is result quality. For exploration, the main metric is something more like: did the person end up somewhere useful that they wouldn’t have reached on their own? That’s harder to measure and harder to optimize for.

What I’ve found trying to build in this space: the most important lever is the quality of the follow-up. When someone reads an explanation, they come away partially satisfied and partially more curious, and the new curiosity is often more specific than what they started with. If the interface can surface a good next question at that moment (not a generic list, but the specific question the explanation just made newly askable), it captures that momentum. If it doesn’t, the user goes back to zero.
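As a sketch of what that lever might look like in code, under the same hypothetical OpenAI setup as the earlier example; the prompt wording is my own guess at the constraint described above:

```python
# After delivering an explanation, ask for the single follow-up
# question that the explanation just made newly askable. The point
# is the constraint: one specific question, derived from what was
# just read, not a generic menu of topics.
from openai import OpenAI

client = OpenAI()

def next_question(original_question: str, explanation: str) -> str:
    """Surface the follow-up unlocked by this specific explanation."""
    prompt = (
        f"The learner asked: {original_question}\n\n"
        f"They just read this explanation:\n{explanation}\n\n"
        "State the single most specific follow-up question that this "
        "explanation makes askable for the first time. Return only "
        "the question."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```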

This is where most learning tools fail. They deliver an answer and then present a menu of unrelated things to look at next. That’s not how learning actually moves. You follow a chain. Each answer raises a question that only exists because you now understand something you didn’t before. The interface needs to model that chain, not just index topics.
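One way to picture “model the chain, not index topics” is a data structure where each question records which answer made it askable. Everything here, names included, is a hypothetical sketch rather than a description of any shipping product:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    question: str
    answer: str
    unlocked_by: Optional[int]  # index of the step that raised this question

@dataclass
class LearningChain:
    steps: list = field(default_factory=list)

    def ask(self, question: str, answer: str,
            unlocked_by: Optional[int] = None) -> int:
        self.steps.append(Step(question, answer, unlocked_by))
        return len(self.steps) - 1

# Each new question points back at the answer that produced it, so the
# interface can follow the learner's actual chain of reasoning.
chain = LearningChain()
root = chain.ask("How do cap tables work?",
                 "A cap table lists who owns what...")
chain.ask("What does a liquidation preference do to common shares?",
          "It determines payout order...", unlocked_by=root)
```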

RAG architectures help here because they let you ground the interface in specific sources, so you can trace why something appeared, and that traceability builds trust. The mess is that good sources for exploration differ from good sources for retrieval. A Wikipedia article is a fine retrieval source. For exploration, you often want something messier (community discussions, expert disagreements, multiple framings of the same idea) because those surfaces reveal the shape of the territory better than polished summaries.
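Here’s a toy illustration of the traceability point: retrieval returns source ids alongside passages, so whatever gets generated can cite why it appeared. The embedding is a deliberate stand-in (a real system would use an embedding model and a vector store), and the corpus entries are invented:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding so the example runs without a model; the
    # similarities it produces are meaningless. Swap in a real
    # embedding model in practice.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

corpus = {
    "wiki:cap-table": "A capitalization table records equity ownership...",
    "forum:pref-stack": "Practitioners disagree about participating preferred...",
    "blog:term-sheets": "Three framings of the same term-sheet clause...",
}
index = {doc_id: embed(text) for doc_id, text in corpus.items()}

def retrieve(query: str, k: int = 2) -> list:
    """Return the ids of the k most similar passages."""
    q = embed(query)
    return sorted(index, key=lambda d: -float(index[d] @ q))[:k]

sources = retrieve("why do liquidation preferences matter?")
# A generated answer would carry `sources` with it, so the interface
# can show the provenance of each claim instead of asking for trust.
print(sources)
```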

I don’t have a clean answer to what the right interface looks like. I’ve seen demos that are impressive and demos that are gimmicky. The version I’d actually use on a daily basis would feel less like a learning app and more like a collaborator with context: something that knows what I’ve already covered, can see where I’m confused, and can make a specific suggestion about where to go next. Not a comprehensive tour, just a nudge in the right direction at the right moment.

That’s a simpler description than the one most AI learning products market themselves with. But “nudge in the right direction at the right moment” is deceptively hard to build.

Content Platforms Will Become Distribution-by-Generation Systems

As generation costs collapse, distribution shifts from routing static content to generating the right content instance for each context. This post argues that future content platforms will distribute by generation, not retrieval.

I spent a summer interning at China Impact Investing Network, a few blocks from Huangzhuang, earning 100 RMB a day doing translation and RSS aggregation. It was mostly tedious. Someone would point me at a feed, I’d scan through items, pull anything that seemed relevant to impact investing in China, and write a short summary. Three months of that. Now GPT does it in seconds.

That’s the part of the AI story that doesn’t get told enough: the first wave of value wasn’t replacing creative professionals, it was replacing the clerical work that used to sit between information and understanding. Aggregation, summarization, format conversion, translation: tasks that were expensive enough to require humans but not interesting enough for humans to do well.

The thing I’ve been thinking about more recently is what comes after that. Once generation is cheap, what’s the new bottleneck?

My guess is distribution, but not in the simple “getting content to people” sense. In the sense of getting the right version of something to the right person. The broadcast model assumed one piece of content, many readers. The recommendation model tried to route existing content to the people most likely to engage with it. Neither is the same as saying: take this idea, and instantiate it in whatever form actually fits this specific person right now.

That last move is what “generation as distribution” means to me. It’s not creating content and then distributing it. It’s creating content in the act of distributing it: generating the specific version that serves this person in this context, rather than finding the existing version that comes closest.
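A sketch of what that might look like, under the same hypothetical OpenAI setup as before. The profile fields are my guesses at the minimum context such a system would need; none of this comes from the original post:

```python
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()

@dataclass
class ReaderProfile:
    background: str  # e.g. "backend engineer, no finance training"
    industry: str    # where the examples should come from
    pace: str        # e.g. "terse" or "step-by-step"

def instantiate(core_idea: str, reader: ReaderProfile) -> str:
    """Generate the version of an idea that fits this reader, right now.

    The stored unit is the core idea; the artifact the reader sees
    is produced at delivery time.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Core idea: {core_idea}\n"
                f"Reader background: {reader.background}\n"
                f"Draw examples from: {reader.industry}\n"
                f"Pacing: {reader.pace}\n"
                "Write the explanation this specific reader would get "
                "the most from. Keep the substance identical; change "
                "only the surface."
            ),
        }],
    )
    return response.choices[0].message.content
```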

This is a real difference. A business book written for general audiences misses 80% of the readers who might have benefited from it, because the framing doesn’t match their background or the examples don’t map to their industry or the pace is wrong. If you can generate personalized versions โ€” same core ideas, different surface โ€” you’re not making a distribution problem go away. You’re transforming a distribution problem into a generation problem, which is now cheap.

The limitation is context. To generate the right version, the system needs to know enough about the person to make the right choices. That’s still hard. Most “personalization” is shallow: it adjusts the reading level or inserts your name. Meaningful personalization requires understanding not just what you know but how you think, which analogies will land, and which framings will be legible versus opaque. That’s a different kind of modeling than recommendation engines typically do.

I don’t think this entirely displaces human content creation. Something is lost in pure generation: the particular perspective, the idiosyncratic observation, the detail that makes you feel like you’re inside someone’s mind. That has value personalization can’t substitute for. But for a certain class of knowledge transfer, the kind where the goal is just understanding rather than the pleasure of the text itself, generated and personalized is probably going to beat broadcast and optimized.

I’m still uncertain about the economics of this. If every user gets a different version, what does “ownership” of the content even mean? Who captures the value: the model provider, the system designer, the person who wrote the original text the generation draws from? These aren’t solved problems. But they’re interesting ones, and I think they’re the real questions in the AI content space right now, not “can AI write.”