There’s a concept Dan Shipper wrote about — he called it “free energy for text” — that’s been sitting with me for a while. The idea is that language models let you transform any piece of knowledge through compression, expansion, or translation at basically zero cost. The same thing that used to take an editor days now takes seconds.

I started thinking about what this actually changes in practice. Not in the “AI will revolutionize education” sense, but in the specific sense of: what’s different now when you sit down to learn something?

The clearest change I’ve noticed is that the format of information is no longer fixed. If I’m trying to understand a concept from a dense academic paper, I can have it explained like I’m a first-year undergrad, or like I already know the adjacent field, or through three analogies to things I already understand. The information itself doesn’t change, but the way it meets me does. Before this, you either found a good explainer or you didn’t. Now the good explainer can be synthesized on demand.

This sounds like a modest improvement. I think it’s bigger than that, though it’s hard to articulate why. Maybe because the friction in learning has usually been less about access to information and more about information not fitting where you are. Books are written for a reader the author imagined. Courses move at one pace. Even the best teachers can only meet you partway. When the format becomes malleable, the meeting-in-the-middle problem gets a lot easier.

What I’m less sure about is what this does to how knowledge is retained. When I read a textbook chapter and struggle with it, the struggle is doing something. It’s creating the kind of encoding that lasts. If I can always get the frictionless version, do I actually know the thing afterward? I don’t have a good answer to this. My guess is that the answer varies by what kind of knowledge it is — declarative facts versus procedural skills versus conceptual understanding all work differently.

The part that strikes me most when thinking about Aibrary and similar products is the synthesis layer. Not just compression or translation, but genuine connection-making — a system that can tell you the idea in chapter 3 of this book shows up in a different form in a context you already know. That’s hard. Most tools still just compress. They summarize, they simplify, they translate across reading levels. Cross-domain connection requires something closer to reasoning.

I think that’s the actual frontier. Not “make knowledge easier to consume” but “help people see the same idea appearing in different places” — which is how expertise actually works. An expert in a domain doesn’t just know more things; they have more connections between things they know. If you could accelerate that, you’d be doing something real.

For now, the fluid knowledge tools are genuinely useful for the simpler operations. I use them mostly for bridging gaps — when I understand the shape of something but need to fill in a detail, when I need the same idea explained through a different lens because the first one didn’t click. That’s not nothing. The hard part is resisting the temptation to use them for everything, which would get you the frictionless version of learning without the friction that makes it stick.

I’m still figuring out the right balance. Probably everyone is.