Every methodology has its breaking point.

The problem-solving frameworks that get you into a top school, the pattern recognition that earns you a job at a big tech company, the “best practices” that make you a star student—all of these become systematic liabilities the moment you encounter genuinely ill-defined problems. Which, unfortunately for ambitious people, is exactly where the most important work happens.

As I explore what I want to do with my life and reflect on past decisions, I realize: we’ve been trained to excel at the wrong things.

As much as I take pride in my ability to “ask the right questions,” identify effective methods, and “navigate ambiguity” by finding answers others overlook, I’m starting to realize how much I’ve been playing in a safe zone. Having a job lined up while wanting to pursue startups part-time in college isn’t exactly what I’d call “navigating ambiguity.” When it comes to genuinely not knowing what I’ll be doing or where I’ll end up six months from now, I panic at the idea.

This reveals something uncomfortable about how institutional training actually works. In school and corporate environments, you can often be rewarded for appearing to work hard—for having the right frameworks, asking sophisticated questions, and demonstrating systematic thinking. The performance of competence gets rewarded alongside actual competence.

But in startups, that entire system breaks down. People won’t listen to you or think differently of you just because you have “Founder” or “Head of Product” stamped on your LinkedIn profile or email signature. The quality of your product and the depth of your customer empathy are all that matter. The institutional markers of success—the titles, the frameworks, the sophisticated analysis—become irrelevant when you’re face-to-face with users who simply don’t care about your credentials.

Paul Graham’s insight about startups being counterintuitive reveals something deeper about this disconnect. As he notes in “Before the Startup”, there are many ski instructors but few running instructors—not because running coaching is difficult, but because running feels intuitive while skiing requires overriding your instincts. Similarly, startups are counterintuitive, which is why startup advice exists at all. But there’s a related phenomenon: institutional training teaches us to excel at activities that seem like entrepreneurship but aren’t.

Research on complex decision-making reveals something uncomfortable: without strong self-regulation skills, people default to limiting patterns when facing uncertainty. They avoid difficulty, fall into perfectionist paralysis, or break problems down mechanically without going deep enough.

The ability to step back, see problems clearly, and consciously adapt behavior is rare. It doesn’t naturally develop with age or experience—many people carry the same behavioral patterns into middle age.

This shows up everywhere, but it’s particularly visible in how we approach complex problems.


The Problem Decomposition Trap

Institutions, as products, have done a great job of creating the impression that skill comes from completing coursework. While this might have been true years back—you’d have to do a CS degree to learn how to code—it’s becoming less relevant as an indicator. You become good at something by doing that thing, not by learning how to become good at that thing. And to actually do something well, you have to develop an understanding, rather than just follow patterns.

At leading institutions, students are trained to find the quickest path to a solution—to identify “hacks” and implement “best practices.” This system works brilliantly for well-defined problems with known solution spaces. Get the grade, pass the test, optimize the metric. But it creates a dangerous mental model when applied to genuinely ambiguous situations.

The institutional approach teaches us to pattern-match rapidly: This looks like that problem I solved before, so I’ll apply that framework. In startups, this intuition becomes a liability. The most valuable insights often come from staying with uncertainty longer than feels comfortable, from digging deeper into assumptions everyone else accepts as given.

This pattern-matching problem gets worse when you consider how different environments reward different approaches.


Safety Nets and Startup Reality

While I have limited exposure to working in big tech, I’ve picked up hints about why the standard for success is different at large companies. If you can clearly and persuasively articulate an idea and convince your superiors, the system backs you up. In existing markets, you’ll capture some share. In growth markets, if your approach doesn’t work, you can pivot quickly. The infrastructure absorbs your experimental failures.

Startups operate under entirely different constraints. Competition edges in from multiple directions, resources are brutally limited, and the cold start problem is real. The only real differentiation isn’t execution efficiency; it’s a unique understanding of the problem itself, whether that’s insight into user pain points everyone else misses or a non-obvious way of reaching customers.

This difference reveals why methodologies optimized for big company environments often fail in startup contexts. Big companies can afford to be “approximately right” because they have scale, distribution, and capital to make imperfect solutions work. Startups need to be “precisely right” about something others are wrong about.


When Uncertainty Becomes Unbearable

International students experience this pattern more intensely because the stakes genuinely feel higher when navigating unfamiliar systems.

The human brain isn’t well-equipped to handle uncertainty. We anchor to whatever evidence seems solid. When you’re navigating an unfamiliar system (US college admissions) with limited information, hiring a college consultant provides reassurance. It’s not about whether it’s objectively helpful—it’s about having something concrete to hold onto when everything else feels uncertain.

This intensifies in college. Visa status, a competitive job market, policy uncertainties—these aren’t abstract concerns but existential realities. So when companies promise “insider guides” to breaking into finance (organizations that project an air of having these guides can charge steep premiums), even students with genuine ambitions to start companies or do research end up signing contracts for tracks of “breaking into” Wall Street or Silicon Valley. The path feels safer because it’s more defined.

This reveals something deeper about how uncertainty gets commodified. When stakes feel existential, the promise of “insider knowledge” becomes irresistible—regardless of whether that knowledge actually helps. These services often reinforce the exact pattern-matching mentality that becomes counterproductive in genuinely ambiguous situations.

The counterintuitive observation is that many of these students are precisely the people who should be exploring unknown territories. They have unique perspectives, cross-cultural insights, and often genuine intellectual curiosity. But the very conditions that make ambiguity feel threatening—uncertainty about belonging, about future prospects—are what make engaging with it most valuable. It’s a cruel catch-22: those who would benefit most from embracing uncertainty are often those who can least afford to.


Breaking Free From Institutional Patterns

The solution isn’t to abandon systematic thinking entirely, but to develop meta-awareness about when these approaches help and when they hinder. This requires recognizing the fundamental difference between well-defined problems (where institutional training excels) and ill-defined problems (where it often misleads).

Effective navigation of ambiguous situations involves capabilities that institutional training rarely develops: comfort with not knowing while still making progress, iterative planning rather than linear execution, and the ability to hold assumptions lightly while testing them thoroughly.

Most importantly, it requires learning to recognize when you’re applying familiar frameworks to genuinely novel situations—when you’re “playing house” rather than engaging with real complexity. The very thoroughness of elite institutional training can create blind spots, making it harder to step back and question whether your cognitive tools are adequate for the problem at hand.

Which brings me back to my own struggle with this.


Maybe It’s Time to Loosen Up a Little

Paul Graham notes that knowledge grows fractally—from a distance its edges look smooth, but when you get close enough, you notice gaps that seem obvious. The feeling of “surely someone has figured this out already” might be wrong.

This hits at something I’ve been wrestling with personally. We spend so much energy learning to navigate the known world efficiently that we forget how to be genuinely curious about the unknown parts. But those gaps—the things that seem obviously missing once you get close enough to see them—that’s where the interesting work lives.

I catch myself falling into these same patterns constantly. Defaulting to frameworks when I should be sitting with confusion. Seeking pattern matches when I should be noticing what doesn’t fit any pattern I know. It’s uncomfortable to admit how often I’ve been “playing house” with ambiguity rather than actually engaging with it.

But here’s what I’m starting to understand: the discomfort of not knowing isn’t a bug to be fixed—it’s a signal. When your institutional training doesn’t immediately apply, when familiar frameworks feel inadequate, when you can’t quickly decompose the problem into manageable pieces, you might be looking at something genuinely important.

The real skill isn’t learning to eliminate that discomfort, but learning to sit with it productively. To stay curious about gaps that seem obvious but somehow remain unexplored. To trust that the feeling of “surely someone has figured this out already” might be wrong—and that if you get close enough to the fractal edge of knowledge, you’ll find whole territories waiting to be explored.

Maybe the goal isn’t to get better at navigating ambiguity, but to get more comfortable being genuinely confused by things that matter.


If you’ve managed to read till here, it’s probably worth checking out Design Technology Research (DTR). DTR is a research and learning community where students design and study technologies that support how people learn, collaborate, and create. If you go to Northwestern, I’d recommend you apply.