rohan ganapavarapu

What I Would Tell the Frontier Labs

This post is a critique of the direction of current AI models, particularly the bet that scaling them further is the path forward.

AI as collective intelligence

Humanity’s intelligence and progress are really collective; everyone builds off the work of others. Take Tesla and Edison: each vehemently championed one side of the same coin, AC and DC. Unlike a beehive, where individual bees act purely on instinct for the colony’s benefit, humans paradoxically achieve collective advancement through fierce individual conviction. We’re like a hive mind that works precisely because each “bee” believes itself autonomous and will defend its ideas to the death. This tension between our fundamentally collaborative nature and our deeply held sense of individual agency creates productive friction: when Tesla insisted on AC and Edison on DC, their clash drove electrical innovation forward far more effectively than if they had immediately agreed. Our collective intelligence emerges not despite but because of our stubborn individuality, as each person’s unwavering belief in their own free will leads them to push back, challenge assumptions, and chart new territories that end up benefiting everyone.

AI models are interactive snapshots of humanity’s collective intelligence, capturing a statistical average of our reasoning. They lack the edges, randomness, and exogenous sparks (the evolutionary and environmental “noise”) that drive real progress. Humans evolve toward collective intelligence precisely because individuals believe they have free will. That belief spawns productive disagreement, which leads to genuine novelty. AI removes this disconnect: the productive gap between individual conviction and the collective it feeds.

When someone deviates from the norm in a meaningful way, they can change the world. AI, being an average with high computational depth, struggles to replicate that capacity for creative deviation. It optimizes for conformity rather than the kind of mixture-of-experts (MoE) dynamic that evolution harnesses to nurture breakthroughs.

the death of “ratio”

We’re automating what the Latin philosophical tradition called “ratio”: systematic, step-by-step reasoning. AI excels at deep reasoning along established paths but fails at radical departures from first principles. Even chain-of-thought or backtracking tree searches can’t replicate the exogenous jolt that sparks true novelty.

Our focus on “PhD-level” path-following problems is misguided. We should be optimizing for creating new problems, not just tackling established ones. Human intelligence might resemble a tree search in some aspects, but we also thrive by starting from unexpected branches. Scaling current AI hones depth on familiar branches and stifles our ability to wander off the beaten path.

In other words, the metrics we are optimizing for are pretty bad. They focus on ratio: on solving well-defined problems that take a high depth of knowledge (or “steps”) to solve. If someone had told me a few years ago that we would get o1-level intelligence, I would have expected the applications to quickly change the world; instead, finding use cases is proving hard.

the learning paradox

Humans learn by doing: the hand learns before the mind. We physically reenact the steps that once led to original discoveries. That’s how we develop the habit of creative first-principles thinking. AI short-circuits this process. By automating “ratio,” it risks undermining the very practice that builds our capacity for radical exploration.

Yes, AI can help us reach far along known paths, but it doesn’t train us to diverge. Humanity’s trait of progressive exogeny—our knack for getting smarter by grappling with reality—depends on direct problem engagement. Offloading that to AI disincentivizes us from developing new first principles.

the idiocracy trap

The real threat isn’t “superintelligence” wiping us out; it’s the slow erosion of our ability to think differently. By handing the cognitive heavy lifting to AI, we risk creating an idiocracy where we rest on algorithmic laurels and forget how to break new ground.

If we stay on this path, we’ll lose practice in “ratio” and remain forever bounded by our current ideas. When systems do the thinking for us, we may never again muster the will—or the skill—to deviate and find something truly novel. Evolution pushes progress through conformity and deviation. Today’s AI tilts that balance heavily toward conformity, ignoring the MoE engine that creates transformative leaps.

Progress becomes algorithmic: predictable, incremental, and trapped by our existing horizons. We risk locking ourselves into a future defined by the average, lacking the edges and exogenous sparks that once propelled humanity forward. Chain-of-thought is focusing on the wrong layer of intelligence.

a (general) way forward: curiosity

One crucial point has been missed: AI acts as an accelerant for the naturally curious. Consider how children and beginners interact with AI—they pepper it with endless questions, using it as a tireless teacher that explains complex topics at their level. This mirrors the historical pattern of how “script kiddies” learned programming—by cobbling together code from various sources, failing, and learning through that process.

AI simply reduces the latency of this learning cycle. Instead of waiting for Stack Overflow responses or IRC help, learners get immediate feedback. They hit harder problems faster and learn more quickly from their failures. The real constraint isn’t AI; it’s curiosity itself. Those genuinely interested in understanding will inevitably treat the AI as a formidable sparring partner for developing their own ideas, providing the “disconnect” that powers the collective intelligence I proposed earlier.

It will become ever more important to nurture the curiosity that drives people instead of simply rewarding the right answer. Growing up and attending high school in the Bay Area while going through the college application process has almost killed my belief that smart people are naturally curious. I’ve seen firsthand how they’re socialized to push the right buttons for grades, college acceptances, and, ultimately, external validation.

We must not abandon our rabid curiosity about the process.

an (alternate) way forward: ASI

If we algorithmically create something that is self-improving, much like evolution at the human scale, we can sidestep the problems that arise when AI replaces ratio. The issue I see is in the metrics: how can you procedurally identify surprising, exogenous things? They are, by definition, surprising and exogenous. It’s like trying to capture lightning in a bottle.
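One line of prior work gestures at an answer: novelty search, which scores a candidate not by how well it hits a fixed objective but by how far its behavior lands from everything seen so far. The sketch below is a minimal illustration of that idea in Python, not a solution to the metric problem; the behavior descriptors, distance function, and threshold are assumptions made up for the example, and choosing them well is exactly the hard part.

```python
# Minimal sketch of one proxy for "surprise": novelty search.
# Candidates are scored by how far their behavior descriptor lands
# from an archive of behaviors already seen. The descriptors here
# are plain float vectors and stand in for whatever you can measure
# about a candidate's behavior.
import math
from typing import List, Sequence


def distance(a: Sequence[float], b: Sequence[float]) -> float:
    """Euclidean distance between two behavior descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def novelty(candidate: Sequence[float],
            archive: List[Sequence[float]],
            k: int = 5) -> float:
    """Mean distance to the k nearest behaviors already in the archive.

    High novelty = far from anything seen before. It rewards being
    different, not being interestingly different.
    """
    if not archive:
        return float("inf")  # the first behavior is maximally novel
    dists = sorted(distance(candidate, past) for past in archive)
    return sum(dists[:k]) / min(k, len(dists))


# Toy usage: admit a behavior into the archive only if it is novel
# enough relative to what the archive already contains.
archive: List[Sequence[float]] = []
threshold = 1.0
for behavior in [[0.0, 0.0], [0.1, 0.0], [3.0, 4.0], [3.1, 4.1]]:
    if novelty(behavior, archive, k=1) > threshold:
        archive.append(behavior)

print(archive)  # [[0.0, 0.0], [3.0, 4.0]]
```

Even then, the metric only rewards being different from the archive, not being meaningfully different, which is roughly the lightning-in-a-bottle problem restated.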

Base models with no post-training are a case in point: there is something very mysterious and weird about them. You basically have no idea what you are getting. Post-training definitely made models smarter, but they lost some of their edge.

I think it’s possible, though. Evolution is perhaps the first and oldest algorithm ever, and it found intelligence that looks like it’s scaling.