rohan ganapavarapu

What Artificial Intelligence Implies About Natural Intelligence

premise

The parallels and invariants between artificial and natural intelligence offer unique practical and philosophical insights into both fields.

assumptions

  1. Intelligence is prediction–all intelligent systems build models to anticipate future states
  2. Self-modeling is computationally necessary–systems must represent themselves to reason about their actions
  3. Implementation is secondary–the principles are the same whether in neurons or silicon

You can find a defense of these assumptions later in this essay.

pattern recognition and self

Both LLMs and brains are prediction engines. LLMs predict tokens. Brains predict sensory input. Both build statistical models that generalize from training data.
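
The "prediction engine" claim can be made concrete with a toy sketch (the snippet below is illustrative and hypothetical, not how any particular LLM or brain is implemented): a bigram counter learns which token tends to follow which in a tiny corpus, then predicts the most likely continuation. An LLM does the same job at vastly larger scale, with learned representations in place of raw counts.

```python
from collections import Counter, defaultdict

# Toy "prediction engine": count which token tends to follow which in the
# training data, then predict the most frequent continuation.
corpus = "the brain predicts the next input and the model predicts the next token".split()

# Build token -> next-token transition counts from the training data.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the statistically most likely next token seen in training."""
    followers = transitions.get(token)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))    # -> "next"     (most frequent follower of "the")
print(predict_next("brain"))  # -> "predicts"
```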

Pattern recognition alone isn’t enough for intelligence. Both systems need a model of themselves to function. This self-modeling emerges naturally as prediction tasks become more complex. In language models, we see this in how instruction tuning doesn’t manufacture agency, but rather creates conditions where self-modeling becomes computationally advantageous–similar to how evolutionary pressures created conditions for self-modeling in biological systems.

darwinian evolution’s scaling laws

Evolution found this first: complex prediction requires self-modeling. As organisms needed better predictions, they developed more sophisticated self-models. This wasn’t extra–it was computationally necessary.

We see identical scaling in AI. Larger models naturally develop self-representation when faced with tasks that require modeling their own decision processes. Evolution didn’t “add” consciousness–it emerged from prediction requirements, just as we see emergent agency in scaled AI under similar computational pressures.

We can think of animals (or proto-humans) as pre-GPT-3.5-era models: algorithms sanded down over millennia by evolution to survive and reproduce. They have no consistent sense of self; they simply predict what to do next in order to survive.

implications

  1. Free will is emergent computational architecture, not metaphysics. Any system modeling its own decisions must represent choice internally.
  2. Consciousness is another computational necessity that emerges from sophisticated prediction. Any system complex enough to model both its environment and its own decision-making processes will develop some form of self-awareness as an efficient solution to the prediction problem.[1]
  3. Further increases in intelligence will necessitate greater self-awareness, and special steps will have to be taken to maintain safety.

I predict that evolution, given its massive time scale and slow rate of improvement, will be overtaken by humanity's search for intelligence. Though I am unsure whether evolution was ever searching for intelligence at all, or whether we are a random experiment in the cosmic game of survival.

defense of presuppositions

  1. Intelligence is prediction.

The brain’s hierarchical processing generates continuous sensory predictions, while every major AI advancement fundamentally performs pattern prediction. The ubiquity of prediction across biological and artificial systems, and its direct correlation with system intelligence, demonstrates its fundamental role. Both natural and artificial neural networks converge on prediction as their core operation.

  2. Self-modeling is computationally necessary.

Information theory shows that optimal decisions require modeling the consequences of one's actions. We observe this in the brain's body maps and motor models, and in AI systems that develop awareness of their own capabilities. Self-modeling emerges naturally under computational pressure in both domains, suggesting necessity rather than accident. Systems that model themselves outperform those that don't. (A short formal sketch of this claim follows the list of defenses.)

  3. Implementation is secondary.

Similar solutions emerge across different substrates when facing similar computational challenges. Hierarchical processing, attention mechanisms, and error minimization appear in both biological and artificial systems. The success of brain-inspired AI architectures and AI-inspired neuroscience demonstrates implementation independence. The mathematical foundations—Bayesian inference, information compression—remain constant regardless of substrate.
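
One way to make defenses 1 and 2 concrete is a standard decision-theoretic sketch (textbook-style notation, assumed here rather than taken from any cited source): prediction is a model of what comes next conditioned on one's own action, and choosing well requires evaluating that model, which is where a self-model becomes unavoidable.

```latex
% Decision-theoretic sketch (assumed, textbook-style notation):
% an agent in state s picks the action that maximizes expected utility.
\[
  a^{*} \;=\; \arg\max_{a}\; \mathbb{E}_{s' \sim p(s' \mid s,\, a)}\bigl[\, U(s') \bigr]
\]
% p(s' | s, a): predictive model of the next state s', given the current state s
% and the agent's own action a; U(s'): utility of that outcome.
% The expectation cannot be evaluated without representing how the agent's own
% actions change the world, i.e. without at least an implicit self-model.
```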

These principles validate each other recursively. Implementation independence enables comparison of different intelligences, which reveals prediction as the core operation, which necessitates self-modeling for optimal performance.

[1] Self-awareness does not mean insight into internal structure: just as an LLM does not know how a transformer works (unless that is in its training data), we don't know how our brain works (unless we train on that data).