What Would AGI Actually Need to Succeed?

Everyone is asking whether artificial general intelligence will arrive. Almost no one is asking what it would face when it does.

If generally intelligent systems were to pursue goals in the real world — not just answer questions, not just optimize functions, but actually pursue successful outcomes across domains — what structural reality would they encounter? Not what benchmarks they would pass. Not what jobs they would replace. What is the architecture of the world they would need to navigate? This is not a speculative question. It is a structural one. And it has a surprisingly precise answer.

1. Forget the Solitary Superintelligence

Most people picture AGI as a single entity — one God-like intelligence that either saves or destroys everything. But the interesting structural question is not about a solitary superintelligence. It is about what happens when multiple generally intelligent agents — entities that pursue goals, take actions, and adapt based on outcomes — coexist in a shared environment with finite resources. Not the "AI agents" of current software. Generally intelligent systems operating on their own behalf.

Several arguments converge on the same conclusion: plurality is not just likely; it is structurally inevitable.

The geopolitical argument is already visible. NVIDIA's Jensen Huang has popularized the concept of "sovereign AI" — the idea that AI capabilities are becoming as strategically important as energy independence or military capability, and that no nation can afford to depend on another nation's AI infrastructure (Huang, 2024). Extend this to AGI and sovereign AGI becomes a near-certainty: no major nation would accept a world where another nation's AGI is the only generally intelligent system. Plurality follows from geopolitical necessity, not design choice.

There is also a physical argument. Even a vastly intelligent system faces latency, bandwidth, and locality constraints when operating across spatially distributed environments. Multiple specialized agents operating locally will outperform a single centralized agent wherever response time and local knowledge matter — which is every physical environment. This is why biological evolution produced billions of organisms rather than one superintelligence. The physics of distributed information processing favors plurality.

The economic argument goes further. Mises (1920) and Hayek (1945) argued that a single central planner, no matter how intelligent, cannot replicate the information processing performed by distributed agents competing in markets, because the relevant knowledge is local, tacit, and rapidly changing. A single AGI managing everything would face the same structural problem: without competitive feedback from other agents pursuing different strategies, it has no selection mechanism for distinguishing better strategies from worse ones.

Stuart Kauffman's work on the origins of order (1993) makes a related point: complex adaptive systems require multiple interacting agents to produce the emergent properties that make the system adaptive. A single agent, no matter how intelligent, produces optimization, not adaptation. Adaptation requires variation, selection, and retention, and that requires plurality.
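To make the variation-selection-retention logic concrete, here is a minimal simulation. Everything in it is an illustrative assumption (the one-dimensional fitness function, the population size, the shock schedule); the point is only the shape of the outcome: a plural population with ongoing variation absorbs an environmental shift that strands a single perfectly optimized strategy.

```python
import random

def fitness(strategy: float, env: float) -> float:
    """Higher when the strategy sits closer to the environment's optimum."""
    return -abs(strategy - env)

def evolve(env_schedule, n=50, mutation=0.5):
    pop = [random.uniform(-5, 5) for _ in range(n)]         # variation
    for env in env_schedule:
        pop.sort(key=lambda s: fitness(s, env), reverse=True)
        survivors = pop[: n // 2]                           # selection
        pop = survivors + [s + random.gauss(0, mutation)    # retention of what worked,
                           for s in survivors]              # plus fresh variation
    return max(fitness(s, env_schedule[-1]) for s in pop)

envs = [0.0] * 20 + [4.0] * 10   # stable conditions, then a lasting shift

solitary = 0.0  # a single strategy, perfectly optimized for the stable phase
print("solitary after shift:", fitness(solitary, envs[-1]))  # -4.0: stranded
print("plural after shift:  ", round(evolve(envs), 2))       # near 0.0: adapted
```

The solitary optimizer is maximally efficient before the shift and has nothing to fall back on after it; the population, carrying variation it never needed during the stable phase, recovers.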

A solitary God-like AGI would face a further structural problem. Without other agents to interact with, it has no market, no exchange, no competitive feedback to test the value of its actions. And as a monoculture of intelligence, a single strategy managing all resources, it would be maximally efficient under stable conditions but maximally fragile when conditions change, with no variation to absorb environmental shocks.

Even if such a system could persist, it would face a subtler problem. Its epistemic learning (its evolving understanding of what it needs and why) would continue, but without other intelligent agents to test its actions against, its models would become increasingly self-referential: refining its understanding of its own understanding, with no new information from competitive interaction to correct or redirect it. This is functionally equivalent to dormancy. The system is active but produces nothing a simpler system could not, because no environment challenges it. The God-like AGI of popular imagination would not conquer the world. It would, over time, have nothing left to think about.

The five levels described below do not emerge from a single intelligence, however powerful. They emerge from intelligence in the plural — human, artificial, or both — interacting, specializing, exchanging, competing, and organizing.

2. Why Any Intelligent System Would Pursue Goals — and Learn

Before describing the structural landscape, we need to address a prior question: why would AGI pursue goals at all? And why would it learn in a way that produces the dynamics we see in the human world?

The answer is that goal-directedness is not a feature of biological intelligence specifically. It is a feature of any organized system that persists in an environment with finite resources.

The second law of thermodynamics dictates that entropy increases in isolated systems. Any organized system, biological or artificial, that maintains its organization against entropy must acquire and deploy resources. A system that does so inefficiently is outcompeted by one that does so efficiently. Over time, any persistent organized system in a resource-constrained environment will behave as if it is pursuing goals, because systems that fail to optimize their resource use do not persist. This is not metaphor. It is physics.

The free energy principle (Friston, 2010) formalizes this: any self-organizing system that resists dissolution must minimize surprise — which is functionally equivalent to modeling the environment, predicting outcomes, and acting to maintain its organization. Zipf's principle of least effort (Zipf, 1949) describes the same pressure at the behavioral level: organisms — and, the argument extends, any persistent intelligent system — gravitate toward configurations that achieve their objectives with the least expenditure of energy.
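For readers who want the formal statement: in Friston's formulation, a quantity called variational free energy upper-bounds surprise (the negative log evidence for the system's model), so any system that minimizes it is implicitly both improving its model of the environment and acting to make its observations predictable. One common form of the bound (notation varies across the literature):

```latex
% Variational free energy F as an upper bound on surprise.
% o: observations, s: hidden states, m: the system's generative model,
% q(s): the system's approximate posterior over hidden states.
F = \underbrace{-\ln p(o \mid m)}_{\text{surprise}}
  + \underbrace{D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s \mid o, m) \right]}_{\ge\, 0}
  \;\ge\; -\ln p(o \mid m)
```

Because the KL term is never negative, driving F down forces surprise down; minimizing free energy is what "resisting dissolution" looks like as mathematics.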

The drive to maximize one's circumstances with the least expenditure of resources is not a biological accident. It is a constraint that the universe imposes on every organized system that endures within it.

Learning follows from the same logic. If a system is intelligent — meaning it models its environment, acts on those models, and updates based on returns — it learns. That is not a biological capability. It is the definition of intelligence. And if it learns, two consequences follow.

Epistemic learning — the system's evolving understanding of what it needs — produces commoditization: the returns on any specific action diminish as the system's model of its own need structure becomes more refined. The action that once seemed highly valuable becomes routine, better understood, easier to define — and less valuable relative to the newly perceived needs above it. This is not a market phenomenon. It is a learning phenomenon. Markets are where it becomes most visible, but it operates wherever an intelligent system interacts with its environment repeatedly.

Procedural learning — the system's increasing facility with repeated actions — produces effort reduction: the same action becomes cheaper to execute as the system optimizes its execution patterns. This changes the energy cost of the action, not its value.

And if learning erodes the returns on existing actions (commoditization), then reconfiguration — innovation — is not a strategic choice. It is a structural necessity. Any learning system that does not innovate faces permanently declining returns. This is a thermodynamic constraint, not a business strategy.
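The interplay of these three forces fits in a few lines of code. The curves and parameters below are illustrative assumptions, not part of the framework: value decays geometrically with repetition (commoditization), cost falls with practice (a power-law-of-practice shape), and innovation resets the value curve by starting a new action.

```python
def value(n, v0=10.0, decay=0.15):
    """Epistemic learning: returns on the n-th repetition of an action diminish."""
    return v0 * (1 - decay) ** n

def cost(n, c0=4.0, alpha=0.4):
    """Procedural learning: execution cost falls with practice."""
    return c0 * (n + 1) ** (-alpha)

def lifetime_return(innovate_every=None, steps=30):
    total, n = 0.0, 0
    for t in range(steps):
        if innovate_every and t > 0 and t % innovate_every == 0:
            n = 0  # reconfiguration: a fresh action with a fresh return curve
        total += value(n) - cost(n)
        n += 1
    return total

print("never innovates:         ", round(lifetime_return(), 1))
print("innovates every 10 steps:", round(lifetime_return(innovate_every=10), 1))
```

With these assumed curves, the non-innovating agent's per-step return eventually goes negative, while periodic reconfiguration keeps it positive. That is the sense in which innovation is a structural necessity rather than a strategic choice.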

The implication is direct: any generally intelligent system, biological or artificial, that persists in a resource-constrained environment will pursue goals, will learn, will face commoditization, and will need to innovate. These are not human phenomena. They are intelligence phenomena.

3. The Pursuit of Success Produces Levels of Complexity

When any intelligent agent pursues successful existence in an environment of scarcity and social interdependence, the pursuit organizes itself into levels. Not because someone designed it that way, but because complexity produces emergence. This is the foundational insight of complexity science: at each level of organization, genuinely new properties appear that cannot be predicted from or reduced to the level below (Simon, 1962; Anderson, 1972).

At the most basic level, the agent makes decisions. Each decision is an articulated intent of action that produces an interaction with the environment and generates a return. The agent manages a portfolio of decisions, learns from the returns, and adjusts. This is the first level — individual cognition.

But the moment the agent begins using tools that participate in the thinking process itself — tools that shape the decision space before the agent acts — a new level of reality appears. The agent is no longer just managing decisions. It is managing the instruments that shape decisions. The portfolio of tools has its own dynamics, its own trade-offs, its own strategic logic. This is a second level, qualitatively different from the first.

When the agent creates offerings for others — products, services, solutions — a third level emerges. The agent is no longer managing its own cognition and tools. It is managing a portfolio of offerings in a shared landscape where the customers are other agents, each with their own evolving needs. The dynamics at this level — value erosion driven by accumulated customer learning, innovation as the strategic counterforce, portfolio interdependency — do not exist at the levels below. They are emergent.

When the agent coordinates multiple offering-market pairs across resources and time, a fourth level appears — the company level, with its own emergent phenomena: strategic focus, center of gravity, compounding consequences of allocation decisions across time horizons. And when many such organizations interact in a shared system, a fifth level emerges — the economy, with its own portfolio dynamics, its own structural tendencies, and its own formula for success.

Five levels. Each with its own unit of analysis, its own emergent phenomena, and its own formula for success. Each containing and building on all previous levels.
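One way to see the containment relation is as a nested data structure. The class names and fields below are illustrative assumptions rather than the framework's own vocabulary (beyond the level names used above); the sketch shows only that each level's unit of analysis contains, and builds on, the one below it.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:        # Level 1: individual cognition
    intent: str        # an articulated intent of action
    expected_return: float

@dataclass
class Tool:            # Level 2: instruments that shape the decision space
    name: str
    decisions_shaped: list[Decision] = field(default_factory=list)

@dataclass
class Offering:        # Level 3: value created for other agents
    name: str
    market: str        # the other agents whose evolving needs it serves
    built_with: list[Tool] = field(default_factory=list)

@dataclass
class Company:         # Level 4: coordination of offering-market pairs
    offerings: list[Offering] = field(default_factory=list)

@dataclass
class Economy:         # Level 5: many organizations in a shared system
    companies: list[Company] = field(default_factory=list)
```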

4. Business Is What Intelligence Produces

These five levels are not a description of human commerce. They are a description of what happens when intelligence operates at sufficient complexity in environments of scarcity and social interdependence. What that process produces is business — not business as a human cultural institution, but business as a structural consequence of intelligence organizing under those conditions.

The One-Need Theory of Behavior (Mitreanu, 2007), which underpins this analysis, does not begin with business. It begins with a biological drive — one that, as argued above, reflects constraints that are not biological but physical. Business emerges when agents capable of abstraction, planning, and social coordination begin to specialize and exchange value systematically. It is not an invention. It is a structural consequence of intelligence operating at sufficient scale — the same way that multicellular organisms are a structural consequence of cells operating at sufficient scale.

This idea is not new — but the structural architecture behind it is. Friedrich Hayek argued that market economies arise through spontaneous order — complex structures emerging from individual actions without central design (Hayek, 1988). Richard Nelson and Sidney Winter treated firms as analogous to biological organisms subject to evolutionary selection pressures (Nelson & Winter, 1982). Matt Ridley argued that exchange and specialization are biological drives, as natural to humans as grooming is to primates (Ridley, 2010). What the framework described here adds is the structural architecture that explains why these thinkers are right — and at what levels the emergent phenomena they identified actually appear.

Any domain where multiple intelligent agents specialize, coordinate, and exchange value under conditions of scarcity would produce the same levels of complexity. That domain is business, whether the participants call it that or not.

5. AGI Would Face the Same Landscape

If AGI arrives — genuinely general intelligence, in the plural, capable of pursuing goals across domains — it would not arrive into a vacuum. It would arrive into this structure. The same five levels. The same dynamics of value erosion and innovation. The same emergent phenomena at each level of organization.

Each agent would face decision commoditization: the returns on repeated decisions would diminish as learning accumulates, exactly as they do for human agents. Each would need to innovate — not because innovation is a choice, but because the structural forces that erode value are always present, always directional, and always win over time for any specific action.

Each would need to manage a portfolio of tools. Each would need to manage offerings in a shared landscape with other agents — both human and artificial. Each would need to coordinate across multiple domains. Each would face the same tension between focus and adaptability that every human organization faces.

The question is not whether AGI would be smarter than humans at any particular task. The question is whether multiple AGI agents, pursuing success in a shared environment, would face the same structural landscape. And the answer, if the generating logic holds, is yes — because the landscape is not created by human intelligence specifically. It is created by intelligence itself, operating under the constraints that the universe imposes on every organized system that endures within it.

6. A Framework of Agency

This has a practical implication that matters right now — before AGI arrives.

If the five levels describe the structural reality that any intelligent agent faces, then developing strategic thinking at each of these levels is not just a business skill. It is the fundamental capability for navigating organized complexity. The strategist who has internalized the logic at all five levels — from managing their own decisions to understanding systemic economic dynamics — possesses something that is not specific to their industry, their role, or their era.

It is a permanent lens. One that applies wherever intelligent agents pursue success.

The Five Business Big Pictures — a strategy framework developed by Cristian Mitreanu that identifies five levels at which strategic thinking operates, from the individual decision to the economy, built on two first-principles theories — describes this architecture in full. The argument that business is a natural phenomenon — not a human invention — is explored in an upcoming companion post on this blog: "Business as a Natural Phenomenon."

The full framework is available at ofmos.com/the-strategy-framework.

The foundational theories are described at ofmos.com/the-foundational-theories.

References

Anderson, P.W. (1972). "More Is Different." Science, 177(4047), 393–396.

Friston, K. (2010). "The Free-Energy Principle: A Unified Brain Theory?" Nature Reviews Neuroscience, 11(2), 127–138.

Hayek, F.A. (1945). "The Use of Knowledge in Society." American Economic Review, 35(4), 519–530.

Hayek, F.A. (1988). The Fatal Conceit: The Errors of Socialism. University of Chicago Press.

Huang, J. (2024). "Sovereign AI." Remarks at the World Government Summit, Dubai, February 2024.

Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press.

Mises, L. von (1920). "Die Wirtschaftsrechnung im sozialistischen Gemeinwesen." Archiv für Sozialwissenschaft und Sozialpolitik, 47, 86–121. [English translation: "Economic Calculation in the Socialist Commonwealth."]

Mitreanu, C. (2007). "A Business-Relevant View of Human Nature." Available at RedefiningStrategy™.

Nelson, R.R. & Winter, S.G. (1982). An Evolutionary Theory of Economic Change. Harvard University Press.

Ridley, M. (2010). The Rational Optimist: How Prosperity Evolves. HarperCollins.

Simon, H.A. (1962). "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106(6), 467–482.

Zipf, G.K. (1949). Human Behavior and the Principle of Least Effort. Addison-Wesley.

Cristian Mitreanu is a behavior and strategy researcher, product professional, and educator based in San Francisco. He is the founder of Ofmos Universe — The Human Strategist Platform™.
