The Economics of Uneven Intelligence 

How AI’s patchy intelligence reshapes firms, risk and entrepreneurship

By Elias Sanchez

The AI boom seems unstoppable, but its long-term sustainability is uncertain. AI can draft a legal memo in seconds, yet it may also invent a case citation. This inconsistency has been called jagged intelligence. Courts have warned that AI can fabricate legal citations, and lawyers have been sanctioned for submitting briefs that cite non-existent cases generated by AI.

Because AI is not always reliable, it increases the need for checking and judgment, which limits how much it can replace human discovery in business. Still, investors are not discouraged, and capital markets remain eager for higher returns.

Markets handle countless trade-offs every day, imperfectly but adaptively. AI is now part of that process. Its influence spreads across decision-making, production, and innovation in many industries, but its effects are uneven. This patchiness fascinates market analysts and futurists alike.

The harder question is how this structure reshapes firms, shifts markets, and influences entrepreneurs’ decision-making. Consultants such as McKinsey see large productivity gains; other studies find little structural change. Economic history shows that big transformations begin with small shifts. Artificial jagged intelligence (AJI) matters because it changes how risk is measured and how decisions are made. For now, AI may change how entrepreneurs search for opportunities, but it has not proven it can replace their discoveries.

Why AI’s uneven brilliance complicates business

Artificial intelligence does not fail smoothly. It excels in one domain and falters in the next. A model that drafts flawless code may stumble over a simple contextual cue. Change a few words, and performance can flip. As the economist Joshua Gans observes, even small shifts in phrasing can dramatically alter outcomes. Capability does not scale linearly. It spikes and dips. It is jagged.

This matters because companies often work in situations where unpredictability is expensive—like when there is uncertainty, time pressure, and scattered information. Entrepreneurship is not a simple puzzle with clear rules; it requires judgment in changing situations.

Managers have to make sense of incomplete information, pick up on subtle hints, and take responsibility if things go wrong. Machines can help find patterns, but they struggle with context, uncertainty, and responsibility.

The chart shows that as tasks change, AI’s performance does not improve smoothly but varies unpredictably. Similar problems often get similar results, but small changes can cause big drops in reliability. There are areas where AI works well and others where it struggles. The landscape is uneven.
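The unevenness described above can be made concrete with a toy model. All numbers below are illustrative assumptions, not measured benchmark results: they sketch how near-identical task variants can carry very different reliability, and what that implies if outputs go unchecked.

```python
# Illustrative toy numbers (assumptions, not benchmarks): accuracy of a
# hypothetical model on closely related task variants.
task_accuracy = {
    "summarise_contract_v1": 0.97,
    "summarise_contract_v2": 0.95,  # minor rewording: similar reliability
    "summarise_contract_v3": 0.42,  # small phrasing change: sharp drop
    "cite_case_law": 0.30,          # hallucination-prone domain
}

def expected_unverified_errors(accuracy: float, n_runs: int) -> float:
    """Expected number of errors if outputs are never checked."""
    return (1.0 - accuracy) * n_runs

for task, acc in task_accuracy.items():
    print(f"{task}: accuracy {acc:.0%}, "
          f"~{expected_unverified_errors(acc, 100):.0f} errors per 100 runs")
```

The point is not the specific figures but the shape: reliability does not degrade smoothly across variants, so a firm cannot price verification effort with a single average error rate.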

This unevenness helps explain a common puzzle. Surveys show that many companies are trying out AI, and leaders often mention efficiency as a goal. However, broad improvements across entire organisations are rare. Some see benefits in certain areas, but few notice big gains in profits or productivity. There is a lot of excitement, but real change is uneven.

The Integration and Coordination Challenge

The obstacle may not be technical capability but organisational integration. AI does not slot neatly into existing routines; it must be woven into them, aligned with reporting lines, compliance procedures, and decision-making hierarchies. Generating information is easy. Embedding it into a chain of practice and accountability is harder.

Where reliability varies, outputs must be checked. That introduces verification costs. West Midlands Police acknowledged that an AI-generated error influenced its recommendation to ban supporters of an Israeli football club from a match in Birmingham. Officers relied on information that included a fictitious fixture — created by an AI tool — when assessing security risks. The incident exposed how unverified AI output can distort operational decisions, prompting apologies and scrutiny of AI use in policing.

When an AI-generated hallucination influences a decision in policing or finance, the consequences are not statistical curiosities; they are institutional and trust liabilities, underscoring the risks of state and corporate reliance on AI in politically sensitive decisions. When governments depend on systems with uneven reliability, compounding errors can carry serious institutional consequences and, over time, threaten civil liberties.

Firms, though, proceed cautiously, or at least have the incentive to do so: an unchecked error puts their market credibility at risk. Many automate narrow tasks while retaining human approval at key junctures. Complementarity, not substitution, becomes the norm. For firms, the lesson is operational: AI may reduce prediction costs, but its outputs still require oversight.

Time saved in drafting can be offset by time spent reviewing. Prediction becomes cheaper, but judgment does not. These frictions multiply across institutions and business organisations. In these contexts, workflows must be redesigned, and authority must be clarified. Who bears responsibility when an automated recommendation misfires? If a model performs most of the time but fails unpredictably, the fault becomes ambiguous. Is the error in the tool, the prompt or the oversight? That ambiguity is structural, not computational.
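The trade-off between time saved and time spent reviewing can be written as a simple cost model. This is a minimal sketch with invented parameter values, not a measurement of any real deployment: the net benefit is drafting time saved, minus review time, minus the expected cost of errors the review fails to catch.

```python
def net_benefit(n_tasks: int, minutes_saved: float, review_minutes: float,
                error_rate: float, review_catch_rate: float,
                cost_per_missed_error: float,
                cost_per_minute: float = 1.0) -> float:
    """Net benefit of automating a task with imperfect, reviewed AI output.

    All parameter values passed in below are illustrative assumptions.
    """
    # Labour saved after accounting for the new verification step.
    time_gain = n_tasks * (minutes_saved - review_minutes) * cost_per_minute
    # Errors that slip past review, weighted by their cost.
    missed_errors = n_tasks * error_rate * (1.0 - review_catch_rate)
    return time_gain - missed_errors * cost_per_missed_error

# A reliable task: drafting boilerplate (low error rate, cheap mistakes).
reliable = net_benefit(100, 30, 10, error_rate=0.02,
                       review_catch_rate=0.9, cost_per_missed_error=200)

# A jagged task: legal citations (higher error rate, costly mistakes).
jagged = net_benefit(100, 30, 10, error_rate=0.15,
                     review_catch_rate=0.9, cost_per_missed_error=20_000)

print(f"reliable task net benefit: {reliable:,.0f}")
print(f"jagged task net benefit:   {jagged:,.0f}")
```

With these assumed numbers the same tool is clearly worth adopting for one task and clearly not for the other, which is the structural point: identical time savings can produce opposite verdicts once error rates and error costs differ.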

This dynamic helps reconcile two observations: widespread experimentation and modest productivity growth. AI may generate local efficiencies: drafting reports faster, summarising documents more quickly, sorting data more cheaply. But systemic gains in productivity depend on coordination, not just local improvements.

Why Does Jaggedness Persist?

The best explanation for this is that large language models are optimised for prediction, not comprehension. They are trained to guess the next word in a sequence by adjusting billions of internal parameters to minimise error. Over time, this produces impressive abstractions. Models internalise statistical regularities across vast corpora of text. They appear knowledgeable because their predictions often align with patterns found in their training data.
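The training principle can be illustrated with a toy next-token predictor. A bigram frequency table is vastly simpler than a large language model, but it shows the same logic: the system learns to reproduce statistical regularities in its training data, and where the data are sparse, its predictions are brittle or absent. The corpus and word choices below are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus".
corpus = "the firm hires the manager the manager reviews the firm".split()

# Count which word follows which in the training data.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = transitions.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # dense pattern: a confident guess
print(predict_next("reviews"))  # seen only once: a brittle guess
print(predict_next("audit"))    # unseen context: no pattern at all
```

The model "knows" nothing about firms or managers; it predicts well exactly where its training data are dense, which is why competence clusters rather than spreading evenly.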

Yet prediction is not understanding. Training data are finite; reality is not. The world evolves. Markets shift. Preferences change. New combinations of ideas emerge. Even very large models will encounter contexts where their internal patterns are sparse or misaligned. This is what Figure 1 suggests: although performance improves on average as models scale, blind spots do not disappear entirely.

Bigger models may narrow the gaps between clusters of competence. They are unlikely to eliminate them, though. In dynamic systems—markets, politics, culture—the underlying “truth” is itself unstable. Firms, therefore, cannot assume that reliability scales smoothly with model size. They must assume instead that pockets of uncertainty will remain.

For managers, the implication is practical rather than philosophical. AI can accelerate search, expand optionality and surface patterns that might otherwise go unnoticed. It can draft, summarise, classify and recommend. Yet it cannot absorb responsibility. Strategic decisions—pricing, capital allocation, hiring, risk management—require someone to bear the cost when predictions fail.

Entrepreneurship in the Age of Uneven Intelligence

Entrepreneurs operate in environments characterised by tacit knowledge and shifting constraints. They interpret signals that are difficult to formalise: tone in a negotiation, hesitation in a supplier, subtle shifts in consumer sentiment. Such judgments are rarely reducible to pattern recognition alone. Machines can assist, but they do not replace the adaptability and accountability these situations demand.

The productivity gains from AI will therefore depend less on technical breakthroughs than on organisational redesign and coordination. Firms that understand where machines are reliable—and where they are not—will extract more value. Those that assume smooth competence may automate prematurely and incur hidden risks.

The jagged machine does not eliminate friction. It rearranges it. Time saved in drafting may be spent checking. Efficiency in one department may require new oversight in another. Gains are possible. They are not automatic.

For now, AI is best viewed as a powerful but uneven input into production. It lowers the cost of generating predictions, but it raises the importance of verification and governance. Markets reward those who grasp that distinction. Intelligence, when jagged, requires judgment.

Conclusion

If AI lowers the cost of prediction but raises the cost of verification, its economic impact will depend on how firms reorganise around that trade-off. Some will extract gains; others will absorb frictions. The AI boom may yet deliver productivity gains. But its progress will not be smooth. 

When intelligence is jagged, firms must invest not only in machines but also in monitoring, redesign, and judgment. That makes AI less a wholesale replacement for entrepreneurs than a new input to be managed. Firms that embrace the redesign will pull ahead; those that do not will absorb the frictions.

For now, the machines may be brilliant. However, markets still require someone to decide when not to trust them. Scaling will shrink the gaps; it will not remove them. As long as capability remains uneven, entrepreneurs will remain central—not because machines lack power, but because markets demand action and responsibility. Defining who holds the crucial “override switch” in the workplace—whether managers, regulators, or end users—can sharpen accountability. Clarifying these stewardship roles emphasises that markets demand responsible and informed decision-making from those overseeing AI integration.

The productivity puzzle, then, may reflect not technological failure but uneven integration. The jagged machine does not eliminate judgment; it changes where judgment is needed. Where machines are uncertain, humans must absorb the risk. That makes AI less a substitute for entrepreneurial judgment than a complement to it. The jagged machine may accelerate search. Markets still require someone to choose.
