type
Post
status
Published
date
Mar 25, 2026
slug
equations-changed-the-world-ai-what-comes-next
summary
A reflection on how mathematical ideas have reshaped society — and what the shift to AI reveals about speed, legibility, and who's really in control.
tags
Artificial-Intelligence
Leadership
Critical-Thinking
AI-leadership-consulting
Leadership-intelligence
AI-ethics
AI-risk
Epistemology
AI-governance-NZ
category
AI Research & Intelligence
icon
password
URL
Newton didn't know his calculus would one day train weapons systems. Einstein didn't intend E=mc² as a bomb blueprint. The most powerful mathematical ideas in history share one feature: their creators lost control of them. We appear to be doing it again, only faster, and with less excuse. This time, we can see it coming. Does that change anything?
A Pattern Across History
Consider the arc. Newton needed a century to reshape science. Hinton needed a decade to reshape everything else. The number of people involved shrank even as the impact exploded — from centuries of slow scholarly diffusion to three people triggering a trillion-dollar industry in five years.
Something fundamental changed around 1990: the internet collapsed the lag between discovery and adoption, and that lag has never recovered.
The Acceleration Timeline
The pattern across history is stark. Every generation, world-changing mathematical ideas diffuse faster. Newton’s law of gravitation (1687) took 50–100 years to reshape science. Maxwell’s equations (1865) took about 30 years to spawn the radio and wireless-telegraphy industries. Shannon’s information theory (1948) took 20 years to transform global computing. Backpropagation (1986) took 25 years to reach dominance — and then AlexNet (2012) triggered a trillion-dollar industry in just five years.
The equations AI runs on — Bayes, Shannon, gradient descent — were each world-changing in their own right. AI doesn’t replace them. It weaponises them simultaneously, at scale.
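For concreteness, those three results in their textbook forms, using standard notation rather than anything specific to this essay: Bayes’ rule, Shannon entropy, and the gradient descent update.

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
\qquad
H(X) = -\sum_{x} p(x)\log_2 p(x)
\qquad
\theta_{t+1} = \theta_t - \eta\,\nabla_\theta L(\theta_t)
```

Each fits on a single line; the systems built on top of them, as the next section argues, no longer do.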
The Shift Nobody Talks About Enough
Every equation in the classical list shares one property: it is legible. E=mc² fits on a coffee mug. Maxwell’s equations fill half a page. A reasonably educated person can, with effort, understand what they mean and why they’re true.
AI is categorically different. Modern language models contain billions of parameters. No single equation governs their behaviour. Their outputs are sometimes surprising to the people who built them. We are, for the first time in history, deploying tools we cannot fully read.
This is not a small thing. It represents a genuine epistemological rupture — a shift from understanding our tools to merely deploying them and managing the fallout.
The Error Problem
History also reminds us that smart people, working within established frameworks, can get it catastrophically wrong, and entrench the error for generations.
Geocentrism persisted for 1,400 years. Phlogiston theory lasted a century before Lavoisier dismantled it with a set of scales. Continental drift was mocked for 50 years because Wegener was a meteorologist, not a geologist. Barry Marshall had to drink a culture of Helicobacter pylori to prove that bacteria, not stress, cause ulcers, then wait two decades for a Nobel Prize.
The mechanism is always similar: institutional inertia, credentialism, and the career costs of dissent keep bad ideas alive long past their use-by date. Max Planck’s bleak observation is usually paraphrased as: “Science advances one funeral at a time.”
The uncomfortable question for our moment: what are we currently wrong about with AI, and how long will it take to find out?
The Risks Worth Naming
Speed without comprehension. We are scaling AI capability faster than our ability to understand, govern, or course-correct it. Every historical equation had a lag between discovery and consequence. That lag gave society time to adapt. We may have lost that buffer.
Deployment without legibility. We can’t audit AI decisions the way we can audit an equation. This matters enormously in medicine, law, finance, and anywhere consequential decisions are made.
Concentration of influence. Newton’s equations diffused slowly through academia. AI capabilities are diffusing at speed through a handful of companies with enormous leverage over global systems.
The Wegener risk. Valid critics of current AI assumptions — particularly around whether scaling alone leads to general intelligence — are sometimes dismissed on credentialist or financial grounds. History suggests we should be paying attention to the outliers.
The Opportunities Worth Naming
AI is discovering equations, not just using them. DeepMind’s AlphaGeometry and symbolic regression tools are beginning to find mathematical relationships humans missed. We may be approaching a moment where AI accelerates the very process this whole history describes.
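What “symbolic regression” means can be shown in a few lines. The sketch below is a deliberately toy version, assuming nothing about any real tool: it brute-forces a tiny space of candidate forms, fits each one, and keeps the best. Production systems such as PySR or AI Feynman run evolutionary searches over vastly larger expression spaces, but the core loop — propose forms, fit, rank — is the same.

```python
# Toy symbolic regression: brute-force search over a tiny expression
# space for the formula behind some (x, y) data. This sketch only
# illustrates the core loop: propose candidate forms, fit each, rank.

# Hidden ground truth the search should recover: y = 3x^2 + 2
data = [(x, 3 * x**2 + 2) for x in range(-5, 6)]

# Candidate building blocks: unary functions of x, named for readability.
basis = {
    "x": lambda x: x,
    "x^2": lambda x: x**2,
    "x^3": lambda x: x**3,
}

def fit_linear(f, pts):
    """Least-squares fit of y = a*f(x) + b over pts; returns (a, b, mse)."""
    n = len(pts)
    fx = [f(x) for x, _ in pts]
    ys = [y for _, y in pts]
    mean_f, mean_y = sum(fx) / n, sum(ys) / n
    var_f = sum((v - mean_f) ** 2 for v in fx)
    cov = sum((v - mean_f) * (y - mean_y) for v, y in zip(fx, ys))
    a = cov / var_f if var_f else 0.0
    b = mean_y - a * mean_f
    mse = sum((a * v + b - y) ** 2 for v, y in zip(fx, ys)) / n
    return a, b, mse

# Rank every candidate form by fit quality and keep the best.
name, a, b, mse = min(
    ((nm, *fit_linear(f, data)) for nm, f in basis.items()),
    key=lambda t: t[3],
)
print(f"best form: y = {a:.1f}*{name} + {b:.1f}")  # → y = 3.0*x^2 + 2.0
```

The search “discovers” the quadratic because it fits with zero error. What the real tools add is scale: trees of nested operators, mutation and crossover, and penalties for complexity, which is exactly where the legibility question from the previous section comes back in.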
Compression of the knowledge lag. The time between insight and application is collapsing. In medicine, materials science, and climate research, that could be genuinely lifesaving.
Democratisation of capability. Backpropagation was obscure for 25 years. Today, a student in Tauranga has access to tools that would have seemed like science fiction to researchers a decade ago. That’s not nothing.
The Question Worth Sitting With
Every equation in history solved one problem and created another nobody anticipated. E=mc² gave us nuclear power and nuclear weapons. Black-Scholes gave us derivatives markets and the 2008 financial crisis. Perhaps all tools reveal something about those who wield them — and those who misuse them.
AI is not an equation. It’s something more like a universal equation-finder — and we don’t yet know what it or we will find, or what those findings will cost.
The history of world-changing ideas suggests we should be neither naively optimistic nor paralysed by fear. But it does suggest we should be paying very close attention, and that the people most worth listening to are often not the ones with the most to gain.
None of this requires you to become an AI expert — especially not in isolation. It does require intellectual and ethical honesty, both about what you're assuming and about the people you're impacting.
If you’re being told not to rock the boat, ask yourself if the people reassuring you have more invested in the answer than in the question. The people most likely to tell you what you're getting wrong are usually the ones with the least to lose by saying it. Find them. Listen to them. Before the answer becomes obvious in hindsight.
Further Reading
Stewart, I. (2012). 17 Equations That Changed the World. Profile Books.
Sumpter, D. (2020). The Ten Equations That Rule the World. Flatiron Books.
O’Neil, C. (2016). Weapons of Math Destruction. Crown. (Particularly recommended for anyone who engaged with the risks section — it’s the natural next read.)