In the aftermath of the dot-com bubble of the early 2000s, the British-Venezuelan scholar Carlota Perez published her study of the interactions between technological revolutions and financial capital. In it, she convincingly illustrated that almost every major technological breakthrough has required an asset bubble to progress, with capital markets creating a “powerful magnet to attract investment … accelerating what becomes the new economy” by acting as the agent of massive creative destruction.
But in the process, “all notion of the real value … is lost. Uncontrollable asset inflation sets in … while debt mounts at a reckless rhythm; much of it to enter the casino.” After an asset bubble bursts, governments step in to facilitate “the recoupling of financial and productive capital,” and “when this is effectively achieved, innovation and growth can take place … in a positive sum game.” This is a classic Schumpeterian analysis of how inventiveness links to innovation and productivity via capital markets. Perez’s description fits the dot-com era perfectly: from its early beginnings in the 1980s, through the bubble of the 2000s and its recalibration, to the subsequent flowering of the internet and of social and digital networks.
In the modern age, she identified five successive technological revolutions.
First, the classical “Industrial Revolution,” starting with the opening of Arkwright’s mill in 1771, followed by the “Age of Steam and Railways” in 1829. That was succeeded by the “Age of Steel, Electricity and Heavy Engineering,” marked by the Carnegie Bessemer steel plant in 1875, which was in turn replaced by the “Age of Oil, Automobile and Mass Production,” with the Model T in 1908 as the key milestone. Finally, Intel’s microprocessor in 1971 marked the very beginning of the Information Age.
Technological revolutions, capital markets, bubbles and wars
While one could debate the importance or dating of specific technologies, there is no doubt that progress over the last two centuries represents the greatest break from traditional economic, social and political patterns since the invention of agriculture, as reflected in a range of income, longevity and healthcare outcomes, as well as in radically different political and social relations. Perez described these major shifts as the arrival of new “techno-economic paradigms.”
This is in tune with modern technologists (such as Mustafa Suleyman) who emphasize the importance of what they call “superclusters of innovation.” When such superclusters emerge, each new wave starts intersecting with, buttressing and boosting other innovations (as with the impact of steam, electricity or combustion engines). Diffusion and sharp drops in marginal costs drive ever-wider adoption, and while this is not a straight-line process, history teaches us that once a breakthrough occurs, there is virtually no way of stopping the waves of new products and innovation that have always, ultimately, led to a comprehensive re-wiring of most aspects of life, for both good and bad.
Perez comes into her own by linking the human spirit that drives new ideas with capital markets and the critical role that money and asset bubbles play in facilitating and accelerating innovation. Whether it was railways, canals or the dot-com era, each supercluster almost invariably led to an asset bubble, losses of invested capital, debt moratoriums and frequent recessions or severe economic slowdowns.
However, ultimately, societies and businesses have always adjusted, leading to a significant rise in productivity and wealth.
Whereas prior to the first Industrial Revolution, labour productivity was virtually stagnant (growing less than 0.1% per annum), the first several waves of industrial revolutions boosted this to around 0.5% by the mid-19th century, then toward 1% by the early 20th century, and to around 2% in the 1960s to 1980s. For several reasons, the Information Revolution has thus far depressed productivity back toward 1%, with neither the computer nor the internet revolution leading to a sustained rise in labour or multi-factor productivity.
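To make these growth rates concrete, here is a minimal back-of-the-envelope sketch in Python. The rates are the rough, illustrative figures cited above (not precise historical series), and the 5% entry is the hypothetical AI-era rate mooted later in the piece; the sketch simply shows how long output per worker takes to double at each pace:

```python
# Illustrative arithmetic only: doubling times implied by the rough
# labour-productivity growth rates cited in the text.
import math

growth_rates = {
    "pre-industrial (<0.1%)": 0.001,
    "mid-19th century (~0.5%)": 0.005,
    "early 20th century (~1%)": 0.01,
    "1960s-1980s (~2%)": 0.02,
    "hypothetical AI era (~5%)": 0.05,
}

for era, g in growth_rates.items():
    # Years for output per worker to double at constant growth g:
    # solve (1 + g)^t = 2  =>  t = ln(2) / ln(1 + g)
    years = math.log(2) / math.log(1 + g)
    print(f"{era}: output per worker doubles in ~{years:,.0f} years")
```

At pre-industrial rates, living standards doubled roughly every seven centuries; at the 2% pace of the 1960s to 1980s, they double within a single working lifetime, which is why these apparently small differences in growth rates matter so much.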
Technologists describe this as a U-shaped response. Productivity falls in the early stages, as neither societies nor businesses understand how best to use the new technologies, and many of the new innovations ‘kill’ the rest of the economy, one cut at a time. But eventually, as technology adoption widens, societies begin to better balance the positives and negatives, yielding stagnating outcomes (the bottom of the U curve), before the technologies ultimately deliver a sustained rise in productivity.
Historically, it took as long as two to three human generations to fully crystallize the benefits of new technologies. In the intermediate period, technological breakthroughs aggravated inequalities and inequities, causing flare-ups in social polarization and geopolitical tensions. One could argue that the revolutions of the 1840s to 1860s were an indirect outcome of the first waves of industrial innovation, while the long war of the 20th century (i.e. the Great and Second World Wars) was an aftershock of subsequent industrial waves. Equally, today’s extreme societal polarization and geopolitical tensions are a direct outcome of the disruptive power of the Information Revolution.
AI is far more disruptive than industrial revolutions
AI can be regarded as either a brand-new revolution (adding to the five identified by Perez) or perhaps more appropriately as the pinnacle and the escape velocity of the Information Age.
Kevin Drum, the late American journalist and blogger, once presciently summarized the difference between the industrial revolutions and the Information and Digital Age: “The Digital revolution is going to be the biggest geopolitical revolution in human history … the industrial revolution changed the world, and all it did was replace human muscle.”
McKinsey Global Institute has attempted to quantify that difference. Although clearly imprecise, their estimate is that the impact of the Information Age is likely to be around 3,000 times the ultimate impact of the Industrial Age (i.e. roughly 300 times the scale at 10 times the speed, hence 300 × 10 = 3,000). In other words, instead of just impacting one or two industries or sectors, the Information Age affects and distorts almost everything: from the functioning of labour and capital to societal interactions, the nature of work, how we are informed or entertained, and even what it means to be human.
The pressures are already evident in the declining premium for college education (including ‘hot’ areas like computer science) and in the growing signs of white-collar employment disintermediation. Although today’s LLMs (large language models such as ChatGPT) are still highly structured (akin to microprocessors), they do not need to be better than humans. Given AI’s exceptionally low cost and scalability, being ‘good enough’ is already sufficient for it to propagate and to erode marginal and, eventually, average demand for, and compensation of, white-collar employees.
It is only a matter of time, perhaps within a decade, before the fusion of AI with robotics, cloud computing and 3D printing aggressively disintermediates blue-collar workers. Whether it is the ability to print houses and buildings, manufacture air-conditioning units without moving parts, print entire aircraft engines, develop self-healing pipes or deploy autonomous trucks and taxis, the opportunities for efficiencies and for a collapse in marginal costs are arguably even greater than those in services.
As subsequent iterations improve (today’s AI already outperforms most human coders, aces general-knowledge tests and beats humans at pattern recognition), these technologies will be deployed to reduce costs and improve efficiency. Although there are suggestions as to how we might turn them toward augmenting rather than replacing humans, these are likely to fall on deaf ears. To be sure, this does not mean that most jobs will disappear outright. Rather, the marginal utility and value of humans will inexorably deteriorate, and there will be a persistent fall in demand for their labour. As AI progresses towards Artificial General Intelligence (AGI), people will also gradually lose their role as the ‘brains’ of machines, the role that kept humans on top by creating ever more complex and productive tasks for people to accomplish.
While most economists assume that new jobs will be created that we cannot even envision today, I doubt it, at least in a conventional sense. It is true that in the 20th century a buggy driver became a truck driver and a farmer turned into a factory worker. However, when there is no need for construction workers, taxi drivers, plumbers, writers or computer coders, it is not clear what other niches humans could occupy. It is far more likely that the relative value of people (in the eyes of other people, which is what matters) will be disconnected from conventional jobs or professions and will take other forms, such as human qualities, religion, sports and entertainment.
Eventually, a new balance will be established, and productivity will rise massively (perhaps to 5% or more), but not before societies and businesses adjust to what Perez described as the new “techno-economic” paradigm, which could take a decade or two. As McKinsey highlighted recently, in order to crystallize up to $18 trillion in AI productivity gains, societies will need to find that elusive equilibrium and business processes will need to be extensively rewired; not to mention that “workers will need support in learning new skills and some will change occupations.” That is a gigantic understatement.
Politically, eroding marginal and average returns on labour will keep fanning social and geopolitical pressures. When combined with diminishing analytical skills (a recent MIT study suggested that extensive use of ChatGPT might result in markedly weaker cognitive skills) and declining writing and reading literacy (or what Adrian Wooldridge described as the return of the Middle Ages), the environment will magnify personality-driven politics and populism. Gustave Le Bon described this perfectly in the late 19th century: “abusive forms of violent affirmations, exaggerations, resorting to repetitions and never attempting to prove anything by reasoning.” Today’s social media is far more dangerous than the yellow press, radio or broadcasting of prior eras.
All of this promises to be a perfect recipe for anger, frustration, grievances, and hence, deep and persistent polarization, both locally and globally.
The disruption is also becoming evident at the corporate level, especially in the digital and software industries. While these are still dominated by second-generation technology giants (such as Alphabet, Meta, Microsoft, Apple and Amazon), they are now being aggressively challenged and, in many ways, disintermediated by a wave of new AI start-ups (such as OpenAI, Anthropic, Cursor, Perplexity, Windsurf, Clay and Paradigm).
Unlike the conventional giants, which are still largely based on technologies that are at least twenty years old, the start-ups are built around and rooted in AI, relying to a far greater degree on intangible assets, which behave very differently from conventional tangible capital: they offer much stronger operational scalability, greater synergies and wider spillover effects. This enables the new start-ups to grow much faster, breaking down barriers between industries and lowering Warren Buffett’s proverbial moats.
How the Information Revolution changes asset bubbles
This returns us to the original question: What does it mean for capital markets and the impact that economic, social and political forces might have on the creation and deflation of asset bubbles?
There is no doubt that investment in AI infrastructure (including data centres) is running considerably ahead of likely revenues — at least in the short term.
In the year to March 2025, the five US hyperscalers (Meta, Alphabet, Microsoft, Amazon and Oracle) invested more than $500 billion in capital expenditure and R&D, up 85% from the 2021 run rate. This represented around 37% of revenues, versus closer to 20-25% in the previous decade. While none of the players fully breaks down individual categories, the bulk of this spending must relate to AI, and the current expectation is that the investment rate will rise to more than $1 trillion by 2029.
On the other side of the ledger, while corporates report growing AI revenue streams, these are at this stage mostly measured in billions (for example, Microsoft’s AI revenue stands at approximately $15 billion), creating a perfect backdrop for Perez’s bubble of excessive investment, inflated values and limited revenues.
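The scale of that mismatch is easy to sanity-check. The following minimal sketch uses only the round numbers cited above (the $500 billion spend, the 85% growth versus 2021, the ~37% investment ratio and the roughly $15 billion Microsoft AI revenue figure); it is illustrative arithmetic, not a reconstruction of reported accounts:

```python
# Back-of-the-envelope check of the capex-versus-revenue gap described
# above. All inputs are the article's round numbers, not reported figures.
capex_and_rd_2025 = 500e9   # hyperscaler capex + R&D, year to March 2025
growth_vs_2021 = 0.85       # "up 85%" versus the 2021 run rate
investment_ratio = 0.37     # ~37% of revenues

implied_2021_run_rate = capex_and_rd_2025 / (1 + growth_vs_2021)
implied_revenues = capex_and_rd_2025 / investment_ratio

print(f"Implied 2021 run rate:  ${implied_2021_run_rate / 1e9:,.0f}bn")
print(f"Implied group revenues: ${implied_revenues / 1e9:,.0f}bn")

# Compare the spend with an AI revenue stream still measured in the
# tens of billions (e.g. the ~$15bn figure cited for Microsoft):
ai_revenue_example = 15e9
ratio = capex_and_rd_2025 / ai_revenue_example
print(f"Capex and R&D per $1 of that AI revenue: ${ratio:,.0f}")
```

On these numbers, the implied 2021 run rate was roughly $270 billion, implied group revenues roughly $1.35 trillion, and the spend exceeds that single cited AI revenue stream by a factor of about 33: exactly the configuration of heavy investment and thin revenues that Perez’s framework would flag.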
However, unlike prior revolutions, the Information Age massively shortens and accelerates business cycles. As one financier described it: “Many start-ups are reaching up to $200 million in annual recurring revenues in less than [a] couple of years, faster than [we have] seen before, and often with very nimble teams.” This is driven by AI’s scalability, synergies and spillover effects. As Charles Ferguson suggested, we might soon be in a position in which a single start-up with as few as a dozen employees could cause unprecedented disruption. Unlike the dot-com era, the time between investment and cashflows is measured in quarters, not years or decades, reducing the strain on capital markets and valuations.
Also, for the first time in human history, investors inhabit a world of abundant, not scarce, capital. Depending on how one measures overlapping claims and the rapidly growing pools of private capital and debt, the value of financial capital is at least five to six times that of the real economy (according to the Financial Stability Board) and potentially as much as 10 times. As with any oversupplied product, excess capital no longer commands a price, which explains why yield curves and spreads have stopped generating predictable outcomes. Although abundant capital makes the job of estimating variables such as neutral or risk-free rates harder, it also offers another layer of protection against a more brutal capital market repricing.
Finally, the sheer size and depth of AI’s disruption yields unparalleled opportunities across numerous fault lines: everything from defence and security to transactions (such as the blockchain) and money; from robotics and automation to biotechnology; from education to news and entertainment; from fundamental research to innovation. Neither the dot-com era, the combustion engine nor even electricity was as pervasive or as structural. This offers start-ups multiple niches and ultimately justifies some overcapitalization.
Of course, this does not mean that investors will avoid bumps along the road. As one recent book on disruption described it: “industrial revolutions pale in comparison to today’s convulsions … we have to rethink the assumptions that drive our decisions on such critical issues as consumption, resources, labour, capital and competition.” Anyone who does not overcapitalize today risks being left behind tomorrow, gasping for air. The atrophy of mean-reversion and of conventional investment styles is another important side-effect, with the ‘winner takes all’ mantra dominating the investment landscape and resulting in a persistently high concentration of returns. This does not mean that investors should simply stick with the “Magnificent Seven” (indeed, AI can destroy just as easily as it can create), but rather that they should maintain a razor-sharp focus on the next winners.
This article was first published by Brad DeLong’s Grasping Reality.