The Great Inflection? A Debate About AI and Explosive Growth

Matt Clancy and Tamay Besiroglu

A conversation about what happens to the economy when intelligence becomes too cheap to meter.

Many people working on artificial intelligence and AI-related issues think that our world will change very dramatically once we develop AI capable of performing most of the cognitive work currently done by humans. Many economists, on the other hand, adopt a more cautious stance, expressing doubt that AI can dramatically increase the rate of change. 1 In this conversation, economist Matt Clancy and research scientist Tamay Besiroglu debate the prospects for a radical rupture with historical rates of economic growth and technological progress.

This conversation is based on a series of back-and-forths Matt and Tamay had in spring of 2023. The opinions expressed do not necessarily reflect the views of their employers.

Tamay: Hi, Matt. I’m excited to have this chat.

Matt: Likewise!

Tamay: Before we delve in, I believe it’s crucial to frame our dialogue by outlining the central themes we’ll be addressing.

This is a debate about the expected economic impact of artificial intelligence much more advanced than today’s large language models like GPT-4. Specifically, I want to discuss the impacts of AI advanced enough to perform most or all tasks currently performed by humans. This includes things like running companies and all the planning and strategic thinking that comes along with that, designing and running scientific experiments, producing and directing movies, conducting novel philosophical inquiry, and much more.

These systems I have in mind are clearly leaps and bounds more advanced than any systems we have today, so why do I think this is even worth discussing? I think there is compelling work suggesting it is very likely (say, about 80% likely) that AI systems that at least match humans in both generality and capability will be developed this century. 2

Our second theme centers on the concept of “explosive growth.” I'm referring to a rate of growth that far surpasses anything we’ve previously witnessed — a minimum of tenfold the annual growth rate observed over the past century, sustained for at least a decade.

I am inclined to believe that such explosive growth is not just a possibility, but a probable outcome when we transition to an era where AI automates the vast majority of tasks currently performed by humans. To put this in numbers, I’d currently assign a 65% chance of this happening. I think you disagree with this view, correct?

Matt: That’s right. While I think it’s very likely that growth will pick up once we deploy AI throughout the economy, I think it’s maybe a 10% to 20% chance, depending on how I’m feeling, that economic growth becomes explosive, by your definition, with most of that probability clustered around the low end of explosive growth. 

‘Explosive’ is certainly an apt term for what we’re talking about. GDP per capita grew at roughly 2% per year over the 20th century, so if we jump to 20% per year for 10 years, that’s about 90 years of technological progress (at 2% per year) compressed into a decade. Ninety years of progress was enough to go from covered wagons to rocket ships! And your definition also encompasses even faster growth persisting for even longer!
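
To make the arithmetic behind that explicit, here is a quick back-of-the-envelope check in Python (a minimal sketch using only the 2% and 20% figures above):

```python
import math

# Ten years of 20% annual growth, versus the 20th-century baseline of ~2%.
explosive_decade = 1.20 ** 10  # total growth factor after a decade at 20%
baseline_rate = 0.02

equivalent_years = math.log(explosive_decade) / math.log(1 + baseline_rate)
print(f"growth factor after the decade: {explosive_decade:.1f}x")  # ~6.2x
print(f"equivalent years at 2% growth: {equivalent_years:.0f}")    # ~92 years
```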

Frankly, I’m not sure our models of economic growth are up to the task of extrapolating that far outside of historical experience. The economy is so complicated that economists have to make a large number of simplifying assumptions. We try to focus on simplifications that aren’t decisive, so that greater complexity and realism wouldn’t much change the model’s takeaway. But the further you take a model outside the context it was designed to explain, the less confident you can be that those simplifications remain innocuous.

Tamay: In my view, accelerating growth is probably not decoupled from historical experience. Economic growth today is much faster than before the industrial revolution roughly 200 years ago. Moreover, the agrarian societies that emerged from the Neolithic Revolution likely saw much faster economic growth than hunter-gatherer subsistence societies of 10,000 years ago. In this sense, economic acceleration roughly on the order we’re considering for this debate is perhaps actually something of a historical norm.

There is also precedent for very high levels of growth. In particular, double-digit growth has occurred many times in the context of “catch-up growth” in East Asia in the ’60s and ’70s, notably in China but also in Hong Kong, Singapore, and South Korea, to name a few. I think this further helps rule out very low priors for growth accelerations.

I think this historical evidence is compelling enough to require one to assess the arguments for expecting about 20% growth rates in gross world product on their merits, rather than supposing that we should ignore this possibility until we have very strong, watertight models of the economic effects of AI.

Matt: I’m happy to grant that we have seen accelerations of growth on par with what you’re describing, and maybe rare cases of sustained growth that get within sight of the levels you are talking about. But at the same time, I think it’s notable that none of the accelerations we’re talking about or the rapid rates of catch-up growth experienced in the Asian tigers are typically believed to be driven by a sudden influx of intelligence into the economy. Still, I think the fact that growth has accelerated and has in some cases gone into double digits is a good reason to think there is some chance it could happen again.

I agree AI like you’re talking about will be transformative. But we’ve had transformative technologies before without explosive growth. 3 Since the so-called First Industrial Revolution set the economy on its modern growth trajectory, we’ve gone through several subsequent industrial revolutions: electricity, the chemical revolution, and the birth of the computer age. In the end, each of those radically transformed the material world we live in. And yet, if you tried to spot those revolutions by looking at a chart of economic growth, you would be hard pressed to see much of anything: Growth has been remarkably stable at around 2% per year, in the U.S.A., for more than a century.

What do you think makes AI different?


Tamay: Our best models of economic growth seem to support the prediction that if we can develop AI that is a suitable substitute for human labor, the growth rate could potentially increase very substantially, at least for a while.

One key insight, from the Nobel laureate Paul Romer, is that ideas are important for economic growth and unusual relative to other economic factors, since their availability does not diminish with increased usage. The Python programming language, the chain rule in calculus, or Maxwell’s equations can be used by countless individuals without becoming scarce. Most goods are not like that. For example, if you invest in a new building, the more people who use it, the less space they each have.

The semi-endogenous model of economic growth, incorporating Romer’s insight, says that there are three important “factors of production,” or inputs, for final goods: capital (machinery, tools, buildings, etc.), labor, and ideas. Theoretically, doubling all your inputs to production should double your output, since you could set up identical copies of existing production processes. But doubling capital and labor also doubles the inputs to the production of new ideas, as more workers have the resources to do more research. And because each worker and firm can use these new ideas without diminishing the total supply, all of them can become more productive. This results in the total output growing more than proportionally. In other words, this model implies what economists call “increasing returns to scale”: When your inputs double, your output more than doubles.

Semi-endogenous growth theory predicts that economic growth is primarily constrained by population growth — and with a growing population, the economy can grow super-exponentially. A larger population generates more ideas, thereby enhancing productivity. The enhanced productivity then boosts output further, creating a larger economy which can sustain an even larger population, creating a loop of continuously accelerating growth. Although historical economic data can be unreliable, one prevalent interpretation, favored by economist Michael Kremer, aligns with this theory: The human population and the economy have grown in lockstep, resulting in a super-exponential increase in output.
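
As an illustration, here is a deliberately stylized simulation of that loop. The parameters and the Malthusian “population tracks technology” closure are my own simplifications for exposition, not a calibrated model:

```python
# Stylized Kremer-style loop: more people -> more ideas -> more output ->
# a larger sustainable population. Parameters are illustrative only.
A = 1.0                  # stock of ideas (arbitrary units)
delta, phi = 0.05, 0.5   # research productivity; phi < 1: ideas get harder

for year in range(50):
    L = A                             # Malthusian closure: population tracks technology
    new_ideas = delta * L * A**phi    # idea production rises with population
    if year % 10 == 0:
        print(f"year {year:2d}: growth rate ≈ {new_ideas / A:.1%}")
    A += new_ideas
# The printed growth rate keeps rising: super-exponential, accelerating growth.
```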

When innovations like an agricultural or industrial revolution led to population explosions, growth accelerated. And economic growth has likely been capped at around 2% during the 20th century because, due to biological limitations on reproduction, population growth can’t exceed mid-single-digit percentages annually.

This framework also explains why the invention of electricity, the chemical revolution, or the birth of the computer age didn’t cause accelerating growth: They didn’t perfectly substitute for human labor, and so didn’t fundamentally change how the inputs to production are brought about. But with AI, our population of workers and idea producers could once again grow exponentially. What I take to be our most compelling theory of economic growth — semi-endogenous growth — implies that this will return us to what we might consider the historical trend of accelerating growth.

Matt: I think that’s a fair representation of what those economic growth models say. In short, the ultimate driver of economic growth is the discovery and application of new knowledge. Since AI, like human minds, can create ideas, this feedback loop is possible in a way that it isn’t for other technologies. This is one reason I don’t think explosive growth is simply impossible. At the same time, I’m not sure the dynamic you’re describing, where AI is different because it helps us create new ideas, is actually as different from historical experience as you say.

Here’s a verbal sketch of a model of innovation with AI by economists Philippe Aghion, Benjamin Jones, and Charles Jones that I think does a good job of illustrating just how AI needs to be different from other technologies in order to lead to explosive growth. Suppose there are an enormous number of different tasks that need to be done to invent new technologies — everything from developing new scientific theories and conducting experiments to figuring out how to manufacture and distribute new inventions. Let’s also assume that to invent enough new technologies to deliver 2% annual economic growth, every one of those tasks needs to get done — you can’t skip any. And each task takes a certain amount of time to do. Last, let’s assume innovation gets harder as you go, 4 so that each of those tasks needs 1% more inventor hours every year in order to keep up the same pace of technological progress.

Now for the AI. Let’s assume that technological progress means we steadily figure out how to get machines to do tasks that previously only humans could do. I think there is actually nothing new about that, even for cognitive work. We used to transmit knowledge to each other by meeting face to face; now you can put the knowledge in a book that can automatically communicate it to any reader. We used to calculate statistics with human computers; now we use mechanical ones. AI continues that dynamic. One might think, in this model, that as we figure out how to hand off more and more of the tasks to the machines, growth should steadily accelerate, since machines can be multiplied at a much faster rate than human workers.

But that’s not actually the case. For example, suppose we figure out how to hand off half the tasks of technological progress to machines. For now, we can assume the humans who used to do these tasks are unemployed, or receive some kind of universal basic income. The machines might be able to complete their half of the tasks at lightning speed but that wouldn’t, on its own, speed up the overall rate of technological progress. That’s because the other half of tasks would take just as long to do as before, and technological progress requires all the tasks to be completed. It’s like a factory assembly line where some workers are really fast and others are slow. If workers are trained to do only their task, and can’t help each other out, then the overall speed of production is bottlenecked on the slowest worker.

That example isn’t quite right either, though, because workers can be trained to help each other out. In fact, if half the tasks humans do were automated, then we might be able to retrain the workers whose jobs are replaced to focus on tasks only humans can do. With twice the workforce on each of these tasks now, we can get those tasks done in half the time. So, in fact, this simple model implies that if we automate half the tasks, technological progress takes half the time (compared to automating nothing).

If we make more realistic assumptions about the pace of automation historically, this story shows how advancing automation is consistent with steady exponential growth like we observed over the previous century. Suppose we automate 1% of the tasks each year. That frees up 1% of the labor force, and, with retraining, the tasks we have not yet automated get a 1% larger labor force. But recall I assumed that innovation gets harder, so that each year, each task takes 1% more hours to complete. The two forces balance out, and we end up getting consistent 2% growth.

That does seem to match the experience of the 20th century, during which we automated a great deal of work that previously only humans could do, and humans had to continually shift the nature of their work. And yet, through that whole period, growth didn’t accelerate.

In this kind of model, what can deliver explosive growth? There are two routes. Either you greatly speed up the pace of automation, so that you can shift a lot more than 1% of the workforce onto the remaining tasks, or you automate 100% of the tasks necessary for innovation. If machines really can do everything, you do get a feedback effect where greater economic growth rapidly improves your ability to discover new technologies.
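
To see how those forces net out numerically, here is a toy rendering of the sketch above, with parameters chosen to match the 1% figures; it is my own illustration, not the authors’ actual model:

```python
# Toy version of the bottleneck story: a fixed workforce concentrates on the
# shrinking share of innovation tasks still done by humans, while ideas get
# 1% harder to find each year.
def growth_after(automation_rate, years=100):
    human_share, difficulty = 1.0, 1.0
    for _ in range(years):
        human_share *= 1 - automation_rate  # machines take over more tasks
        difficulty *= 1.01                  # innovation gets harder
    effort_per_task = (1.0 / human_share) / difficulty
    return 0.02 * effort_per_task           # 2% growth at the starting ratio

print(f"1%/yr automation: {growth_after(0.01):.1%}")  # ~2%: the forces balance
print(f"5%/yr automation: {growth_after(0.05):.1%}")  # growth explodes upward
```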

You don’t find this argument compelling though. Can you explain where you think it goes awry?

Tamay: The usual way we think about economic bottlenecks is as goods or services that are complementary to one another: The outputs from the automated task are more valuable when combined with the outputs of non-automated tasks. For example, an AI that can design new products is much more useful when we can quickly build working prototypes. This means that scaling up “digital workers” could provide limited value if they still could not perform all the tasks humans can.

I think you give the impression that, in this case, “digital workers” would provide very little value. However, I don’t think this is correct. The standard theory of economic production tells us it is hard but not impossible to increase productivity when bottlenecked by human labor. Let’s say we automate 75% of all tasks in the economy. In this case, we might conservatively need to scale up the number of “digital workers” 10 times to match the effect of doubling all human inputs, but scaling of at least this magnitude is precisely what I expect! Digital workers are just computations on chips, so we can make more of them quickly by channeling more money into producing and improving AI hardware.
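
One way to make that concrete is with a standard CES (“constant elasticity of substitution”) production function over tasks. The elasticity below is my own illustrative pick, chosen so the numbers land near the 10x figure; nothing here is an estimate from the conversation:

```python
# CES output over automated and non-automated tasks. rho < 0 means tasks are
# gross complements, i.e., bottlenecks bind. rho = -1.95 (elasticity ~0.34)
# is an illustrative choice, not an estimate.
def output(auto, human, auto_share=0.75, rho=-1.95):
    return (auto_share * auto**rho + (1 - auto_share) * human**rho) ** (1 / rho)

target = output(2.0, 2.0)       # doubling every input exactly doubles output
k = 1.0
while output(k, 1.0) < target:  # scale up only the automated 75% of tasks
    k *= 1.01
print(f"digital-worker multiplier needed to match the doubling: ~{k:.0f}x")
```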

To make your argument work, I think you need to make a few bold — and, to me, implausible-seeming — assumptions. The existence of some bottleneck tasks is not enough. You must show that there are many tasks that AI just cannot automate, say on the order of 25% or more.

Since the outputs of different tasks complement each other, the value of automation compounds: As more tasks are automated, already-automated tasks become even more valuable, substantially boosting growth. Combined with the growth effect of concentrating your workers in a smaller set of non-automated tasks, AI automation could increase output by one or two orders of magnitude, even if we assume that 25% of tasks cannot be done by AI.

Therefore, even if we assume that there are quite a few tasks that AI systems cannot do, we will probably still see explosive growth if going from little to substantial AI automation happens on the order of decades. Hence, for the argument to work, you must show that this AI automation will likely be drawn out and take on the order of a century. However, this runs counter to the existing research on the topic, such as Tom Davidson’s report, as well as recent evidence from the rapid progress in AI. 5 This evidence suggests, by contrast, that we should expect AI automation not to be drawn out but to be relatively compressed around the middle of this century.

Another way you could rescue the argument is to suppose that it’s very hard to substitute AI for human labor (notably, more so than I think we have evidence to suppose). However, this likely has the effect of delaying rather than blocking explosive growth. In fact, severe difficulties here could make explosive growth more likely! That’s because — as you say — such difficulties mean that growth will be bottlenecked by the part of the process that’s slowest to improve. But when those final tasks are automated, this same dynamic leads to very substantial sudden spurts in growth as the accumulated productivity that the prior bottlenecks suppressed is unlocked.

We should also expect investment and the vastly expanded amount of cognitive effort to be specifically aimed at automating bottleneck tasks. Take a specific example: Performing “embodied tasks” might be hard for AI. As a result, the prices of manufacturing goods will remain high, while the prices of automated “knowledge work” might come down, just like how the share of the economy devoted to agriculture plummeted after tasks like plowing and harvesting could be done by machines, while everything else grew. Manufacturing, construction, and similar sectors could see higher relative prices once “knowledge work” is substantially automated. Investors will generally aim at automating tasks that bottleneck economic growth, as these sectors become more relatively valuable and profitable.

Will all this investment in compute and R&D be enough to automate most or all tasks? While this is a very difficult question for which we only have fairly weak evidence, relevant work suggests that the amount of additional computation required for full automation is, in some sense, not all that large: Scaling computation by only half as much as we’ve seen in the past 50 years could very well be sufficient. Overall, this leaves me with the mainline expectation of the development of advanced AI involving accelerating automation until full automation.

Matt: Got it. Let me respond to your rebuttals.

First, your arguments suggest that even if artificial general intelligence can’t do everything, we can still get a temporary bout of explosive growth — maybe lasting decades — before human bottlenecks come back to bite us. That happens because extra intelligence applied to automated tasks isn’t negligible even in the absence of full automation; it vastly increases economic output and frees up a bunch of labor to work on the “human-essential” tasks. Sure, it eventually hits diminishing returns, so maybe you need many more digital workers than you would need humans to double production, but digital workers will probably be plentiful. 

I think our historical experience of automation is evidence against that. We’ve been automating parts of the economy for a long time now: Dockworkers used to manually unload ships, and that’s now done much more often by automation; assembly line workers are often replaced by industrial robots; human computers used to do the work of silicon ones. In principle, you can build as many machines as you want, and so one could argue that the number of effective workers in those automated sectors exploded, or could have exploded. And humans, freed from the need to work the docks, stand on assembly lines, or calculate with pen and paper, can focus on the remaining non-automated jobs. And they did! But economic growth remained steady. So I’m not sure why it should be so different when it is cognitive work that is being handed off to the machines.

Second, you’re also saying that as automation proceeds, it will get more and more profitable to figure out how to automate the remnants that depend on expensive human labor. That will lead to more effort to automate these bottleneck sectors. I agree that will be the case; lots of economic studies document that R&D responds to these kinds of opportunities. But again — hasn’t this always been the case? The U.S. economy is a lot bigger today than at the beginning of the 20th century, and machines can do a lot more of the jobs we used to have to do ourselves, freeing up a lot of brainpower. Meanwhile, the incentive to automate surely has gone up, as wages rise and the consumers are richer than ever. And in fact, we do spend a lot more on R&D! But the increased effort at automating the rest of the economy hasn’t led to an uptick in growth. That suggests to me your model is missing something important.

For explosive growth to happen, we need a break from that historical experience of steadily advancing automation. Either the rate at which we automate the tasks humans do needs to accelerate or we need to actually automate everything so the pesky human bottlenecks don’t matter anymore. If, instead, we end up in a world where AGI slowly and steadily takes over more and more tasks, then we remain always stuck in the kind of world we’ve been in for the last century, with steady exponential growth.

Tamay: I agree that explosive growth most likely requires accelerating or full automation. It’s a good question: Why did past automation not noticeably accelerate growth, as I expect will likely happen with AI? 

In the past, automation mostly took the form of technologies that automate small segments of production, offering modest benefits while requiring numerous expensive synchronized changes across the economy to be implemented. In contrast, if AI is capable of everything a human can do, we could potentially automate large numbers of tasks in one go, with fewer costly updates to existing processes.

In the past, automation was largely the product of human ingenuity: Engineers designed better machines and reorganized factories in new ways to ensure these machines complemented existing processes. But scaling the compute used to train AI models can meaningfully substitute for human ingenuity.

In contrast to labor, compute increases proportionally with investment. This means that the inputs that fuel automation can be expanded much more rapidly and efficiently. 6 While engineering and tinkering are still useful for AI automation, simply adding compute can produce models that perform very well at a wide variety of tasks straight out of the box. 

To really appreciate the force of this argument, it is important to recognize just how incredibly fast the stock of AI-relevant compute can expand. In the past decade, the amount of computation used to train AI systems has doubled every six months, increasing by roughly 100-million-fold over this period. 7 This is a key reason to expect AI automation to happen in a short time span — given compute trends, we will likely have enough compute to automate 90% of tasks no more than a few decades after we will have enough compute to automate the first 20%. 

Even though full automation is not necessary for explosive growth, it just seems very likely to happen. It is, of course, a coherent possibility that we will come up with a new task for humans each time we automate one, so that, like Zeno’s tortoise, humans will stay ahead in the race against the machines. However, I think there are no good reasons to believe that when AI systems can perform almost all the tasks humans can do, there will be some convenient gap that humans can snugly fit into, and that, even with a millionfold increase in computation, these tasks will remain impervious to AI automation.

Given that you attribute only a 10% likelihood to explosive growth, it appears you consider both accelerating automation and full automation from AI highly improbable. I'd be interested to learn the underlying rationale that gives you such confidence in this perspective.

Matt: As a quick aside, note that the 100-million-fold increase in computing power dedicated to AI over the last decade has not led to an acceleration in economic growth so far. That said, I do think AI is pretty likely to boost growth, for many of the reasons you articulate. But my best guess at why we won’t see changes as dramatic as you anticipate is because there are going to be a billion little bottlenecks that will persistently slow the rate at which AGI takes over tasks.

Let me give you some examples. This is going to be a long list, so I won’t go into much detail on any particular item. Even so, I suspect there are many other issues that I am failing to imagine, precisely because it is hard to see the details that matter unless you are in the weeds.

To start, most tasks today require the ability to do stuff in the physical world. We can assume we’ll develop robots that can do that work, but that’s not a given. In other sectors, the issue might be supply of crucial raw materials (rare earth metals?), without which all the brain and muscle power in the world is useless. Elsewhere, the scarce resource might be suitable training data. The economy is full of jobs that can’t be easily codified into data accessible to a machine. Assuming our AGI has a robot body and full cooperation from the humans (neither guaranteed), it may need to learn a job at the same pace as a human apprentice (since that’s how fast data is generated). 

Time could be a binding constraint in a lot of other ways as well. In agriculture, it just takes a certain amount of time for the plants to grow. In entertainment, there are only so many hours in the day to watch movies and TV, read books, or play video games. Research itself also takes time beyond just the time to think — it tends to be an iterative process, where you theorize and plan, then test your ideas against reality. Those tests involve waiting for natural processes to play out: diseases to progress, social interventions to take effect, rockets to be built and launched (and blown up), and so on.

There are still other sectors where humanity (not merely intelligence) is seen as a crucial part of the value provided. Today we can watch or listen to the best performances ever recorded, but people still go to live concerts, plays, and sports. All else equal, in-person education seems to be preferred by a large number of people, despite the many conveniences of remote education. People could also insist that humans remain the ultimate decision-makers in politics and the legal system.

Elsewhere, the issues may be regulatory. If you want to sell cars, you’ll usually need to go through a dealer. If you want to build new buildings and infrastructure, typically you’ll need planning permission and to conduct environmental impact assessments. If you want to release new drugs, you need to run clinical trials to get approval from the FDA. If you want to fly autonomous vehicles, you need to get clearance from the FAA. If you want to provide services in the medical, legal, accounting, engineering, architectural, plumbing, cosmetology, and other licensed professions, you need a license. Then there is likely future regulation on AGI itself, which is a whole can of worms that I won’t get into.

I bet we will eventually update our existing regulations to better suit a world with AGI. But it will take time. And a lot of that updating has to proceed through the slow and messy world of democratic policymaking. 

Finally, there is a whole set of activities for which intelligence is deployed in a zero-sum game that doesn’t push forward overall progress. Much of politics has this character, and it’s not clear AGI will do anything more here than create a massive arms race between opposing parties and special interest groups. And there are other parts of the economy with elements of this style of zero-sum competition. Imagine an AGI arms race between advertisers for rival products. Or between corporate giants fighting over patents on the innovations their AGIs dream up.

To sum up, one scenario I can imagine is that many of the bottlenecks above (and many more I don’t have the institutional knowledge to imagine) are steadily overcome, but at a pace slower than anticipated by AGI optimists today. Then, by the time we clear out these bottlenecks, the parts of the economy where extra cognitive resources are least helpful for driving forward growth — perhaps zero-sum sectors, those best protected by entrenched interest groups, or those where time and humanity are key constraints — may have grown to occupy a large share of the economy, slowing the maximum possible contribution of AI to growth. Another scenario, just as likely, is that solutions to old problems will lead to new ones. That’s how it usually is.

I’m sure that’s a frustrating response to reply to, but at a high level, what do you think of my argument that a lot of annoying details will slow the impact of AGI enough to keep explosive growth perpetually out of reach?

Tamay: I agree that reality is messy, and many of these details might end up mattering in important ways. I’ll focus on the considerations that I find most compelling.

Might regulation impede the development and deployment of AI sufficiently to keep growth rates close to historical rates? I think this is plausible, but I’m not confident that it will.

The costs of training AI systems are dropping precipitously. Current estimates indicate that the costs involved in training machine learning models fall by roughly 60% every year. This means that training runs that currently only the largest technology companies could do will be accessible to most hobbyists in only 10 years’ time. Effective restrictions will therefore very quickly require surveillance at a potentially unprecedented scale.
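
As a quick sanity check on that claim (a sketch, where the $100M baseline is just an illustrative round number, not a figure from the conversation):

```python
# If training costs fall ~60% per year, costs shrink to 0.4x annually.
decade_factor = 0.4 ** 10
print(f"fraction of today's cost after 10 years: {decade_factor:.2e}")  # ~1e-4
print(f"i.e., roughly {1 / decade_factor:,.0f}x cheaper")               # ~9,500x
print(f"a $100M training run would then cost ~${100e6 * decade_factor:,.0f}")
```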

It is likely that, as you point out, AI systems will be precluded by regulation from providing various services, such as practicing law or medicine, among many other things. Regulation has arguably slowed down many futuristic technologies, such as nuclear energy, human genetic manipulation, and gene drives.

However, I’m not sure this provides much evidence for our ability to stem the tide with respect to AI. The potential value of AI deployment could be immense, with the prospect of increasing output by many orders of magnitude. I think the growth implications are therefore truly formidable, creating powerful incentives for eliminating or bypassing any existing constraints. I think this might be quite unprecedented relative to most other technologies that regulatory constraints were able to suppress in the past. 

Moreover, advanced AI could — and this is of course very unfortunate — potentially undermine the democratic process. As AI systems become capable of performing cognitive tasks at significantly lower costs, human labor may lose most, if not almost all, of its value. AI could enable the automation of protest suppression, while valuable assets like data centers can be located away from urban centers, reducing the risk of industrial sabotage. This suggests that beneficiaries of AI-driven growth could eventually play a major role in shaping regulations.

I think the bottom line on regulation is just that there are many unknowns and it’s difficult to be confident one way or another.

Secondly, let’s consider time-related bottlenecks. I agree that many important economic and R&D tasks require feedback from processes that typically play out over a long time. Now, imagine a world where advanced AI technology has enabled us to put 1,000 times more cognitive effort into R&D. 8 Would we still expect processes like testing new nuclear fusion reactors or drugs to take the same amount of time to yield useful feedback? I believe there's a strong chance that such delays could be significantly reduced. 

Many tasks that currently take months or years can be parallelized to reduce the amount of serial time involved. Rather than launching one rocket design, observing it blow up, going back to the drawing board, and launching the next design the following year, tens or hundreds of rockets could be launched basically simultaneously. While this approach doesn't allow for continuous refinement of each experiment, there's often a certain number of parallel experiments that can provide the same value of information as a set of sequential ones. This might be wasteful, but remember, we’re supposing that we might get bottlenecked by these types of experiments, so we are willing to spend a larger fraction of a larger amount of output on expediting this process.

Furthermore, I am quite confident that significant improvements can be made in current experimental design. Experiments usually aren’t optimally designed for maximizing the value of information. In a world where a thousandfold increase in R&D effort is also constrained by the serial time required for experiments, we will likely run much more well-crafted and informative experiments.

Additionally, in the future I'm picturing, AI systems could potentially lessen the need for certain experiments. Take drug trials, for instance. It seems plausible that AI systems could more effectively digest the results of all relevant prior experiments, use specialized AI systems for drug toxicity prediction for safety evaluations, and so on. In many hard-tech domains, like the design of cars, rockets, and semiconductor chips, it seems plausible that high-fidelity physics simulations could reduce the need for some, if not many, key experiments. Combining AI-generated evidence to inform AI-designed, highly parallel experiments will probably mean that we use our limited serial time many times more effectively.

I remain unconvinced by the arguments of specific resource bottlenecks that people often bring up. To convincingly argue that a resource could significantly limit rapid economic growth, one would need to demonstrate that A) the resource is vital for the economy, B) it is extremely challenging to find a substitute for it, even with significantly advanced technology, and C) the resource is so scarce that, even with formidable efforts, we cannot increase its supply by, say, an order of magnitude.

While I don't possess the expertise to determine if rare earth metals specifically meet these criteria, without further evidence supporting these points, I regard such arguments as weak.

Matt: Let me make three broad observations about your rebuttals before wrapping up.

First, I don’t think each of these bottlenecks is enough, on its own, to short-circuit explosive growth. It’s their accumulation. Access to specific materials won’t matter in all sectors, but it might in some. In others it’s time; in others data; in others regulation; and so on. Indeed, I don’t doubt there will also be some sectors where nothing much gets in the way of AI automation. Those sectors may well experience explosive progress, but in a big economy, if human demand for the service doesn’t expand dramatically, the most likely outcome is those parts of the economy become cheap and no longer count for much of GDP.

Second, a lot of the rebuttals strike me as pretty speculative. Can advanced AI learn a lot from parallel experiments? Can it find massive efficiencies in how we design experiments? Can it skip experiments by running sufficiently detailed simulations? Will economic benefits of AI be strong enough to incentivize regulatory reforms? Will AI disempower labor in a way that upends the political voice of the masses?

We just don’t know. There is a path through all these unknowns that leads to explosive growth, but I suspect that’s not where most paths lead.

Third, some of the arguments on regulation themselves hinge on the notion that AI will be very powerful, and then layer on top of that additional theories about how that power will affect our politics. If AI turns out not to be as powerful as you think — for example, because it turns out to be harder than you expect to efficiently gather data, or because one of the other bottlenecks described above turns out to be hard to crack — then that will undermine the conditions necessary for those theories about the political effects of AI to be applicable.

One final meta point. To return to some of my opening remarks, I am nervous about relying heavily on economic models to project a break with historical experience, as I don’t think the models are up to the job of making strong quantitative forecasts outside the range of historical experience. I think they point to faster economic growth, all else equal, and that’s my forecast for the effects of advanced AI too; but that’s about as far as I would take them. 

Another way to put this is: I just don’t think the tools of pure reason — in this case, mathematical models of the economy, in concert with only somewhat applicable historical data — are sufficiently powerful to reveal deep truths about situations where we have a paucity of data and experience. The world is too full of surprises. And I think that skepticism about the tools of pure reason also underlies my skepticism about the transformative power of artificial intelligence itself. If intelligence is powerful enough to accurately forecast far out of sample, into a world transformed by a novel technology, then a technology wielding vastly more intelligence will have a powerful tool at its disposal to remake the world. But if intelligence is too weak a light to see very far, then a technology wielding it may find its global impact slower and smaller than some AI optimists and pessimists believe.

Tamay: Numerous plausible obstacles could potentially hinder the course toward explosive growth. There are also other considerations that we haven't delved into, such as delays in investment or issues related to AI misalignment.

In light of this, extreme confidence in explosive growth happening, even conditional on advanced AI being developed, seems unwarranted. On the other hand, confidence that explosive growth won't happen also seems misguided, given the base rates implied by economic history, the predictions of multiple economic models, our understanding of the pace at which AI could facilitate extensive automation, and the lack of devastating counterarguments. Given all this, I believe that placing the likelihood of explosive growth — conditional on AGI — somewhere between 25% and 75% strikes a balance between this conflicting evidence.

Lastly, I’m grateful to you for joining me in this deep-dive discussion. I admire your work, and your time and insights are truly appreciated!

Matt: Same to you! This has been great. In fact, I propose we meet back once GDP per capita has tripled, whether that takes a few years or a few decades, to discuss what we got right and wrong.

  1.  For example, in a recent survey of economic experts, only 20% believed AI developed in the next 10 years would have a larger impact on growth than the internet, while 61% were uncertain.
  2.  This isn’t the forum to go into this in detail, but I have in mind specifically work such as Cotra (2020), Davidson (2022), and Epoch’s various works on the topic, as well as expert surveys such as Grace (2022).
  3.  Philippe Aghion, Benjamin F. Jones, Charles Jones, “Artificial Intelligence and Economic Growth,” in The Economics of Artificial Intelligence: An Agenda, ed. Ajay Agrawal, Joshua Gans, and Avi Goldfarb (NBER: 2019). For an explainer see "What If We Could Automate Invention?," published on New Things Under the Sun.
  4.  I think we have good evidence this is so: see "Science Is Getting Harder" and "Innovation (Mostly) Gets Harder," both published on New Things Under the Sun.
  5.  The report I am referring to is Tom Davidson’s “What a Compute-Centric Framework Says About Takeoff Speeds.” A summary can be found here, and an interactive playground can be found here. By “recent rapid progress” I’m broadly gesturing to the jump from AlexNet to GPT-4 in a decade.
  6.  See my paper with Nicholas Emery-Xu and Neil Thompson, "The Economic Impacts of AI Augmented R&D."
  7.  See Epoch AI's data on Our World in Data.
  8. This “thousandfold” multiple is meant to be illustrative rather than something I’m confident in. Given the trajectory of hardware and software and the costs of running these models, this is certainly plausible.

Matt Clancy is a research fellow at Open Philanthropy. He writes a living literature review on academic research about innovation at New Things Under the Sun.

Tamay Besiroglu is a Research Scientist at MIT’s Computer Science and AI lab and Associate Director of Epoch. Tamay focuses on the intersection of economics and computing.

Published June 2023

Have something to say? Email us at letters@asteriskmag.com.
