Two months after its release in late November, ChatGPT reached 100 million users — the fastest-growing software application in history. The past year has seen artificial intelligence models evolve from niche interests to household names. Image generation models can produce lifelike photographs of fantastical worlds, language models let anyone generate functioning Python code, and AI assistants can do everything from taking meeting notes to ordering groceries. And, like the web in its early days, the full impacts of AI have yet to be imagined, let alone realized.
But foundation models — the computationally intensive, powerful systems trained at large labs like OpenAI, Google DeepMind, or Anthropic (I happen to work at Anthropic, but wrote this in a purely personal capacity) — have darker potential too. They could automate disinformation campaigns and widen vulnerabilities to sophisticated cyberattacks. They could generate revenge porn and other disturbing deepfakes. They could be used to engineer a pandemic-class virus or make a chemical weapon. In the future, more capable models could become hard for humans to supervise, making them potentially difficult or impossible to safely control.
In recent months, everyone from policymakers and journalists to the heads of major labs has called for more oversight of AI — but there’s no clear consensus on what that oversight might look like, or even what “AI” means. Fortunately, the most dangerous type of AI — foundation models — is also the easiest to regulate. This is because creating them requires huge agglomerations of microchips grouped in data centers the size of football fields. These are expensive, tangible resources that only governments and major AI labs can obtain in large quantities. Regulating how they’re used is the focus of compute governance, one of the most promising approaches to mitigating potential harms from AI without imposing onerous restrictions on small academic projects or burgeoning startups.
Chips-First Regulation
AI hardware is a uniquely promising governance lever. In 2020, researchers from OpenAI noted that “Computing chips, no matter how fast, can perform only a finite and known number of operations per second, and each one has to be produced using physical materials that are countable, trackable, and inspectable.” Unlike the other components of AI development, hardware can be tracked with the same kinds of tools used to monitor other physical goods.
In the years since, policy researchers have begun to map out what a compute governance regime would look like. The basic elements involve tracking the location of advanced AI chips, and then requiring anyone using large numbers of them to prove that the models they train meet certain standards for safety and security. In other words, we need to know who owns advanced AI chips, what they’re being used for, and whether they’re in jurisdictions that enforce compute governance policies.
Who Owns Advanced AI Chips?
In order to track who owns advanced chips, and how many they have, a compute governance system will need to create a chip registry.
A chip registry could build on existing practices within the advanced chip supply chain. There are fewer than two dozen facilities worldwide capable of producing advanced AI chips, and these chips already come tagged by their manufacturers with unique serial numbers that are relatively hard to remove. These numbers could be stored in a registry alongside the identity of each chip’s owner. The registry would be updated each time a chip changed hands, and would also track damaged and retired chips.
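To make this concrete, here is a minimal sketch of what a registry entry and a transfer update might look like. The structure and field names (such as `ChipRecord`, `serial_number`, and `transfer_history`) are my own illustrative assumptions, not part of any existing proposal.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ChipStatus(Enum):
    ACTIVE = "active"
    DAMAGED = "damaged"
    RETIRED = "retired"


@dataclass
class ChipRecord:
    """One entry in a hypothetical chip registry."""
    serial_number: str      # manufacturer-assigned unique identifier
    chip_model: str         # product line of the accelerator
    current_owner: str      # registered owning entity
    jurisdiction: str       # country where the chip is located
    status: ChipStatus = ChipStatus.ACTIVE
    transfer_history: list = field(default_factory=list)


def record_transfer(record: ChipRecord, new_owner: str,
                    new_jurisdiction: str, when: date) -> None:
    """Update a registry entry when a chip changes hands."""
    record.transfer_history.append(
        (record.current_owner, new_owner, when.isoformat())
    )
    record.current_owner = new_owner
    record.jurisdiction = new_jurisdiction


# Toy usage: register a chip, then log a resale to another company.
chip = ChipRecord("SN-000123", "ExampleAccelerator-100", "LabA", "US")
record_transfer(chip, new_owner="LabB", new_jurisdiction="US", when=date(2023, 6, 1))
```

The record-keeping itself is simple; the harder questions are institutional.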
There’s precedent for registries like this across a range of domains, from fissile nuclear material to stock shares to cars. Tracking physical objects, particularly when very few companies produce them, is not a difficult technical problem — though it would require coordination and incentives. The most plausible maintainer of the registry would be a government agency, but it could also be done by an international agency, an industry association, or even an independent watchdog organization.
What Are the Chips Being Used For?
With an effective registry in place, we need to be able to tell what the chips are being used for. The most dangerous scenarios involve large numbers of chips being used to train a new state-of-the-art AI model, since larger models are more broadly capable and often display unexpected new abilities — both qualities that represent greater risk. Even if we already know from the registry that an actor owns a large quantity of chips, they could be spread across many smaller projects. We’d need a way to tell if all those chips were being used for a single training run — the compute-intensive process that produces a new model.
One way to achieve this is to require that anyone with a large number of chips “preregister” their training runs — likely with the same entity that maintains the chip registry. This could be done by the AI company, the data center (which is almost always a separate company, like one of the major cloud providers), or both, to provide an additional check.
In preregistration, the developer would have to specify what they were training and how. For example, they might declare their intention to make the next generation of their language model, and provide information about the safety measures in place, how the previous generation had performed on a slate of evaluations, and what evaluations they intended to conduct on the latest version.
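As a rough illustration of what such a filing could contain, here is a hypothetical preregistration record. Every field below (developer, estimated compute, planned evaluations, and so on) is an assumption about what a regulator might ask for; no such form exists today.

```python
from dataclasses import dataclass


@dataclass
class TrainingRunPreregistration:
    """Hypothetical fields a developer might file before a large training run."""
    developer: str                        # AI company conducting the run
    data_center_operator: str             # could file separately, as a second check
    model_description: str                # what is being trained
    estimated_compute_flop: float         # total training compute, in FLOPs
    chip_serials: list[str]               # registry IDs of the chips to be used
    safety_measures: str                  # safeguards in place during and after training
    prior_eval_results: dict[str, float]  # how the previous generation scored
    planned_evaluations: list[str]        # evaluations to run on the new model


filing = TrainingRunPreregistration(
    developer="ExampleLab",
    data_center_operator="ExampleCloud",
    model_description="Next generation of our language model",
    estimated_compute_flop=1e25,
    chip_serials=["SN-000123", "SN-000124"],
    safety_measures="Staged deployment; red-teaming before release",
    prior_eval_results={"dangerous-capability-eval": 0.02},
    planned_evaluations=["dangerous-capability-eval", "autonomy-eval"],
)
```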
For a regulator to fully guarantee that a training run is in compliance, they would need a way to verify what computations were actually taking place on the AI chips. But this presents a problem: AI companies wouldn’t want to share this information — which is, after all, their most sensitive intellectual property. However, there are ways for the regulator to check that the contents of a training run are in compliance without being granted full access. Yonadav Shavit, a PhD researcher at Harvard, has suggested a method in which chips would occasionally store “snapshots” of their computations at different checkpoints. Regulators could then examine these snapshots — after they’ve been run through an algorithm that retains the key information needed for verification while preserving the privacy of the most sensitive model details — and confirm that they match the conditions of the preregistered training run.
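The sketch below shows only the general shape of that comparison, not Shavit’s actual protocol: a plain hash stands in for the privacy-preserving transformation, and the function names (`snapshot_digest`, `matches_preregistration`) are purely illustrative.

```python
import hashlib
import json


def snapshot_digest(checkpoint_summary: dict) -> str:
    """Stand-in for the privacy-preserving transformation of a checkpoint.

    A real scheme would retain enough information to verify the run while
    hiding the raw weights; a plain SHA-256 hash is used here purely for
    illustration.
    """
    payload = json.dumps(checkpoint_summary, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def matches_preregistration(chip_logged: list[str], declared: list[str]) -> bool:
    """Do the digests logged on the chips line up with what was declared?"""
    return chip_logged == declared


# Toy usage: digests recorded during training are compared with the digests
# the developer committed to when preregistering the run.
logged = [snapshot_digest({"step": s, "loss": round(1.0 / (s + 1), 4)}) for s in range(3)]
declared = list(logged)
print(matches_preregistration(logged, declared))  # True: the run matches its filing
```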
This method would give regulators a high degree of confidence that a training run complies with regulations, but it’s still uncertain what it might look like in practice. It does have limitations — there are still open research problems in building the technical components of this process, and the risks of leaking IP and the costs of implementation might make it unwieldy to comply with. But less-thorough measures could still be a meaningful improvement over the status quo. For example, regulators could compel data center operators to report training runs above a certain compute size, and ask them to conduct “Know Your Customer” processes — procedures like the ones banks use to confirm that their customers are who they say they are. With this information, regulators could, at minimum, ensure the actor conducting the run is not a criminal organization or rogue state, and encourage them to comply with best practices. This alone could increase safety — and it has the virtue of being possible to implement immediately.
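Even this lighter-touch approach involves a bit of arithmetic: a data center operator would estimate a run’s total compute from the number of chips, how long they ran, and their throughput, then compare it against a reporting threshold. The sketch below does that; the 1e25 FLOP threshold, the per-chip figures, and the utilization factor are placeholders I chose for illustration, not values drawn from any actual rule.

```python
def estimated_training_compute(num_chips: int, hours: float,
                               peak_flops_per_chip: float,
                               utilization: float = 0.4) -> float:
    """Rough estimate of a training run's total compute, in FLOPs."""
    return num_chips * hours * 3600 * peak_flops_per_chip * utilization


# Hypothetical reporting threshold; a real regime would set this by rule.
REPORTING_THRESHOLD_FLOP = 1e25

run_flop = estimated_training_compute(num_chips=10_000, hours=24 * 30,
                                       peak_flops_per_chip=1e15)
if run_flop >= REPORTING_THRESHOLD_FLOP:
    print(f"~{run_flop:.1e} FLOPs exceeds the threshold: report the run and apply KYC checks.")
```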
In situations where training runs are deemed noncompliant, enforcement would be necessary. This could take the form of fines, criminal penalties, confiscation of chips, or the termination of a training run. And this enforcement would need to be fast — on the order of weeks or months — given the speed at which models are developed and deployed. (This may sound like an obvious point, but the speed of government enforcement actions varies widely — the Bell antitrust case took approximately a decade to resolve, whereas a drug bust might take only hours.) Compute governance researcher Lennart Heim uses the example of Silk Road — the illicit goods and services marketplace hosted on the dark web — as an analogue: After the government deemed Silk Road illegal, it identified the data centers that hosted the site and forced them to shut it down. It’s possible the government could follow a similar playbook to halt a training run midway.
Will the Chips Stay in Countries That Enforce These Rules?
All of this only matters if the chips stay in jurisdictions that sign on to enforce regulations on AI. Export controls are one rather blunt tool to keep chips from leaving compute-governed areas.
As the home to all of the companies that design advanced AI chips, the U.S. is well placed to enforce export controls. American companies Intel, Google, and Nvidia each control between roughly 70% and 100% of the market for the processors they make for AI (central processing units, tensor processing units, and graphics processing units, respectively). State-of-the-art AI models are overwhelmingly trained on GPUs and TPUs. This means the U.S. could establish these proposals almost unilaterally. (There are two extraordinarily important non-U.S. companies further up the supply chain — TSMC, in Taiwan, which fabricates the advanced chips, and ASML, in the Netherlands, which makes the machines TSMC uses. However, both countries are sufficiently interconnected with the U.S. supply chain that they will almost certainly cooperate on implementing a plan for oversight.)
The U.S. has already implemented a series of export controls that limit China’s access to frontier chips and chip-making technology. Other key countries in advanced chip manufacturing like the Netherlands, Taiwan, Japan, Germany, and South Korea have supported the U.S. efforts so far, suggesting they may be willing to support future export policies as well.
Ideally, this more extreme measure could be avoided by designing a lightweight regulatory regime that targets only the largest, riskiest AI development projects, which only a handful of companies can afford. In this case, other countries might be open to enforcing the policies themselves.
Decentralized Computing — A Challenge to Compute Governance?
So far, the methods I’ve discussed protect against risks from one kind of AI: foundation models trained on advanced compute in large data centers. Though state-of-the-art models are currently trained in this fashion, this may change. Researchers are currently studying the feasibility of using decentralized sources of compute to train a large model. This might look like stringing together smaller clusters of CPUs and GPUs located far from each other. An extreme case might even involve chaining together larger numbers of less-powerful processors, like laptops.
On its face, decentralized computing looks prohibitively inefficient. Current foundation models are much too large to store on an individual chip, or even a handful of chips. Instead, they are born in data centers where hundreds or thousands of processors are clustered together in racks and connected with cables. This lets chips talk to each other quickly. The farther apart they are — the greater the latency — the longer training takes. Latency also degrades the model’s performance: when updates are slow to reach the relevant parts of the neural network in response to a specific incorrect output, the model’s ability to learn is hampered. Decentralized training will need to reckon with these challenges.
However, these challenges aren’t insurmountable. Imagine a large language model trained solely on laptops. For a very quick sketch of the problem, with some speculative math: Supplying the compute needed to train GPT-3 on laptops instead of Nvidia chips would require around 10,000 2022 MacBook Pros working for a month. (This gives a sense of the computing power involved; it is not currently a feasible setup.)
Ten thousand MacBook Pros cost $24 million. The cost of training GPT-3 was likely between $4 million and $12 million when it first came out, and would be less than $1 million today. Decentralized training this way is not efficient by any means.
Now imagine the above example, but instead of 10,000 laptops, some actor used 600,000 laptops, accomplishing the same training run in one night. You might volunteer your computer while you sleep to contribute to a scientific effort, or to receive access to the model in exchange. Around a million people mine Bitcoin, even though Visa accomplishes a similar purpose far more efficiently. Decentralized computing for training AI could similarly become viable despite its inefficiency.
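For readers who want the speculative math spelled out, here is the back-of-envelope version. The constants are rough assumptions on my part: roughly 3.1e23 FLOPs to train GPT-3, roughly 1e13 FLOP/s of sustained throughput per 2022 MacBook Pro, and a $2,400 price per laptop; networking and latency overheads are ignored entirely.

```python
# Back-of-envelope version of the laptop scenario. All constants are rough
# assumptions, and networking/latency overheads are ignored.
GPT3_TRAINING_FLOP = 3.1e23        # assumed total compute to train GPT-3
LAPTOP_FLOP_PER_SEC = 1e13         # assumed sustained throughput per laptop
LAPTOP_PRICE_USD = 2_400           # assumed price of one 2022 MacBook Pro


def days_to_train(num_laptops: int) -> float:
    """Days needed to supply GPT-3's training compute with this many laptops."""
    seconds = GPT3_TRAINING_FLOP / (num_laptops * LAPTOP_FLOP_PER_SEC)
    return seconds / 86_400


print(days_to_train(10_000))        # ~36 days: roughly a month
print(days_to_train(600_000))       # ~0.6 days: roughly overnight
print(10_000 * LAPTOP_PRICE_USD)    # 24,000,000: about $24 million in hardware
```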
Why worry about a problem that is both technically unsolved and less efficient than the current paradigm? Decentralized computing is already gaining traction among researchers as an interesting area to work in, and some progress has been made in fields like medical AI. While inefficient, these methods could still be attractive to actors that don’t have a better choice. Most notably, Chinese labs could become key players in advancing decentralized training techniques. Export controls have left them without access to state-of-the-art chips. In the future, stringing together older chips with lower interconnect bandwidth might be their only way to compete at the frontier.
Ultimately, decentralized computing does not seem to undermine the case for compute governance. It is currently impossible to train state-of-the-art models via decentralized training, and even as research progresses it seems likely there will be a large efficiency penalty. And if it does become viable to train state-of-the-art models this way? These processes would involve coordinating a large number of actors or making large capital outlays for older chips and other networking hardware. If the world has implemented significant compute governance, it would probably be possible to detect these other methods using standard intelligence-gathering.
Evaluations and Standards
So far, I’ve described a regulatory regime without discussing what purpose it might serve. I have used “building safe models” as a placeholder, but to be clear: Compute governance is compatible with a wide range of goals. A compute governance regime is only as good as the standards it enforces — and determining those standards is a significant challenge in itself.
The public and the government need to decide what standards and evaluations they want AI models to meet. Researchers and independent organizations have started to address the technical problems this entails, and the public debate around values and policy objectives is just beginning.
Likely, these standards will — and, in my opinion, should — focus on whether AI models can cause harm in the real world, from helping terrorists build weapons or enabling cyberattacks to operating in ways that are difficult for humans to control.
Unfortunately, nobody knows how to train a model that will consistently refuse to take harmful actions. There are techniques that let a model learn from human feedback whether its responses are helpful or harmful, but these are currently imperfect and unreliable.
Right now, we can address this problem by testing whether models are capable of doing something dangerous. For example, standards could try to catch new, dangerous capabilities by requiring that each new model not exceed a prespecified size increase from the last model that was verified to be safe. This would help because larger models often contain new and surprising capabilities, and more careful scaling increases the chance that we can catch them while they are more manageable. Another potential standard is to require that AI companies and data centers both follow best practices for cybersecurity, so that a model will not be stolen and misused by criminals or rogue states.
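The size-increase rule, at least, is simple to state precisely. The check below is an illustrative version: the 4x cap on compute growth between verified models is an arbitrary placeholder, not a number anyone has proposed here.

```python
# Illustrative version of the scaling rule described above: a new run may not
# exceed a prespecified multiple of the compute used for the last model that
# was verified to be safe. The 4x cap is an arbitrary placeholder.
MAX_SCALEUP_FACTOR = 4.0


def scaleup_allowed(proposed_compute_flop: float,
                    last_verified_compute_flop: float) -> bool:
    """Is the proposed run within the allowed size increase?"""
    return proposed_compute_flop <= MAX_SCALEUP_FACTOR * last_verified_compute_flop


print(scaleup_allowed(3e25, 1e25))   # True: within the allowed jump
print(scaleup_allowed(1e26, 1e25))   # False: too big a jump; requires further review
```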
One organization working to develop these standards is ARC Evals — a project of the Alignment Research Center, a nonprofit focused on developing safe AI systems. Their early work has focused on two challenges: developing tests and standards that reliably capture what models are capable of, and building the processes and infrastructure to evaluate models for compliance. Their first standard, nicknamed “Survive and Spread,” asks whether an AI model is able to self-replicate and acquire resources — and possibly therefore elude human control. We can imagine other standards that focus on concrete harms: for example, whether AI systems are capable of manipulating and persuading humans to achieve their goals.
While this work is promising, our ability to evaluate advanced AIs is still in its infancy. Considerable work needs to be done to develop reliable and effective evaluations and standards. There are also open questions about who should produce and enact them — a government body like the National Institute of Standards and Technology, independent third-party nongovernmental entities, or some combination.
Compute governance is more of a vision than a template we can roll out immediately. But it is one of the most promising levers for governing the development and deployment of AI systems. More research needs to be done to work out its open technical problems. At the same time, there needs to be a parallel effort to create the standards that compute governance will enforce.