Why Worry?

The Editors

In January, when we started commissioning the first essays for this issue, the public conversation about risks from artificial intelligence looked very different. We thought our biggest task would be to convince anyone outside of a small community of professional worriers to take the problem seriously. In the past couple of months there has been, for lack of a better term, a vibe shift. The New Yorker and the Financial Times have published essays arguing that AI represents a catastrophic threat to humanity. AI risk got a cover story in Time. AI pioneers and Turing Award winners Geoffrey Hinton and Yoshua Bengio have publicly stated their concerns. UK Prime Minister Rishi Sunak called AI an “existential risk.” Our moms are scared.

But not everybody is on board the one-way train to doomsville. The new wave of visibility for AI risk led to a predictable backlash from people who took one look at this whole mess of issues and quite reasonably concluded that those of us worried about Skynet are all insane. And while the language they use does tend towards the hyperbolic (“hysteria,” “alarmism,” “science fiction”), the concerns these critics raise make sense: Is this just more groundless Silicon Valley hype? How can we trust people whose careers and livelihoods depend on investing in AI — or investing in protecting us from AI — not to exaggerate what the technology is capable of? What about the opportunity costs? How is a computer program supposed to take over anything in the real, physical world? And is this just a distraction from the harms AI could cause right now?

We’re certainly not going to tell anyone to take AI risk seriously because some computer science professors and tech CEOs say so, nor will we pretend that slowing down AI development would be costless. We agree that current LLMs don’t live up to the hype, and we have yet to be sold on any particular story of certain doom. That said: we’re scared too. The full case for why we should be afraid of creating entities more intelligent than ourselves has been made at length by many different experts working from many different sets of assumptions. We won't attempt to replicate their work here, but we can try to explain what keeps us up at night. 

For the past decade, ever since the advent of deep learning, one pattern has held: the more computing power used to train AIs, the more capable they become. There is no reason to believe that human intelligence represents a natural limit on what artificial minds are capable of, or that this progress — so lucrative to so many — will necessarily stop. Humans, as a general rule, aren't great at predicting technology more than a few years out. Our most reliable technique is still to simply extrapolate from current trends, and those trends predict that AIs which match or exceed us in cognitive power will be built within our lifetimes. Of course, trends sometimes break. We might enter another AI winter. We might succeed at building AIs that surpass humans at all cognitive tasks, but have no goals or desires of their own. Politicians and CEOs might make sensible decisions about the degree and kind of decision-making they’re willing to delegate to AI systems — but we don’t want to count on it.

So while the advent of artificial intelligences willing and able to wrest control from humanity isn't certain — what is? — it represents a real and plausible threat. Taking this threat seriously doesn’t mean uncritically accepting everything the heads of OpenAI, DeepMind, or Anthropic have to say on the subject. In fact, we’re skeptical of anyone who says they have the future of AI progress all figured out. Instead, we’d like to try to understand it for ourselves.

This issue of Asterisk can't answer every pressing question about AI (we tried), but it does attempt to step back and put some recent developments in a broader context. It might be a little more abstract than usual. It's certainly more speculative. But there's one thing it isn't: a distraction. We're worried about the impacts of AI on everything from privacy, inequality, and jobs to the end of life on earth. We think that a stable, democratic society will be necessary for navigating the changes we're pretty sure are coming — and we’re worried that AI will shake its foundations. We want our future to be technologically advanced, prosperous, peaceful, and free. We’d also strongly prefer that it contain humans who are able to make substantive decisions about their own lives. In the meantime, we'd like to figure out what’s happening in the present, and how we're going to get from here to there.

Published June 2023

Have something to say? Email us at letters@asteriskmag.com.
