I don’t know what it is about artificial intelligence (AI) that makes us suddenly so bad at communicating coherently. The result is a lot of talking past each other. Hopefully this piece cuts through some of that confusion, especially for anyone who gets the feeling they’re spectating something they don’t understand, but feels like they should.

Let’s start here: when I say intelligence I mean the ability to make choices, based on information from the outside world, to achieve a goal. More intelligence means being able to make better choices. Narrow intelligence means having intelligence for a specific task, like playing chess or solving math equations or making lunch. General intelligence means being able to adapt that ability to a variety of situations, including ones you’ve never seen before. A calculator is a narrow intelligence; a human is a general intelligence.

AI is the field of computer science that attempts to make computer programs with intelligence. We already have narrow superintelligences for chess, basic math, and some video games. AI is a powerful, and potentially dangerous, tool. That’s where the AI safety debate comes in.

There are two main AI safety scenarios we’re concerned about:

  1. AI becomes a weapon one person (or a group of people) uses to harm another.

  2. AI becomes an intelligence far surpassing human capabilities and bad things happen as a result (more on this later).

One of the most common causes of confusion when talking about AI safety is mixing these two things up.


The first one is a relatively down-to-earth argument:

  • AI is technology
  • It’s getting increasingly powerful
  • All technology can be used for harm if wielded in a particular way
  • As a result, we should build safeguards that make it much harder to use for harm, protections against its negative effects, or both.

There are multiple schools of thought here. “Harm” is an extremely broad word (anything from mildly uncomfortable situations all the way to enormous amounts of suffering). Despite this breadth, this scenario generally doesn’t contribute a lot to the confusion. The part that we seem to struggle more with is the second case.


Here’s the core argument of the second AI safety scenario:

  • It might be possible for there to be an entity that’s far, far more intelligent than all humans combined. After all, there’s no obvious reason why decision-making ability has to be limited to human capabilities. We’ll call this a “superintelligence”.
  • A superintelligence doesn’t need to be “conscious” or “sentient”, have “free will”, or satisfy any other poorly-defined philosophical term to be a threat to all of humanity. The only thing that’s required is for its goals to be not perfectly aligned with human desires.

Here’s one possible way this could materialize:

  • We train an AI to discover novel proteins/RNA strands useful for preventing/treating/curing disease. It rapidly becomes superintelligent.
  • We give the AI more and more capabilities. It has free rein to send requests to labs, which synthesize the new organic molecules it generates. It works great.
  • One day the AI proposes a new RNA sequence. The lab happily synthesizes the RNA. It turns out this unnatural RNA encodes an enormously contagious virus with a long incubation period and an incredibly high fatality rate. One year later, 8 billion humans fall over dead at the same time. Game over.

If this sounds far-fetched, consider that “we synthesized an unknown molecule proposed by an artificial intelligence” has already happened.

Scenarios featuring superintelligence are usually taken to extremes: either things go extremely well for humanity, with a highly capable technology in its grasp, or catastrophe strikes.


There are generally two camps when it comes to the AI debate: accelerationism and decelerationism.

The basic decelerationist argument is that we should slow down improvements in AI capability in order to give safety research more time to figure out how to prevent these catastrophes from happening. The crown jewel of AI safety is currently interpretability research, which is meant to improve our understanding of what’s going on in the inscrutable mathematics of deep neural networks. By understanding how things work on the inside, we can detect if something has gone wrong long before it becomes a problem.
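To make “looking inside” slightly more concrete, here’s a minimal sketch of the kind of raw signal interpretability work starts from: recording a layer’s activations so a human can ask which internal features fired on a given input. This assumes PyTorch, and the tiny model and hook here are invented purely for illustration, not anyone’s actual research method.

    # A toy illustration: capture the hidden activations of a small network
    # with a forward hook, then look at them directly.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(4, 8),
        nn.ReLU(),
        nn.Linear(8, 2),
    )

    captured = {}

    def save_activation(module, inputs, output):
        # Stash the layer's output so we can inspect it after the forward pass.
        captured["hidden"] = output.detach()

    # Hook the hidden layer (index 1 is the ReLU in the Sequential above).
    model[1].register_forward_hook(save_activation)

    x = torch.randn(1, 4)
    model(x)
    print(captured["hidden"])  # which hidden units fired, and how strongly

Real interpretability research goes far beyond dumping activations, but the goal is the same: turn a network’s opaque internals into something a human can inspect and reason about before anything goes wrong.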

The basic accelerationist argument is that AI is a powerful tool that could genuinely help solve many of our problems, and that the potential benefit far outweighs the potential harm of having highly capable systems. There is no standing still here: every minute we spend without AI is another minute of preventable, unnecessary suffering. Accelerationists also argue that AI safety research looks a lot like a make-work program: a field of study whose primary purpose is to funnel resources toward AI safety researchers, with no concrete results to show for it.

These beliefs also sit on a fairly broad spectrum. Accelerationist positions range from the moderate, such as “AI seems good for the future”, all the way up to “disregard all safety measures; an AI arms race will mean all the danger cancels out”; decelerationist positions range from mildly supporting more caution in how AI is used all the way up to “there should be one world government controlling all major sources of computational power”.


A brief aside: what do you think is the purpose of brakes on a car?

No, seriously, take a moment to think it through.

If you’ve read this far, there’s a chance you forgot about the title, where I claim that brakes are for going fast, not slow. On the surface this seems plainly wrong, because brakes are designed specifically to slow down. They allow us to navigate roads where high speeds are dangerous and come to a stop before crashing into things.

Consider, however: how fast would you drive in a world where cars existed but we’d never invented brakes?

You probably wouldn’t go very fast. In fact, you might not feel comfortable moving faster than walking speed, because without brakes you have no way to stop your enormous hunk of metal from hitting whatever’s in front of it. Cars would be useless in a world without brakes. In this sense, brakes are what allow us to confidently drive at high speed, because we know we can safely slow down any time we want. Safety mechanisms, properly designed, are meant to let us go fast.

One major reason many AI models are increasingly walled off these days is that the researchers are worried about how the models will be used. The whole acceleration/deceleration debate fundamentally misses the point. Accelerating AI safety is accelerating AI capabilities.