Not long ago, I rewatched the movie Free Solo, a documentary in which rock climber Alex Honnold scales a 3,000-foot sheer cliff face without a rope.
At one point in the film, Honnold is asked to justify how the benefits of free soloing can possibly be worth the fatal risk of falling. “I like to differentiate between risk and consequence,” says Honnold. “You know, when I’m doing these hard free solos, I like to think that the risk—the chance of me falling off—is quite low, even though the consequence is extremely high.”
It’s a simple point, but one worth repeating: The probability an event will occur is separate from the impact of said event.
As many have pointed out, this distinction is crucial when discussing AI safety. You can believe there is a <1% chance of AI destroying humanity and still be highly motivated to safeguard against that <1% outcome. In fact, this is more or less my view (although admittedly my p(doom) is probably a little higher than one percent). It’s also a view in line with statements made by the ousted-now-unousted CEO of OpenAI, Sam Altman: he claims to think AI will be the best thing ever, but he concedes that this won’t happen automatically, and that the worst-case scenarios are so bad that its creators should proceed with caution.
In this light, it’s easy to see how terms like “techno-optimist” or “doomer” are reductive. If I had to bet on whether we’re in the AI = good world or the AI = bad one, I’d put my chips on the former. But it doesn’t necessarily follow from this bet that we should be accelerating the pace of AI development. The low-chance/disastrous-consequence outcome may be so bad that it swamps the high-chance/good-consequence one (i.e., the outcome may be negative in expectation). This nuance often seems to be lost in mainstream media coverage.
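To make “negative in expectation” concrete, here is a minimal sketch. Every number in it is hypothetical, chosen only to show how a small probability multiplied by an enormous loss can swamp a near-certain gain:

```python
# Toy expected-value calculation. All probabilities and payoffs are
# made-up illustrative numbers, not anyone's actual estimates.
p_good = 0.99        # hypothetical chance AI goes well
v_good = 100         # payoff if it does (arbitrary utility units)
p_bad = 0.01         # hypothetical chance of catastrophe
v_bad = -1_000_000   # payoff if catastrophe: vastly larger than the upside

ev = p_good * v_good + p_bad * v_bad
print(ev)  # -9901.0: negative in expectation, despite 99% odds of a good outcome
```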
The risk-vs.-consequence framework also offers useful insight into the mindsets of different factions. Leaving aside the many unresolved empirical questions in the AI safety debate, it seems clear that we have deep-seated psychological tendencies around risk tolerance and loss aversion. Would you spin a wheel that gave you a 99% chance of winning a million dollars and a 1% chance of death? What about a 99.9% chance of ten trillion dollars? Reasonable people might make different bets depending on their age, their economic needs, and the other people in their lives.
By the same token: Do, say, unbounded leisure time, a cure for cancer, and a personal tutor for everyone on earth (on the techno-optimist side) justify a 5% chance we’ll be ruled by robot overlords and a 0.05% chance they’ll wipe us out of existence? Hell if I know.
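Formally, the wheel question is the same expected-value computation with the stakes changed; the disagreement is over what number to plug in for the catastrophic branch. Here is a sketch (the helper spin_ev and all dollar figures are hypothetical), with that number as an explicit parameter:

```python
def spin_ev(p_win: float, prize: float, cost_if_lose: float) -> float:
    """Expected value of one spin of the wheel; all inputs are hypothetical."""
    p_lose = 1.0 - p_win
    return p_win * prize - p_lose * cost_if_lose

# Pricing death at $10M (roughly the value-of-a-statistical-life figures
# U.S. regulators use), the 99%-for-a-million wheel looks good in expectation...
print(spin_ev(0.99, 1_000_000, 10_000_000))    # ~890000.0: spin it
# ...while someone who prices that loss far higher gets the opposite sign.
print(spin_ev(0.99, 1_000_000, 200_000_000))   # ~-1010000.0: walk away
```

The arithmetic is trivial; the disagreement lives entirely in the numbers you feed it.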
To be sure, there are many AI skeptics who would say that there is a high probability of AI doom—not merely a high consequence. And with respect to recent events, it’s possible that Altman is simply paying lip service to AI safety while behaving recklessly in practice. At this point, too many details remain unknown.
What we do know—and see time and time again—is that access to the same information is no guarantee people will agree on the appropriate course of action. Where some see hand-holds up a wall, others see a precipice.
I asked ChatGPT to summarize my own sentiments on this topic. Here are two points that came back:
1. Humans often disproportionately fixate on low-probability, high-consequence events.
2. Humans are predisposed to prioritize immediate, tangible threats over abstract, distant risks, often swayed by recent exposures.
In my opinion, if at some point in the future AI does something with “bad consequences,” that will sway our thinking.
But for now, I’m sticking with your more optimistic view (i.e., that the benefits far outweigh the risks).
Having said that, we surely need good guardrails in place to protect us from bad actors (or from AI on its own) becoming like HAL in 2001: A Space Odyssey.
Russian roulette is a dark analogy. A one-in-six risk is very high. IMHO, the risk of “AI bad” is way lower.
My optimism stems in part from knowing that the world has contained the risk of nukes for over 70 years. I think there are enough people worrying about AI risks that we’ll be okay.