I asked ChatGPT to summarize my own sentiments on this topic. Here are two points that came back:
1. Humans often disproportionately fixate on low-probability, high-consequence events.
2. Humans are predisposed to prioritize immediate, tangible threats over abstract, distant risks, often swayed by recent exposures.
In my opinion, if at some point in the future, AI does something with "bad consequences", that will sway our thinking.
But for now, I'm sticking with your more optimistic view (i.e. that the benefits far outweigh the risks).
Having said that, we surely need good guardrails in place to protect us from bad actors (or from AI, on its own, becoming like HAL in 2001: A Space Odyssey).
Agreed re: two points!
I don't necessarily think the benefits of AI outweigh the risks. I think we're more likely in the "AI-good" world than "AI-bad" world; but I'm agnostic as to whether AI is good in *expectation*.
(Analogy: in Russian roulette, a win is 5x more likely than a loss, but a loss is far worse than a win is good, so the expected value of playing is negative.)
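(To make that concrete with hypothetical payoffs of my choosing, not numbers anyone stated: if a win is worth +1 and a loss is worth -100, then EV = (5/6)(+1) + (1/6)(-100) = (5 - 100)/6 ≈ -15.8, which is negative even though a win is five times more likely.)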
In other words, while I agree with your first point, it can be an appropriate bias when the low-probability event has fatal consequences.
Russian roulette is a dark analogy. A one-in-six risk is very high. IMHO, the risk of “AI bad” is way lower.
My optimism stems in part from knowing that the world has contained the risk of nukes for over 70 years. I think there are enough people worrying about AI risks that we’ll be okay.