Discussion about this post

Stephen Seckler:

I asked ChatGPT to summarize my own sentiments on this topic. Here are two points that came back:

1. Humans often disproportionately fixate on low-probability, high-consequence events.

2. Humans are predisposed to prioritize immediate, tangible threats over abstract, distant risks, often swayed by recent exposures.

In my opinion, if at some point in the future, AI does something with "bad consequences", that will sway our thinking.

But for now, I'm sticking with your more optimistic view (i.e. that the benefits far outweigh the risks).

Having said that, we surely need good guardrails in place to protect us from bad actors (or from AI, on its own, becoming like HAL in 2001: A Space Odyssey).

Stephen Seckler:

Russian roulette is a dark analogy. A one in six risk is very high. IMHO, the risk of “AI bad” is way lower.

My optimism stems in part from knowing that the world has contained the risk of nukes for over 70 years. I think there are enough people worrying about AI risks that we’ll be okay.
