Consider the old cliché: *This is just a tool; it can be used for good or evil!*
On its face, this claim is misleading: Clearly, certain technologies[^1] lend themselves more readily to harm than help, and vice versa. While it’s true that guns sometimes do serve to defend people from terrorist attacks, it’s also true, obviously, that guns are much more efficient killing machines than butter knives, and that it would be very difficult to hurt anyone at all with a Band-Aid or a piece of tissue paper. All tools can be used for good or evil, but some are much more likely to be used for one purpose than the other.[^2]
So let’s ask a different but closely related question: Is the average tool more likely to be used for good or evil? Still, this is quite a broad and poorly defined query. (If there’s some innate quality of “tools” or “technology” writ large that makes them mostly conducive to help vs. harm, it’s hard to see what that quality is.) The best we can do with this puzzle, I think, is to whack around the answer with a big hammer of an idea: To determine how the average technology will be used, look to its typical user.
Of course, this “big hammer” is a Rorschach test—one that depends on your prior beliefs about human nature. But to me, it suggests cautious optimism.
I don’t have some killer argument for why the typical person is more likely to use the average tool for good than evil; just vibes. On the one hand, it’s well established that we humans are largely prosocial creatures, with an innate tendency toward groupishness, language acquisition, and caring for our young; on the other hand, we frequently act in self-interest (especially under conditions of threat or scarce resources) and are sometimes prone to lying, warmongering, and creating Ponzi schemes.[^3] To determine which of these dueling natures is more likely to govern typical tool use, I’d ask you to simply imagine what an early hominid might do with a rock, a stick, or a clump of moss; or, more vividly: What would you be inclined to do with a rock, a stick, or a clump of moss if you existed as an early hominid?[^4] I don’t know about anyone else, but while I can imagine, you know, bashing in the skull of an innocent hyena or stabbing a member of a rival tribe, I find it far more plausible that I’d use these objects, respectively, to crack open nuts, to build a fire for my family, and to cushion the weight of my huge, hairy cranium.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88ee6120-8ffe-45dc-9579-7dd61b4684ea_1600x991.jpeg)
All this to say: If you agree that the typical human is “good” (i.e., more or less nonviolent, cooperative, concerned about others’ wellbeing, etc.), then your prior or intuition about new technologies should probably also be that they’re net good.
(I don’t mean to suggest that people are great, just decent: good enough, and really quite unlikely to become sociopaths or fascist dictators. In the hands of these imperfect but well-intentioned creatures, most tools seem likely to help rather than hurt, at least in the long run, once we iron out the kinks. Long-term social and economic progress seems to mostly bear this out. In the absence of disconfirming evidence, then, techno-optimism ought to be the default stance, all else being equal.)
Are there exceptions? God, yes. Some tool-users really are, well, complete tools. And, again, some technologies have much more harmful affordances than others. Furthermore, it would seem that the downside of giving evil people access to destructive tools (e.g., a nuke) outweighs the upside of giving good people access to constructive ones (e.g., a new vaccine). So if a technology is created with obvious ill intent, if the consequences of misuse are too severe to risk, or if your domain expertise clearly supports the thesis that it’s net bad, then by all means: Let us know! Don’t let the big hammer of *People are good(ish)* blind you to the particulars of individually evil nails.
At the same time, it’s easy to reflexively push back against every new idea or invention when, really, we have no clue what its ultimate impact will be. This is understandable, given the enduring popularity of dystopian fiction and the enduring obnoxiousness of crypto enthusiasts. But I think knee-jerk techno-pessimism is a mistake. Major setbacks notwithstanding, I bet you’d ultimately say you prefer being alive today to living in the year 1350, or even 1980 for that matter. If so, this is almost certainly because there are social and material opportunities available to you now that would not have been available in the past. These (mostly) good things were made by, and for, (mostly) good people.
It’s a bit of a reductive framing, I know—a blunt instrument of analysis.
I still believe it’s one worth holding in your mental toolkit.
[^1]: In this post, I’m using “tool” and “technology” interchangeably. I also think they can be defined fairly loosely (i.e., therapy provides “mental tools” to help people function; religion is a “social technology”), although I admittedly tend to picture physical tools and digital technology, as I imagine readers do, too.

[^2]: This is a take I first heard articulated by Kevin Roose on the podcast *Hard Fork*.

[^3]: Since human nature is a malleable mishmash, some social systems surely promote more positive technology use than others; I’m agnostic as to what kind of society achieves this goal in practice. In theory, though, you could imagine it supporting a kind of capitalist logic, i.e., *Let’s try to align incentives such that self-interested behavior produces greater social value.*

[^4]: After all, our cognitive machinery has hardly changed in the past hundred thousand years.