A man is fumbling around on the ground on a street corner. He seems distraught, and possibly drunk—muttering to himself and frequently patting the grass around the base of a lamp post.
A police officer shows up.
“Everything all right here?” asks the cop.
“No, officer,” the man slurs. “I can’t find my keys.”
The police officer is skeptical, but he plays along and joins the man on the ground. For fifteen minutes or so they grope around in the grass without success.
“Are you sure this is where you lost your keys?” the cop finally asks.
“Here?” says the man. “God no. I lost them at the park.”
“What the fuck,” says the cop, pulling himself to his feet. “Why did we just waste all this time crawling around on the ground, if your keys are at the park?”
The man observes the police officer coolly. “Because,” he says, “this is where the light is.”
What is Streetlight Effect?
This allegory is a (dramatized) version of a common failure in problem-solving, a phenomenon known as streetlight effect. Simply put, streetlight effect is the tendency to search for answers where the looking is easiest, rather than where they’re most likely to be found.
The “drunkard’s search” story is amusing because the fallacy it highlights is obvious. Often, though, streetlight effect creates issues because we don’t realize we’ve fallen victim to it: If we want to know our compatibility with someone, we pluck daisy petals or consult astrological charts. If we’re waiting for an important communication from someone, we might constantly check our email, or stalk them on social media for any sign of what they’re up to. In all these examples, the behavior provides an illusion of control, but we don’t actually get the information we crave any faster.
The bias is particularly pernicious with metrics that appear to give a high level of precision. As mathematician Hannah Fry notes in her piece “What Data Can’t Do”: “when faced with a difficult question, we have a habit of swapping it for an easy one, often without noticing that we’ve done so.” (This quote is in fact a paraphrase from Daniel Kahneman’s Thinking, Fast and Slow.)
And once you’re aware of streetlight effect, you start seeing it everywhere—I’m frankly a bit obsessed. A few examples that come to mind:
Mistakenly thinking that superstitions/worrying/OCD behavior (i.e., where the light is) will fix a problem.
Mistakenly believing a single major life adjustment/accomplishment (i.e., light) is the secret to happiness…
…or that low-hanging fruit alone is the secret to happiness 😏.
Standardized tests = how “smart” someone is.
BMI = how “fit” someone is.
Crime rate = how “safe” society is.
Polling = how “popular” a candidate is.
Number of subscribers = how “good” a blog is (hahaha amirite guys??).
Some of these examples, like poll numbers, are misleading because there is a systematic error in what the proverbial light illuminates. This, famously, is what happened in the 2016 presidential election: We thought we were looking at a true reflection of the country’s political preferences, but the polls’ samples weren’t representative and failed to capture the intensity of Trump’s right-wing populist appeal.
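To make that concrete, here’s a toy simulation (made-up numbers, not a model of any real poll) of what a systematic coverage error does to an estimate. The only assumption baked in is that one candidate’s supporters are a bit less likely to answer the pollster’s call:

```python
import random

random.seed(0)

# Toy population: 1,000,000 voters, 52% of whom actually support candidate A.
population = [1] * 520_000 + [0] * 480_000

# Unbiased poll: every voter is equally likely to end up in the sample.
unbiased = random.sample(population, 1_000)

# Biased poll: A's supporters answer the phone only 80% as often as everyone
# else, so the sampling frame systematically under-covers them.
def biased_poll(pop, n):
    sample = []
    while len(sample) < n:
        voter = random.choice(pop)
        response_rate = 0.8 if voter == 1 else 1.0
        if random.random() < response_rate:
            sample.append(voter)
    return sample

biased = biased_poll(population, 1_000)

print("True support for A:  52.0%")
print(f"Unbiased estimate:   {100 * sum(unbiased) / len(unbiased):.1f}%")
print(f"Biased estimate:     {100 * sum(biased) / len(biased):.1f}%")
```

The biased poll typically lands in the mid-40s, flipping the apparent winner, and taking a bigger sample doesn’t rescue it: the error is systematic, not random, so more data just gives you a more precise wrong answer.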
Other metrics are correlated with things we care about, but are sometimes mistaken for the thing itself. Take standardized tests: Is it true that on average, kids who do better on the SAT are better equipped to succeed in college? Probably! (Actually I’ll go further: on average, almost definitely—but other factors, most notably high school GPA, are much stronger predictors.)
But of course there are also lots of kids with low SAT scores who could thrive in college; some combination of bad luck and inequality prevented this reality from showing up on the test. Similarly, there are a handful of kids who did quite well on the SAT but are really going to struggle in college. To be clear, I’m not taking a stance here on whether colleges should or shouldn’t use standardized tests as part of their admissions process—just observing that the SAT is an imperfect proxy for the information they truly hope to find out.
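Here’s a rough sketch of that dynamic with a completely made-up toy model (the numbers are invented for illustration, not estimates of how predictive the SAT actually is): the score is genuinely correlated with the thing we care about, and it still misclassifies plenty of people at the margins.

```python
import random

random.seed(1)

# Toy model (all numbers made up): each student has a latent "college readiness"
# we can't observe directly, plus a test score that tracks it only noisily.
students = []
for _ in range(100_000):
    readiness = random.gauss(0, 1)                     # the thing we care about
    score = 0.6 * readiness + 0.8 * random.gauss(0, 1) # the thing the light shines on
    students.append((readiness, score))

# Call "would thrive" the top half of readiness, and "admit" the top half of scores.
readiness_median = sorted(r for r, _ in students)[len(students) // 2]
score_median = sorted(s for _, s in students)[len(students) // 2]

would_thrive_low_score = sum(1 for r, s in students if r > readiness_median and s <= score_median)
high_score_struggles = sum(1 for r, s in students if r <= readiness_median and s > score_median)

print(f"Would thrive, but scored below the cutoff: {would_thrive_low_score:,}")
print(f"Scored above the cutoff, but would struggle: {high_score_struggles:,}")
# The score correlates with readiness, yet thousands of students end up on the
# wrong side of the cutoff in each direction.
```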
Then there are the metrics that, even if accurate, fail to account for other crucial information. Let’s briefly grant that you can improve violent crime statistics by locking up a ton of people for minor offenses—just as you could probably “reduce” violent crime by preemptively handcuffing the entire U.S. population, or only allowing people to leave their houses for one hour per day; a low incidence of violence alone, in other words, doesn’t mean mass incarceration is a policy success. More likely, it means that a really difficult question (How do we permanently and ethically alter the psychosocial conditions that lead to violence?) has been swapped out in favor of a slightly easier one (How do we get the crime rate down?), whose answer elides certain tradeoffs and creates shitty incentives.
Sadly, reductive summary statistics may be the best we’ve got in a lot of cases: Surely the ability to make apples-to-apples comparisons across time and individuals is usually better than just ~vibes~ and qualitative feedback. Surely it’s often better to have an imperfect answer to a question than no answer at all.
The challenge is to use quantitative metrics while still remembering how much they leave out, and how often they can mislead. This is a tricky psychological puzzle to solve. It’s hard, for example, not to feel the weather app has wronged you when you’re soaking wet and the forecast said 15% chance of rain.
But of course the forecast wasn’t wrong: It said 15% chance, not 0. You’re wrong for believing true enlightenment lay in the place the sun just happened to be shining.
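(If you want to see what a 15% forecast actually promises, here’s a tiny sketch under the idealized assumption of a perfectly calibrated forecaster: over many such days, you get soaked roughly 15% of the time, and the forecast wasn’t wrong on a single one of them.)

```python
import random

random.seed(2)

# Idealized assumption: the forecaster is perfectly calibrated, so on every day
# it says "15% chance of rain," it really does rain with probability 0.15.
days = 10_000
soaked = sum(random.random() < 0.15 for _ in range(days))

print(f"Days forecast at 15%: {days:,}")
print(f"Days you got soaked:  {soaked:,}")  # roughly 1,500
```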