> Effective Altruists evaluate charities with very utilitarian metrics: their impact has to be quantifiable, and it has to strive for volume over depth. If more people get a minimal improvement, that’s far better than fewer people getting a deeper, more holistic benefit.
Schiller seems to imply that the benefits QALYs (quality-adjusted life years) don’t track are somehow higher-quality than the ones they do track (by virtue of being “deeper” or “more holistic”), and you do a nice job challenging that position.
Something else that occurs to me: that second sentence appears to say that, if we use QALY calculations to compare a small benefit to many against a big benefit to few, we’ll always find that the former is better. But that’s obviously wrong—it’s totally possible for QALY calculations to work out either way. So maybe Schiller is saying that QALY calculations are in practice somehow biased to favor quantity over quality? But if so, she doesn’t give any supporting argument. Even if her other point is true (that we’re underestimating the importance of the benefits that QALYs don’t track), it’s not clear why that would cause such a bias.
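To make the "could go either way" point concrete, here's a toy comparison with invented numbers (the `total_qalys` function below is just breadth × depth for illustration, not any official QALY methodology):

```python
def total_qalys(people_helped, qaly_gain_per_person):
    """Toy aggregate: total benefit = number helped x QALY gain per person."""
    return people_helped * qaly_gain_per_person

broad = total_qalys(10_000, 0.1)  # many people, small benefit each: 1000 QALYs
deep = total_qalys(100, 5)        # few people, large benefit each: 500 QALYs

# With these numbers, the broad intervention wins...
assert broad > deep

# ...but shrink the per-person gain and the deep intervention wins instead.
assert total_qalys(10_000, 0.01) < deep
```

Nothing in the arithmetic favors breadth as such; the comparison turns entirely on the magnitudes involved.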
It’s interesting that you bring up the repugnant conclusion, because people sometimes read it as saying that we should aim for worlds where many are slightly happy over worlds where few are very happy. But that’s a misreading: all it says is that arbitrarily-good high-population, low-welfare worlds exist. It’s also true that (almost-?) arbitrarily-good low-population, high-welfare worlds exist, and the repugnant conclusion doesn’t tell us anything about which to aim for in practice. So I wonder if Schiller is making a similar mistake here.
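For what it's worth, the symmetry falls right out of the arithmetic if you use the crude total-utilitarian toy model of total welfare = population × average welfare (my invented numbers, not anything from Parfit or Schiller): any given total can be reached from either direction.

```python
def total_welfare(population, avg_welfare):
    # Crude total-utilitarian aggregate: people x per-person welfare.
    return population * avg_welfare

# A huge population of barely-happy people...
z_world = total_welfare(1_000_000, 1)
# ...and a tiny population of very happy people...
a_world = total_welfare(1_000, 1_000)
# ...can come out exactly equal, so the total alone can't tell us which to aim for.
assert z_world == a_world == 1_000_000
```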
Great points and 100p agreed!

In this piece, I was most invested in playing defense against a particularly unfair charge (and, frankly, one that I found hurtful): that EAs don't *really* care about people or value their humanity. If you want to go on offense, yeah... I think there are probably a lot of inconsistencies in her position...
That said, I still haven't read her book, and my general feeling is: The more people thinking/discussing these issues, the better!