Disparate Impact
The biggest moral issues are often the most ignored. Which raises the question: why are ethicists so bad at their jobs? I won’t fault them for failing to convince the public about what matters most. But I will fault them for failing to convince themselves. The biggest issues go ignored by lay people and professional ethicists alike.
When I was in high school, I thought climate change was the biggest moral issue. When I was in college, I thought it was factory farming. Then I slowly learned that there’s probably way more suffering among wild animals, and I came to think wild animal suffering (WAS) was the biggest moral issue.
These issues dwarf the prosaic moral issues that obsess lay people and professional ethicists alike. You know what I’m talking about. Income inequality, racial/gender injustice, abortion, taxation, genetic selection, lying to your kids about Santa. Etc.
Imagine whole university departments committing themselves to “centering WAS in all that we do.” Imagine painting over “End Racism” in NFL endzones with “End WAS.”
Imagine David Foster Wallace exhorting us to consider the beetle rather than the lobster. Ok, I’m getting carried away.
The Big One
But now I think there’s an issue that dwarfs all of these. Which is really saying something. Did you guess it? Probably not. Probably you’ve never even heard of it, because your head’s in the sand like everyone else’s.
It’s S-risks.
Or “suffering-risks.” That is, the risks of cosmically significant amounts of suffering: amounts orders of magnitude greater than all the suffering that has ever existed on Earth. Yes, far more than all the WAS over the past 4 billion years. You can’t fathom the amount of suffering at risk here, so don’t try. You can hardly appreciate the scale of present-day horrors like factory farming or WAS.
On the plausible assumption that suffering is bad, s-risks are really really really scary. They should terrify you like Shelob terrified me when I was a wee beetle.
S-risks concern the far future. Longtermism, the view that influencing the far future is a moral priority (maybe even our highest one), has gained momentum in recent years. But the dominant focus has been on existential risks (x-risks), especially the risk of human extinction. That focus is puzzling if we actually take suffering seriously. As most secular ethicists would acknowledge, there are fates worse than death.
Far worse! Do a deep dive into s-risks and you might never be the same. Get started at the link above, which routes to the incredibly controversial Center for Reducing Suffering, or check out the “deep research” report on s-risks that ChatGPT did for me.
You might be inclined to dismiss s-risks on the grounds that they are totally speculative or intangible, unlike the immediate, visible tragedies of poverty or present-day animal suffering. But that would be pretty dumb. You can’t honestly play the moral game without relying on models, projections, and inference. Climate change, for instance, is a big moral issue, yet most of its projected harm hasn’t arrived and its worst impacts are still basically invisible. All moral game-playing relies on this kind of inference. Even if we wanted to do the most harm rather than the most good, it would be idiotic not to make projections and inferences and stuff.
Nor are s-risks totally speculative. Somewhat speculative, sure. They haven’t materialized yet—but neither has (the brunt of) climate change.
We have ample evidence that suffering can arise and persist as a by-product of complex systems (factory farming, competition, pollution) without anyone explicitly intending it. History teaches that suffering doesn’t need villains, just perverse incentives. In the future, there’s a very good chance there will be way more beings and way more powerful technology: more nukes and bioweapons, but also (perhaps) tons of digital minds who can suffer, and misaligned AGI to make them suffer. That means way higher stakes.
And those are just incidental s-risks. There are also natural ones, where (as with WAS) suffering just happens because “shit happens.” If humans spread the seed of life around the galaxy, we might repeat the history of evolution on Earth many times over. That may sound quaint, but it’s mostly not. There might also be alien life, and a lot of alien suffering. We can’t rule that out; after all, we haven’t been looking for aliens all that long.
Then there are agential s-risks. These involve agents (organic or inorganic) deliberately causing suffering because they are sadists or retributivists. Fun.
You’ve got to be more Hegelian than the Sage of Jena himself not to worry about this stuff. Deep down you know it could all go wrong, horribly wrong. Tobias Baumann (PDF of his book on s-risks here) estimates the probability of an s-risk actualizing at no lower than 1 in 1,000. That’s not a negligible chance you can just write off. We’re not talking one in a billion.
But I think Baumann’s number is extremely conservative. My lower bound is around 1/100. And my estimate of the chance itself is around 10%. That may sound crazy, but it’s not. Because there are so many ways to lose (hence the plural s-risks) and because there’s plenty of time to lose (the future might be really really long—and populous).
I punched the numbers into my EV calculator, and it overheated. Bad omen.
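If you want to overheat your own calculator, here’s a minimal sketch of the arithmetic. Every number in it is an illustrative placeholder: the probability is my 10% guess, and the stakes multiplier is made up, not an actual estimate of future suffering.

```python
# Toy expected-value sketch for s-risks. Every number here is an
# illustrative placeholder, not a serious estimate.

p_s_risk = 0.10              # my guessed probability that some s-risk materializes
baumann_floor = 1e-3         # Baumann's stated lower bound of 1 in 1,000

# Suppose the suffering at stake is N "Earth-histories" of suffering,
# i.e. N times all the suffering in Earth's history so far (pick your own N).
suffering_multiplier = 1e9   # hypothetical: a billion Earth-histories

expected_badness = p_s_risk * suffering_multiplier
print(f"Expected badness: {expected_badness:,.0f} Earth-histories of suffering")

# Even at Baumann's conservative floor, the expected badness stays enormous:
print(f"At the 1-in-1,000 floor: {baumann_floor * suffering_multiplier:,.0f}")
```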
How to Get Cosmically Rich
As Bryan Caplan would say, bet on it.
If I wanted to make a cosmically significant amount of money, I would definitely bet on s-risks being realized. You should too. Here’s what you do: find all the stubborn longtermist skeptics and exploit them for cash.
Take Greg the Idiot, for instance, who insists it’ll never happen and offers a gazillion to one odds against s-risks materializing. Bet a third of your life savings, at least. Better yet, pool your richest friends and do it together. This sets you up to win big. Really big.
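To spell out why this is (tongue in cheek) such a juicy trade, here’s a toy expected-value sketch of the bet. The odds, stake, and probability are all hypothetical stand-ins; “Greg” and his gazillion-to-one line are obviously not real market quotes.

```python
# Toy sketch of the bet against a skeptic. The odds, stake, and probability
# are all hypothetical; "a gazillion to one" is stood in by 10^9 to 1.

p_win = 0.10          # my guessed chance that s-risks materialize
odds_against = 1e9    # Greg's (hypothetical) offered odds against
stake = 50_000        # a third of some hypothetical life savings, in dollars

# If s-risks materialize, Greg pays stake * odds_against; otherwise you lose the stake.
expected_value = p_win * (stake * odds_against) - (1 - p_win) * stake
print(f"Expected value of the bet: ${expected_value:,.0f}")
# Roughly five trillion dollars in expectation. Collecting is another matter.
```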
It’s totally normal to feel uncertain about such a big bet. Here are some thoughts you should try having—they always restore my confidence.
There are so many ways for things to go wrong and way fewer ways for things to go right. And they have to go very right!
There’s been so much suffering in the past. If I’m an extrapolating man, I’ll focus on all that suffering over billions of years rather than on the recent alleviation of suffering thanks to technology. That alleviation has really only happened in the past 10k years, and especially in the past 300 years. And it’s been almost entirely reserved for humans, who make up a tiny fraction of all organisms.
What can happen will happen. Evil will triumph; it’s just a matter of when and by how much.
It really would not surprise me if I found out one day that s-risks did materialize. If 3 billion years from now, there was a ton of suffering permeating the galaxy, I wouldn’t be like Oh God how did we let that happen, wow, wow, unbelievable! Instead I’d be like, Well yeah, I totally see how that happened. What did you expect would happen when you started playing with fancy tools like AI and rapid growth and space colonization? That everything would be hunky dory?
There’s so much time! Even if I bet on s-risks and they don’t materialize for 10k years, I’ve still got billions of years to go (maybe even way longer!). It’s like buying a lottery ticket every year: eventually I’ll win, and when I do, I’ll win bigly. (A toy sketch of that arithmetic follows these thoughts.)
These are my thoughts. If you don’t like them, I have others.
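Here’s that lottery-ticket arithmetic, purely illustrative. The per-year probability is a made-up placeholder, and it assumes each year’s chance is independent, which is itself a big assumption.

```python
# Toy "lottery ticket" arithmetic: the chance of winning at least once over
# many independent tries. The per-year probability is a made-up placeholder.

p_per_year = 1e-6        # hypothetical chance the bet pays off in any given year
years = 1_000_000_000    # billions of years left to keep playing

p_at_least_one_win = 1 - (1 - p_per_year) ** years
print(f"Chance of winning at least once: {p_at_least_one_win:.6f}")
# With enough time, even a tiny per-year chance pushes this toward 1.
```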
What Should We Do About S-Risks?
I don’t know, I was hoping you might.
In the meantime, why not get rich?