Should altruists prioritize the far future?

Brian Tomasik’s essay Should Altruists Focus on Reducing Short-Term or Far-Future Suffering? contains many interesting arguments for and against far future dominance. I agree with most of Brian’s points as well as his overall assessment. This post serves as a supplement, and I recommend reading Brian’s essay first.

Clarifications and definitions

The “far future” refers to everything (much) more than 1000 years from now.1 Whether far future considerations should dominate our decisions is related to two different questions:

  • What fraction of expected suffering occurs in the far future? Note the difference between actual and expected suffering.
  • What fraction of variance2 in expected suffering in a “random intervention” is due to the far future? I define “random intervention” as “ask a randomly selected effective altruist to come up with a reasonable thing to do”. The fraction of variance is (roughly) the fraction of expected suffering times3 multipliers for tractability and neglectedness (cf. 80k’s framework); see the sketch after this list.
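
The distinction can be made concrete with a toy calculation. All the numbers below are pure assumptions for illustration, not estimates:

```python
# A minimal sketch of the framework above, with made-up illustrative
# numbers – not estimates. The "fraction of variance" for the far future
# is approximated as the fraction of expected suffering, adjusted by
# multipliers for tractability and neglectedness (cf. 80k's framework).

frac_expected_far = 0.92  # fraction of expected suffering in the far future
tractability_mult = 0.3   # assumed: far-future work is less tractable
neglectedness_mult = 2.0  # assumed: far-future work is more neglected

# Unnormalized importance weights; the short term serves as the baseline.
far = frac_expected_far * tractability_mult * neglectedness_mult
near = 1 - frac_expected_far

# Normalizing at the end sidesteps the point in footnote 3 that
# fractions don't simply multiply.
frac_variance_far = far / (far + near)
print(f"Fraction of variance from the far future: {frac_variance_far:.0%}")
```

With these (assumed) multipliers, a 92% fraction of expected suffering becomes an 87% fraction of variance – the two questions come apart, but not dramatically.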

The second question is more action-relevant because it implicitly considers tractability and neglectedness. Intractable suffering in the far future does not matter for our actions.

In the following, I will discuss the relative tractability and neglectedness of short-term helping and efforts to reduce far future suffering.

Tractability

Many people argue that reducing suffering in the far future is less tractable than reducing short-term suffering. We can introduce an “entropy penalty” to reflect how difficult it is to shape the future in predictable ways. This penalty should be quite large – perhaps many orders of magnitude.
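
To see why the penalty could reach several orders of magnitude, consider a toy model in which predictable influence decays at a fixed rate per century – the decay rate itself is a pure assumption:

```python
# Illustrative only: how an "entropy penalty" compounds over time.
# Suppose each century we retain only a fraction p of predictable
# influence over outcomes; p = 0.5 is a pure assumption.

p = 0.5          # assumed predictable influence retained per century
centuries = 10   # ~1000 years, the far-future threshold defined above

penalty = p ** centuries
print(f"Compounded entropy penalty: {penalty:.1e}")  # ~1e-3 for p = 0.5
```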

The relative tractability differs by less than it seems, though, because reducing short-term suffering – such as wild animal suffering – may be harder than it looks. Effects on wild animals are usually chaotic; the main exception, habitat reduction, is ruled out for reasons of cooperation. The chaos is particularly pronounced for small marine animals and insects, which may be the dominant forms of sentience in the short run. So it is possible, but not simple, to effectively reduce short-term suffering. This also depends on whether and how we weight by brain size.

In a nutshell, short-term suffering may be more tractable, but not by as much as many people think – and perhaps it’s the other way around.

Neglectedness

Is work on the far future more neglected than reducing short-term suffering? At first glance, the answer is obviously yes; but upon reflection, it’s not clear-cut. Many people work on global poverty and (to a lesser extent) factory farming, but far fewer people reduce wild animal suffering, let alone invertebrate suffering. These forms of short-term suffering are just as neglected as far future suffering – and maybe the latter is less neglected than it seems because future generations will work on it, too.

Anthropics

I follow the standard solution to anthropics: reasoning as if controlling all copies at once. Accordingly, when I talk of the “fraction of expected suffering”, it is weighted by the number of copies. This essay by Brian Tomasik explains the details.

I endorse Robin Hanson’s leverage penalty as a potential solution to Pascalian reasoning. The leverage penalty punishes hypotheses in which you occupy an extraordinary position of power based on the idea that only a small fraction of agents can be in control of a large number of others. Applied to the far future, the leverage penalty a priori balances out astronomical stakes.
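
A numerical sketch may help – this is only the intuition, not Hanson’s formal argument, and the 1/N scaling is the assumption doing all the work:

```python
# A sketch of the leverage-penalty intuition. Assumption: since only a
# fraction ~1/N of agents can each control the fates of N others, the
# prior probability of occupying such a position scales as 1/N.

for n in [1e3, 1e9, 1e30]:     # hypothetical numbers of affected beings
    prior = min(1.0, 1.0 / n)  # leverage-penalized prior on having that power
    # Prior-weighted stakes stay bounded however astronomical n gets.
    print(f"N = {n:.0e}: prior * stakes = {prior * n:.1f}")
```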

It is unclear how much evidence we have that we can influence far-future suffering, compared to other sources of suffering. Our best world models do suggest astronomical stakes, which is strong Bayesian evidence: conditional on us not being able to influence astronomical stakes, the world is (a priori) highly unlikely to appear as if it were possible. Yet we can conceive of explanations for why our observations may mislead us:

  • We – that is, most of our copies – may be in a simulation that ends with the creation of artificial intelligence. This seems far-fetched, but conditional on superintelligence and simulations being feasible, the convergent instrumental goal of gaining knowledge may lead to the creation of vast numbers of “ancestor simulations”. See Brian Tomasik’s piece on this idea for more details.
  • A late Great Filter may extinguish almost all civilizations that are on track to build superintelligence.
  • Explanations of the Fermi paradox other than the Rare Earth Hypothesis imply that the stakes aren’t as astronomical as they seem, because the universe contains many civilizations that we don’t see.

In light of these, how strong is the evidence for our ability to shape astronomical stakes?

I don’t have a good answer, but Occam’s razor suggests taking reality at face value – that is, assuming we can impact astronomical stakes – rather than accepting radical changes to our worldview based on fancy anthropic arguments. This is a crude heuristic, though, and additional work on this issue may yield valuable insights.
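
One way to frame such additional work is a toy Bayes update in which the leverage-penalized prior meets the observational evidence. Both numbers below are assumptions chosen only to show how the two forces interact:

```python
# A toy Bayesian framing of the question above. The leverage penalty
# supplies a tiny prior on "we can really influence astronomical stakes";
# our world models supply a large likelihood ratio in favor.

prior_odds = 1e-10       # assumed: leverage-penalized prior odds of real influence
likelihood_ratio = 1e12  # assumed: how much more likely our observations
                         # are if influence is real than if it is illusory

posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"Posterior probability of real influence: {posterior:.0%}")  # ~99%
```

With these numbers the evidence wins, but a modestly weaker likelihood ratio or a harsher penalty flips the conclusion – which is why the question remains open.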

Conclusion

The main takeaways of this analysis are:

  • It’s complicated. Claims that the far future dominates by many orders of magnitude are exaggerated. (See here for reasons why we should generally be sceptical of such claims.)
  • We should conceptually distinguish the fraction of expected suffering and the fraction of variance for a random intervention. The latter implicitly considers tractability and neglectedness.
  • The relative tractability and neglectedness are unclear and quite different from what one would naively expect. The dominant forms of short-term suffering may be similarly neglected and (in)tractable compared to far-future interventions.
  • The fraction of expected suffering and the fraction of variance are therefore roughly equal.
  • Anthropic considerations are a major source of uncertainty.
  • Overall, far future effects are more important, but not astronomically so – I’d say not more than an order of magnitude. My expected values4 are:
    • For the fraction of expected suffering: 92%
    • For the fraction of variance: 90%

Acknowledgements

I am indebted to Lukas Gloor, Max Daniel and Brian Tomasik for valuable comments and discussions.

Footnotes

  1. If many copies of us exist in simulations, timespans refer to each individual copy; that is, I consider the next 1000 years from the point of view of each particular copy as “near-term” and everything beyond that as “far future”.
  2. More precisely, we care about the standard deviation, not the variance. This post uses “variance” in an informal sense, not as the mathematical concept.
  3. Mathematical nitpick: It’s not simple multiplication if we use fractions, but it is for the ratio of (variance in) expected suffering in the far future vs. short-term, the ratio of tractabilities, and so on.
  4. This refers to the EV of future suffering divided by the EV of total suffering, which is not the same as the EV of the fraction.
