Defence from extinction and log-odds

Is there a natural way to think about extinction probabilities?

Adam Howes (Imperial College London)
2020-12-27

Background

A disaster which causes the extinction of humanity is arguably a lot worse than a large-scale disaster in which humanity survives (Ord 2020). In a recent 80,000 Hours podcast episode, Owen Cotton-Barratt discusses a simple model for how events can lead to extinction.

Products of probabilities

The model (Cotton-Barratt, Daniel, and Sandberg 2020) assumes1 that the probability that an event (\(D\) for disaster, let’s say) causes extinction can be decomposed into a product of conditional probabilities. So, roughly speaking, \[\begin{align*} \mathbb{P}(D \text{ causes extinction}) &= \mathbb{P}(D \text{ originates}) \\ &\times \mathbb{P}(D \text{ scales up} \, | \, D \text{ originates}) \\ &\times \mathbb{P}(D \text{ reaches endgame} \, | \, D \text{ originates and scales up}). \end{align*}\] Writing out this decomposition motivates thinking about how each of these probabilities can individually be reduced: described respectively in the paper as prevention, response and resilience.

Though Cotton-Barratt, Daniel, and Sandberg (2020) use three layers, they could (mathematically) have chosen any number. So, more generally, the extinction probability \(p\) could be written as a product of \(J\) (independent) probabilities \[ \mathbb{P}(D \text{ causes extinction}) = p = \prod_{j = 1}^J p_j. \]
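
As a quick sanity check (with made-up layer probabilities), here is the decomposition in R, confirming that halving any single layer halves the product:

# Hypothetical layer probabilities: origination, scaling up, reaching endgame
p_layer <- c(0.1, 0.05, 0.01)
p <- prod(p_layer)  # extinction probability, 5e-05

# Halving the second layer halves the product
p_layer_halved <- replace(p_layer, 2, p_layer[2] / 2)
prod(p_layer_halved) / p  # 0.5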

Halving

Cotton-Barratt mentions that halving any of the individual probabilities also halves the extinction probability, and that we can use this fact to think about balancing our resources across layers of protection. Clearly the first part is true, and without further information there is no reason to think that halving any particular \(p_j\) would be better than halving any other. However, it’s not obvious to me that all “halvings” are equally valuable, and therefore that they can be used as a basis for comparing interventions.

Initially I thought “isn’t halving a large number better than halving a small number?”. In other words, consider two intervention scenarios:

  1. the extinction probability is halved from 0.5 to 0.25,
  2. the extinction probability is halved from 0.01 to 0.005.

If there is a fixed pay-off \(R\) for survival, then intervention 1 seems clearly better than intervention 2, since \[ 0.25R > 0.005R. \] I think this argument does not apply in this situation, as the probabilities are more like rates, and survival is not associated with a single fixed pay-off.

A way to see this intuitively is that if the extinction rate is as high as 0.5 or 0.25 per year, say, then we can’t expect to live long, and so the future is less valuable. To be more precise, suppose the intervention halves the extinction rate \(p\) permanently until extinction. Then the difference in the expected number of years survived, and therefore the value of the intervention, is \[ \frac{2}{p} - \frac{1}{p} = \frac{1}{p}. \] According to this model, intervention 1 is only worth 2 years, in comparison to intervention 2 being worth 100 years, with that number only increasing as \(p\) gets smaller.
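
To make this concrete, here is the calculation in R (treating \(p\) as a constant annual extinction probability, so that the number of years survived is geometric with mean \(1 / p\)):

# Expected years gained by permanently halving a constant annual
# extinction probability p: 2 / p - 1 / p = 1 / p
years_gained <- function(p) 1 / p
years_gained(0.5)   # intervention 1: 2 years
years_gained(0.01)  # intervention 2: 100 years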

I’m also unsure about this second model: initially I was talking about the probability that a single event causes extinction, whereas now I’m talking about the total extinction rate. Furthermore, none of this changes how I might prioritise between layers of defence.

A more realistic model might look something like the following. Suppose that there are \(I\) independent sources of extinction risk \(i = 1, \ldots, I\) which contribute to the total extinction risk \(p\). Each of these risks can be decomposed into a product of \(J_i\) layers \(j = 1, \ldots, J_i\) such that \(p_i = \prod_{j = 1}^{J_i} p_{ij}\) as before. Extinction can result from any of the \(I\) sources, such that we must avoid each of them in order to survive \[ p = 1 - \prod_{i = 1}^I (1 - p_i) = 1 - \prod_{i = 1}^I \left( 1 - \prod_{j = 1}^{J_i} p_{ij} \right). \] This could be continued by analysing how (say) halving a given \(p_{ij}\) might alter \(p\), though I’m unsure anything useful would result.2
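
Here is a sketch of this combined model in R, with made-up numbers for \(I = 2\) sources, each with three layers. Note that once risks combine in this way, halving a single \(p_{ij}\) no longer halves the total \(p\):

# Hypothetical layer probabilities p_ij for two independent risk sources
p_layers <- list(c(0.1, 0.05, 0.01), c(0.2, 0.1, 0.001))

p_total <- function(p_layers) {
  p_i <- sapply(p_layers, prod)  # per-source risks p_i
  1 - prod(1 - p_i)              # risk of extinction from any source
}

# Halve one layer of source 1 and compare total risk
p_halved <- p_layers
p_halved[[1]][2] <- p_halved[[1]][2] / 2
p_total(p_halved) / p_total(p_layers)  # more than 0.5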

Log-odds

Cotton-Barratt also mentions how the log-odds operation, given by the logarithm of the odds \(p / (1 - p)\) as \[ \text{logit}(p) = \log \left(\frac{p}{1 - p}\right), \] “stretches out” probabilities near zero and one. It can be useful to think of probabilities this way in situations like the above, where a small difference between two probabilities is actually really important – at least more important than \(|p - q|\) would have you believe. An easy way to see how the interval \([0, 1]\) is stretched out is by plotting the inverse operation \(\text{logit}^{-1}(x)\), the logistic transformation.

logistic <- function(x) 1 / (1 + exp(-x))  # inverse logit transformation
curve(logistic, from = -8, to = 8, xlab = "x", ylab = "logistic(x)")

You can see that all the (additive) action is happening in a relatively narrow region: between -5 (corresponding to the logit of 0.007) and 5 (the logit of 0.993)3.

The logit transformation is not unique in mapping \((0, 1) \to \mathbb{R}\), so why do we use it? It looks like the answer is mostly because of the mathematical convenience which comes from logit being the natural parameter of the Bernoulli distribution, which “does not imply that the logit link is more likely to be a realistic representation of the real world than some other link” (Allison 2015).
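
To spell out that convenience: writing the Bernoulli probability mass function in exponential family form, \[ \mathbb{P}(Y = y) = p^y (1 - p)^{1 - y} = \exp \left( y \log \frac{p}{1 - p} + \log(1 - p) \right), \quad y \in \{0, 1\}, \] the coefficient of \(y\) is exactly \(\text{logit}(p)\), which is what makes logit the natural parameter.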

On the topic of halving probabilities, it doesn’t appear to me that \(\text{logit}(p)\) has any simple relationship to \(\text{logit}(p / 2)\). Note that it is not possible4 to design a link function \(f: [0, 1] \to \mathbb{R}\) with all of the following properties:

  1. \(f\) is antisymmetric about one half, that is \(f(1 - p) = -f(p)\),
  2. \(f(p) \to -\infty\) as \(p \to 0\),
  3. \(f(p / 2) = f(p) - 1\), so that halving a probability always subtracts one.

Logit satisfies the first two, but not the last. Setting \(f^{-1}(x) = 2^{x - 1}\), that is \(f(p) = \log_2(2p)\), satisfies the last two but not the first. In the range \(p < 0.5\) the two functions are not that dissimilar.

f <- function(x) 2^(x - 1)  # the inverse link f^{-1}(x), i.e. f(p) = log2(2p)
curve(logistic, from = -5, to = 0, ylab = "p"); curve(f, from = -5, to = 0, add = TRUE, lty = 2)

Product model with log-odds

Is logit a natural way to think about products of independent probabilities? Logarithms take multiplication to addition, so it’s reasonable to think that there might be. Let’s see what happens \[\begin{align} \text{logit}(p) &= \log \left( \frac{p}{1 - p} \right) = \log(p) - \log(1 - p) \\ &= \log \left( \prod_{j = 1}^J p_j \right) - \log \left( 1 - \prod_{j = 1}^J p_j \right) \\ &= \sum_{j = 1}^J \log p_j - \underbrace{\log \left(1 - \prod_{j = 1}^J p_j \right)}_{\text{approx } 0 \text{ for } p \text{ small}} \\ &\approx \sum_{j = 1}^J \log p_j. \end{align}\] The result is that, supposing that \(p\) is small, then \(\text{logit}(p)\) is approximately the sum of individual log probabilities.
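
A quick numerical check of this approximation, again with made-up layer probabilities:

logit <- function(p) log(p / (1 - p))
p_layer <- c(0.1, 0.05, 0.01)  # hypothetical layers, p = 5e-05
logit(prod(p_layer))  # -9.903438
sum(log(p_layer))     # -9.903488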

Is this useful? I’m not sure. Supposing we really did want to minimise the logit of extinction probability (which I haven’t presented any good case for), then perhaps it could give us a way to trade off reducing the different \(p_j\): since halving any single \(p_j\) subtracts \(\log 2\) from \(\sum_j \log p_j\), on this scale all halvings look equally valuable, whichever layer they target.

Conclusion

Perhaps the question which I’m vaguely pointing at with this post is: is there some way to transform an extinction probability \(p \in [0, 1]\) (or perhaps rate) to a real number \(x \in \mathbb{R}\) such that \(x\) is on an interpretable scale for the purposes of extinction risk reduction? One of the things I mean by interpretable is that we should be prepared to spend the same amount of resources reducing \(x\) to \(x - 1\) as we should reducing \(x - 1\) to \(x - 2\) (I’d guess there are other useful properties you’d want too).

That being said, it might be that this is the wrong question and we should really try to keep probabilities and outcomes separate when performing an analysis – I’m not sure!


  1. It’s unclear to me what proportion of events posing extinction risk meet the assumptions – in that they are “processes” which pass through “layers” of escalation. That being said, any event which doesn’t meet this assumption is likely to be intractable (and so, borrowing another argument from the podcast, we might choose to restrict our attention to events we have a chance of doing something about).↩︎

  2. This kind of thing seems like something that manufacturing or process engineers might have thought about. They would be willing to accept a small defect rate, though would want to quantify that rate and the factors which affect it.↩︎

  3. I previously came across the fact that the logistic transformation of even modestly large real numbers gives probabilities very close to either zero or one in the context of setting priors. In my research on HIV, we can be relatively sure that neither virtually everybody (prevalence close to one) nor virtually nobody (prevalence close to zero) in a given region is infected. In a Bayesian logistic regression we input this prior information via the linear predictor, corresponding to \(\text{logit}(p)\), and so we need to be careful to avoid placing much prior probability outside \([-5, 5]\). A more general strategy for making sure that your model reflects your true prior beliefs (which may be more difficult to express by placing distributions on many interacting quantities than you’d think) is to use prior predictive checks, whereby you simulate from your prior model (Gelman et al. 2020).↩︎

  4. By properties 1 and 3, \(f(1)\) must equal 1: antisymmetry gives \(f(1/2) = 0\), and then \(f(1) = f(1/2) + 1 = 1\). Antisymmetry then forces \(f(0) = -f(1) = -1\), but \(f(0) \to -\infty\) by property 2.↩︎

References

Allison, Paul. 2015. “What’s So Special About Logit?” https://statisticalhorizons.com/whats-so-special-about-logit.
Cotton-Barratt, Owen, Max Daniel, and Anders Sandberg. 2020. “Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter.” Global Policy 11 (3): 271–82.
Gelman, Andrew, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. 2020. “Bayesian Workflow.” arXiv preprint arXiv:2011.01808.
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. Hachette Books.

Citation

For attribution, please cite this work as

Howes (2020, Dec. 27). Adam Howes: Defence from extinction and log-odds. Retrieved from https://athowes.github.io/posts/2021-11-03-defence-from-extinction-and-log-odds/

BibTeX citation

@misc{howes2020defence,
  author = {Howes, Adam},
  title = {Adam Howes: Defence from extinction and log-odds},
  url = {https://athowes.github.io/posts/2021-11-03-defence-from-extinction-and-log-odds/},
  year = {2020}
}