Randomisation as modesty

And optimisation as hubris

Adam Howes (Independent)
2024-03-28

Status: draft, last updated 2024-04-11

I’d like to understand when we use randomness in statistics, and why. This post is a placeholder for me figuring that out.

Monte Carlo is fundamentally unsound

Suppose that we wish to integrate a function \(f: \mathbb{R} \to \mathbb{R}\) over the real line \[ I = \int_{- \infty}^{\infty} f(x) \text{d} x. \] If the integrand factorises as \(f(x) = h(x) p(x)\), where \(p\) is a probability density from which we are able to generate samples \(x_{1:n} = x_1, \ldots, x_n\), then \(I\) may be estimated using Monte Carlo as \[ \hat I_{\text{MC}} = \frac{1}{n} \sum_{i = 1}^n h(x_i). \] Alternatively, if we are not able to sample from \(p\), we may use importance sampling. If \(y_{1:n}\) are samples from a density \(g\), with \(g(y) > 0\) wherever \(f(y) \neq 0\), then \(I\) may be estimated by \[ \hat I_{\text{IS}} = \frac{1}{n} \sum_{i = 1}^n \frac{f(y_i)}{g(y_i)}. \]
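As a minimal sketch of the importance-sampling estimator above (not from O'Hagan, and with a target of my choosing): take \(f(x) = e^{-x^2}\), whose integral over the real line is \(\sqrt{\pi}\), and use a standard normal as the proposal \(g\).

```python
import math
import random

def importance_sampling(f, g_sample, g_pdf, n=100_000, seed=1):
    """Estimate the integral of f over the real line by importance
    sampling: draw y_i from g and average the ratios f(y_i) / g(y_i)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = g_sample(rng)
        total += f(y) / g_pdf(y)
    return total / n

# Example target: f(x) = exp(-x^2), with true integral sqrt(pi).
f = lambda x: math.exp(-x * x)

# Proposal g: standard normal density and sampler.
g_pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
g_sample = lambda rng: rng.gauss(0.0, 1.0)

estimate = importance_sampling(f, g_sample, g_pdf)
print(estimate, math.sqrt(math.pi))  # estimate is close to 1.7725
```

The proposal here satisfies the support condition (the normal density is positive everywhere), and its tails are heavier than those of \(f\), so the ratios \(f(y)/g(y)\) stay bounded and the estimator has finite variance.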

O’Hagan (1987) makes two objections to Monte Carlo…

References

O'Hagan, Anthony. 1987. "Monte Carlo Is Fundamentally Unsound." The Statistician, 247–49.

Citation

For attribution, please cite this work as

Howes (2024, March 28). Adam Howes: Randomisation as modesty. Retrieved from https://athowes.github.io/posts/2024-03-28-randomisation/

BibTeX citation

@misc{howes2024randomisation,
  author = {Howes, Adam},
  title = {Adam Howes: Randomisation as modesty},
  url = {https://athowes.github.io/posts/2024-03-28-randomisation/},
  year = {2024}
}