Computational Statistics ("Estatística Computacional")

Course materials for Computational Statistics, a PhD-level course at EMAp.

Lecture notes and other resources

  • We will be using the excellent materials from Professor Patrick Rebeschini (Oxford University) as a general guide for our course.

Complementary material, including lecture notes and slides, may be posted here as the course progresses.

Here you can find a nascent annotated bibliography with landmark papers in the field. This review paper by Professor Hedibert Lopes is far better than anything I could conjure, however.

Books

Books marked with [a] are advanced material.

Main

Supplementary

News

An assignment on Gibbs samplers for linear regression with heteroskedasticity under conjugate priors is now available.
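For reference, here is a minimal Gibbs-sampler sketch for one common formulation of this problem: heteroskedasticity enters through per-observation scales lambda_i ~ Gamma(nu/2, nu/2) (a scale mixture of normals, i.e. Student-t errors), with a conjugate normal prior on beta and an inverse-gamma prior on sigma^2. The exact model in the assignment may differ; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y_i = x_i' beta + e_i,  e_i ~ N(0, sigma2 / lambda_i).
n, p, nu = 200, 3, 4.0
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true, sigma2_true = np.array([1.0, -2.0, 0.5]), 1.0
lam_true = rng.gamma(nu / 2, 2 / nu, size=n)
y = X @ beta_true + rng.normal(0, np.sqrt(sigma2_true / lam_true))

# Priors: beta ~ N(b0, B0), sigma2 ~ Inv-Gamma(a0, d0), lambda_i ~ Gamma(nu/2, nu/2).
b0, B0_inv = np.zeros(p), np.eye(p) / 100.0
a0, d0 = 2.0, 2.0

# Gibbs sampler: cycle through the full conditionals.
beta, sigma2, lam = np.zeros(p), 1.0, np.ones(n)
draws = []
for it in range(5000):
    # beta | rest ~ N(m, V): weighted least squares with weights lambda_i.
    V = np.linalg.inv(B0_inv + (X.T * lam) @ X / sigma2)
    m = V @ (B0_inv @ b0 + X.T @ (lam * y) / sigma2)
    beta = rng.multivariate_normal(m, V)
    # sigma2 | rest ~ Inv-Gamma(a0 + n/2, d0 + sum_i lambda_i r_i^2 / 2).
    r = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (d0 + 0.5 * np.sum(lam * r**2)))
    # lambda_i | rest ~ Gamma((nu + 1)/2, (nu + r_i^2 / sigma2)/2).
    lam = rng.gamma((nu + 1) / 2, 2.0 / (nu + r**2 / sigma2))
    if it >= 1000:                      # discard burn-in
        draws.append(np.append(beta, sigma2))

draws = np.array(draws)
print("posterior means (beta, sigma2):", np.round(draws.mean(axis=0), 3))
```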

Simulation

Markov chains

  • These notes from David Levin and Yuval Peres are excellent and cover a lot of material one might find interesting on Markov processes.
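As a warm-up, here is a small self-contained illustration (not taken from those notes) of the basic convergence phenomenon: simulate a finite, irreducible, aperiodic chain and compare its occupation frequencies with the stationary distribution computed from the transition matrix. The matrix is an arbitrary toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small irreducible, aperiodic chain; rows of P sum to one.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()

# Simulate the chain and compare occupation frequencies with pi.
n_steps = 100_000
state, counts = 0, np.zeros(3)
for _ in range(n_steps):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

print("stationary:", np.round(pi, 4))
print("empirical :", np.round(counts / n_steps, 4))
```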

Markov chain Monte Carlo

Hamiltonian Monte Carlo

The two definitive texts on HMC are Neal (2011) and Betancourt (2017). A nice set of notes is Vishnoi (2021). Moreover, Hoffman & Gelman (2014) describe the No-U-Turn sampler.
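For concreteness, below is a bare-bones HMC sketch with a leapfrog integrator, roughly in the spirit of Neal (2011), on a toy standard-Gaussian target. The step size, number of leapfrog steps and the target itself are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: standard bivariate Gaussian. U is the negative log density
# (up to a constant) and grad_U its gradient.
def U(q):      return 0.5 * q @ q
def grad_U(q): return q

def hmc_step(q, eps=0.1, L=20):
    p = rng.standard_normal(q.shape)            # resample momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis correction for the discretisation error.
    h_old = U(q) + 0.5 * p @ p
    h_new = U(q_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return q_new, True
    return q, False

q, draws, accepted = np.zeros(2), [], 0
for _ in range(5000):
    q, acc = hmc_step(q)
    draws.append(q)
    accepted += acc

draws = np.array(draws)
print("acceptance rate:", accepted / len(draws))
print("sample mean    :", draws.mean(axis=0))
print("sample var     :", draws.var(axis=0))
```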

Normalising Constants

This post by Radford Neal explains why the Harmonic Mean Estimator (HME) is a terrible estimator of the evidence.
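A quick way to see the problem is a toy conjugate model where the evidence is known in closed form: the HME estimates below are noisy and do not improve sensibly with more draws (its variance is in fact infinite in this example). The model and numbers are illustrative choices, not taken from Neal's post.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy conjugate model: y | theta ~ N(theta, 1), theta ~ N(0, 1).
# The evidence p(y) = N(y; 0, 2) is known exactly.
y = 3.0
true_evidence = norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))

# Exact posterior: theta | y ~ N(y/2, 1/2).
def hme(n_draws):
    theta = rng.normal(y / 2, np.sqrt(0.5), size=n_draws)
    lik = norm.pdf(y, loc=theta, scale=1.0)
    return 1.0 / np.mean(1.0 / lik)     # harmonic mean of the likelihoods

print("true evidence:", true_evidence)
for n in (10**3, 10**5):
    estimates = [hme(n) for _ in range(20)]
    print(f"HME, n={n:>7}: mean = {np.mean(estimates):.4f}, sd = {np.std(estimates):.4f}")
```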

Sequential Monte Carlo and Dynamic models

  • This book by Nicolas Chopin and Omiros Papaspiliopoulos is a great introduction (as it says in the title) to SMC. SMC finds application in many areas, but dynamic (linear) models deserve a special mention. The seminal 1997 book by West and Harrison remains the de facto text on the subject.
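As a concrete bridge between the two topics, here is a minimal bootstrap particle filter for a toy local-level model (the simplest dynamic linear model). The noise variances, prior for the initial state and particle count are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
q, r, T = 0.1, 1.0, 100

# Simulate data from the model (x_0 = 0).
x = np.cumsum(rng.normal(0, np.sqrt(q), size=T))
y = x + rng.normal(0, np.sqrt(r), size=T)

# Bootstrap particle filter.
N = 1000
particles = rng.normal(0, 1, size=N)            # diffuse prior for the initial state
filtered_means = []
for t in range(T):
    particles = particles + rng.normal(0, np.sqrt(q), size=N)   # propagate
    logw = norm.logpdf(y[t], loc=particles, scale=np.sqrt(r))    # weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_means.append(np.sum(w * particles))
    idx = rng.choice(N, size=N, p=w)                             # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(filtered_means) - x) ** 2))
print("RMSE of filtered mean vs. true state:", rmse)
```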

Optimisation

The EM algorithm

Simulated Annealing

  • The original 1983 paper in Science by Kirkpatrick et al. is a great read.
  • These visualisations of the traveling salesman problem might prove useful.
  • These notes have a little bit of theory on the cooling scheme.
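For illustration, here is a minimal simulated-annealing sketch with a random-walk proposal and a geometric cooling schedule; the 1-D objective and all tuning constants are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# An arbitrary 1-D objective with many local minima.
def f(x):
    return 0.1 * x**2 + np.sin(3 * x) + np.cos(5 * x)

x = 8.0                                       # deliberately bad starting point
fx = f(x)
best_x, best_f = x, fx
temp = 5.0
for it in range(20_000):
    x_prop = x + rng.normal(0, 0.5)           # random-walk proposal
    f_prop = f(x_prop)
    # Always accept downhill moves; accept uphill moves with
    # probability exp(-(increase) / temperature).
    if f_prop < fx or rng.uniform() < np.exp(-(f_prop - fx) / temp):
        x, fx = x_prop, f_prop
        if fx < best_f:
            best_x, best_f = x, fx
    temp *= 0.9995                            # geometric cooling schedule

print(f"best x = {best_x:.4f}, f(best x) = {best_f:.4f}")
```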

Bootstrap

Miscellanea

  • In these notes, Terence Tao gives insights into concentration of measure, which is the reason why integrating with respect to a probability measure in high-dimensional spaces is hard; a tiny numerical illustration follows at the end of this list.

  • A Primer for the Monte Carlo Method, by the great Ilya Sobol, is one of the first texts on the Monte Carlo method.

  • The Harris inequality, E[fg] >= E[f]E[g], for f and g increasing, is a special case of the FKG inequality.

  • In Markov Chain Monte Carlo Maximum Likelihood, Charlie Geyer shows how one can use MCMC to do maximum likelihood estimation when the likelihood cannot be written in closed-form. This paper is an example of MCMC methods being used outside of Bayesian statistics.

  • This paper discusses the solution of Problem A in assignment 0 (2021).
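On the concentration-of-measure point above, a tiny experiment makes it visible: the norm of a d-dimensional standard Gaussian concentrates around sqrt(d) with roughly constant spread, so in high dimensions nearly all of the mass sits in a thin shell far from the mode at the origin.

```python
import numpy as np

rng = np.random.default_rng(5)

# Norms of standard Gaussian vectors concentrate sharply around sqrt(d).
for d in (2, 20, 200, 2000):
    x = rng.standard_normal((10_000, d))
    norms = np.linalg.norm(x, axis=1)
    print(f"d = {d:>5}: mean |x| = {norms.mean():7.2f}, "
          f"sd = {norms.std():5.3f}, sqrt(d) = {np.sqrt(d):7.2f}")
```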

Reparametrisation

Sometimes a clever reparametrisation can make it much easier to compute expectations with respect to a target distribution. Here are some resources:

See #4. Contributed by @lucasmoschen.
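As a small illustration of the idea (not taken from issue #4), consider a funnel-shaped hierarchical model, a scaled-down version of Neal's funnel. Under the centred parametrisation the conditional scale of x varies wildly with v; the non-centred parametrisation turns sampling and expectations into plain Monte Carlo with independent Gaussians. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hierarchical model: v ~ N(0, 1.5^2), x | v ~ N(0, exp(v)).
# Non-centred parametrisation: x = exp(v/2) * z with z ~ N(0, 1), so
# (v, z) are independent Gaussians and the funnel geometry disappears.
n = 1_000_000
v = rng.normal(0.0, 1.5, size=n)
z = rng.standard_normal(n)
x = np.exp(v / 2) * z                       # exact draws from the joint

# Check against the closed form E[x^2] = E[exp(v)] = exp(1.5^2 / 2).
print("Monte Carlo E[x^2]:", np.mean(x ** 2))
print("exact       E[x^2]:", np.exp(1.5 ** 2 / 2))
```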

Variance reduction

  • Rao-Blackwellisation is a popular technique for obtaining estimators with lower variance. I recommend the recent International Statistical Review article by Christian Robert and Gareth Roberts on the topic.
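Below is a toy comparison of a crude Monte Carlo estimator against its Rao-Blackwellised version; the Poisson-Gamma model is an arbitrary example chosen so that the conditional expectation is available in closed form and both estimators target the same quantity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: lam ~ Gamma(3, 1), X | lam ~ Poisson(lam); target E[X] = 3.
# Crude estimator averages X; the Rao-Blackwellised estimator averages
# E[X | lam] = lam, which has strictly smaller variance.
n, reps = 1_000, 2_000
crude, rb = [], []
for _ in range(reps):
    lam = rng.gamma(shape=3.0, scale=1.0, size=n)
    x = rng.poisson(lam)
    crude.append(x.mean())
    rb.append(lam.mean())

print("crude estimator: mean = %.4f, var = %.6f" % (np.mean(crude), np.var(crude)))
print("RB estimator   : mean = %.4f, var = %.6f" % (np.mean(rb), np.var(rb)))
# Theory: Var(X) = Var(lam) + E[lam] = 6, while Var(E[X | lam]) = Var(lam) = 3.
```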

Extra (fun) resources

In these blogs and websites you will often find interesting discussions on computational, numerical and statistical aspects of applied Statistics and Mathematics.