Foundations of statistics
The foundations of statistics consist of the mathematical and philosophical basis for arguments and inferences made using statistics. This includes the justification for methods of statistical inference, estimation, and hypothesis testing; the quantification of uncertainty in the conclusions of statistical arguments; and the interpretation of those conclusions in probabilistic terms. A valid foundation can be used to explain statistical paradoxes such as Simpson's paradox, provide a precise description of observed statistical laws, and guide the application of statistical conclusions in social and scientific settings.
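Simpson's paradox can be made concrete with the often-cited kidney-stone treatment data (Charig et al., 1986), in which one treatment has the higher success rate in every subgroup yet the lower success rate in the pooled totals. A minimal sketch:

```python
# Simpson's paradox, illustrated with the well-known kidney-stone
# data (Charig et al. 1986): treatment A beats treatment B within
# each subgroup, yet B appears better in the pooled totals.

groups = {
    "small stones": {"A": (81, 87), "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, A has the higher success rate.
for name, g in groups.items():
    assert rate(*g["A"]) > rate(*g["B"])

# Pooled across subgroups, the ordering reverses: B looks better.
totals = {t: (sum(g[t][0] for g in groups.values()),
              sum(g[t][1] for g in groups.values()))
          for t in ("A", "B")}
assert rate(*totals["B"]) > rate(*totals["A"])
```

The reversal arises because the severity of the case (stone size) is associated with both the choice of treatment and the outcome; a foundation for statistics must say which comparison, pooled or stratified, licenses a causal conclusion.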
Statistical inference addresses issues related to the analysis and interpretation of data. Examples include the use of Bayesian inference versus frequentist inference; the distinction between Fisher's "significance testing" and the Neyman-Pearson "hypothesis testing"; and whether the likelihood principle should be followed. Some of these issues have been subject to unresolved debate for up to two centuries. Others have achieved a pragmatic consensus for specific applications, such as the use of Bayesian methods in fitting complex ecological models.
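The Bayesian/frequentist contrast can be seen in miniature on a single binomial proportion. Below is an illustrative sketch using hypothetical data (7 heads in 10 coin flips): the frequentist summary is a maximum-likelihood estimate with a Wald confidence interval, while the Bayesian summary, under an assumed uniform Beta(1, 1) prior, is a Beta posterior whose mean is Laplace's "rule of succession":

```python
import math

# Hypothetical data: 7 heads observed in 10 coin flips.
k, n = 7, 10

# Frequentist: maximum-likelihood estimate and a 95% Wald interval.
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is
# Beta(k + 1, n - k + 1); its mean is (k + 1) / (n + 2).
post_mean = (k + 1) / (n + 2)

print(f"MLE = {p_hat:.2f}, 95% Wald CI = ({wald[0]:.2f}, {wald[1]:.2f})")
print(f"posterior mean = {post_mean:.3f}")
```

The two paradigms also interpret their outputs differently: the confidence interval is a statement about the long-run behavior of the procedure, whereas the posterior is a probability distribution over the parameter itself.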
Bandyopadhyay & Forster describe four statistical paradigms: classical statistics (or error statistics), Bayesian statistics, likelihood-based statistics, and statistics based on the Akaike information criterion (AIC). More recently, Judea Pearl reintroduced formal mathematics for attributing causality in statistical systems, addressing fundamental limitations of both the Bayesian and Neyman-Pearson approaches.
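The AIC paradigm trades goodness of fit against model complexity via the formula AIC = 2k − 2 ln L, where k is the number of free parameters and L the maximized likelihood. An illustrative sketch with hypothetical data (7 heads in 10 Bernoulli trials), comparing a fixed fair-coin model against a one-parameter fitted model:

```python
import math

# Hypothetical data: 7 heads in 10 Bernoulli trials.
heads, n = 7, 10

def log_lik(p):
    # Log-likelihood of the observed sequence under success probability p.
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

def aic(k, ll):
    # AIC = 2k - 2 ln L, where k counts free parameters.
    return 2 * k - 2 * ll

aic_fair = aic(0, log_lik(0.5))        # fixed p = 1/2, no free parameters
aic_fit = aic(1, log_lik(heads / n))   # p estimated from the data

# The fitted model matches the data better, but its extra parameter
# incurs a complexity penalty; on this small sample the simpler
# fair-coin model achieves the lower (better) AIC.
assert aic_fair < aic_fit
```

On this sample the improvement in fit from estimating p does not outweigh the 2k penalty, which is exactly the kind of parsimony judgment the AIC paradigm formalizes.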