Math Wiki

Statistics is a broad mathematical discipline which studies ways to collect, summarize, and draw conclusions from data. It is applicable to a wide variety of academic fields from the physical and social sciences to the humanities, as well as to business, government and industry.

Once data is collected, either through a formal sampling procedure or some other, less formal method of observation, graphical and numerical summaries may be obtained using the techniques of descriptive statistics. The specific summary methods chosen depend on the method of data collection. The techniques of descriptive statistics can also be applied to census data, which is collected on entire populations.

If the data can be viewed as a sample (a subset of some population of interest), inferential statistics can be used to draw conclusions about the larger, mostly unobserved population. These inferences, which are usually based on ideas of randomness and uncertainty quantified through the use of probabilities, may take any of several forms:

  1. Answers to essentially yes/no questions (hypothesis testing)
  2. Estimates of numerical characteristics (estimation)
  3. Predictions of future observations (prediction)
  4. Descriptions of association (correlation)
  5. Modeling of relationships (regression)
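These forms of inference can be illustrated with a short, self-contained Python sketch. The data and the claimed mean of 6.0 are invented for illustration; the 1.96 multiplier is the usual normal approximation for a 95% confidence interval.

```python
import statistics

# Hypothetical sample of eight measurements (illustrative data only).
sample = [4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0, 4.7]

# Estimation: a point estimate of the population mean, plus a rough
# 95% confidence interval via the normal approximation.
mean = statistics.mean(sample)
se = statistics.stdev(sample) / len(sample) ** 0.5
ci = (mean - 1.96 * se, mean + 1.96 * se)

# Hypothesis testing, informally: is a claimed population mean of 6.0
# plausible? It falls outside the interval, which suggests "no".
print(mean, ci)
```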

The procedures by which such inferences are made are sometimes collectively known as applied statistics. In contrast, statistical theory (or, as an academic subject, mathematical statistics) is the subdiscipline of applied mathematics which uses probability theory and mathematical analysis to place statistical practice on a firm theoretical basis. (If applied statistics is what you do in statistics, statistical theory tells you why it works.)

In academic statistics courses, the word statistic (no final s) is usually defined as a numerical quantity calculated from a set of data. In this usage, statistics would be the plural form meaning a collection of such numerical quantities. See Statistic for further discussion.

Less formally, the word statistics (singular statistic) is often used in a way roughly synonymous with data or simply numbers, a common example being sports "statistics" published in newspapers. In the United States, the Bureau of Labor Statistics collects data on employment and general economic conditions; also, the Census Bureau publishes a large annual volume called the Statistical Abstract of the United States based on census data.

Etymology

The word statistics comes from the modern Latin phrase statisticum collegium (lecture about state affairs), which gave rise to the Italian word statista (statesman or politician — compare to status) and the German Statistik (originally the analysis of data about the state). It acquired the meaning of the collection and classification of data generally in the early 19th century. The collection of data about states and localities continues, largely through national and international statistical services.

History

Main article: History of statistics

Basic concepts

There are several approaches to statistics, most of which rely on a few basic concepts.

Population vs. sample

In statistics, a population is the set of all objects (people, etc.) that one wishes to make conclusions about. In order to do this, one usually selects a sample of objects: a subset of the population. By carefully examining the sample, one may make inferences about the larger population.

For example, if one wishes to determine the average height of adult women aged 20-29 in the U.S., it would be impractical to try to find all such women and ask or measure their heights. However, by taking a small but representative sample of such women, one may determine the average height of all young women quite closely. The matter of taking representative samples is the focus of sampling.
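The height example can be simulated in a few lines of Python. The "population" below is synthetic, drawn from a normal distribution with an assumed mean of 164 cm, so this is only a sketch of the idea that a random sample tracks the whole population:

```python
import random
import statistics

random.seed(0)

# Synthetic "population" of heights in cm (illustrative, not real data).
population = [random.gauss(164.0, 7.0) for _ in range(100_000)]

# Measuring a simple random sample of 1,000 is far cheaper than a census,
# yet the sample mean lands close to the (normally unknown) population mean.
sample = random.sample(population, k=1000)
print(statistics.mean(population), statistics.mean(sample))
```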

Randomness, probability and uncertainty

The concept of randomness is difficult to define precisely. In general, any outcome of an action, or series of actions, which cannot be predicted beforehand may be described as being random. When statisticians use the word, they generally mean that while the exact outcome cannot be known beforehand, the set of all possible outcomes is known — or, at least in theory, knowable. A simple example is the outcome of a coin toss: whether the coin will land heads up or tails up is (ideally) unknowable before the toss, but what is known is that the outcome will be one of these two possibilities and not, say, on edge (assuming that the coin cannot stand upright on its edge). The set of all possible outcomes is usually called the sample space.
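The sample space idea is easy to make concrete in code. A minimal Python sketch, assuming an idealized coin with exactly two outcomes:

```python
import itertools

# Sample space for one toss of an idealized coin: exactly two outcomes.
coin = {"heads", "tails"}

# Sample space for three tosses: every ordered triple of outcomes.
three_tosses = set(itertools.product(coin, repeat=3))
print(len(three_tosses))  # 2**3 = 8 possible outcomes
```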

The probability of an event is also difficult to define precisely but is basically equivalent to the everyday idea of the likelihood or chance of the event happening. An event that can never happen has probability zero; an event that must happen has probability one. (Note that the reverse statements are not necessarily true; see the article on probability for details.) All other events have a probability strictly between zero and one. The greater the probability the more likely the event, and thus the less our uncertainty about whether it will happen; the smaller the probability the greater our uncertainty.

There are two basic interpretations of probability used to assign or compute probabilities in statistics:

  • Relative frequency interpretation: The probability of an event is the long-run relative frequency of occurrence of the event. That is, after a long series of trials, the probability of event A is taken to be:

        P(A) = (number of trials in which A occurred) / (total number of trials)

    To make this definition rigorous, the right-hand side of the equation should be preceded by the limit as the number of trials grows to infinity.
  • Subjective interpretation: The probability of an event reflects our subjective assessment of the likelihood of the event happening. This idea can be made rigorous by considering, for example, how much one should be willing to pay for the chance to win a given amount of money if the event happens. For more information, see Bayesian probability.

Note that the relative frequency interpretation does not require that a long series of trials actually be conducted. Typically, probability calculations are ultimately based upon perceived equally-likely outcomes — as obtained, for example, when one tosses a so-called "fair" coin or rolls a "fair" die. Many frequentist statistical procedures are based on simple random samples, in which every possible sample of a given size is as likely as any other.
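The relative frequency interpretation can be demonstrated by simulation. The sketch below rolls a simulated fair die, so the probability of the event "roll a 6" is 1/6 by the equally-likely-outcomes reasoning above, and the observed relative frequency tends toward that value as the number of trials grows:

```python
import random

random.seed(42)

def relative_frequency(trials: int) -> float:
    """Relative frequency of rolling a 6 with a simulated fair die."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    return hits / trials

# The longer the series of trials, the closer the result tends to 1/6.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```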

Prior information and loss

Once a procedure has been chosen for assigning probabilities to events, the probabilistic nature of the phenomenon under consideration can be summarized in one or more probability distributions. The data collected is then viewed as having been generated, in a sense, according to the chosen probability distribution.

Section requires expansion...

Data collection

Sampling

Main article: Sampling (statistics)

Experimental design

Main article: Experimental design (statistics)

Data summary: descriptive statistics

Main article: Descriptive statistics

Levels of measurement

Main article: Level of measurement
  • Qualitative (categorical)
    • Nominal
    • Ordinal
  • Quantitative (numerical)
    • Interval
    • Ratio

Graphical summaries

Main article: Statistical graphs

Numerical summaries

Main article: Summary statistics

Data interpretation: inferential statistics

Main article: Statistical inference

Estimation

Main article: Statistical estimation

Prediction

Main article: Statistical prediction

Hypothesis testing

Main article: Statistical hypothesis testing

Relationships and modeling

Correlation

Main article: Correlation

Two quantities are said to be correlated if greater values of one tend to be associated with greater values of the other (positively correlated) or with lesser values of the other (negatively correlated). In the case of interval or ratio variables, this is often apparent in a scatterplot of the data: positive correlation is reflected in an overall increasing trend in the data points when viewed left to right on the graph; negative correlation appears as an overall decreasing trend.

The correlation between two variables is a number measuring the strength and usually the direction of this relationship. Most measures of correlation take on values from -1 to 1 or from 0 to 1. Zero correlation means that greater values of one variable are associated with neither higher nor lower values of the other, or possibly with both. A correlation of 1 implies a perfect positive correlation, meaning that an increase in one variable is always associated with an increase in the other (and possibly always of the same size, depending on the correlation measure used). Finally, a correlation of -1 means that an increase in one variable is always associated with a decrease in the other.
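Pearson's correlation coefficient is the most common such measure: the covariance of the two variables divided by the product of their standard deviations, which always lies between -1 and 1. A small Python sketch computing it from that definition, with invented height/weight data:

```python
# Pearson's correlation coefficient, computed from its definition as the
# covariance divided by the product of the standard deviations.
def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

heights = [150, 160, 170, 180, 190]
weights = [52, 58, 66, 75, 85]          # increases with height
print(pearson(heights, weights))        # close to +1
print(pearson(heights, weights[::-1]))  # reversed pairing: close to -1
```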

See the article on correlation for more information.

Regression

Main article: Regression

Time series

Main article: Time series

Data mining

Main article: Data mining

Statistics in other fields

  • Biostatistics
  • Business statistics
  • Chemometrics
  • Demography
  • Economic statistics
  • Engineering statistics
  • Epidemiology
  • Geostatistics
  • Psychometrics
  • Statistical physics

Subfields or specialties in statistics

  • Mathematical statistics
  • Reliability
  • Survival analysis
  • Quality control
  • Time series
  • Categorical data analysis
  • Multivariate statistics
  • Large-sample theory
  • Bayesian inference
  • Regression
  • Sampling theory
  • Design of experiments
  • Statistical computing
  • Non-parametric statistics
  • Density estimation
  • Simultaneous inference
  • Linear inference
  • Optimal inference
  • Decision theory
  • Linear models
  • Data modeling
  • Sequential analysis
  • Spatial statistics

Probability:

  • Stochastic processes
  • Queueing theory

Related areas of mathematics

Statistical software

Main article: List of statistical software

Commercial

  • CART
  • ECHIPS (EChips)
  • Excel
    • add-ins: Analyse-It, SigmaXL, statistiXL, WinSTAT, XLSTAT
  • JMP
  • Minitab
  • NCSS
  • nQuery
  • PASS
  • SAS System (SAS)
  • S
    • descendants: S-PLUS (S-Plus), S2, S3, S4, S5, S6
  • SPSS
  • Stata
  • STATISTICA (Statistica)
  • StatXact, LogXact
  • SUDAAN (Sudaan)
  • SYSTAT (Systat)

Free versions of commercial software

  • Gnumeric — not a clone of Excel, but implements many of the same functions
  • R — free version of S
  • FIASCO or PSPP — free version of SPSS

Other free software

  • BUGS — Bayesian inference Using Gibbs Sampling
  • ESS — a GNU Emacs add-on
  • ...
  • see also [1]

Licensing unknown

  • Genstat
  • XLispStat
  • ...

World Wide Web

  • StatLib — large repository of statistical software and data sets

Online data sources

  • StatLib
  • ...

External links
