The Commons Math User Guide - Statistics

The statistics package provides frameworks and implementations for basic descriptive statistics, frequency distributions, bivariate regression, and t- and chi-square test statistics.

  • Descriptive statistics
  • Frequency distributions
  • Simple Regression
  • Statistical Tests

The stat package includes a framework and default implementations for the following descriptive statistics:

  • arithmetic and geometric means
  • variance and standard deviation
  • sum, product, log sum, sum of squared values
  • minimum, maximum, median, and percentiles
  • skewness and kurtosis
  • first, second, third and fourth moments

With the exception of percentiles and the median, all of these statistics can be computed without maintaining the full list of input data values in memory. The stat package provides interfaces and implementations that do not require value storage as well as implementations that operate on arrays of stored values.

The top level interface is org.apache.commons.math.stat.descriptive.UnivariateStatistic. This interface, implemented by all statistics, consists of evaluate() methods that take double[] arrays as arguments and return the value of the statistic. This interface is extended by StorelessUnivariateStatistic, which adds increment(), getResult() and associated methods to support "storageless" implementations that maintain counters, sums or other state information as values are added using the increment() method.

Abstract implementations of the top level interfaces are provided in AbstractUnivariateStatistic and AbstractStorelessUnivariateStatistic respectively.

Each statistic is implemented as a separate class, in one of the subpackages (moment, rank, summary) and each extends one of the abstract classes above (depending on whether or not value storage is required to compute the statistic). There are several ways to instantiate and use statistics. Statistics can be instantiated and used directly, but it is generally more convenient (and efficient) to access them using the provided aggregates, DescriptiveStatistics and SummaryStatistics.
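
For illustration, here is a minimal sketch of direct use, assuming the Mean class from the moment subpackage; evaluate() operates on a stored array, while increment() and getResult() support the storageless style described above:

  // Array-based usage via the UnivariateStatistic interface
  double[] values = {1d, 2d, 3d, 4d};
  Mean mean = new Mean();
  double arrayMean = mean.evaluate(values);   // mean of the full array

  // Storageless usage via the StorelessUnivariateStatistic interface
  Mean runningMean = new Mean();
  runningMean.increment(1d);
  runningMean.increment(2d);
  runningMean.increment(3d);
  double result = runningMean.getResult();    // mean of the values added so far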

DescriptiveStatistics maintains the input data in memory and has the capability of producing "rolling" statistics computed from a "window" consisting of the most recently added values.

SummaryStatistics does not store the input data values in memory, so the statistics included in this aggregate are limited to those that can be computed in one pass through the data without access to the full array of values.

Aggregate             | Statistics included                                                                                                            | Values stored? | "Rolling" capability?
DescriptiveStatistics | min, max, mean, geometric mean, n, sum, sum of squares, standard deviation, variance, percentiles, skewness, kurtosis, median | Yes            | Yes
SummaryStatistics     | min, max, mean, geometric mean, n, sum, sum of squares, standard deviation, variance                                          | No             | No

There is also a utility class, StatUtils, that provides static methods for computing statistics directly from double[] arrays.

Here are some examples showing how to compute descriptive statistics.

Compute summary statistics for a list of double values


Using the DescriptiveStatistics aggregate (values are stored in memory):

  // Get a DescriptiveStatistics instance using the factory method
  DescriptiveStatistics stats = DescriptiveStatistics.newInstance();

  // Add the data from the array
  for (int i = 0; i < inputArray.length; i++) {
      stats.addValue(inputArray[i]);
  }

  // Compute some statistics
  double mean = stats.getMean();
  double std = stats.getStandardDeviation();
  double median = stats.getPercentile(50);

Using the SummaryStatistics aggregate (values are not stored in memory):

  // Get a SummaryStatistics instance using the factory method
  SummaryStatistics stats = SummaryStatistics.newInstance();

  // Read data from an input stream,
  // adding values and updating sums, counters, etc.
  String line = in.readLine();
  while (line != null) {
      stats.addValue(Double.parseDouble(line.trim()));
      line = in.readLine();
  }
  in.close();

  // Compute the statistics
  double mean = stats.getMean();
  double std = stats.getStandardDeviation();
  // Note: percentiles (including the median) are NOT available from SummaryStatistics

Using the StatUtils utility class:

  // Compute statistics directly from the array,
  // assuming values is a double[] array
  double mean = StatUtils.mean(values);
  double variance = StatUtils.variance(values);
  double median = StatUtils.percentile(values, 50);

  // Compute the mean of the first three values in the array
  mean = StatUtils.mean(values, 0, 3);

Maintain a "rolling mean" of the most recent 100 values from an input stream


Use a DescriptiveStatistics instance with the window size set to 100:

  // Create a DescriptiveStatistics instance and set the window size to 100
  DescriptiveStatistics stats = DescriptiveStatistics.newInstance();
  stats.setWindowSize(100);

  // Read data from an input stream,
  // displaying the mean of the most recent 100 observations
  // after every 100 observations
  long nLines = 0;
  String line = in.readLine();
  while (line != null) {
      stats.addValue(Double.parseDouble(line.trim()));
      nLines++;
      if (nLines == 100) {
          nLines = 0;
          System.out.println(stats.getMean());
      }
      line = in.readLine();
  }
  in.close();

org.apache.commons.math.stat.descriptive.Frequency provides a simple interface for maintaining counts and percentages of discrete values.

Strings, integers, longs and chars are all supported as value types, as well as instances of any class that implements Comparable. The ordering of values used in computing cumulative frequencies is by default the natural ordering, but this can be overridden by supplying a Comparator to the constructor. Adding values that are not comparable to those that have already been added results in an IllegalArgumentException.

Here are some examples.

Compute a frequency distribution based on integer values


Mixing integers, longs, Integers and Longs:

  Frequency f = new Frequency();
  f.addValue(1);
  f.addValue(new Integer(1));
  f.addValue(new Long(1));
  f.addValue(2);
  f.addValue(new Integer(-1));
  System.out.println(f.getCount(1));             // displays 3
  System.out.println(f.getCumPct(0));            // displays 0.2
  System.out.println(f.getPct(new Integer(1)));  // displays 0.6
  System.out.println(f.getCumPct(-2));           // displays 0
  System.out.println(f.getCumPct(10));           // displays 1

Count string frequencies


Using case-sensitive comparison, alphabetical sort order (natural comparator):

  Frequency f = new Frequency();
  f.addValue("one");
  f.addValue("One");
  f.addValue("oNe");
  f.addValue("Z");
  System.out.println(f.getCount("one"));  // displays 1
  System.out.println(f.getCumPct("Z"));   // displays 0.5
  System.out.println(f.getCumPct("Ot"));  // displays 0.25

Using a case-insensitive comparator:

  Frequency f = new Frequency(String.CASE_INSENSITIVE_ORDER);
  f.addValue("one");
  f.addValue("One");
  f.addValue("oNe");
  f.addValue("Z");
  System.out.println(f.getCount("one"));  // displays 3
  System.out.println(f.getCumPct("z"));   // displays 1

org.apache.commons.math.stat.regression.SimpleRegression provides ordinary least squares regression with one independent variable, estimating the linear model:

y = intercept + slope * x

Standard errors for intercept and slope are available as well as ANOVA, r-square and Pearson's r statistics.

Observations (x,y pairs) can be added to the model one at a time or they can be provided in a 2-dimensional array. The observations are not stored in memory, so there is no limit to the number of observations that can be added to the model.

Usage Notes:

  • When there are fewer than two observations in the model, or when there is no variation in the x values (i.e., all x values are the same), all statistics return NaN. At least two observations with different x coordinates are required to estimate a bivariate regression model.
  • Getters for the statistics always compute values based on the current set of observations -- i.e., you can get statistics, then add more data and get updated statistics without using a new instance. There is no "compute" method that updates all statistics. Each of the getters performs the necessary computations to return the requested statistic.

Implementation Notes:

  • As observations are added to the model, the sum of x values, y values, cross products (x times y), and squared deviations of x and y from their respective means are updated using updating formulas defined in "Algorithms for Computing the Sample Variance: Analysis and Recommendations", Chan, T.F., Golub, G.H., and LeVeque, R.J. 1983, American Statistician, vol. 37, pp. 242-247, referenced in Weisberg, S. "Applied Linear Regression". 2nd Ed. 1985. All regression statistics are computed from these sums.
  • Inference statistics (confidence intervals, parameter significance levels) are based on the assumption that the observations included in the model are drawn from a Bivariate Normal Distribution.

Here are some examples.

Estimate a model based on observations added one at a time


Instantiate a regression instance and add data points:

  SimpleRegression regression = new SimpleRegression();

  regression.addData(1d, 2d);
  // At this point, with only one observation,
  // all regression statistics will return NaN

  regression.addData(3d, 3d);
  // With only two observations,
  // slope and intercept can be computed,
  // but inference statistics will return NaN

  regression.addData(3d, 3d);
  // Now all statistics are defined

Compute some statistics based on the observations added so far:

  System.out.println(regression.getIntercept());   // displays the intercept of the regression line
  System.out.println(regression.getSlope());       // displays the slope of the regression line
  System.out.println(regression.getSlopeStdErr()); // displays the slope standard error

Use the regression model to predict the y value for a new x value:

  System.out.println(regression.predict(1.5d));    // displays the predicted y value for x = 1.5

More data points can be added, and subsequent getXxx calls will incorporate the additional data in the statistics.

Estimate a model from a double[][] array of data points


Instantiate a regression object and load the dataset:

  double[][] data = { { 1, 3 }, { 2, 5 }, { 3, 7 }, { 4, 14 }, { 5, 11 } };
  SimpleRegression regression = new SimpleRegression();
  regression.addData(data);

Estimate the regression model based on the data:

  System.out.println(regression.getIntercept());   // displays the intercept of the regression line
  System.out.println(regression.getSlope());       // displays the slope of the regression line
  System.out.println(regression.getSlopeStdErr()); // displays the slope standard error

More data points -- even another double[][] array -- can be added, and subsequent getXxx calls will incorporate the additional data in the statistics.
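
The other statistics mentioned above (r-square, Pearson's r, and the significance of the slope estimate) are available from the same instance. A brief illustrative sketch using the corresponding SimpleRegression getters:

  System.out.println(regression.getRSquare());      // displays r-square (coefficient of determination)
  System.out.println(regression.getR());            // displays Pearson's r
  System.out.println(regression.getSignificance()); // displays the significance (p-value) of the slope estimate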

The interfaces and implementations in the org.apache.commons.math.stat.inference package provide Student's t and Chi-Square test statistics as well as p-values associated with t- and Chi-Square tests.

Implementation Notes

  • Both one- and two-sample t-tests are supported. Two-sample tests can be either paired or unpaired, and the unpaired two-sample tests can be conducted under the assumption of equal subpopulation variances or without this assumption. When equal variances are assumed, a pooled variance estimate is used to compute the t-statistic and the degrees of freedom used in the t-test equal the sum of the sample sizes minus 2. When equal variances are not assumed, the t-statistic uses both sample variances and the Welch-Satterthwaite approximation is used to compute the degrees of freedom. Methods to return t-statistics and p-values are provided in each case, as well as boolean-valued methods to perform fixed significance level tests. The names of test and test-statistic methods that assume equal subpopulation variances always start with "homoscedastic"; test or test-statistic methods whose names start with just "t" do not assume equal variances. See the examples below and the API documentation for more details.
  • The validity of the p-values returned by t-tests depends on the assumptions of the parametric t-test procedure.
  • p-values returned by both t- and chi-square tests are exact, computed using the numerical approximations of the t- and chi-square distributions in the distributions package.
  • p-values returned by t-tests are for two-sided tests, and the boolean-valued methods supporting fixed significance level tests assume that the hypotheses are two-sided. One-sided tests can be performed by dividing returned p-values (resp. critical values) by 2 (see the sketch following this list).
  • Degrees of freedom for chi-square tests are integral values: (number of observed counts - 1) for goodness-of-fit tests and (number of columns - 1) * (number of rows - 1) for independence tests.
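
For example, a minimal sketch of a one-sided, one-sample test obtained by halving the two-sided p-value (the sample values and mu below are hypothetical):

  double[] observed = {1d, 2d, 3d, 4d};
  double mu = 2d;
  TTestImpl testStatistic = new TTestImpl();
  double pTwoSided = testStatistic.tTest(mu, observed);  // two-sided p-value returned by the library
  double pOneSided = pTwoSided / 2;                      // one-sided p-value for the alternative "mean > mu",
                                                         // appropriate here because the sample mean exceeds mu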

Examples:

One-sample t-tests


To compare the mean of a double[] array to a fixed value:

  double[] observed = {1d, 2d, 3d};
  double mu = 2.5d;
  TTestImpl testStatistic = new TTestImpl();
  System.out.println(testStatistic.t(mu, observed));

The code above will display the t-statistic associated with a one-sample t-test comparing the mean of the observed values against mu.

To compare the mean of a dataset described by an org.apache.commons.math.stat.descriptive.StatisticalSummary to a fixed value:

  double[] observed = {1d, 2d, 3d};
  double mu = 2.5d;
  SummaryStatistics sampleStats = SummaryStatistics.newInstance();
  for (int i = 0; i < observed.length; i++) {
      sampleStats.addValue(observed[i]);
  }
  TTestImpl testStatistic = new TTestImpl();
  System.out.println(testStatistic.t(mu, sampleStats));

To compute the p-value associated with the null hypothesis that the mean of a set of values equals a point estimate, against the two-sided alternative that the mean is different from the target value:

  double[] observed = {1d, 2d, 3d};
  double mu = 2.5d;
  TTestImpl testStatistic = new TTestImpl();
  System.out.println(testStatistic.tTest(mu, observed));

The snippet above will display the p-value associated with the null hypothesis that the mean of the population from which the observed values are drawn equals mu.

To perform the test using a fixed significance level, use:

  testStatistic.tTest(mu, observed, alpha);

where 0 < alpha < 0.5 is the significance level of the test. The boolean value returned will be true iff the null hypothesis can be rejected with confidence 1 - alpha. To test, for example, at the 95% level of confidence, use alpha = 0.05.

Two-sample t-tests


Example 1: Paired test evaluating the null hypothesis that the mean difference between corresponding (paired) elements of the double[] arrays sample1 and sample2 is zero.

To compute the t-statistic:

  TTestImpl testStatistic = new TTestImpl();
  testStatistic.pairedT(sample1, sample2);

To compute the p-value:

  testStatistic.pairedTTest(sample1, sample2);

To perform a fixed significance level test with alpha = .05:

  testStatistic.pairedTTest(sample1, sample2, .05);

The last example will return true iff the p-value returned by testStatistic.pairedTTest(sample1, sample2) is less than .05.

Example 2: unpaired, two-sided, two-sample t-test using StatisticalSummary instances, without assuming that subpopulation variances are equal.

First create the StatisticalSummary instances. Both DescriptiveStatistics and SummaryStatistics implement this interface. Assume that summary1 and summary2 are SummaryStatistics instances, each of which has had at least 2 values added to the (virtual) dataset that it describes. The sample sizes do not have to be the same -- all that is required is that both samples have at least 2 elements.
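
For instance, the summaries might be built like this (a sketch assuming the samples are available as double[] arrays named sample1 and sample2):

  SummaryStatistics summary1 = SummaryStatistics.newInstance();
  for (int i = 0; i < sample1.length; i++) {
      summary1.addValue(sample1[i]);
  }
  SummaryStatistics summary2 = SummaryStatistics.newInstance();
  for (int i = 0; i < sample2.length; i++) {
      summary2.addValue(sample2[i]);
  }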

Note: The SummaryStatistics class does not store the dataset that it describes in memory, but it does compute all statistics necessary to perform t-tests, so this method can be used to conduct t-tests with very large samples. One-sample tests can also be performed this way. (See Descriptive statistics for details on the SummaryStatistics class.)

To compute the t-statistic:

  TTestImpl testStatistic = new TTestImpl();
  testStatistic.t(summary1, summary2);

To compute the p-value:

  testStatistic.tTest(summary1, summary2);

To perform a fixed significance level test with alpha = .05:

  testStatistic.tTest(summary1, summary2, .05);

In each case above, the test does not assume that the subpopulation variances are equal. To perform the tests under this assumption, replace "t" at the beginning of the method name with "homoscedasticT".
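
For example, a sketch of the pooled-variance (homoscedastic) versions of the t-statistic and p-value calls above, using the same summary1 and summary2 instances:

  testStatistic.homoscedasticT(summary1, summary2);     // pooled-variance t-statistic
  testStatistic.homoscedasticTTest(summary1, summary2); // corresponding p-value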

Computing chi-square test statistics


To compute a chi-square statistic measuring the agreement between a long[] array of observed counts and a double[] array of expected counts, use:

  ChiSquareTestImpl testStatistic = new ChiSquareTestImpl();
  long[] observed = {10, 9, 11};
  double[] expected = {10.1, 9.8, 10.3};
  System.out.println(testStatistic.chiSquare(expected, observed));

The value displayed will be sum((expected[i] - observed[i])^2 / expected[i]).

To get the p-value associated with the null hypothesis that observed conforms to expected, use:

  testStatistic.chiSquareTest(expected, observed);

To test the null hypothesis that observed conforms to expected with alpha significance level (equivalently, 100 * (1 - alpha)% confidence), where 0 < alpha < 1, use:

  testStatistic.chiSquareTest(expected, observed, alpha);

The boolean value returned will be true iff the null hypothesis can be rejected with confidence 1 - alpha.

To compute a chi-square statistic associated with a chi-square test of independence based on a two-dimensional (long[][]) counts array viewed as a two-way table, use:

  testStatistic.chiSquare(counts);

The rows of the 2-way table are counts[0], ..., counts[counts.length - 1].

The chi-square statistic returned is sum((counts[i][j] - expected[i][j])^2/expected[i][j]) where the sum is taken over all table entries and expected[i][j] is the product of the row and column sums at row i, column j divided by the total count.
To compute the p-value associated with the null hypothesis that the classifications represented by the counts in the columns of the input 2-way table are independent of the rows, use:

  testStatistic.chiSquareTest(counts);

To perform a chi-square test of independence with alpha significance level (equivalently, 100 * (1 - alpha)% confidence), where 0 < alpha < 1, use:

  testStatistic.chiSquareTest(counts, alpha);

The boolean value returned will be true iff the null hypothesis can be rejected with confidence 1 - alpha.
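
For illustration, here is a short sketch of the independence test using a hypothetical counts table (the numbers below are made up for the example):

  ChiSquareTestImpl testStatistic = new ChiSquareTestImpl();
  long[][] counts = { {40, 22, 43}, {91, 21, 28}, {60, 10, 22} };
  System.out.println(testStatistic.chiSquare(counts));           // chi-square statistic for the table
  System.out.println(testStatistic.chiSquareTest(counts));       // p-value of the independence test
  System.out.println(testStatistic.chiSquareTest(counts, 0.05)); // true iff independence is rejected at the 5% level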