How to determine the probability distribution function from a numpy array?

Assuming you have used the test correctly, my guess is that you have a small deviation from a normal distribution, and because your sample size is so large, even small deviations will lead to a rejection of the null hypothesis of normality.

One possibility is to inspect your data visually by plotting a normalized (density) histogram with a large number of bins together with the normal pdf evaluated with loc=data.mean() and scale=data.std().
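A minimal sketch of that plot, assuming `data` is your 1-D numpy array (the synthetic sample below is only a stand-in):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # stand-in for your array

    # density histogram of the data
    plt.hist(data, bins=100, density=True, alpha=0.5, label="data")

    # normal pdf with the sample mean and standard deviation
    x = np.linspace(data.min(), data.max(), 500)
    plt.plot(x, stats.norm.pdf(x, loc=data.mean(), scale=data.std()),
             "r-", lw=2, label="normal pdf")
    plt.legend()
    plt.show()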

There are alternative tests for normality: statsmodels has Anderson-Darling and Lilliefors (Kolmogorov-Smirnov) tests for the case when the distribution parameters are estimated from the data.
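A small sketch of both, using the functions in statsmodels.stats.diagnostic (again with a synthetic stand-in for your array):

    import numpy as np
    from statsmodels.stats.diagnostic import normal_ad, lilliefors

    rng = np.random.default_rng(0)
    data = rng.normal(size=100_000)  # stand-in for your array

    ad_stat, ad_pval = normal_ad(data)                 # Anderson-Darling, params estimated
    ks_stat, ks_pval = lilliefors(data, dist="norm")   # Lilliefors (KS with estimated params)
    print("Anderson-Darling: stat=%.4f  p=%.4f" % (ad_stat, ad_pval))
    print("Lilliefors:       stat=%.4f  p=%.4f" % (ks_stat, ks_pval))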

However, I expect that the results will not differ much given the large sample size.

The main question is whether you want to test whether your sample comes "exactly" from a normal distribution, or whether you are just interested in whether your sample comes from a distribution that is very close to the normal distribution, close in terms of practical usage.

To elaborate on the last point:

http://jpktd.blogspot.ca/2012/10/tost-statistically-significant.html

http://www.graphpad.com/guides/prism/6/statistics/index.htm?testing_for_equivalence2.htm

As the sample size increases, a hypothesis test gains power, which means that it will be able to reject the null hypothesis of equality even for smaller and smaller differences. If we keep our significance level fixed, then eventually we will reject tiny differences that we don't really care about.
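A quick illustration of this effect (the mixture used to create a "tiny" deviation is made up for the example):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def slightly_non_normal(n):
        # mostly N(0, 1) with a 5% admixture of a slightly wider component
        wide = rng.random(n) < 0.05
        return np.where(wide, rng.normal(0, 1.5, n), rng.normal(0, 1.0, n))

    # the same small deviation: not rejected for small n, rejected for large n
    for n in (100, 1_000, 100_000):
        stat, pval = stats.normaltest(slightly_non_normal(n))
        print("n=%7d  p-value=%.4g" % (n, pval))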

An alternative type of hypothesis test is one where we want to show that our sample is close to the given point hypothesis, for example that two samples have almost the same mean. The problem is that we have to define what our equivalence region is.
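For the "almost the same mean" case, statsmodels has TOST-style equivalence tests; a sketch, where the equivalence margin (-0.1, 0.1) is an arbitrary choice you would have to justify:

    import numpy as np
    from statsmodels.stats.weightstats import ttost_ind

    rng = np.random.default_rng(0)
    x1 = rng.normal(0.00, 1.0, size=5_000)
    x2 = rng.normal(0.02, 1.0, size=5_000)   # means differ by 0.02, inside the margin

    # TOST: two one-sided tests against the equivalence bounds low and upp
    pval, lower, upper = ttost_ind(x1, x2, low=-0.1, upp=0.1)
    print("TOST p-value: %.4g  (small p => equivalent within +/- 0.1)" % pval)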

In the case of goodness of fit tests we need to choose a distance measure and define a threshold for the distance measure between the sample and the hypothesized distribution. I have not found any explanation where intuition would help to choose this distance threshold.
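As a sketch of what such a check could look like, using the KS statistic as the distance measure and a threshold that is, as said above, essentially arbitrary:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=100_000)  # stand-in for your array

    # KS distance between the sample and a normal with estimated loc/scale
    ks_stat, _ = stats.kstest(data, "norm", args=(data.mean(), data.std()))

    threshold = 0.01  # "close enough" - there is no standard way to pick this
    print("KS distance = %.4f, within threshold: %s" % (ks_stat, ks_stat <= threshold))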

The tests differ in the distance measure they use:

- stats.normaltest is based on deviations of the sample skew and kurtosis from those of the normal distribution.
- Anderson-Darling is based on an integral of the weighted squared differences between the empirical and hypothesized cdfs.
- Kolmogorov-Smirnov is based on the maximum absolute difference between the empirical and hypothesized cdfs.
- chisquare for binned data is based on the weighted sum of squared differences between observed and expected bin probabilities (see the sketch after this list).
- and so on.
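A rough sketch of the binned chi-square version, with mean and standard deviation estimated from the data (the choice of 20 bins is arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=10_000)  # stand-in for your array

    # observed counts in 20 equal-width bins
    edges = np.linspace(data.min(), data.max(), 21)
    observed, _ = np.histogram(data, bins=edges)

    # expected counts under a normal with the sample mean and std, rescaled
    # so that the totals match
    cdf = stats.norm.cdf(edges, loc=data.mean(), scale=data.std())
    expected = observed.sum() * np.diff(cdf) / np.diff(cdf).sum()

    # ddof=2 because two parameters (mean, std) were estimated
    stat, pval = stats.chisquare(observed, expected, ddof=2)
    print("chi-square: stat=%.2f  p=%.4f" % (stat, pval))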

I have only ever tried equivalence testing with binned or discretized data, where I used a threshold taken from some reference cases, which was still rather arbitrary.

In medical equivalence testing there are some predefined standards that specify when two treatments can be considered equivalent, or similarly inferior or superior in the one-sided versions.