(Basic Statistics for Citizen Data Scientist)
Single Sample Hypothesis Testing
Suppose we take a sample of size n from a normal population N(μ, σ) and ask whether the sample mean differs significantly from the overall population mean.
This is equivalent to testing the following null hypothesis:

H0: the sample is drawn from a population with mean μ (i.e. any difference between x̄ and μ is due to chance)
We use a two-tailed hypothesis, although sometimes a one-tailed hypothesis is preferred (see the examples below). By Theorem 1 of Basic Concepts of Sampling Distributions, the sample mean has the normal distribution N(μ, σ/√n).
We can use this fact directly to test the null hypothesis or employ the following test statistic (i.e. the z-score):

z = (x̄ − μ) / (σ/√n)
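As an illustrative sketch (not part of the original Excel-based article), this test statistic can be computed with Python's standard library:

```python
from math import sqrt

def z_score(xbar, mu, sigma, n):
    """z test statistic for a single-sample test of the mean:
    z = (xbar - mu) / (sigma / sqrt(n))."""
    return (xbar - mu) / (sigma / sqrt(n))

# Sample mean 75 from a N(80, 20) population, sample size 60
print(round(z_score(75, 80, 20, 60), 2))   # -1.94
```

The same value appears in the worked example below, where it is compared against a critical z-value.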
Example 1: National norms for a school mathematics proficiency exam are distributed N(80,20). A random sample of 60 students from New York City is taken showing a mean proficiency score of 75. Do these sample scores differ significantly from the overall population mean?
We would like to show that any deviation from the expected value of 80 for the sample mean is due to chance. We consider three approaches, each based on a different initial hypothesis.
Approach 1: Suppose that before any data were collected we had postulated that this particular sample would have a mean lower than the population mean. This gives the one-tailed null hypothesis

H0: μ ≥ 80

Note that we have stated the null hypothesis in a form that we want to reject; i.e. we are hoping to prove the alternative hypothesis

H1: μ < 80
The distribution of the sample mean is N(μ, σ/√n) where μ = 80, σ = 20 and n = 60. Since the standard error is σ/√n = 20/√60 = 2.58, the distribution of the sample mean is N(80, 2.58). The critical region is the left tail, representing α = 5% of the distribution. We now test to see whether x̄ is in the critical region.
critical value (left tail) = NORMINV(α, μ, σ/√n) = NORMINV(.05, 80, 2.58) = 75.75
Since x̄ = 75 < 75.75 = critical value, the sample mean lies in the critical region, and so we reject the null hypothesis.
Alternatively, we can test to see whether the p-value is less than α, namely
p-value = NORMDIST(x̄, μ, σ/√n, TRUE) = NORMDIST(75, 80, 2.58, TRUE) = .0264
Since p-value = .0264 < .05 = α, we again reject the null hypothesis.
Another approach for arriving at the same conclusion is to use the z-score

z = (x̄ − μ) / (σ/√n) = (75 − 80) / 2.58 = −1.94
Based on either of the following tests, we again reject the null hypothesis:
p-value = NORMSDIST(z) = NORMSDIST(-1.94) = .0264 < .05 = α
zcrit = NORMSINV(α) = NORMSINV(.05) = -1.64 > -1.94 = zobs
The conclusion from all these approaches is that the sample has significantly lower scores than the general population.
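The Excel formulas above can be mirrored with Python's standard-library NormalDist; the following is a minimal sketch of Approach 1 (the variable names are my own, not from the article):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, n, xbar, alpha = 80, 20, 60, 75, 0.05
se = sigma / sqrt(n)                 # standard error of the mean, about 2.58
dist = NormalDist(mu, se)            # sampling distribution of the sample mean

crit = dist.inv_cdf(alpha)           # left-tail critical value, cf. NORMINV(.05, 80, 2.58)
p_value = dist.cdf(xbar)             # left-tail p-value, cf. NORMDIST(75, 80, 2.58, TRUE)
print(round(crit, 2), round(p_value, 4))   # 75.75 0.0264
print(p_value < alpha)                     # True -> reject H0
```

Both checks agree: x̄ = 75 falls below the critical value, and the p-value falls below α.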
Approach 2: Suppose that before any data were collected we had postulated that the sample mean would be higher than the population mean. This gives the one-tailed null hypothesis H0: μ ≤ 80, with alternative hypothesis H1: μ > 80.
This time, the critical region is the right tail, representing α = 5% of the distribution. We can now run any of the following four tests:
p-value = 1 – NORMDIST(75, 80, 2.58, TRUE) = 1 – .0264 = .9736 > .05 = α
x̄crit = NORMINV(.95, 80, 2.58) = 84.25 > 75 = x̄obs
p-value = 1 – NORMSDIST(-1.94) = 1 – .0264 = .9736 > .05 = α
zcrit = NORMSINV(.95) = 1.64 > -1.94 = zobs
We retain the null hypothesis and conclude that we do not have enough evidence to claim that the sample mean is higher than the population mean.
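Approach 2 can be sketched the same way, now with the critical region in the right tail (again an illustrative translation of the Excel formulas, not the article's own code):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, n, xbar, alpha = 80, 20, 60, 75, 0.05
dist = NormalDist(mu, sigma / sqrt(n))   # sampling distribution of the sample mean

p_value = 1 - dist.cdf(xbar)             # right-tail p-value, cf. 1 - NORMDIST(...)
crit = dist.inv_cdf(1 - alpha)           # right-tail critical value, cf. NORMINV(.95, 80, 2.58)
print(round(p_value, 4), round(crit, 2))   # 0.9736 84.25
print(p_value < alpha)                     # False -> retain H0
```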
Approach 3: Suppose that before any data were collected we had postulated that a particular sample would have a mean different from the population mean. This gives the two-tailed null hypothesis H0: μ = 80.
Here we are testing to see whether the sample mean is significantly higher or lower than the population mean (alternative hypothesis H1: μ ≠ 80).
This time, the critical region is a combination of the left tail representing α/2 = 2.5% of the distribution, plus the right tail representing α/2 = 2.5% of the distribution. Once again we test to see whether x̄ is in the critical region, in which case we reject the null hypothesis.
Due to the symmetry of the normal distribution, the p-value is

p-value = 2 · P(x̄ ≤ 75)
Thus testing whether p-value < α is equivalent to testing whether
P(x̄ < 75) = NORMDIST(75, 80, 2.58, TRUE) < α/2
Since
NORMDIST(75, 80, 2.58, TRUE) = .0264 > .025 = α/2
we cannot reject the null hypothesis. We can reach the same conclusion as follows:
x̄crit-left = NORMINV(.025, 80, 2.58) = 74.94 < 75 = x̄obs
If the sample mean had been x̄obs = 85 instead, then this test would become
x̄crit-right = NORMINV(.975, 80, 2.58) = 85.06 > 85 = x̄obs
In either case, x̄obs would lie just outside the critical region, and so we would retain the null hypothesis. Finally, we can reach the same conclusion by testing the z-score as follows:
NORMSDIST(-1.94) = .0264 > .025 = α/2
|zobs| = 1.94 < 1.96 = NORMSINV(.975) = |zcrit|
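The two-tailed test of Approach 3 can likewise be sketched in Python (an illustrative translation, doubling the smaller tail probability as described above):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, n, xbar, alpha = 80, 20, 60, 75, 0.05
dist = NormalDist(mu, sigma / sqrt(n))   # sampling distribution of the sample mean

# Two-tailed p-value: twice the smaller of the two tail probabilities
p_value = 2 * min(dist.cdf(xbar), 1 - dist.cdf(xbar))
crit_left = dist.inv_cdf(alpha / 2)       # cf. NORMINV(.025, 80, 2.58)
crit_right = dist.inv_cdf(1 - alpha / 2)  # cf. NORMINV(.975, 80, 2.58)
print(round(p_value, 4))                           # 0.0528
print(round(crit_left, 2), round(crit_right, 2))   # 74.94 85.06
print(p_value < alpha)                             # False -> retain H0
```

Note how the same x̄ = 75 that was significant in the one-tailed test of Approach 1 is not significant here, because the two-tailed test splits α across both tails.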
Example 2: Suppose that in the previous example we took a larger sample of 100 students and once again the sample mean was 75. Repeat the two-tailed test.
This time the standard error is σ/√n = 20/√100 = 2, and so
NORMDIST(75, 80, 2, TRUE) = .006 < .025 = α/2.
This time we reject the null hypothesis.
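The effect of the larger sample is that the standard error shrinks from 2.58 to 2, pulling the p-value below α. A short sketch comparing the two sample sizes (illustrative, with my own variable names):

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, xbar, alpha = 80, 20, 75, 0.05

for n in (60, 100):
    se = sigma / sqrt(n)
    dist = NormalDist(mu, se)
    p_value = 2 * dist.cdf(xbar)   # two-tailed p-value (xbar lies below mu)
    print(n, round(se, 2), round(p_value, 4), p_value < alpha)
# 60 2.58 0.0528 False
# 100 2.0 0.0124 True
```

The same sample mean of 75 thus retains H0 at n = 60 but rejects it at n = 100.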