# Basic of Statistical Inference Part-IV: An Overview of Hypothesis Testing

In this series we cover the basics of statistical inference. This is the fourth part of our discussion, in which we explain the concept of hypothesis testing, a statistical technique. You can also check out the third part of the series here.

#### Introduction

The objective of sampling is to study the features of the population on the basis of sample observations. A carefully selected sample is expected to reveal these features, and hence we shall infer about the population from a statistical analysis of the sample. This process is known as Statistical Inference.

There are two types of problems. First, we may have no information at all about some characteristics of the population, especially the values of the parameters involved in the distribution, and it is required to obtain estimates of these parameters. This is the problem of Estimation. Second, some information or hypothetical values of the parameters may be available, and it is required to test how far the hypothesis is tenable in the light of the information provided by the sample. This is the problem of Test of Hypothesis, or Test of Significance.

In many practical problems, statisticians are called upon to make decisions about a population on the basis of sample observations. For example, given a random sample, it may be required to decide whether the population from which the sample has been obtained follows a normal distribution with mean = 40 and s.d. = 3 or not. In attempting to reach such decisions, it is necessary to make certain assumptions or guesses about the characteristics of the population, particularly about its probability distribution or the values of its parameters. Such an assumption or statement about the population is called a Statistical Hypothesis. The validity of a hypothesis is tested by analyzing the sample. The procedure which enables us to decide whether a certain hypothesis is true or not is called a Test of Significance or Test of Hypothesis.

#### What Is Testing Of Hypothesis?

Statistical Hypothesis

A hypothesis is a statistical statement or a conjecture about the value of a parameter. The basic hypothesis being tested is called the null hypothesis. It is sometimes regarded as representing the current state of knowledge and belief about the value being tested. In a test, the null hypothesis is set against an alternative hypothesis, denoted by H1. When a hypothesis completely specifies the distribution it is called a simple hypothesis; when some parameters of the distribution remain unspecified, it is known as a composite hypothesis.

Testing Of Hypothesis

The entire process of statistical inference is mainly inductive in nature, i.e., it is based on deciding the characteristics of the population on the basis of a sample study. Such a decision always involves an element of risk, i.e., the risk of taking a wrong decision. It is here that the modern theory of probability plays a vital role, and the statistical technique that helps us arrive at a criterion for such decisions is known as the testing of hypothesis.

Testing Of Statistical Hypothesis

A test of a statistical hypothesis is a two-action decision made after observing a random sample from the given population, the two actions being the acceptance or rejection of the hypothesis under consideration. A test is therefore a rule which divides the entire sample space into two subsets:

1. A region in which the data are consistent with H0.
2. Its complement, in which the data are inconsistent with H0.

The actual decision is, however, based on the value of a suitable function of the data, the test statistic. The set of all values of the test statistic which are consistent with H0 is the acceptance region, and the set of all values which are inconsistent with H0 is called the critical region. One important condition for the efficient working of a test statistic is that its distribution must be specified.

Does the acceptance of a statistical hypothesis necessarily imply that it is true?

The truth or falsity of a statistical hypothesis is judged from the information contained in the sample. The rejection or acceptance of the hypothesis is contingent on the consistency or inconsistency of H0 with the sample observations. It should therefore be clearly borne in mind that the acceptance of a statistical hypothesis only means that the sample provides insufficient evidence to reject it; it doesn't necessarily imply that the hypothesis is true.

#### Elements: Null Hypothesis, Alternative Hypothesis, Power Of Test

Null Hypothesis

A null hypothesis is a hypothesis that says there is no statistical significance between the two variables in the hypothesis, i.e., no difference between certain characteristics of a population. It is denoted by the symbol H0. For example, if the null hypothesis is that the population mean is 40, we write

H0: μ = 40

Let us suppose that two different concerns manufacture drugs for inducing sleep: drug A manufactured by the first concern and drug B by the second. Each company claims that its drug is superior to that of the other, and it is desired to test which is the superior drug, A or B. To formulate the statistical hypothesis, let X be a random variable denoting the additional hours of sleep gained by an individual when drug A is given, and let the random variable Y denote the additional hours of sleep gained when drug B is used. Suppose that X and Y follow probability distributions with means μX and μY respectively.

Here our null hypothesis would be that there is no difference between the effects of the two drugs. Symbolically,

π»0: ππ = ππ

Alternative Hypothesis

A statistical hypothesis which differs from the null hypothesis is called an Alternative Hypothesis, and is denoted by H1. The alternative hypothesis is not itself tested; its acceptance (rejection) depends on the rejection (acceptance) of the null hypothesis. The alternative hypothesis contradicts the null hypothesis. The choice of an appropriate critical region depends on the type of alternative hypothesis: whether it is both-sided, one-sided (right or left) or a specified alternative.

For example, in the drugs problem, the alternative hypothesis could be

H1: μX ≠ μY

Power Of Test

The null hypothesis π»0 π = π0 is accepted when the observed value of test statistic lies the critical region, as determined by the test procedure. Suppose that the true value of π is not π0, but another value π1, i.e. a specified alternative hypothesis π»1 π = π1 is true. Type II error is committed if π»0 is not rejected, i.e. the test statistic lies outside the critical region. Hence the probability of Type II error is a function of π1, because now π = π1 is assumed to be true. If π½ π1 denotes the probability of Type II error, when π = π1 is true, the complementary probability 1 β π½ π1 is called power of the test against the specified alternative π»1 π = π1 . Power = 1-Probability of Type II error=Probability of rejection π»0 when π»1 is true Obviously, we could like a test to be as βpowerfulβ as possible for all critical regions of the same size. Treated as a function of π, the expression of π π = 1 β π½ π is called Power Function of the test for π0 against π. the curve obtained by plotting P(π) against all possible values of π, is known as Power Curve.

#### Elements: Type I & Type II Error

Type I Error & Type II Error

The procedure of testing a statistical hypothesis does not guarantee that all decisions are perfectly accurate. At times, the test may lead to erroneous conclusions. This is so because the decision is taken on the basis of sample values, which are themselves fluctuating and depend purely on chance. The errors in statistical decisions are of two types:

1. Type I Error – This is the error committed by the test in rejecting a true null hypothesis.
2. Type II Error – This is the error committed by the test in accepting a false null hypothesis.

Consider the hypothesis that the population mean is 40, i.e., H0: μ = 40, and let us imagine that we have a random sample from a population whose mean is really 40. If we apply the test for H0: μ = 40, we might find that the value of the test statistic lies in the critical region, leading to the conclusion that the population mean is not 40; i.e., the test rejects the null hypothesis although it is true. We have then committed what is known as a "Type I error" or "Error of the first kind". On the other hand, suppose that we have a random sample from a population whose mean is known to be different from 40, say 43. If we apply the test for H0: μ = 40, the value of the statistic may, by chance, lie in the acceptance region, leading to the conclusion that the mean may be 40; i.e., the test does not reject the null hypothesis H0: μ = 40 although it is false. This is another form of incorrect decision, and the error thus committed is known as a "Type II error" or "Error of the second kind".
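A small simulation illustrates the two error types for this example. The values σ = 3, n = 25 and the two-tailed 5% test are assumed purely for illustration:

```python
import math
import random

random.seed(0)  # reproducible illustration

def z_stat(sample, mu0, sigma):
    """Large-sample test statistic for H0: mu = mu0 with known sigma."""
    n = len(sample)
    return (sum(sample) / n - mu0) / (sigma / math.sqrt(n))

def rejection_rate(true_mu, mu0=40, sigma=3, n=25, trials=20000):
    """Share of samples for which the two-tailed 5% test rejects H0: mu = mu0."""
    rejections = sum(
        1
        for _ in range(trials)
        if abs(z_stat([random.gauss(true_mu, sigma) for _ in range(n)],
                      mu0, sigma)) >= 1.96
    )
    return rejections / trials

type1 = rejection_rate(true_mu=40)       # H0 true: rate of (wrong) rejection
type2 = 1 - rejection_rate(true_mu=43)   # H0 false: rate of (wrong) acceptance
print(round(type1, 3), round(type2, 3))
```

The simulated Type I error rate comes out close to the 5% level of significance, while the Type II error rate is tiny here because a true mean of 43 is far from 40 relative to the standard error.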

Using the sampling distribution of the test statistic, we can measure in advance the probabilities of committing the two types of error. Since the null hypothesis is rejected only when the test statistic falls in the critical region,

Probability of Type I error = Probability of rejecting H0: θ = θ0, when it is true
= Probability that the test statistic lies in the critical region, assuming θ = θ0.

The probability of Type I error must not exceed the level of significance (α) of the test:

Probability of Type I error ≤ Level of significance

The probability of Type II error assumes different values for the different values of θ covered by the alternative hypothesis H1. Since the null hypothesis is accepted only when the observed value of the test statistic lies outside the critical region,

Probability of Type II error (when θ = θ1)
= Probability of accepting H0: θ = θ0, when it is false
= Probability that the test statistic lies in the region of acceptance, assuming θ = θ1.

The probability of Type I error is necessary for constructing a test of significance; it is in fact the "size of the critical region". The probability of Type II error is used to measure the "power" of the test in detecting falsity of the null hypothesis. When the population has a continuous distribution,

Probability of Type I error
= Level of significance
= Size of critical region

#### Elements: Level Of Significance & Critical Region

Level Of Significance And Critical Region

The decision about rejection or otherwise of the null hypothesis is based on probability considerations. Assuming the null hypothesis to be true, we calculate the probability of obtaining a difference equal to or greater than the observed difference. If this probability is found to be small, say less than .05, the conclusion is that the observed value of the statistic is rather unusual and has arisen because the underlying assumption (i.e., the null hypothesis) is not true. We say that the observed difference is significant at the 5 per cent level, and hence the null hypothesis is "rejected" at the 5 per cent level of significance. If, however, this probability is not very small, say more than .05, the observed difference cannot be considered unusual and is attributed to sampling fluctuation only. The difference is now said to be not significant at the 5 per cent level, and we conclude that there is no reason to reject the null hypothesis at the 5 per cent level of significance. It has become customary to use the 5% and 1% levels of significance, although other levels, such as 2% or 10%, may also be used.

Without actually calculating this probability, the test of significance may be simplified as follows. From the sampling distribution of the statistic, we find the maximum difference which is exceeded in (say) 5 per cent of cases. If the observed difference is larger than this value, the null hypothesis is rejected; if it is less, there is no reason to reject the null hypothesis.

Suppose the sampling distribution of the statistic is a normal distribution. Since the area under the normal curve outside the ordinates at mean ± 1.96 (S.E.) is only 5%, the probability that the observed value of the statistic differs from the expected value by 1.96 times the S.E. or more is .05, and the probability of a larger difference will be still smaller. If, therefore,

z = (observed value − expected value) / S.E.

is either greater than 1.96 or less than −1.96 (i.e., numerically greater than 1.96), the null hypothesis H0 is rejected at the 5% level of significance. The set of values z ≥ 1.96 or z ≤ −1.96, i.e.

|z| ≥ 1.96

constitutes what is called the Critical Region for the test. Similarly, since the area outside mean ± 2.58 (S.E.) is only 1%, H0 is rejected at the 1% level of significance if z numerically exceeds 2.58, i.e., the critical region is |z| ≥ 2.58 at the 1% level. Using the sampling distribution of an appropriate test statistic, we are able to establish the maximum difference, at a specified level, between the observed and expected values that is consistent with the null hypothesis H0. The set of values of the test statistic corresponding to this difference which lead to the acceptance of H0 is called the Region of Acceptance. Conversely, the set of values of the statistic leading to the rejection of H0 is referred to as the Region of Rejection or "Critical Region" of the test. The value of the statistic which lies at the boundary of the regions of acceptance and rejection is called the Critical Value. When the null hypothesis is true, the probability of the observed value of the test statistic falling in the critical region is often called the "Size of the Critical Region".

πππ§π ππ πΆπππ‘ππππ ππππππ β€ πΏππ£ππ ππ ππππππππππππ

However, for a continuous population, the critical region is so determined that its size equals the Level of Significance (πΌ).
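Equivalently, instead of comparing z with the critical value directly, one can compute the probability of a difference at least as large as the one observed and compare it with α. A minimal sketch, with z = 2.1 as an assumed observed value:

```python
from statistics import NormalDist

def two_tailed_test(z, alpha=0.05):
    """Return (p-value, reject?) for an observed standard normal statistic z."""
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value, p_value < alpha

p, reject = two_tailed_test(2.1)
print(round(p, 4), reject)
```

Here the probability works out to about .036, so H0 is rejected at the 5% level but would not be rejected at the 1% level.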

#### Two-Tailed And One-Tailed Tests

Our discussion above centered on testing the significance of the "difference" between the observed and expected values, i.e., whether the observed value is significantly different from (either larger or smaller than) the expected value, as could arise due to fluctuations of random sampling. In the illustration, the null hypothesis is tested against the "both-sided alternative" (μ > 40 or μ < 40), i.e.

H0: μ = 40 against H1: μ ≠ 40

Thus assuming π»0 to be true, we would be looking for large differences on both sides of the expected value, i.e. in βboth tailsβ of the distribution. Such tests are, therefore, called βTwo-tailed testsβ.

Sometimes we are interested in tests for large differences on one side only, i.e., in one "tail" of the distribution. For example, we may ask whether a change in the production technique yields bricks with a "higher" breaking strength, or whether a change in the production technique yields a "lower" percentage of defectives. These are known as "One-tailed tests".

For testing the null hypothesis against the "one-sided alternative (right side)" μ > 40, i.e.

H0: μ = 40 against H1: μ > 40

the calculated value of the statistic z is compared with 1.645, since 5% of the area under the standard normal curve lies to the right of 1.645. If the observed value of z exceeds 1.645, the null hypothesis H0 is rejected at the 5% level of significance. If a 1% level were used, we would replace 1.645 by 2.33. Thus the critical regions for tests at the 5% and 1% levels are z ≥ 1.645 and z ≥ 2.33 respectively.

For testing the null hypothesis against the "one-sided alternative (left side)" μ < 40, i.e.

H0: μ = 40 against H1: μ < 40

the value of z is compared with −1.645 for significance at the 5% level, and with −2.33 for significance at the 1% level. The critical regions are now z ≤ −1.645 and z ≤ −2.33 for the 5% and 1% levels respectively. In fact, the sampling distributions of many commonly-used statistics can be approximated by normal distributions as the sample size increases, so that these rules are applicable in most cases when the sample size is "large", say more than 30. It is evident that the same null hypothesis may be tested against alternative hypotheses of different types depending on the nature of the problem. Correspondingly, the type of test and the critical region associated with each test will also be different.
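The critical values quoted above (1.645 and 2.33 for one-tailed tests; 1.96 and 2.58 for two-tailed tests) all come from the standard normal distribution, and can be recovered from its inverse c.d.f. with Python's standard library:

```python
from statistics import NormalDist

std = NormalDist()  # standard normal distribution

# One-tailed tests put the whole of alpha in one tail
z_05 = std.inv_cdf(0.95)       # ~1.645 (5% level)
z_01 = std.inv_cdf(0.99)       # ~2.326, quoted as 2.33 in the text (1% level)

# Two-tailed tests put alpha/2 in each tail
z_05_two = std.inv_cdf(0.975)  # ~1.960 (5% level)
z_01_two = std.inv_cdf(0.995)  # ~2.576, quoted as 2.58 in the text (1% level)

print(round(z_05, 3), round(z_01, 3), round(z_05_two, 3), round(z_01_two, 3))
```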

#### Solving Testing Of Hypothesis Problem

Step 1
Set up the "Null Hypothesis" H0 and the "Alternative Hypothesis" H1 on the basis of the given problem. The null hypothesis usually specifies the value of some parameter involved in the population: H0: θ = θ0. The alternative hypothesis may be any one of the following types: H1: θ ≠ θ0, H1: θ > θ0, H1: θ < θ0. The type of alternative hypothesis determines whether to use a two-tailed or one-tailed test (right or left tail).

Step 2

State the appropriate "test statistic" T and its sampling distribution when the null hypothesis is true. In large-sample tests, the statistic z = (T − θ0)/S.E.(T), which approximately follows the Standard Normal distribution, is often used. In small-sample tests, the population is assumed to be Normal and various test statistics are used which follow the Standard Normal, Chi-square, t or F distribution exactly.

Step 3
Select the "level of significance" α of the test, if it is not specified in the given problem. This represents the maximum probability of committing a Type I error, i.e., of making a wrong decision by the test procedure when in fact the null hypothesis is true. Usually, a 5% or 1% level of significance is used (if nothing is mentioned, use the 5% level).

Step 4

Find the "Critical Region" of the test at the chosen level of significance. This represents the set of values of the test statistic which lead to rejection of the null hypothesis. The critical region always appears in one or both tails of the distribution, depending on whether the alternative hypothesis is one-sided or both-sided. The area in the tail(s) must be equal to the level of significance α. For a one-tailed test, α appears in one tail; for a two-tailed test, α/2 appears in each tail of the distribution. The critical region is

T ≥ Tα (right tail), T ≤ −Tα (left tail), or |T| ≥ Tα/2 (both tails),

where Tα is the value of T such that the area to its right is α.

Step 5

Compute the value of the test statistic T on the basis of the sample data, under the null hypothesis. In large-sample tests, if some parameters remain unknown, they should be estimated from the sample.
Step 6

If the computed value of the test statistic T lies in the critical region, "reject H0"; otherwise, "do not reject H0". The decision regarding rejection or otherwise of H0 is made by comparing the computed value of T with the critical value (i.e., the boundary value of the appropriate critical region).

Step 7
Write the conclusion in plain, non-technical language. If H0 is rejected, the interpretation is: "the data are not consistent with the assumption that the null hypothesis is true and hence H0 is not tenable". If H0 is not rejected, "the data do not provide any evidence against the null hypothesis and hence H0 may be accepted as true". The conclusion should preferably be given in the words stated in the problem.
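The seven steps can be traced on a toy problem. The sample figures below (n = 36, observed mean 41.5, known σ = 3) are assumed purely for illustration:

```python
import math

# Step 1: H0: mu = 40 against H1: mu != 40 (two-tailed test)
mu0, sigma = 40.0, 3.0
n, x_bar = 36, 41.5            # assumed sample size and observed sample mean
alpha, crit = 0.05, 1.96       # Steps 3-4: 5% level; two-tailed critical value

# Steps 2 and 5: large-sample test statistic z = (x_bar - mu0) / (sigma / sqrt(n))
z = (x_bar - mu0) / (sigma / math.sqrt(n))

# Step 6: compare with the critical region |z| >= 1.96
reject = abs(z) >= crit

# Step 7: state the conclusion in plain language
print(f"z = {z:.2f}: " + ("reject H0" if reject else "do not reject H0"))
```

Here z = 3.00 lies in the critical region, so the data are not consistent with the assumption that the population mean is 40.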

#### Conclusion

A hypothesis is a statistical statement or a conjecture about the value of a parameter. The legal concept that one is innocent until proven guilty has an analogous use in the world of statistics. In devising a test, statisticians do not attempt to prove that a particular statement or hypothesis is true. Instead, they assume that the hypothesis is incorrect (like not guilty), and then work to find statistical evidence that would allow them to overturn that assumption. In statistics this process is referred to as hypothesis testing, and it is often used to test the relationship between two variables. A hypothesis makes a prediction about some relationship of interest. Then, based on actual data and a pre-selected level of statistical significance, that hypothesis is either accepted or rejected. There are some elements of hypothesis testing, such as the null hypothesis, alternative hypothesis, Type I and Type II errors, level of significance, critical region and power of the test, and some procedures, such as one- and two-tailed tests for finding the critical region, that help us reach the final conclusion.

A null hypothesis is a hypothesis that says there is no statistical significance between the two variables in the hypothesis, i.e., no difference between certain characteristics of a population; it is denoted by the symbol H0. A statistical hypothesis which differs from the null hypothesis is called an Alternative Hypothesis, and is denoted by H1. The procedure of testing a statistical hypothesis does not guarantee that all decisions are perfectly accurate; at times, the test may lead to erroneous conclusions, because the decision is taken on the basis of sample values, which are themselves fluctuating and depend purely on chance. These are the two types of error. Hypothesis testing is a very important part of statistical analysis, and with its help many business problems can be solved accurately.

That was the fourth part of the series, which explained hypothesis testing and hopefully clarified your notion of it by discussing each of its crucial aspects. You can find more informative posts like this one on Data Science course topics. Just keep following the Dexlab Analytics blog to stay informed.
