One-Way ANOVA/F-Test About Multiple µ's, Dr. Usip, Economics

One-Way Analysis of Variance (ANOVA): An F-Test

Problem Description and Data

The problem is to determine whether 3 methods of teaching a statistics course differ in effectiveness as measured by students' scores on the final examination.

Method 1. The lecturer neither works out nor assigns problems.

Method 2. The lecturer works out and assigns problems.

Method 3. The lecturer works out and assigns problems. Students are also required to carry out projects that entail the use of the techniques as they are covered in class.

The research questions are: Does a significant difference exist among the mean scores (µj) of the three sub-populations (j = 1, 2, 3)? If it does, which sub-population/teaching method/treatment produces the highest score, on average?

The same professor teaches 3 different sections of students, using one of the 3 methods in each class. All of the students are sophomores at the same university and are randomly assigned to the 3 sections. There are only 12 students in the experiment, 4 in each of the 3 sections. This problem is adapted from Hamburg et al., 1994, p. 400.

The data matrix for the students' scores in the final examination is as follows:

Student    Method 1    Method 2    Method 3
  1           16          19          24
  2           21          20          21
  3           18          21          22
  4           13          20          25
Total         68          80          92

SPSS/win Data Entry Format:

Method                        Score
1                                16
1                                21
1                                18
1                                13
2                                19
2                                20
2                                21
2                                20
3                                24
3                                21
3                                22
3                                25
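
The same long-format (Factor, Response) layout can be reproduced outside SPSS. Here is a minimal Python sketch (the variable names are my own, not SPSS terminology) that rebuilds the 3-by-4 data matrix from the two columns and checks the column totals:

```python
# Long-format data entry: one row per student, mirroring the SPSS layout.
method = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]                # factor (Teaching Method)
score  = [16, 21, 18, 13, 19, 20, 21, 20, 24, 21, 22, 25]    # response (Score)

# Group the scores by method to recover the 3-by-4 data matrix.
groups = {m: [s for m2, s in zip(method, score) if m2 == m] for m in (1, 2, 3)}

for m, g in sorted(groups.items()):
    print(f"Method {m}: scores={g}, total={sum(g)}, mean={sum(g)/len(g):.2f}")
```

The printed totals (68, 80, 92) and means (17, 20, 23) match the data matrix above and the Descriptives table below.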

In SPSS/win, you must enter the data in two columns, Factor (Teaching Method) and Response (Score), even though the data matrix is a 3-by-4 table. Note that by viewing a student's score as depending on the specific teaching method employed, Factor is the independent variable (IV), with the factor levels/treatments as its values, while Response is the dependent variable (DV), with the students' scores as its values. The dichotomy between the DV and the IV will become more apparent when we examine the Regression Analysis method.

Use the following commands to declare the variable names and their labels:

Double-click on var in column one to open the Define Variable window; type method in the Variable Name box. Open the Type window and set Decimal Places to zero (i.e., type 0 to replace the default value of 2). Then open the Labels window and type Teaching Method in the Variable Label box. Click Continue, then OK, to return to the data entry screen.

To define the variable score, double-click on var in the second column. Repeat the above steps, but type score in the Variable Name box and Final Exam Score in the Variable Label box.

Note: Because the values are quantitative, the variable Type is automatically set to Numeric.

Execute the ANOVA command sequence as described in the previous page (provide link).
Select FILE/PRINT or the Printer Icon to send your output to the local printer.

Discussion of the Outputs and Testing Procedure
The Outputs:

Descriptives

Final Exam Score by Teaching Method

Teaching Method    N    Mean   Std. Deviation   Std. Error   95% CI for Mean (Lower, Upper)   Minimum   Maximum
1                  4   17.00        3.37            1.68            (11.64, 22.36)               13        21
2                  4   20.00         .82             .41            (18.70, 21.30)               19        21
3                  4   23.00        1.83             .91            (20.09, 25.91)               21        25
Total             12   20.00        3.28             .95            (17.92, 22.08)               13        25
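
Each row of the Descriptives table can be verified with Python's standard library; a short sketch, assuming the per-group 95% confidence interval uses the t-table value t.025 with n - 1 = 3 degrees of freedom (approximately 3.182):

```python
import statistics

groups = {1: [16, 21, 18, 13], 2: [19, 20, 21, 20], 3: [24, 21, 22, 25]}
t_025_3 = 3.182   # t-table value for a 95% CI with 3 df per group

for m, g in sorted(groups.items()):
    n = len(g)
    mean = statistics.mean(g)
    sd = statistics.stdev(g)        # sample standard deviation
    se = sd / n ** 0.5              # standard error of the mean
    lo, hi = mean - t_025_3 * se, mean + t_025_3 * se
    print(f"Method {m}: N={n} Mean={mean:.2f} SD={sd:.2f} SE={se:.2f} "
          f"95% CI=({lo:.2f}, {hi:.2f}) Min={min(g)} Max={max(g)}")
```

The printed values reproduce the table rows, e.g. Method 1: Mean 17.00, SD 3.37, SE 1.68, CI (11.64, 22.36).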

ANOVA

Final Exam Score

Source            Sum of Squares   df   Mean Square     F      Sig.
Between Groups        72.000        2      36.000      7.043   .014
Within Groups         46.000        9       5.111
Total                118.000       11

SPSS/win produces both the Descriptive Statistics table and the ANOVA table. As demonstrated in class, the descriptive table provides the necessary input for the individual formulas in the key identity TSS = BSS + WSS, upon which the ANOVA table is based. The ANOVA table itself summarizes all the essential statistics needed to carry out the test: the BSS = 72 and its associated degrees of freedom (v1) = 2; the WSS = 46 and its associated degrees of freedom (v2) = 9; the Mean Square Between treatments (MSB) = BSS/v1 = 36; the Mean Square Within treatments (MSW) = WSS/v2 = 5.111; and the computed/observed F value = (BSS/v1)/(WSS/v2) = 7.043. The computed probability value (sig) of .014 indicates the degree of significance of the test relative to the value of alpha (see the interpretation under the conclusion below).
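
As a check on the identity TSS = BSS + WSS, every entry in the ANOVA table can be recomputed from the raw scores in plain Python, with no SPSS required:

```python
# Recompute the ANOVA table entries from the raw data.
groups = {
    1: [16, 21, 18, 13],
    2: [19, 20, 21, 20],
    3: [24, 21, 22, 25],
}

all_scores = [s for g in groups.values() for s in g]
grand_mean = sum(all_scores) / len(all_scores)            # 20.0

# Between-groups sum of squares: n_j * (group mean - grand mean)^2.
bss = sum(len(g) * (sum(g)/len(g) - grand_mean) ** 2 for g in groups.values())

# Within-groups sum of squares: deviations of scores from their group mean.
wss = sum((s - sum(g)/len(g)) ** 2 for g in groups.values() for s in g)

# Total sum of squares: deviations of all scores from the grand mean.
tss = sum((s - grand_mean) ** 2 for s in all_scores)

v1, v2 = len(groups) - 1, len(all_scores) - len(groups)   # 2 and 9
msb, msw = bss / v1, wss / v2                             # 36.0 and 5.111...
f_stat = msb / msw

assert abs(tss - (bss + wss)) < 1e-9   # key identity: TSS = BSS + WSS
print(bss, wss, tss, round(f_stat, 3))                    # 72.0 46.0 118.0 7.043
```

These reproduce the SPSS output exactly: BSS = 72, WSS = 46, TSS = 118, F = 7.043.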

The Testing Procedure:
Step 1: State Ho and Ha such that they contradict each other completely or rather relate in a mutually exclusive manner. For this problem, this implies the following statements:
Ho: µ1 = µ2 = µ3
Ha : They are not all equal

The null hypothesis Ho says that the 3 teaching methods do not differ significantly in effect as measured by the students' average score in the final examination; stated otherwise, they are equally effective in enhancing the students' performance. The alternative hypothesis Ha says that the average scores are not all equal, and that the differences among them are due to the varying degrees of effectiveness of the treatments (teaching methods) applied.

Step 2: Specify the level of significance, which in this case is given to you (alpha = .05).

Step 3: Identify the test statistic and its sampling distribution. As explained in class and stated above, this test statistic is the ratio (BSS/v1)/(WSS/v2). In repeated sampling this random quantity follows a probability distribution called the F distribution that was developed by R. A. Fisher in 1924, and later named in his honor by G. W. Snedecor. This is why the ANOVA test is also referred to as an F test.

Step 4: In practice, this is the juncture that requires sampling from the target population using a completely randomized design (here, students randomly assigned to the 3 sections). Also, all the necessary computations using the computer and a statistical program (SPSS/win in this case) are carried out at this step.

Step 5: Specify the decision rule by relating the computed/observed F value (Fov) to the critical F value (Fcv) that you obtain from the F-table using the values of alpha, v1, and v2 as stated above. The rule is stated in the following manner: Reject Ho if Fov > Fcv; retain Ho otherwise. Note that Fcv = F.05, 2, 9 = 4.26.
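
Both the table value Fcv and the probability value (sig) can be reproduced in Python with scipy, assuming scipy is available (`scipy.stats.f` is the F distribution and `scipy.stats.f_oneway` performs the one-way ANOVA directly):

```python
from scipy.stats import f, f_oneway

scores = {
    1: [16, 21, 18, 13],
    2: [19, 20, 21, 20],
    3: [24, 21, 22, 25],
}

# Critical value F(.05; v1=2, v2=9), as read from the F-table.
f_cv = f.ppf(0.95, dfn=2, dfd=9)
print(round(f_cv, 2))                  # 4.26

# Observed F and its p-value (sig) straight from the raw data.
f_ov, p = f_oneway(scores[1], scores[2], scores[3])
print(round(f_ov, 3), round(p, 3))     # 7.043 0.014

# Decision rule: reject Ho since Fov > Fcv (equivalently, p < .05).
assert f_ov > f_cv and p < 0.05
```

Note that comparing Fov with Fcv and comparing the p-value with alpha are two equivalent ways of stating the same decision rule.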

Step 6: Draw valid statistical and administrative conclusions.
a) Statistical Conclusion - Reject Ho, since Fov = 7.043 is greater than Fcv = 4.26. Furthermore, the computed probability value (sig) of .014 means that the test is strongly significant at the 5% level; hence Ho must be unequivocally rejected, because .014 is well below .05.
b) Administrative Conclusion - Based on the experimental result, it would appear that the 3 teaching methods do not have the same effect on the students' final examination scores. One treatment (Method 3) is clearly superior in this regard. The descriptive table indicates that students taught with this method earned the highest sample mean score of 23, followed by those taught with Method 2. Method 1 can only be described as a poor approach to delivering statistical knowledge to students. Some of those students could have done better had they been challenged with assignments, if not projects.