True Experimental Designs
Probably the most common design is the Pretest-Posttest Control Group Design with random assignment. This design is used so often that it is frequently referred to simply as the “classic” experimental design. In a true experimental design, the proper test of a hypothesis is the comparison of the posttests of the treatment group and the control group.
Experimental group: R O X O
Control group: R O O
This design uses random assignment to equalize the treatment and control groups, which eliminates all threats to internal validity except mortality. Because of this, we can have considerable confidence that any difference between the treatment group and the control group is due to the treatment.
Why does this design remove threats to internal validity? History is removed as a rival explanation of differences between the groups on the posttest because both groups experience the same events. Maturation effects are removed because the same amount of time passes for both groups. Instrumentation threats are controlled because, although any unreliability in the measurement could shift scores from pretest to posttest, both groups would experience the same effect. Removing these threats maintains equivalence between the groups, which enables you to conclude with a high degree of confidence that the treatment, and not some alternative plausible explanation, caused the observed effect.
With respect to regression, the classic experimental design controls for it by randomly assigning subjects with extreme characteristics across both groups. This ensures that when regression toward the mean does take place, both groups experience its effect equally, so it should not account for any differences between the groups on the posttest. Randomization also controls for the selection threat to internal validity by making sure that the comparison groups are equivalent.
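The logic of the classic design can be illustrated with a minimal simulation sketch. All numbers here (group size, baseline scores, noise levels, and a +5-point treatment effect) are hypothetical assumptions chosen for illustration, not values from the text:

```python
import random
import statistics

random.seed(42)

# Hypothetical subject pool: each subject has a latent attitude score.
subjects = [random.gauss(50, 10) for _ in range(200)]

# R: random assignment equalizes the comparison groups.
random.shuffle(subjects)
treatment, control = subjects[:100], subjects[100:]

# O: pretest = latent score plus measurement noise.
pre_t = [s + random.gauss(0, 5) for s in treatment]
pre_c = [s + random.gauss(0, 5) for s in control]

# X: a hypothetical +5-point treatment effect, applied to the treatment group only.
EFFECT = 5.0
post_t = [s + EFFECT + random.gauss(0, 5) for s in treatment]
post_c = [s + random.gauss(0, 5) for s in control]

# The proper test of the hypothesis: compare the two posttests.
print(f"pretest means:  {statistics.mean(pre_t):.1f} vs {statistics.mean(pre_c):.1f}")
print(f"posttest means: {statistics.mean(post_t):.1f} vs {statistics.mean(post_c):.1f}")
```

Because random assignment makes the pretest means nearly equal, the gap between the two posttest means recovers the treatment effect; history, maturation, and instrumentation shifts would appear in both groups and cancel in the comparison.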
Another true experimental design is the Solomon Four-Group Design, which is more sophisticated in that it uses four different comparison groups.
Experimental group 1: R O X O
Control group 1: R O O
Experimental group 2: R X O
Control group 2: R O
The major advantage of the Solomon design is that it can tell us whether changes in the dependent variable are due to an interaction effect between the pretest and the treatment. For example, suppose we wanted to assess the effect on attitudes toward police officers (the dependent variable) of receiving positive information about a group of police officers’ community service work (the independent variable). During the pretest, the groups are asked questions about their attitudes toward police officers. Next, the experimental groups are exposed to the treatment: newspaper articles reporting on civic deeds and rescue efforts of members of the police department.
If experimental group 1 scores lower on the attitude test than control group 1, the difference might be due to the independent variable. But it could also be that filling out the pretest questionnaire has sensitized people to the difficulties of being a police officer: the people in experimental group 1 are alerted to the issues and react more strongly to the treatment than they would have without pretesting. If this is true, then experimental group 2 should show less change than experimental group 1. If the independent variable has an effect separate from its interaction with the pretest, then experimental group 2 should show more change than control group 1. If control group 1 and experimental group 2 show no change but experimental group 1 does, then the change is produced only by the interaction of pretesting and treatment.
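These comparisons can be sketched as a small simulation. The effect sizes here (a 5-point treatment effect and a 3-point pretest-by-treatment interaction) are assumptions for illustration only:

```python
import random
import statistics

random.seed(0)

N, BASE = 200, 50.0
TREAT, INTERACT = 5.0, 3.0  # hypothetical effect sizes (assumed, not from the text)

def posttest(n, base, treatment=0.0, interaction=0.0):
    """Posttest scores for one group under additive hypothetical effects."""
    return [base + treatment + interaction + random.gauss(0, 5) for _ in range(n)]

# The four Solomon groups (random assignment gives every group the same baseline).
exp1 = posttest(N, BASE, treatment=TREAT, interaction=INTERACT)  # R O X O
ctl1 = posttest(N, BASE)                                         # R O   O
exp2 = posttest(N, BASE, treatment=TREAT)                        # R   X O
ctl2 = posttest(N, BASE)                                         # R     O

# Treatment effect free of pretest sensitization: experimental group 2 vs control group 2.
print(f"treatment alone: {statistics.mean(exp2) - statistics.mean(ctl2):.1f}")
# Pretest-by-treatment interaction: experimental group 1 shifts beyond experimental group 2.
print(f"interaction:     {statistics.mean(exp1) - statistics.mean(exp2):.1f}")
```

Comparing the unpretested pair (experimental group 2 vs. control group 2) isolates the treatment effect, while the extra shift in experimental group 1 over experimental group 2 exposes the pretest-by-treatment interaction the text describes.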
The Solomon Four-Group Design enables us to make a more complex assessment of the cause of changes in the dependent variable. In particular, the combined effects of maturation and history can be not only controlled but also measured: comparing the posttest of control group 2 with the pretests of experimental group 1 and control group 1 assesses these effects. In practice, however, our concern with history and maturation is usually limited to controlling their effects, not measuring them.
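The measurement just described can also be sketched in code. The 4-point drift standing in for the combined history and maturation effect is an assumed value for illustration:

```python
import random
import statistics

random.seed(1)

N, BASE = 300, 50.0
DRIFT = 4.0  # hypothetical combined history + maturation shift (assumed)

pre_exp1 = [BASE + random.gauss(0, 5) for _ in range(N)]           # pretest, experimental group 1
pre_ctl1 = [BASE + random.gauss(0, 5) for _ in range(N)]           # pretest, control group 1
post_ctl2 = [BASE + DRIFT + random.gauss(0, 5) for _ in range(N)]  # posttest, control group 2

# Control group 2 received neither a pretest nor the treatment, so any shift in
# its posttest relative to the group-1 pretests reflects history and maturation.
baseline = statistics.mean(pre_exp1 + pre_ctl1)
estimate = statistics.mean(post_ctl2) - baseline
print(f"estimated history + maturation effect: {estimate:.1f}")
```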
The Solomon design is often bypassed because it requires twice as many groups. This effectively doubles the time and cost of conducting the experiment. Many researchers decide that the advantages are not worth the added cost and complexity (Graziano and Raulin, 1996).