A scientific investigation includes both independent and dependent variables. The independent variable is the cause (antecedent) of the dependent variable, the presumed effect (consequence). The present study has two independent variables: inclusive and self-contained educational programs. The receiving, or dependent, variables are child development and social competence as measured by the SIB and ASC tests. Although not included in the present investigation, additional independent variables could have been age, gender, and eligibility category.
Purpose and Design of Study According to the authors, the study was designed to examine the effects of two different types of educational programs (inclusive vs. self-contained) for students with significant disabilities with respect to gains and rates of improvement in levels of development and social competence, as measured by the SIB and ASC on a pre- and post-test basis. The authors' research question was stated somewhat inappropriately: "What are the effects of attending inclusive vs. self-contained programs on children with significant disabilities, and what types of gains and rates of improvement can be anticipated?" (See Study Methodology for the relationship between the research question and the selected statistic.) When stating a research question, the investigator is not at liberty to assert directional consequences. The authors did exactly that by stating "What are the effects..." Using the word "what" implies that differences are to be expected. The researchers should instead have asked simply, "Are there any effects...?" Framed this way, the researchers can expect results or no results, which is the prudent way to formulate a research question.
When a research question has been presented, the natural flow leads to the statement of a testable null hypothesis or hypotheses. A null hypothesis states that no effect of the independent variable on the dependent variable will exist as measured by a pre-selected assessment instrument. The Fisher and Meyer study failed to state any testable null hypothesis, and, as a result, the test data are susceptible to erroneous and flawed interpretation. More appropriately, the authors should have constructed the following testable major null hypothesis, along with follow-up main-effects null hypotheses plus second- and third-level interaction-effect null hypotheses: "There exists no statistically significant difference at α = .05 in pre- and post-test SIB and ASC scores of development and social competence for students with significant disabilities with respect to inclusive and self-contained educational programming." As the study contains various levels of social competence and skill development, the appropriate null hypotheses should also have been stated with respect to interaction effects. Consideration should likewise have been given to nested variables and nested interactions, such as age, gender, and eligibility level. Written testable null hypotheses are also required for making the appropriate statistical tool selection. When a researcher chooses to study differences, a certain statistic is called for; when a researcher looks to study relationships, a different type of statistical tool is employed. Without a null hypothesis stating from the outset that differences, relationships, and/or effects are to be examined, the researchers had no basis for choosing the statistical tool they opted to use, nor for interpreting the results.
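To illustrate how a written null hypothesis drives a concrete decision rule, the sketch below tests for a difference in group means at α = .05 using a simple permutation test. The scores are entirely hypothetical and stand in for the kind of gain scores discussed here; they are not the study's data, and the permutation test is used only as a minimal stand-in for whatever statistic a properly stated hypothesis would call for.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=10000, seed=1):
    """Two-sided permutation test for a difference in group means.

    Returns the proportion of shuffled group assignments whose mean
    difference is at least as extreme as the observed one (the p-value).
    """
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical gain scores for two hypothetical program groups
inclusive = [12, 9, 15, 11, 14, 10, 13, 12]
self_contained = [8, 7, 10, 9, 6, 11, 8, 7]

alpha = 0.05
p = permutation_test(inclusive, self_contained)
print("reject H0" if p < alpha else "fail to reject H0")
```

The point is that the decision rule (reject or fail to reject at the stated α) is only meaningful because the null hypothesis was written down first; without it, a p-value has nothing to be applied against.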
With respect to the study's research design, the authors only lightly presented the type of study they conducted. As with the null hypothesis and research question, the design section should be extremely organized and point-specific. All researchers are to state specifically what type of research investigation is being presented for evaluation and conclusion, and all sub-components of the selected design are to be clearly stated as well. The Fisher and Meyer study again fell short of these standards of research practice. Although the authors mentioned very early in the study that a longitudinal approach was taken, there was no mention of how the study was to be kept free of internal, external, or extraneous error. A researcher must always keep in mind that any selected design brings with it certain limitations as well as unique sources of error; to counter these effects, certain manipulations or controls of the design and statistical tool must occur. Anything that can affect the controls of a study affects the study's internal validity. Problems with external control will be discussed under the last segment of this paper, "Results and Implications." With respect to internal-validity contamination brought about by lack of control, Fisher and Meyer failed to report sampling procedures; test administration controls, procedures, and interpretations; scoring mechanisms; SIB and ASC reliability and validity factors; and a host of possible nested variables such as standardization procedures in test administration, location of testing, conditions of the testing environment, and subjects' acceptance of test taking. Without sufficient information on the entire testing situation and structure, the chosen sampling method, and the standardization qualities of the selected measurement instruments, the study's results and outcomes are highly suspect as a basis for meaningful interpretation and application.
Research Participants All research endeavors must clearly specify how the participants were chosen. This not only provides the reader with information as to how far the implications of the research results may be extended; certain statistical tools are also subject to certain types of sampling procedures. The present study did not describe the sampling procedure, nor did it suggest any correction procedure possibly needed for the statistical tool chosen to analyze the data. Therefore, the results are again suspect to faulty interpretation and meaning.
Study Methodology As stated above, no information was presented as to the reliability and validity of the two chosen instruments. Whether or not the selected measurement instruments meet the requirements of standardization is unknown. In turn, the review-of-literature section did not discuss the worth of the two instruments whatsoever, and it is within this section that a reader looks for information as to the importance of using the selected measurement instruments. Additionally, Fisher and Meyer failed to present any previous research information as to the selected measurements' comparability to, or compatibility with, other similar assessment devices. Should there not exist any similar measurement instruments, the authors were obligated to state the lack thereof. All this is required for the reader to be able to objectively evaluate the "goodness" of the research and the data being produced.
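By way of illustration, the internal-consistency reliability the authors should have reported for their instruments is commonly summarized with Cronbach's alpha. The sketch below computes it from item-level responses; the responses are invented for illustration and have nothing to do with the SIB or ASC.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: one list per item, each holding one score per respondent.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = len(item_scores)
    item_var_sum = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent total score
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 3 items, 5 respondents
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 1, 4],
    [2, 4, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))
```

Reporting a coefficient of this kind (along with validity evidence) is exactly the information whose absence makes the instruments' standardization status unknown.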
Not only are results interpreted on the basis of an alignment between the research question and the null hypothesis; the very manner in which a research question is phrased dictates the type of statistic that will be used to analyze the collected data. For example, should a researcher wish to test for the relationship between intelligence and school achievement, a correlation coefficient is used to analyze the data. Fisher and Meyer stated that their research interest lay in testing the "effects" of the independent variable on the dependent variable with respect to two sets of measurement scores in a pre- and post-test situation. Choosing an independent-samples "t" test does not, in any way, show effects. A "t" test is employed to test for differences between two independent groups that have been measured in one or more ways. By using a "t" test, the Fisher and Meyer research question should have read: Are there statistically significant differences in the SIB and ASC scores of learning disabled students, on a pre- and post-test basis, who have been in either an inclusion program or a self-contained program? Under no circumstance is the "t" test amenable to testing effects, only differences. As a result, the analyzed information is inappropriately applied back to the investigators' original research question. Either the question has to be revised or another statistical tool, one for testing for effects, must be selected. The recommended tool in this example is an analysis of variance with orthogonal modifications. Even though the authors…
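To make the contrast concrete: a correlation coefficient indexes a relationship, a "t" test a difference between two groups, and an analysis of variance partitions variance so that effects (and, in fuller designs, interactions) can be tested. The sketch below computes the one-way ANOVA F statistic from first principles on hypothetical scores; the full design the authors needed would add factors and interaction terms beyond this minimal one-way case.

```python
import statistics

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-test scores for two hypothetical program types
inclusive = [62, 58, 65, 60, 63]
self_contained = [55, 52, 57, 54, 56]
print(round(one_way_anova_f(inclusive, self_contained), 2))
```

With only two groups this F reduces to the square of the corresponding t statistic, which is precisely why the ANOVA framework, not the "t" test, is the one that generalizes to the effects and interaction questions the study raised.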