Methods of Instruction and Intervention Research Paper


Proponents of evidence-based instruction represent one end of the methods-of-teaching continuum, on which practices that have been tested empirically using rigorous research designs are considered the only valid methods of instruction (Odom et al., 2005). On the other end of the spectrum, methods that may have some basis for use, such as intuition or theory, but have not been subjected to empirical scrutiny are also considered valid to use. Evidence-based instruction, or scientific research-based instruction, consists of instructional practices or programs for which empirical data have been collected to determine the effectiveness of the program (Odom et al., 2005). For these practices/programs, rigorous research designs have been used to evaluate effectiveness. Such designs can include randomized controlled trials, quasi-experiments, single-subject designs, correlational methods, and/or qualitative research.

The most empirically sound designs, randomized controlled experiments, are used to demonstrate that students who are exposed to the practice show significant improvement compared to a group of matched students who are not exposed to the method. In such tightly controlled studies, researchers have more confidence that any changes observed are due to the specific practice or intervention, because experimental designs allow researchers to state (with a fair degree of confidence) that the change is not due to some other variable (for example, particular teacher practices/characteristics, student characteristics, environmental conditions, etc.). The educational practices with the strongest research evidence have been examined across fairly large numbers of students, in a wide range of settings, and with different subgroups of students.
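The logic of a randomized controlled design described above can be sketched in a few lines. The simulation below is purely illustrative: the students, the +5-point "treatment effect," and all scores are made up for demonstration and do not come from any cited study. Random assignment balances student characteristics across groups on average, and a two-sample (Welch) t statistic scales the group difference by its standard error:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical simulation: 100 students randomly assigned to two groups.
students = list(range(100))
random.shuffle(students)            # random assignment: balances student
treatment_ids = set(students[:50])  # characteristics across groups on average

# Simulated outcome scores; the "treatment" adds a modest +5-point effect.
def outcome(student_id):
    base = random.gauss(70, 10)     # baseline achievement, mean 70, sd 10
    return base + (5 if student_id in treatment_ids else 0)

scores = {sid: outcome(sid) for sid in range(100)}
treated = [scores[sid] for sid in range(100) if sid in treatment_ids]
control = [scores[sid] for sid in range(100) if sid not in treatment_ids]

def welch_t(a, b):
    """Welch's t: difference in group means divided by its standard error."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

print(f"treatment mean: {statistics.mean(treated):.1f}")
print(f"control mean:   {statistics.mean(control):.1f}")
print(f"Welch t:        {welch_t(treated, control):.2f}")
```

Because assignment is random, a sufficiently large t statistic lets the researcher attribute the group difference to the intervention rather than to teacher, student, or environmental variables, which is exactly the inferential advantage the paragraph above describes.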
Often a relatively small number of studies meet such rigorous criteria, and investigating and analyzing the available studies to determine whether they fit this category of research can be a lengthy, complex, and expensive process (Odom et al., 2005).

The federal government's involvement in educational research through the National Academies rests on a number of fundamental assumptions about education. The first is a widespread belief that educational research should or can be scientific, at least in part. The second is that, at least with regard to education policy, the federal government specifically seeks out scientific research to inform decisions about policy and practice. A third is that the quality of educational research must have been considered substandard in at least some areas for such research to be needed. Finally, the question of a scientific basis for educational research is itself considered worthy of the attention of science and should not involve political influence.

With respect to educational research, the No Child Left Behind Act is probably the best-known driver of the requirement that teachers use empirically validated techniques in the classroom. Many of the major agencies that focus on empirically validated practices have maintained that the "gold standard" for validating an educational practice or program is indeed the randomized controlled design (e.g., What Works Clearinghouse [WWC], 2012). However, in recent years other fields have raised concerns about the types and quality of scientific evidence that are acceptable for validating a premise, program, or theoretical assumption (White & Smith, 2002). The National Research Council has stated that questions regarding the effectiveness or utility of practices in education are of different types that require different methodologies to address (Shavelson & Towne, 2002). The reliance on randomized controlled trials most likely seeped into other fields from medicine, where such trials are indeed necessary to test the effectiveness of medications and treatments against no treatment, or against existing treatments, for certain diseases or ailments. For the question "Which is generally more effective, treatment A or treatment B?" this method is the gold standard when the definition of "effective" is easily operationalized. This is often the case with medications or other medical treatments for diseases where the pathology and "wellness" are well defined. In education, however, the "treatment effect" can be somewhat more variable, and the subject variables can be more diverse and involve significantly more interactions than they do in medical treatments.

The primary goal of education research should be to improve the quality of education for everybody. The National Research Council proposed that research questions in education are of three broad types:

1. Descriptive, where the question is "What is happening?"

2. Causal, where the question is something like "Is there a systematic effect?"

3. Process-focused, where the question is "How or why is it happening?" (Shavelson & Towne, 2002).

Each of these questions requires a different method to answer. Most medical research has traditionally concentrated on the second question, which is an important question but not the only one to be considered; hence the emphasis on randomized controlled trials. For the third question, correlational (e.g., surveys, observational studies, etc.) or qualitative methods work better. For the first question, correlational methods are appropriate. There should be an appropriate match between the question and the method. Thus, the definition of what constitutes evidence-based research has broadened somewhat with the understanding that the questions educational research asks are often broader than those of the medical field. Studies can employ a number of methodologies, including systematic, empirical methods that rely on observational or experimental designs, provided the designs are appropriate for the questions asked. When such research is reviewed, intervention studies that use experimental or quasi-experimental designs are often still given the highest priority (August & Shanahan, 2006; Condelli & Wrigley, 2004). However, descriptive, correlational, or ethnographic studies may be used to support the experimental findings of those studies, or these alternative methods may be used to assist in the development of theories (August & Shanahan, 2006). Studies that rely on measurement or observational methods yielding valid data across observers and evaluators, and across multiple observations and measurements, are also considered to provide empirical evidence for appropriate research questions (August & Shanahan, 2006; Condelli & Wrigley, 2004). Such studies should always include some form of rigorous data analysis that is sufficient to test the researchers' hypotheses and to justify their conclusions.
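To make concrete the correlational methods suited to the descriptive question above, the snippet below computes a Pearson correlation on hypothetical data (invented for illustration, not drawn from any cited study) relating weekly minutes of small-group reading instruction to end-of-year reading scores:

```python
import statistics

# Hypothetical descriptive data (illustration only): weekly minutes of
# small-group reading instruction vs. end-of-year reading score.
minutes = [30, 45, 60, 60, 75, 90, 90, 120]
scores  = [62, 65, 70, 68, 74, 77, 80, 85]

def pearson_r(x, y):
    """Pearson correlation: sample covariance scaled by both standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

r = pearson_r(minutes, scores)
print(f"r = {r:.2f}")  # describes the strength of the association only;
                       # a correlation by itself cannot establish causation
```

This is precisely the question-method match the paragraph describes: the correlation answers "What is happening?" (a descriptive association), while a causal claim about that association would still require an experimental or quasi-experimental design.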

Evidence-based instruction methods have been helpful in a number of different areas, including special education (Odom et al., 2005), learning disabilities (Roberts, Torgesen, Boardman, & Scammacca, 2008), general reading for all children (Foorman & Torgesen, 2001), working with special-needs children (Odom et al., 2005), and a number of other groups. However, there are some caveats to be noted.

First, do teachers actually follow the research in their field of teaching? Perhaps those who work with special populations are more apt to follow the research compared to, say, a public or private school teacher (Odom et al., 2005). Do elementary school teachers in public schools really read the research on teaching critically and apply the methods therein?

Second, and related to the first question, we need to ask, "Are many teachers really trained to critically evaluate research?" The answer to this question is most likely a resounding "no." Teachers are often not well versed in research methods and statistics.

Third, despite the abundance of empirical research, it is quite clear that no single method works equally well for all teachers or for all students (Foorman & Torgesen, 2001). Teachers may have to mix and match various methods within the same class, and schools may have to mix and match methods among different teachers. There is no empirical, evidence-based method to define who or what will work with whom. Thus the process of trying to understand and apply much of the research…

Sources Used in Document:


August, D., & Shanahan, T. (Eds.). (2006). Executive summary. Developing literacy in second- language learners: Report of the National Literacy Panel on Language-Minority Children and Youth. Mahwah, NJ: Erlbaum.

Condelli, L., & Wrigley, H.S. (2004). Identifying promising interventions for adult ESL literacy students: A review of the literature. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

Foorman, B.R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16, 203-213.

Odom, S.L., Brantlinger, E., Gersten, R., Horner, R.H., Thompson, B., & Harris, K.R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-149.
