Research practices depend on clearly defined guidelines: general recommendations for how to conduct research effectively, how to apply research findings to clinical practice, and how to maximize reliability and validity. The scientist-practitioner model has become the "framework for many training programs in clinical psychology" (Belar & Perry, 1992, p. 71). However, it is also important to attend to specific statistical analyses because of the potential for misinterpreting data. Cortina (1993) points out the significance of coefficient alpha, noting that proper interpretation of alpha enhances research validity and reliability. Alpha is often misunderstood, particularly in scientist-practitioner and other applied research. Nor is misinterpretation of coefficient alpha the only threat to validity in the social sciences: measurement error, attenuation, and related biases can also undermine research validity (Schmidt & Hunter, 1996).
Another core area of concern in applied psychology research is whether to use broad or narrow constructs in research design, including the efficacy of core self-evaluations (Judge & Kammeyer-Mueller, 2012). Depending on the area of applied research, such as intelligence testing or job-skills testing, researchers can determine which type of construct to use. Yet research does not only guide practice; psychological research also informs theory. As Bacharach (1989) points out, researchers need ground rules for interpreting data in ways that can reinforce existing theories, challenge them, or propose new theories for newly observed phenomena. A review of the literature on best practices in psychological research reveals ways to improve research design and application to ensure validity and reliability.
Belar and Perry (1992) present the findings of the National Conference on Scientist-Practitioner Education and Training for the Professional Practice of Psychology, addressing the scientist-practitioner model in general, its relevance for applied psychology, and especially its merits for informing best practices in fields like human resources and other industrial-organizational (I-O) settings. Belar and Perry (1992) do not focus on specific issues affecting research reliability and validity so much as they outline the merits of the scientist-practitioner framework and how it can be used and improved. Because many industrial-organizational psychologists rely on the scientist-practitioner model, it is important to keep in mind the nine core points the authors outline, while also attending to research on specific methodological issues such as those related to statistical analysis.
For example, Cortina (1993) and Schmidt and Hunter (1996) examine specific statistical biases that can impede psychology research and undermine the reliability of results. Researchers need to pay close attention to every element of research design, including how a study is framed and how results are interpreted given the type of analysis used. Many psychology studies rely on coefficient alpha as an estimate of a scale's internal-consistency reliability. Unfortunately, alpha is easy to misuse: its value depends on the number of items and the interitem correlations, so a high alpha does not by itself establish that a measure is unidimensional, and this is a point on which researcher interpretation can introduce bias. Cortina (1993) examines Cronbach's alpha, the form most commonly used in psychological measurement, and by presenting the results of hypothetical tests that manipulate alpha, illustrates how the way alpha is computed and applied bears directly on how results should be interpreted. For scientist-practitioners this information is especially important because the results of a study can directly affect the lives of subjects, raising pertinent ethical questions. Schmidt and Hunter (1996) focus on a related statistical issue, measurement error. Researchers need to know how to correct for measurement biases, and the authors offer methods for correcting problems in the measurement instrument itself. Taken together, the Schmidt and Hunter (1996) and Cortina (1993) articles are extremely useful for research psychologists with an interest in applied fields such as I-O psychology.
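To make the two statistics discussed above concrete, the following sketch (not drawn from the cited articles; the data and function names are illustrative) computes Cronbach's coefficient alpha from item-level scores and applies Spearman's classical correction for attenuation, the correction on which the measurement-error adjustments Schmidt and Hunter discuss are based.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    items: list of k lists, each holding one item's scores across n respondents.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))


def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Estimate the true-score correlation from an observed correlation r_xy,
    given the reliabilities of the two measures (classical correction)."""
    return r_xy / (rel_x * rel_y) ** 0.5


# Hypothetical data: three items, five respondents
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 4, 5],
         [1, 3, 3, 4, 6]]
print(round(cronbach_alpha(items), 3))               # high interitem agreement
print(round(correct_for_attenuation(0.30, 0.8, 0.9), 3))
```

The correction example shows Schmidt and Hunter's broader point in miniature: an observed correlation of .30 between two imperfectly reliable measures understates the underlying relationship, which a correction for attenuation estimates upward.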
More general approaches to enhancing research validity in I-O psychology can be found in Bacharach's (1989) study on transforming research into theory and in Judge and Kammeyer-Mueller's (2012) study on the differences between general and specific constructs in I-O psychology metrics. Both studies are instrumental for the applied social sciences. Bacharach (1989) discusses why theory matters, as well as the limitations of theory, how theory is used and misused, and how theory shapes research questions. Theoretical viewpoints can create or reinforce researcher bias, which can in turn produce practitioner bias, with clear ethical implications. It is therefore important to pay close attention to the ways research data can be misinterpreted and to the logic used to validate or falsify existing theories. Bacharach (1989) also touches on construct validity, which Judge and Kammeyer-Mueller (2012) likewise address in their research on applied I-O metrics such as the Five Factor Model of personality and intelligence testing. When psychometrics are used in the I-O setting, they can be based on general or specific constructs, each of which presents strengths and limitations. Understanding these strengths and limitations is crucial for improving research ethics and promoting a high standard for the scientist-practitioner. Altogether, this review of the literature shows how scientist-practitioners in the field of I-O psychology can improve their research competencies.
References
Bacharach, S. B. (1989). Organizational theories: Some criteria for evaluation. Academy of Management Review, 14(4), 496–515.
Belar, C. D., & Perry, N. W. (1992). National conference on scientist–practitioner education and training for the professional practice of psychology. American Psychologist, 47(1), 71–75.
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104.
Judge, T. A., & Kammeyer-Mueller, J. D. (2012). General and specific measures in organizational behavior research: Considerations, examples, and recommendations for researchers. Journal of Organizational Behavior, 33(2), 161–174.
Schmidt, F. L., & Hunter, J. E. (1996). Measurement error in psychological research: Lessons from 26 research scenarios. Psychological Methods, 1(2), 199–223.